The Faculty of Medicine

Basic ultrasound for clinicians

The page is part of the website on Strain rate imaging

This section updated:     June 2016


This section

This section is the basic ultrasound, B-mode and M-mode part of the previous "Basic ultrasound, echocardiography and Doppler for clinicians". Due to the size and number of illustrations, that page tended to load very slowly. It has now been split into this section on basic ultrasound, and another on Doppler, including tissue Doppler.

It is intended as an introduction to basic ultrasound physics and technology for clinicians without a technical or mathematical background. A basic knowledge of the physical principles underlying ultrasound gives a better understanding of its practical limitations and of the technical solutions used to work around them, and thus a clearer picture of the reasons for the problems and artifacts. A technical or mathematical background is not necessary; the explanations are intended to be intuitive and graphic rather than mathematical. This section is important for understanding the basic principles described in detail in the section on measurements of strain rate by ultrasound, especially the fundamental principles that limit the methods. The principles are also useful for a basic understanding of echocardiography in general, and the section may be read separately, even by readers not interested in deformation imaging.

It is important to realise that the last couple of years have seen tremendous improvements in both hardware (allowing a much higher data input to the scanner) and software (allowing more data processing at higher speed). This even allows using the input data in ways that improve the beamforming characteristics during processing, as the data are used for the generation of a picture. Thus the simple principles of beamforming outlined here are an oversimplification compared to the most advanced high end scanners.

Thus, present technology is far more complex: in the most advanced high end scanners, neither traditional beamforming, focussing nor image processing conforms to the simple principles described here, but these principles will still serve to give an idea.
Simpler equipment still conforms more closely to the basic principles described here.

The physical principles still apply, although with multiple line acquisition (MLA) and even more advanced techniques, the fundamental limitation imposed by the speed of sound has to some degree been dealt with.



Ultrasound is simply sound waves, like audible sound. Although some physical properties are dependent on the frequency, the basic principles are the same. Sound consists of waves of compression and decompression of the transmitting medium (e.g. air or water), traveling at a fixed velocity. Sound is an example of a longitudinal wave, oscillating back and forth in the direction the sound wave travels, thus consisting of successive zones of compression and rarefaction. Transverse waves oscillate perpendicularly to the direction of propagation (for instance surface waves on water, or electromagnetic radiation).

Fig. 1. Schematic illustration of  a longitudinal compression wave (top) and transverse wave (bottom). The bottom figure can also represent the pressure amplitude of the sound wave.

The audible sound frequencies are below 15 000 to 20 000 Hz, while diagnostic ultrasound is in the range of 1 - 12 MHz. Audible sound travels around corners; we can hear sounds around a corner (sound diffraction). With higher frequencies (shorter wavelengths), the sound tends to move more in straight lines like electromagnetic beams, and will be reflected like light beams. It will be reflected by much smaller objects (also because of the shorter wavelengths), and does not propagate easily in gaseous media.

The wavelength λ is inversely related to the frequency f by the sound velocity c:

λ = c / f

Meaning that the velocity equals the wavelength times the number of oscillations per second, and thus:

c = λ × f

The sound velocity in a given material is constant (at a given temperature), but varies in different materials (117):

Material                 Velocity (m/s)
Fat                      1440
Average soft tissue      1540
Blood                    1570
Muscle                   1500 - 1630
Bone                     2700 - 4100
Metal                    3000 - 6000
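The relation λ = c / f can be sketched in a few lines of Python; the 1540 m/s soft-tissue velocity is taken from the table above, and the probe frequencies are typical examples:

```python
# Wavelength from sound velocity and frequency: lambda = c / f.

def wavelength(c_m_s, f_hz):
    """Return the wavelength in metres for sound velocity c and frequency f."""
    return c_m_s / f_hz

# A 3.5 MHz cardiac probe in average soft tissue (c ~ 1540 m/s):
lam = wavelength(1540.0, 3.5e6)
print(f"{lam * 1000:.2f} mm")  # 0.44 mm

# A 7.5 MHz vascular probe gives a proportionally shorter wavelength:
print(f"{wavelength(1540.0, 7.5e6) * 1000:.2f} mm")  # 0.21 mm
```

As the next sections show, this wavelength sets the scale for resolution along the beam.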

Ultrasound is generated by piezoelectric crystals, which vibrate when compressed and decompressed by an alternating current applied across the crystal. The same crystals can act as receivers of reflected ultrasound, the vibrations induced by the reflected pulse being converted back into an electrical signal.

What are the ultrasound data?

The ultrasound data can be sampled at different levels of complexity as shown below:

Basically, a reflected ultrasound pulse is a waveform. However, storing the full waveform, called RF data, is demanding in terms of storage, as each point on the curve has to be represented in some way or other. On the other hand, if the full RF data are stored, both the amplitude and the frequency data can be calculated in post processing.
The pulse has a certain amplitude. Storing just the amplitude is much less demanding (corresponding more or less to one number per pulse). This is the only information used in grey scale imaging, where the amplitude is displayed as the brightness of the point corresponding to the scatterer, as in B-mode and M-mode.
However, the reflected ultrasound pulse also has a frequency (or a spectrum of frequencies), and this can be represented as a numerical value per image pixel as well, as described in Doppler imaging. Still, the amount of data is far less than the RF data.
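The difference in data volume can be sketched as follows. The "RF data" here are a synthetic, Hann-windowed sine burst, with an assumed sampling rate and centre frequency; a real scanner samples the received waveform instead:

```python
import math

FS = 50e6   # sampling rate, 50 MHz (assumed example value)
F0 = 3.5e6  # pulse centre frequency

def hann(n, size):
    """Hann window sample, used to shape the synthetic pulse."""
    return 0.5 - 0.5 * math.cos(2 * math.pi * n / (size - 1))

# 100 samples of synthetic RF data: a windowed sine burst.
rf = [0.6 * math.sin(2 * math.pi * F0 * n / FS) * hann(n, 100)
      for n in range(100)]

# Storing the full waveform costs one number per sample ...
print("RF samples stored:", len(rf))  # RF samples stored: 100
# ... while grey scale imaging keeps only one amplitude per pulse:
amplitude = max(abs(s) for s in rf)
print(f"amplitude: {amplitude:.2f}")
```

One hundred stored samples versus a single amplitude per pulse illustrates why RF data are so much more demanding.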

Imaging by ultrasound

Reflection and scattering

Basically, all ultrasound imaging is performed by emitting a pulse, which is partly reflected from a boundary between two tissue structures, and partly transmitted (fig. 2). The reflection depends on the difference in impedance of the two tissues. Basic imaging by ultrasound uses only the amplitude information in the reflected signal. One pulse is emitted; the reflected signal, however, is sampled more or less continuously (actually multiple times). As the velocity of sound in tissue is fairly constant, the time between the emission of a pulse and the reception of a reflected signal depends on the distance, i.e. the depth of the reflecting structure. The reflected pulses are thus sampled at multiple time intervals (multiple range gating), corresponding to multiple depths, and displayed in the image as depth.

Different structures will reflect different amounts of the emitted energy, and thus the reflected signals from different depths will have different amplitudes as shown below. The time before a new pulse is sent out depends on the maximum depth that is desired in the image.

Fig. 2. Schematic illustration of the reflection of an ultrasound pulse emitted from the probe P, being reflected at a, b and c. Part of the pulse energy is reflected from the scatterer a, the rest is transmitted; part is then reflected from b and the rest from c. When the pulse returns to P, the reflected pulse gives two measurements: the amplitude of the reflected signal, and the time it takes to return, which depends on the distance from the probe (twice the time the sound uses to travel the distance between the transmitter and the reflector, as the sound travels back and forth). The amount of energy being reflected from each point is given in the diagram as the amplitude. When this is measured, the scatterer is displayed with amplitude and position. Thus, the incoming pulse at a has the full amplitude of P. At b, the incoming (incident) pulse is the pulse transmitted through a. At c, the incident pulse is the pulse transmitted through b. (In both cases minus further attenuation in the interval.)

The time lag, t, between emitting and receiving a pulse is the time it takes for sound to travel the distance to the scatterer and back, i.e. twice the range, r, to the scatterer at the speed of sound, c, in the tissue. Thus:

t = 2r / c
The pulse is thus emitted, and the system is set to await the reflected signals, calculating the depth of the scatterer on the basis of the time from emission to reception of the signal. The total time for awaiting the reflected ultrasound is determined by the preset depth desired in the image.
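Rearranging t = 2r / c gives the depth from the measured time lag, and the same relation sets an upper limit on how often pulses can be fired for a given image depth. A minimal sketch, using the usual 1540 m/s soft-tissue approximation:

```python
C_TISSUE = 1540.0  # m/s, standard soft-tissue approximation

def depth_from_delay(t_s, c=C_TISSUE):
    """Depth of a scatterer given the emission-to-reception time lag."""
    return c * t_s / 2

def max_prf(depth_m, c=C_TISSUE):
    """Highest pulse repetition frequency that still lets echoes from
    depth_m return before the next pulse is emitted."""
    return c / (2 * depth_m)

print(f"{depth_from_delay(130e-6) * 100:.0f} cm")  # a 130 us round trip -> 10 cm
print(f"{max_prf(0.15):.0f} pulses/s")             # 15 cm image depth -> 5133 pulses/s
```

This pulse budget is what later limits the frame rate of 2D imaging.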

The received energy at a certain time, i.e. from a certain depth, can be displayed as an amplitude: A-mode. The amplitude can also be displayed as the brightness of the point representing the scatterer, in a B-mode plot. And if some of the scatterers are moving, the motion can be traced by letting the B-mode image sweep across a screen or paper as illustrated in fig. 3. This is called M-mode (Motion).

Fig. 3a.  The ultrasound  image is built up as a line of echoes based on the time lag and  amplitude of the reflected signals.
3b. The reflected signals can be displayed in three different modes. A-mode (Amplitude) shows the depth and the reflected energy from each scatterer. B-mode (Brightness) shows the energy or signal amplitude as the brightness of the point (in this case the higher energy is shown darker, against a light background). The bottom scatterer is moving. If the depth is shown in a time plot, the motion is seen as a curve (and as horizontal lines for the non-moving scatterers) in an M-mode (Motion) plot.

The ratio of the amplitude (energy) of the reflected pulse to the incident pulse is called the reflection coefficient. The ratio of the amplitude of the transmitted pulse to the incident pulse is called the transmission coefficient. Both depend on the difference in acoustic impedance of the two materials. The acoustic impedance Z of a medium is the speed of sound c in the material times the density ρ:

Z = c × ρ

Thus, if the acoustic impedances of two materials are very different, the reflection will be close to total, and little energy will pass into the deeper material. This occurs at boundary zones between e.g. soft tissue and bone, or soft tissue and air. This means that the deeper material can be considered to be in a shadow.
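This can be made concrete with the standard intensity reflection coefficient R = ((Z2 - Z1) / (Z2 + Z1))², where Z = c × ρ. The densities and sound velocities below are rough textbook values, used only for illustration:

```python
def impedance(c, rho):
    """Acoustic impedance Z = sound velocity * density."""
    return c * rho

def reflection_coefficient(z1, z2):
    """Fraction of the incident intensity reflected at a boundary."""
    return ((z2 - z1) / (z2 + z1)) ** 2

z_tissue = impedance(1540.0, 1060.0)  # average soft tissue (approximate)
z_air    = impedance(340.0, 1.2)      # air
z_muscle = impedance(1580.0, 1070.0)  # muscle (approximate)

# Soft tissue to air: nearly total reflection -> shadow behind.
print(f"tissue/air:    R = {reflection_coefficient(z_tissue, z_air):.3f}")
# Soft tissue to muscle: almost everything is transmitted.
print(f"tissue/muscle: R = {reflection_coefficient(z_tissue, z_muscle):.5f}")
```

The near-total reflection at a tissue/air boundary is why gas-filled lung or bowel blocks the view, and why coupling gel is needed between probe and skin.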

The reflecting structures do not only reflect directly back to the transmitter, but scatter the ultrasound in multiple directions. Thus, the reflecting structures are usually termed scatterers.

It's important to realise that the actual amount of energy that is reflected back to the probe; i.e. the amplitude of the reflected signal, is not only dependent on the reflection coefficient. The direction of the reflected signal is also important.
    - An irregular scatterer will reflect only a portion back to the probe.
    - A more regular scatterer will reflect more if the reflecting surfaces are perpendicular to the ultrasound beam.

Effect of size and direction of the reflecting surface. The two images on the left show a perfect reflecting surface. When the surface is perpendicular to the ultrasound beam, most of the energy (but not all, as the wavefront is not flat) will be reflected back to the transducer, resulting in a high amplitude echo. On the other hand, if this surface is tilted 45º, almost all energy will be reflected away from the probe, resulting in a very low amplitude return echo. The next two images show a scatterer with a more curved surface, resulting in more energy being spread out in different directions. This gives a lower amplitude signal back to the probe, but may reflect more energy back towards the probe when tilted, as for instance when the heart contracts and the walls change direction. Finally, to the right, a totally irregular surface will reflect the sound in all directions, but with very little net reflection toward the probe.

The effect of the direction of the reflecting surface in a long axis image of the left ventricle. The echo resulting from the septum-blood interface (arrows) is far stronger in the regions where the surfaces are perpendicular to the ultrasound beams (blue arrows), compared to the region between, where the surface is slanted relative to the ultrasound beams. Cyclic variations in the amplitude of reflected ultrasound (integrated backscatter) with the heart cycle reflect variations in reflectivity, but not in myocardial density, as the myocardium is incompressible. Thus, most of the amplitude variation must be due to changes in fibre directions.

The term reflection is used about the return signal, while scattering is used about the dispersion of the reflected signal, but as the figure above shows, it is the same process.

Thus, the apparent density of the tissue in the ultrasound image is as dependent on the wall and fibre direction as on the actual density. A part of the heart where the fibres run mainly across the ultrasound beams will look much denser. Variations in amplitude (brightness of the reflected signal) do not necessarily mean differences in density, but may also mean variations in reflectivity due to variations in the direction of the reflections. Thus, integrated backscatter can be used for studying cyclicity, but it is not useful for tissue characterisation.


Some of the energy of the ultrasound is absorbed by the tissues and converted to heat. This means that ultrasound may have biological effects, if the absorbed energy is high enough.
Absorption is important for two reasons: it attenuates the beam with increasing depth, and it heats the tissue, which limits the energy that can safely be transmitted.
The absorption is dependent on many factors (117):
  1. The density of the tissue. The higher the density, the more absorption. Thus the attenuation increases in the order fluid < fat < muscle < fibrous tissue < calcifications and bone.
  2. The frequency of the ultrasound beam. The higher the frequency, the more absorption. In human tissue, a general approximation is that the attenuation is 1 dB/cm per MHz (one way; in imaging the distance is twice the depth). Thus, the desired imaging depth sets the limit for how high a frequency can be used. Penetration might in principle be increased by increasing the transmitted energy, but this would increase the total absorbed energy as well, which has to stay below the safety limits.
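The 1 dB/cm per MHz rule of thumb makes the trade-off between frequency and depth easy to quantify; the factor 2 below accounts for the echo travelling down and back:

```python
def round_trip_attenuation_db(depth_cm, f_mhz, alpha=1.0):
    """Round-trip attenuation in dB, using the ~1 dB/(cm*MHz) one-way
    rule of thumb for soft tissue (alpha)."""
    return 2 * alpha * depth_cm * f_mhz

# The same 15 cm depth at a cardiac vs a vascular frequency:
print(round_trip_attenuation_db(15, 3.5))  # 105.0 dB
print(round_trip_attenuation_db(15, 7.5))  # 225.0 dB
```

The extra 120 dB at 7.5 MHz is why vascular frequencies cannot reach cardiac depths, regardless of transmit power.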

Ultrasound Power / mechanical index

The ultrasound power is the amplitude of the transmitted signal at the probe, i.e. the total energy that is transmitted into the patient. This is measured in decibels.

The mechanical index is a measure of the energy that is absorbed by the patient. This, however, is not only dependent on the power, but also on the focussing of the beam; it is highest where the beam is focussed, and decreases with depth. Thus, the mechanical index is a measure of the possible biological effects of the ultrasound, and is usually calculated and given as a maximal theoretical value by the equipment. Typically, it may vary between 1.5 (in B-mode) and 0.1 (in contrast applications).


It follows that the ultrasound waves are attenuated as some of the energy is reflected or scattered. Thus, in passing through tissue, the energy is attenuated due to the reflection that is necessary to build an image.

Basically, discrete objects with high reflectivity will cause attenuation shadows, as shown below. Behind organs with low reflectivity, on the other hand, the tissue appears brighter ("colouring"). This is simply lack of attenuation - acoustic enhancement.

Attenuation. Imaging of a homogeneous tissue, e.g. the liver, will change the apparent density behind structures with different attenuation.
Behind a structure with high reflectivity (e.g. a calcification; white, left), there is high attenuation. The sector behind receives less energy and appears less dense (darker); the area behind may even be a full shadow.

Behind a structure with low reflectivity (e.g. a fluid; black, right) there is little attenuation; the tissue receives more energy and appears denser (brighter - "colouring") than the surrounding tissue.
Liver with the gallbladder in front, containing gallstones. The gallstones are dense, with a shadow behind. The rest of the gallbladder is fluid filled, so the sector behind the fluid appears denser than the neighbouring tissue due to "colouring", which in fact is only lack of attenuation.

This is about 10% of the total energy loss. In addition, the ultrasound waves are diffracted, resulting in further diffusion of the waves out into the surrounding tissue, and loss of the energy available for reflection (imaging). However, the most important factor is that the ultrasound energy is attenuated due to absorption in the tissue; this absorption process generates heating of the tissue. As attenuation is energy loss, the attenuation increases with increasing depth. (And the reflections are further attenuated in passing back toward the probe.)

The attenuation is the limiting factor for the depth penetration of the beam, i.e. the depth to which the beam can be transmitted and still give useful signals back. Basically, the shorter the wavelength, the higher the attenuation (and thus the shorter the depth penetration). The effective range can be said to be about 200 - 300 × λ. For practical medical purposes, the penetration for good imaging is about 10 - 20 cm at 3.5 MHz (adult cardiac), 5 - 10 cm at about 5 MHz (pediatric cardiac), 2 - 5 cm at 7.5 MHz and 1 - 4 cm at 10 MHz, the last two frequencies being in the vascular domain. However, one method to bypass some of the attenuation problem is harmonic imaging: the beam is transmitted at a certain frequency, and the received signal is analysed at twice that frequency (Fourier analysis). This increases the signal to noise ratio of the reflected signal, especially in the deepest parts of the image, without a similar loss of resolution.


Attenuation can be dealt with by gain: increasing the gain amplifies the reflected signal. However, increased gain increases signal and noise in the same manner. Gain can be applied at acquisition, or in post processing.

Uncompensated image, showing decreasing signal intensity (and, hence, visibility) with depth, due to attenuation.
Increasing over-all gain will increase the amplitude of the signal, and the structures at the bottom of the sector become more visible. But the gain at the top of the sector is also increased, including the cavity noise, thus decreasing contrast in this part of the image.

Time gain compensation (TGC)

All commercial equipment today has a time gain compensation (TGC). This  increases the gain of the reflected signals with increasing time from the transmitted pulse; equivalent to increasing the gain with increasing depth. However, this is not a perfect solution, as the signal-to-noise ratio may decrease, if the noise does not decrease similarly with depth. However, it will give a better balance in the picture, and compensate for much of the attenuation effects. This is a pre processing function, and has to be set at acquisition.
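A minimal TGC sketch, assuming the ~1 dB/(cm·MHz) attenuation rule of thumb from above and example depths: each echo is amplified by the attenuation expected at its depth, so a uniform reflector appears uniform again:

```python
ALPHA = 1.0  # dB per cm per MHz, one way (rule-of-thumb value)

def tgc_gain_db(depth_cm, f_mhz):
    """Gain (dB) needed to undo the expected round-trip attenuation."""
    return 2 * ALPHA * depth_cm * f_mhz

depths = [2.0, 6.0, 10.0, 14.0]  # cm, example sample depths
# Echoes from a uniform reflector, weakened by attenuation:
echoes = [10 ** (-tgc_gain_db(d, 3.5) / 20) for d in depths]
# After depth-dependent gain, all return to the same level:
compensated = [e * 10 ** (tgc_gain_db(d, 3.5) / 20)
               for e, d in zip(echoes, depths)]
print([round(c, 6) for c in compensated])  # [1.0, 1.0, 1.0, 1.0]
```

The same depth-dependent gain multiplies the noise, which is why the signal-to-noise ratio still falls with depth even with perfect TGC.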

TGC controls. Basically, each slider controls gain selectively at a certain depth:
In older models, the TGC should be set manually to achieve a balanced image:

Present models, however, have automatic TGC. Thus the default control setting should be neutral to achieve a balanced picture. Using manual settings from old habit will result in a double compensation, with too much gain at the bottom and too little at the top:

Compress and reject:

Low amplitude signals can be filtered away by the reject function, filtering out cavity noise, at the price of risking the loss of true low amplitude signals (e.g. from valves).
Finally, the grey scale can be compressed, resulting in a steeper saturation curve. This means that the picture goes to full saturation (pure white) at a lower amplitude, while the brightness of low amplitude signals is reduced.

It is important to realise that all these are post-processing functions that manipulate the image on the screen, without improving the signal quality itself, or the fundamental signal to noise ratio.

Image with default gain, reject and compress settings. Principle of gain, reject and compress. All curves display the brightness of the display in relation to the amplitude of the reflected signal. An ordinary gain curve is shown in black; using a linear brightness scale, it displays the full range of amplitudes. Increasing gain (red curve) will increase all signals, including the weakest, as in the noise. The disadvantage, in addition to increasing noise, is that the strongest signals will be saturated, so details may disappear. Compress is shown as the blue curve. This results in a steeper brightness curve, giving less brightness to the weakest echoes and more to the strongest. Thus, weak echoes may disappear together with background noise, while strong echoes will be saturated, resulting in loss of detail. Finally, reject is shown by the light grey zone, simply displaying all signals below a certain amplitude as black (the black brightness curve drops abruptly to zero at the reject limit, dark grey line). A combination of high gain and reject will give an effect fairly similar to the compress function.
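The gain and reject curves described above can be sketched as a simple amplitude-to-brightness mapping. The exact curves are vendor specific; this is only the principle, with both axes normalised to 0..1:

```python
def brightness(amplitude, gain=1.0, reject=0.0):
    """Display brightness for a given echo amplitude (both 0..1)."""
    if amplitude < reject:
        return 0.0  # reject: everything below the limit is shown as black
    return min(1.0, gain * amplitude)  # saturates at full white

# Higher gain saturates strong echoes, so detail is lost ...
print(brightness(0.9, gain=2.0))     # 1.0
# ... while reject blanks weak ones, e.g. cavity noise:
print(brightness(0.05, reject=0.1))  # 0.0
# Default settings display the amplitude linearly:
print(brightness(0.5))               # 0.5
```

Compress corresponds roughly to steepening the slope of this curve, i.e. much the same result as combining high gain with reject, as the figure text notes.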

Same image with high gain (top), showing increased density of the endocardium, but loss of detail due to brightness saturation and a corresponding increase in cavity noise; and with low gain (bottom), showing reduction in cavity noise, but loss of detail (see endocardium in the lateral wall).
Same image with increased reject (top), showing reduction in cavity noise, but also slight loss of detail (endocardium in the lateral wall); and with the compress function (bottom), with less detail in the myocardium due to increased brightness.

All commercial equipment today has time gain compensation (TGC), increasing the gain of the reflected signals with increasing time from the transmitted pulse. This is equivalent to increasing the gain with increasing depth. However, this is not a perfect solution: the noise is constant with depth while the reflected signals become weaker, and with TGC the noise is gained as well as the signal, so the signal-to-noise ratio still decreases, and the resulting signal ends up as a grey blur at a certain depth. This effect can be seen below. Before harmonic imaging, the TGC was adjustable, relying on the operator to optimise the visibility. As the greater part of the cavity noise is removed by harmonic imaging, most modern equipment has automated TGC, but retains the possibility of manual adjustment.


M-mode was the first ultrasound modality to display moving echoes from the heart (118), and thus the motion could be interpreted in terms of myocardial and valvular function. The M-modes were originally recorded without access to 2-dimensional images.




Fig.  4.  Typical M-mode images. a from left ventricle, b from the mitral valve and c from the aortic valve as indicated on the 2D long axis image above. Here the amplitude is displayed in white on dark background.

Depth resolution. Bandwidth:

The depth resolution of the ultrasound beam is the resolution along the beam. It depends on the length of the transmitted ultrasound pulse. At a blood/tissue interface, the dividing line is seen as a bright line, which does not reflect a tissue structure (it is typically NOT the intima, which is far too thin to be seen with ultrasound at the present frequencies), but the pulse length. This is the reason for the ASE convention, where depths are measured from leading edge to leading edge of the echoes, as this neutralises the pulse length in measurements.

Ideally, the pulse length in imaging (B- and M-mode) should be as short as possible, but this depends on the physical properties of the probe. Most probes will ring at the resonance frequency for a few oscillations, and thus produce a pulse with a length of several oscillations. By Fourier analysis, the frequency content of the pulse is less dispersed the longer the pulse is. Thus, the pulse length is inversely proportional to the spread of frequencies, i.e. the bandwidth of the pulse, as shown below. This has consequences for Doppler imaging, where frequencies, and not amplitudes, are analysed.

Two different pulses with the same frequency, but different duration (pulse length), i.e. number of oscillations. The shortest pulse has a wider dispersion of frequencies, i.e. a greater bandwidth. After Angelsen (117).

Higher frequencies result in shorter pulses for the same number of oscillations, i.e. reduced pulse length without increasing the bandwidth to the same degree. Thus, for imaging, the ideal pulse would have the highest possible frequency (depending on the required depth penetration) and the shortest possible pulse length. However, as noise is unevenly distributed across frequency domains, harmonic imaging, which transmits at half the received frequency, results in less noise. Harmonic imaging thus doubles the pulse length for a given received frequency, and results in thicker echoes.

Halving the frequency results in half the number of oscillations per time unit, or longer time (= pulse length) for the same number of oscillations. Thus halving the frequency, as in second harmonic analysis, will result in longer pulse length. However, the bandwidth is far less affected.
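The effect on echo thickness can be quantified through the spatial pulse length, i.e. the number of oscillations times the wavelength. The 3-cycle pulse below is an assumed, typical figure:

```python
C_TISSUE = 1540.0  # m/s

def spatial_pulse_length_mm(f_mhz, n_cycles, c=C_TISSUE):
    """Spatial pulse length = number of oscillations * wavelength."""
    wavelength_mm = c / (f_mhz * 1e3)
    return n_cycles * wavelength_mm

# A 3-cycle pulse at the 3.5 MHz fundamental ...
print(f"{spatial_pulse_length_mm(3.5, 3):.2f} mm")   # 1.32 mm
# ... versus the same pulse at the 1.75 MHz harmonic transmit frequency:
print(f"{spatial_pulse_length_mm(1.75, 3):.2f} mm")  # 2.64 mm
```

Halving the transmit frequency exactly doubles the spatial pulse length, which is why the interface echoes look thicker in harmonic imaging.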

Second harmonic (1.7/3.5 MHz, left) and fundamental (3.5 MHz, right) images of the LV septum, showing how the echo from the blood/septum interface (arrows) is thicker in harmonic imaging, due to the reduction in transmit frequency. Observe, however, how the cavity noise is much reduced in harmonic imaging, resulting in a far more favorable signal-to-noise ratio.
The thickness of the surface echoes depends on the pulse length, and thus also on the frequency. This picture of the septum illustrates how the leading-to-leading ASE convention, shown in red, eliminates the pulse length from the measurement (as the echo blooms in both directions), while the Penn convention results in increasing overestimation of the thickness with increasing pulse length, as it incorporates the interface echoes on both sides.

The most important point is that the echo from an interface reflects the pulse length, and is NOT a picture of the endothelium.

2-dimensional imaging:

A 2-dimensional image is built up by firing a beam, waiting for the return echoes, retaining the information, and then firing a new line from a neighbouring position, in a sequence of B-mode lines. In a linear array of ultrasound crystals, the beams are fired in sequence as parallel lines, creating a field that is as wide as the probe length (footprint). A curvilinear array has a curved surface, creating a field at depth that is wider than the footprint of the probe, making it possible to use a smaller footprint for easier access through small windows. This results in a wider field at depth, but at the cost of reduced lateral resolution as the scan lines diverge.

A pulse is sent out, ultrasound is reflected, and the B-mode line is built up  from the reflected signals.
Linear array.
Curvilinear array
The linear array gives a large probe surface (footprint) and near field, with a field no wider than the footprint. A curvilinear array also gives a large footprint and near field, but with a wide sector.

But in order to achieve a footprint sufficiently small to get access to the heart between the ribs, and still with a sufficiently wide far field, the beams have to diverge from virtually the same point. This means that the image has to be generated by a single beam originating from one point, deflected at different angles to build a sector image (cf. figs. 6 and 7).

This can be achieved by a single transducer or array sending a single beam that is stepwise rotated, either mechanically or electronically.
A very small footprint can be achieved by a mechanical probe, sending only one beam, which is mechanically rotated by a motor. With a slightly larger footprint, a phased array with electronic focusing and steering can generate a beam sweeping over a similar angle to the mechanical probe. Beamforming by a phased array also enables focusing of the ultrasound beam, as shown. Focusing can also be performed in a mechanical probe by a concentric arrangement of several ring-shaped transducers, an annular array. This focuses the beam in both transverse directions at the same time, as indicated in fig. 7A.

The next line in the image is then formed by a slight angular rotation, making the beam sweep across a sector:

By making the ultrasound beam sweep over a sector, an image consisting of multiple B-mode lines can be built up.
c. In principle, the image is built up line by line, by emitting a pulse and waiting for the reflected echoes before tilting the beam and emitting the next pulse. A whole frame thus takes the time needed to emit the total number of pulses corresponding to the total number of lines in the image.

This means that as a pulse is sent out, the transducer has to wait for the returning echoes, before a new pulse can be sent out, generating the next line in the image.

2D echocardiography. A line is sent out, and when all echoes along the beam have been received, the picture along the beam is retained, and a new beam is sent out in the neighbouring direction, building up the next line in the image. One full sweep of the beam builds up a complete image, i.e. one frame. A cine-loop is then a sequence of frames, i.e. a movie.

Present technology is sufficient to build up a picture with sufficient depth and resolution at about 50 frames per second (FPS), which gives a good temporal resolution for 2D visualisation of normal heart action (about 70 beats per min). However, the eye has a temporal resolution of about 25 frames per second, so there may seem to be excess information. But off-line replay may be done at reduced frame rate, thus enabling the eye to utilise the higher temporal resolution.
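The frame rate ceiling follows directly from the speed of sound: each of the N image lines must wait for echoes from the maximum depth, so one frame takes N × 2 × depth / c seconds. The 64-line, 15 cm example below is an assumed, typical sector:

```python
C_TISSUE = 1540.0  # m/s

def max_frame_rate(depth_m, n_lines):
    """Upper limit on frames per second for a sector of n_lines beams
    imaging to depth_m, one transmit pulse per line."""
    time_per_line = 2 * depth_m / C_TISSUE
    return 1.0 / (n_lines * time_per_line)

# 64 lines to 15 cm: about 80 frames per second is the ceiling.
print(f"{max_frame_rate(0.15, 64):.0f} FPS")  # 80 FPS
```

More lines or greater depth lowers the ceiling proportionally, which is the fundamental speed-of-sound limitation that techniques such as MLA are designed to work around.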

Coordinate systems:

In three dimensions, any point has to be located by three coordinates. Conventional geometry uses a Cartesian coordinate system, with three orthogonal axes: the x, y and z axes.

Strain in three dimensions has in the basic section been discussed in terms of a traditional Cartesian xyz coordinate system, for general purposes.

But three dimensions can be defined by any coordinate system that defines a point in space by three coordinates. A coordinate system can be defined in relation to the ultrasound imaging plane: Axial or radial (depth - i.e. along the ultrasound beams), lateral or azimuth (In-plane angle or distance - i.e. across the beam) and elevation (out of plane distance or angle).

Ultrasound coordinates: The coordinate system gives the position relative to the ultrasound imaging plane. The three directions axial or radial, lateral or azimuth and elevation can be defined as  orthogonal coordinates (green), but one that is relative to the position of the imaging plane instead of fixed in space. However, both lateral and elevation can be defined by the angle instead, i.e. a coordinate system of one distance and two angles.
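The relation between beam coordinates (depth along the beam, azimuth angle) and in-plane Cartesian coordinates can be sketched as below; elevation is ignored in this 2D sketch, and the depth and angle are example values:

```python
import math

def beam_to_cartesian(depth, azimuth_rad):
    """Return (lateral, axial) position of a point at a given depth along
    a beam steered azimuth_rad away from the sector centre line."""
    lateral = depth * math.sin(azimuth_rad)
    axial = depth * math.cos(azimuth_rad)
    return lateral, axial

# A scatterer 10 cm along a beam steered 30 degrees off-centre:
x, y = beam_to_cartesian(10.0, math.radians(30))
print(f"lateral {x:.2f} cm, axial {y:.2f} cm")  # lateral 5.00 cm, axial 8.66 cm
```

This conversion, applied to every sample, is essentially what the scan converter does when drawing a sector image on a rectangular screen.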

In relation to the left ventricle, a local coordinate system is most used; longitudinal, transmural or radial and circumferential. At any given position, those coordinates are orthogonal, but the direction of the circumferential and transmural axis changes as one moves around the ventricle (so does longitudinal if the direction is defined along the wall instead of along the main axis of the ventricle):

(Figure courtesy of A Heimdal)

Figure depicting that a coordinate system based on the three directions longitudinal, transmural or circumferential is actually orthogonal, but changes in relation to the orientation of the myocardium.
Figure showing the principal directions in the myocardium



Again, modern technology now allows much more complex processing, using the input data in a way that also improves the beamforming characteristics during processing, as they are used for the generation of a picture. Thus the simple principles of beamforming outlined here are an oversimplification compared to the most advanced high end scanners.

It is important to realise that the last couple of years have seen tremendous improvements in both hardware (allowing a much higher data input to the scanner) and software (allowing more data processing at higher speed). These even allow using input data in a way that improves the beamforming characteristics in processing, as they are used for the generation of a picture. Thus the simple principles of beamforming and focussing outlined here are an oversimplification compared to the most advanced high end scanners.

However, they will still serve to give an idea, and simpler equipment still conforms more closely to the basic principles described here.

Fig. 7A. Mechanical transducer. The sector is formed by rotating a single transducer or array of transducers mechanically, firing one pulse in each direction and then waiting for the return pulse before rotating the transducer one step. In this probe there is electronic focusing as well, by an annular array.
B. Electronic transducer in a phased array. By stimulating the transducers in a rapid sequence, the ultrasound will be sent out in an interference pattern. According to Huygens' principle, the wavefront will behave as a single beam; thus the beam is formed by all transducers in the array, and the direction is determined by the time sequence of the pulses sent to the array. Thus, the beam can be electronically steered, and will then sweep stepwise over the sector in the same way as the mechanical transducer in A, sending a beam in one direction at a time.

Beam focusing:

C. Dynamic focusing. The same principle of phase steering can be applied to make a concave wavefront, resulting in focusing of the beam with its narrowest part at a distance from the probe. Combining the steering in B and C will result in a focussed beam that sweeps across the sector, as in the moving image above.
Resulting Ultrasound beam as shown by a computer simulation, focusing due to the concave wavefront created by the dynamic focusing. The wavelength is exaggerated for illustration purposes. Image Courtesy of Hans Torp.
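As a sketch of how the phase steering and dynamic focusing described above can be computed, the per-element transmit delays follow from the element-to-focus distances: equalising the travel times produces the concave, steered wavefront. This is a minimal illustration, not any vendor's beamformer; the element count, pitch and focal point below are assumed values.

```python
import numpy as np

C = 1540.0  # speed of sound in soft tissue, m/s

def tx_delays(n_elements=64, pitch=0.3e-3, focus_depth=0.07, steer_deg=20.0):
    """Per-element transmit delays (s) that steer the beam by steer_deg
    and focus it at focus_depth, by equalising the travel time from
    every element to the focal point (a concave wavefront)."""
    # element positions along the array, centred on zero
    x = (np.arange(n_elements) - (n_elements - 1) / 2) * pitch
    theta = np.radians(steer_deg)
    fx, fz = focus_depth * np.sin(theta), focus_depth * np.cos(theta)  # focal point
    dist = np.hypot(x - fx, fz)      # element-to-focus distances
    return (dist.max() - dist) / C   # farthest element fires first (delay 0)

delays = tx_delays()
# elements nearest the focal point fire last, creating the concave wavefront
```

Firing each element after its computed delay makes all partial wavefronts arrive at the focal point simultaneously, which is the essence of both steering and focusing.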

Focusing is illustrated above. In a mechanical probe, there may be several transducers, arranged in a circular array, focusing the beam in a manner analogous to that shown in fig. 7c. In a circular array, however, the focusing can be done in all directions transverse to the beam direction, i.e. in the imaging plane and transverse to the plane, while a linear array can only focus in one direction, in the imaging plane.

Annular focusing in all directions both in plane and transverse to the plane.

Linear focusing in the imaging plane only.

A matrix array can focus in both directions at the same time.

The focusing increases the concentration of the energy at the depths where the beam is focussed, so the energy in each part of the tissue has to be calculated according to wavelength, transmission and focusing, to ensure that the absorbed energy stays within safe limits.

Modern high end scanners have beams that are more focussed along the whole length, allowing narrower beams and more lines in the image, i.e. higher line density, and at the same time allowing higher frame rate due to, among other things, MLA related image forming.

Unfocussed (planar) beams

In order to increase aperture size, the beams should be less focussed. In completely unfocussed beams, the wavefront is more or less flat, and the beam has more or less parallel edges.

The advantage of this is:
The main disadvantage is that with planar waves, the energy is too low for second harmonic imaging. Thus, it cannot be used for B-mode imaging, neither in 2D nor 3D. However, it can be used for tissue Doppler, where harmonic imaging is unfeasible anyway, because of the Nyquist limit.

Lateral resolution

The apparent width of a scatterer in the image is more or less given by the lateral resolution of the beam. (The apparent thickness in the axial direction is determined by the depth resolution, i.e. the pulse length as discussed above). In addition, two echoes within one beam will only be separated by the difference in depth.

The lateral resolution of a beam is dependent on the focal depth, the wavelength and the probe diameter (aperture) of the ultrasound probe.
(Reproduced from Hans Torp by permission)
Two points in a sector that is to be scanned. The ultrasound scan will smear the points out according to the lateral resolution in each beam.
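The relation can be sketched numerically: the lateral beam width at the focal depth is roughly the wavelength times the f-number (focal depth divided by aperture). This is a rule-of-thumb sketch; the frequency and probe dimensions below are assumed, typical-range values, not taken from the text.

```python
C = 1540.0  # speed of sound in soft tissue, m/s

def beam_width_at_focus(freq_hz, focal_depth_m, aperture_m):
    """Approximate lateral beam width at the focal depth: w ~ wavelength * F/D."""
    wavelength = C / freq_hz
    return wavelength * focal_depth_m / aperture_m

w = beam_width_at_focus(3.5e6, 0.08, 0.02)  # 3.5 MHz, 8 cm focus, 2 cm aperture
# about 1.8 mm: a point scatterer is smeared to roughly this width laterally
```

The formula makes the dependencies explicit: higher frequency (shorter wavelength) or a larger aperture narrows the beam, while deeper focus widens it.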

Thus a small scatterer will appear to be "smeared out", and the apparent size in the image is determined by the beam width and pulse length. As the pulse length is less than the beam width, the object will be "smeared out" most in the lateral direction.

Two scatterers at the same depth, separated laterally by less than the beam width, will appear as one.
Two scatterers at different depths will appear separate if separated by more than the pulse length.
But if separated both laterally and in depth, they will appear as being in the same line, if the lateral separation is within the beam.


Line density

The width of the echo will be determined by the beam width, and thus by the distance between the beams (most ultrasound scanners today will interpolate between beams if the distance between the beams is greater than the beam width). Ideally, the distance between the beams should be the same as the beam width at the focal depth, for maximal resolution, the lateral resolution of a beam thus determining the line density. This means that the line density should be suited to the beam width. This, however, holds only for a linear array.

However, as the beam width also increases at depths greater than the focal depth, the ideal line density for a sector probe is the one where beam distances are equal to the beam width at the focal depth. This will give the best lateral resolution. A line density that is so high as to make lines overlap will not result in increased lateral resolution. A line density that leaves gaps between the lines will have less than optimal lateral resolution as determined by the probe aperture and focal depth.

But as the time it takes to build each line is given by the desired depth, the number of beams in an image limits the frame rate. And if a greater sector width is desired without reducing the frame rate, the line density is reduced (same number of lines over a wider angle).

Thus, the line density itself is limited by other factors as well: frame rate, sector width and depth. Due to these factors, the line density often falls below the theoretically desirable level described above, and the line density, not the probe size and wavelength, becomes the limiting factor for the lateral resolution.

Two different lateral resolutions, where the speckles can be seen to be "smeared". In this case the loss of resolution in the right image is due to lower line density. By rights the image should appear as split into different lines as indicated in the middle, as each beam is separated, line density being less than optimal relative to the beam width. Instead the image is interpolated between lines. This reduction in line density is done to achieve a higher frame rate, as illustrated below.

So a distinction should be made between the lateral beam resolution, given by the fundamental properties of the system, and the image resolution that is a compromise between the requirements of frame rate, angle width and depth.

The discussion may be extended, taking all issues into consideration:

A: Beam width. Speckles (true speckles: black) are smeared out across the whole beam width (apparent speckles: dark grey, top). This means that with this beam width, the speckles from two different layers cannot be differentiated, and layer specific motion cannot be tracked.
B: Line density. Only the speckles in the ultrasound beams (black) are detected, and can be tracked; speckles between lines are not detected or tracked. The spaces between lines cannot be seen in the final image due to lateral image smoothing.
C: Divergence of lines in depth due to the sector image will both increase beam width and decrease line density in the far field. This may result in the line density and width being adequate (in this example for two layer tracking) in the near field, but inadequate in the far field, the situation there being analogous to A.
D: Focussing. The beams being focussed at a certain depth may mean that line density may be inadequate at the focus depth. Thus speckles in some layers may be missed. In general, the default setting will usually give the best line density at the focus depth, so unless frame rate is increased, this problem may be minor. However, line density will also decrease if sector width is increased, as there is a given number of lines for a given frame rate and depth. In any case, in the far field, the beams will be broader, and the beam width will be more like A and C.
E: Focussing may even result in beams overlapping in the far field. A speckle in the overlap zone may be smeared out across two beams.


Thus, the line density can be increased by
  1. Reducing the sector width (gives higher line density by spreading the lines over a smaller angle)
  2. Reducing the frame rate (allows time for building more lines between frames)
  3. Reducing the depth (enables a higher line density for a given frame rate, as shorter lines take less time to build).
This is discussed in detail below:

Temporal resolution (frame rate):

To image moving objects, structures such as blood and the heart, the frame rate is important in relation to the motion speed of the object. The eye generally can only see 25 FPS (video frame rate), giving a temporal resolution of about 40 ms. However, a higher frame rate in new equipment offers the possibility of replay at a lower rate, f.i. 50 FPS played at 25 FPS, which will in fact double the effective resolution of the eye.

In quantitative measurements, whether based on the Doppler effect or 2D B-mode data, sufficient frame rate is important to avoid undersampling. In Doppler, the frame rate is also important in relation to the Nyquist phenomenon.

The temporal resolution is limited by the sweep speed of the beam. And the sweep speed is limited by the speed of sound, as the echo from the deepest part of the image has to return before the next pulse is sent out at a different angle in the neighboring beam.

If the desired depth is reduced, the time from sending to receiving the pulse is reduced, and the next pulse (for the next beam) can be sent out earlier, thus increasing sweep speed and frame rate, as shown below.

As the depth  of the sector determines the time before next pulse can be sent out, higher depth results in longer time for building each line, and thus longer time for building the sector from a given number of lines, i.e. lower frame rate.
Thus reducing the desired depth of the sector results in shorter time between pulses, and thus shorter time for building each line, shorter time for building the same number of lines, i.e. higher frame rate. In this case, the depth has been halved, and the time for building a line is also halved.

For a depth of 15 cm, this means that the time for building one line will be 2 x 0.15 m / 1540 m/s =  0.19 ms. The frame rate is then given by the depth and the number of lines, which again is a function of sector width and line density. Thus, for 64 lines the time for a full sector will be about 12 ms, which in theory may give a frame rate of around 80 FPS, in practice the frame rate is lower, around 50.
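The arithmetic above generalises to a small function. Note that this is the theoretical upper bound only; as stated, real scanners come out lower.

```python
C = 1540.0  # speed of sound in soft tissue, m/s

def max_frame_rate(depth_m, n_lines):
    """Theoretical frame rate ceiling: each of n_lines must wait for the
    echo from the maximum depth before the next pulse is fired."""
    t_line = 2 * depth_m / C        # round trip: 0.19 ms for 15 cm
    return 1.0 / (n_lines * t_line)

fps = max_frame_rate(0.15, 64)      # about 80 frames per second
```

Halving the depth or the number of lines doubles the ceiling, which is exactly the trade off discussed in the following paragraphs.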

The point of this is that reducing the depth to the field of interest will give a higher frame rate, which can either be used for higher temporal resolution, or for increased spatial resolution or sector width (see later). Looking at commercial scanners, the effect of reducing depth is often surprisingly small; this may be due to the manufacturers automatically using the increased temporal capacity to increase line density rather than frame rate.

Still, the field of view should be limited to the field of interest. In practice, when studying the ventricles, the atria should be excluded.

In this case, in the image to the left, the depth has been halved, reducing the time for building each line to half, thus also halving the time for building the full sector, doubling the frame rate.

Number of beams: Sector width and line density.

The sweep speed can also be increased by reducing the number of beams for a full sector. Reducing the number of lines in the image will reduce the time for building up the whole image. This can be achieved by decreasing the sector angle (width) while keeping the line density, i.e. reducing the field of view but keeping lateral resolution. Decreasing the line density while keeping the same sector angle will achieve the same increase in frame rate, but reduce lateral resolution.

Fig.7a. A sector with a given depth, sector width and line density determines the frame rate.

b. Reducing sector width, but maintaining the  line density, gives unchanged lateral resolution but higher frame rate, at the cost of field of view.
c. Reducing the  line density instead and maintaining sector width, results in lower number of lines, i.e. lateral resolution, and gives the same increase in frame rate.

So frame rate can be increased either by reducing the sector width, or reducing the line density. The one has the effect of narrowing the field of view, the other of reducing the lateral resolution. A narrow sector means that you can maintain lateral resolution with higher frame rate, reduced line density means that you can maintain your field of view (for instance for comparison of the walls) at a higher frame rate.

As can be seen, there always has to be a trade off. Frame rate is a compromise between sector size (width and depth) and resolution (line density). Line density is a trade off with frame rate and sector width. The fundamental limitation is the speed of sound. In the example above, for a depth of 15 cm (giving a time for building one line of 0.19 ms), with 64 lines, the time for a full sector is 64 x 0.19 ms = 12.5 ms, and the frame rate is 1000 / 12.5 = 80 FPS. In practice the real frame rate is lower.
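The trade off can also be turned around: given the speed of sound, a desired depth and a target frame rate, the total number of lines available per frame follows directly. A small sketch, assuming single-line acquisition:

```python
C = 1540.0  # speed of sound in soft tissue, m/s

def max_lines_per_frame(depth_m, target_fps):
    """Lines that fit in one frame when each line waits for the round trip."""
    t_line = 2 * depth_m / C         # round-trip time for one line
    return int(1.0 / (target_fps * t_line))

n = max_lines_per_frame(0.15, 50)    # about 102 lines at 15 cm depth and 50 FPS
# this line budget can be spent on sector width or on line density, not both
```

Halving the depth roughly doubles the line budget, which is why excluding the atria when studying the ventricles pays off.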

This is shown in the example below:

Ultrasound acquisitions of the same ventricle at frame rates 34 (left), 56 (middle) and 112 (right), all other settings being equal. The increased frame rate is achieved by reducing the number of lines, i.e. the line density. This can be seen as an increasing width of the speckles in the image with increasing frame rate, resulting in a lateral blurring of the image. The first step from 34 to 56 seems to retain an acceptable image quality, indicating that the line density was redundant at the lowest frame rate. (In fact, it may seem that the image in the middle has the best quality, as the left image seems more grainy. But the graininess is the real appearance of the echoes, while the more homogeneous appearance in the middle and the right is due to smearing). However, as line density decreases toward the bottom of the sector (by the divergence of the lines), the effect is most clearly seen there, i.e. in the atrial walls, the mitral ring and valve. In the image to the right, the endocardial definition is lost. As it is the echoes that are smeared, the effect will result in an apparently decreased cavity size.

These images also illustrate the drawback of time gain compensation: all three images have the same TGC, showing about the same brightness of the walls from base to apex (the attenuation being offset by the TGC), but with increasing cavity noise.

Multiple line acquisition (MLA)
A method for increasing the frame rate for a given sector and line density is to fire a wide transmit (Tx) beam, and listen on more narrow receive (Rx) beams simultaneously. This is called multiple line acquisition (MLA), and is illustrated below:

In this example, a wide beam is fired, and for each of the four transmit beams, there are four receive beams (4 MLA). Thus, the frame rate is increased fourfold for the same number of lines.
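In terms of the simple frame rate arithmetic used above, MLA divides the number of transmit events by the MLA factor, since only transmits have to wait for the round trip. A sketch:

```python
C = 1540.0  # speed of sound in soft tissue, m/s

def frame_rate_mla(depth_m, n_lines, mla=1):
    """Frame rate when each wide transmit covers `mla` receive lines."""
    n_tx = -(-n_lines // mla)        # ceiling division: transmit events needed
    return C / (2 * depth_m * n_tx)  # one round trip per transmit event

single = frame_rate_mla(0.15, 64, mla=1)  # about 80 FPS
quad = frame_rate_mla(0.15, 64, mla=4)    # fourfold: about 320 FPS
```

The same gain can instead be spent on line density or sector size, at an unchanged frame rate.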

Limitations of the MLA technique

The MLA has limitations that are especially important in forming the B-mode image. 

MLA angle discrepancy. The width of one transmit beam is exaggerated for visualisation. One wide beam is transmitted, and four narrow receive beams are received. The transmit beam has a main direction shown by the red arrow. The receive beams have directions (blue arrows) at an angle to the transmit beam, and this angle increases with increasing distance of the receive beam from the middle of the transmit beam, i.e. with the MLA factor.

This angle discrepancy will result in MLA artefacts, visible as longitudinal lines or blocks in the B-mode image as shown below:

MLA angle artefacts in B-mode. Left: single line acquisition, where frame rate is achieved by a fairly low line density, and the image is then smoothed by interpolation between scanlines as described above. Right: 4 MLA acquisition. This should in principle result in a quadrupling of the number of lines, and an image with better lateral resolution. However, the increasing angle deviation between the Tx beam and the Rx beams in the lateral parts of the Tx beam will result in the lines being visible as blocks, the improvement in image quality being negligible or none. Image courtesy of Tore Bjaastad.

Smoothing this in the image will result in "smearing", and hence, reduced resolution again. Thus, increasing frame rate with MLA and then smoothing the image becomes similar to increasing frame rate by reduced line density.
In B-mode, where image quality is the main focus, there has thus been a practical limitation of 2 MLA.

In tissue Doppler, the image quality is of less concern, as the main emphasis is on velocity data rather than image quality. The MLA factor, and hence the frame rate, of tissue Doppler is thus usually higher, but at the cost of lower lateral resolution. This may not be apparent unless one compares data across the beams. An example can be seen here. In practice, the MLA factor can be increased to at least 4 MLA.

However, modern equipment will allow more data to be transmitted directly into the scanner, and modern computer technology allows more data processing at higher speed. Thus, technology will become far more complex, and neither traditional beamforming nor image processing conforms to the simple principles described here, but they will still serve to give an idea.
And the physical principles still apply.

In practice, for modern B-mode, frame rates will become similar to colour tissue Doppler, when the MLA artefact problem is dealt with.

3D ultrasound

3D ultrasound increases complexity considerably, resulting in a set of additional challenges.

The number of crystals needs to be increased, typically from between 64 and 128 to between 2000 and 3000. However, the probe footprint must still be small enough to fit between the ribs, and the aperture size must still be adequate for image resolution.

The number of data channels also increases by the square, from 64 to 64 x 64 = 4096. This means that the transmission capacity of the probe connector needs to be substantially increased, and some processing has to take place in the probe itself to reduce the number of transmission channels.

The number of lines also increases by the square of the number for 2D, given the same line density, meaning that each plane shall have the same number of lines, and a full volume shall then be built from the same number of planes. This means that given 64 lines per plane, the number of planes should be 64, which means a total of 64 x 64 = 4096 lines. The time per volume will then be 0.19 ms x 4096 = 778 ms, or about 0.8 s, i.e. a rate (usually termed the "volume rate" in 3D imaging) of about 1 volume per heartbeat for a heart rate of 75. This is illustrated below.

Building a 2D sector with lines. (Even though each line (and the sector) has a definite thickness, this is usually not considered in 2D imaging, except in beamforming for image quality.)
Building a 3D volume. Each plane has the same number of lines as in the 2D sector to the left, and takes as long to build. The number of planes equals the number of lines in each plane. Here only the building of the first plane is shown (compare with left), but the time spent on each of the following planes is in proportion. The time for a full volume is thus proportional to the square of the number of lines in each plane.
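The numbers above can be checked with the same round-trip arithmetic extended to a volume, where the number of planes equals the number of lines per plane (single-line acquisition assumed):

```python
C = 1540.0  # speed of sound in soft tissue, m/s

def volume_rate(depth_m, lines_per_plane):
    """Volumes per second when a full volume needs lines_per_plane**2 lines."""
    t_line = 2 * depth_m / C            # about 0.19 ms at 15 cm
    n_lines = lines_per_plane ** 2      # 64 x 64 = 4096 lines
    return 1.0 / (t_line * n_lines)

vr = volume_rate(0.15, 64)  # about 1.25 volumes/s, i.e. roughly one per heartbeat
```

This makes the core problem of full volume 3D explicit: squaring the line count divides the achievable rate by the same factor.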

This means that full volume 3D ultrasound has to pay the price of a substantial reduction in both frame rate and line density (resolution) at the same time. Thus, the lateral resolution is poor in 3D acquisition compared to 2D acquisition. The images can be seen to be very smoothed compared to 2D.

Possible compensations are:

Gated volume acquisition (stitching), in this case over four heartbeats. Only one fourth of the full volume is taken in one heartbeat, so the full heartbeat is used for an increased number of lines and planes, as well as shortening the time for the acquisition of the partial volume. In the next heartbeat, the next fourth of the volume is acquired, and so on, acquiring a full volume in four heartbeats. The four partial volumes are then aligned by ECG gating into one reconstructed volume, and the reconstructed volume thus has the same volume rate as the four partial volumes.

Thus, the limitation in 3D sector size can be used both for more lines (resolution) and increased volume rate. However, the reconstructed acquisition is no longer real time.
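The gain from gating can be sketched with the same arithmetic: each beat only has to cover 1/n of the lines, so the sub-volume (and hence the reconstructed) volume rate scales with the number of beats. This sketch ignores the extra lines that the freed capacity is typically also spent on.

```python
C = 1540.0  # speed of sound in soft tissue, m/s

def gated_volume_rate(depth_m, lines_per_plane, n_beats):
    """Reconstructed volume rate equals that of one sub-volume, which
    only acquires 1/n_beats of the total lines per cardiac cycle."""
    t_line = 2 * depth_m / C
    lines_per_beat = lines_per_plane ** 2 / n_beats
    return 1.0 / (t_line * lines_per_beat)

full = gated_volume_rate(0.15, 64, 1)   # about 1.25 volumes/s, real time
gated = gated_volume_rate(0.15, 64, 4)  # about 5 volumes/s, but stitched
```

In practice vendors split the gain between volume rate and line density, which is why four beats typically land around 20 VPS rather than the full theoretical resolution.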
Volume acquisition can then be displayed as either a surface rendering, or multiple section planes through the volume:

Surface rendering of a 3D volume. The image shows a cut through the LV between base and apex, looking down toward the base; the papillary muscles and mitral valve can be seen. The illustration also shows that the temporal resolution is too low to actually show the opening of the mitral valve during atrial systole, only a slight flicker can be seen at end diastole.
The same volume, now displayed as a series of short axis slices from apex (top left) to base (bottom right). A slight stitching artefact (spatial discontinuity) can be seen in the anterior wall (top of each slice).

The rendering is mainly useful for morphology, especially valves, but here, TEE gives better images. The short axis slices are more useful in assessing wall motion.

The disadvantage of this is that each full-volume heart cycle is reconstructed from multiple beats, and small movements, f.i. by respiration, may result in misalignment of wall segments. The acquisition is thus usually taken in a breath hold, and thus there is a practical limitation to the number of beats that can be stitched. Usually four to six are used, six being at the limit for many patients. Four beats, by most vendors, will now result in a volume rate of around 20 VPS.

Any small movement will result in misalignment of the sub volumes, with a sharp boundary within the volume where there is both spatial and temporal discontinuity (stitching artefact).

3D acquisition of a ventricle with an inferior infarct. The display shows the apical planes to the left, and nine cross sectional planes to the right, going from the apex (top left) to the base (bottom right, in reading order). The infarct can be seen as inferoseptal a- to dyskinesia in the basal sections. The image also illustrates that the software can be enabled to track the planes, thus eliminating out of plane artefacts when evaluating wall motion. Note that there are drop outs in the anterior wall that cannot be eliminated by moving the imaging plane. Image courtesy of Dr. A. Thorstensen.
Stitching artefacts. In this volume, reconstructed from four heartbeats, i.e. four sub volumes, there are stitching artefacts between each of the sub volumes. This is due to motion of either the heart (f.i. because of respiration) or of the probe. In the inferior wall (bottom of each slice), the spatial discontinuity is very evident, less so at the other stitches, but in the anterior wall there is a discontinuity that gives the illusion of a dyssynergy.



Basically, discrete objects with high reflexivity will cause attenuation shadows.

Attenuation. Imaging of a homogeneous tissue, f.i. the liver, will show an apparent change in density behind structures with different attenuation.
Behind a structure with high reflexivity (e.g. a calcification), there will be high attenuation (white; left). Hence, the sector behind receives less energy and appears less dense (darker); the area behind may even be a full shadow.

Behind a structure with low reflexivity (e.g. a fluid) there is little attenuation (black; right), the tissue receives more energy and appears denser (brighter - "colouring") than the surrounding tissue.
Liver with a gallbladder in front, containing gallstones. The gallstones are dense, with a shadow behind. The rest of the gallbladder is fluid filled, thus the sector behind the fluid appears denser than the neighbouring tissue due to "colouring", which in fact is only lack of attenuation.

In echocardiography, the shadows may even be useful, as they usually denote that high reflexivity structures are calcified (thick, dense structures may seem very bright with the usual scanner settings, but will usually not cast a shadow if not calcified):

Calcification in the posterior mitral ring seen both in parasternal and apical views from the same patient. The shadow is clearly visible in both views.

Calcified aortic valves, with heavy shadows behind.

Both air and bone will attenuate the ultrasound beam almost totally, thus creating a shadow.

However, a shadow has different effects depending on the distance from the probe. A distant shadow will simply create a drop out behind the shadowing object. On the other hand, a shadow close to the probe will simply reduce the effective aperture, thus not creating a drop out, but instead reducing the lateral resolution. The principle is illustrated below.
Illustration of the effects of shadows on an ultrasound beam. Left: no shadow. Middle: a shadow distant from the probe (e.g. a calcification or the lung seen at a distance), resulting in a shadow with no image below it. Right: a shadow close to the transducer surface (e.g. a lung edge or rib) will result in a narrow beam (reduced apparent aperture), which will not be seen as a shadow in the picture, but rather as reduced lateral resolution. (Original simulation image to the left courtesy of Hans Torp, modifications by me.) The effect of the depth of the origin of the shadows in the images is shown below, indicated by the green arrows.
Left: shadow originating at a depth of ca 3 cm, as can be seen by the visible structures of the chest wall closer to the probe. The shadow is probably due to the edge of the lung. Right: a small repositioning of the probe solves the problem. Left: shadow originating close to the chest wall (< 1 cm), probably the edge of a costa. It can be seen as a shadow, but the main effect is loss of lateral resolution in the shadow, and again a small repositioning of the probe solves the problem, as seen to the right.
More pronounced drop out of the anterior wall in this 2-chamber view due to a lung shadow distant from the probe. However, the lateral resolution may be seen to be reduced at the basal part of the border between the picture and the shadow. Reduced lateral resolution due to a costal shadow. The effects of both costae and shadows will vary according to the distance from the probe. In this case the patient was extremely thin, thus there was virtually no distance between the probe and the costa. In this case, no localised shadow can be seen; the costa was to the left in the image, where resolution is poorest.
If the near shadow is in the centre of the probe, the result may be that the beam is split in two, resulting in two apparent apertures. The effect on the image is shown below. Split image due to two virtual apertures, caused by a near shadow in the middle of the probe footprint.


Reverberations are defined as the sound remaining in a particular space after the original sound pulse has passed.
Thus, a single echo is a reverberation (first order), and multiple echoes will be higher order reverberations, as illustrated below.


The phenomenon of thunder is a typical reverberation effect:

Reverberations: Simplified animation of thunder. The sound of lightning is a short, sharp crack. The wavefront of that sound (red) reaches the listener first, but the wavefront is then reflected from different cloud surfaces at different distances from the listener as secondary echoes (primary reverberations; blue and green), and also a tertiary echo (secondary reverberation; yellow) and even higher orders. Thus, the crack is "smeared out" to a long lasting rumble.

In ultrasound imaging, the primary echoes are actually first order reverberations. However, in ultrasound images the term is usually restricted to artefacts caused by the echo bouncing more times (higher order reverberations), creating false images, as the additional delay due to multiple reflections will be interpreted as images at greater and different depths. One of the most typical phenomena is the stationary reverberations caused by the bouncing of the pulse between a structure close to the surface and the probe surface:

Stationary reverberations are caused by stationary structures, usually in the chest wall, causing the ultrasound to bounce back and forth between the skin and the structure, increasing the time before the echo returns and giving rise to a false image of an apparent stationary structure deeper down.

Top: a common reverberation in the lateral wall, seen as a stationary echo (arrows). Below: the principle shown diagrammatically, a reflector causing the ultrasound pulse to bounce; for each bounce back, the echo is interpreted as a structure at a depth corresponding to a multiple of the original depth.
This is even more evident in this image, showing multiple stationary reverberations from the apex. All the reverberations have the same distance. In the blow up below, the reverberation space can be seen to be an echolucent space in front of the apical pericardium, and the distance between the reverberations equals the original distance between the probe and the pericardium.
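The rule that the false images appear at multiples of the original probe-reflector distance is simple enough to state directly (distances in mm, purely illustrative values):

```python
def reverberation_depths(reflector_depth_mm, orders=3):
    """Apparent depths of the false images: each extra round trip between
    probe and reflector adds one reflector depth to the echo's travel time."""
    return [reflector_depth_mm * n for n in range(1, orders + 1)]

reverberation_depths(20)  # -> [20, 40, 60]: equidistant stationary echoes
```

The equal spacing is the diagnostic clue: equidistant stationary echoes along a beam point to a single reverberating pair of surfaces.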

Reverberations need not necessarily be totally stationary; if the reflecting surface that gives rise to the echo moves, the reverberations will move as well.

Reverberations in colour Doppler

Reverberations may also occur in colour Doppler:

The red jet shown in the atrium is a reverberation originating from the aortic regurgitation jet.
In the B-mode acquisition, a slight, possible clutter line can be seen as well, but in this case the reverberation signal is predominantly in the Doppler signal.

Documenting that this is not a pathological jet: the apical long axis and four-chamber views do not show such a jet in the same location.

The simultaneous duration of the two jets shown on the reconstructed M-mode also confirms that this is a reflection, and not something else (f.i. a venous signal or fistula).
The distance between the jets is compatible with the reflecting layer being the immovable structure outside the pericardium.
- and quantitative analysis shows the reversal of the phase in the reflected signal.

Ring down artefacts

The "ring down" phenomenon is a special instance of reverberations, in the form of a bright beam radiating out behind a small echolucent (often fluid filled, but possibly fat) layer in front of a scatterer with high reflexivity. The source of the ring down artefact is thus a small reverberating space in front of a powerful reflector, despite the fact that the artefact is projected behind it.

Ringdown artefacts in an echo from a healthy (and young) person, originating from the base of the left ventricle and right atrium. As explained below, they most probably originate from the pericardial space. The persistence of the phenomenon through the depth may partly be a function of the Time Gain Compensation, and the fan-like appearance, of course, is due to using a sector scanner.

This artefact was originally described in relation to small gas bubbles in the abdomen, and also to small cholesterol crystals in the gall bladder. However, as seen above and below, small structures in the pericardium, as well as mechanical valve components, may also give rise to this. The mechanism has been proposed as being resonance, i.e. that the pulse hitting a small gas bubble or cluster of bubbles may set the bubbles resonating, emitting energy long after the original pulse has been reflected. The scanner would interpret this as successive echoes in the same direction, but with increasing depth, i.e. a bright ray. In echocardiography, however, the artefact is not uncommon, most often originating from the pericardium, and normal subjects, of course, do not have air in the pericardium.

If present, they do not disappear with a change of probe frequency, which excludes resonance as a mechanism:

Ringdown artefacts from the pericardium. As seen by this image, they are present with all probe transmission frequencies, which would not be the case if this was due to resonance.

Resonance is basically related to a specific frequency, the eigenfrequency of the source. Higher frequencies can also cause resonance, but mainly at the harmonic frequencies, i.e. those that are one or more octaves (multiples of the basic frequency) above the eigenfrequency. In the example above, the artefact is present with frequencies that are not multiples of each other, so resonance is ruled out.
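The argument can be made concrete with a small check (illustrative Python; the frequency values are hypothetical): resonance would require all the transmit frequencies showing the artefact to be integer multiples of a single eigenfrequency.

```python
# Illustrative sketch with hypothetical frequencies: if the artefact were
# resonance, all transmit frequencies showing it should be integer
# multiples (harmonics) of one eigenfrequency.
freqs = [1.7, 2.0, 2.5, 3.3]  # hypothetical transmit frequencies, MHz

def all_harmonics_of(f0, freqs, tol=1e-9):
    """True if every frequency is an integer multiple of f0."""
    return all(abs(f / f0 - round(f / f0)) < tol for f in freqs)

# No candidate eigenfrequency explains all of them as harmonics:
print(any(all_harmonics_of(f0, freqs) for f0 in freqs))  # False
```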

Thus, the ring down artefact is a special instance of reverberations, where there are multiple reverberations within a short space as illustrated in the diagram below:

Reverberations. In all cases, reverberations are the result of the ultrasound pulse bouncing back and forth between two layers, the low reflecting space between them can be called the "reverberation space". Here the probe surface is shown in black, the reflector causing the reverberation in dark grey, while the artefact echoes are shown in lighter grey. The reverberation space in front of the reflector is illustrated in light red.
It is clear that fluid-filled layers in the body may act as reverberation spaces, provided the structure behind is sufficiently reflective. In the lungs, this phenomenon is seen in connection with oedema in the interlobular septa, as the air space in the alveoli is almost totally reflective. This is called comet tails.

Comet tails

The comet-tail artefact is the term used for the ring down phenomenon during ultrasound of the lungs with a cardiac probe (281). It has been shown to be a marker of interstitial fluid in the lungs (282), i.e. oedematous interstitial septa (equivalent to the Kerley B-lines on X-ray), and the method has been found quick and reliable. The reverberations should then be within the oedematous interstitial septa, as air-filled alveolar clusters in front and behind are strong reflectors, causing the reverberation within a very short distance (283). As penetration through the lung is poor, they have to originate close to the lung surface:

Lung ultrasound showing comet tails from a patient with heart failure. In this case they move with the lung during respiration. The lung tissue can be seen in the upper few centimetres; below that the signal is totally attenuated, but the comet tails are clearly visible. Image acquired with a hand-held ultrasound device. Image courtesy of Bjørn Olav Haugen, NTNU.

As with calcification shadows, it is an example of an artefact giving useful information.
The source of the ring down artefact is a small reverberating space in front of a powerful reflector, which means that the reflector may give rise to an attenuation shadow as well. This also shows up in the cone behind the reflector, and the attenuation shadow itself may act to increase the apparent gain of the ringdown beam.

Ring down echoes from the pericardium. They can be seen as bright bands radiating down, and the source seems to be real, as the ring down beams are visible in both long and short axis views from the same patient. The reverberating space is probably the pericardial space itself. The uneven distribution of the ring down beams may be due to varying reflectivity from the different directions of the surfaces relative to the transmitted beams.

Ring down beam seen to originate from the apicolateral pericardium. As with sidelobes, in this case the shadow is not constant, probably due to the source moving in and out of the plane.
Parasternal image from a patient with a mechanical aortic valve, combining shadows and ring down shadows. The thick metal ring itself gives rise to an ordinary shadow from the anterior part, while the thin part of the carbon fibre ring protruding out into the sinus valsalvae gives rise to a ring down beam. The reverberating space may be the sinus in front of the protruding carbon ring.

Discrete reverberations as shown above are due to the fact that the signal remains coherent, i.e. remains recognisable to the scanner as a distinct echo.

However, the echoes may also be scattered in all directions, and the pulse may bounce in different directions (as in the thunder animation above) before part of the reflected pulse reaches the probe. The reflected signal then loses its coherence. This will not give a distinct echo like the ones above, but rather more diffuse, less dense shadows, as in the example below:

Heavy reverberation band across this long axis image. The shadow is not distinct, and thus far less coherent than the examples above. Shadowy reverberations covering the anterior wall in this 2-chamber image. It is differentiated from the drop out shown above, as we can see a "fog" of structures covering the anterior wall. The structures are stationary. On the other hand, these are not distinct reverberation shadows, but incoherent clutter.

Shadowy reverberations may seem of little importance, as the B-mode often is fairly well visualised anyway. This is partly due to the motion, and partly due to second harmonic mode, which reduces the amplitude of reverberation noise, but only in the B-mode, as tissue Doppler must be done in fundamental mode due to the Nyquist limit.

The impact of reverberations on tissue Doppler is discussed below, on strain rate imaging by tissue Doppler in the measurements section, and on speckle tracking in the measurements section.

Stationary echoes and noise are also referred to as "clutter". This noise may also result in a more random pattern (shadowy reverberations), resulting in a more blurred picture.

Side lobes

Each beam is not solely concentrated in the main beam as illustrated above. In addition, some of the energy is dispersed in side lobes, originating among other things from interference, as illustrated below.

Simulated beam with focusing, showing interference pattern dispersing some of the beam to the sides. (image courtesy of Hans Torp).
Side lobes from a single focussed ultrasound beam. These side lobes will also generate echoes from a scatterer hit by the ultrasound energy in the side lobes, i.e. outside the main beam.

As echoes from a scatterer in the side lobe pathway are perceived as coming from the main beam, this will result in a false echo, apparently located in the main beam.

As the beam with its side lobes sweeps back and forth across the sector, each echo from the scatterer, in both the main beam and the side lobes, will generate a false echo in the position of the main beam.

This results in the echo being smeared out across a large part of the sector.

Patient with a mechanical aortic valve. The strong echo from the metal in the ring creates side lobes across most of the sector. They can be seen to move with the AV-plane motion, as expected.

In most cases, the side lobes originate from less intense echoes, giving smaller side lobes that are more difficult to discern from real structures.

Side lobes originating from the fusion line of the aortic cusps, seen to extend into both the LV cavity and the aortic root cavity (arrows).
As opposed to reverberations, the side lobes move with the structure, and may change with time (in this case the echo intensity of the fusion line decreases as the valve opens, and thus the intensity of the side lobes too).

As opposed to reverberations, the side lobes will move, as well as increase and decrease in intensity, in parallel with the source of the echo, as shown above.

Angle dependency

The angle dependency of Doppler measurements is well known. However, M-mode measurements are just as angle dependent:

Effect of angulation in thickness measurement. The true thickness is L0; the measurement, however, may be done along an M-mode beam parallel to the length L. As the cosine of the angle θ between them is defined as cos(θ) = L0 / L, the measured length is L = L0 / cos(θ), i.e. the true thickness L0 is overestimated by the factor 1/cos(θ).
Example from reconstructed M-mode, with vertical scales aligned at zero and 6 cm depth for comparison. To the left, the line crosses the septum transversely, resulting in a diastolic thickness measure of 7 mm; to the right the M-mode line is skewed, and the measurement across the septum is longer (10 mm).

Thus, a skewed cross sectional M-mode will overestimate both wall thicknesses and chamber diameters. The angle distortion is eliminated, however, by using ratios. Fractional shortening and wall thickening are ratios of diastolic values and systolic changes, where the overestimation is present in both diastole and systole (unless the angle changes during systole, of course), and the ratios will remain unchanged. Thus wall thickening and fractional shortening can be estimated even if the line is skewed.
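The cancellation of the angle error in ratios can be verified numerically. A minimal sketch, with hypothetical thickness values and angle:

```python
import math

# Illustrative sketch with hypothetical numbers: a skewed M-mode line
# overestimates wall thickness by 1/cos(theta), but wall thickening (a
# ratio) is unchanged if the angle is the same in systole and diastole.
theta = math.radians(20)            # hypothetical skew angle of the line
wd, ws = 9.0, 13.0                  # true diastolic/systolic thickness, mm

wd_meas = wd / math.cos(theta)      # measured along the skewed line
ws_meas = ws / math.cos(theta)

thickening_true = (ws - wd) / wd
thickening_meas = (ws_meas - wd_meas) / wd_meas
print(round(wd_meas, 2))                                     # 9.58
print(round(thickening_true, 3), round(thickening_meas, 3))  # 0.444 0.444
```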

However, measuring absolute dimensions or motion by M-mode will lead to an overestimation, just as in measuring distances:

Angle dependency of motion measurement by M-mode. As a reflector moves from a to b in the direction 1, the true motion (displacement) is L1. If the ultrasound beam deviates from the direction of the motion by the angle θ, the apparent length along the ultrasound beam will be L2, which is the hypotenuse of the triangle, and thus L2 = L1 / cos(θ). Thus angle deviation of M-mode measures will always overestimate the real motion (as opposed to Doppler measurements). The angle error in displacement measurement is demonstrated in a reconstructed M-mode. As the skewed M-mode line is shorter, scales have been aligned at 0 and 6 cm (green lines). The caliper measures show how an increasing angle between the M-mode line and the direction of motion increases the overestimation of the MAPSE.

Again, measuring displacement relative to end diastolic wall length will give correct values, as both wall length and displacement will have the same ratio despite the angulation error, if measured along the same straight line. Thus, global strain is not affected to the same degree.

This means that in motion tracking by B-mode or M-mode, the measured displacement increases with the angle, if there is an angle deviation between the direction of the motion and the tracking. Also, this means that if velocity is calculated from this motion, this too will increase with the angle, opposite to the angle effect on Doppler, where velocity decreases with the angle as discussed in the Doppler section.
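The opposite angle effects of the two methods can be illustrated in a couple of lines (velocity and angle are hypothetical values):

```python
import math

# Illustrative sketch (hypothetical velocity and angle) of the opposite
# angle effects: motion tracked in the image (M-mode/B-mode) is
# overestimated by 1/cos(theta), while Doppler velocity is
# underestimated by cos(theta).
v_true = 8.0                          # true tissue velocity, cm/s
theta = math.radians(30)              # angle between beam and motion

v_mmode = v_true / math.cos(theta)    # hypotenuse effect: overestimation
v_doppler = v_true * math.cos(theta)  # projection onto beam: underestimation
print(round(v_mmode, 2), round(v_doppler, 2))  # 9.24 6.93
```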


Foreshortening

For correct display of the left ventricle, the imaging plane has to transect the apex. This is ensured by finding the apex beat by palpation. However, the apex does not necessarily offer the optimal window for imaging, and the intercostal space above may give a better view. This, however, may lead to a geometrical distortion as illustrated below:

Correct transapical plane (blue) versus foreshortened plane (yellow). Firstly, it is evident that the foreshortened plane excludes parts of the apical wall, but the foreshortened image still shows an ellipsoid figure, so the foreshortening is not immediately evident. There is an angle between the planes (θ), and the apparent longitudinal wall in the foreshortened image is actually partly circumferential.

This is shown in the images below:

Foreshortening. The three images are taken with identical gain, compress and reject settings. Left: correct apical position, showing the apex in the centre of the sector. The wall visibility is poor. Middle: by moving the probe one intercostal space higher, the wall visibility becomes much better. However, the ventricle can be seen to be foreshortened, being much shorter than in the left image. But this is only evident by the comparison; without the reference image to the left it is not apparent, as the (virtual) apex is in the centre of the sector. However, rotating the probe to the two-chamber position reveals that the apex in fact is not in the centre at all; thus the four-chamber image is foreshortened.

The foreshortened image in four chamber view may seem to be better, at least for wall motion assessment, but the consequences may be:

  • Volumes and LV length will be underestimated (I've seen bi-atrial enlargement diagnosed due to foreshortening, simply because the normal atria looked bigger in relation to the foreshortened ventricle)
  • The apex is not imaged, and any apical abnormalities will be missed as seen below:

Stress echo image at peak stress. The foreshortened image to the left shows good wall visibility, and apparently normal wall motion in all segments. Right: correct placement of the probe, as seen by the slightly longer ventricle, shows poorer visibility, but the akinetic apicolateral part of the wall is evident, showing how foreshortening may almost totally mask any abnormality in the apex (although some asynchrony may be seen).

Another example is shown below:

In this case, there is foreshortening in the four-chamber view (left), which is not very evident. However, automatic adjustment of probe position when rotating to the two-chamber view (middle) and long axis view (right) masks the fact that there is foreshortening. The para-apical position in the two latter views, however, is evident from the inward motion of the apical endocardium.

- and the apical aneurysm evident on this ventriculogram is missed.
  • The anterior wall is partly apical and partly circumferential. Thus longitudinal shortening, especially by speckle tracking, may be measured as circumferential rather than longitudinal. This may also be related to the curvature dependency of strain as measured by speckle tracking.

Out of plane motion

The most common out of plane motion, is the movement of the base towards the apex, which is the longitudinal shortening of the ventricle. This is evident from the long axis view, but not the short axis:

Normal long axis image. The motion of the base of the ventricle towards the apex is evident in the long axis view.
Looking at the short axis view from the base, this is not evident, but comparing with the image on the left, this must mean that during systole, an entirely new part of the ventricle moves into the imaging plane.

This, of course affects M-mode measurements as well:

As can be seen, the base of the heart moves through the M-mode line during the heart cycle.
This means that measurements in fact are taken from different parts of the ventricle in end diastole and end systole. The images seem to indicate that systolic measurements are done in a part of the ventricle with a narrower lumen and thicker wall, and may thus overestimate both fractional shortening and wall thickening.

Non linear wave propagation and harmonic imaging.

Non-linear propagation of the signal in the body leads to distortion of the waves in the signal. This again leads to a dispersion of the frequency content in the signal, as assessed by Fourier analysis of the received signal.

Non-linear propagation. The upper panel shows the waveform of a pulse as originally transmitted, and after 6 mm transmission through tissue. The lower panels show how the energy distribution is shifted towards a more even distribution between more frequencies. (Image courtesy of Hans Torp.)

By Fourier analysis it is thus possible to transmit at half the frequency (typically 1.7 MHz as opposed to 3.4 MHz in native imaging), but receive at twice the transmit frequency (the second harmonic frequency: twice the frequency is one octave higher). Thus it improves penetration, which is especially important in obese subjects, while (almost) retaining the resolution.
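A toy simulation can illustrate how non-linear distortion creates energy at the second harmonic. This is a minimal sketch only: a simple quadratic term stands in for the real tissue non-linearity, and all numbers are for illustration.

```python
import numpy as np

# Illustrative toy model: non-linear propagation moves energy to the
# second harmonic. A quadratic term stands in for real tissue
# non-linearity; all numbers are hypothetical.
fs, f0 = 100e6, 1.7e6                     # sample rate and transmit freq, Hz
t = np.arange(0, 20e-6, 1 / fs)           # 20 microseconds of signal
p = np.sin(2 * np.pi * f0 * t)            # idealised transmitted pulse
p_distorted = p + 0.2 * p**2              # crude non-linear distortion

spectrum = np.abs(np.fft.rfft(p_distorted))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
fundamental = spectrum[np.argmin(np.abs(freqs - f0))]
harmonic = spectrum[np.argmin(np.abs(freqs - 2 * f0))]
print(harmonic > 0.05 * fundamental)      # True: energy appears at 2*f0
```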

Fourier analysis of the resulting signal in native frequency (left) and second harmonic mode (right) shows that the native signal contains much more energy at all depths, while the harmonic signal contains most of the energy at a certain depth, in this case at the level of the septum, showing a much better signal-to-noise ratio. (Image courtesy of Hans Torp.) Energy distribution of the signal from cavity (lower curve) and septum (upper curve), showing the same phenomenon as the middle picture. The difference between cavity signal (being mostly clutter) and tissue is small in the native frequency domain (1.7 MHz), but there is little clutter at the harmonic frequency (3.4 MHz). Thus, filtering out the native signal will reduce clutter, as shown below. (Image courtesy of Hans Torp.)

The noise from clutter and aberrations is mainly in the primary frequency, so the use of second harmonic will suppress noise, improving the signal-to-noise ratio. Also, the echoes from the side lobes are mainly in the primary frequency and will be reduced in second harmonic imaging.

Harmonic imaging, however, removes all energy in the primary frequency. This means that there is an overall reduction in the reflected energy, even with improved signal-to-noise ratio. Thus there are limits to how low the transmit energy can be set (for instance in contrast echo). In addition, focussing is more important, as this concentrates the energy in the beam.

Thus second harmonic imaging leads to:

1: Reduced noise and side lobe artifacts
2: Improved depth penetration.

Examples of the effect of harmonic imaging can be seen below.

The same image in harmonic (left) and fundamental (right) mode, showing the improved signal-to-noise ratio in harmonic imaging, especially in reducing noise from the cavity. (Thanks to Eirik Nestaas for correcting my left-right confusion in this image text.)
Stationary reverberation in harmonic (left) and fundamental (right) imaging, showing the effect of harmonic imaging on clutter.

However, due to the increase in pulse length with lower frequency, harmonic imaging also leads to:

3: Thicker echoes from speckles as discussed above.

Fundamental (left) and harmonic (right) images of the left ventricle at the level of the chorda tendineae. The echo of the chorda (blue arrow) can be seen to be thicker in harmonic imaging, due to the longer pulse length. The echo generates a side lobe that can be seen to the right of the chorda. The side lobe is more prominent in fundamental than harmonic imaging. Note also the reduction in cavity noise from the right and left ventricle in the harmonic image.

Halving the transmit frequency will also halve the Nyquist limit, and is thus less suited to Doppler imaging, as will be discussed below.

Speckle formation:

The gray scale image is seen to consist of a speckled pattern. The pattern is not the actual image of  the scatterers in the tissue itself, but the interference pattern generated by the reflected ultrasound:

Interference pattern. Here, two wave sources or scatterers in the far field (white points) are simulated. The emitted or reflected waves are seen to generate a speckle pattern (oval dots), as the amplitude is increased where wave crests cross each other, while the waves are neutralised where a wave crest crosses a trough. This can be seen by throwing two stones simultaneously into still water. The speckle pattern can be seen in front of the scatterers, towards the probe.
Irregular interference pattern. This is generated by more scatterers somewhat randomly distributed. The speckle pattern is thus random too.  Again there may be a considerable distance between the speckles and the scatterers generating the pattern.
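The randomness of the pattern can be illustrated with a toy simulation: the received amplitude at each image point is the coherent (phase-sensitive) sum of echoes from many randomly placed scatterers. This is a sketch with hypothetical numbers, not a beamforming simulation.

```python
import numpy as np

# Illustrative sketch: speckle as an interference pattern. The amplitude
# at each image point is the coherent sum of echoes from many randomly
# placed scatterers; identical scatterers still produce a random-looking
# amplitude from point to point.
rng = np.random.default_rng(0)
wavelength = 0.44e-3                    # roughly 3.5 MHz in tissue, m
k = 2 * np.pi / wavelength              # wavenumber

# 200 random scatterer positions within a 5 mm x 5 mm patch:
scatterers = rng.uniform(0.0, 5e-3, size=(200, 2))

def amplitude(point):
    """Coherent sum of round-trip echoes at one image point."""
    r = np.linalg.norm(scatterers - point, axis=1)
    return abs(np.sum(np.exp(1j * k * 2 * r)))   # 2r: round-trip phase

# Sample the field at a row of neighbouring image points:
points = [np.array([x, 2.5e-3]) for x in np.linspace(0.0, 5e-3, 8)]
amps = [amplitude(p) for p in points]
print([round(a, 1) for a in amps])      # widely varying: a speckle pattern
```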

Speckle tracking

Humpback whale diving. Each humpback has a unique (speckle) pattern on the underside of the tail (and flukes). Thus each individual can be identified by its speckle pattern. Photographs at different times and places can thus track the wandering of each individual over the whole area it roams, without recourse to anything else than the pattern. Speckle tracking! This is thus a method with low frame rate, giving mainly the extent of wandering over a long time period (the sampling interval). To measure swimming velocity, a Doppler sonar would have been useful.

The speckle pattern can be used to track myocardial motion due to two facts about the speckle pattern.

1. The randomness of the speckle pattern ensures that each region of the myocardium has its own unique speckle pattern that can differentiate it from other regions (just as with the whales).

Principle of speckle tracking, demonstrating the difference between two different regions of the myocardium by their different random speckle patterns. The two enlarged areas show completely different speckle patterns, which is due to the randomness of the interference. This creates a unique pattern for any selected region that can identify the region and hence its displacement in the next frame.

2. The speckle pattern remains reasonably stable, and the speckles follow the myocardial motion. This can be demonstrated by M-mode:

An M-mode along the septum demonstrates how the speckles show up as motion curves. It is evident that many speckles are only visible during part of the heart cycle, but if the speckle pattern is compared from frame to frame, the changes will be small. The grainy texture of the lines is due to the limited frame rate, as the M-mode on the right is reconstructed from the 2D image on the left. When the speckle pattern is followed by an M-mode in the wall, the alternating bright and dark points are seen as alternating bright and dark lines. The lines remaining to a large degree unbroken shows the pattern to be relatively stable, the speckles moving along with the true myocardial motion; thus myocardial motion can be tracked by the speckles.

Thus, defining a region (kernel) in one frame, the region of the same size and shape with the most similar speckle pattern can be identified in the next frame, and the motion of the kernel can be tracked from frame to frame.

Speckle tracking. Real time M-mode demonstrates how the speckle pattern follows the myocardial motion. (Remark how this image is not grainy, due to the high frame rate of real time M-mode).
Defining a kernel in the myocardium will define a speckle pattern within it (red). Within a defined search area (blue), the new position of the kernel in the next frame (green) can be recognised by finding the same speckle pattern in a new position. The movement of the kernel (thick blue arrow) can then be measured.
Speckle tracking search algorithm. The kernel is defined in the original frame at t = 0 (red square). In the next frame, at t = Δt, the algorithm defines a search area (white square), and the search is conducted in all directions for the matching kernel.

Thus, speckle tracking is basically pattern recognition, identifying an area (kernel) in one frame, and then tracking by identifying the kernel with the best match in the next frame.

Thus, the kernel can be tracked from frame to frame as illustrated here

The algorithm for this search is simple: it simply searches for the area with the smallest difference in the total sum of pixel values, the smallest sum of absolute differences (SAD). This has been shown to be as effective as cross correlation (26, 27). However, the speckle pattern will not repeat perfectly. This is due both to true out of plane motion (rotation and torsion relative to apical planes, and longitudinal deformation relative to short axis planes) and to small changes in the interference pattern. But the frame to frame change is small, and the approach to recognition is statistical. This means, however, that the search should be done from frame to frame, as the changes over longer time intervals will be too great.

Speckle tracking can be done by a two-dimensional search. Defining a kernel region in the myocardium will define the speckle pattern within it; the kernel in the initial frame is shown in red. Within a defined search area (marked in blue), the new position of this kernel can be recognised by finding the same speckle pattern within a like-sized frame in a new position. This indicates that each speckle has moved the same distance in the same direction (thin blue arrows), and the movement of the whole kernel will then have been the same (thick blue arrow). The size of the kernel defines the spatial resolution, and the size of the search region is defined by the maximal expected displacement from frame to frame; a higher frame rate will mean a smaller search region, if the velocity is the same. In practice, the speckle pattern does not repeat perfectly. However, every kernel has a unique speckle pattern due to the random nature of speckles. Finding the new position of the kernel can then be reduced to finding the like-sized area with the smallest difference in total pixel intensity from the original kernel within the search area. This is called the sum of absolute differences (SAD).

SAD = Σ(i,j) |K(i,j) − Kt(i,j)|

where K is the original kernel area, Kt is a like-sized area in the new location, and the sum is taken over all pixels (i, j) within the kernel. The new kernel position is the area with the smallest SAD within the search region. This has been shown to track well-developed speckle patterns as accurately as normalized cross correlation (26).
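The SAD matching step can be sketched in a few lines of numpy. This is an illustrative toy implementation, not any scanner's actual algorithm; the frames and the shift between them are simulated.

```python
import numpy as np

# Illustrative toy implementation of SAD block matching (not any
# vendor's actual algorithm). A kernel from frame 1 is searched for
# within a search area in frame 2; the trial position with the smallest
# sum of absolute differences wins. Frames and shift are simulated.
rng = np.random.default_rng(1)
frame1 = rng.random((40, 40))                 # simulated speckle image
true_shift = (3, 2)                           # simulated motion (rows, cols)
frame2 = np.roll(frame1, true_shift, axis=(0, 1))

def sad_match(f1, f2, top, left, size=8, radius=5):
    """Return the (dy, dx) within +/- radius giving the smallest SAD."""
    kernel = f1[top:top + size, left:left + size]
    best, best_pos = np.inf, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            cand = f2[top + dy:top + dy + size, left + dx:left + dx + size]
            sad = np.abs(kernel - cand).sum()
            if sad < best:
                best, best_pos = sad, (dy, dx)
    return best_pos

print(sad_match(frame1, frame2, top=16, left=16))  # (3, 2)
```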

In practice, and in two dimensions, the algorithm works as (27):

SAD(x, y) = Σ(i,j) |K(i, j) − Kt(x,y)(i, j)|

where Kt(x,y) denotes the kernel-sized area displaced (x, y) within the search area of the next frame; the SAD is computed for every trial displacement, and the displacement with the smallest SAD is chosen.

Cross correlation can be used to weight the movement from the original kernel region to the kernel region with the lowest SAD value (28).


Speckle tracking and angle dependency

In principle, pure speckle tracking is direction independent, and can track crosswise. There should thus be no angle effect, as the tracking occurs in the direction of the motion.

However, lateral resolution is important in delineating the speckles in the lateral direction. If the lateral resolution is low, the interpolation will result in a "smeared" picture, with speckles that are not so easily tracked in the lateral direction. In addition, the lateral resolution decreases with depth with sector probes.

Longitudinal speckle tracking in apical 4-chamber view. The resulting tracking of the kernels is shown in motion. As can be seen, with a drop out apicolaterally, this ROI tracks less than perfectly, giving too low strain in both LA and MA segments. Speckle tracking can be applied crosswise. In this parasternal long axis view, the myocardial motion is tracked in both the axial and transverse (longitudinal) direction. It is evident that the tracking is far poorer in the inferior wall, due to the poor lateral resolution at greater depth.

Also, drop outs and reverberations will affect the tracking, and in the lateral direction low lateral resolution will "smear" the speckles, making tracking less perfect, as can be seen in the parasternal long axis image above. It also means that lateral tracking will become poorer with increasing depth (as the lines diverge as well as become wider). Thus, in fact, there is some angle distortion in speckle tracking, by the same mechanism as in B-mode measurement and M-mode tracking discussed above.

This is illustrated below:

Angle dependency of speckle tracking is related to lateral resolution. Left: good resolution; as the speckles move, the kernel (rectangle) follows the speckle pattern. Right: poor resolution. As the speckles move, the kernel will follow the vertical motion, due to the better radial resolution, but will be unable to follow the lateral motion, at least until all the speckles have crossed the kernel boundary. This means that there is only tracking along the ultrasound beam.

There is reduced lateral resolution with depth, with increased frame rate (if obtained with reduced line density), and with near shadows reducing virtual aperture.

This might lead to angle dependence of speckle tracking strain as shown here.

Drop outs in speckle tracking

Drop out affecting speckle tracking. The application cannot track where there are no tissue data, in this case in the anterior wall (the markings don't move). The inferior wall seems to track normally.

Reverberations (clutter) in speckle tracking

Reverberation in the lateral wall affecting speckle tracking. As is visually evident, the application does not track across the reverberation, thus the two segments apical to the reverberations are seen as akinetic, the basal as hyperkinetic. All shortening is seen in the basal segment. In this case, the smoothing is seen to spread the effect of the reverberation out across two segments apical to the reverberation.

The kernel is in a reverberation in the lateral wall, and will not track; thus both the segments below and above the reverberation will show artefacts.
Adjusting the position of the kernel manually, allows speckle tracking despite the reverberation, if the kernel remains outside the reverberation during the whole heart cycle.

Drift in speckle tracking

The speckle pattern will not repeat perfectly. This is due both to true out of plane motion (rotation and torsion relative to apical planes, and longitudinal deformation relative to short axis planes) and to small changes in the interference pattern. But the frame to frame change is small, and the approach to recognition is statistical; the basic algorithms are shown here. Still, small inaccuracies in tracking may cause overall drift in the tracking. If there is a non-random element in the appearance and disappearance of the speckles, there will be an overall drift of the kernel relative to the myocardium.

Drift in ultrasound. As speckles disappear out of plane, or by a changing interference pattern, this may cause less than perfect tracking. The kernel is defined in frame 1, indicated by the red rectangle. In the next frame, some of the speckles disappear or have lower intensity, due to complete or partial out of plane motion or simply changes in reflectivity. The kernel may then find a slightly different area as the new kernel position (especially if the tracking is done by the sum of absolute differences, where the identification rests with the summed intensity within the kernel area). In frame 2, the true kernel motion is identified by the dark grey rectangle; the tracking, however, identifies the new position as the red rectangle. Some of the speckles above the kernel have decreased in intensity, while the speckles below have all increased. In frame 3, further changes in speckle visibility result in further slippage, i.e. slippage in relation to frame 2, which then is a larger cumulated slippage from frame 1. Two speckles from frame 2 above the kernel have disappeared, four speckles have decreased in intensity, and two speckles below the kernel have increased. The true position of the kernel from frame 1 is indicated by the light grey rectangle, the position of the red kernel from frame 2 by the dark grey rectangle, and the tracking by the red rectangle.

This, however, is only a potential effect. Most of the speckle appearance and disappearance may be random, causing random noise instead of systematic drift.
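The sum of absolute differences matching mentioned in the figure caption above can be sketched as follows. This is a minimal illustration, not the algorithm of any particular scanner; the function and variable names are invented for the example, and the "speckle pattern" is just random numbers shifted between two synthetic frames:

```python
import numpy as np

def track_kernel_sad(frame1, frame2, top, left, ksize, search):
    """Find the kernel's new position in frame2 by minimising the
    sum of absolute differences (SAD) over a small search window."""
    kernel = frame1[top:top + ksize, left:left + ksize].astype(float)
    best_offset, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            candidate = frame2[y:y + ksize, x:x + ksize].astype(float)
            sad = np.abs(candidate - kernel).sum()
            if sad < best_sad:
                best_sad, best_offset = sad, (dy, dx)
    return best_offset  # estimated frame-to-frame motion of the kernel

# Toy example: a random "speckle pattern" moved 2 pixels down, 1 right
rng = np.random.default_rng(0)
frame1 = rng.random((32, 32))
frame2 = np.zeros_like(frame1)
frame2[2:, 1:] = frame1[:-2, :-1]          # shift the whole pattern
print(track_kernel_sad(frame1, frame2, 10, 10, 4, 4))  # → (2, 1)
```

As the figure illustrates, when speckles change intensity or disappear between frames, the minimum SAD may be found at a slightly wrong offset, which is exactly the slippage described above.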

It does mean, however, that with a lower frame rate, the changes from frame to frame are greater, resulting in poorer tracking. A higher heart rate (e.g. in stress echo) will have the same effect, as the number of frames per cycle is reduced, i.e. a lower relative frame rate.

Thus: speckle tracking is frame rate sensitive:
  1. Too low a frame rate will result in too great changes from frame to frame, resulting in poor tracking. This may also limit the use at high heart rates, as the motion, and thus the frame to frame change, increases relative to the frame rate.
  2. Too high a frame rate is obtained by reducing lateral resolution, resulting in poorer tracking, at least in the transverse direction. If the lateral resolution is low, the interpolation will result in a "smeared" picture as shown here, with speckles that are not so easily tracked in the lateral direction. In addition, the lateral resolution decreases with depth with sector probes, making lateral tracking at greater depths doubtful. The poorer the lateral resolution, the poorer the tracking in the lateral direction, and the more angle dependent the method becomes.
Thus, both too high and too low frame rates may affect speckle tracking adversely. With present equipment, the optimal frame rate seems to be between 40 and 70 FPS if image quality is good, slightly higher with poorer image quality.
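A simple calculation illustrates the low frame rate problem. The velocity is only an illustrative assumption (a systolic tissue velocity of the order of 8 cm/s); the point is that the pattern moves farther between frames, and thus decorrelates more, the lower the frame rate:

```python
# Frame-to-frame tissue displacement for a given velocity and frame rate.
# The velocity figure is an illustrative assumption, not a measured value.
velocity_mm_s = 80.0  # 8 cm/s

for fps in (25, 50, 100):
    step_mm = velocity_mm_s / fps
    print(f"{fps:3d} FPS: {step_mm:.1f} mm between frames")
```

Doubling the heart rate halves the number of frames per cycle, which in this calculation is equivalent to halving the frame rate.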

This is a fundamental property of speckle tracking, and the drift from start of cycle to end of cycle may actually be used as a criterion for the quality of the speckle tracking. An even more advanced approach is to compare tracking forwards and backwards through the whole cycle, e.g. by cross correlation. Drift may be less with a higher frame rate (although that will lead to more angle dependency). If the speckle tracking is used for calculating a velocity field as the primary variable, as in 2D strain, the integration to displacement and strain will result in further drift by cumulating small errors. In addition, undersampling is a property of the low frame rate of B-mode. This reduces peak velocities, and the peak values are reduced even more if smoothing is applied before integration, as it is in 2D strain.
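How small per-frame errors cumulate into end-of-cycle drift can be sketched numerically. The bias and noise figures below are arbitrary assumptions for illustration, not measured values; the true per-frame displacement is constructed so that the tissue returns to its starting position at end of cycle:

```python
import numpy as np

# Sketch of drift: a small systematic bias in each frame-to-frame
# displacement estimate cumulates over the cycle.
rng = np.random.default_rng(1)
n_frames = 50                    # about one cycle at 50 FPS, HR 60
true_step = np.sin(np.linspace(0, 2 * np.pi, n_frames))  # returns to start
bias, noise = 0.02, 0.05         # mm per frame; assumed tracking errors
measured = true_step + bias + noise * rng.standard_normal(n_frames)

# Integrated (cumulated) position minus the true one: the drift
drift = measured.cumsum()[-1] - true_step.cumsum()[-1]
print(f"end-of-cycle drift: {drift:.2f} mm")  # ≈ n_frames * bias, plus noise
```

The random part of the error largely averages out, but the systematic part grows linearly with the number of frames, which is why the end-of-cycle value is a useful quality criterion.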

As speckle tracking can track in both transverse and axial directions, with a sufficient number of kernels, deformation can in principle be measured in two dimensions.
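As a sketch of the principle, the strain between two tracked kernels is simply the relative change of the distance between them. The positions below are invented for illustration:

```python
# Lagrangian strain from two tracked kernel positions along the wall.
# All positions (mm along the wall) are invented for illustration.
x_apical, x_basal = 10.0, 40.0        # kernel positions at end-diastole
x_apical_s, x_basal_s = 11.0, 36.0    # positions at end-systole, from tracking
L0 = x_basal - x_apical               # original segment length: 30 mm
L = x_basal_s - x_apical_s            # end-systolic length: 25 mm
strain = (L - L0) / L0
print(f"longitudinal strain: {strain:.1%}")  # → -16.7%
```

With kernels distributed over the wall in both directions, the same arithmetic applied pairwise gives the two-dimensional deformation.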

It's also important in dealing with applications to realise that not all apparent tracking is true speckle tracking, some of the motion seen in the image may be due to an advanced algorithm using information from other parts of the image:

False speckle tracking. This is due to the algorithm using motion data from the mitral ring, distributing it along the ROI in a kind of "model" of the motion for smoothing, in order to reduce the impact of drift and other sources of noise. As can be seen, in this image it "tracks" even where there are no speckles. This, however, is not true tracking; the bullets move according to the model calculating where they should be.

This is discussed more in the measurements section.

Within its limitations, however, speckle tracking can be used for measuring displacement, velocity, strain and strain rate as described below. New computational techniques have served to increase both focus and frame rate of B-mode, especially after leaving the crude MLA approach, so improvements in 2D speckle tracking may be expected as well. However, speckle tracking cannot become better than the eye: motion that cannot be seen by the eye (provided one zooms the image and replays in slow motion) cannot be tracked either, as speckle tracking only tracks the visible speckles.

In fact, the eye is better at distinguishing motion from non-motion, as we have specialised neural circuits for that (although speckle tracking, of course, is better at analysing the whole sector simultaneously).

3D speckle tracking. 

Hypothetically, 3D speckle tracking may have some advantages over 2D:

  • There is no out of plane motion of speckles, thus eliminating one source of drift.
  • A full volume acquisition hypothetically allows tracking in all strain directions in the same heartbeat, without having to acquire multiple views for a reconstruction.
However, the limitations of 3D ultrasound are still very severe:

  • Low frame rate. The greater the change from frame to frame, the poorer the tracking, and measurements may be subject to undersampling as well.
  • Stitching still allows speckles to disappear across stitching boundaries.
  • Low line density. This results in speckles being "smeared out", much in the same way as in 2D with high frame rate. And if the line density is increased by MLA, this will result in MLA artefacts that have to be compensated for by smoothing, with relatively little gain in lateral resolution.
  • Diverging lines from apex to base. This is the same in both 2D and 3D, but the effect becomes much more severe when line density is low at the outset. Even in 2D speckle tracking, transmural tracking from apical images becomes unfeasible in the base of the left ventricle, as shown here.

In practice, the temporal and spatial resolution is so low as to make 3D speckle tracking inferior. And as 2D imaging with modern computational techniques improves in both resolution AND frame rate, 2D speckle tracking seems to increase its edge.

In a recent study (279) of myocardial infarcts, 3D strain did not show incremental diagnostic value over the other modalities. 3D longitudinal strain was inferior to 2D longitudinal strain, and 3D circumferential, longitudinal and area strain did not add information, as opposed to infarct area by tissue Doppler (243).


Editor: Asbjorn Støylen