“Helping educational programs make digital radiography easy to understand.”

How Digital Has Changed Spatial Resolution: Slow and Easy

Quinn B. Carroll, MEd, RT

Atlanta Society of Radiologic Technologists 2018 Conference

 

Part A: The Digital Image

  1. The Bottom Line: Pixel and Dexel Size
  2. Foundational Principle:

                The single essential element that ultimately determines the sharpness of a digital image

is the size of the dexels or pixels being used to acquire or display the image

  1. General principle: For a given physical area, the greater the matrix size, the smaller the pixels, and the greater the spatial resolution (But, is this always true?)

-e.g., We already intuitively talk about the resolution of display monitors and cameras

in terms of the total number of pixels in their entire active matrix array (3 megapixels, 5 megapixels, etc.)

  1. For all digital systems, the maximum spatial resolution is set by the sampling frequency: it equals

 the Nyquist frequency (one-half the sampling frequency), usually expressed in LP/mm:

-For CR systems, the sampling frequency is the number of pixels scanned per mm across the

 PSP plate by the laser beam in the reader 

-For DR systems, the sampling frequency is the number of dexels (hardware detector elements)

 per mm across the detector plate

  1. This image sampling frequency depends upon the dexel or pixel pitch, defined as the

 distance from the center of one dexel or pixel to the center of an adjacent one

– Dexel pitch is approximately equal to dexel size (width)

-Pixel pitch is approximately equal to pixel size (width)

(Technically, for DR detectors, the dexel pitch includes any spaces between dexels, but

 this is a minor point here)

  1. Therefore, we can also say that sampling frequency depends upon the dexel or pixel size
  2. Smaller dexel or pixel pitch → smaller dexel or pixel size → sharper image → higher spatial resolution

  1. Since two pixels are required to record a line pair from a resolution template:

Maximum spatial resolution is the inverse of twice the pixel or dexel size:

SR = 1/(2p)   -where “p” is the pixel or dexel size OR pitch

Example: For a pixel size of 0.2 mm:

                                SR = 1/(2 × 0.2)  =  1/0.4  =  2.5 LP/mm

(practice on slides)
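
A minimal Python sketch of the SR = 1/(2p) relationship (not from the presentation; the pitch values are illustrative):

    def spatial_resolution_lp_per_mm(pitch_mm):
        """Maximum spatial resolution (Nyquist limit) for a given pixel/dexel pitch: SR = 1/(2p)."""
        return 1.0 / (2.0 * pitch_mm)

    for p in (0.2, 0.15, 0.1):                      # pitch in mm (illustrative values)
        print(f"pitch {p} mm -> {spatial_resolution_lp_per_mm(p):.2f} LP/mm")
    # pitch 0.2 mm -> 2.50 LP/mm, matching the worked example above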

  1. Some Confusion With Terms:

-The term “pixel” is sometimes improperly used to describe the hardware detector

 elements of a DR detector plate – These are properly referred to as dexels or

 dels, contractions of “detector element”

-For radiographers, pixels come in at least 2 general types:

  1. The hardware pixels that comprise the viewing screen of an LCD monitor
  2. The “soft” pixels that comprise an actual light image

 

  1. Formula Relating FOV to Matrix Size:
  2. “Soft” Pixel Size Formula:

– For a given physical area, Pixel Size  =  FOV / Matrix

Practice:

 

     What is the “soft” pixel size for a 305 mm image (FOV)

     displayed on a monitor screen with a 1024 X 1024 matrix? 

 

                Solution:   305 / 1024  =  0.298  ≈  0.3 mm pixel size
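
A short Python check of the same calculation, using the numbers from the practice problem:

    def soft_pixel_size_mm(fov_mm, matrix):
        """Displayed ("soft") pixel size from the FOV / Matrix relationship."""
        return fov_mm / matrix

    print(round(soft_pixel_size_mm(305, 1024), 3))   # 0.298 mm, rounded to 0.3 mm above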

 

  1. Any formula relating the FOV and the matrix to pixel size must take into account that …

-There are different types of matrices:

  1. Hardware matrix array of dexels in a DR detector
  2. Light image matrix created by CR reader sampling of the PSP plate
  3. Hardware matrix array of hardware pixels (dots) in a display monitor
  4. Light matrix of the displayed image itself

 

  1. Also, the field-of-view (FOV) can be defined in several different contexts:
  1. Physical size of a display monitor or the image receptor
  2. Size of the collimated field
  3. Level of magnification (zoom) applied at display monitor
  1. Whether the pixel size formula applies can be affected by whether these FOVs or matrices are

 fixed (unchangeable) or not

-At the display monitor, FOV is changed when zoom (magnification) is applied

-For some modalities, FOV of the initially displayed image may be selected by the operator

-Example, following slide: CT Scan of the Lateral Skull (on slides – skip?)

 

  1. KEY POINT: Hardware dexels and hardware pixels are not subject to change, invalidating the

 pixel size formula for these devices

 

  1. Hardware matrix array of dexels in a DR detector: Fixed del size
  2. Hardware matrix array of pixels in a display monitor: Fixed pixel size
  3. Light image matrix created by CR reader sampling of the PSP plate: Pixel size may be variable
  4. Displayed image matrix: Variable pixel size

 

  1. Spatial Resolution: DR Detectors

– Since DR detector elements are hardware, their size is fixed and never changes,

 regardless of detector plate size or collimation

                  -These dels range from 100 to 200 microns

                  -A 100-micron del produces an SR of about 5 LP/mm,

                                much less than traditional 200-speed film

Therefore, their inherent spatial resolution is consistent

The PS/FOV/Matrix formula does not apply
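
Checking the 100-micron figure against the formula above: SR = 1/(2 × 0.1 mm) = 1/0.2 = 5 LP/mm; a 200-micron del would give 1/(2 × 0.2) = 2.5 LP/mm.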

 

  1. Spatial resolution for CR:

-Upper limit that can be produced is equal to the scanning Nyquist frequency

-Due to light spread between the PSP plate and the light guide, the net spatial frequency

 actually produced is slightly less than the Nyquist frequency

-The upper limitations set by the Nyquist frequency and light spread can override and

 reduce the sharpness achieved by good x-ray beam geometry during the

 original projection, but …

-It is still important to maximize sharpness in the remnant beam signal reaching the IR,

 so that the final image does not end up with less sharpness than the digital

 processing factors themselves impose.

  1. CR Readers: The size of the light matrix and the pixels may both be variable, therefore …

 Spatial resolution for CR units may be variable. It depends on whether they are:

                –Fixed matrix systems

                -Fixed sampling systems

  1. Spatial Resolution:  Fixed Matrix CR Systems:

-Scan by dividing the image into the same number of pixels per row, regardless of the

 “effective FOV” created by the IR size or collimation

– This causes the sampling frequency to vary based on the size of the image receptor

-A smaller IR has shorter “rows”

-For a smaller IR, pixels must be smaller relative to the image

-Therefore, smaller IRs present improved resolution compared to larger imaging plates

  1. Spatial Resolution:  Fixed Sampling Systems:

-Achieve consistent spatial resolution by keeping the sampling pixel size the same

 regardless of the “effective FOV” created by the IR

-Most recently produced CR systems are fixed sampling systems

 

Pixel Size  =  FOV / Matrix

                                – The pixel size formula applies to fixed matrix CR systems

-It does not apply to fixed sampling CR systems
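
A minimal Python sketch contrasting the two CR behaviors (not from the presentation; the matrix size, sampling pitch, and IR widths are illustrative assumptions):

    def fixed_matrix_pixel_mm(ir_width_mm, matrix=2048):
        """Fixed-matrix CR: pixel size follows FOV / Matrix, so it shrinks with the IR."""
        return ir_width_mm / matrix

    def fixed_sampling_pixel_mm(ir_width_mm, pitch_mm=0.1):
        """Fixed-sampling CR: pixel size stays at the sampling pitch regardless of IR size."""
        return pitch_mm

    for ir in (430, 240):                           # large vs. small IR width, in mm
        print(f"IR {ir} mm: fixed-matrix pixel {fixed_matrix_pixel_mm(ir):.3f} mm, "
              f"fixed-sampling pixel {fixed_sampling_pixel_mm(ir):.3f} mm")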

 

  1. Spatial Resolution: Display Monitors

– In an LCD display monitor, each hardware pixel is composed of the intersection of two

 flat, transparent wires that conduct electricity

                                – For a particular manufacturer and model, the size of these hardware pixels is fixed

                                – Therefore, each type of display monitor has a fixed inherent spatial resolution

-that can be measured in hardware pixels per cm

-The pixel size formula does not apply

 

  1. Spatial Resolution:  The “Soft” Matrix of the Displayed Light Image

– The displayed field-of-view may be selected

-The effective field-of-view changes with magnification (zoom)

-For a given physical screen area, the greater the magnification of the image, the smaller

 the field-of-view displayed

  1. The displayed light image has its own matrix size relative to the level of magnification (zoom)

 applied

-This image matrix size is reduced as magnification (zoom) is increased

-As image pixels are enlarged, each will now occupy more than one hardware pixel on

 the monitor screen

  1. Magnification of the displayed image is only accomplished by magnifying each “soft” image pixel

-The pixel value for an original single pixel is spread out across a 4-pixel square on the

 monitor. Further magnification spreads it out over a square of nine hardware

 pixels
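
A minimal sketch of this idea using simple pixel replication in Python (illustrative only; actual workstations may interpolate differently):

    import numpy as np

    # Nearest-neighbor zoom: each "soft" image pixel is replicated over a block of
    # hardware pixels (a 2 x 2 block at 2x zoom, a 3 x 3 block at 3x zoom).
    image = np.array([[10, 20],
                      [30, 40]])

    for zoom in (2, 3):
        enlarged = np.kron(image, np.ones((zoom, zoom), dtype=image.dtype))
        print(f"{zoom}x zoom: each pixel value now covers a {zoom} x {zoom} block")
        print(enlarged)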

  1. Excessive magnification of the displayed image results in a pixelly image as spatial resolution is lost

– Since image pixel size is increasing, spatial resolution is lost and the image becomes

 “pixelly” as individual image pixels become more apparent

-Even though the spatial resolution of the hardware pixels of the monitor itself is

 consistent, here we are talking about the relative pixel size of the displayed

 image, the “image pixels”  

-Example: CT scan reconstructed at 7 mm vs. 3 mm pixels

  1. For the displayed light image, the pixel size formula applies

 

  1. Summary: Field of View, Matrix Size, and Spatial Resolution

-The inherent spatial resolution of hardware arrays is unaffected by changes in FOV, IR

 size, collimation, or “zoom”, because the physical size of any hardware pixel or

 dexel is not subject to change

-The spatial resolution of a light image from a display monitor or PSP plate is affected by

 FOV and matrix size. The pixel size formula applies in these cases            

  1. Foundational Principle:

Ultimately, it is the size of the dexels or pixels being used to acquire or display the image that

 determines sharpness (spatial resolution)

-Any effect that field-of-view or matrix size have on SR must be due to their effect on

pixel or dexel size

-“If it doesn’t change pixel size, it doesn’t affect resolution!”

 

  1. The Display Monitor, Compression and Transmission
  2. Spatial Resolution for Display Monitors:

-Typically much more limited in both resolution and dynamic range (bit depth) than the

 digital image processing system of the computer

        -Also more limited in spatial resolution than the original projection geometry of the x-ray beam

  1. For a given physical area, the larger the matrix, the smaller the pixels, and the higher the spatial

 resolution

-Class 1 monitors for the radiologist’s diagnostic station are expensive 3-5 megapixel monitors

-Class 2 workstation monitors used by other physicians can be as low as 2 megapixels

-Class 2 workstation monitors used by radiographers for screening images can be as low as 1 MP

  1. Radiographers must not judge an image to have insufficient resolution by using a class 2 monitor
  1. The Weakest Link in the Imaging Chain: The Display monitor:

A poor quality display monitor can effectively destroy the spatial resolution already

 achieved during image acquisition and processing

 

  1. Foundational Principle:

The single essential element that ultimately determines the sharpness of a digital image is the

 size of the dexels or pixels being used to acquire or display the image

 

  1. Image Compression:

-Necessary due to huge file sizes for medical images:

4-5 megabytes per image for DR and CR

  • MB each for MRI
  • MB each for CT

-Can typically add up to 150 Gigabytes per month in storage requirements
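
A rough Python sketch of the storage arithmetic (the monthly image count is an illustrative assumption; the per-image size follows the 4-5 MB range above):

    mb_per_image = 4.5
    images_per_month = 30_000                       # hypothetical department volume
    raw_gb = mb_per_image * images_per_month / 1024
    for ratio in (1, 8, 10):                        # uncompressed, then two compression ratios
        print(f"{ratio}:1 compression -> about {raw_gb / ratio:.0f} GB per month")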

  1. Lossy compression ratios (above 10:1) result in an irreversible loss of spatial resolution that is unacceptable for medical images
  2. Lossless compression ratios (less than 8:1) have been deemed “visually acceptable” by radiologists
  3. DICOM Viewers: Viewing software programs that should be included on any CD, DVD or flash drive along with images sent to clinical sites

-Also can be made available on password-protected websites

-Using the viewer software to display images preserves image quality and manipulation

 features which are lost when an image is simply sent over the internet or as an

 attachment to an email

-Without a DICOM viewer, viewing monitors and software at the clinical site can severely

compromise spatial resolution

  1. Criteria for Spatial Resolution in Digital Radiography:

 Maximum resolution should be apparent in all digital images:

    1. At least 8 LP/mm for static images
    2. At least 6 LP/mm for digital fluoroscopy
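
Rearranging the earlier formula, p = 1/(2 × SR): resolving 8 LP/mm requires a pixel size no larger than about 1/(2 × 8) ≈ 0.06 mm, and 6 LP/mm requires about 1/(2 × 6) ≈ 0.08 mm.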

 

Part B: The Latent Image

  1. Problem #1: Extra care is needed when we turn to physicists to clarify terms in a field of clinical application

 

  1. Resolution at the Microscopic Level:
  2. Physicists typically use the terms contrast resolution and spatial resolution in a very different

 context than radiographers do

  1. Radiographers are primarily concerned with the readily apparent clinical aspects of the gross

 image, such as a bone

  1. Physicists tend to focus on individual microscopic details in the image, often too small for the

 human eye, such as a single dot or line

  1. The change in scale makes all the difference:

(Example: Quantum physics for the very small, cosmological physics for the very large)

 

  1. Scientific principles apply only on certain scales, from the very small to the very large.

At the smallest scales of an image, where details are so fine that they approach the resolving

limits of the human eye, the distinctions between sharpness and contrast become blurred,

to the point where the only relevant question is whether each detail was resolved.

-For MTF, the effects of penumbra round off the corners of the resulting exposure trace

                                -This degree of penumbra (rounding off) remains constant no matter how small the lines

 themselves become or how close together they are

                                -This is because penumbra is a function of the imaging system, not the line-pair

 template

-When the lines get close enough to each other, their penumbras will begin to overlap

-The exposure trace takes on the shape of a sine-wave

                                -In B, the depth of the sine wave formed remains at 1 cm as it is in A

-But in C, when the lines get too close, overlapping penumbras cause a loss in depth

 (now at 0.8 cm)

                                -This represents a loss in subject contrast at a microscopic level

-At these very smallest levels, the only question becomes “Can you resolve the little dot (or line)

as separate from the background and from other dots (or lines)?”

-For this, resolution templates are used, not anatomy
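
A minimal Python sketch of this loss of modulation (not from the presentation; the blur width and line widths are illustrative): a square-wave line-pair pattern is convolved with a fixed triangular blur standing in for penumbra, and the peak-to-trough depth of the trace falls as the lines get narrower, even though the blur itself never changes.

    import numpy as np

    def modulation_after_blur(line_width_mm, blur_mm=0.2, px=0.01):
        """Peak-to-trough depth of a blurred line-pair trace."""
        x = np.arange(0, 20 * line_width_mm, px)            # 10 line pairs
        pattern = ((x // line_width_mm) % 2).astype(float)   # 0/1 square wave
        n = max(int(blur_mm / px), 1)
        blur = np.convolve(np.ones(n), np.ones(n))           # triangular blur kernel
        blur /= blur.sum()
        trace = np.convolve(pattern, blur, mode="same")
        mid = trace[len(trace) // 4 : 3 * len(trace) // 4]   # ignore edge effects
        return mid.max() - mid.min()

    for lw in (1.0, 0.5, 0.25, 0.125):                       # line width in mm
        print(f"{lw:5.3f} mm lines -> modulation depth {modulation_after_blur(lw):.2f}")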

 

  1. Questions the radiographer might ask about a bone image (Is it gray or black? Is it full of mottle

 or other noise? Is it distorted in shape?) become irrelevant to a single dot – Either you

 can make out the dot visually or you can’t: This is how physicists evaluate an image

  1. At this microscopic level, a single dot has only 2 aspects: Contrast Resolution and Spatial

 Resolution

  1. On an exposure trace diagram, these are equivalent to the vertical dimension (contrast) and the

 horizontal dimension (i.e., “point-spread function”) of an image detail

  1. When two blurry details are close together, their penumbras can overlap, making it more

 difficult to distinguish them as separate objects. Poor contrast produces the same

 result.

  1. Resolution template images show that overall resolution can be lost by either:
    1. Blurred edges resulting in poor sharpness even though contrast is high, or
    2. Poor contrast even though sharpness is high
  2. Radiographers should be familiar with these two physics concepts, because they are the

 standards by which different types of radiographic equipment can be compared in a

 measurable way

  1. (However, for radiographers, the concepts of spatial resolution and contrast resolution are often

 used in a different context, as part of six qualities for the gross clinical image,

 equivalent to contrast and sharpness)

 

  1. Demo: Physicists use vertical dimension of a detail to measure its “contrast”, horizontal

dimension to measure “sharpness”

 

                                                                                       Overall Resolution

                                                                                ↙                                            ↘

                                                Contrast Resolution                                       Spatial Resolution

                                (Vertical dimension on trace diagram)     (Horizontal dimension on trace diagram)

                                                                               

 

  1. Overall Resolution: Ability to distinguish two adjacent details as being separate and distinct

                -To do this, we must be able to distinguish:

  1. A “background” space between the two details
  2. The two details from each other

 

                                                                                       Overall Resolution

                                                                        “CAN YOU MAKE OUT THE DOT …

                                                                                ↙                                            ↘

                                                Contrast Resolution                                        Spatial Resolution

                                  … AGAINST THE BACKGROUND    …    AND SEPARATE FROM OTHER DOTS?”

 

                               

  1. Definitions: Physicists, ARRT, and Us
  2. At the microscopic level, the image has only 2 aspects (evaluated with a resolution template)

For the radiographer, the image has 6 aspects (evaluated with real anatomy)

  1. Note that the physicists’ concepts of contrast resolution and spatial resolution best correlate

 with the radiographers’ concepts of overall visibility and recognizability below,

 not with the six qualities of the gross image

  1. Physicists:          Visibility = Contrast Resolution

Recognizability = Spatial Resolution

 

  1. Whereas overall resolution, contrast resolution and spatial resolution can apply to either a dot or a bone image, the six qualities of the gross image (brightness, contrast, noise, sharpness, magnification, distortion) apply ONLY to the bone image

 

 

  1. ARRT definition: Spatial Resolution: The sharpness of the structural edges recorded in the image

Def. Sharpness: The abruptness with which the edges of an image detail “stop”

                -Implies moving through space, hence spatial resolution

-i.e., How quickly the edge of a detail transitions from a light (foreground)

density to a dark (background) density;

-Quick transition = “sharp image”, gradual transition = “unsharp image”

  1. Experiment with definitions: Ask yourself how a child would adapt to the terms:

-Ask a 6-year-old what a “magnifying glass” is, then what a “size distortion” glass is

-Ask a 10-year-old, “Is this image sharp or blurry?” vs. “Is this image spatially resolved?”

  1. ARRT definition: Spatial Resolution: The sharpness of the structural edges recorded in the image

-While writing the textbook, constantly forced to revert to “sharpness”

-I taught “sharpness” throughout program, used Seminar to introduce “Recorded Detail”

-Now I recommend teaching “sharpness” throughout program, use Seminar to explain

 that ARRT uses “spatial resolution” for “sharpness”

  1. Demo: Physicists using the term “sharpness”
  2. The categorization problem with using “spatial resolution” for “sharpness”:
  3. My Recommendation: SEPARATE radiographer image qualities from physicist qualities:

-Teach radiographer image qualities early in program

-Save physicist qualities for a later time, (e.g., under “Image Analysis”)

 

                                               

 

 

  1. Maximum Visibility: Optimum brightness, balanced gray scale, minimum noise

(Noise: Anything that obscures visibility of details)

  1. Maximum Recognizability: Maximum spatial resolution, minimum magnification and distortion

 

  1. Blur, Scatter, and Grids:
  2. Scatter radiation can reduce the contrast at the edges of an image detail, thus reducing their visibility, but is unrelated to the formation of penumbra at the edges of the image

-Scatter radiation is NOT related to spatial resolution

  1. Scatter cannot affect spatial resolution because it is not part of the projection geometry of penumbra formation

-Example: Penumbra diagram showing penumbra measuring mm, unchanged by the presence of a nearby scattering object (B)

 

 

  1. The effects of scatter and blur are often confused, but they are separate in their origin, nature and effects:

Scatter                                    Blur

Completely random                          Geometrically predictable

Affects image visibility                   Affects image recognizability

Affects general area                       Affects only detail edges

Emanates from patient                      Emanates from x-ray tube (focal spot)

 

  1. Scatter affects all 3 visibility functions in the latent image reaching the IR (exposure, contrast, and noise), but NONE of the geometrical functions (sharpness, magnification, distortion).
  2. Since scatter is not related to sharpness (spatial resolution) …

Grids are not related to sharpness (spatial resolution) either

-Except indirectly insofar as using the Bucky creates a small OID between the anatomy

 and the IR

  1. What does determine sharpness (spatial resolution) in the latent image is:
  2. Geometrical (projection) penumbra
  3. Absorption penumbra
  4. Motion penumbra

 

  1. Analysis: “Exposure Diagrams” and Sharpness
  2. Exposure trace diagrams help visualize these effects: Thickness of trace = exposure

For a particular detail:           

-Depth of trace = contrast of detail

-Width of slope at detail edges = penumbra: The steeper the slope, the sharper the edge

  1. The Exposure Trace Diagram:

– The shaded area at the bottom represents the level of exposure received at the IR:

-The thicker it is, the greater the exposure

-(With film-based radiography, this was literally the thickness of silver deposit on the

 film)

  1. For a particular detail, the trace represents its general contrast as the depth to which the exposure

 drops at the center of the detail

  1. Penumbra is represented as the spread or width of the slope at the edge of the detail – the

 steeper the slope, the sharper the image

  1. Geometrical Penumbra:

-Has been defined as the partial x-ray absorption that the object is capable of, but this assumes

 an object of uniform thickness

-The real shapes of most objects vary in thickness, also causing partial absorption of x-rays

  1. Absorption Penumbra:

– When an object’s thickness tapers at its edges, the visible effects of partial absorption

 can then be indistinguishable from geometrical penumbra – This is called

 absorption penumbra

-Absorption penumbra is caused by the change in the projected thickness of an object

 toward its edges

  1. Total Penumbra:

– This exposure trace diagram shows how absorption penumbra forms the inner portion of the total penumbra, while geometrical penumbra forms the outer portion, combining to form the total penumbra

  1. Motion Penumbra:

-Motion “slides” the penumbra in the horizontal plane, effectively stretching it out,

 lengthening edge spread

                                – Extreme motion can result in shallowing of the central depth of the image detail,

 destroying contrast

  1. Limiting Factor for Sharpness:

-For the latent image (and conventional radiography), usually the focal spot

-For digital capture, processing, and display, usually the display monitor pixel size

Whichever is larger becomes the limiting factor for the final image
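
A minimal Python sketch of this “weakest link” idea (not from the presentation; the component sizes are illustrative assumptions): the component with the largest effective blur or pixel size sets the limit, via SR = 1/(2p).

    def limiting_resolution(sizes_mm):
        """Return the limiting component and its SR in LP/mm, using SR = 1/(2p)."""
        worst = max(sizes_mm, key=sizes_mm.get)     # largest element dominates
        return worst, 1.0 / (2.0 * sizes_mm[worst])

    components = {"focal spot blur": 0.3, "DR dexel": 0.15, "monitor pixel": 0.2}
    name, sr = limiting_resolution(components)
    print(f"Limiting factor: {name}, about {sr:.1f} LP/mm")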

-Thank you for attending!