Quart#

New in version 3.2.

Overview#

The Quart module provides routines for automatically analyzing DICOM images of the Quart DVT phantom typically used with the Halcyon linac system. It can load a folder or zip file of images, correcting for translational and rotational offsets.

Warning

These algorithms have only a limited amount of testing data and results should be scrutinized. Further, the algorithm is more likely to change in the future when a more robust test suite is built up. If you’d like to submit data, enter it here.

Typical Use#

The Quart phantom analysis follows the same load/analyze/output pattern as the rest of the library. Unlike the CatPhan analysis, customization is not a goal, as the phantom and analyses are much better defined; i.e., there is less of a use case for custom phantoms in this scenario.

To use the Quart analysis, import the class:

from pylinac import QuartDVT
from pylinac.quart import QuartDVT  # equivalent import

And then load, analyze, and view the results:

  • Load images – Loading can be done with a directory or zip file:

    quart_folder = r"C:/CT/Quart/Sept 2021"
    quart = QuartDVT(quart_folder)
    

    or load from zip:

    quart_folder = (
        r"C:/CT/Quart/Sept 2021.zip"  # this contains all the DICOM files of the scan
    )
    quart = QuartDVT.from_zip(quart_folder)
    
  • Analyze – Analyze the dataset:

    quart.analyze()
    
  • View the results – Reviewing the results can be done in text or dict format as well as images:

    # print text to the console
    print(quart.results())
    # view analyzed image summary
    quart.plot_analyzed_image()
    # view images independently
    quart.plot_images()
    # save the images
    quart.save_images()
    # finally, save a PDF
    quart.publish_pdf("myquart.pdf")
    

Hypersight#

New in version 3.17.

The Hypersight variant of the Quart phantom includes a water ROI in the HU module. A sister class, HypersightQuartDVT, can be used to analyze this phantom; it includes an additional ROI analysis of the water bubble.

The class can be used interchangeably with the normal class throughout this documentation.
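
For example, a minimal sketch (the file path is hypothetical):

from pylinac.quart import HypersightQuartDVT

quart = HypersightQuartDVT.from_zip(r"C:/CT/Quart/Hypersight Oct 2023.zip")
quart.analyze()
print(quart.results())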

Advanced Use#

Using results_data#

Using the Quart module in your own scripts? While the analysis results can be printed out, if you intend to use them elsewhere (e.g. in an API), they are most easily accessed via the results_data() method, which returns a QuartDVTResult instance.

Continuing from above:

data = quart.results_data()
data.hu_module.roi_radius_mm
# and more

# return as a dict
data_dict = quart.results_data(as_dict=True)
data_dict["hu_module"]["roi_radius_mm"]
...

Algorithm#

The Quart algorithm is nearly the same as the CBCT Algorithm. The image loading and localization use the same type of logic.

High-Resolution#

For high-resolution resolvability, the Quart manual describes an equation for calculating the MTF from the line-spread function (LSF) of the phantom edge. For simplicity, we follow the Varian Halcyon IPA document, which outlines similar logic with specific measurements of the -700 -> -200 HU distance along a vertical and a horizontal profile.

Within pylinac, to reduce the number of input parameters and also match commissioning values, these are the values used. The result is the distance in mm between these two HU values.

Note

The images in pylinac are “grounded”, meaning -1000 -> 0. So the actual algorithm search values are +300 HU (-700 + 1000) and +800 HU (-200 + 1000).
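
As a quick illustration of the grounding arithmetic (variable names are purely illustrative):

grounding_offset = 1000  # pylinac shifts images so that -1000 HU maps to 0
edge_hu = (-700, -200)  # HU window described in the Halcyon IPA document
search_values = [hu + grounding_offset for hu in edge_hu]
print(search_values)  # [300, 800]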

CNR/SNR#

While normally the contrast algorithm is chosen by the user, for the Quart phantom it is hardcoded based on the equations in the manual. Specifically, contrast to noise is defined as:

\[\frac{|Polystyrene - Acrylic|}{\sigma_{Acrylic}}\]

where Polystyrene and Acrylic are the median pixel values of the respective ROIs and \(\sigma_{Acrylic}\) is the standard deviation of the Acrylic ROI pixel values. Polystyrene was given as a possible recommendation in the Quart user manual. Acrylic is the base material of the phantom, i.e. the background.

Note

The numerator is an absolute value.

The signal to noise is defined as:

\[\frac{Polystyrene + 1000}{\sigma_{Polystyrene}}\]

where \(\sigma\) is the standard deviation of the Polystyrene ROI pixel values. The poly ROI was chosen by us to match the selection for the CNR equation.
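
As a rough sketch of these two equations (poly and acrylic stand for arrays of the respective ROI pixel values; pylinac computes these internally from the detected ROIs):

import numpy as np


def quart_cnr(poly: np.ndarray, acrylic: np.ndarray) -> float:
    # |Polystyrene - Acrylic| over the noise of the Acrylic background
    return abs(np.median(poly) - np.median(acrylic)) / np.std(acrylic)


def quart_snr(poly: np.ndarray) -> float:
    # (Polystyrene + 1000) over the standard deviation of the Polystyrene ROI
    return (np.median(poly) + 1000) / np.std(poly)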

API Documentation#

class pylinac.quart.QuartDVT(folderpath: str | Sequence[str] | Path | Sequence[Path] | Sequence[BytesIO], check_uid: bool = True, memory_efficient_mode: bool = False)[source]#

Bases: CatPhanBase

A class for loading and analyzing CT DICOM files of a Quart phantom that comes with the Halcyon. Analyzes: HU Uniformity, Image Scaling & HU Linearity.

Parameters#

folderpath : str, list of strings, or Path to folder

String that points to the CBCT image folder location.

check_uid : bool

Whether to enforce raising an error if more than one UID is found in the dataset.

memory_efficient_mode : bool

Whether to use a memory efficient mode. If True, the DICOM stack will be loaded on demand rather than all at once. This will reduce the memory footprint but will be slower by ~25%. Default is False.

Raises#

NotADirectoryError

If folder str passed is not a valid directory.

FileNotFoundError

If no CT images are found in the folder

hu_module_class#

alias of QuartHUModule

uniformity_module_class#

alias of QuartUniformityModule

geometry_module_class#

alias of QuartGeometryModule

static run_demo(show: bool = True)[source]#

Run the Quart algorithm with a head dataset.

analyze(hu_tolerance: int | float = 40, scaling_tolerance: int | float = 1, thickness_tolerance: int | float = 0.2, cnr_threshold: int | float = 5)[source]#

Single-method full analysis of CBCT DICOM files.

Parameters#

hu_tolerance : int

The HU tolerance value for both HU uniformity and linearity.

scaling_tolerance : float, int

The scaling tolerance in mm of the geometric nodes on the HU linearity slice (CTP404 module).

thickness_tolerance : float, int

The tolerance of the thickness calculation in mm, based on the wire ramps in the CTP404 module.

Warning

Thickness accuracy degrades with image noise; i.e. low mAs images are less accurate.

low_contrast_tolerance : int

The number of low-contrast bubbles needed to be “seen” to pass.

cnr_threshold : float, int

The threshold for “detecting” the low-contrast image. See RTD for calculation info.

Deprecated since version 3.0: Use visibility parameter instead.

zip_after : bool

If the CT images were not compressed before analysis and this is set to true, pylinac will compress the analyzed images into a ZIP archive.

contrast_method

The contrast equation to use. See Low contrast.

visibility_threshold

The threshold for detecting low-contrast ROIs. Use instead of cnr_threshold. Follows the Rose equation. See Visibility.

thickness_slice_straddle

The number of extra slices on each side of the HU module slice to use for slice thickness determination. The rationale is that for thin slices the ramp FWHM can be very noisy. I.e. a 1mm slice might have a 100% variation with a low-mAs protocol. To account for this, slice thicknesses < 3.5mm have 1 slice added on either side of the HU module (so 3 total slices) and then averaged. The default is ‘auto’, which follows the above logic. Set to an integer to explicitly use a certain amount of padding. Typical values are 0, 1, and 2.

Warning

This is the padding on either side. So a value of 1 => 3 slices, 2 => 5 slices, 3 => 7 slices, etc.

expected_hu_values

An optional dictionary of the expected HU values for the HU linearity module. The keys are the ROI names and the values are the expected HU values. If a key is not present or the parameter is None, the default values will be used.
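
For example, a call passing the documented tolerances explicitly (the values shown are the defaults):

quart.analyze(
    hu_tolerance=40,
    scaling_tolerance=1,
    thickness_tolerance=0.2,
    cnr_threshold=5,
)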

plot_analyzed_image(show: bool = True, **plt_kwargs) None[source]#

Plot the images used in the calculation and summary data.

Parameters#

show : bool

Whether to plot the image or not.

plt_kwargs : dict

Keyword args passed to the plt.figure() method. Allows one to set things like figure size.

plot_analyzed_subimage(*args, **kwargs) None[source]#

Plot a specific component of the CBCT analysis.

Parameters#

subimage : {‘hu’, ‘un’, ‘sp’, ‘lc’, ‘mtf’, ‘lin’, ‘prof’, ‘side’}

The subcomponent to plot. Values must contain one of the following letter combinations. E.g. linearity, linear, and lin will all draw the HU linearity values.

  • hu draws the HU linearity image.

  • un draws the HU uniformity image.

  • sp draws the Spatial Resolution image.

  • lc draws the Low Contrast image (if applicable).

  • mtf draws the RMTF plot.

  • lin draws the HU linearity values. Used with delta.

  • prof draws the HU uniformity profiles.

  • side draws the side view of the phantom with lines of the module locations.

delta : bool

Only for use with lin. Whether to plot the HU delta or actual values.

show : bool

Whether to actually show the plot.
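
For example, a short sketch continuing from an analyzed quart instance (‘lin’ and ‘prof’ are among the documented options):

# plot only the HU linearity values, as deltas from nominal
quart.plot_analyzed_subimage(subimage="lin", delta=True)
# plot the uniformity profiles
quart.plot_analyzed_subimage(subimage="prof")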

results(as_str: bool = True) str | tuple[str, ...][source]#

Return the results of the analysis as a string. Use with print().

results_data(as_dict: bool = False) QuartDVTResult | dict[source]#

Return results in a data structure for more programmatic use.

plot_images(show: bool = True, **plt_kwargs) dict[str, Figure][source]#

Plot all the individual images separately.

Parameters#

show

Whether to show the images.

plt_kwargs

Keywords to pass to matplotlib for figure customization.

save_images(directory: Path | str | None = None, to_stream: bool = False, **plt_kwargs) list[Path] | dict[str, BytesIO][source]#

Save separate images to disk or stream.

Parameters#

directory

The directory to write the images to. If None, will use current working directory

to_stream

Whether to write to stream or disk. If True, will return streams. Directory is ignored in that scenario.

plt_kwargs

Keywords to pass to matplotlib for figure customization.

publish_pdf(filename: str | Path, notes: str | None = None, open_file: bool = False, metadata: dict | None = None, logo: Path | str | None = None) None[source]#

Publish (print) a PDF containing the analysis and quantitative results.

Parameters#

filename : str, file-like object

The file to write the results to.

notes : str, list of strings

Text; if str, prints single line. If list of strings, each list item is printed on its own line.

open_file : bool

Whether to open the file using the default program after creation.

metadata : dict

Extra data to be passed and shown in the PDF. The key and value will be shown with a colon, e.g. passing {‘Author’: ‘James’, ‘Unit’: ‘TrueBeam’} would result in text in the PDF like:

Author: James
Unit: TrueBeam

logo: Path, str

A custom logo to use in the PDF report. If nothing is passed, the default pylinac logo is used.
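
For example (the filename and metadata values are illustrative):

quart.publish_pdf(
    "quart_october.pdf",
    notes="Monthly CBCT QA",
    metadata={"Author": "James", "Unit": "Halcyon"},
)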

property catphan_size: float#

The expected size of the phantom in pixels, based on a 20cm wide phantom.

find_origin_slice() int#

Using a brute force search of the images, find the median HU linearity slice.

This method walks through all the images and takes a collapsed circle profile where the HU linearity ROIs are. If the profile contains both low (<800) and high (>800) HU values and most values are the same (i.e. it’s not an artifact), then it can be assumed it is an HU linearity slice. The median of all applicable slices is the center of the HU slice.

Returns#

int

The middle slice of the HU linearity module.

find_phantom_axis()#

We fit all the center locations of the phantom across all slices to a 1D poly function instead of finding them individually for robustness.

Normally, each slice would be evaluated individually, but the RadMachine jig gets in the way of detecting the HU module (🤦‍♂️). To work around that in a backwards-compatible way we instead look at all the slices and if the phantom was detected, capture the phantom center. ALL the centers are then fitted to a 1D poly function and passed to the individual slices. This way, even if one slice is messed up (such as because of the phantom jig), the poly function is robust to give the real center based on all the other properly-located positions on the other slices.
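
A rough sketch of the idea, not the actual implementation (slice indices and center coordinates are illustrative):

import numpy as np

# slice indices where the phantom was successfully detected; slice 13 failed (e.g. due to the jig)
z = np.array([10, 11, 12, 14, 15])
x = np.array([255.8, 256.1, 255.9, 256.2, 256.0])  # detected center x-coordinates
y = np.array([260.2, 260.1, 260.3, 260.2, 260.4])  # detected center y-coordinates

# fit a 1D polynomial through the detected centers...
x_fit = np.poly1d(np.polyfit(z, x, deg=1))
y_fit = np.poly1d(np.polyfit(z, y, deg=1))

# ...and evaluate it on any slice, including the one that failed detection
center_on_slice_13 = (x_fit(13), y_fit(13))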

find_phantom_roll(func: Callable | None = None) float#

Determine the “roll” of the phantom.

This algorithm uses the two air bubbles in the HU slice and the resulting angle between them.

Parameters#

func

A callable to sort the air ROIs.

Returns#

float : the angle of the phantom in degrees.
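
Conceptually, the roll comes from the angle between the two detected air bubble centers. A rough sketch (coordinates are illustrative; the exact reference axis and sign convention are handled internally):

import numpy as np

left_bubble = (200.0, 150.0)  # (x, y) pixel center of one air bubble
right_bubble = (312.0, 153.0)  # (x, y) pixel center of the other

dx = right_bubble[0] - left_bubble[0]
dy = right_bubble[1] - left_bubble[1]
roll_deg = np.degrees(np.arctan2(dy, dx))  # ~1.5 degrees for these values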

classmethod from_demo_images()#

Construct a CBCT object from the demo images.

classmethod from_url(url: str, check_uid: bool = True)#

Instantiate a CBCT object from a URL pointing to a .zip object.

Parameters#

url : str

URL pointing to a zip archive of CBCT images.

check_uid : bool

Whether to enforce raising an error if more than one UID is found in the dataset.

classmethod from_zip(zip_file: str | ZipFile | BinaryIO, check_uid: bool = True, memory_efficient_mode: bool = False)#

Construct a CBCT object and pass the zip file.

Parameters#

zip_file : str, ZipFile

Path to the zip file or a ZipFile object.

check_uid : bool

Whether to enforce raising an error if more than one UID is found in the dataset.

memory_efficient_mode : bool

Whether to use a memory efficient mode. If True, the DICOM stack will be loaded on demand rather than all at once. This will reduce the memory footprint but will be slower by ~25%. Default is False.

Raises#

FileExistsError

If zip_file passed was not a legitimate zip file.

FileNotFoundError

If no CT images are found in the folder

localize() None#

Find the slice number of the catphan’s HU linearity module and roll angle

property mm_per_pixel: float#

The millimeters per pixel of the DICOM images.

property num_images: int#

The number of images loaded.

plot_side_view(axis: Axes) None#

Plot a view of the scan from the side with lines showing detected module positions

refine_origin_slice(initial_slice_num: int) int#

Apply a refinement to the origin slice. This was added to handle the catphan 604 at least due to variations in the length of the HU plugs.

save_analyzed_image(filename: str | Path | BinaryIO, **kwargs) None#

Save the analyzed summary plot.

Parameters#

filename : str, file object

The name of the file to save the image to.

kwargs :

Any valid matplotlib kwargs.

save_analyzed_subimage(filename: str | BinaryIO, subimage: str = 'hu', delta: bool = True, **kwargs) Figure | None#

Save a component image to file.

Parameters#

filename : str, file object

The file to write the image to.

subimage : str

See plot_analyzed_subimage() for parameter info.

delta : bool

Only for use with lin. Whether to plot the HU delta or actual values.

class pylinac.quart.HypersightQuartDVT(folderpath: str | Sequence[str] | Path | Sequence[Path] | Sequence[BytesIO], check_uid: bool = True, memory_efficient_mode: bool = False)[source]#

Bases: QuartDVT

A class for loading and analyzing CT DICOM files of a Quart phantom that comes with the Halcyon, specifically for the Hypersight version, which includes a water ROI. Analyzes: HU Uniformity, Image Scaling & HU Linearity.

Parameters#

folderpath : str, list of strings, or Path to folder

String that points to the CBCT image folder location.

check_uid : bool

Whether to enforce raising an error if more than one UID is found in the dataset.

memory_efficient_mode : bool

Whether to use a memory efficient mode. If True, the DICOM stack will be loaded on demand rather than all at once. This will reduce the memory footprint but will be slower by ~25%. Default is False.

Raises#

NotADirectoryError

If folder str passed is not a valid directory.

FileNotFoundError

If no CT images are found in the folder

hu_module#

alias of HypersightQuartHUModule

hu_module_class#

alias of HypersightQuartHUModule

analyze(hu_tolerance: int | float = 40, scaling_tolerance: int | float = 1, thickness_tolerance: int | float = 0.2, cnr_threshold: int | float = 5)#

Single-method full analysis of CBCT DICOM files.

Parameters#

hu_tolerance : int

The HU tolerance value for both HU uniformity and linearity.

scaling_tolerance : float, int

The scaling tolerance in mm of the geometric nodes on the HU linearity slice (CTP404 module).

thickness_tolerance : float, int

The tolerance of the thickness calculation in mm, based on the wire ramps in the CTP404 module.

Warning

Thickness accuracy degrades with image noise; i.e. low mAs images are less accurate.

low_contrast_tolerance : int

The number of low-contrast bubbles needed to be “seen” to pass.

cnr_threshold : float, int

The threshold for “detecting” the low-contrast image. See RTD for calculation info.

Deprecated since version 3.0: Use visibility parameter instead.

zip_after : bool

If the CT images were not compressed before analysis and this is set to true, pylinac will compress the analyzed images into a ZIP archive.

contrast_method

The contrast equation to use. See Low contrast.

visibility_threshold

The threshold for detecting low-contrast ROIs. Use instead of cnr_threshold. Follows the Rose equation. See Visibility.

thickness_slice_straddle

The number of extra slices on each side of the HU module slice to use for slice thickness determination. The rationale is that for thin slices the ramp FWHM can be very noisy. I.e. a 1mm slice might have a 100% variation with a low-mAs protocol. To account for this, slice thicknesses < 3.5mm have 1 slice added on either side of the HU module (so 3 total slices) and then averaged. The default is ‘auto’, which follows the above logic. Set to an integer to explicitly use a certain amount of padding. Typical values are 0, 1, and 2.

Warning

This is the padding on either side. So a value of 1 => 3 slices, 2 => 5 slices, 3 => 7 slices, etc.

expected_hu_values

An optional dictionary of the expected HU values for the HU linearity module. The keys are the ROI names and the values are the expected HU values. If a key is not present or the parameter is None, the default values will be used.

property catphan_size: float#

The expected size of the phantom in pixels, based on a 20cm wide phantom.

find_origin_slice() int#

Using a brute force search of the images, find the median HU linearity slice.

This method walks through all the images and takes a collapsed circle profile where the HU linearity ROIs are. If the profile contains both low (<800) and high (>800) HU values and most values are the same (i.e. it’s not an artifact), then it can be assumed it is an HU linearity slice. The median of all applicable slices is the center of the HU slice.

Returns#

int

The middle slice of the HU linearity module.

find_phantom_axis()#

We fit all the center locations of the phantom across all slices to a 1D poly function instead of finding them individually for robustness.

Normally, each slice would be evaluated individually, but the RadMachine jig gets in the way of detecting the HU module (🤦‍♂️). To work around that in a backwards-compatible way we instead look at all the slices and if the phantom was detected, capture the phantom center. ALL the centers are then fitted to a 1D poly function and passed to the individual slices. This way, even if one slice is messed up (such as because of the phantom jig), the poly function is robust to give the real center based on all the other properly-located positions on the other slices.

find_phantom_roll(func: Callable | None = None) float#

Determine the “roll” of the phantom.

This algorithm uses the two air bubbles in the HU slice and the resulting angle between them.

Parameters#

func

A callable to sort the air ROIs.

Returns#

float : the angle of the phantom in degrees.

classmethod from_demo_images()#

Construct a CBCT object from the demo images.

classmethod from_url(url: str, check_uid: bool = True)#

Instantiate a CBCT object from a URL pointing to a .zip object.

Parameters#

url : str

URL pointing to a zip archive of CBCT images.

check_uid : bool

Whether to enforce raising an error if more than one UID is found in the dataset.

classmethod from_zip(zip_file: str | ZipFile | BinaryIO, check_uid: bool = True, memory_efficient_mode: bool = False)#

Construct a CBCT object and pass the zip file.

Parameters#

zip_file : str, ZipFile

Path to the zip file or a ZipFile object.

check_uid : bool

Whether to enforce raising an error if more than one UID is found in the dataset.

memory_efficient_mode : bool

Whether to use a memory efficient mode. If True, the DICOM stack will be loaded on demand rather than all at once. This will reduce the memory footprint but will be slower by ~25%. Default is False.

Raises#

FileExistsError

If zip_file passed was not a legitimate zip file.

FileNotFoundError

If no CT images are found in the folder

geometry_module_class#

alias of QuartGeometryModule

localize() None#

Find the slice number of the catphan’s HU linearity module and roll angle

property mm_per_pixel: float#

The millimeters per pixel of the DICOM images.

property num_images: int#

The number of images loaded.

plot_analyzed_image(show: bool = True, **plt_kwargs) None#

Plot the images used in the calculation and summary data.

Parameters#

show : bool

Whether to plot the image or not.

plt_kwargs : dict

Keyword args passed to the plt.figure() method. Allows one to set things like figure size.

plot_analyzed_subimage(*args, **kwargs) None#

Plot a specific component of the CBCT analysis.

Parameters#

subimage : {‘hu’, ‘un’, ‘sp’, ‘lc’, ‘mtf’, ‘lin’, ‘prof’, ‘side’}

The subcomponent to plot. Values must contain one of the following letter combinations. E.g. linearity, linear, and lin will all draw the HU linearity values.

  • hu draws the HU linearity image.

  • un draws the HU uniformity image.

  • sp draws the Spatial Resolution image.

  • lc draws the Low Contrast image (if applicable).

  • mtf draws the RMTF plot.

  • lin draws the HU linearity values. Used with delta.

  • prof draws the HU uniformity profiles.

  • side draws the side view of the phantom with lines of the module locations.

delta : bool

Only for use with lin. Whether to plot the HU delta or actual values.

show : bool

Whether to actually show the plot.

plot_images(show: bool = True, **plt_kwargs) dict[str, Figure]#

Plot all the individual images separately.

Parameters#

show

Whether to show the images.

plt_kwargs

Keywords to pass to matplotlib for figure customization.

plot_side_view(axis: Axes) None#

Plot a view of the scan from the side with lines showing detected module positions

publish_pdf(filename: str | Path, notes: str | None = None, open_file: bool = False, metadata: dict | None = None, logo: Path | str | None = None) None#

Publish (print) a PDF containing the analysis and quantitative results.

Parameters#

filename : str, file-like object

The file to write the results to.

notes : str, list of strings

Text; if str, prints single line. If list of strings, each list item is printed on its own line.

open_file : bool

Whether to open the file using the default program after creation.

metadata : dict

Extra data to be passed and shown in the PDF. The key and value will be shown with a colon, e.g. passing {‘Author’: ‘James’, ‘Unit’: ‘TrueBeam’} would result in text in the PDF like:

Author: James
Unit: TrueBeam

logo: Path, str

A custom logo to use in the PDF report. If nothing is passed, the default pylinac logo is used.

refine_origin_slice(initial_slice_num: int) int#

Apply a refinement to the origin slice. This was added to handle the catphan 604 at least due to variations in the length of the HU plugs.

results(as_str: bool = True) str | tuple[str, ...]#

Return the results of the analysis as a string. Use with print().

results_data(as_dict: bool = False) QuartDVTResult | dict#

Return results in a data structure for more programmatic use.

static run_demo(show: bool = True)#

Run the Quart algorithm with a head dataset.

save_analyzed_image(filename: str | Path | BinaryIO, **kwargs) None#

Save the analyzed summary plot.

Parameters#

filename : str, file object

The name of the file to save the image to.

kwargs :

Any valid matplotlib kwargs.

save_analyzed_subimage(filename: str | BinaryIO, subimage: str = 'hu', delta: bool = True, **kwargs) Figure | None#

Save a component image to file.

Parameters#

filename : str, file object

The file to write the image to.

subimage : str

See plot_analyzed_subimage() for parameter info.

delta : bool

Only for use with lin. Whether to plot the HU delta or actual values.

save_images(directory: Path | str | None = None, to_stream: bool = False, **plt_kwargs) list[Path] | dict[str, BytesIO]#

Save separate images to disk or stream.

Parameters#

directory

The directory to write the images to. If None, will use current working directory

to_stream

Whether to write to stream or disk. If True, will return streams. Directory is ignored in that scenario.

plt_kwargs

Keywords to pass to matplotlib for figure customization.

uniformity_module_class#

alias of QuartUniformityModule

class pylinac.quart.QuartHUModule(catphan, offset: int, hu_tolerance: float, thickness_tolerance: float, scaling_tolerance: float, clear_borders: bool = True, thickness_slice_straddle: str | int = 'auto', expected_hu_values: dict[str, float | int] | None = None)[source]#

Bases: CTP404CP504

Parameters#

catphan : ~pylinac.cbct.CatPhanBase instance
offset : int
hu_tolerance : float
thickness_tolerance : float
scaling_tolerance : float
clear_borders : bool

property meas_slice_thickness: float#

The average slice thickness for the 4 wire measurements in mm.

property signal_to_noise: float#

Calculate the SNR based on the suggested procedure in the manual: SNR = (HU + 1000) / sigma, where HU is the mean HU of a chosen insert and sigma is the stdev of the HU insert. We choose to use the Polystyrene as the target HU insert

property contrast_to_noise: float#

Calculate the CNR based on the suggested procedure in the manual: CNR = abs(HU_target - HU_background) / sigma, where HU_target is the mean HU of a chosen insert, HU_background is the mean HU of the background insert and sigma is the stdev of the HU background. We choose to use the Polystyrene as the target HU insert and Acrylic (base phantom material) as the background

is_phantom_in_view() bool#

Whether the phantom appears to be within the slice.

property lcv: float#

The low-contrast visibility

property passed_geometry: bool#

Returns whether all the line lengths were within tolerance.

property passed_hu: bool#

Boolean specifying whether all the ROIs passed within tolerance.

property passed_thickness: bool#

Whether the slice thickness was within tolerance from nominal.

property phan_center: Point#

Determine the location of the center of the phantom.

property phantom_roi: RegionProperties#

Get the Scikit-Image ROI of the phantom

The image is analyzed to see if: 1) the CatPhan is even in the image (if there were any ROIs detected) 2) an ROI is within the size criteria of the catphan 3) the ROI area that is filled compared to the bounding box area is close to that of a circle

plot(axis: Axes)#

Plot the image along with ROIs to an axis

plot_linearity(axis: Axes | None = None, plot_delta: bool = True) tuple#

Plot the HU linearity values to an axis.

Parameters#

axis : None, matplotlib.Axes

The axis to plot the values on. If None, will create a new figure.

plot_delta : bool

Whether to plot the actual measured HU values (False), or the difference from nominal (True).

plot_rois(axis: Axes) None#

Plot the ROIs onto the image, as well as the background ROIs

preprocess(catphan) None#

A preprocessing step before analyzing the CTP module.

Parameters#

catphan : ~pylinac.cbct.CatPhanBase instance.

property slice_num: int#

The slice number of the spatial resolution module.

Returns#

float

class pylinac.quart.QuartUniformityModule(catphan, tolerance: float | None = None, offset: int = 0, clear_borders: bool = True)[source]#

Bases: CTP486

Class for analysis of the Uniformity slice of the CTP module. Measures 5 ROIs around the slice that should all be close to the same value.

Parameters#

catphan : CatPhanBase instance

The catphan instance.

slice_num : int

The slice number of the DICOM array desired. If None, will use the slice_num property of subclass.

combine : bool

If True, combines the slices +/- num_slices around the slice of interest to improve signal/noise.

combine_method : {‘mean’, ‘max’}

How to combine the slices if combine is True.

num_slices : int

The number of slices on either side of the nominal slice to combine to improve signal/noise; only applicable if combine is True.

clear_borders : bool

If True, clears the borders of the image to remove any ROIs that may be present.

original_image : Image or None

The array of the slice. This is a bolt-on parameter for optimization. Leaving as None is fine, but can increase analysis speed if 1) this image is passed and 2) there is no combination of slices happening, which is most of the time.

property avg_noise_power: float#

The average noise power of the uniformity ROI.

property integral_non_uniformity: float#

The Integral Non-Uniformity. Elstrom et al equation 1. https://www.tandfonline.com/doi/pdf/10.3109/0284186X.2011.590525

is_phantom_in_view() bool#

Whether the phantom appears to be within the slice.

property max_noise_power_frequency: float#

The frequency of the maximum noise power. 0 means no pattern.

property overall_passed: bool#

Boolean specifying whether all the ROIs passed within tolerance.

property phan_center: Point#

Determine the location of the center of the phantom.

property phantom_roi: RegionProperties#

Get the Scikit-Image ROI of the phantom

The image is analyzed to see if: 1) the CatPhan is even in the image (if there were any ROIs detected) 2) an ROI is within the size criteria of the catphan 3) the ROI area that is filled compared to the bounding box area is close to that of a circle

plot(axis: Axes)#

Plot the ROIs but also the noise power spectrum ROIs

plot_profiles(axis: Axes | None = None) None#

Plot the horizontal and vertical profiles of the Uniformity slice.

Parameters#

axis : None, matplotlib.Axes

The axis to plot on; if None, will create a new figure.

plot_rois(axis: Axes) None#

Plot the ROIs to the axis.

property power_spectrum_1d: ndarray#

The 1D power spectrum of the uniformity ROI.

property power_spectrum_2d: ndarray#

The power spectrum of the uniformity ROI.

preprocess(catphan)#

A preprocessing step before analyzing the CTP module.

Parameters#

catphan : ~pylinac.cbct.CatPhanBase instance.

property slice_num: int#

The slice number of the spatial resolution module.

Returns#

float

property uniformity_index: float#

The Uniformity Index. Elstrom et al equation 2. https://www.tandfonline.com/doi/pdf/10.3109/0284186X.2011.590525

class pylinac.quart.QuartGeometryModule(catphan, tolerance: float | None = None, offset: int = 0, clear_borders: bool = True)[source]#

Bases: CatPhanModule

Class for analysis of the geometry/high-contrast slice of the Quart phantom. Measures the phantom size along horizontal and vertical lines as well as the high-contrast edge distances.

Parameters#

catphan : CatPhanBase instance

The catphan instance.

slice_num : int

The slice number of the DICOM array desired. If None, will use the slice_num property of subclass.

combine : bool

If True, combines the slices +/- num_slices around the slice of interest to improve signal/noise.

combine_method : {‘mean’, ‘max’}

How to combine the slices if combine is True.

num_slices : int

The number of slices on either side of the nominal slice to combine to improve signal/noise; only applicable if combine is True.

clear_borders : bool

If True, clears the borders of the image to remove any ROIs that may be present.

original_image : Image or None

The array of the slice. This is a bolt-on parameter for optimization. Leaving as None is fine, but can increase analysis speed if 1) this image is passed and 2) there is no combination of slices happening, which is most of the time.

plot_rois(axis: Axes)[source]#

Plot the ROIs to the axis.

distances() dict[str, float][source]#

The measurements of the phantom size for the two lines in mm

high_contrast_resolutions() dict[source]#

The distance in mm from the -700 HU index to the -200 HU index.

This calculates the distance on each edge of the horizontal and vertical geometric profiles for a total of 4 measurements. The result is the average of the 4 values. The DICOM data is already HU-corrected so -1000 => 0. This means we will search for 300 HU (-700 + 1000) and 800 HU (-200 + 1000) respectively.

This cuts the profile in half, searches for the highest-gradient index (where the phantom edge is), then further cuts it down to +/-10 pixels. The 300/800 HU positions are then found from linear interpolation. It was found that artifacts in the image could drastically influence these values, hence the +/-10 subset.

Assumptions:

  • The phantom does not cross the halfway point of the image FOV (i.e. not offset by an obscene amount).

  • 10 pixels about the phantom edge is adequate to capture the full dropoff.

  • 300 and 800 HU values will be in the profile.
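
A simplified sketch of a single edge measurement under these assumptions (this is not the internal routine; profile_hu is a 1D grounded-HU profile crossing one phantom edge):

import numpy as np


def edge_distance_mm(profile_hu: np.ndarray, mm_per_pixel: float) -> float:
    # the steepest point of the profile is taken as the phantom edge
    edge_idx = int(np.argmax(np.abs(np.gradient(profile_hu))))
    # restrict to +/-10 pixels about the edge so artifacts elsewhere don't interfere
    window = profile_hu[max(edge_idx - 10, 0) : edge_idx + 10]
    positions = np.arange(len(window))
    # np.interp needs increasing x-values, so sort the window by HU first
    order = np.argsort(window)
    pos_300, pos_800 = np.interp([300, 800], window[order], positions[order])
    return abs(pos_800 - pos_300) * mm_per_pixel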

mean_high_contrast_resolution() float[source]#

Mean high-contrast resolution

is_phantom_in_view() bool#

Whether the phantom appears to be within the slice.

property phan_center: Point#

Determine the location of the center of the phantom.

property phantom_roi: RegionProperties#

Get the Scikit-Image ROI of the phantom

The image is analyzed to see if: 1) the CatPhan is even in the image (if there were any ROIs detected) 2) an ROI is within the size criteria of the catphan 3) the ROI area that is filled compared to the bounding box area is close to that of a circle

plot(axis: Axes)#

Plot the image along with ROIs to an axis

preprocess(catphan)#

A preprocessing step before analyzing the CTP module.

Parameters#

catphan : ~pylinac.cbct.CatPhanBase instance.

roi_dist_mm#

alias of float

roi_radius_mm#

alias of float

property slice_num: int#

The slice number of the spatial resolution module.

Returns#

float

class pylinac.quart.QuartDVTResult(phantom_model: str, phantom_roll_deg: float, origin_slice: int, num_images: int, hu_module: QuartHUModuleOutput, uniformity_module: QuartUniformityModuleOutput, geometric_module: QuartGeometryModuleOutput)[source]#

Bases: ResultBase

This class should not be called directly. It is returned by the results_data() method. It is a dataclass under the hood and thus comes with all the dunder magic.

Use the following attributes as normal class attributes.

phantom_model: str#
phantom_roll_deg: float#
origin_slice: int#
num_images: int#
hu_module: QuartHUModuleOutput#
uniformity_module: QuartUniformityModuleOutput#
geometric_module: QuartGeometryModuleOutput#
class pylinac.quart.QuartHUModuleOutput(offset: int, roi_settings: dict, rois: dict, measured_slice_thickness_mm: float, signal_to_noise: float, contrast_to_noise: float)[source]#

Bases: object

This class should not be called directly. It is returned by the results_data() method. It is a dataclass under the hood and thus comes with all the dunder magic.

Use the following attributes as normal class attributes.

class pylinac.quart.QuartUniformityModuleOutput(offset: int, roi_settings: dict, rois: dict, passed: bool)[source]#

Bases: object

This class should not be called directly. It is returned by the results_data() method. It is a dataclass under the hood and thus comes with all the dunder magic.

Use the following attributes as normal class attributes.

class pylinac.quart.QuartGeometryModuleOutput(offset: int, roi_settings: dict, rois: dict, distances: dict, high_contrast_distances: dict, mean_high_contrast_distance: float)[source]#

Bases: object

This class should not be called directly. It is returned by the results_data() method. It is a dataclass under the hood and thus comes with all the dunder magic.

Use the following attributes as normal class attributes.