# Log Analyzer module documentation

## Overview

The log analyzer module reads and parses Varian linear accelerator machine logs, both Dynalogs and Trajectory logs. The module also calculates actual and expected fluences and performs gamma evaluations. Data is structured to be easily accessible and easily plottable.

Unlike most other modules of pylinac, the log analyzer module has no end goal. Data is parsed from the logs, but what is done with that information, and which information is analyzed, is up to the user.

Features:

• Analyze Dynalogs or Trajectory logs - Either platform is supported. Tlog versions 2.1 and 3.0 are supported.
• Read in both .bin and .txt Trajectory log files - Read in the machine data from both .bin and .txt files to get all the information recorded. See the txt attribute.
• Save Trajectory log data to CSV - The Trajectory log binary data format does not allow for easy export of data. Pylinac lets you export the log to CSV so you can analyze it in Excel or other software, just as you would with Dynalogs.
• Plot or analyze any axis - Every data axis (e.g. gantry, y1, beam holds, MLC leaves) can be accessed and plotted: the actual, expected, and even the difference.
• Calculate fluences and gamma - Besides reading in the MLC positions, pylinac calculates the actual and expected fluence as well as the gamma map; DTA and threshold values are adjustable.
• Anonymize logs - Both dynalogs and trajectory logs can be “anonymized” by removing the Patient ID from the filename(s) and file data.

## Concepts

Because the log_analyzer module functions without an end goal, the data has been formatted for easy exploration. However, there are a few concepts that should be grasped before diving in.

• Log Sections - Upon log parsing, all data is placed into data structures. Varian has designated 4 sections for Trajectory logs: Header, Axis Data, Subbeams, and CRC. The Subbeams are only applicable for auto-sequenced beams and all v3.0 logs, and the CRC is specific to the Trajectory log. The Header and Axis Data however, are common to both Trajectory logs and Dynalogs.

Note

Dynalogs do not have explicit sections like the Trajectory logs, but pylinac formats them to have these two data structures for consistency.

• Leaf Indexing & Positions - Varian leaf identification is 1-index based, in contrast to Python's 0-based indexing. Thus, indexing the first MLC leaf would be [1], not [0].

Warning

When slicing or analyzing leaf data, keep the Varian 1-index base in mind.

Leaf data is stored in a dictionary, with the leaf number as the key, from 1 up to the number of MLC leaves. E.g. if the machine has a Millennium 120 standard MLC model, leaf data will have 120 dictionary items from 1 to 120. Leaf numbers have an offset of half the number of leaves. I.e. leaves 1 and 61 are a pair, as are 2 and 62, on up to leaves 60 and 120. In such a case, leaves 1-60 correspond to the A-bank, while leaves 61-120 correspond to the B-bank. This can be described by the function $$(A_{leaf}, B_{leaf}) = (n, n + N_{leaves}/2)$$, where $$n$$ is the A-bank leaf number and $$N_{leaves}$$ is the number of leaves.
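For illustration, the half-leaf-count offset pairing can be written as a tiny function (a hypothetical helper, not part of pylinac):

```python
# Hypothetical helper (not part of pylinac) illustrating the leaf-pairing
# convention: a leaf pairs with the leaf offset by half the leaf count,
# using Varian's 1-based leaf numbering.
def opposing_leaf(n: int, num_leaves: int = 120) -> int:
    """Return the partner of 1-indexed leaf ``n`` on a ``num_leaves``-leaf MLC."""
    if not 1 <= n <= num_leaves:
        raise ValueError("Leaf numbers are 1-indexed: 1 <= n <= num_leaves")
    half = num_leaves // 2
    return n + half if n <= half else n - half
```

E.g. on a Millennium 120, leaf 1 pairs with leaf 61, and leaf 60 with leaf 120.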

• Units - Units follow the Trajectory log specification: linear axes are in cm, rotational axes in degrees, and MU for dose.

Note

Dynalog files are inherently in mm for linear axes, tenths of degrees for rotational axes, and MLC positions are not at isoplane. For consistency, Dynalog values are converted to Trajectory log specs, meaning linear axes (both collimator and MLCs) are in cm at isoplane, and rotational axes are in degrees. Dynalog MU is always from 0 to 25000 no matter the delivered MU (i.e. it's relative), unless it was a VMAT delivery, in which case the MU is actually the gantry position.
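The unit conversions described in the note are straightforward; the helper names below are made up for illustration (pylinac applies these conversions internally during parsing):

```python
# Illustrative-only unit conversions (pylinac performs these internally
# when parsing Dynalogs; these helper names are made up):
def tenths_of_degree_to_degrees(value: float) -> float:
    """Convert a Dynalog rotational value (tenths of degrees) to degrees."""
    return value / 10.0

def mm_to_cm(value: float) -> float:
    """Convert a Dynalog linear value (mm) to cm."""
    return value / 10.0
```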

• All data Axes are similar - Log files capture machine data in “control cycles”, aka “snapshots” or “heartbeats”. Let’s assume a log has captured 100 control cycles. Axis data that was captured will all be similar (e.g. gantry, collimator, jaws). They will all have an actual and sometimes an expected value for each cycle. Pylinac formats these as 1D numpy arrays along with a difference array if applicable. Each of these arrays can be quickly plotted for visual analysis. See Axis for more info.
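The Axis concept can be sketched with a minimal stand-in class (illustrative only; pylinac's real Axis stores numpy arrays and adds plotting methods):

```python
# Minimal sketch of the Axis concept (illustrative; not pylinac's class):
# one actual value per control cycle, optionally an expected value,
# and a difference derived from the two.
class SimpleAxis:
    def __init__(self, actual, expected=None):
        self.actual = actual          # one value per control cycle
        self.expected = expected      # absent for some Dynalog axes

    @property
    def difference(self):
        """Actual minus expected, element-wise."""
        if self.expected is None:
            raise ValueError("No expected values were recorded for this axis")
        return [a - e for a, e in zip(self.actual, self.expected)]
```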

## Running the Demos

As usual, the module comes with demo files and methods:

from pylinac import Dynalog
Dynalog.run_demo()


Which will output the following:

Results of file: C:\Users\James\Dropbox\Programming\Python\Projects\pylinac\pylinac\demo_files\AQA.dlg
Average RMS of all leaves: 0.037 cm
Max RMS error of all leaves: 0.076 cm
95th percentile error: 0.088 cm
Number of beam holdoffs: 20
Gamma pass %: 18.65
Gamma average: 0.468


Your file location will be different, but the values should be the same. The same can be done using the demo Trajectory log:

from pylinac import TrajectoryLog
TrajectoryLog.run_demo()


Which will give:

Results of file: C:\Users\James\Dropbox\Programming\Python\Projects\pylinac\pylinac\demo_files\Tlog.bin
Average RMS of all leaves: 0.001 cm
Max RMS error of all leaves: 0.002 cm
95th percentile error: 0.002 cm
Number of beam holdoffs: 19
Gamma pass %: 100.00
Gamma average: 0.002


Note that you can also save data in a PDF report:

tlog = ...
tlog.publish_pdf('mytlog.pdf')


Logs can be loaded in two ways. The first is through the main helper function load_log; if you've used pylinac versions prior to 1.6, note that this helper function is new and can replace MachineLog and MachineLogs, depending on the context. The second is to load directly through the classes, shown further below. Using the helper function:

from pylinac import load_log

log_path = "C:/path/to/tlog.bin"
log = load_log(log_path)


In addition, a folder, ZIP archive, or URL can also be passed:

log1 = load_log('path/to/folder')
log2 = load_log('path/to/logs.zip')


Note

If loading from a URL, the target can be a single file or a ZIP archive.

Pylinac will automatically infer the log type and load it into the appropriate data structures for analysis. The load_log function is a convenient wrapper around the classes within the log analyzer module. Alternatively, logs can be instantiated directly through the classes:

from pylinac import Dynalog, TrajectoryLog, MachineLogs

dlog_path = "C:/path/to/dlog.dlg"
dlog = Dynalog(dlog_path)

tlog_path = "C:/path/to/tlog.bin"
tlog = TrajectoryLog(tlog_path)

path_to_folder = "C:/path/to/dir"
logs = MachineLogs(path_to_folder)


## Working with the Data

Working with the log data is straightforward once the data structures and Axes are understood (See Concepts for more info). Pylinac follows the data structures specified by Varian for trajectory logs, with a Header and Axis Data structure, and possibly a Subbeams structure if the log is a Trajectory log and was autosequenced. For accessible attributes, see TrajectoryLog. The following sections explore each major section of log data and the data structures pylinac creates to assist in data analysis.

Note

It may be helpful to also read the log specification format in parallel with this guide. It is easier to see that pylinac follows the log specifications and where the info comes from. Log specifications are on MyVarian.com.

Header information is essentially anything that isn't axis measurement data; it's metadata about the file, format, machine configuration, etc. Because of the different file formats, there are separate classes for Trajectory log and Dynalog headers: TrajectoryLogHeader and DynalogHeader.

Header attributes are listed in the class API docs. For completeness they are also listed here. For Trajectory logs:

• header
• version
• header_size
• sampling_interval
• num_axes
• axis_enum
• samples_per_axis
• num_mlc_leaves
• axis_scale
• num_subbeams
• is_truncated
• num_snapshots
• mlc_model

For Dynalogs, the available header information is listed under the DynalogHeader class.

Example

Let’s explore the header of the demo trajectory log:

>>> tlog = TrajectoryLog.from_demo()
>>> tlog.header.header
'VOSTL'
>>> tlog.header.version
2.1
>>> tlog.header.num_subbeams
2


### Working with Axis Data

Axis data is all the information relating to the measurements of the various machine axes and is accessible under the axis_data attribute. This includes the gantry, collimator, MLCs, etc. Trajectory logs capture more information than Dynalogs, and additionally hold the expected positions not only for MLCs but also for all axes. Every measurement axis has Axis as its base; they all have similar methods to access and plot the data (see Plotting & Saving Axes/Fluences). However, not all attributes are axes; pylinac adds properties to the axis data structure for ease of use (e.g. the number of snapshots). For Trajectory logs the following attributes are available, based on the TrajectoryLogAxisData class:

• collimator

• gantry

• jaws

Note

The jaws attribute is a data structure to hold all 4 jaw axes; see JawStruct

• couch

Note

The couch attribute is a data structure to hold lateral, longitudinal, etc couch positions; see CouchStruct

• mu

• beam_hold

• control_point

• carriage_A

• carriage_B

• mlc

Note

The mlc attribute is a data structure to hold leaf information; see MLC for attributes and the Working with MLC Data section for more info.

Dynalogs have similar attributes, derived from the DynalogAxisData class.

Example

Let’s access a few axis data attributes:

>>> log = Dynalog.from_demo()
>>> log.axis_data.mu.actual  # a numpy array
array([  0, 100, ...
>>> log.axis_data.num_snapshots
99
>>> log.axis_data.gantry.actual
array([ 180, 180, 180, ...


### Working with MLC Data

Although MLC data is acquired and included in Trajectory logs and Dynalogs, it is not always easy to parse. Additionally, a physicist may be interested in the MLC metrics of a log (RMS, etc). Pylinac provides tools for accessing MLC raw data as well as helper methods and properties via the MLC class. Note that this class is consistent between Trajectory logs and Dynalogs. This class is reachable through the axis_data attribute as mlc.

#### Accessing Leaf data

Leaf data for any leaf is available under the leaf_axes attribute which is a dict. The leaves are keyed by the leaf number and the value is an Axis. Example:

>>> log = Dynalog.from_demo()
>>> log.axis_data.mlc.leaf_axes[1].actual  # numpy array of the 'actual' values for leaf #1
array([ 7.56374, ...
>>> log.axis_data.mlc.leaf_axes[84].difference  # actual values minus the planned values for leaf 84
array([-0.001966, ...


#### MLC helper methods/properties

Beyond direct MLC data, pylinac provides a number of helper methods and properties to make working with MLC data easier and more helpful. All the methods are listed in the MLC class, but some examples of use are given here:

>>> log = Dynalog.from_demo()
>>> log.axis_data.mlc.get_error_percentile(percentile=95)  # get an MLC error percentile value
0.08847
>>> log.axis_data.mlc.leaf_moved(12)  # did leaf 12 move during treatment?
False
>>> log.axis_data.mlc.get_RMS_avg()  # get the average RMS error
0.03733
>>> log.axis_data.mlc.get_RMS_avg('A')  # get the average RMS error for bank A
0.03746
>>> log.axis_data.mlc.num_leaves  # the number of MLC leaves
120
>>> log.axis_data.mlc.num_moving_leaves  # the number of leaves that moved during treatment
60


### Working with Fluences

Fluences created by the MLCs can also be accessed and viewed. Fluences are accessible under the fluence attribute. There are three subclasses that handle the fluences: the fluence actually delivered is in ActualFluence, the fluence planned is in ExpectedFluence, and the gamma of the fluences is in GammaFluence. Each fluence must be calculated; however, pylinac provides reasonable defaults and a few shortcuts. The actual and expected fluences can be calculated to any resolution in the leaf-moving direction. Some examples:

>>> log = Dynalog.from_demo()
>>> log.fluence.actual.calc_map()  # calculate the actual fluence; returns a numpy array
array([ 0, 0, ...
>>> log.fluence.expected.calc_map(resolution=1)  # calculate at 1mm resolution
array([ 0, 0, ...
>>> log.fluence.gamma.calc_map(distTA=0.5, doseTA=1, resolution=0.1)  # change the gamma criteria
array([ 0, 0, ...
>>> log.fluence.gamma.pass_prcnt  # the gamma passing percentage
99.82
>>> log.fluence.gamma.avg_gamma  # the average gamma value
0.0208


### Plotting & Saving Axes/Fluences

Each and every axis of the log can be accessed as a numpy array and/or plotted. For each axis the “actual” array/plot is always available. Dynalogs only have expected values for the MLCs. Trajectory logs have the actual and expected values for all axes. Additionally, if an axis has actual and expected arrays, then the difference is also available.

Example of plotting the MU actual:

log = TrajectoryLog.from_demo()
log.axis_data.mu.plot_actual()


Plot the Gantry difference:

log.axis_data.gantry.plot_difference()


Axis plots are just as easily saved:

log.axis_data.gantry.save_plot_difference(filename='gantry diff.png')


Now, let's plot the actual fluence:

log.fluence.actual.plot_map()


And the fluence gamma:

log.fluence.gamma.plot_map()


Additionally, you can calculate and view the fluences of subbeams if you’re working with trajectory logs:

log = TrajectoryLog.from_demo()
log.subbeams[0].fluence.actual.calc_map()
log.subbeams[0].fluence.actual.plot_map()


## Converting Trajectory logs to CSV

If you already have the log files, you obviously have a record of treatment. However, trajectory logs are in binary format and are not easily readable without tools like pylinac. You can save trajectory logs in a more readable format through the to_csv() method. This will write the log to a comma-separated values (CSV) file, which can be read with Excel and many other programs. You can do further or specialized analysis with the CSV files if you wish, without having to use pylinac:

log = TrajectoryLog.from_demo()
log.to_csv()
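Once exported, the CSV can be consumed without pylinac at all, e.g. with the standard library (read_log_csv is an illustrative helper; the exact column layout depends on the log):

```python
import csv

# Illustrative helper: read an exported log CSV back into Python as a
# list of rows (each row a list of strings). The column layout depends
# on the log that was exported; inspect the header rows of your file.
def read_log_csv(path: str) -> list:
    with open(path, newline='') as f:
        return list(csv.reader(f))
```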


## Anonymizing Logs

Machine logs can be anonymized in two ways. The first is using the anonymize() method, available to both Trajectory logs and Dynalogs. Example script:

tlog = TrajectoryLog.from_demo()
tlog.anonymize()
dlog = Dynalog.from_demo()
dlog.anonymize()


The other way is to use the module function anonymize(). This function will anonymize a single log file or a whole directory. If you plan on anonymizing a lot of logs, use this function, as it is threaded and much faster:

from pylinac.log_analyzer import anonymize

log_file = 'path/to/tlog.bin'
anonymize(log_file)
log_dir = 'path/to/log/folder'
anonymize(log_dir) # VERY fast


## Batch Processing

Batch processing/loading of log files is helpful when dealing with one file at a time is too cumbersome. Pylinac allows you to load logs of an entire directory via MachineLogs; individual log files can be accessed, and a handful of batch methods are included.

Example

Let’s assume all of your logs for the past week are in a folder. You’d like to quickly see what the average gamma is of the files:

>>> from pylinac import MachineLogs
>>> log_dir = r"C:\path\to\log\directory"
>>> logs = MachineLogs(log_dir)
>>> logs.avg_gamma(resolution=0.2)
0.03  # or whatever


You can also append to MachineLogs to have two or more different folders combined:

>>> other_log_dir = r"C:\different\path"
>>> logs.append(other_log_dir)


Trajectory logs in a MachineLogs instance can also be converted to CSV, just as for a single instance of TrajectoryLog:

>>> logs.to_csv()  # only converts trajectory logs; dynalogs are already basically CSV files


Note

Batch processing methods (like avg_gamma()) can take a while if numerous logs have been loaded, so be patient. You can also use the verbose=True argument in batch methods to see how the process is going.

## API Documentation

pylinac.log_analyzer.load_log(file_or_dir, exclude_beam_off=True, recursive=True)[source]

Load a log file or directory of logs, either dynalogs or Trajectory logs.

Parameters: file_or_dir (str) – String pointing to a single log file or a directory that contains log files. exclude_beam_off (bool) – Whether to exclude snapshots where the beam was off. recursive (bool) – Whether to recursively search a directory. Irrelevant for single log files.
pylinac.log_analyzer.anonymize(source, inplace=False, destination=None, recursive=True)[source]

Quickly anonymize an individual log or directory of logs. For directories, threaded execution is performed, making this much faster (10-20x) than loading a MachineLogs instance of the folder and using the .anonymize() method.

Note

Because MachineLog instances are not overly memory-efficient, you may run into MemoryError issues. To avoid this, try not to anonymize more than ~3000 logs at once.

Parameters: source (str) – Points to a local log file (e.g. .dlg or .bin file) or to a directory containing log files. inplace (bool) – Whether to edit the file itself, or create an anonymized copy and leave the original. destination (str, None) – Where to put the anonymized logs. Must point to an existing directory. If None, will place the logs in their original location. recursive (bool) – Whether to recursively enter sub-directories below the root source folder.
class pylinac.log_analyzer.Dynalog(filename, exclude_beam_off=True)[source]

Bases: pylinac.log_analyzer.LogBase

header
axis_data
fluence
a_logfile

Path of the A* dynalog file.

b_logfile

Path of the B* dynalog file.

num_beamholds

Return the number of times the beam was held.

classmethod from_demo(exclude_beam_off=True)[source]

Load and instantiate from the demo dynalog file included with the package.

static run_demo()[source]

Run the Dynalog demo.

publish_pdf(filename=None, unit=None, notes=None, open_file=False)[source]

Publish (print) a PDF containing the analysis and quantitative results.

Parameters: filename (str, file-like object) – The file to write the results to. unit (str) – The name of the unit. notes (str, list of strings) – Any additional notes to be included. A string will print a single line, while a list of strings will print each item on a new line.
static identify_other_file(first_dlg_file, raise_find_error=True)[source]

Return the filename of the corresponding dynalog file.

For example, if the A*.dlg file was passed in, return the corresponding B*.dlg filename. Can find both A- and B-files.

Parameters: first_dlg_file (str) – The absolute file path of the dynalog file. raise_find_error (bool) – Whether to raise an error if the file isn't found.

Returns: The absolute file path to the corresponding dynalog file (str).
class pylinac.log_analyzer.TrajectoryLog(filename, exclude_beam_off=True)[source]

Bases: pylinac.log_analyzer.LogBase

A class for loading and analyzing the data of a Trajectory log.

header
Type: ~pylinac.log_analyzer.TrajectoryLogHeader
axis_data
Type: ~pylinac.log_analyzer.TrajectoryLogAxisData
fluence
Type: ~pylinac.log_analyzer.FluenceStruct
subbeams
Type: ~pylinac.log_analyzer.SubbeamManager
txt_filename

The name of the associated .txt file for the .bin file. The file may or may not be available.

classmethod from_demo(exclude_beam_off=True)[source]

Load and instantiate from the demo trajectory log file included with the package.

static run_demo()[source]

Run the Trajectory log demo.

to_csv(filename=None)[source]

Write the log to a CSV file.

Parameters: filename (None, str) – If None (default), the CSV filename will be the same as the filename of the log. If a string, the filename will be named so.

Returns: The full filename of the newly created CSV file (str).
publish_pdf(filename=None, unit=None, notes=None, open_file=False)[source]

Publish (print) a PDF containing the analysis and quantitative results.

Parameters: filename (str, file-like object) – The file to write the results to. unit (str) – The name of the unit. notes (str, list of strings) – Any additional notes to be included. A string will print a single line, while a list of strings will print each item on a new line.
num_beamholds

Return the number of times the beam was held.

is_hdmlc

Whether the machine has an HDMLC or not.

class pylinac.log_analyzer.MachineLogs(folder, recursive=True)[source]

Bases: list

Read in machine logs from a directory. Inherits from list. Batch methods are also provided.

Parameters: folder (str) – The directory of interest. Will walk through and process any logs, Trajectory or dynalog, it finds. Non-log files will be skipped. recursive (bool) – Whether to walk through subfolders of passed directory. Only used if folder is a valid log directory.

Examples

>>> log_folder = r'C:\path\log\directory'
>>> logs = MachineLogs(log_folder)


Batch methods include determining the average gamma and average gamma pass value:

>>> logs.avg_gamma()
0.05 # or whatever it is
>>> logs.avg_gamma_pct()
97.2

classmethod from_zip(zfile)[source]

Instantiate from a ZIP archive.

Parameters: zfile (str) – Path to the zip archive.
num_logs

The number of logs currently loaded.

num_tlogs

The number of Trajectory logs currently loaded.

num_dlogs

The number of Dynalogs currently loaded.

load_folder(directory, recursive=True)[source]

Load log files from a directory and append to existing list.

Parameters: directory (str, None) – The directory of interest. If a string, will walk through and process any logs, Trajectory or dynalog, it finds. Non-log files will be skipped. If None, files must be loaded later using .load_dir() or .append(). recursive (bool) – If True (default), will walk through subfolders of passed directory. If False, will only search root directory.
report_basic_parameters()[source]

Report basic parameters of the logs.

• Number of logs
• Average gamma value of all logs
• Average gamma pass percent of all logs
append(obj, recursive=True)[source]

Append a log. Overloads list method.

Parameters: obj (str, Dynalog, TrajectoryLog) – If a string, must point to a log file. If a directory, must contain log files. If a Dynalog or Trajectory log instance, then simply appends. recursive (bool) – Whether to walk through subfolders of passed directory. Only applicable if obj was a directory.
avg_gamma(doseTA=1, distTA=1, threshold=0.1, resolution=0.1)[source]

Calculate and return the average gamma of all logs. See calc_map() for further parameter info.

avg_gamma_pct(doseTA=1, distTA=1, threshold=0.1, resolution=0.1)[source]

Calculate and return the average gamma pass percent of all logs. See calc_map() for further parameter info.

to_csv()[source]

Write trajectory logs to CSV. If there are both dynalogs and trajectory logs, only the trajectory logs will be written. File names will be the same as the original log file names.

Returns: A list of the filenames of the newly created CSV files (list).
anonymize(inplace=False, suffix=None)[source]

Save anonymized versions of the logs.

For dynalogs, this replaces the patient ID in the filename(s) and the second line of the log with 'Anonymous<suffix>'. This will rename both A* and B* logs if both are present in the same directory.

For trajectory logs, the patient ID in the filename is replaced with Anonymous<suffix> for the .bin file. If the associated .txt file is in the same directory it will similarly replace the patient ID in the filename with Anonymous<suffix>. Additionally, the Patient ID row will be replaced with Patient ID: Anonymous<suffix>.

Note

Anonymization is only available for logs loaded locally (i.e. not from a URL or a data stream). To anonymize such a log it must be first downloaded or written to a file, then loaded in.

Note

Anonymization is done to the log file itself. The current instance(s) of MachineLog will not be anonymized.

Parameters: inplace (bool) – If False (default), creates an anonymized copy of the log(s). If True, renames and replaces the content of the log file. suffix (str, optional) – An optional suffix that is added after Anonymous to give specificity to the log.

Returns: A list containing the paths to the newly written files (list).
class pylinac.log_analyzer.Axis(actual, expected=None)[source]

Bases: object

Represents an ‘Axis’ of a Trajectory log or dynalog file, holding actual and potentially expected and difference values.

Parameters are Attributes
Parameters: actual (numpy.ndarray) – The array of actual position values. expected (numpy.ndarray, optional) – The array of expected position values. Not applicable for dynalog axes other than MLCs.
difference

Return an array of the difference between actual and expected positions.

Returns: Array the same length as actual/expected (numpy.ndarray).
plot_actual()[source]

Plot the actual positions as a matplotlib figure.

plot_expected()[source]

Plot the expected positions as a matplotlib figure.

plot_difference()[source]

Plot the difference of positions as a matplotlib figure.

class pylinac.log_analyzer.MLC(log_type, snapshot_idx=None, jaw_struct=None, hdmlc=False, subbeams=None)[source]

Bases: object

The MLC class holds MLC information and retrieves relevant data about the MLCs and positions.

Parameters: snapshot_idx (array, list) – The snapshots to be considered for RMS and error calculations (can be all snapshots or just when beam was on). jaw_struct (Jaw_Struct) – hdmlc (boolean) – If False (default), indicates a regular MLC model (e.g. Millennium 120). If True, indicates an HD MLC model (e.g. Millennium 120 HD).
leaf_axes

The dictionary is keyed by the leaf number, with the Axis as the value.

Warning

Leaf numbers are 1-index based to correspond with Varian convention.

Type: dict containing Axis
classmethod from_dlog(dlog, jaws, snapshot_data, snapshot_idx)[source]

Construct an MLC structure from a Dynalog

classmethod from_tlog(tlog, subbeams, jaws, snapshot_data, snapshot_idx, column_iter)[source]

Construct an MLC instance from a Trajectory log.

num_pairs

Return the number of MLC pairs.

num_leaves

Return the number of MLC leaves.

num_snapshots

Return the number of snapshots used for MLC RMS & Fluence calculations.

Warning

This number may not be the same as the number of recorded snapshots in the log since the snapshots where the beam was off may not be included. See MachineLog.load()

num_moving_leaves

Return the number of leaves that moved.

moving_leaves

Return an array of the leaves that moved during treatment.

add_leaf_axis(leaf_axis, leaf_num)[source]

Add a leaf axis to the MLC data structure.

Parameters: leaf_axis (LeafAxis) – The leaf axis to be added. leaf_num (int) – The leaf number.

Warning

Leaf numbers are 1-index based to correspond with Varian convention.
leaf_moved(leaf_num)[source]

Return whether the given leaf moved during treatment.

Parameters: leaf_num (int) –

Warning

Leaf numbers are 1-index based to correspond with Varian convention.

pair_moved(pair_num)[source]

Return whether the given pair moved during treatment.

If either leaf moved, the pair counts as moving.

Parameters: pair_num (int) –

Warning

Pair numbers are 1-index based to correspond with Varian convention.

get_RMS_avg(bank='both', only_moving_leaves=False)[source]

Return the overall average RMS of given leaves.

Parameters: bank ({'A', 'B', 'both'}) – Specifies which bank(s) is desired. only_moving_leaves (boolean) – If False (default), include all the leaves. If True, will remove the leaves that were static during treatment.

Returns: float

Warning

The RMS and error will nearly always be lower if all leaves are included, since non-moving leaves have an error of 0 and will drive down the average values. Convention would include all leaves, but prudence would use only the moving leaves to get a more accurate assessment of error/RMS.
get_RMS_max(bank='both')[source]

Return the overall maximum RMS of given leaves.

Parameters: bank ({'A', 'B', 'both'}) – Specifies which bank(s) is desired.

Returns: float
get_RMS_percentile(percentile=95, bank='both', only_moving_leaves=False)[source]

Return the n-th percentile value of RMS for the given leaves.

Parameters: percentile (int) – RMS percentile desired. bank ({'A', 'B', 'both'}) – Specifies which bank(s) is desired. only_moving_leaves (boolean) – If False (default), include all the leaves. If True, will remove the leaves that were static during treatment.

Warning

The RMS and error will nearly always be lower if all leaves are included, since non-moving leaves have an error of 0 and will drive down the average values. Convention would include all leaves, but prudence would use only the moving leaves to get a more accurate assessment of error/RMS.
get_RMS(leaves_or_bank)[source]

Return an array of leaf RMSs for the given leaves or MLC bank.

Parameters: leaves_or_bank (sequence of numbers, {'a', 'b', 'both'}) – If a sequence, must be a sequence of leaf numbers desired. If a string, it specifies which bank (or both) is desired.

Returns: An array for the given leaves containing the RMS error (numpy.ndarray).
get_leaves(bank='both', only_moving_leaves=False)[source]

Return a list of leaves that match the given conditions.

Parameters: bank ({'A', 'B', 'both'}) – Specifies which bank(s) is desired. only_moving_leaves (boolean) – If False (default), include all the leaves. If True, will remove the leaves that were static during treatment.
get_error_percentile(percentile=95, bank='both', only_moving_leaves=False)[source]

Calculate the n-th percentile error of the leaf error.

Parameters: percentile (int) – Error percentile desired. bank ({'A', 'B', 'both'}) – Specifies which bank(s) is desired. only_moving_leaves (boolean) – If False (default), include all the leaves. If True, will remove the leaves that were static during treatment.

Warning

The RMS and error will nearly always be lower if all leaves are included, since non-moving leaves have an error of 0 and will drive down the average values. Convention would include all leaves, but prudence would use only the moving leaves to get a more accurate assessment of error/RMS.
create_error_array(leaves, absolute=True)[source]

Create and return an error array of only the leaves specified.

Parameters: leaves (sequence) – Leaves desired. absolute (bool) – If True (default), absolute error will be returned. If False, error signs will be retained.

Returns: An array of size (number of leaves) x (number of snapshots) (numpy.ndarray).
create_RMS_array(leaves)[source]

Create an RMS array of only the leaves specified.

Parameters: leaves (sequence) – Leaves desired.

Returns: An array of size (number of leaves) x (number of snapshots) (numpy.ndarray).
leaf_under_y_jaw(leaf_num)[source]

Return a boolean specifying if the given leaf is under one of the y jaws.

Parameters: leaf_num (int) –
get_snapshot_values(bank_or_leaf='both', dtype='actual')[source]

Retrieve the snapshot data of the given MLC bank or leaf/leaves

Parameters: bank_or_leaf (str, array, list) – If a str, specifies which bank ('A', 'B', 'both'). If an array/list, specifies which leaves (e.g. [1, 2, 3]). dtype ({'actual', 'expected'}) – The type of MLC snapshot data to return.

Returns: An array of shape (number of leaves, number of snapshots); e.g. for an MLC bank and 500 snapshots, the array would be (60, 500) (numpy.ndarray).
plot_mlc_error_hist(show=True)[source]

Plot an MLC error histogram.

save_mlc_error_hist(filename, **kwargs)[source]

Save the MLC error histogram to file.

plot_rms_by_leaf(show=True)[source]

Plot RMSs by leaf.

save_rms_by_leaf(filename, **kwargs)[source]

Save the RMS-by-leaf plot to file.

class pylinac.log_analyzer.DynalogHeader(dlogdata)[source]
version

The Dynalog version letter.

Type: str
patient_name

Patient information.

Type: str
plan_filename

Filename if using standalone. If using Treat <=6.5, will produce PlanUID, Beam Number. Not yet implemented.

Type: str
tolerance

Plan tolerance.