The directory watcher service allows one to “set it and forget it” and let pylinac analyze files that are moved to an appointed directory. Results can be emailed upon analysis. The service allows for configurable analysis settings and email settings. Configure with Windows Task Scheduler to run analysis regularly.
There are two ways to use the directory watching service: through normal Python functions and directly from the command line. In addition, there are two modes of watching: continual watching and a one-time run-through. Continual watching is appropriate for watching machine logs coming from a Clinac or TrueBeam; this watcher continually queries for new logs, copies them to the analysis folder, and then analyzes them. The one-time analysis is best suited for processing ad-hoc data, e.g. monthly CBCT datasets.
To use the watcher via Python, write a script that calls the watch() or process() function, depending on your need.
The recommended way to use pylinac's watcher is the one-time run function process(). This can be combined with Windows Task Scheduler to regularly monitor folders:
from pylinac import process

analysis_dir = "C:/path/to/analysis/directory"
process(analysis_dir)  # will process and then return
Analysis is also available via the command line and is similar in behavior.
$ pylinac process "dir/to/process" # analyze and return
The process call runs through the directory once and returns when finished. The directory to process is required. A logger will report when the script has started, when a file gets added, and what the analysis status is. If a file is analyzed successfully, a .png and a .txt file with the same name as the original plus a suffix (default is _analysis) will be generated in the directory.
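The output naming described above can be sketched with a small helper (illustrative only; this mirrors the naming convention, not pylinac's internal code):

```python
from pathlib import Path

def analysis_outputs(source, suffix="_analysis"):
    """Output file names the watcher would generate next to the source file
    (a sketch of the naming convention described above)."""
    stem = Path(source).with_suffix("")  # drop the original extension
    return [f"{stem}{suffix}.png", f"{stem}{suffix}.txt"]
```

For example, a log file named log.bin would yield log_analysis.png and log_analysis.txt.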
You can also set up an email service when analysis runs, described below.
How it works
The watcher/process functions query files in the analysis directory. Existing files, as well as files that are moved into the directory, are examined immediately to see whether pylinac can analyze them. Because many files use the same format (e.g. DICOM), keywords and/or image classifiers are used to determine which type of analysis should be done. When a file is deemed analysis-worthy, pylinac runs the analysis automatically and generates a PDF file with the analysis summary image and quantitative results. If the email service is set up, an email can be sent either on every analysis or only on failing analyses.
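The keyword-based routing can be sketched as follows (the keyword lists are taken from the default configuration shown below, but the function itself is illustrative, not pylinac's actual implementation):

```python
from typing import Optional

# Keyword lists mirroring the default YAML config (see below).
ANALYSIS_KEYWORDS = {
    "picketfence": ["pf", "picket"],
    "starshot": ["star"],
    "winston-lutz": ["wl", "winston", "lutz"],
    "vmat": ["vmat", "drgs", "drmlc"],
}

def classify(filename: str) -> Optional[str]:
    """Return the first analysis type whose keywords appear in the file name."""
    name = filename.lower()
    for analysis, keywords in ANALYSIS_KEYWORDS.items():
        if any(kw in name for kw in keywords):
            return analysis
    return None  # no match; the file is skipped
```

A file named "June_Picket_Fence.dcm" would be routed to picket fence analysis, while an unmatched file is left alone.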
The watcher/process service runs using default values for keywords and tolerance. These values are in a YAML configuration file. Pylinac comes with a default file and settings; if no config file is passed to the functions, the default one is used. You can make your own YAML config file and pass it into the service initialization call:
$ pylinac watch "dir/to/watch" --config="my/config.yaml"
import pylinac
pylinac.watch("dir/to/watch", config_file="my/config.yaml")
The YAML configuration file is the way to change keywords, set up a default analysis directory, change analysis settings, and set up email service. The recommended way of customizing the config file is to copy the pylinac default YAML file as a starting template and edit it as desired. Also see below for a copy of the file contents.
Setting up Email
The pylinac watcher service allows the user to set up an email trigger. The user must supply a Gmail account (…@gmail.com); the account name and password must be supplied in the YAML configuration file.
It is strongly recommended to create an ad hoc email account for the watcher service. Using the pylinac email service requires the account to run with lower-than-normal security, since the emails are sent programmatically rather than through a normal Gmail login (i.e. you didn't log in and send them yourself).
To allow gmail to send the emails, log into the gmail account and go to account settings. Go to the sign in & security section. At the very bottom in the section “Connected apps & sites” will be an option to “Allow less secure apps”. Turn this ON. This account will now allow the watcher service to send emails.
I’ll say it again: don’t use your personal account for this. Create a new account for the sole purpose of sending pylinac analysis emails.
You can set emails to be sent after every analysis or only when an analysis fails. The emails contain a simple message letting you know when the data was analyzed and where to find the results. In many cases the results are also attached to the email, so there is no need to dig for the files.
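A minimal sketch of this kind of notification, using Python's standard library (the message wording and helper names are assumptions for illustration, not pylinac's actual code):

```python
import smtplib
from email.message import EmailMessage

def build_results_email(sender, recipients, subject, analysis_dir, passed):
    """Build a simple results notification (illustrative wording)."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = ", ".join(recipients)
    msg["Subject"] = subject
    status = "passed" if passed else "FAILED"
    msg.set_content(
        f"Pylinac analysis {status}.\nResults can be found in: {analysis_dir}"
    )
    return msg

def send_email(msg, sender, password):
    # Gmail's SMTP-over-SSL endpoint; requires the "less secure apps"
    # setting described above to be enabled on the sender account.
    with smtplib.SMTP_SSL("smtp.gmail.com", 465) as server:
        server.login(sender, password)
        server.send_message(msg)
```

The sender, password, recipients, and subject would come from the email section of the YAML config.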
Default YAML Configuration
The default configuration is reproduced here. All options are listed. You may remove or add keywords at will. The analysis options must match the analyze() parameter names exactly (e.g. doseTA, hu_tolerance):
# Pylinac Watcher Service Configuration File
# See documentation here: http://pylinac.readthedocs.org/en/latest/watcher.html#configuration
# Copy and edit this file to customize analysis
# For each section's `analysis` group, the values correspond to the main class' `analyze()` keyword parameters.
# The `failure` section is the criteria for sending "failure" emails
# keywords are character sequences that must be in the file name to be considered of that analysis type

general:
  directory: path/to/analysis/directory  # path to the folder where analysis is performed;
                                         # can also be specified as keyword arg in the `start_watching` and `process` functions
  sources:
    - path/to/folder1  # e.g. the trajectory log folder: I:\Transfer\TDS\<TB SN>\TrajectoryLog\Treatment
    - path/to/folder2  # e.g. the exported images folder: I:\Transfer\TDS\<TB SN>\Imaging\ExportedImages
  file-suffix: Report  # the suffix added to the .pdf file created after analysis
  avoid-keywords:  # keywords in a file name that cause a skip of analysis
    - .png
    - .txt
    - .pdf
    - .pkl
  query-frequency: 60  # the frequency at which pylinac queries new files; units are in seconds; N/A if using the `process` function
  rolling-window-days: 15  # when analyzing files, only examine files newer than the specified days.
                           # I.e. if the file is older than the value in days it won't be evaluated.
                           # If the value is 0 no window is applied and all files are considered.
  unit: TrueBeam 1234

email:
  enable-all: false  # set to true to send an email after every analysis
  enable-failure: false  # set to true to only send an email after an analysis fails
  sender: email@example.com  # sender MUST be a Gmail account
  sender-password: senderpassword
  recipients:  # add as many recipients as desired
    - firstname.lastname@example.org
    - email@example.com
  subject: Pylinac results  # subject line of the email

# MACHINE LOG SETTINGS
logs:
  keywords:  # keywords needed in the file name to trigger analysis
    - .dlg
    - .bin
  analysis:  # analysis settings; see each module's analyze() method for parameter options.
             # Keywords must match the analyze() method keyword arguments exactly.
    doseTA: 1
    distTA: 1
    threshold: 0.1
    resolution: 0.1
  failure:  # what constitutes a "failure" in analysis
    gamma: 95  # gamma below this value triggers a failure
    avg-rms: 0.05  # average RMS value above this value triggers a failure
    max-rms: 0.5  # maximum RMS value above this value triggers a failure

# WINSTON-LUTZ SETTINGS
winston-lutz:
  use-classifier: true
  keywords:
    - wl
    - winston
    - lutz
  failure:
    gantry-iso-size: 2
    mean-cax-bb-distance: 2
    max-cax-bb-distance: 2.5

# STARSHOT SETTINGS
starshot:
  use-classifier: true
  keywords:
    - star
  analysis:
    tolerance: 1
    radius: 0.8
    sid: 1000  # ignored for EPID images since SID is embedded; if using CR or film, set to the value your clinic does starshots at.
  failure:
    passed: false

# PICKET FENCE SETTINGS
picketfence:
  use-classifier: true
  keywords:
    - pf
    - picket
  analysis:
    tolerance: 0.5
    action_tolerance: 0.3
    hdmlc: false
  failure:
    passed: false

# CATPHAN SETTINGS
catphan:
  model: CatPhan504
  keywords:
    - cbct
    - ct
  analysis:
    hu_tolerance: 40
    scaling_tolerance: 1
    zip_after: true
  failure:
    hu-passed: false
    uniformity-passed: false
    geometry-passed: false
    thickness-passed: false

# VMAT SETTINGS
vmat:
  use-classifier: true
  keywords:
    - vmat
    - drgs
    - drmlc
    - mlcs
  analysis:
    tolerance: 1.5
  failure:
    passed: false

# LEEDS TOR SETTINGS
leeds:
  use-classifier: true
  keywords:
    - leed
    - tor
  analysis:
    low_contrast_threshold: 0.005
    hi_contrast_threshold: 0.4

# Standard Imaging QC-3 SETTINGS
qc3:
  use-classifier: true
  keywords:
    - pips
    - qc
  analysis:
    low_contrast_threshold: 0.005
    hi_contrast_threshold: 0.4

# Las Vegas SETTINGS
las-vegas:
  keywords:
    - vegas
  analysis:
    low_contrast_threshold: 0.005
Using the watcher with Windows Task Scheduler
You can easily set up the pylinac process function with Windows Task Scheduler in order to process new files regularly.
If you will be monitoring two or more machines, it is recommended to make a separate schedule, analysis folder, and YAML config file for each machine. Specifically, set the "unit" option in the "general" section of each machine's YAML file.
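For example, two machines' config files might differ only in their general sections (the paths and unit names below are placeholders):

```yaml
# truebeam1.yaml (fragment; placeholder values)
general:
  directory: C:/QA/TrueBeam1/analysis
  unit: TrueBeam 1234

# truebeam2.yaml would point at its own analysis directory
# and carry its own unit name
```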
On the Windows computer you plan on using for analysis, open or search for the “Task Scheduler”:
Click on “Create Basic Task…” in the right panel and fill in the name/description with whatever you want:
Next, you will be asked for the trigger. For the time being, say daily; you can change this later if you like:
Set the start date and time; I run mine in the middle of the night so as not to eat resources during clinic hours:
For the action, select “Start a program”:
Now we get to the critical part: starting the pylinac process function. There are several ways to do this: create a batch file that calls pylinac through the command line, create a batch file that calls a python file, or call python to run a python file. We will do the last option, as it allows you to do any post-analysis steps you may want (like a daily "I ran today" email) while staying in our favorite language. There are two pieces needed:
- The python.exe location (I’m assuming Miniconda/Anaconda)
- The custom python file that runs the process function
If you’re using Miniconda or Anaconda you will find your environments in the main conda folder. In each environment's root folder is the python.exe file. The default is something like C:\Anaconda\envs\<your env name>\python.exe. As a last resort, you can find the location of the python interpreter you’re using by running:
import sys
print(sys.executable)
Now, create a new python file that will run the process function along with the analysis directory and config file:
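As a sketch, that file might look like the following (the paths are placeholders to adjust for your own setup; the config_file keyword mirrors the watch() example shown earlier):

```python
# run_pylinac.py -- minimal sketch of a script to schedule with Task Scheduler.
ANALYSIS_DIR = r"C:\QA\analysis"       # placeholder: your analysis directory
CONFIG_FILE = r"C:\QA\my_config.yaml"  # placeholder: your custom YAML config

def main():
    # import kept inside main() so the file can be inspected
    # even where pylinac isn't installed
    from pylinac import process
    process(ANALYSIS_DIR, config_file=CONFIG_FILE)
    # any post-analysis steps (a daily "I ran today" email, etc.) go here

if __name__ == "__main__":
    main()
```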
Once that’s made, save the file to a known location. We’re ready to set the task program now. In the “Program/script” line, enter the full path to the python.exe file. In the “Add arguments” section, enter the full path of the python file we just made:
Proceed through the summary and you’re done. Back at the main Task Scheduler page click on the “Task Scheduler Library” on the left panel to list the current task schedules. You should now find your new task listed. Highlight the task and you can now edit or run the task via the right panel:
If you have multiple machines, create individualized YAML config files and python files and schedules for each of them.