The PEPT Library’s Documentation¶
A Python library that unifies Positron Emission Particle Tracking (PEPT) research, including tracking, simulation, data analysis and visualisation tools.
Positron Emission Particle Tracking¶
PEPT is a technique developed at the University of Birmingham which allows the non-invasive, three-dimensional tracking of one or more ‘tracer’ particles through particulate, fluid or multiphase systems. The technique allows particle or fluid motion to be tracked with sub-millimetre accuracy and sub-millisecond temporal resolution and, due to its use of highly-penetrating 511keV gamma rays, can be used to probe the internal dynamics of even large, dense, optically opaque systems - making it ideal for industrial as well as scientific applications.
PEPT is performed by radioactively labelling a particle with a positron-emitting radioisotope such as fluorine-18 (18F) or gallium-68 (68Ga), and using the back-to-back gamma rays produced by electron-positron annihilation events in and around the tracer to triangulate its spatial position. Each detected gamma ray represents a line of response (LoR).

Transforming gamma rays, or lines of response (left) into individual tracer trajectories (right) using the pept library. Depicted is experimental data of two tracers rotating at 42 RPM, imaged using the University of Birmingham Positron Imaging Centre’s parallel screens PEPT camera.
Tutorials and Documentation¶
A very fast-paced introduction to Python is available here (Google Colab tutorial link); it is aimed at engineers whose background might be a few lines written in MATLAB, as well as moderate C/C++ programmers.
A beginner-friendly tutorial for using the pept package is available here (Google Colab link).
The links above point to Google Colaboratory, a Jupyter notebook-hosting website that lets you combine text with Python code, executing it on Google servers. Pretty neat, isn’t it?
Performance¶
Significant effort has been put into making the algorithms in this package as fast as possible. Most computationally intensive code has been implemented in Cython, C or C++ and allows policy-based parallel execution, either on shared-memory machines using joblib / ThreadPoolExecutor, or on distributed computing clusters using mpi4py.futures.MPIPoolExecutor.
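For example (a minimal sketch only), the number of parallel workers can be set when running a processing pipeline via the max_workers argument shown in the PEPT-ML recipes below; selecting a different executor, such as an MPI pool on a distributed cluster, is controlled by a separate keyword whose exact name should be checked in the pept.Pipeline.fit documentation:
import pept
from pept.tracking import Cutpoints, HDBSCAN, SplitLabels, Centroids, Stack
# Chain filters into a pipeline (see the PEPT-ML tutorials below)
pipeline = (
    Cutpoints(0.5) +
    HDBSCAN(0.15) +
    SplitLabels() + Centroids() + Stack()
)
# `max_workers` sets the number of parallel workers on a shared-memory
# machine; swapping in an MPI-based executor is an assumption here.
# lors = pept.scanners.adac_forte("binary_file.da01")
# locations = pipeline.fit(lors, max_workers = 16)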
Copyright¶
Copyright (C) 2021 the pept developers. Until now, this library was built directly or indirectly through the brain-time of:
Andrei Leonard Nicusan (University of Birmingham)
Dr. Kit Windows-Yule (University of Birmingham)
Dr. Sam Manger (University of Birmingham)
Matthew Herald (University of Birmingham)
Chris Jones (University of Birmingham)
Prof. David Parker (University of Birmingham)
Dr. Antoine Renaud (University of Edinburgh)
Dr. Cody Wiggins (Virginia Commonwealth University)
Thank you.
Getting Started¶
These instructions will help you get started with PEPT data analysis.
Prerequisites¶
This package supports Python 3.6 and above - it is built and tested for Python 3.6, 3.7 and 3.8 on Windows, Linux and macOS (thanks to conda-forge, which is awesome!).
You can install it using the batteries-included Anaconda distribution or the bare-bones Python interpreter. You can also check out our Python and pept tutorials.
Installation¶
The easiest and quickest installation, if you are using Anaconda:
conda install -c conda-forge pept
You can also install the latest release version of pept from PyPI:
pip install --upgrade pept
Or you can install the development version from the GitHub repository:
pip install -U git+https://github.com/uob-positron-imaging-centre/pept
Tutorials¶
This part contains copy-pastable tutorials for using the pept library to initialise, track, post-process and visualise your PEPT data.
Saving / Loading Data¶
All PEPT objects can be saved in an efficient binary format using pept.save and pept.load:
import pept
import numpy as np
# Create some dummy data
lines_raw = np.arange(70).reshape((10, 7))
lines = pept.LineData(lines_raw)
# Save data
pept.save("data.pickle", lines)
# Load data
lines_loaded = pept.load("data.pickle")
The binary approach has the advantage of preserving all your metadata saved in the object instances - e.g. columns, sample_size - allowing the full state to be reloaded.
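For instance (a minimal sketch reusing the dummy data above), a custom sample_size set on a LineData object survives the save / load round trip:
import pept
import numpy as np
# Dummy LoRs with a custom sample size; pept.save keeps both the raw
# data and the metadata (columns, sample_size, overlap)
lines = pept.LineData(np.arange(70).reshape((10, 7)), sample_size = 4)
pept.save("data.pickle", lines)
lines_loaded = pept.load("data.pickle")
print(lines_loaded.columns)      # ['t', 'x1', 'y1', 'z1', 'x2', 'y2', 'z2']
print(lines_loaded.sample_size)  # 4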
Matrix-like data like pept.LineData and pept.PointData can also be saved in a slower, but human-readable CSV format using their .to_csv methods; such tabular data can then be reinitialised using pept.read_csv:
# Save data in CSV format
lines.to_csv("data.csv")
# Load data back - *this will be a simple NumPy array!*
lines_raw = pept.read_csv("data.csv")
# Need to put the array back into a `pept.LineData`
lines = pept.LineData(lines_raw)
Plotting¶
Interactive 3D Plots¶
The easiest method of plotting 3D PEPT-like data is using the pept.plots.PlotlyGrapher interactive grapher:
# Plotting some example 3D lines
import pept
from pept.plots import PlotlyGrapher
import numpy as np
lines_raw = np.arange(70).reshape((10, 7))
lines = pept.LineData(lines_raw)
PlotlyGrapher().add_lines(lines).show()
# Plotting some example 3D points
import pept
from pept.plots import PlotlyGrapher
import numpy as np
points_raw = np.arange(40).reshape((10, 4))
points = pept.PointData(points_raw)
PlotlyGrapher().add_points(points).show()
The PlotlyGrapher object allows straightforward creation of subplots:
# Plot the example 3D lines and points on separate subplots
grapher = PlotlyGrapher(cols = 2)
grapher.add_lines(lines) # col = 1 by default
grapher.add_points(points, col = 2)
grapher.show()
# Plot the example 3D lines and points on separate subplots
grapher = PlotlyGrapher(rows = 2, cols = 2)
grapher.add_lines(lines, col = 2) # row = 1 by default
grapher.add_points(points, row = 2, col = 2)
grapher.show()
Initialising PEPT Scanner Data¶
The pept.scanners submodule contains converters between scanner-specific data formats (e.g. parallel screens / ASCII, modular camera / binary) and the pept base classes, allowing simple initialisation of pept.LineData from different sources.
ADAC Forte¶
The parallel screens detector used at Birmingham can output binary list-mode data, which can be converted using pept.scanners.adac_forte(binary_file):
import pept
lines = pept.scanners.adac_forte("binary_file.da01")
Parallel Screens¶
If you have your data as a CSV containing 5 columns [t, x1, y1, x2, y2] representing the coordinates of the two points defining an LoR on two parallel screens, you can use pept.scanners.parallel_screens to insert the missing coordinates and get the LoRs into the general LineData format [t, x1, y1, z1, x2, y2, z2]:
import pept
screen_separation = 500
lines = pept.scanners.parallel_screens(csv_or_array, screen_separation)
Modular Camera¶
Your modular camera data can be initialised using pept.scanners.modular_camera:
import pept
lines = pept.scanners.modular_camera(filepath)
PEPT-ML¶
PEPT-ML one pass of clustering recipe¶
import pept
from pept.tracking import *
max_tracers = 1
pipeline = pept.Pipeline([
    Cutpoints(max_distance = 0.5),
    HDBSCAN(true_fraction = 0.15, max_tracers = max_tracers),
    SplitLabels() + Centroids(),
    Stack(),
])
locations = pipeline.fit(lors)
PEPT-ML second pass of clustering recipe¶
import pept
from pept.tracking import *
max_tracers = 1
pipeline = pept.Pipeline([
    Stack(sample_size = 30 * max_tracers, overlap = 30 * max_tracers - 1),
    HDBSCAN(true_fraction = 0.6, max_tracers = max_tracers),
    SplitLabels() + Centroids(),
    Stack(),
])
locations2 = pipeline.fit(lors)
PEPT-ML complete recipe¶
A complete PEPT-ML recipe, including an example ADAC Forte data initialisation, two passes of clustering, trajectory separation, plotting and saving trajectories as CSV:
# Import what we need from the `pept` library
import pept
from pept.tracking import *
from pept.plots import PlotlyGrapher, PlotlyGrapher2D
# Open interactive plots in the web browser
import plotly
plotly.io.renderers.default = "browser"
# Initialise data from file and set sample size and overlap
filepath = "DS1.da01"
max_tracers = 1
lors = pept.scanners.adac_forte(
    filepath,
    sample_size = 200 * max_tracers,
    overlap = 150 * max_tracers,
)
# Select only the first 1000 samples of LoRs for testing; comment out for all
lors = lors[:1000]
# Create PEPT-ML processing pipeline
pipeline = pept.Pipeline([
    # First pass of clustering
    Cutpoints(max_distance = 0.5),
    HDBSCAN(true_fraction = 0.15, max_tracers = max_tracers),
    SplitLabels() + Centroids(),

    # Second pass of clustering
    Stack(sample_size = 30 * max_tracers, overlap = 30 * max_tracers - 1),
    HDBSCAN(true_fraction = 0.6, max_tracers = max_tracers),
    SplitLabels() + Centroids(),

    # Trajectory separation
    Segregate(window = 20 * max_tracers, cut_distance = 10),
])
# Process all samples in `lors` in parallel, using `max_workers` threads
trajectories = pipeline.fit(lors, max_workers = 16)
# Save trajectories as CSV
trajectories.to_csv(filepath + ".csv")
# Plot trajectories - first a 2D timeseries, then all 3D positions
PlotlyGrapher2D().add_timeseries(trajectories).show()
PlotlyGrapher().add_points(trajectories).show()
The Birmingham Method¶
Birmingham method recipe:
import pept
from pept.tracking import *
pipeline = pept.Pipeline([
    BirminghamMethod(fopt = 0.5),
    Stack(),
])
locations = pipeline.fit(lors)
Recipe with Trajectory Separation¶
import pept
from pept.tracking import *
pipeline = pept.Pipeline([
    BirminghamMethod(fopt = 0.5),
    Segregate(window = 20, cut_distance = 10),
])
locations = pipeline.fit(lors)
Manual¶
All public pept subroutines are fully documented here, along with copy-pastable examples. The base functionality is summarised below; the rest of the library is organised into submodules, which you can access on the left. You can also use the Search bar in the top left to go directly to what you need.
We really appreciate all help with writing useful documentation; if you feel something can be improved, or would like to share some example code, by all means get in contact with us - or be a superhero and click Edit this page on the right and submit your changes to the GitHub repository directly!
Base Functions¶
Read a given number of lines from a file and return a numpy array of the values.
Load a binary saved / pickled object from filepath.
Save an object obj instance as a binary file at filepath.
Base Classes¶
A class for PEPT LoR data iteration, manipulation and visualisation.
A class for general PEPT point-like data iteration, manipulation and visualisation.
A class that manages a 2D pixel space, including tools for pixel traversal of lines, manipulation and visualisation.
A class that manages a single 3D voxel space, including tools for voxel traversal of lines, manipulation and visualisation.
A PEPT processing pipeline, chaining multiple Filter and Reducer for efficient, parallel execution.
Auxiliaries¶
Define a sample_size as a fixed time window / slice.
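A minimal usage sketch is shown below; note that the class name pept.TimeWindow is an assumption - check the API reference for the exact name of this auxiliary:
import pept
# Hypothetical sketch: define samples as fixed time slices instead of a
# fixed number of LoRs (the `TimeWindow` name is an assumption)
lors = pept.scanners.adac_forte("binary_file.da01")
lors.sample_size = pept.TimeWindow(100)   # e.g. 100 ms of LoRs per sample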
Base / Abstract Classes (pept.base)¶
Base class for all PEPT-oriented objects.
A class for iterating through an array (or array-like) in samples with potential overlap.
Base class for PEPT filters (transforming a sample into another) and reducers (transforming a list of samples).
Abstract class from which PEPT filters inherit.
Abstract class from which PEPT reducers inherit.
An abstract class that defines a filter for samples of pept.PointData.
An abstract class that defines a filter for samples of pept.LineData.
An abstract class that defines a filter for samples of pept.Voxels.
Initialising Scanner Data (pept.scanners)¶
Convert data from different PET / PEPT scanner geometries and data formats into the common base classes.
The PEPT base classes PointData, LineData, and VoxelData are abstractions over the type of data that may be encountered in the context of PEPT (e.g. LoRs are LineData, trajectory points are PointData). Once the raw data is transformed into the common formats, any tracking, analysis or visualisation algorithm in the pept package can be used interchangeably.
The pept.scanners subpackage provides modules for transforming the raw data from different PET / PEPT scanner geometries (parallel screens, modular cameras, etc.) and data formats (binary, ASCII, etc.) into the common base classes.
If you’d like to integrate another scanner geometry or raw data format into this package, you can check out the pept.scanners.parallel_screens function as an example. This usually only involves writing a single function by hand; then all functionality from LineData will be available to your new data format, for free.
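As a rough sketch (the text format and function name below are made up purely for illustration), such a converter only needs to build the [t, x1, y1, z1, x2, y2, z2] array and wrap it in pept.LineData:
import numpy as np
import pept
def my_scanner(filepath):
    # Hypothetical loader: assume each row of the text file already holds
    # the 7 values [t, x1, y1, z1, x2, y2, z2]; a real converter would
    # reorder / complete whatever the scanner actually outputs
    raw = np.loadtxt(filepath)
    return pept.LineData(raw)
lines = my_scanner("my_scanner_output.txt")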
Initialise PEPT lines of response (LoRs) from a binary file outputted by the ADAC Forte parallel screen detector list mode (common file extension “.da01”).
Initialise PEPT LoRs for parallel screens PET/PEPT detectors from an input CSV file or array.
Compute the geometric efficiency of a parallel screens PEPT detector at different 3D coordinates using Antonio Guida’s formula.
Initialise PEPT LoRs from the modular camera DAQ.
Tracking Algorithms (pept.tracking)¶
Tracer location, identification and tracking algorithms.
The pept.tracking subpackage hosts different tracking algorithms, working with both the base classes and generic NumPy arrays.
All algorithms here are either pept.base.Filter or pept.base.Reducer subclasses, implementing the .fit and .fit_sample methods; here is an example using PEPT-ML:
>>> from pept.tracking import *
>>>
>>> cutpoints = Cutpoints(0.5).fit(lines)
>>> clustered = HDBSCAN(0.15).fit(cutpoints)
>>> centres = (SplitLabels() + Centroids() + Stack()).fit(clustered)
Once the processing steps have been tuned (see the Tutorials), you can chain all filters into a pept.Pipeline for efficient, parallel execution:
>>> pipeline = (
...     Cutpoints(0.5) +
...     HDBSCAN(0.15) +
...     SplitLabels() + Centroids() + Stack()
... )
>>> centres = pipeline.fit(lines)
If you would like to implement a PEPT algorithm, all you need to do is subclass pept.base.Filter and define the method .fit_sample(sample) - and you get parallel execution and pipeline chaining for free!
>>> import pept
>>>
>>> class NewAlgorithm(pept.base.LineDataFilter):
...     def __init__(self, setting1, setting2 = None):
...         self.setting1 = setting1
...         self.setting2 = setting2
...
...     def fit_sample(self, sample: pept.LineData):
...         processed_points = ...
...         return pept.PointData(processed_points)
General-Purpose Transformers¶
Stack iterables - for example a …
Split a sample of data into unique …
Stack all samples and split them into a list according to a named / numeric column index.
Compute the geometric centroids of a list of samples of points.
Compute the minimum distance point of some …
Select only data satisfying multiple conditions, given as a string, a function or list thereof; e.g. …
Remove columns (either column names or indices) from pept.LineData or pept.PointData.
Space Transformers¶
Asynchronously voxelize samples of lines from a pept.LineData.
Interpolate between data points at a fixed sampling rate; useful for Eulerian fields computation.
Tracer Locating Algorithms¶
The Birmingham Method is an efficient, analytical technique for tracking tracers using the LoRs from PEPT data.
Transform LoRs (a pept.LineData instance) into cutpoints (a pept.PointData instance) for clustering, in parallel.
Transform LoRs (a pept.LineData instance) into minpoints (a pept.PointData instance) for clustering, in parallel.
Use HDBSCAN to cluster some …
FPI is a modern voxel-based tracer-location algorithm that can reliably work with unknown numbers of tracers in fast and noisy environments.
Trajectory Separation Algorithms¶
Segregate the intertwined points from multiple trajectories into individual paths.
Post Processing Algorithms¶
Append the dimension-wise or absolute velocity to samples of points using a 2D fitted polynomial in a rolling window mode.
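A hedged sketch of appending such a velocity filter to the Birmingham method recipe is shown below; the Velocity name and its window keyword follow the description above, but should be checked against the API reference:
import pept
from pept.tracking import *
# Sketch only - the `Velocity` filter name and `window` keyword are assumptions
pipeline = pept.Pipeline([
    BirminghamMethod(fopt = 0.5),
    Segregate(window = 20, cut_distance = 10),
    Velocity(window = 7),
])
trajectories = pipeline.fit(lors)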
Post Processing (pept.processing)¶
The PEPT-oriented post-processing suite, including occupancy grid, vector velocity fields, etc.
This module contains fast, robust functions that operate on PEPT-like data and integrate with the pept library’s base classes.
Compute the 2D occupancy of circles of different radii.
Compute the 2D occupancy / residence time distribution of a single circular particle moving along a trajectory.
Visualisation (pept.plots)¶
PEPT-oriented visualisation tools.
Visualisation functions and classes for PEPT data, transparently working with both pept base classes and raw NumPy arrays (e.g. PlotlyGrapher.add_lines handles both pept.LineData and (N, 7) NumPy arrays).
The PlotlyGrapher class creates interactive, publication-ready 3D figures with optional subplots which can also be exported to portable HTML files. The PlotlyGrapher2D class is its two-dimensional counterpart, handling e.g. pept.Pixels.
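For example, a populated grapher can be written to a standalone HTML file; the sketch below assumes the underlying Plotly figure is exposed through the grapher’s .fig attribute (check the PlotlyGrapher reference if in doubt):
import pept
from pept.plots import PlotlyGrapher
import numpy as np
points = pept.PointData(np.arange(40).reshape((10, 4)))
grapher = PlotlyGrapher()
grapher.add_points(points)
# Plotly's `write_html` produces a portable, interactive HTML file;
# accessing the figure via `.fig` is an assumption
grapher.fig.write_html("points.html")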
A class for PEPT data visualisation using Plotly-based 3D graphs.
A class for PEPT data visualisation using Plotly-based 2D graphs.
pept.utilities¶
PEPT-oriented utility functions.
The utility functions include low-level optimised Cython functions (e.g. find_cutpoints) that are of common interest across the pept package, as well as I/O functions, parallel maps and pixel/voxel traversal algorithms.
Even though the functions are grouped in directories (subpackages) and files (modules), unlike the rest of the package, they are all imported into the pept.utilities root, so that their import paths are not too long.
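For example, the low-level cutpoint routine can be called directly on a raw NumPy array of LoRs - a minimal sketch, with the exact signature of find_cutpoints to be checked in the reference below:
import numpy as np
from pept.utilities import find_cutpoints
# 10 dummy LoRs in the [t, x1, y1, z1, x2, y2, z2] format
lines = np.arange(70, dtype = float).reshape((10, 7))
# Cutpoints between pairs of lines closer than a maximum distance of 0.5
# (passing the array and distance positionally; signature assumed)
cutpoints = find_cutpoints(lines, 0.5)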
Compute the cutpoints from a given array of lines.
Compute the minimum distance points (MDPs) from all combinations of num_lines lines given in an array of lines sample_lines.
Group the rows in a 2D data_array based on the unique values in a given column_to_separate, returning the groups as a list of numpy arrays.
Return the number of lines (or rows) in a file.
Read a given number of lines from a file and return a numpy array of the values.
Read chunks of data from a file lazily, returning numpy arrays of the values.
Utility for parallelising (read CSV chunk -> process chunk) workflows.
Fast pixel traversal for 2D lines (or LoRs).
Fast voxel traversal for 3D lines (or LoRs).
Class for fast, on-demand reading / parsing and iteration over chunks of data from CSV files.
pept.simulation¶
Simulate PEPT data.
Contributing¶
The pept library is not a one-man project; it is being built, improved and extended continuously (directly or indirectly) by an international team of researchers of diverse backgrounds - including programmers, mathematicians and chemical / mechanical / nuclear engineers. Want to contribute and become a PEPTspert yourself? Great, join the team!
There are multiple ways to help:
Open an issue mentioning any improvement you think pept could benefit from.
Write a tutorial or share scripts you’ve developed that we can add to the pept documentation to help other people in the future.
Share your PEPT-related algorithms - tracking, post-processing, visualisation, anything really! - so everybody can benefit from them.
Want to be a superhero and contribute code directly to the library itself? Grand - fork the project, add your code and submit a pull request (if that sounds like gibberish but you’re an eager programmer, check this article). We are more than happy to work with you on integrating your code into the library and, if helpful, we can schedule a screen-to-screen meeting for a more in-depth discussion about the pept package architecture.
Naturally, anything you contribute to the library will respect your authorship - protected by the strong GPL v3.0 open-source license (see the “Licensing” section below). If you include published work, please add a pointer to your publication in the code documentation.
Licensing¶
The pept package is GPL v3.0 licensed. In non-lawyer terms, the key points of this license are:
You can view, use, copy and modify this code _freely_.
Your modifications must _also_ be licensed with GPL v3.0 or later.
If you share your modifications with someone, you have to include the source code as well.
Essentially do whatever you want with the code, but don’t try selling it saying it’s yours :). This is a community-driven project building upon many other wonderful open-source projects (NumPy, Plotly, even Python itself!) without which pept simply would not have been possible. GPL v3.0 is indeed a very strong copyleft license; it was deliberately chosen to maintain the openness and transparency of great software and progress, and respect the researchers pushing PEPT forward. Frankly, open collaboration is way more efficient than closed, for-profit competition.
Citing¶
If you used this codebase or any software making use of it in a scientific publication, we ask you to cite the following paper:
Nicuşan AL, Windows-Yule CR. Positron emission particle tracking using machine learning. Review of Scientific Instruments. 2020 Jan 1;91(1):013329. https://doi.org/10.1063/1.5129251
Because pept is a project bringing together the expertise of many people, it hosts multiple algorithms that were developed and published in other papers. Please check the documentation of the pept algorithms you are using in your research and cite the original papers mentioned accordingly.
References¶
Papers presenting PEPT algorithms included in this library: 1, 2, 3.
1. Parker DJ, Broadbent CJ, Fowles P, Hawkesworth MR, McNeil P. Positron emission particle tracking - a technique for studying flow within engineering equipment. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment. 1993 Mar 10;326(3):592-607.
2. Nicuşan AL, Windows-Yule CR. Positron emission particle tracking using machine learning. Review of Scientific Instruments. 2020 Jan 1;91(1):013329.
3. Wiggins C, Santos R, Ruggles A. A feature point identification method for positron emission particle tracking with multiple tracers. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment. 2017 Jan 21;843:22-8.