Lotus and Image Reconstruction Efficiency

Written by Sarah Hensley, Intern, LeafLabs

Lotus is a LeafLabs project that uses a light-field microscope to capture high-resolution 3D images of neuron activity in zebrafish brains. Before a 3D image can be reconstructed from the raw images taken by the microscope, three processing steps must happen. First, a matrix known as the point spread function must be calculated using physical parameters from the experiment. Second, the raw images must be cropped and aligned. Finally, a 3D image can be reconstructed by using the point spread function to perform a 3D deconvolution.

These processing steps are computationally intensive, since each requires several calculations on tens of thousands of data points. Without leveraging computational tricks to speed up the code, these steps would be infeasible, taking half a year of constant computation. This blog post discusses how to speed up processing to make reconstructing 3D images possible.

The most challenging part of processing is calculating the point spread function (PSF). The PSF is a 5D matrix that maps how light from a 3D point source appears in a raw image taken by the microscope. Thus, the first two dimensions can be understood as the (x, y) position of a pixel in the raw image. The last three dimensions can be understood as the (x, y, z) position of a voxel in the space being imaged. Fixing the last three dimensions at a certain voxel gives a 2D matrix that represents what the raw image would look like if there were a point source at that voxel. These 2D matrices can be represented as pictures to better understand the process of calculating the PSF. The image below gives an example of what the raw image would look like if there were a point source at the center of a plane at z = -100 μm.

Figure 1: The raw image output from having a point source centered on a plane set at z = -100 μm.

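To make the layout concrete, here is a minimal sketch of indexing such a PSF with NumPy. The array shape and the voxel indices are illustrative assumptions, not the actual dimensions used by Lotus.

```python
import numpy as np

# Illustrative stand-in for the 5D PSF: the first two axes are raw-image
# pixel coordinates, the last three are the (x, y, z) voxel indices.
# The shape below is an assumption for demonstration only.
psf = np.zeros((512, 512, 20, 20, 5), dtype=complex)

# Fixing a voxel (ix, iy, iz) leaves a 2D matrix: the pattern that a point
# source at that voxel would produce on the sensor, like the figure above.
ix, iy, iz = 10, 10, 2
slice_2d = psf[:, :, ix, iy, iz]   # shape (512, 512)
```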

Calculating the PSF is a difficult task because of the number of elements and the amount of computation necessary to get each element. A naive calculation of a 2D slice of the PSF, like the one seen above, would take approximately 50 minutes. Depending on the experiment, the entire PSF contains between 3000 and 5000 of these slices. Clearly, this computation needs to be sped up to make finding the entire PSF feasible.

The computation that takes the longest is the calculation of the light wavefront produced by a single voxel, before it passes through the microscope's lenses. This wavefront is a 450 x 450 matrix that requires a complex-valued integral at every element. Because SciPy's numerical integration routines cannot handle complex-valued integrands, the real and imaginary parts must be integrated separately, so this step computes over 405,000 integrals for a single 2D slice of the PSF.
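
As a sketch of this workaround, the helper below integrates a complex-valued function by calling scipy.integrate.quad twice, once per part. The toy integrand is an assumption for illustration; the actual wavefront integrand depends on the experiment's optical parameters.

```python
import numpy as np
from scipy import integrate

def complex_quad(f, a, b):
    """Integrate a complex-valued integrand over [a, b] by splitting it
    into real and imaginary parts, since scipy.integrate.quad only
    accepts real-valued functions."""
    real_part, _ = integrate.quad(lambda x: np.real(f(x)), a, b)
    imag_part, _ = integrate.quad(lambda x: np.imag(f(x)), a, b)
    return complex(real_part, imag_part)

# Toy oscillatory integrand standing in for the wavefront integral.
value = complex_quad(lambda x: np.exp(1j * x**2), 0.0, 1.0)
```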

First, taking advantage of symmetries drastically reduces the number of integrals needed (and therefore the computation time). The wavefront matrix is a discretely sampled circularly symmetric pattern, pictured below. The entire matrix can be found by just calculating one-eighth of the pattern, then using symmetry to fill in the rest.

Figure 2: The wavefront from a single voxel, before going through all of the lenses. This matrix is important in calculating the PSF. Note how it is circularly symmetric.

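A minimal sketch of this trick, assuming a circularly symmetric function sampled on an odd-sized grid so that the center falls on a sample point; the toy integrand and the grid size are illustrative, not the actual wavefront calculation.

```python
import numpy as np

def fill_by_symmetry(f, n):
    """Build an n x n matrix of a circularly symmetric pattern by evaluating
    f only on the one-eighth wedge 0 <= y <= x and mirroring the result
    into the other seven octants."""
    assert n % 2 == 1, "odd size keeps the center on a grid point"
    c = n // 2
    out = np.empty((n, n), dtype=complex)
    for i in range(c, n):              # x >= 0
        for j in range(c, i + 1):      # 0 <= y <= x: the wedge
            x, y = i - c, j - c
            value = f(x, y)
            # Swapping x and y and flipping signs covers all eight octants.
            for a, b in ((x, y), (y, x)):
                for sa in (a, -a):
                    for sb in (b, -b):
                        out[c + sa, c + sb] = value
    return out

# Toy radially symmetric stand-in for the wavefront integrand.
wavefront = fill_by_symmetry(lambda x, y: np.exp(1j * np.hypot(x, y)), 451)
```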

The matrix above is generated for a specific (x, y, z) voxel. However, shifting its elements up/down and left/right gives the corresponding matrix for any voxel that shares the same z value. This reduces the number of times the matrix has to be calculated from the 3000 to 5000 range (once per voxel) to the 15 to 25 range (once per z plane). Overall, then, using symmetry reduces the computation from a 50 minute calculation performed 3000 to 5000 times to a 7 minute calculation performed 15 to 25 times.
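
A sketch of the shifting step, assuming the lateral offset between voxels maps to a whole number of pixels; note that np.roll wraps values around at the edges, so the real implementation may need to pad and crop instead.

```python
import numpy as np

# Stand-in for a wavefront already computed for one voxel of a z plane.
base_wavefront = np.zeros((451, 451), dtype=complex)

def shifted_wavefront(base, shift_rows, shift_cols):
    """Translate the precomputed wavefront to get the pattern for another
    voxel on the same z plane. Note: np.roll wraps around at the edges,
    so the real code may need to pad and crop instead."""
    return np.roll(base, shift=(shift_rows, shift_cols), axis=(0, 1))

# Pattern for a voxel two rows down and three columns right of the original.
shifted = shifted_wavefront(base_wavefront, 2, 3)
```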

While this is a great improvement, the PSF still takes between 1.5 and 3 hours to calculate, which is too long. Even with the symmetry reductions, finding the matrix pictured above remains the most time-consuming step, since it requires calculating roughly 50,000 integrals. With no further way to reduce the number of integrals or the complexity of each one, the remaining option was to make the code itself run faster.

The first option was to use multiprocessing. Because the integrals don’t depend on each other, they can be calculated in parallel. This sped up the code by a factor of four, the number of processors the computer had available. While this was an improvement, it made the total PSF calculation take between 30 and 60 minutes, which was still longer than desirable.
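
A sketch of the multiprocessing approach: since each integral is independent, a worker pool can evaluate them in parallel, so wall-clock time falls roughly in proportion to the number of cores. The integrand and the sample points below are toy assumptions standing in for the real wavefront calculation.

```python
from multiprocessing import Pool

import numpy as np
from scipy import integrate

def wavefront_element(r):
    """Evaluate one element's complex integral by integrating the real and
    imaginary parts separately (toy integrand for illustration)."""
    real_part, _ = integrate.quad(lambda t: np.cos(r * np.sin(t)), 0.0, np.pi)
    imag_part, _ = integrate.quad(lambda t: np.sin(r * np.sin(t)), 0.0, np.pi)
    return complex(real_part, imag_part)

if __name__ == "__main__":
    sample_points = np.linspace(0.0, 10.0, 1000)  # stand-in for unique elements
    with Pool(processes=4) as pool:               # one worker per available core
        values = pool.map(wavefront_element, sample_points)
```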

The second option was to use Cython, which compiles Python code and allows the use of C libraries. Simply changing the sine and cosine functions in the integrand from their NumPy versions to C functions gave the same 4x speed gain as multiprocessing. Fully changing all high-level functions to their C equivalents and making full use of Cython’s compiling abilities reduced the computation of the above matrix from its previous 380 seconds to just 12 seconds.
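
As a rough illustration of the Cython change (not the actual Lotus code), the hypothetical integrand below calls sin and cos from the C math library instead of NumPy and declares its arguments with C types, so the compiled code avoids Python-level overhead on every evaluation.

```cython
# integrand.pyx -- a hypothetical Cython integrand, built with cythonize.
from libc.math cimport sin, cos

def integrand_real(double theta, double r):
    """Real part of a toy wavefront integrand, using C math functions
    instead of their slower NumPy equivalents."""
    return cos(r * sin(theta))

def integrand_imag(double theta, double r):
    """Imaginary part of the same toy integrand."""
    return sin(r * sin(theta))
```

Because compiled Cython functions are still ordinary Python callables, scipy.integrate.quad can call them directly, and further gains come from compiling the surrounding loops as well.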

By leveraging Cython, the overall computation of the PSF now takes between 11 and 18 minutes. This makes the process of reconstructing 3D images from the raw images taken with Lotus much more efficient!