Making a Scanning Hyperspectral Microscope
Instead of photographing three color channels like Red, Green, and Blue, hyperspectral imaging systems capture dozens or hundreds of color channels for every single pixel in an image. This is useful because lots of things have imperceptible differences in the way they interact with light, and those differences tell us important things about their composition: how plants change color when they are healthy or sick, how dirt and rocks change color depending on their mineral content, or how bacteria change color as they grow. There's a pretty large scientific field (spectroscopy) that measures how light and matter interact - it's one of the most important ways we make sense of the world, with uses pretty much everywhere. Anywhere we look at things to understand them, spectroscopy and hyperspectral imaging have applications.
How to make a hyperspectral microscope
There are a number of different ways to build hyperspectral imaging systems. Perhaps the simplest is to use an actual spectrometer (a spectrum meter) and just move it around to measure each pixel of our image - this is called "point scanning".
Different methods of capturing hyperspectral images
To make a point scanning system we need just three parts - a spectrometer to measure the light spectrum at a point, a motion control system to move it around, and some software to coordinate the two.
Motion Control
The only reason we need to use motors instead of doing this by hand is so that we can precisely control the position of our spectrometer to create the pixel grid of our image. To build our motion system, we just need two motors - one for our x axis and one for our y axis. Many microscopes already have positioning stages where rotating knobs can change the x and y position of the microscope slide. That’s great! This means we can easily turn the rotation of the motors into linear motion.
I hopped into Fusion 360 and created a simple motor mount which will allow some cheap stepper motors to control the X and Y position by turning the knobs of my microscope:
These motors are coupled using GT2 timing belts and some 3D printed gears that press-fit onto the microscope XY knobs
The actual CAD model
Some technical notes about motor selection: 28BYJ-48 stepper motors are about as cheap as they come at roughly $1 apiece, but they're not too bad for this sort of application. While they have fairly low torque and only 32 steps per revolution, they also have an integrated 64:1 gearbox, meaning one rotation of the output shaft actually takes 2048 software-defined steps. This can be multiplied further with 2x micro-stepping (which I didn't do here), and in this design there's an additional 3:1 mechanical reduction in the belt system. All of this sits on top of the gearing inside the microscope stage itself (I have no idea what that ratio is), and the end result is a fairly accurate (if slow) system.
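Here's a quick back-of-the-envelope sketch of that step math (the 64:1 ratio is the nominal spec, and the stage's internal gearing is left out since I don't know it):

```python
# Effective resolution of the stage drive, ignoring the microscope's own gearing.
MOTOR_STEPS = 32     # full steps per motor revolution (28BYJ-48)
GEARBOX_RATIO = 64   # integrated gear reduction (nominally 64:1)
BELT_RATIO = 3       # GT2 belt reduction from motor pulley to knob gear

steps_per_output_rev = MOTOR_STEPS * GEARBOX_RATIO        # 2048 steps
steps_per_knob_rev = steps_per_output_rev * BELT_RATIO    # 6144 steps

print(f"{steps_per_output_rev} steps per motor output-shaft revolution")
print(f"{steps_per_knob_rev} steps per full turn of the microscope knob")
```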
If I were to redesign this, the only thing I would add would be 14-bit magnetic encoders to help counteract backlash and the variance in step size.
With these motors in place, we need some way to control them. I used an Arduino Uno that I had lying around, and flashed it with some initial motor control code using AccelStepper, a standard stepper motor control library. Our electronics now look something like this:
Our motor control schematic
Here’s what the electronics actually look like
I also added a little joystick so that I could manually jog the microscope stage around
Spectrometer
For this build I modified this affordable little garden spectrometer (~$60). I was pleasantly surprised by this piece - it has a spectral resolution of under a nanometer! I redesigned the case to mount onto one of the optical ports of my microscope, and other than that it's operating just the way it came.
I recommend this spectrometer - it’s a labor of love from an independent researcher who makes all the parts himself
I modified the case to more easily mount on the optical port of my microscope (and to be modular for future projects)
Here’s the CAD cross-section of my modified case
This spectrometer uses a modified version of the Theremino spectrometer software - I would eventually like to transition to Grillbaer's open-source Spectracle software, which looks cleaner and easier to interface with.
Software Development and Integration
We need a bridge between the Arduino motion control system and the spectrometer - for this we can use a normal desktop PC, which will also handle the heavy lifting associated with data processing.
Now our electronics look something like this:
The rest of the project is entirely software development. The Arduino will need software that communicates with the PC over a serial connection (the USB cable). The PC will run software that synchronizes the motor movement with the spectrometer data collection.
After we have the data, we will need a data processing pipeline to turn our thousands of spectrometer readings into a hyperspectral image. After that we will want to visualize that hyperspectral image somehow.
That means our software architecture looks something like this, with the Arduino code on the left in green and the PC code on the right in blue:
It took about a week to get it all working! The serial communication in particular was difficult, as we need to ensure the motors are always in sync with the spectrometer data collection. I looked around, built off of the SerialTransfer library by PowerBroker2, and ended up with a fairly custom serial communication protocol.
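The real protocol is built on SerialTransfer, but as a rough sketch of the move-then-measure handshake, here's what a minimal PC-side loop could look like with plain pyserial. The port name, command format, step increment, and the commented-out helpers are all placeholders, not the actual implementation:

```python
import time
import serial  # pyserial

ser = serial.Serial("/dev/ttyACM0", 115200, timeout=5)  # placeholder port
time.sleep(2)  # let the Arduino reset after the port opens

def move_to(x_steps: int, y_steps: int) -> None:
    """Send a move command and block until the motion controller acknowledges it."""
    ser.write(f"MOVE {x_steps} {y_steps}\n".encode())  # hypothetical command format
    ack = ser.readline().decode().strip()
    if ack != "DONE":
        raise RuntimeError(f"unexpected reply from motion controller: {ack!r}")

GRID = 32               # 32x32 scan
STEPS_PER_PIXEL = 192   # placeholder step increment between pixels

for row in range(GRID):
    for col in range(GRID):
        move_to(col * STEPS_PER_PIXEL, row * STEPS_PER_PIXEL)
        # read_spectrum() stands in for whatever triggers and collects a
        # spectrometer reading; save_pixel() stands in for writing it to disk
        # save_pixel(row, col, read_spectrum())
```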
There are also a number of motor control nuances (like backlash compensation, overheat protection, etc.) which complicated the software system. At one point I left the room and came back to a melted motor mount and a destroyed stepper that had been continuously energized. After that I built in some additional overheat protection.
Here’s how the scan looks when it’s running:
After our scan we have thousands of text files, each representing a “pixel” in our hyperspectral image.
These files are actually tab-separated - one column for wavelength (nm), one column for intensity. In our data processing we combine all of these to create our final dataset. Plotted as a scatter plot, it looks something like this:
Our “datacube”
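For reference, stacking those per-pixel files into a cube is straightforward with NumPy. This is just a sketch - the `scan_output` directory and the `row_col.txt` naming are assumptions, not the actual file layout:

```python
import numpy as np
from pathlib import Path

SCAN_DIR = Path("scan_output")   # placeholder directory of per-pixel readings
GRID = 32                        # 32x32 scan

# Use one file to establish the shared wavelength axis
wavelengths = np.loadtxt(SCAN_DIR / "0_0.txt", delimiter="\t")[:, 0]

# cube[row, col, band] = intensity at that pixel and wavelength
cube = np.zeros((GRID, GRID, len(wavelengths)))
for row in range(GRID):
    for col in range(GRID):
        data = np.loadtxt(SCAN_DIR / f"{row}_{col}.txt", delimiter="\t")
        cube[row, col, :] = data[:, 1]   # second column is intensity
```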
A little hard to interpret! If we filter to just the brightest 80% of points, we can start to see some shapes emerge:
Some dark spots that seem to be in a shape - a good sign!
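That filtering amounts to a simple intensity threshold on the cube - a sketch, reusing the `cube` array from above:

```python
import numpy as np

# Keep only the brightest 80% of readings; everything below the
# 20th percentile is masked out of the plot
threshold = np.quantile(cube, 0.20)
bright = np.where(cube >= threshold, cube, np.nan)
```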
This also highlights an additional nuance:
Big gaps across the spectrum where all the points are darker
The reason for this is that, right now, the brightest points in our data tell us more about the light source we're using than they do about the sample we're measuring.
This is the spectrum of our light source, a CREE CXA3050 LED, as measured by our spectrometer
To correct for this, we can look at the spectrum of our light source, average some readings of this spectrum, and then normalize our scan relative to the background light source.
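In code, that correction could look something like this, reusing the `cube` array from above. The `lamp_*.txt` file names are placeholders for a few readings of the bare LED, and the small epsilon just avoids dividing by zero in dark bands:

```python
import numpy as np

# Average a few readings of the bare light source into one reference spectrum
lamp_files = ["lamp_0.txt", "lamp_1.txt", "lamp_2.txt"]   # placeholder file names
lamp = np.mean([np.loadtxt(f, delimiter="\t")[:, 1] for f in lamp_files], axis=0)

# Dividing each pixel's spectrum by the lamp spectrum makes deviations from the
# light source stand out, instead of the lamp's own shape dominating everything
normalized = cube / (lamp + 1e-6)   # broadcasts over the last (wavelength) axis
```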
When we do this, we can start to highlight the parts of our scan whose spectra are noticeably different from the background light source.
This 3D view is fine for data processing, but it's a little confusing if we actually want to know what we're looking at - we can also view a 2D cross-section and scroll through the different wavelengths all the way from ultraviolet to infrared. This is a little more familiar, more like a picture - it looks like we're seeing the edge of something:
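Pulling out one of those cross-sections is just indexing the cube at the band nearest a chosen wavelength - a sketch, assuming the `normalized` cube and `wavelengths` array from earlier:

```python
import numpy as np
import matplotlib.pyplot as plt

target_nm = 550                                           # wavelength to view
band = int(np.argmin(np.abs(wavelengths - target_nm)))    # index of nearest band

plt.imshow(normalized[:, :, band], cmap="gray")
plt.title(f"Slice at {wavelengths[band]:.1f} nm")
plt.show()
```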
The resolution of this scan isn’t incredible at 32x32 points, but we can actually dynamically control the scan resolution by adjusting our scan path to measure however many points we need. Here’s a higher resolution scan:
Increasing resolution comes with a tradeoff, however - it takes about a second to measure each point. That means capturing a 128x128 picture requires measuring over 16,000 points, which takes over four hours. We've encountered one of the major limitations of point-scan systems - taking a single 1920x1080 image (over 2 million points) would take this platform over 20 days(!). So far the highest resolution images I have created are 180x180, which run overnight. There are a few paths to improve this, but to dramatically improve the capture speed we would want to build around a different capture method.
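For reference, the back-of-the-envelope timing math, assuming roughly one second per point:

```python
SECONDS_PER_POINT = 1   # rough dwell + move time per pixel

for w, h in [(32, 32), (128, 128), (180, 180), (1920, 1080)]:
    total_seconds = w * h * SECONDS_PER_POINT
    print(f"{w}x{h}: {w * h:,} points, ~{total_seconds / 3600:.1f} hours")
```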
At this point you may be curious what we’ve actually been scanning - we’ve been looking at this cotton stem cross-section:
Note that this has also been stained with a dye to enhance contrast - this is pretty common with prepared microscope slides
Conclusion
The system we have now is pretty flexible and, remarkably, cost less than $100! When developing this I was surprised that the data processing was never the hard part - the hard part was consistently the integration and timing of the control systems. There's still room for improvement in motor coordination, speed, and precision; we can see some defects like missing pixels and motor positioning artifacts. I'll continue to take more scans and update as things improve.
If you're wondering what's next for this project, there are a few things on the horizon. I'll keep tuning and scanning things, but I'll also look into modifying this affordable spectrometer to do line scanning, which would allow a hyperspectral capture to happen in seconds rather than hours or days. The other major next step is to build out the rest of the pipeline for what you can do once you have hyperspectral data. By looking at the spectrum at each point, we can try to identify specific molecules, proteins, and pigments based on their unique spectral signatures. Ideally we'd be able to turn our scans into heatmaps showing where concentrations of different molecules are found!
I’ll probably open-source this project soon - mostly I want to clean it up a little and gauge feedback before a full release. If you found this interesting, I’d love to hear about it!
If you like interesting microscopy projects, you should check out my other project which recycles old camera lenses to make projection microscopes :-)