Pycromanager Tutorial
Henry Pinkard came up with this wonderful way to simultaneously access Micromanager's user-friendly GUI and deliver automated commands via easy-to-use Python code rather than Micromanager's somewhat difficult-to-use internal scripting language, Beanshell.
I've used the base-level wrapping parts of Pycromanager to control my live-cell experiments during my time in the Weiner lab. This tool makes it possible to track our rapidly moving cells and automatically and dynamically extract image-based features to control our optogenetic experiments.
As a simple example, here is some code that prints the current stage position, moves the stage 100 units in the x direction, and prints the new location. Note that Micromanager needs to be open first for the Bridge object to be able to connect to it.
Always use extreme caution when moving the stage of the microscope. The code will try to do what you tell it to do, even if it is not a good idea.
from pycromanager import Bridge

bridge = Bridge()
core = bridge.get_core()

x = core.get_x_position()
y = core.get_y_position()
print(x, y)

core.set_xy_position(x + 100, y)
# re-query the stage so we print the updated position, not the old values
print(core.get_x_position(), core.get_y_position())
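Before moving on, here is a hedged sketch of one of those extensions: generating a serpentine grid of stage positions for scanning a large field of view. The field-of-view size and overlap fraction are assumptions you would measure for your own system, and `tile_positions` is a hypothetical helper, not part of Pycromanager.

```python
def tile_positions(x0, y0, n_cols, n_rows, fov_x, fov_y, overlap=0.1):
    """Return a serpentine list of (x, y) stage positions covering a grid.

    fov_x / fov_y are the width and height of one camera field in stage
    units; overlap is the fraction of each field shared with its neighbor.
    """
    step_x = fov_x * (1 - overlap)
    step_y = fov_y * (1 - overlap)
    positions = []
    for row in range(n_rows):
        # snake back and forth to minimize stage travel between tiles
        cols = range(n_cols) if row % 2 == 0 else reversed(range(n_cols))
        for col in cols:
            positions.append((x0 + col * step_x, y0 + row * step_y))
    return positions

# each position could then be visited with core.set_xy_position(x, y)
```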
This code could be extended to create customizable positions for imaging or for scanning large fields of view. For my work, we've found it even more useful to figure out the relationship between the coordinate systems of the image and the stage. This allows you to center objects of interest reproducibly or track rapidly moving objects (like white blood cells) over time. To do this, we use linear affine transformations. These can account for differences in position, scale, rotation, and shear. This transformation can be represented by a 3x3 matrix.
import numpy as np
from skimage import transform

# collect the first 6 values of the affine transformation matrix;
# if this produces an error, run the objective calibration in Micromanager
affine_values = [float(i) for i in core.get_pixel_size_affine_as_string().split(';')]

# pad the 2x3 matrix out to a full 3x3 homogeneous transform
affine_matrix = np.vstack([np.array(affine_values).reshape(2, 3), [0, 0, 1]])
aft_img_to_stage = transform.AffineTransform(matrix=affine_matrix)
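If it helps to see what that transform is doing without skimage in the way, here is a numpy-only sketch of the same idea: apply a 3x3 homogeneous affine to map image pixels to stage coordinates, and invert it to go back. The matrix values below are made up for illustration, not from a real calibration.

```python
import numpy as np

# illustrative (made-up) pixel-size affine: scale, flip, and offset
affine_matrix = np.array([[0.65,  0.0,  10.0],
                          [0.0,  -0.65, 25.0],
                          [0.0,   0.0,   1.0]])

def img_to_stage(xy, matrix):
    """Map an (x, y) image coordinate to stage coordinates."""
    x, y = xy
    hx, hy, _ = matrix @ np.array([x, y, 1.0])
    return np.array([hx, hy])

def stage_to_img(xy, matrix):
    """Map an (x, y) stage coordinate back into image coordinates."""
    return img_to_stage(xy, np.linalg.inv(matrix))

pt_stage = img_to_stage((512, 512), affine_matrix)
round_trip = stage_to_img(pt_stage, affine_matrix)  # recovers (512, 512)
```

The inverse direction is handy when you know where the stage is and want to predict where something should appear in the image.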
If, for example, you can find locations of interest in your images as they come off the microscope (which, by the way, you can snap with Pycromanager and record as Numpy arrays), you can easily move something from one location in the image coordinate system to another location in that same coordinate system. The code below moves whatever is at (x_in_img, y_in_img) to (512, 512), the center of our image field on the Weiner Lab microscope.
target_location_x, target_location_y = 512, 512
move_from = aft_img_to_stage([x_in_img, y_in_img])[0]
move_to = aft_img_to_stage([target_location_x, target_location_y])[0]
# stage-space displacement that brings the object to the target pixel
dx, dy = move_from - move_to
new_x = core.get_x_position() + dx
new_y = core.get_y_position() + dy
core.set_xy_position(new_x, new_y)
Of course, in order to find locations of interest, you'll want to be able to snap images and have them locally available in your Python code for further analysis. To do that:
import os
import time

import numpy as np
import tifffile

# the config group ('-TIRF') and preset (channel) names are specific to our setup;
# channel, t, folder_path, etc. come from your own experiment loop
core.set_config('-TIRF', channel)
core.set_exposure(200)
core.snap_image()
tagged_image = core.get_tagged_image()
metadata_dict = tagged_image.tags

# we can add extra experiment-related metadata if we want as well
metadata_dict['elapsed_time_s'] = str(time.time() - experiment_start_time)
metadata_dict['stage_x_pos'] = str(core.get_x_position())
metadata_dict['stage_y_pos'] = str(core.get_y_position())
metadata_dict['dmd_affine_transform'] = dmd_affine_as_string

# the pixel values come back as a flat array, so we reshape them into an image
h, w = metadata_dict['Height'], metadata_dict['Width']
pixels = np.reshape(tagged_image.pix, (h, w))

# and we can save the image and metadata with tifffile
image_name = f"img_channel{str(channel).zfill(3)}_time{str(t).zfill(9)}.tif"
image_dest = os.path.join(folder_path, image_name)
tifffile.imwrite(image_dest, pixels, metadata=metadata_dict)
# we can now do some processing on our numpy array-formatted image (pixels)
# you could use traditional image analysis approaches (like skimage)
# or newer machine-learning-based models to look for features of interest
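As a minimal sketch of that last step, here is a numpy-only way to find the centroid of the brightest blob in an image; in practice you might reach for skimage.measure.label and regionprops, or a trained model, instead. The synthetic image and quantile threshold here are illustrative assumptions.

```python
import numpy as np

def bright_spot_centroid(img, quantile=0.99):
    """Centroid (x, y) of pixels above an intensity quantile threshold."""
    mask = img > np.quantile(img, quantile)
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()

# synthetic test image: a bright 10x10 square on a dark background
img = np.zeros((1024, 1024))
img[100:110, 200:210] = 1.0

x_in_img, y_in_img = bright_spot_centroid(img)
# (x_in_img, y_in_img) could then feed straight into the re-centering
# code shown earlier to move the object to the middle of the field
```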