Use Python to collect and save data from microcontrollers to your PC

I’m at PyCon 2018 in Cleveland this upcoming week, so I thought it would be a great time to post a quick Python writeup on how you can collect data from microcontrollers like Arduino and then save the data to your PC to process however you wish! PySerial makes this remarkably easy. This is a great project for science teachers, as it shows how you can use simple, ~$10 microcontrollers to create excellent data collection systems by connecting a few sensors and modifying my simple software. If you want to skip the tutorial, you can download my software for Windows or Mac at the bottom of this page or use the link there to clone my repository on GitHub.

As usual, let’s start with the question:


How can I collect a bunch of sensor data and then save it to my computer so that I can analyze it in a Jupyter notebook or in Excel?


Over serial, and then use a simple Python program to save the data into a format like CSV that can be easily read by any analysis program!

For this writeup, I was specifically trying to collect data from an analog temperature sensor (TMP36) and then plot that temperature over time. I used a Teensy LC- a $12 Arduino-compatible microcontroller board that is so cheap and easy to use that I throw them at any of my rapid-prototyping problems. It features an ARM Cortex-M0+ processor, lots of analog and digital pins (including capacitive touch!) and a convenient micro-USB port for setting up an easy USB serial connection.

Here’s how I wired up the sensor:

This probably would not pass UL certification.

Since all of my projects must include 3D printing, I also 3D printed a great case off of Thingiverse by a user named Kasm that nicely contains the project and makes sure I don’t accidentally short any of the pins to anything:

Slightly safer and more professional looking. The temperature sensor is down on the bottom left.

For those curious how this works, the temperature sensor is an analog component that, when given 3-5V, returns a voltage that is proportional to the temperature around it. We are feeding the voltage into an analog to digital converter (ADC) so that our digital microcontroller can make sense of the reading. This means that in our code, we need to convert a number from 0-1023 back into a voltage so that we can use the formula from the data sheet to convert to a temperature. Fortunately, this is a simple conversion:
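The conversion can be sketched as follows (shown here in Python for clarity; on the Teensy it is a single line of Arduino code):

```python
def adc_to_millivolts(reading, vref_mv=3300, levels=1024):
    """Convert a raw 10-bit ADC reading (0-1023) into millivolts.

    3300 is the Teensy's 3.3 V reference expressed in millivolts, and
    1024 is the number of discrete levels of its 10-bit ADC.
    """
    return reading * vref_mv / levels
```

For example, a mid-scale reading of 512 corresponds to 512 * 3300 / 1024 = 1650 mV.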


This will return a voltage in millivolts. The 3300 comes from the 3.3V reference voltage of the Teensy, and the 1024 comes from the Teensy having a 10-bit ADC, meaning it has 2^10 = 1024 discrete analog levels.

We can then convert to a temperature using the formula from the device data sheet:
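In Python, the datasheet formula looks like this (the TMP36 outputs 500 mV at 0 degrees Celsius and scales at 10 mV per degree):

```python
def millivolts_to_celsius(mv):
    """TMP36 transfer function: 500 mV offset at 0 C, 10 mV per degree C."""
    return (mv - 500.0) / 10.0
```

For example, a reading of 750 mV corresponds to 25 degrees Celsius.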


This will return a temperature in Celsius. I have adapted the Arduino sketch from this excellent Adafruit tutorial so that it will work with the Teensy:

Note what we are doing here:

  1. We are opening a serial port at 9600 baud (bits per second)
  2. We are reading from analog pin 9 on the Teensy, where the voltage out pin is connected
  3. We are converting the reading there to a voltage.
  4. We are converting that voltage to a temperature
  5. We are printing that line to serial so that it is transmitted
  6. We convert again from Celsius to Fahrenheit
  7. We print this line to serial as well
  8. We print a new line character so each row only has the one pair of readings
  9. We wait one minute to take another reading
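The loop above can be sketched in Python, which is also handy later for testing the logger without hardware attached. The comma separator and the two-decimal formatting here are assumptions; match them to whatever your sketch actually prints:

```python
def make_reading_line(adc_reading):
    """Mimic one pass of the Teensy loop: raw ADC reading -> serial line."""
    mv = adc_reading * 3300.0 / 1024.0      # steps 2-3: reading to millivolts
    temp_c = (mv - 500.0) / 10.0            # step 4: TMP36 datasheet formula
    temp_f = temp_c * 9.0 / 5.0 + 32.0      # step 6: Celsius to Fahrenheit
    # steps 5, 7, 8: print both values, then a newline, one pair per row
    return f"{temp_c:.2f},{temp_f:.2f}\n"
```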

Pretty simple! You can change the delay to any interval you wish to collect more or less data- the units are milliseconds. I wanted to only collect data once per minute, so I set it to 60000 milliseconds. Go ahead and flash this to your Teensy or Arduino using the Arduino IDE, making sure your device type is set to Serial. This is important, as without doing it you will not be able to read the data being sent over serial USB.

All software must have a nice icon.

Now to build the Python program. If you would rather simply download the binaries or clone the repository on GitHub for the Python client, the links are available at the bottom of this page. We are going to use PySerial. We are also going to use Python 3, so make sure you have it installed if you are still using Python 2. Install PySerial by typing pip install pyserial. The logger is going to be wrapped in a simple Tkinter GUI. We want to accomplish the following:

  • Read the temperature data being printed to serial by the Teensy
  • Save it to a CSV file
  • Let the user select an interval to read from serial
  • Let the user select where they want the CSV file saved
  • Let the user select what serial port they want to read off of.

The end result will look like this:

Who says Tkinter has to look ugly?

Let’s take a look at the code and then break it down:

What’s happening here is actually simpler than it seems! Let’s go through how it works, step by step, starting in Main():

  1. We create the root window where all of our widgets are stored (buttons, fields, etc.)
  2. We give it a name
  3. We grid a label
  4. We grid a dropdown menu
  5. We use PySerial to look at all serial ports open on the system and look for one called ‘USB Serial’. By default when you flash the Arduino or Teensy with the sketch shown above it will appear as ‘USB Serial’ on either Windows or Mac computers. We want to make sure the user selects the right serial port, so we filter the choices to these. More than likely there will be only one- the Teensy or Arduino connected to the machine.
  6. We populate the dropdown menu with these serial ports.
  7. We add a button and a counter. The default value of this counter is 60. This is letting the user know that they don’t have to sample every 60 seconds, and they can increment this value if they wish. You can change this value to whatever sampling rate you have set on your Teensy or Arduino in the sketch above.
  8. We add a button that lets the user start the data collection. This triggers a function where we ask the user where they want to save their output file in a save dialog, then open a serial port we called sp using PySerial. We must make sure we open it at 9600 baud, just like the serial port we opened on the Teensy, so that the two ends can talk to each other. We then create a new thread to run our code in- if we did not, the program would freeze, and you would not be able to stop it because it would constantly be running the timer() function, collecting data, and never returning to the thread running the GUI! We set up this thread to run the timer() function as long as the global variable running is set to true. This gives us a stop condition to terminate the thread when we are finished. In the timer() function we use the filename the user has given us and, at each sampling interval, save a timestamp in the first column, the temperature in Celsius in the second column, and the temperature in Fahrenheit in the third column. We read a line off of serial using the PySerial serial port’s readline() function and then decode whatever it gets as UTF-8 to make sure that what we are saving are readable Unicode characters. The output file is updated with each reading in a new row! We open the CSV file in append mode (mode ‘a’) so that we do not overwrite any existing data.
  9. We add a button to stop data collection. This sets the global variable running to false, causing the thread where we are saving the temperature data to stop. This halts all data collection.
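The heart of the timer() logic can be sketched like this. The function below is a simplified, hypothetical version of what the program does: it accepts anything with a readline() method that returns bytes (a pyserial serial.Serial object opened at 9600 baud qualifies), assumes comma-separated values on each line, and stops when the running callback returns False or the port stops producing data:

```python
import csv
import time

def log_readings(port, csv_path, interval=60, running=lambda: True):
    """Append one timestamped row per serial line: time, temp C, temp F."""
    # Mode 'a' appends, so existing data in the CSV is never overwritten.
    with open(csv_path, "a", newline="") as f:
        writer = csv.writer(f)
        while running():
            # Decode to UTF-8 so we store readable Unicode characters.
            line = port.readline().decode("utf-8").strip()
            if not line:
                break  # no more data (or a read timeout)
            writer.writerow([time.strftime("%Y-%m-%d %H:%M:%S")] + line.split(","))
            f.flush()  # keep the file current after every reading
            time.sleep(interval)
```

With real hardware you would pass something like serial.Serial("COM3", 9600, timeout=70) (the port name is machine-specific) and let the GUI thread flip the running flag to False to stop collection.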

And we are done! PySerial handles the heavy lifting and we have a nice, simple GUI to collect our data from. The CSV files can then be read into your favorite analysis program to create plots or fits:

A quick plot I created in Numbers, showing that over the short interval tested the temperature did not stray far from 23 degrees.


You can clone or fork the repository that contains all of these files here or you can download binaries to use right away on your system:

Clone the Repository

Download for macOS High Sierra or Windows 10

Stop the flu with this DIY Ultraviolet-C iPad Sanitizer and Charger


Our touchscreens are disgusting, and researchers are increasingly turning to UV-C light to disinfect them. UV-C is naturally blocked by the ozone layer, but it can be produced artificially, and it is known to be germicidal. How can we create a convenient way for people such as teachers to disinfect many screens at once?


Using hardware store materials (making them easy for anyone to procure) we can create an enclosed system that bathes screens in UV-C light and charges the devices at the same time! All we need is a box, a UV-C bulb, a power strip, and some elbow grease. Watch the video above for all the details, and click these links to buy the materials I used to create my sanitizer/charger:

Beat the flu with this easy and inexpensive charger and sanitizer!


I also designed this 3D printable iPad stand to sit inside the system. As always, you can download it on Thingiverse or edit it for yourself on OnShape!


Exploring 3D Printing with First Graders and Tinkercad

This month I have had the privilege to work with a class of talented first graders in order to teach them about computer aided design, or CAD. This is an essential tool in any engineer’s toolbox, but using it can get very complicated very quickly. Frequent readers of this blog likely know my fondness for OnShape and its spiritual predecessor and industry titan, SolidWorks. These programs are far too complicated for a quick introduction to CAD, and the number of features can be overwhelming. While OnShape does work on iPads, which these children are fortunate to have in their classrooms, there is a much friendlier way to introduce 3D design to children- Tinkercad.

The Tinkercad interface and a quick elephant I drew using the basic primitives it offers.

Users create their models out of simple 3D shapes like boxes, cylinders, and pyramids, just as if they were building with wooden blocks. The big difference is that these blocks can be morphed and stretched like clay, letting users make pretty much whatever they want. They can set how big the shapes are in millimeters and give them a precise orientation in degrees.

Tinkercad is maintained by industry heavyweight Autodesk, which is famous for its CAD packages like AutoCAD and 3D art software like Maya and 3DS Max. It gives a nice set of features for beginners to computer aided design- lots of primitives and shape generators to choose from, rulers and workplanes to reference your work to and measure distances, the ability to dimension shapes, and a navigation system straight out of the best parametric modelers are all present here. This is great because it means the skills you develop learning Tinkercad transfer into using other popular packages, like OnShape or Inventor, likely the most similar Autodesk offering. As an engineer, this is the biggest selling point for teaching children with this package- I know that whatever they get out of this activity, they can transfer forward to bigger and better things. This cannot be said for other programs I tried- Autodesk has another offering called 123D Sculpt that simulates making something out of clay, which I found too limiting and too application-specific. Microsoft Paint 3D is an impressive upgrade of the drawing program I remember, but it is more oriented towards 3D art than serious 3D part design.

That said, the activity we tackled was not one of designing parts for a rocket or model car but instead making models of the animals they were studying in their science unit. On the surface this may seem like a poor fit for modeling programs like Tinkercad if the goal is to teach students how to make useful parts on 3D printers, but in my opinion it is perfect. A big part of imagining how a part should look or be designed is to imagine how it can be made out of simple shapes and figures. When working with a parametric modeler you often observe a designer create the basic shape out of circles, boxes, arcs, and lines and then begin to add the finer details. Working with something like an animal the children are familiar with is perfect for teaching this higher-level skill. It is one thing to teach them how to draw boxes and spheres and dimension them, but teaching them how to think about what they are designing and introducing them to the creative process is much more valuable than the technical skill of using the program itself. Take this elephant the children drew, for example:

An impressive elephant the first graders drew. This was the first model they created.

I had shown them my demo elephant, and they used it as a springboard to make their own. They identified the shapes I had used to make mine and impressed me with how they understood the creative process I had gone through to create those shapes. They then extended it by thinking of what shape they wanted to use for standing legs (I modeled mine laying down to make things easier), and what shape they wanted to use for eyes, both of which I did not have in my model! They also creatively added a baseplate for the elephant to be printed on with a label, taking advantage of the Tinkercad text tool. They were already thinking in terms of what shapes they had available to them and how they could be stretched and morphed into the figures they wanted. They were also thinking in terms of what could actually be 3D printed. These students had the privilege of having a 3D printer in the classroom and with a little coaching on my part realized why you cannot have big pieces hanging in thin air if you want a successful print, and identified that parts have to be stacked one on top of the other, not hanging in space. I taught them the ‘sandcastle principle’- imagine that you are making this model in sand. You cannot have sand floating in midair, and it won’t stay up without support. They seemed to quickly grasp the idea and they ran with it.

Here are a few of their amazing creations:

If you would like to view the presentation I gave to the students that explains some easy dos and don’ts about 3D printing and design, I’ve included it here in Google Slides format:

This was an awesome experience, and I know the students enjoyed it too. They are now obsessed with Tinkercad and 3D printing, and I have been amazed by what they have independently been creating. This is truly an awesome tool for introducing 3D art, design, and 3D printing to primary and intermediate students, as there is a lot of flexibility here for making very complicated things. I have even found that for very quick designs Tinkercad is more than adequate and it is fast. I am excited to return and work with a new set of students on another set of animals. I am sure I will be just as surprised at their creativity and inventiveness.


Making a Star Wars Poster Lithophane Lamp

Star Wars has some of the most iconic artwork of any movie franchise in history. Nowhere is this more apparent than in its theatrical posters. This lithophane lamp captures all nine main storyline Star Wars posters (with a blank standing in for Episode 9 until the official poster is released!) in a lovely desktop lamp that celebrates your favorite movies. Simply place an approximately 2.5 inch diameter LED puck light (like you would place under a cabinet) inside the indentation in the base and run the cord out the slot!

As always, you can edit this project on OnShape!

Also as usual you can download and print these files right now from my Thingiverse account.

Drawing johngineer’s Oscilloscope Christmas Tree using a Particle Photon


One of my favorite blogs is johngineer, which features projects by the very talented J. M. De Cristofaro. Basically, his blog is what my blog wants to be when it grows up. Anyway, I was in a festive mood and wanted to revisit one of my favorite articles of his, in which he draws a Christmas tree on an oscilloscope using an Arduino and a pair of RC filters. He does a great job of going into the details of how exactly this works, but the gist is that the RC filters on the end of the PWM output pins on your Arduino act as a very simple digital to analog converter, letting you draw arbitrary shapes on an oscilloscope in X-Y mode! For those who are unaware, an oscilloscope is a measurement tool that lets you visualize electronic signals by plotting the signal versus time. Older ones, like mine, use cathode ray tubes, so you can draw with an electron beam! I made a small modification to his setup- my Arduinos were otherwise indisposed, so I used my favorite dev board, the Particle Photon. I also used different values for my RC filters- 18 kilohms for the resistors and 0.47 microfarads for the capacitors. These were chosen purely because they gave me the best picture on my scope- you may need to play with your part values as well, as your goal is to smooth the square wave that will be coming out of your digital outputs. A diagram of the quick circuit I built is shown below:
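As a sanity check on those part values, the corner frequency of a first-order RC low-pass filter is f = 1/(2*pi*R*C):

```python
import math

R = 18e3      # 18 kilohm resistor
C = 0.47e-6   # 0.47 microfarad capacitor

# Corner (cutoff) frequency of a first-order RC low-pass filter
f_c = 1 / (2 * math.pi * R * C)
print(f"{f_c:.1f} Hz")  # about 18.8 Hz
```

That corner sits far below the PWM carrier frequency (hundreds of hertz or more on these boards), which is why the filter smooths the square wave into a usable analog level. Your ideal values depend on your board’s PWM frequency, which is why some experimentation helps.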


I also had to modify the original code a bit to get it to work on the Photon. Mainly you will want to set the X and Y variables to whatever digital pins you want to use, and you will need to modify the trace delay. Using the original value is too fast with the Photon, and I found the value below produces the tracing effect you see in my above .GIF. You can adjust it to taste to get a more constant image or to make the hypnotic drawing process slower. My modifications are shown here:

Simply paste this into the Particle IDE and flash it to your board- this is a quick and geeky holiday season project that is sure to get some curious looks in the lab. What better way to start the holiday season than bending a beam of electrons produced by an expensive and invaluable piece of equipment in order to draw yourself a little green tree?

3D Printing Dinosaur Skulls after the Thornton Triceratops Discovery

Recently a triceratops fossil was found at a construction site in my hometown of Thornton, Colorado. Inspired by some incredible 3D printing work being done with 3D scans of dinosaur fossils, I decided to make my own triceratops skull and later made a t-rex skull after the discovery of a tyrannosaur tooth at the site days later. These were done on my Robo 3D R1 Plus printer with special marbled PLA that gives the models a nice stone look. The time lapses were done with an iPhone 7 and then sped up considerably in editing!

Assembled Skeeter unit

Skeeter- an Open Source Internet-of-Things Pranking Device


I want to be able to harmlessly prank friends, pets, and neighbors from my smartphone (and a safe distance away).


Skeeter is an open-source internet-of-things hardware project I have developed that will harmlessly prank your friends, pets, neighbors, etc. by emitting a 17.4kHz tone (just within the hearing of young people) or an 8kHz tone (just within the hearing range of older people) with the press of a button on a smartphone app I have developed. By emitting tones just within the hearing of your victims, they will become annoyed and agitated just as if there were a mosquito or fly buzzing in their ear, hence the name Skeeter! Use it as a dog whistle, a way to shut your loud neighbors up, or leave it hidden somewhere in the office so that you can trigger it at will.

How is this possible? Skeeter is driven by the Particle Photon WiFi IoT development board and runs on rechargeable batteries. A small amplifier is placed on the output of the Particle Photon to drive a speaker with an impedance of 4 ohms or higher- perfect for small speakers like the one I used, an 8 ohm, 3W speaker that produced a nice, clean output sound.

Skeeter Electronics Assembly
Assembled Skeeter circuit with all parts labeled.

Here is what you need:

  • 1 Particle Photon WiFi development board
  • 1 Particle Photon Power Shield (with rechargeable battery). This will let you charge the batteries over USB and the Photon slides right into it.
  • 1 Adafruit 2.5W Mono Audio Amplifier, or an appropriate circuit to drive your speaker as the Photon cannot do it alone.
  • 1 speaker (if using the Adafruit amplifier choose any speaker of 4-8 ohms impedance that can accept up to 2.5W of power.)

The Particle Photon is an exceptionally handy board to have around- you can write code for it that can be triggered via an HTTP request, meaning that any functions you write you can set up so that clicking a link on a website, in an app, or even querying Alexa will trigger a function to occur in your system. They are low-cost, tiny, and there are a number of peripherals you can buy. I bought the Power Shield which includes a charging circuit and a battery so that the system could be powered by rechargeable lithium ion batteries. I chose a small speaker I had handy (3W, 8 ohms) and wired it with a small amplifier circuit. I could have built my own, but the Adafruit 2.5W mono amplifier is perfect for this application and was actually cheaper than ordering the independent components that I wanted to use, not to mention it works out of the box without debugging. Thus, the whole thing fits together quite nicely and simply. The D0 pin is connected to the audio input of the amplifier (this is arbitrary of course- you can modify the code to use another pin if you would like) and the amplifier is connected to the power and ground pins of the Photon so that it can drive your speaker. The speaker itself is connected to the two output pins on the amplifier, as shown in the wiring diagram. It comes with handy screw terminals so that you can simply connect or swap speakers until you find one you like or that is loud enough. You can see exactly how to wire it up in the wiring diagram I have included, which excludes the batteries and the power shield. These simply plug into the Photon directly.

Skeeter wiring diagram, showing exactly how to wire your Skeeter on a breadboard or on a perforated board.

Skeeter, since it was built around the Particle Photon, is programmed using their online development environment. The details of how exactly to do this can be found here, however the gist is that you connect your Photon to your WiFi using the Particle app you can download on your smartphone. Then, from their online IDE you can simply directly paste the code you find in the Firmware section of my GitHub repository for Skeeter. The code itself is quite simple- it generates a square wave at either 17.4kHz or 8kHz depending on what the software receives in an HTTP POST request from your website or app.
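Triggering a cloud function on the Photon boils down to an authenticated HTTP POST against the Particle cloud. Here is a minimal Python sketch of building that request; the function name "tone" and the argument format are hypothetical placeholders, so match them to whatever your firmware actually registers:

```python
from urllib import parse, request

PARTICLE_API = "https://api.particle.io/v1/devices"

def build_particle_call(device_id, access_token, function, arg=""):
    """Build the POST request that invokes a Particle cloud function."""
    body = parse.urlencode({"access_token": access_token, "args": arg}).encode()
    url = f"{PARTICLE_API}/{device_id}/{function}"
    return request.Request(url, data=body, method="POST")

# Hypothetical usage -- the device ID, token, and function name are placeholders:
req = build_particle_call("YOUR_DEVICE_ID", "YOUR_ACCESS_TOKEN", "tone", "17400")
# request.urlopen(req) would actually fire the tone.
```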

WolframAlpha diagram of a 17.4kHz square wave.

This square wave goes directly to the audio input of the amplifier and then out to your speaker. An off function is also provided that sets the outputs on the audio pin, D0, low. A quick test shows that this works extremely well, giving a nice, well defined peak on a spectrum analyzer where one would expect the 17.4kHz tone to be. It is also satisfyingly loud! You can see if you can hear a 17.4kHz sound yourself by clicking here.

Output of a spectrum analyzer as the Skeeter plays a 17.4kHz tone.

You can follow this tutorial provided by Particle to build your own website or app to control your Skeeter or you can use the Ionic smartphone application I have provided in the GitHub repository under Ionic Application to drive your Skeeter from your smartphone!

Screenshot of the Ionic application I developed for Skeeter

Ionic, if you are not aware, is a framework for developing mobile apps which lets you build with your favorite web development languages. I put together a quick app that simply performs a different HTTP POST request each time one of the different buttons in the screenshot shown on the left is pressed. The Photon will then cause the Skeeter to play a 17.4kHz tone, an 8kHz tone, or turn off all tones. You can compile the app’s code found in the GitHub repository using Ionic for your smartphone’s OS and then load it. Alternatively, you can simply use the HTML from Skeeter.HTML in that repository to build a web app or website for your own use.

The app needs a quick modification before you compile and use it, however. It needs your Particle access token and your device ID, which you can paste into Skeeter.html in the spots I have outlined for you. These details can be found for your device and account by going to the Particle Photon IDE and looking in your settings. Skeeter.html should look like this:

Simply paste your token and your device ID in the form tag as indicated and you are good to go!

3D Printing the Case

Render of Skeeter’s 3D Printed Case

Skeeter’s case was designed in OnShape in a public document that you can edit yourself! I tried to make it as small as possible, with holes for the speaker and the charge cable. The small jog in the perimeter of the case allows for the USB port on the power shield to be accessed on the side, as shown in the above image of the inside of the Skeeter unit. This can be customized to your content! Simply download the files from the OnShape document or from your own copy of the case files and send them off to your printer!

What do I use this for again?

Hide it in your house to annoy your siblings or the family dog. Leave it on your porch and annoy the neighbors. Connect it to the school WiFi and annoy your classmates. Bring it to the office and probably get fired. The possibilities are endless! Produce an annoying buzzing sound that will drive everyone crazy with the press of a button. So long as you’re in range of a WiFi network you can put your Skeeter anywhere and cause hilarious mayhem wherever you go.

You don’t even have to be on the same network as Skeeter- leave it somewhere and go get a coffee or lunch and press the button. Skeeter will start playing the noise even if you are miles away since it communicates with Particle’s cloud service, so you can craft the perfect alibi while still driving people nuts. Have fun!

My Honors Thesis- Into the world of Ion Coulomb Crystals

A week ago I graduated from the University of Colorado Boulder with my B.S. in Engineering Physics, an amazing experience where I learned more than I ever could have imagined about natural science and engineering, and developed skills to conduct research. One of the best things I got to do as part of my degree program was write an honors thesis- a piece of independent research that at CU includes a defense and can earn you Latin honors. My thesis, Improving Molecular Dynamics Simulations of Ion Coulomb Crystals, is now available on CU’s public honors thesis repository. I was able to graduate magna cum laude thanks to this work, and I am quite proud of it! The details are all there in that 79 page document- good bedtime reading if I do say so myself. If you’re not so inclined, I wanted to give a big picture summary of what exactly I did.

My thesis is centered on improving molecular dynamics simulations of ion Coulomb crystals. What are ion Coulomb crystals?

An ion Coulomb crystal composed of hundreds of Calcium 40 ions

Ion Coulomb crystals are three-dimensional objects composed of charged particles. They are one-component plasmas- the one in the above image is composed of calcium ions. They are governed by the classical forces you likely learned about in physics class- Coulomb repulsion between the charged particles, in a field provided by an ion trap, forces them to arrange themselves in these ordered structures. The one in the above image is football-shaped due to the trapping field it is exposed to. This trapping field can come from Paul and Penning traps, and therefore from electric and magnetic fields.

This doesn’t really give you a reason why we care about them though- and that is where the story gets interesting. These humble little crystals (the ions are about 10 microns apart- that’s micrometers) are thought to be found in extreme environments such as the surface of neutron stars, and they are being used today for simulating chemistry in the interstellar medium as well as for quantum information experiments, paving the way towards quantum computing! These crystals have the potential to become fantastic high-tech engineering tools for building advanced computers or even helping to pave the way for putting together designer molecules like LEGOs. The crystals we make in the lab are very cold- to form them we use ultra-high vacuum chambers (single water molecules coming in and reacting with your calcium will ruin your day) and Doppler laser cooling to cool the ions to below 10 millikelvin- extremely cold, just above absolute zero. Lasers are often thought to make objects warmer, but here we are exploiting quantum mechanics to cause the ions to lose energy. Recall high school chemistry, when you learned about electron energy levels. When you kick an electron up an energy level it will cascade back down and emit a photon. This is great because these photons can be captured by a camera, creating beautiful pictures like the one above. As an added benefit, the ions over time will lose their momentum and eventually come nearly to rest, at which point they form the neatly ordered structures in these crystals. This cold temperature makes them advantageous for studying chemistry in extreme environments, which is something we know very little about. It also means that we can keep these ions confined and stored for long periods of time, making them advantageous for quantum computing research.

The lab I worked in was trying to characterize reactions, and in order to do this they needed analysis tools to help them out. This is where the work in my thesis comes in. I extended their existing molecular dynamics suite to help them produce accurate simulated images of ion Coulomb crystals that could be compared to the pictures they could take experimentally of these crystals. Why would we do this? We need to know the temperature of the ions, as well as confirm our understanding of how the system works. Molecular dynamics simulations use computers to evolve simulated ions over time until they form crystals, using the same parameters as those used in the experiment- voltages on the ion trap electrodes, ion masses, number of ions, etc. With the crystals extremely isolated in the center of an oscillating trapping field, a way was needed to figure out the temperature of those ions, which comes from the calculated energy of the ions. By using the Mathieu equations, which are a set of differential equations that can be used to model the motion of the ions in an ion trap, the program evolves the ions in time until they arrive in their final positions, where an image can be generated.

The work I undertook extended these simulations in a number of key ways. First, the original images rarely matched the experimental images for the same parameters and often looked blurry or had ions in non-physical positions. Second, the accuracy of the ion positions was improved by writing a new fourth-order integrator, which is the part of the computer program that actually evolves these ions in time by solving the equations of motion over and over (usually using a time step on the order of nanoseconds), so that the ion positions could be predicted with great accuracy. This had the added benefit of helping to fix some uncertainties we had in the ion energy, which is important when one of the main goals of the project is to extract temperature data for experimentally characterizing reaction rates. Finally, I fixed the laser cooling model in the program- when Doppler laser cooling is being applied, the atom will be hit with photons from the laser and will absorb a photon if it corresponds with the right transition. The electron hops up an energy level and cascades back down, emitting a photon. This is called scattering. It happens in a completely random direction- there is no way to predict with certainty which direction the photon is going to go. This process heats the ions slightly. In the original program, a random direction would get picked, but then the magnitude of the momentum kick applied to the ion from scattering this photon would be picked from a normal distribution. This means that most of the time the kick was extremely small, because the distribution was centered at zero and low values had the highest probability. The new model corrected this, and as a result we found that the heating of the ions was now symmetric, making the images of the crystals look much more realistic.
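The corrected photon-recoil model can be sketched generically: every scattering event delivers a kick of fixed magnitude (the photon momentum) in an isotropic random direction. This is an illustrative sketch in arbitrary units, not the thesis code itself:

```python
import math
import random

def recoil_kick(magnitude=1.0):
    """One scattering event: fixed-magnitude kick, uniform random direction."""
    # A normalized Gaussian 3-vector is uniformly distributed over the sphere.
    v = [random.gauss(0.0, 1.0) for _ in range(3)]
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [magnitude * x / norm for x in v]
```

Summed over many events the kicks average to zero but deposit energy symmetrically, unlike the old model where normally distributed magnitudes clustered near zero.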

An actual ion Coulomb crystal versus a simulated ion Coulomb crystal with the same experimental parameters.

As one can see from the above image, the simulations match the experimental images quite well. The ion positions are turned into images and then returned to the user. This also means that ion counts can be extracted from the images by matching simulations with varying numbers of ions to actual crystals where you don’t know how many ions are present, say if you did not dump the crystal into the mass spectrometer to count them. This makes the simulations a nice check on the experiment: the two work together to analyze data and to predict experimental parameters that produce crystals advantageous for measuring reaction rates in extreme environments.

More details and some sample code can be found in my thesis, as linked above. The simulations were written in C++ and I also developed a handy GUI to make using the simulations in the lab easier. This was a lot of fun and I learned a lot. It was also fantastic to be able to do some work that I knew would contribute to science and would help make the work done by the researchers in the lab easier. It was a great way to wrap up nearly four years of work at JILA at CU Boulder ahead of starting a new engineering position at Lockheed Martin this June.

If you have any questions, feel free to drop a comment or send me an email on my contact page!

Tutorial: Make a simple Alexa skill that uses a REST API

This tutorial adapts code from this excellent JavaScript cookbook prepared by

If you just want the code, click here.

If you just want to enable the Thornton Windchill skill we’ll make, click here!

UPDATE 11/25/18: IBM has deprecated the Weather Underground API. This tutorial has been updated to use the OpenWeatherMap API instead.

I spent some time back around Christmas trying to find an Alexa skill that would do one simple thing: tell me what it feels like outside. Sure, there are plenty of weather apps, but I really wanted one that took the wind chill into account, that funny other temperature weather websites give you that’s usually called the “feels like” temperature. Finding none, I decided to make my own.

The Amazon Echo surprised me with how easy it is to customize and make your own skills (the apps of the Alexa world). Amazon has an amazing set of tools in Amazon Web Services, and it is brilliant that their microservices product, Lambda, is tied in with Alexa as a platform for hosting Alexa skills. This tutorial will show how to develop an extremely simple but useful Alexa skill: one that interfaces with the API of your favorite website, pulls down some information, and then has Alexa tell you the latest updates when you trigger your new skill.

First things first, some vocabulary:

API: Application Program Interface. Many websites and tools have these so that developers can incorporate their functionality into their projects. If you’ve ever logged into a website using your Facebook login, you used the Facebook API. Google has many APIs for products like maps and search. Dropbox and OneDrive have APIs for saving files from apps. Dig around your favorite site and see if they have an API you’d like to use. For this tutorial, one that gives you information like headlines or the weather is best.

REST: Representational State Transfer. Essentially, these APIs consist of a number of “links” like the ones you’d click on a webpage. They correspond to different functions in the API: there are endpoints for logins, uploading files, and other functions for all sorts of APIs. In the documentation for each API you should be given a list of endpoints to use. We will use Node.js to easily make requests to these links, known as HTTP endpoints, to make our app work.

Node.js: A server-side JavaScript environment that is very handy for web functionality. You won’t need to install it for this tutorial, but I recommend messing around with it; it makes writing web servers and making HTTP requests very easy. This skill will be written in Node.js, and we will use the code editor on the Amazon Lambda website.

Using a REST API

REST APIs are how we will get information from our website of choice to our app so that Alexa can say it out loud. I want to make a weather app, of course, so the first place I turn is OpenWeatherMap. You can get started with their API for free, and they provide all the information needed to calculate a ‘feels like’ temperature like the one we want, namely the current temperature and the wind speed. You can follow along with me there, or you should find that the steps for using other APIs are roughly the same. Go to your favorite news site and see if they have an API you can use.

Most APIs have you register an account so that you can be given a key. This key prevents abuse of their system and also lets you track the usage of your app in many cases. This key will have to be part of your requests to their servers.

Let’s go to the documentation to figure out how to make a request to get the current weather. I see on the sidebar that they have “conditions” listed, so I click there. Depending on what you want to do, you will have to learn the “lingo” of your specific API. For example, the Dropbox API has (or at least had) two different ways of uploading files: one request was for uploading, the other for chunking up files, and they were called different things. Be careful and read what each endpoint does to make sure you build the most optimized app!

The documentation gives a nice example of what a call to the current conditions endpoint looks like, and I immediately identify the data I want to parse out:

[Screenshot: the JSON response returned by the current weather endpoint, including the “temp” and “speed” fields]

You will need to pass in your own API key, received when registering at OpenWeatherMap, when you make this request for your location. Keep your API key a secret: malicious users can use your key to deplete your free calls to the API and get your account disabled! Additionally, I have added ‘&units=imperial’ to get the results in US customary units, but you can switch this to metric if that is the system your country uses.
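For reference, a current-conditions request has this general shape (the zip code here is just an example, and YOUR_API_KEY is a placeholder for the key you registered):

```
http://api.openweathermap.org/data/2.5/weather?zip=44113,us&units=imperial&appid=YOUR_API_KEY
```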

What you are seeing here is the response to the HTTP request. REST APIs often respond in what is called JSON format, which is basically a really nice way of formatting data so that everything has a key and a value. That way you can search for the data you want just by having the key (since you likely aren’t sure of what the value is!). So here I would want to write code that picks out “temp” and ‘speed’ so that I can calculate the ‘feels like’ temperature using these values.
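As a tiny sketch of what that looks like in Node.js (the JSON string here is a trimmed-down stand-in for the full response body):

```javascript
// A trimmed-down stand-in for the full OpenWeatherMap response body.
const body = '{"main": {"temp": 38.5}, "wind": {"speed": 12.0}}';

// JSON.parse turns the raw string into an object we can index into.
const json = JSON.parse(body);
const temp = json.main.temp;        // current temperature
const windSpeed = json.wind.speed;  // wind speed

console.log(temp, windSpeed); // → 38.5 12
```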

Knowing this, it’s time to start writing some code!

Getting Started with Amazon Lambda

You will need to go to the Amazon Lambda website and create an account. Lambda is for microservices: you write code and get a URL that triggers that code to run. This lets you do all sorts of fun things, like periodically check on and analyze data from a weather station in your backyard, or send data to a database, all without having to worry about servers or hosting the script. Today we will use it to build our Alexa skill.


From the console, select “Create a Lambda function”


You don’t actually have to select a blueprint, but if you want, go ahead and use the blank function blueprint. Click “Configure triggers”.

We need to configure the app so that it uses Amazon’s kit for developing Alexa skills. You don’t need to have this installed or anything like that- it simply tells Lambda how exactly it is going to get triggered so it knows to accept requests from the Alexa Skills kit for triggering your app. Click the Alexa Skills Kit option and then click next.


Now we are into the meat and potatoes of the actual skill development! Give your skill a name and a quick description. Leave the runtime alone; it’s fine as-is using Node.js. If you’re reading this in the future and thinking “Ah man, we’re on Node.js 7.2 and this script will never work now!”, I apologize, but I’m trying to keep things as future-proof as possible.

The Lambda function code box is where we will finally begin writing our app.

If you would like to simply grab all the code at once, the Github link is here.

The bulk of every Lambda function is the handler. You can see the tiny sample code they give you already. This is what Lambda will run when it is triggered. All your functionality is called from in here.

Let’s think about what we need to accomplish in our handler:

  1. Make an HTTP request to get our weather information
  2. Parse that big JSON response to just get the feels like temperature
  3. Store this in a way Alexa can say what it is.

Not too hard! Let’s take a look at the handler I wrote and then I will break it down.

We keep the structure of the handler the same: the function definition looks the same, we just do more inside! First, we require the Node.js HTTP client so that we can make our requests. The URL module would make it easier to format your request URL, but I don’t use it here; I included it so you know it exists in case you need to build more complicated URLs, such as substituting in a user’s query. Here I am keeping it simple: paste in your zip code and your API key so that OpenWeatherMap gives you the conditions at your location. You can give your OpenWeatherMap link a try in your browser, and it should display a JSON object of current conditions. Not all endpoints let you do this, but it can be a handy way to test your work.

The next section is the actual HTTP request. We make a GET request because we are GETTING something; if you wanted to upload something to a server, you would make a POST or PUT request, and there are other types of requests you can try, but for now we use the simplest, a GET request. You can see we use our HTTP client and set up a callback with a single response parameter. This response is what Alexa is going to say! The empty data string gives us a place to accumulate the response body: I tell Lambda that every time it receives a chunk of data from the server after making the GET request, it should append it to that string. This way we collect the entire response.

The real logic comes in handling the end of the response. When there is no more data, we need to parse the big JSON object to get the specific data we want. By running JSON.parse, I break the entire string up into keys and values that I can search through to get the temperature and wind speed values for our ‘feels like’ temperature. Notice how I index into the JSON response using dot notation: I know from the response in the above screenshot that the temperature is stored under main, so I get at it by writing json.main.temp. The formula for calculating a temperature with wind chill using US customary units is as follows (you should be able to find a corresponding metric formula on the web):

Wind Chill = 35.74 + 0.6215T – 35.75(V^0.16) + 0.4275T(V^0.16) (Courtesy of MentalFloss)
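In code, that formula (with T in degrees Fahrenheit and V in miles per hour) might look like:

```javascript
// Wind chill in US customary units (t in deg F, v in mph). Note that
// the formula is only meant for cold, windy conditions (roughly
// t <= 50 F and v > 3 mph), so a real skill may want to fall back to
// the plain temperature outside that range.
function windChill(t, v) {
  const vPow = Math.pow(v, 0.16);
  return 35.74 + 0.6215 * t - 35.75 * vPow + 0.4275 * t * vPow;
}

console.log(windChill(20, 15)); // roughly 6 deg F
```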

Finally, you can see that I placed our ‘feels like’ temperature value in the middle of a written response for Alexa to read. You can make this whatever you want (so long as it fits Amazon’s community guidelines). We then output this response. How does output work? You define it yourself. Let’s take a look:
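Something along these lines works (a sketch following the Alexa Skills Kit response format, not necessarily the exact repository code):

```javascript
// A sketch of the output() helper, following the Alexa Skills Kit
// response format.
function output(text) {
  const response = {
    version: '1.0',
    response: {
      outputSpeech: {
        type: 'PlainText', // Alexa reads this text verbatim
        text: text
      },
      card: {
        type: 'Simple', // what shows up in the Alexa companion app
        title: 'Thornton Windchill',
        content: text
      },
      shouldEndSession: true // nothing more to ask; close the session
    }
  };
  return response; // hand the built response back to the caller
}
```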

The job of output is simple: we are now giving Alexa a JSON object to read! It’s really JSON all the way down, if you’re starting to catch on. This structure is mostly provided by Amazon’s documentation, but let’s explain it anyway. The response object is the JSON response returned when Alexa triggers our Lambda function, so that’s what it stores. It has a section specifying how the output speech will work: PlainText is what you will use almost all the time, and Alexa will simply read what you give it. The card is what appears in the Alexa app when the user checks on their Amazon device what people have been asking Alexa, or wants to read the response later. We specify a simple card: the name of the app (for identification) and the text that Alexa gave. That’s it! Finally, we set the end-session variable to true; we have no reason to tie up Alexa waiting for more input after we get our weather, so we tell the device that the skill is done. The final line simply says that once the response is successfully built, it is returned to the Alexa device calling the function.

Go ahead and check your work against the whole code file on GitHub now.

Before you create your function you need to assign a role to it. This basically just lets Amazon know what permissions your skill needs. Go ahead and let Lambda create a role for you and give it a name. This is handled automatically. Select the parameters as shown:


Go ahead and click next. On the review page, click Create Function. You’re done! Click the ‘Test’ button on the top toolbar to create a test invocation of your new Lambda function and name it whatever you wish. You can leave the default inputs as this simple Alexa skill does not process any user input. From the dropdown menu select your new test event and press ‘Test’ to run it. You should see a successful result displayed on your screen:

[Screenshot: a successful test invocation result shown in the Lambda console]

More complicated Lambda functions will let you specify JSON test files that will simulate various inputs and outputs so you can test your skill.

But I wanted to hear Alexa run my skill!

I know! In order to do that, though, you need to add your Alexa skill to the Amazon Developer Console. Keep your Lambda tab open: you’ll see there is now an identifier associated with your function, called an ARN (Amazon Resource Name), that you’ll have to paste into the skill form. Amazon covers this process really well in step 2 of this great visual guide! When it asks you for your intent schema and sample utterances, go ahead and use the ones from my GitHub repository and modify them to taste. You will then get a chance to test your skill and hear Alexa say your response from your browser! Once you are done testing, submit the skill for certification, and if Amazon approves it, your friends can find your skill and enable it. You can also create skills just for yourself and add them to your device now that you have the skill up on the developer console.

Have fun! Creating skills for yourself and for your friends can be a rewarding and fun aspect of owning an Alexa device. It can also get you some free swag. Let me know if you have any problems in the comments, and good luck!