Make Children’s Artwork look like Eric Carle Illustrations

[Image: The Very Hungry Caterpillar, by Eric Carle]

Famous author and artist Eric Carle turns 91 today. I remember loving his books when I was a kid, especially The Very Hungry Caterpillar and Brown Bear, Brown Bear, What Do You See?. Each book features his distinctive art style. The images are collages composed of tissue paper and acrylic paint, producing vivid depictions of animals and nature.

THE PROBLEM

Carle’s work is as complex as it is beautiful. How can we make it easier for children to produce their own homages to his creations?

THE SOLUTION

Neural style transfer is a deep learning technique for composing one image in the style of another. That is, you teach a computer to identify the key elements of a style image and then redraw your image in the style it has just learned.

I found this excellent Google Colab notebook which taught me all about how to do this with tf.keras!
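
If you are curious what the core of the technique looks like, here is a minimal sketch (following the tutorial's approach, not a copy of it) of the style comparison in tf.keras: feature maps from a few VGG19 layers are summarized as Gram matrices, and the style loss measures how far the generated image's Gram matrices are from the style image's.

```python
# Minimal sketch of the style comparison used in neural style transfer.
import tensorflow as tf

def gram_matrix(features):
    """Summarize a (1, H, W, C) VGG19 activation map as a C x C Gram matrix."""
    result = tf.einsum("bijc,bijd->bcd", features, features)
    num_locations = tf.cast(tf.shape(features)[1] * tf.shape(features)[2], tf.float32)
    return result / num_locations

def style_loss(style_outputs, generated_outputs):
    """Mean squared difference between Gram matrices, summed over the style layers."""
    return tf.add_n([
        tf.reduce_mean(tf.square(gram_matrix(s) - gram_matrix(g)))
        for s, g in zip(style_outputs, generated_outputs)
    ])
```

The optimizer then nudges the generated image until its Gram matrices look like the Eric Carle collage while its content features still look like the uploaded artwork.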

Taking the code from the tutorial, I built a website that lets you upload an image, transfers the style of Eric Carle’s The Very Hungry Caterpillar onto it, and displays the result for the world to see and for you to download! At any given time the latest 10 images will be displayed for any visitors to see. The website is built in one of my favorite frameworks, Flask.

You can access the website at ericcarletransfer.ml. Be warned, the transfer time can be in excess of 10 minutes- it is very computationally intensive.

The results have been encouraging though! Take a look:

The neural network is picking up on the look of the tissue paper and paint. In the future I want to work on reducing the amount of noise seen in the backgrounds.

SHARING THE SOLUTION

The URL again is http://ericcarlearttransfer.ml/

As always, the entire project is open source and can be found here on GitHub!

Text to Word Search!

Try it out for free here!


THE PROBLEM

Word searches can be a great way to build a summary activity for reading a story, article, or book. However, they are time consuming and difficult to make.

THE SOLUTION

text2wordsearch uses the Rapid Automatic Keyword Extraction (RAKE) algorithm to automatically extract the top keywords from a blob of text! Simply copy in the text from the article or story and choose how many words you want in your word search. Then copy the word search into your favorite word processor (be sure to use a monospace font!). The keywords selected are found in the bottom box.

The technical details are that this uses an AWS Lambda function to run the RAKE algorithm and generate the word search, ingesting the text submitted through the web interface above via AWS API Gateway. The Lambda function is written in Python and leverages two excellent packages: python-rake and word-search-puzzle. Because it runs as a Lambda function, these packages had to be installed into a directory and uploaded as part of a zip bundle along with my function code. This zip is included in the repo linked below for you to deploy and play with yourselves!
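
For a feel for the Lambda side, here is a rough sketch of the keyword-extraction step- not the deployed function. It assumes python-rake's RAKE.Rake class and SmartStopList helper, and it leaves out the grid generation done with word-search-puzzle.

```python
# Rough sketch of the keyword-extraction half of the Lambda handler.
import json
import RAKE

rake = RAKE.Rake(RAKE.SmartStopList())  # initialized once, outside the handler

def handler(event, context):
    body = json.loads(event["body"])            # request forwarded by API Gateway
    text = body["text"]
    num_words = int(body.get("numWords", 10))
    # RAKE returns (phrase, score) pairs; sort by score and keep the top N
    scored = sorted(rake.run(text), key=lambda kv: kv[1], reverse=True)
    keywords = [phrase.upper() for phrase, _ in scored[:num_words]]
    return {
        "statusCode": 200,
        "body": json.dumps({"keywords": keywords}),
    }
```

The real function then feeds these keywords into the word-search-puzzle package to lay out the grid that gets returned to the page.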

SHARING THE SOLUTION

Try it for free here!

As always, the code is available to browse and deploy yourselves!

A novel method for preventing “Zoom Bombing”

THE PROBLEM

Zoom Bombing is exposing children learning remotely to inappropriate content and disrupting meetings so a few pranksters can have a laugh. The biggest unsolved issue with Zoom Bombing is that people are sharing links and passwords on social media in order to egg trolls and classmates on to bomb these classes and meetings. How can we share a meeting without disclosing the meeting ID and password?

THE SOLUTION

[Diagram: BombSquad architecture on AWS]

BombSquad is a solution I built on Amazon Web Services to help mitigate the worst of Zoom Bombing. Here’s how it works:

  1. Get a Zoom meeting invitation link like normal (and make sure the password feature is turned on!)
  2. Go to www.BombSquad.us
  3. Select your meeting options- by clicking the checkboxes you can permanently turn off participant microphones and cameras so that nobody can re-enable them.
  4. Paste your invitation link
  5. Get a sharable cloaked URL that goes right to your meeting!
  6. Continue orchestrating your meeting from the Zoom client like normal.

The technical details are as follows: BombSquad takes your URL, transforms it to force the user to use the Zoom web client, stores the original URL securely, and only redirects the browser to the real meeting URL if the user clicks through the sharable link you receive. The invitation link inside the window is disabled. Thus, all a user can see are BombSquad URLs! This is performed using a combination of AWS S3 and Lambda instances as shown above, making this a neat example of a serverless application– the first I am distributing publicly!
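
To make the serverless flow concrete, here is a rough sketch of the cloak-and-redirect idea in Python with boto3- not the actual BombSquad code. The bucket name, the /m/{token} path, and the handler names are placeholders.

```python
# Sketch only: one Lambda stores the real meeting URL in S3 under a random token,
# another redirects visitors who click the cloaked link.
import json
import secrets
import boto3

s3 = boto3.client("s3")
BUCKET = "bombsquad-links"  # hypothetical bucket name

def cloak(event, context):
    """Store the real Zoom URL and hand back a shareable cloaked URL."""
    invite_url = json.loads(event["body"])["url"]
    token = secrets.token_urlsafe(8)
    s3.put_object(Bucket=BUCKET, Key=token, Body=invite_url.encode("utf-8"))
    return {"statusCode": 200,
            "body": json.dumps({"cloaked": f"https://www.bombsquad.us/m/{token}"})}

def follow(event, context):
    """Look up the token and redirect the browser to the real meeting."""
    token = event["pathParameters"]["token"]
    real_url = s3.get_object(Bucket=BUCKET, Key=token)["Body"].read().decode("utf-8")
    return {"statusCode": 302, "headers": {"Location": real_url}}
```

Because the trolls only ever see the cloaked token, the real meeting ID and password never appear anywhere they can copy and share.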

SHARING THE SOLUTION

Head on over to www.bombsquad.us and give it a try!

[Image: BombSquad interface]

FreeDisplay – Share your screen with everyone on your local network for free!

COVID-19 is taxing our internet infrastructure, and many stuck at home are struggling with tasks where it would be useful to share one’s screen with others, such as teaching from home, sharing content with someone without handing them your device and getting it contaminated, or monitoring what is happening on a home computer in real time.

FreeDisplay is a free open-source program written in Python that allows you to share your screen with anyone on your local network, such as your home Wi-Fi network. It creates a QR code others can scan for easy sharing and serves a simple webpage with a mirror of your screen so that any device with a web browser can easily view it! Use it for home teaching, sharing content without handing someone your device, presentations, monitoring activity on your home computer, and more. Download for free here: https://kevinl95.github.io/freedisplay/
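
The general idea is small enough to sketch- this is not the actual FreeDisplay code, and it assumes the mss, qrcode, and Flask packages: grab a screenshot on demand, serve it over the local network, and print a QR code pointing at the page.

```python
# Sketch of the screen-sharing idea: a tiny Flask server plus a QR code.
import socket
import mss
import mss.tools
import qrcode
from flask import Flask, Response

app = Flask(__name__)

@app.route("/screen.png")
def screen():
    with mss.mss() as sct:
        shot = sct.grab(sct.monitors[1])               # full primary display
        png = mss.tools.to_png(shot.rgb, shot.size)
    return Response(png, mimetype="image/png")

@app.route("/")
def page():
    # Simple page that refreshes the screenshot every couple of seconds
    return ('<img src="/screen.png" style="width:100%">'
            '<script>setInterval(()=>location.reload(), 2000)</script>')

if __name__ == "__main__":
    url = f"http://{socket.gethostbyname(socket.gethostname())}:5000/"
    qrcode.make(url).save("share_this.png")            # QR code others can scan
    app.run(host="0.0.0.0", port=5000)
```

Any phone or laptop on the same Wi-Fi network that scans the QR code lands on the page and sees a live mirror of the screen.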

As always, the code is open-source and can be viewed here: https://github.com/kevinl95/freedisplay

DIY WiFi Smartbulb Classroom Sound Meter

This is an exciting new project I’ve been working on to use off-the-shelf smart lightbulbs to make an inexpensive and automatic classroom management gadget. Using a bit of Node I was able to get the noise level of a classroom and translate it into a color for a connected smart lightbulb, from green to red as the classroom gets louder! Inspired by the ‘traffic light’ noise warning gadgets I see in classrooms, this one is fully automatic. There are no switches to throw- just set the maximum volume in the free software and the lightbulb will change color on its own!
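
The gadget itself is written in Node, but the color mapping at its heart is simple enough to sketch in a few lines of Python: interpolate from green to red as the measured volume approaches the maximum you set.

```python
# Sketch of the noise-to-color mapping (the real app does this in Node before
# sending the color to the MagicHome bulb).
def volume_to_rgb(level, max_level):
    """Map a 0..max_level loudness reading to an (R, G, B) color, green -> red."""
    t = max(0.0, min(1.0, level / max_level))   # clamp to [0, 1]
    return (int(255 * t), int(255 * (1 - t)), 0)

# Quiet rooms stay green, loud rooms go red:
print(volume_to_rgb(5, 100))    # (12, 242, 0)  -> nearly pure green
print(volume_to_rgb(95, 100))   # (242, 12, 0)  -> nearly pure red
```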

[Screenshot of the GUI. The program is an Electron application, and I have applied a material design stylesheet.]

The software can be downloaded for free for Windows PCs here!

It works with MagicHome-brand smart lightbulbs; the following models are known to work:

MagicLight WiFi Smart Light Bulb, 2nd Generation Dimmable Multicolor A19 E26 Household LED Bulb

MagicLight Smart WiFi Alexa Light Bulb, A19 7w (60w Equivalent)

As always, this program is open source! Click here to view the code on GitHub!

DIY ‘Ghost Box’ for Halloween 2019!

This is a DIY Ghost Box like the Ovilus ghost hunting device. While I don’t believe in ghosts, I do think ghost hunting gear is fascinating. This box chooses words out of a 1000 word dictionary based on magnetic field and temperature changes. The code is available for free on GitHub: https://github.com/kevinl95/ghostbox
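
The firmware in the repo is the real thing; as a rough illustration of the idea, here is a CircuitPython-style sketch (assuming Adafruit's LSM9DS0 driver and a stand-in word list) that picks a word whenever the magnetic field or temperature jumps.

```python
# Illustration only, not the repo firmware: pick a "spooky" word when the
# magnetic field or temperature changes suddenly.
import time
import board
import busio
import adafruit_lsm9ds0

i2c = busio.I2C(board.SCL, board.SDA)
sensor = adafruit_lsm9ds0.LSM9DS0_I2C(i2c)

WORDS = ["hello", "cold", "leave", "basement"]   # stand-in for the 1000-word dictionary
MAG_THRESHOLD = 5.0    # magnetic-field jump that triggers a word (tune to taste)
TEMP_THRESHOLD = 0.5   # degrees C

last_mag = sum(abs(v) for v in sensor.magnetic)
last_temp = sensor.temperature

while True:
    mag = sum(abs(v) for v in sensor.magnetic)
    temp = sensor.temperature
    mag_jump = abs(mag - last_mag)
    temp_jump = abs(temp - last_temp)
    if mag_jump > MAG_THRESHOLD or temp_jump > TEMP_THRESHOLD:
        # Turn the size of the disturbance into an index into the dictionary
        word = WORDS[int((mag_jump + temp_jump) * 100) % len(WORDS)]
        print(word)   # the real device speaks the word through the DAC and speaker
    last_mag, last_temp = mag, temp
    time.sleep(0.1)
```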

Electronics:
1x Adafruit Feather M4 Express (If substituting, make sure you either buy a board with a DAC for the speaker or build one)
1x Adafruit 9-DOF Accel/Mag/Gyro+Temp Breakout Board – LSM9DS0
1x Adafruit Illuminated Toggle Switch with Cover – Green
1x Adafruit Thin Plastic Speaker w/Wires – 8 ohm 0.25W
1x Adafruit Lithium Ion Battery – 3.7v 2000mAh

CAD and STL files can be accessed from the Thingiverse project page!

Getting a ‘Feels Like’ Temperature when transitioning away from the Weather Underground API

IBM has made the disappointing decision to retire the Weather Underground API effective 12/31, leaving many developers scrambling due to the abruptness of this decision and the complete lack of roadmap or guidance as to how to transition to a replacement so that their applications will work on January 1st.

One key element of the Weather Underground API that my popular tutorial for making an Alexa Skill makes use of is the ‘feels-like’ temperature. When updating this tutorial in light of the bad news, I chose to transition it to the OpenWeatherMap API, which, like the Weather Underground API used to, offers a free tier that allows up to 60 requests per minute and access to current weather conditions.

While it does not offer a ‘feels-like’ temperature, this can easily be calculated by factoring wind chill into the temperature you report. This means you do not need to use OpenWeatherMap- you can use any API that gives you a current temperature and wind speed! It really is as simple as this:
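
Here is a minimal sketch of that calculation in Python (the 50°F / 3 mph cutoff is simply the range where the wind chill formula is meant to apply):

```python
# Sketch of a 'feels like' calculation using the wind chill formula below.
def feels_like(temp_f, wind_mph):
    """Return a 'feels like' temperature in Fahrenheit, rounded to one decimal."""
    # Wind chill is only defined for cold, windy conditions; otherwise just
    # report the measured temperature.
    if temp_f > 50 or wind_mph < 3:
        return round(temp_f, 1)
    chill = (35.74 + 0.6215 * temp_f
             - 35.75 * wind_mph ** 0.16
             + 0.4275 * temp_f * wind_mph ** 0.16)
    return round(chill, 1)

print(feels_like(30, 20))   # 17.4
```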

 

The formula in the code above calculates a temperature with wind chill using US customary units:

Wind Chill = 35.74 + 0.6215T – 35.75(V^0.16) + 0.4275T(V^0.16) (Courtesy of MentalFloss)

where T is a temperature in Fahrenheit and V is a wind speed in MPH. You should be able to find a corresponding formula for metric online.

I also went ahead and rounded my ‘feels like’ temperature to one decimal place, to make it easier to read (or for a voice assistant to read out loud, which is how I eventually used this code).

I hope this helps others as they transition to other weather APIs as Weather Underground winds down. Its developer community will be missed!

Making your first Capsule for Samsung Bixby – An Exercise in Teaching your Phone to Listen

 

Today Samsung made the Bixby Developer Studio available for download and use so that developers can start building capsules to publish on their marketplace starting in 2019. I am an early adopter of the new Bixby and wanted to share how to build a simple capsule using the new developer kit as well as share my experience with using the new platform. Readers of this blog know that I have made and published Skills for Amazon’s Alexa and published tutorials on how you can develop for that platform. Similarly this writeup will focus on a minimal application that will help you get started with the features of Bixby’s impressive development tools.

Bixby is a wonderful platform to develop for and it has top-notch development tools. Building software for Bixby is a lot like teaching someone a new skill. It uses natural language model training so you can show Bixby what parts of user phrases are important and it uses the idea of concepts to define Bixby’s understanding of what capability you are giving it. These concepts will be discussed in detail below.

Today we will be making a capsule to generate passwords made up of a random string of words, inspired by XKCD’s Password Strength comic. We will be taking advantage of Bixby’s visual interface to make a password users can easily remember as well as easily copy and use for their accounts. The XKCD algorithm sticks random, memorable words together so that passwords are complex but can also be easily recalled by the user. They also have higher entropy than a typical short string of characters and numbers, making them harder to crack by many brute-force methods. After giving the comic a quick look, read on.

As always, you can download and view the code on my GitHub!

THE PROBLEM

We want to build a Bixby capsule that can generate memorable passwords for users. These passwords should be of a user-specified length.

The overall requirements are:

  • Generates a password using regular English words
  • Takes in a user’s specified length
  • Displays the password graphically for copying
  • Displays a calculation of the entropy of the password so the user knows how good the password is.

THE SOLUTION

You should now have Bixby Studio installed on your system. As of writing it is available for Windows and macOS.

Create a new project by clicking File>New Capsule.

The first bit of code we will focus on is the generator.js file. This is where we define our entry point and what we are going to return.

Notice how we export the function generate- this is the function where I generate everything we need for the response you see in the screenshot above. We take our wordlist dictionary file (how to get that will be discussed shortly), we build our password using a user-specified length called numWords, and we calculate the entropy of the password. We then return a result we can parse into a nice, visual response like in the screenshot.

Whew, there’s a lot going on in here though! Let’s start with the wordlist. This is a JSON-formatted list of common English words I found searching for open-source corpora. Why JSON? As you might have surmised from the above code snippet, Bixby capsules are written in JavaScript! Importing this data as JSON makes it very easy to loop through and use, as you can see in my generate function. I stored this in a directory called lib but you can call it whatever you please. Just be sure to update the path in the generator!
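
The capsule's generate function itself is JavaScript (see the repo), but the logic is simple enough to sketch in Python: pick numWords random entries from the wordlist, join them into a password, and estimate the entropy and the years a 1,000-guess-per-second brute force would need. The file name and field names here are placeholders.

```python
# Python sketch of the same idea as the capsule's JavaScript generate() function.
import json
import math
import secrets

def generate(num_words, wordlist_path="lib/wordlist.json"):
    with open(wordlist_path) as f:
        words = json.load(f)                             # list of common English words
    password = " ".join(secrets.choice(words) for _ in range(num_words))
    entropy = round(num_words * math.log2(len(words)))   # approximate bits of randomness
    seconds_per_year = 60 * 60 * 24 * 365
    years = round(2 ** entropy / (1000 * seconds_per_year))
    return {"password": password, "entropy": entropy,
            "length": num_words, "years": years}
```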

Next, we need to discuss how we get numWords. This is the user-input. We want the user to say ‘Make me a password with three words’ and Bixby needs to know how to do that.

In the resources directory you will find endpoints.bxb. The actions your capsule can take are called endpoints. Let’s define one for generating a password:

Let’s look at what we have here: We have authorization set to none because this endpoint is public and available to any user without authorization. We have specified an action endpoint for our generate function as defined in the generator.js snippet above and we have told Bixby that the input for this endpoint is numWords. We also tell it what file it will find the definition for this endpoint in- generator.js.

Now that Bixby has an available action in the form of the endpoint, we get to the really interesting stuff- teaching Bixby what everything in our capsule means. The way we do this is via a model. In the model directory we have actions and concepts. These make up Bixby’s understanding of what your capsule can do, and we just need to write some high-level markup to make this work. Let’s start with the action our capsule is going to have- generating passwords. This will inform what concepts Bixby needs to have definitions for so that we can move on to training our natural language model.

Above is the generator.model.bxb action file. You will find it in my action directory. What does this do? Read through the comments carefully. It defines the actions Bixby will take when running this capsule, and it covers all our bases regarding various user inputs! We tell it our action is to run our generate function. We tell it to collect numWords, and we tell it that numWords is of the numWords concept type which we will define shortly. We tell Bixby that there can be at most one numWords (so that we ignore other numbers in the user’s invocation) and we tell Bixby that this value is required. If Bixby cannot find a number in the invocation to use, we define a default initialization with four words- the same as our XKCD comic! We then do some validation in the event we find a number in the user’s invocation. If numWords is 0 or less, we want to display some text telling the user that you cannot have a password that is negative in length (duh, but the bulk of software development is anticipating stupid). Finally we tell Bixby what our result is going to be- an instance of our PasswordResult concept, which will be of the type Calculation. This is a type Bixby provides for a result that it needs to compute or otherwise derive. Let’s get started defining what these concepts are.

If you are following along in the repo, look at the numWords concept.

This is a good minimal example of a Bixby concept. These are the variables that are key to our capsule working. You can think of them as teaching Bixby a new idea, slowly building for it the picture of what you are trying to achieve. We tell Bixby that NumWords is an integer (we don’t want fractional words). We also give a brief description of what this has to do with our capsule. For NumWords this is obvious- it is the number of words in the password.

Password is almost the same except this concept is given the ‘name’ type since we need an output string. We describe it as the output password. Entropy is similar- we describe it as the approximate bits of ‘randomness’ in our password and give it the integer type since it will be a number we calculate. Length, predictably, is an integer that represents the length in words for our password. This feeds the brute-force estimate which, following the formula from the comic, raises two to the power of the password’s entropy in bits and divides by the number of attempts a computer could make at 1,000 guesses per second over a year. This yields an estimate of the number of years the password would take to crack under those circumstances. Finally, Years is given the integer type and described as the number of years simple brute forcing would take to crack this password- it is also part of the entropy calculation we display at the bottom of our result, as you can see in the screenshot above.

The most complicated concept is our PasswordResult:

It has the type Structure because it contains multiple properties- namely every concept we have just defined. We give these properties types- I just made these the same as the property name for simplicity but they can be used in more complicated capsules to link properties together with a descriptive type. We again describe each property and what it does, tell Bixby if the property is required, and for each tell Bixby that there can be at most one value for each. This result, as you may recall from the generate method, is what we will use to generate our visual response on the screen of the device. We have now explicitly told Bixby everything there is to know about how our capsule is going to work! It knows every concept and every result we are going to want. We now can teach Bixby how to handle speech.

Click training in the resources/en directory.

[Screenshot: the training view in Bixby Studio]

You will see a list of training examples I have provided the natural language model. We are effectively training Bixby to understand how to parse user phrases and turn them into useful input for our capsule. This is an application of machine learning! Notice the examples I have provided. I have made one, ‘generate a password for me’, with no numbers in it- this provides an example where Bixby should use our default input of four words from above, like the XKCD comic. I also provide numerous examples with varying numbers of words, asking Bixby to generate a password in various ways. Notice how I have clicked on and highlighted the number in each training phrase and labeled this value as numWords! You will do this for each input your capsule needs- the more examples the better. Bixby uses the labels and examples you provide to teach itself that when an utterance sounds similar to your examples it should open your capsule and pass along the pieces that match your labeled phrases as input. Bixby is learning, so spend plenty of time here to make sure Bixby really gets it! Compiling the model makes Bixby learn each of your examples, and you can view what Bixby’s output for each example would be so you can be sure it has not mis-learned how to handle them. A well-trained model will make your users happier and your capsule easier to use. This is my favorite part of the Bixby developer tools- it is very intuitive and fun to use, and it offers machine learning enthusiasts a look into the underlying technologies behind Bixby. This is a defining attribute of the platform for me- it feels much more flexible than Alexa, which as a developer seems to encourage a more robotic and rigid interface for its skills than the flexible interfaces Bixby offers for capsules.

With your model trained and your concepts laid out, the last thing to do is to specify how Bixby should display our output. This is done with dialogs and layouts.

Dialogs define for Bixby’s interface the concepts (inputs) and the results: for each input your capsule needs there will be a dialog, and for each result there will be a dialog.

NumWords therefore gets a dialog like so:

This is pretty bare-bones: We define a concept dialog (input dialog), tell it to look for NumWords (like in our training!) and we provide some template text for this type if we wanted to display something related to this input (in my project I ended up not using it).

The Password Result Dialog defines the dialog for our result. This one is more important for this project as it will populate our layout.

We define an output (result) dialog, have it match this time for our PasswordResult concept (passing in the output from calling generate with our numWords input), and then we tell Bixby what to write on the screen with the template text. Notice that this is the first bit of text in the screenshot above- the line that appears when Bixby displays a result telling the user what it did for them!

The layouts for the visual part of the display (like this one, PasswordResult.layout.bml) look a lot like HTML! There are many documented UI widgets you can use such as pictures, hyperlinks, cards, and more. Here you can see we use a card to display the actual password, making it wrap onto the next line for long passwords and making them easy to copy. Down below in a div tag we display the password entropy. This is calculated using the formula from the XKCD comic, as described above. Finally we hyperlink to the comic that inspired this project as a way of giving credit.

A few more example passwords are shown below:

You can try it out for yourself in Bixby Studio! Simply click the icon that looks like a phone on the left hand side of the screen to open the Simulator, giving you an idea of what your capsule will look like on an actual Samsung device when the marketplace opens in a few months.

SHARING THE SOLUTION

This project can be found in its entirety on my GitHub! I hope this very early tutorial can help developers make their first steps into developing for Bixby, which I think has some very compelling development tools and technology behind it.

How to extract PDF file attachments using Python and PyPDF2

Tl;dr: Cut and paste the function I wrote here.

This is a quick technical writeup to hopefully answer a question I’ve seen posted a few times around StackOverflow and the issue trackers of various Python PDF libraries. This is especially handy for those of you who don’t want to dive through the PDF 32000 specification to figure out how Adobe wants us to handle attachments.

PyPDF2 makes working with PDFs easy, but you may have noticed that it only has an addAttachment() function, similar to many other PDF libraries I tried. How do we extract attachments so that we can work with them? Embedding files in PDFs is very common and it would be nice to be able to interact with these objects, like we can with form fields and other things you might find in PDF files.

Fortunately the building blocks for how to do this are already available in the PdfFileReader class! We just need to stitch them together:

  1. Read the PDF file using PdfFileReader from PyPDF2
  2. Decrypt the PDF if necessary (required, you can’t get to the embedded files without doing this)
  3. Retrieve the document catalog from the file trailer (reader.trailer['/Root'])
  4. Navigate the dictionary this returns through '/Names' to '/EmbeddedFiles'
  5. Loop through the list of files that are found there
  6. When we get to an IndirectObject, we have our file parameters. We call getObject() to return the parameters dictionary, then navigate through '/EF' to '/F', where our file data is stored as a stream. Here we simply call getData() and get a byte string back. This can then be written to a destination file or processed however you please!

As always it’s better to show the code, so here’s a proof of concept script:
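
The sketch below follows the steps above and assumes PyPDF2's classic PdfFileReader API (the exact function I wrote is linked at the top of this post).

```python
# Proof of concept: collect every embedded file in a PDF as {name: bytes}.
from PyPDF2 import PdfFileReader

def get_attachments(reader):
    catalog = reader.trailer["/Root"]
    # /Names -> /EmbeddedFiles -> /Names is a flat list: [name1, filespec1, name2, ...]
    names = catalog["/Names"]["/EmbeddedFiles"]["/Names"]
    attachments = {}
    for i in range(0, len(names), 2):
        name = names[i]
        filespec = names[i + 1].getObject()        # resolve the IndirectObject
        attachments[name] = filespec["/EF"]["/F"].getData()
    return attachments

with open("example.pdf", "rb") as handle:
    reader = PdfFileReader(handle)
    if reader.isEncrypted:
        reader.decrypt("")                         # step 2: required even for a blank password
    for name, data in get_attachments(reader).items():
        with open(name, "wb") as out:
            out.write(data)
```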

Easy, just not immediately intuitive when you want to do this fast! I created a pull request to hopefully get this function added as a method for the PdfFileReader class.

 

Roller Coaster Tycoon Ride Ratings IRL

Do you remember playing Roller Coaster Tycoon, the famous amusement park simulation game that shattered sales records and that remains one of the most beloved computer games of all time? I do, and I also remember the most important part of building any roller coaster in the game – testing. While it may seem mundane to someone who has never played, testing was how you figured out if your ride was going to make any money. The game would give your ride a score in three categories- excitement, intensity, and nausea. The goal was to maximize excitement, keep intensity reasonable, and keep nausea minimal. Largely this score was determined by the g-forces your ride produced. High g-forces could mean high excitement or it could mean people are too afraid to go on your ride. These ratings each varied from ‘low’ to ‘ultra-extreme’- both being scores you generally wanted to avoid. ‘Medium’ and ‘High’ were the sweet spot (except for nausea of course, which you always wanted ‘low’) and if you started to edge into ‘Very-high’ intensity you would start to see a drop in ridership, and thus revenue.

G-force relates the acceleration produced by something to the gravitational pull of the Earth. Most roller coasters pull at most 5G’s, or 5 times Earth’s gravity. They only do this briefly though- on big hills or tight turns. The Space Shuttle, for example, pulled 3Gs during launch and sustained them longer – amusement park goers are clearly not astronauts! Big drops, lots of inversions, intense helixes, and lots of air-time (or negative Gs, where you feel like you are floating out of your seat) are what sell big rides. While real coaster designers don’t use the Roller Coaster Tycoon rating system to determine if their ride is any good, they surely have the same design philosophy- be exciting, be intense but not too intense, and make sure the poor teenagers running the thing aren’t scrubbing vomit off the seat every time people get off. I hypothesized that most rides, if they were in the game, would probably fall in the ‘Medium’ to ‘High’ intensity and excitement scores. Fortunately we now all carry around an accelerometer in our pockets built right into our smartphones, so we can find out for ourselves!

[Diagram: how the accelerometer maps to the G’s measured by the Roller Coaster Test Meter]

The above diagram shows the axes I chose so that your phone could measure acceleration while resting safely in a zipped or sealed pocket while you rode a roller coaster. Vertical Gs are along your phone’s x-axis while lateral G’s are measured along your phone’s z-axis. This assumes that you put your phone into your pocket with your screen facing to your left and top-first, by convention.
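
Since the conversion from raw accelerometer readings to G's is just arithmetic, here is a quick sketch of it (the app itself is an Ionic/Cordova JavaScript app):

```python
# Convert device-motion readings (m/s^2) into the G's described above: with the
# phone pocketed screen-left and top-first, x maps to vertical and z to lateral.
STANDARD_GRAVITY = 9.80665  # m/s^2

def to_g_forces(accel_x, accel_z):
    """Return (vertical G's, lateral G's) from raw x/z accelerometer readings."""
    return accel_x / STANDARD_GRAVITY, accel_z / STANDARD_GRAVITY

# Sitting still, the vertical axis should read about 1 G and lateral about 0 G:
print(to_g_forces(9.81, 0.3))   # (~1.0, ~0.03)
```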

The game’s formulas for computing the ratings for each ride were somewhat mysterious until the OpenRCT2 project published its open-source code and formulas. We knew for years that the g-forces the ride produced made up the bulk of the score, with other features like theming, dueling trains, and music also contributing. There are also unique multipliers for each ride that come into play.

However, I am simply trying to build a toy that you can turn on, throw in a pocket, and share with your friends, so I avoided the design route of asking you to fill out a whole survey about the ride’s features before you get on. Instead I went a different route to produce a set of formulas that roughly approximate those in the game regardless of what kind of roller coaster you are on, mystery multipliers and all, by comparing the scores of real roller coasters to those in the game. Fortunately this summer I have had access to a roller coaster that was in the game and that I could ride in real life- a ‘boomerang’! These roller coasters are everywhere, as their small footprint and low cost make them perfect for parks wanting to add a coaster on a small budget. The model in the game is ‘Defibrillator’ and it can be found in the ‘Funtopia’ scenario of the original game.

So, readers, I rode it just for you! Just kidding- I am obsessed with roller coasters and the fact that I needed to ride one to complete this project was no coincidence. I started building a prototype of my app using Ionic and Apache Cordova, which would enable me to release my app for you on either Android or iOS without needing to rewrite any of my code. There are excellent tools for making a fun UI (I tried to keep the colors and theming true to the original game) and you can import great packages for social sharing and interfacing with the accelerometer. I ran my app and saved the base scores using the basic formulas from OpenRCT2 with no multipliers. I then tested ‘Defibrillator’ in the game to get its scores, computed my multipliers empirically to scale my ratings appropriately, and voila! We now get scores we would expect if real coasters were in the game!

[Screenshot: my Boomerang test ride with raw (unscaled) scores reported at the bottom]

Additionally I wanted to provide you with the raw data that went into your scores, just like the game. I used the awesome Chart.js library to plot the vertical and lateral G-forces live for you right on the screen, letting you have a nice plot of the forces experienced on the ride once you’re done:

[Screenshot: live plot of vertical and lateral G-forces from a ride]

Here are a few scores from my recent trip to Cedar Point and Kings Island in Ohio:

It is amazing fun- my favorite highlights are the legendary The Beast having an appropriate intensity of ‘Very-high’ and the crowd-favorite Maverick having excitement at ‘Very-high’. I even rode the new Steel Vengeance– just look at those vertical G’s! Simply download the app from Google Play or Apple App Store, insert the phone top-first and screen facing left into a pocket, and hang on tight! Obviously follow any rules about loose articles (they are there for a reason) but generally as long as you have a pocket that can be sealed this is a fun way to rate coasters, plot their g-forces, and brag to your friends about how you pulled 5G’s on Steel Vengeance this summer. Once you hit the ‘end’ button hit ‘share’ and post the scores to social media, then hit ‘clear’ and enter the next coaster’s name before going and conquering it. Have fun and make good choices!

Download on Google Play

Download on the Apple App Store