I am a big fan of hackathons. I think they promote the aspects of software development I most enjoy: creativity and problem solving in a collaborative atmosphere. My first hackathon taught me the basics of Git and Flask, and let me apply concepts from my psychology classes to create something people could interact with. I had been looking forward to the 2016 TechCrunch hackathon because I had two ideas I'd been thinking about for a while but had not really investigated or spent time developing.
What was the Event?
The event was the annual TechCrunch Disrupt SF hackathon, a popular hackathon in the Bay Area with hundreds of participants. TechCrunch has a strong presence in new tech trends and startups. The hackathon essentially involved sitting in a huge hangar on Pier 48, next to the SF Giants' stadium, with open wifi and a lot of computers. People form groups at the event, or arrive with groups already formed. The only rules are that you have to start your project during the hackathon, you only have 24 hours to complete it, and team size has to be less than 5. Many companies sponsor the event and provide access to software and APIs for free or at a discounted price.
As far as how I did: well, I got an article written about me in TechCrunch, which I thought was huge. https://techcrunch.com/2016/09/11/not-today-satan/
Out of the 110+ projects, I made two of them, and one seemed to garner a lot of attention, although I won no prizes. I never really go to hackathons for prizes, though. I went to hang out with my friends Marion and Will, and to work on projects I'd been putting off for a while. They also serve breakfast, lunch, and dinner (and free beer at midnight ;) )
The second idea was a fun one I had while playing a lovely game with the local San Francisco metermaids during days spent working from home. That project is posted on Devpost here.
I will detail the work on the Minecraft + Cloudbrain project in a later post.
Details of the metermaid project follow:
It constantly monitors and analyzes camera frames, applying the following logic:
- Does this frame contain a car?
- If the frame contains a car, is the car a metermaid vehicle?
- If so, send a message with a link to an image to the recipient.
- Otherwise, analyze the next frame.
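That loop can be sketched roughly like this. The helper functions, threshold values, and polling interval are all my illustrative assumptions, not the actual hackathon code; `capture_frame`, `classify_frame`, and `send_alert` are placeholders for whatever camera library, trained model, and messaging code you wire in.

```python
import time

# Hypothetical helpers -- stand-ins for the real camera capture,
# TensorFlow classifier, and upload/SMS code.
def capture_frame():
    """Grab the next frame from the Pi camera (placeholder)."""
    raise NotImplementedError

def classify_frame(frame):
    """Return (car_probability, metermaid_probability) for a frame."""
    raise NotImplementedError

def send_alert(frame):
    """Upload the frame and text a link to the recipient (placeholder)."""
    raise NotImplementedError

CAR_THRESHOLD = 0.7        # assumed confidence cutoffs
METERMAID_THRESHOLD = 0.7

def should_alert(car_prob, metermaid_prob,
                 car_threshold=CAR_THRESHOLD,
                 metermaid_threshold=METERMAID_THRESHOLD):
    """Apply the frame logic: alert only if a car is present
    and that car looks like a metermaid vehicle."""
    return car_prob >= car_threshold and metermaid_prob >= metermaid_threshold

def monitor_loop(poll_seconds=1.0):
    """Analyze frames forever, alerting when a metermaid is detected."""
    while True:
        frame = capture_frame()
        car_prob, metermaid_prob = classify_frame(frame)
        if should_alert(car_prob, metermaid_prob):
            send_alert(frame)
        time.sleep(poll_seconds)  # otherwise, move on to the next frame
```

Keeping the decision in its own `should_alert` function makes the two-step check easy to test and tune without touching the camera code.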
Here’s the website (the hackathon was giving away free domain names that ended in .space): http://peoplesparking.space/
How did I make it? How much time and energy did it take?
It took me less than 15 hours to make. I had told some friends about wanting to do this project for a while. As you can see, I pretty much connected two open projects from blogs on the internet. One blog post/tutorial covered car recognition (and speed detection) using a Raspberry Pi and camera. The other was a tutorial from Google on image classification of different sets of flowers.
When I started the hackathon, I did not think I would be using TensorFlow (which is the Google project). In fact, I didn't even know if I would be doing this project at all. I had started another project: using a Raspberry Pi running Minecraft (a popular computer video game), I created a mod to a counter-strike-like game that changed the gameplay based on a person's heart rate. The heart monitor data was streaming from an open source analytics platform called Cloudbrain (http://getcloudbrain.com/), which my friends and I have developed over the past couple of years. We initially used it for streaming EEG data (as you might remember from the Exploratorium exhibition my group put on a few years back). The Raspberry Pi game info can be found here: https://devpost.com/software/minecraft-pi-ws I did this project first because I wanted to work on a new Cloudbrain API endpoint that analyzed heart rate data from an OpenBCI EEG device. The API endpoint was created by Marion Le Borgne and Will Wnekowicz.
So, I finished the majority of the code for the Minecraft game in less than twelve hours. With a bunch of time left, I decided to move on to the next project I wanted to do. My friend Paul was there to help grab a bunch of photos from Google Images, which were needed to train the TensorFlow classifier. While he was doing that, I was setting up the Raspberry Pi with the software packages and tools required to do the frame-by-frame image analysis. Everything came together without a hitch.
A bit about classifiers. Classifiers are trained to recognize patterns, and TensorFlow is special in that it has well-developed classifiers for images. All that was needed to train the TensorFlow classifier was scraping Google Images for two types of images: metermaid monitor vehicles, and non-metermaid vehicles (the kind you would encounter on a city street). The classifier analyzes each 'bucket' (target and non-target), trains during this analysis, and produces a trained model that, given a new image, returns a percentage likelihood that the image corresponds to each of the two buckets.
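To make the "percentage per bucket" idea concrete, here is a minimal sketch of how raw per-bucket scores can be turned into percentage likelihoods with a softmax. The scores and label names are made up for illustration; the actual TensorFlow retraining tutorial handles this step internally.

```python
import math

LABELS = ["metermaid", "not_metermaid"]  # the two training buckets

def softmax_percentages(scores):
    """Convert raw per-bucket scores into percentages summing to 100."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [100.0 * e / total for e in exps]

def classify(scores, labels=LABELS):
    """Pair each bucket with its percentage likelihood, best first."""
    pcts = softmax_percentages(scores)
    return sorted(zip(labels, pcts), key=lambda p: p[1], reverse=True)
```

For example, `classify([2.0, 0.5])` ranks "metermaid" first with a likelihood above 80%, which is the kind of output the detection loop thresholds against.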
The Raspberry Pi I was using was connected over wifi, but I also tested it over a 3G/4G cellular network, and it was able to upload a compressed image and send a text message to my phone with very little latency.
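The text-message step can be sketched like this. The message format and the `build_alert` helper are my illustrative assumptions; in practice the composed body would be handed to an SMS service (such as Twilio) after the compressed frame has been uploaded somewhere linkable.

```python
from datetime import datetime

def build_alert(image_url, confidence, when=None):
    """Compose the SMS body: a timestamp, the classifier's confidence,
    and a link to the uploaded frame (placeholder format)."""
    when = when or datetime.now()
    return ("Metermaid spotted at {t} "
            "({c:.0f}% confidence): {u}").format(
                t=when.strftime("%H:%M"), c=confidence, u=image_url)
```

Separating message composition from sending keeps the network-dependent part (upload and SMS delivery) in one small, swappable function.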
It took about an hour and a half to create my websites and documentation for my two projects.
There has been some interest in developing this project further on Indiegogo (a crowdfunding platform).
The biggest challenge was not being able to install my device in an inconspicuous location in or on my truck. Break-ins are common in SF, and I do not want to make my truck more of a target by having a fancy computer in the side window.
I am working on doing the classification on the Raspberry Pi itself; it should be able to handle that. I would also like to clean up the code to give it a more general structure for other purposes.