Microsoft OpenHack: A Deep Dive Into Machine Learning

Sebastian Bühnemann 4th December 2018

Cloudreach Cloud Systems Developer, Sebastian Bühnemann, recently discovered how hackathons help you to come to grips with a new subject when he took part in Microsoft’s machine learning themed OpenHack event.


Having worked with big data and machine learning at a high level for a year, I was always keen to get my hands dirty and write code to solve problems myself. However, the vast choice of tools, frameworks and data handling approaches kept me from getting started. So when a free hackathon on machine learning was announced on a Cloudreach mailing list, I knew the time was right to give it a go!


Fear not, it’s well organised

When I arrived at the Microsoft Reactor in London, knowing no one, I discovered that it was actually quite easy to mingle in the open and friendly atmosphere. On check-in, I was assigned to a table and met my team, which consisted of four people from different companies and a coach from Microsoft. To minimise the possible issues of setting up a development machine (and of course to be cloudy), we used a machine image in the Azure Cloud. Logins for each attendee and an account for each team were provided at the beginning of the event.


The Challenges

As with many courses, the level of difficulty increased with each challenge. There were seven challenges in total, and the expectation was set to complete five within the three days of the event. All challenges related to a fictional outdoor gear company that had various machine learning use cases.

For the first challenge, we used the Azure Custom Vision Service to build recognition models for the different types of gear the company sells. After that, we built a simple Python web service with Docker that would recognise the type of gear in any image posted to it, using the Azure Custom Vision Service internally. It was a great introduction, and such an early success certainly helped with motivation for the challenges ahead.
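The core of that web service boils down to forwarding the posted image bytes to the Custom Vision prediction REST API. The sketch below is a minimal, stdlib-only version of that call; the endpoint region, project ID, iteration name and key are placeholders you get from your own Custom Vision project, and the exact function names are mine, not from the event.

```python
import json
import urllib.request


def prediction_url(endpoint, project_id, iteration):
    """Build the Custom Vision v3.0 'classify image' prediction URL."""
    return (f"{endpoint}/customvision/v3.0/Prediction/"
            f"{project_id}/classify/iterations/{iteration}/image")


def classify_gear(image_bytes, endpoint, project_id, iteration, key):
    """POST raw image bytes to Custom Vision and return the top gear tag."""
    req = urllib.request.Request(
        prediction_url(endpoint, project_id, iteration),
        data=image_bytes,
        headers={"Prediction-Key": key,
                 "Content-Type": "application/octet-stream"},
    )
    with urllib.request.urlopen(req) as resp:
        predictions = json.load(resp)["predictions"]
    # The service returns one probability per trained tag;
    # pick the most likely one.
    best = max(predictions, key=lambda p: p["probability"])
    return best["tagName"], best["probability"]
```

In the actual challenge this call sat behind a small web framework route inside a Docker container, so any client could POST an image and get the gear type back as JSON.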

In the challenges that followed we encountered more broadly used machine learning tools. An experienced ML developer introduced us to the most popular ones, and we ended up using Keras with Jupyter. Keras is a high-level layer on top of frameworks such as TensorFlow and CNTK; it is very easy to use and, together with Jupyter, let us easily look at the data preparation results and graphs in the following tasks.

We actually started out with a classic computer vision approach (Support Vector Machines) to reproduce the results of the Azure Custom Vision Service. This showed us what computer vision looked like before the advent of deep learning: limited accuracy and a lot of necessary preprocessing. In a breakout session, we attended a great talk about deep learning with a good balance between technical details and a bird’s-eye view. This gave us a good primer for the upcoming challenges.
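To give a feel for the classic approach, here is a tiny SVM sketch, assuming scikit-learn. Real images would first go through the preprocessing mentioned above (resizing, feature extraction such as HOG); the synthetic 8x8 "patches" below stand in for those preprocessed feature vectors and are not the event's data.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Two fake classes standing in for preprocessed image features:
# darker vs. brighter 8x8 grayscale patches, flattened to 64 values.
dark = rng.uniform(0.0, 0.4, size=(30, 64))
bright = rng.uniform(0.6, 1.0, size=(30, 64))
X = np.vstack([dark, bright])
y = np.array([0] * 30 + [1] * 30)

# Fit a Support Vector Machine with an RBF kernel on the feature vectors.
clf = SVC(kernel="rbf")
clf.fit(X, y)

# Classify a new, clearly bright patch.
sample = rng.uniform(0.6, 1.0, size=(1, 64))
print(clf.predict(sample))  # prints [1], the "bright" class
```

The point of the exercise was exactly this pipeline: the SVM only sees whatever hand-crafted features you feed it, which is where the limited accuracy came from.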

Now that we knew what we were doing in deep learning, we solved the image classification exercise again using a Convolutional Neural Network (CNN). As promised, the classification accuracy jumped from 80% to 92%. A trained CNN is essentially a set of weight matrices and parameterised functions, where the weights and parameters are the result of training. Keras allowed us to store the trained model in a file and load it again, so our classification web service could work with Keras and our self-trained model.
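A sketch of what such a Keras CNN plus the save/load round trip looks like is below. The layer sizes, input resolution and the four gear categories are illustrative assumptions, not the event's exact architecture.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# A small CNN: convolution/pooling blocks extract image features,
# dense layers map them to class probabilities.
model = keras.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(4, activation="softmax"),  # e.g. four gear categories
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# After model.fit(...) on the gear images, the learned weights and
# architecture go into a single file that the web service can load:
model.save("gear_classifier.h5")
reloaded = keras.models.load_model("gear_classifier.h5")

# The reloaded model returns one probability per gear category.
probs = reloaded.predict(np.zeros((1, 128, 128, 3)))
```

Saving and reloading is what made it easy to train in a notebook and serve the very same model from the Dockerised web service.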

The final challenge was to detect helmets in an image. This is trivial for the human eye, but object detection is really challenging for a machine. Eventually, we figured out that we had to take a model pre-trained on COCO (Common Objects in Context) and re-train it with about 50 samples, which we created by tagging some of the helmets in the provided images by hand. For this transfer learning exercise, we actually needed a GPU-powered instance. Even with GPU support, the training took about ten minutes, compared to two minutes on a CPU for the earlier image classification task. In the end, all helmets larger than a few pixels were detected with 94% accuracy.
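The transfer learning idea itself can be sketched in a few lines of Keras. This is a simplified analogue of what we did: instead of a COCO-pre-trained detector it uses an image classification backbone with a tiny new head (helmet present or not). In practice you would pass weights="imagenet" to load pre-trained weights; weights=None is used here purely to avoid the download, since the structure (frozen base, small trainable head) is the point.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Pre-trained backbone (weights=None here only to skip the download;
# use weights="imagenet" for actual transfer learning).
base = keras.applications.MobileNetV2(
    input_shape=(128, 128, 3), include_top=False, weights=None)
base.trainable = False  # freeze the pre-trained feature extractor

# Only this small head on top gets trained on the hand-tagged samples.
model = keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),  # helmet present or not
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# model.fit(...) would now update only the tiny new head, which is
# why ~50 hand-tagged samples can be enough.
```

Because almost all of the network is frozen, the pre-trained features do the heavy lifting and only a handful of new parameters need to be learned from the 50 tagged images.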



One of the most challenging parts for me was letting go of perfectionism and not over-engineering our simple solutions. Our mixed team of software developers, a data scientist and a business analyst helped a lot in ensuring we didn’t get stuck on implementation details but actually solved the core data science problems, explaining the issues and solutions to one another. It was also interesting to work with people I had just met, debugging issues no one had a clue about while racing against the clock.

Eventually, we were the only team to complete all challenges in time, just half an hour before the deadline. The three days were definitely intense and challenging, but they gave me an understanding of deep learning I wouldn’t have achieved in an online course, especially within just a few days. (I also now have more Microsoft merch than I could ever possibly use!)

TL;DR: Do hackathons. They teach you a subject quickly and thoroughly. Having a diverse team is a big plus. Even three months after the event I still remember important details and caveats.


Register for Microsoft’s next OpenHack event here.