How to Win a Hackathon with Machine Learning

2018 February 25


Last weekend, I won the Google Data Science award, the John Snow Labs award, and third place overall at HopHacks for ASSIST, an AI which predicts a stroke patient's chance of survival. I built it with Bill Zhang, Edward Paik, and Derrick Wilson-Duncan.

To win the hackathon, we had to find stroke patient data, build a model on top of that data, and make the result easy to use. Because this was a medical project, I did not understand all of the symptoms and techniques referenced in the data, so my premed friend Bill Zhang read the medical literature to fill in those gaps. My other friends, Edward and Derrick, built the client software, which was fairly straightforward.

I'll focus here on how I created the AI once we had the data, and what you can learn from the process.

First off, don't worry too much about which programming language you use. I used Python 3 because I am most familiar with it, but you may not want to. For example, game developers who have worked with Lua may prefer Lua's Torch package for machine learning. In general, avoid verbose languages like Java or C++ to save your limited time, and avoid languages with slow implementations like Ruby, which will make training the model painfully slow. Keep in mind that some of my tips may not apply if you aren't using Python 3.

With the language out of the way, I recommend using Keras, which makes creating the model incredibly simple. For reference, here is the code which declares the model's architecture:

from keras.models import Sequential
from keras.layers import Dense, Flatten, Conv2D, Dropout, BatchNormalization
from keras.utils import np_utils

# Two hidden layers of 30 sigmoid units over the 15 input features,
# followed by a single output unit for the survival prediction.
model = Sequential([
    Dense(30, input_dim=15, activation='sigmoid'),
    Dense(30, activation='sigmoid'),
    Dense(1)])
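For completeness, here is a rough sketch of how a model like this can be compiled and trained in Keras. The optimizer, loss, epoch count, and the X and y variables are illustrative assumptions, not the exact settings from my project:

# Sketch of compiling and training the model above; the optimizer,
# loss, and training settings are illustrative assumptions.
model.compile(optimizer='adam', loss='mean_squared_error')

# X is a (num_patients, 15) array of features and y holds the
# survival outcomes for each patient.
model.fit(X, y, epochs=50, batch_size=32, validation_split=0.2)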

In total, the code which powers the AI was about 200 lines. You can find it here.

Aside from using Keras, make sure that you don't overfit your model: get plenty of data. For example, I augmented my dataset by slightly perturbing patient records, such as age or symptoms, so that I could get more data without losing validity. This took me from roughly 20k to 100k data points.
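To make that concrete, here is a small sketch of one way to do this kind of augmentation with pandas and NumPy. The 'age' column name, the noise scale, and the number of copies are assumptions for illustration, not my exact preprocessing:

import numpy as np
import pandas as pd

def augment(patients, copies=4, age_jitter=2.0):
    # Create perturbed copies of each patient record. The 'age'
    # column, jitter size, and copy count are illustrative only.
    augmented = [patients]
    for _ in range(copies):
        copy = patients.copy()
        copy['age'] = copy['age'] + np.random.normal(0, age_jitter, size=len(copy))
        augmented.append(copy)
    return pd.concat(augmented, ignore_index=True)

# 20k original rows plus four perturbed copies each is roughly 100k rows.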

Finally, make sure that your architecture is reasonable for your problem. I constantly see people reach for absurdly complex models purely for the buzzwords when a simpler model would be just as effective and much faster to train.

You can run ASSIST on my website.