
What do music, laughter, data, and AWS all have in common?
Over the course of four days, more than 30 analysts, data engineers, data scientists, and software engineers collaborated in teams of four to build out a pipeline in the well-known AWS environment, with access to a variety of AWS services, to a) classify song genre from lyrics and b) perform next-word prediction on song lyrics.
The excitement around the Hackathon was contagious – even Arity employees not involved could feel the energy radiating off the participants. Together they struggled, persevered, laughed, learned, shared, and improved. It was a long four days, but our teams came out alive and with a lot more insight into the vast range of skillsets their own teammates possess! The event also gave them the opportunity to collaborate and team up with people they don't work with on a day-to-day basis.
Along the way, we connected with our analytics and engineering teams to hear about their experience. Here's what they had to say:
The Hackathon was a fun opportunity to grow my natural language processing skills while learning to use some unfamiliar AWS tools. I really enjoyed collaborating on a small team with people I don't normally work with toward a common goal: winning! We laughed over our mistakes, we shook our fists in rage at SageMaker, we persevered, and in the end we created an unexpectedly decent product. I was pleasantly surprised when the classification model correctly predicted the genre for 657 songs out of 1,000, putting us in second place for that competition. Not bad for a few days of work! – Mel Hanna
It was a fun and challenging experience where we got to learn more about Amazon Web Services, NLP, and deep learning techniques. It was great winning Most Compelling Presentation, but it was also cool learning about the approaches of the other teams. – Ben Yeomans
I had a blast at the Hackathon. It was a great opportunity to step away from work and invest time in learning new technologies. I enjoyed bonding with teammates while we worked together to solve a complex problem using methods that had practical applications. The competitive setting brought out the best in every team, and it was great seeing all the creative approaches each team took. – Derek Wong
As a data scientist in training, the AWS Hackathon was an awesome opportunity to get my feet wet and learn from other data scientists at Arity. The problem required all of us to step outside of our comfort zones and play roles and use skills we typically don't exercise in our normal day-to-day work; I think my team has a greater appreciation for data engineers! It was a long, challenging four days, but it was completely worth the experience. I learned a ton from my team, and we had a blast working together, which I think was evident from our unique (and winning!) presentation. – Hillary Mahoney
I walk into the office at a brisk pace, as brisk as I can without looking like I'm running. I grab a few leftover scraps of bacon and some coffee and head to my desk. It's 8:53 a.m.; the first file should land in a few minutes. I meet with my team in an awkward huddle room, but it doesn't matter. Satish has already prepared the Lambda to read and write the data, so we hit the ground running at 9 a.m. sharp.

We pull up the data, and the columns read song name, genre, artist, lyrics, and a few others. Scrolling down, we see mostly English lyrics, but about halfway down the list we see lyrics in Mandarin and Cantonese, and near the bottom of the list, Farsi. The sentences are arranged in a way that denotes rhythm; we can see lines ending in words that rhyme. There are brackets that enclose single capital letters, E, C, F: chords, we conclude. After a glance at the data, there is clearly some preprocessing we need to do before passing it into an algorithm. What issues haven't we seen that are not in the first few rows? What algorithm should we use? What features should we build? After a flurry of ideas, we break off into our individual tasks and plan to reconvene in a few hours.

My task is language detection. I create a new notebook and see mostly Python kernels and R (beta); beta, as in not working. I have a decade of experience coding in R and half a Coursera course of experience coding in Python. Sigh. Python it is. I meet with my team around lunchtime and hand off the predicted languages. We decide what needs to be done next and assign tasks to everyone: data cleaning, data enrichment, model building, data visualization. I am tasked with building the model that takes in song lyrics, predicts the probability of each genre, and generates song lyrics. This task requires working knowledge of NLP, neural networks, and Python, of which I have none. My next Google searches are some combination of the words: genre, prediction, how to, lstm, python, network, rnn, tensorflow, layers, embedding.

I find a paper from Stanford that uses hierarchical attention networks and produced state-of-the-art accuracy in genre prediction using lyrics. It uses stacked recurrent neural networks at the word level, followed by an attention model that extracts the words most important to the meaning of each sentence and aggregates the representations of those informative words into a sentence vector. The same procedure is applied to the derived sentence vectors, generating a vector that captures the meaning of the whole document, which can then be passed on for text classification. The idea is "words make sentences and sentences make documents." I code it up by end of day.

I find out the next day that we should be doing all of this in SageMaker. I have to scrap my HAN model and use some of the built-in algorithms in SageMaker. It is possible to build a custom model and put it in SageMaker, but that requires competent Python programming ability and time, neither of which I have. SageMaker has some decent built-in algorithms: there is BlazingText for classification and Sequence-to-Sequence (seq2seq) for generating text. The BlazingText model can be trained on all the data in several minutes, while the seq2seq takes roughly three hours on a 5% sample. The input, or source sequence, to the seq2seq model is a sliding window of 96 words of the lyrics, and the target sequence is the following 3 words; this results in a lot of duplication of the data. I put in a sample of lyrics and inspect the output. It is mostly nonsense, but I spot some Spanish.
I didn't realize this when building it, but this model is built for translation: if you give it Spanish, it will generate Spanish; if you give it German, it will generate German. Amazing. – Tam Huynh
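Tam's story touches on a few concrete techniques: language detection, a hierarchical attention network, BlazingText classification, and seq2seq windowing. For the technically curious, here are some sketches of what those pieces might look like. They are illustrations, not the team's actual code.

A minimal language-detection pass could look like the following, assuming the open-source langdetect package and a pandas DataFrame with a lyrics column (both assumptions, not necessarily what the team used):

```python
# Sketch: per-song language detection with langdetect (an assumption; the
# team's actual approach isn't specified in the post).
import pandas as pd
from langdetect import detect, DetectorFactory

DetectorFactory.seed = 0  # make langdetect's results deterministic across runs

def detect_language(lyrics: str) -> str:
    """Return an ISO 639-1 code like 'en', 'zh-cn', or 'fa', or 'unknown'."""
    try:
        return detect(lyrics)
    except Exception:  # empty strings or pure punctuation raise an error
        return "unknown"

songs = pd.DataFrame({"lyrics": ["hello darkness my old friend", ""]})
songs["language"] = songs["lyrics"].apply(detect_language)
print(songs)
```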
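The hierarchical attention network idea ("words make sentences and sentences make documents") can be sketched in a few dozen lines. This is an illustrative PyTorch version with made-up layer sizes, not the Stanford paper's reference implementation:

```python
# Sketch: a hierarchical attention network (HAN) for genre classification.
# Word-level GRU + attention builds sentence vectors; sentence-level GRU +
# attention builds a document vector, which feeds a genre classifier.
import torch
import torch.nn as nn

class Attention(nn.Module):
    """Score each timestep, softmax the scores, return a weighted sum."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.context = nn.Linear(dim, 1, bias=False)

    def forward(self, x):                                  # x: (batch, steps, dim)
        scores = self.context(torch.tanh(self.proj(x)))    # (batch, steps, 1)
        weights = torch.softmax(scores, dim=1)
        return (weights * x).sum(dim=1)                    # (batch, dim)

class HAN(nn.Module):
    def __init__(self, vocab_size, num_genres, embed_dim=100, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.word_gru = nn.GRU(embed_dim, hidden, bidirectional=True, batch_first=True)
        self.word_attn = Attention(2 * hidden)
        self.sent_gru = nn.GRU(2 * hidden, hidden, bidirectional=True, batch_first=True)
        self.sent_attn = Attention(2 * hidden)
        self.classify = nn.Linear(2 * hidden, num_genres)

    def forward(self, docs):              # docs: (batch, sentences, words) of token ids
        b, s, w = docs.shape
        words = self.embed(docs.view(b * s, w))            # embed every word
        word_states, _ = self.word_gru(words)
        sent_vecs = self.word_attn(word_states)            # one vector per sentence
        sent_states, _ = self.sent_gru(sent_vecs.view(b, s, -1))
        doc_vec = self.sent_attn(sent_states)              # one vector per document
        return self.classify(doc_vec)                      # genre logits

logits = HAN(vocab_size=20000, num_genres=10)(torch.randint(0, 20000, (2, 8, 12)))
```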
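BlazingText's supervised mode expects one training example per line, with the label prefixed as __label__name followed by space-tokenized text. Here is a sketch of preparing that file and configuring a training job with the SageMaker Python SDK; the role ARN, S3 path, instance type, and hyperparameter values are placeholders, and running it requires AWS credentials:

```python
# Sketch: genre classification with SageMaker's built-in BlazingText.
import sagemaker
from sagemaker.estimator import Estimator

def to_blazingtext_line(genre: str, lyrics: str) -> str:
    tokens = lyrics.lower().split()  # crude whitespace tokenization for the demo
    return f"__label__{genre} " + " ".join(tokens)

with open("train.txt", "w") as f:
    f.write(to_blazingtext_line("Rock", "We will rock you") + "\n")

session = sagemaker.Session()
image = sagemaker.image_uris.retrieve("blazingtext", session.boto_region_name)
est = Estimator(
    image_uri=image,
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder role
    instance_count=1,
    instance_type="ml.c5.4xlarge",                        # placeholder instance
    sagemaker_session=session,
)
est.set_hyperparameters(mode="supervised", epochs=10)
# est.fit({"train": "s3://your-bucket/lyrics/train.txt"})  # uncomment to launch
```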
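Finally, the seq2seq windowing Tam describes, where each source sequence is a 96-word slice of the lyrics and each target is the following 3 words, is simple to sketch. The stride of 1 is an assumption, and it is exactly what makes the dataset balloon with near-duplicate rows:

```python
# Sketch: building (source, target) pairs for the seq2seq model, with a
# 96-word source window and a 3-word target, as described in the post.
from typing import Iterator, List, Tuple

def windows(words: List[str], source_len: int = 96,
            target_len: int = 3) -> Iterator[Tuple[List[str], List[str]]]:
    # Slide one word at a time, so consecutive pairs overlap heavily.
    for i in range(len(words) - source_len - target_len + 1):
        source = words[i : i + source_len]
        target = words[i + source_len : i + source_len + target_len]
        yield source, target

lyrics = "we will we will rock you rock you all night long".split()
pairs = list(windows(lyrics, source_len=4, target_len=3))  # tiny sizes for demo
```

That duplication is consistent with the training times in the post: minutes for BlazingText on the full data, but hours for seq2seq on even a 5% sample.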