A Machine Learning Engineer’s Fresh Takes on Enterprise Deployment

1 December, 2017
Hashiam Kadhim

Hashiam Kadhim is a Machine Learning Engineer at DeepLearni.ng.  

I've learned a lot this year. I'm a recent grad, and this summer I made the move from research to industry. The choice to leave wasn't exactly easy. My background is in mathematics, and I have a Master's in Pure Mathematics from the University of Toronto. Studying math at the graduate level there meant I had a lot of opportunities to work on some of the coolest and most challenging problems in the field: areas like machine learning, general relativity, and stochastic PDEs. The hands-on opportunities for machine learning were particularly good, as the faculty there features some of the world's leading experts on the subject. It wasn't a bad gig, and I had already started thinking about going on to do my PhD.

In retrospect, though, I’m really glad I looked at some of the industry opportunities. Before long, I was hired by DeepLearni.ng as an engineer, and since then I’ve been helping them translate the latest machine learning and deep learning research into real-world applications for business. Starting at DeepLearni.ng has proved to be a lot more challenging (and exciting) than I first anticipated. Before joining the team, I was pretty confident that deploying the technology for businesses wouldn't be so different from building models for research. After all, it’s just a matter of applying a machine learning model to a dataset, right? I quickly realized that my initial guess was very wrong.

Lessons Learned: Real-world deployment

 

Research vs. applied ML: the math is the same, but the realities greatly differ

There are many challenges surrounding the deployment of machine learning for big companies, the majority of which are hard to detect at the surface level. While AI has rightfully been recognized as having huge potential for many different industries, not many people have had success deploying models that create substantial business value. Now that I'm part of a team designing and deploying AI for business, it's been thrilling to discover how to optimize for unknowns in the field while contributing to the opening chapters of the enterprise AI playbook. Here are a couple of perspectives (and surprises) I've had since making the leap to the AI industry.

Models feature complex business requirements

In a research lab, models are selected based on a set of metrics that are well studied and understood. In the enterprise this is also true, up to a point. But you're also building for business value, so a set of business metrics has to be defined and measured alongside the usual measures of model performance. Big companies also have an intricate network of processes and regulations, so we need to take compliance requirements, different user groups, and different ways of measuring risk into account, to name a few. The list of considerations is long and complicated.
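To make the idea concrete, here is a minimal, hypothetical sketch of why a business metric can disagree with a standard one. The metric, the cost figures, and the labels are all invented for illustration; real deployments define these with the client.

```python
# Hypothetical example: two models judged on a research metric (accuracy)
# and on a business metric (asymmetric error cost). All numbers invented.

def accuracy(y_true, y_pred):
    """Standard research-style metric: fraction of correct predictions."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def expected_cost(y_true, y_pred, fp_cost=10.0, fn_cost=100.0):
    """Business-style metric: a missed positive (false negative) may cost
    the business far more than a false alarm, so errors are weighted."""
    cost = 0.0
    for t, p in zip(y_true, y_pred):
        if p == 1 and t == 0:
            cost += fp_cost   # false positive
        elif p == 0 and t == 1:
            cost += fn_cost   # false negative
    return cost

y_true  = [1, 0, 1, 1, 0, 0, 1, 0]
model_a = [1, 0, 0, 1, 0, 0, 1, 0]  # one false negative
model_b = [1, 1, 1, 1, 0, 1, 1, 0]  # two false positives

# Model A wins on accuracy (0.875 vs 0.75), but Model B is far cheaper
# for the business (cost 20 vs 100) because it never misses a positive.
print(accuracy(y_true, model_a), expected_cost(y_true, model_a))
print(accuracy(y_true, model_b), expected_cost(y_true, model_b))
```

The point is that neither number alone decides which model ships; that depends on requirements the business defines.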

Deploying for enterprise means figuring out how to introduce the newest machines alongside very old ones.

Legacy infrastructure

I’ve worked in environments where the infrastructure is powered by legacy systems that have been around for decades. These systems feature a lot of moving parts, and we have to gain expertise with most, if not all of the parts in order to deploy our solutions properly. If you want to introduce technologies like GPUs or use the cloud in older environments, you need to make sure these tools don’t disrupt the existing infrastructure unnecessarily. This is a pretty big contrast from doing machine learning in a research environment, where you can use whatever tools you like more or less without consequence.

Industry data is messy

The data found in industry is drastically different from the data used in academic problems, which usually comes cleanly packaged and collected specifically for the problem at hand. Industry data often comes from many sources, oftentimes in varying states of disarray and quality. Datasets are also very big and constantly changing. Coupled with the fact that database schemas are often out of date, the reality is that there usually isn't any one person who understands the entire dataset. Sometimes it's difficult to locate someone who knows what long-standing database entries mean.

Because of all of these factors, fitting a dataset into a machine learning paradigm isn't always obvious. We often have to devise clever solutions to ensure the data, both input and target, required for our business-focused problems is comprehensively sourced from the enterprise, while also meeting the organization's highest standards for privacy and compliance.
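A tiny, hypothetical sketch of what "many sources in varying states of disarray" means in practice. The field names, formats, and records are invented for the example; real reconciliation logic is driven by the client's actual schemas.

```python
# Invented example: two source systems describe the same entity with
# different field names and different value formats.

def normalize_record(raw):
    """Map an inconsistent source record onto one unified schema."""
    # Different systems name the customer key differently.
    cust_id = raw.get("cust_id") or raw.get("CUSTOMER_NO") or raw.get("id")
    # Amounts arrive as floats, plain strings, or strings with symbols.
    amount = raw.get("amount", raw.get("AMT", "0"))
    if isinstance(amount, str):
        amount = float(amount.replace("$", "").replace(",", ""))
    return {"cust_id": str(cust_id).strip(), "amount": round(float(amount), 2)}

legacy_rows = [{"CUSTOMER_NO": " 1001 ", "AMT": "$1,250.00"}]  # old mainframe extract
modern_rows = [{"cust_id": "1002", "amount": 75.5}]            # newer service

unified = [normalize_record(r) for r in legacy_rows + modern_rows]
print(unified)
# [{'cust_id': '1001', 'amount': 1250.0}, {'cust_id': '1002', 'amount': 75.5}]
```

Multiply this by hundreds of tables and years of schema drift, and it becomes clear why no single person understands the whole dataset.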

Collaborating with clients

In our work, DeepLearni.ng's top focus is helping clients build machine learning capabilities by providing access to the best education and tools. We've learned that we can't do this unless we first have a comprehensive understanding of how the business works, so that the knowledge and tools we provide are customized for each client's organization. A big part of my job is helping clients understand both the technology's current capabilities and its limitations. Before building and deploying models, we run a series of hands-on workshops that set out to demystify AI so it can be applied to the business more successfully. From end to end, through designing, building, and deploying machine learning models, we collaborate with clients so that our work extracts maximum value from machine learning while also building momentum and enthusiasm for the technology across their organizations.

Extremely tight deadlines for project completion

DeepLearni.ng has built a reputation for getting things done ten times faster than anyone else. On-site deployment projects are substantial, so there is an intense project management component, and close teamwork is crucial to success. Projects have to be completed in a matter of weeks. To deliver on time, my team and I coordinate closely and assign tasks amongst ourselves to capitalize on our diverse set of strengths. The team's focus on collaboration, not only with each other but also with clients, has been instrumental to successful projects like the one we did for Scotiabank, where we built and deployed a machine learning model to solve one of the bank's pressing business problems in only four months from beginning to end.

Conclusion

Putting a machine learning solution into production from beginning to end is an extremely gratifying experience, and there’s nothing cooler than knowing I helped build an AI that is saving millions for a company. Industry work has proved to be a tremendous learning experience, teaching me a lot (and fast) about the challenges surrounding machine learning deployment that aren’t really faced in academia. Deploying for enterprise is more challenging than I imagined, but also more rewarding.


Want to meet up and learn more about what it's like to make the jump from research to industry? I'll be heading back to UofT's Mathematics Department on January 23 to take part in a Mentorship Meal, hosted by the university's Backpack 2 Briefcase program. The event takes place on the St. George campus at 6pm, and registration is limited. Get in touch by email at deploy@deeplearni.ng for more info.