Scale Your Data Science Teams With Machine Learning Operations Principles


November 16th, 2020

51 mins 58 secs

Your Hosts

About this Episode

Summary

Building a machine learning model is a process that requires well-curated, clean data and a lot of experimentation. Doing it repeatably and at scale with a team requires a way to share your discoveries with your teammates, which has led to a new set of operational ML platforms. In this episode Michael Del Balso shares the lessons that he learned from building the platform at Uber for putting machine learning into production. He also explains how the feature store is becoming the core abstraction for data teams to collaborate on building machine learning models. If you are struggling to get your models into production, or to scale your data science throughput, then this interview is worth a listen.
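To make the feature store idea concrete, here is a minimal sketch of the pattern discussed in the episode: features are defined once, materialized by a pipeline, and then read identically for training and serving. The `FeatureStore` class and its methods below are hypothetical illustrations, not Tecton's or Michelangelo's actual API.

```python
# Hypothetical, minimal illustration of the feature store pattern.
# None of these names correspond to a real library's API.
from datetime import datetime, timedelta


class FeatureStore:
    """Toy in-memory feature store: shared definitions, shared values."""

    def __init__(self):
        self._definitions = {}   # feature name -> transformation function
        self._values = {}        # (feature name, entity id) -> value

    def register_feature(self, name, transformation):
        # A data scientist defines a feature once; teammates reuse it.
        self._definitions[name] = transformation

    def materialize(self, name, entity_id, raw_record):
        # A batch or streaming job computes and stores the feature value.
        self._values[(name, entity_id)] = self._definitions[name](raw_record)

    def get_features(self, names, entity_id):
        # Training and serving read the same stored values, avoiding skew.
        return {n: self._values.get((n, entity_id)) for n in names}


if __name__ == "__main__":
    store = FeatureStore()
    store.register_feature(
        "trips_last_7d",
        lambda record: sum(1 for t in record["trip_times"]
                           if t > datetime.now() - timedelta(days=7)),
    )
    store.materialize("trips_last_7d", entity_id="rider_42",
                      raw_record={"trip_times": [datetime.now()]})
    print(store.get_features(["trips_last_7d"], entity_id="rider_42"))
```

The point of the abstraction is that the definition, the computed values, and the serving path are shared across the team rather than reimplemented per model.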

Announcements

  • Hello and welcome to Podcast.__init__, the podcast about Python and the people who make it great.
  • When you’re ready to launch your next app or want to try a project you hear about on the show, you’ll need somewhere to deploy it, so take a look at our friends over at Linode. With the launch of their managed Kubernetes platform it’s easy to get started with the next generation of deployment and scaling, powered by the battle-tested Linode platform, including simple pricing, node balancers, 40Gbit networking, dedicated CPU and GPU instances, and worldwide data centers. Go to pythonpodcast.com/linode and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
  • Do you want to get better at Python? Now is an excellent time to take an online course. Whether you’re just learning Python or you’re looking for deep dives on topics like APIs, memory management, async and await, and more, our friends at Talk Python Training have a top-notch course for you. If you’re just getting started, be sure to check out the Python for Absolute Beginners course. It’s like the first year of computer science that you never took, compressed into 10 fun hours of Python coding and problem solving. Go to pythonpodcast.com/talkpython today and get 10% off the course that will help you find your next level. That’s pythonpodcast.com/talkpython, and don’t forget to thank them for supporting the show.
  • Python has become the default language for working with data, whether as a data scientist, data engineer, data analyst, or machine learning engineer. Springboard has launched their School of Data to help you get a career in the field through a comprehensive set of programs that are 100% online and tailored to fit your busy schedule. With a network of expert mentors who are available to coach you during weekly 1:1 video calls, a tuition-back guarantee that means you don’t pay until you get a job, resume preparation, and interview assistance, there’s no reason to wait. Springboard is offering up to 20 scholarships of $500 towards the tuition cost, exclusively to listeners of this show. Go to pythonpodcast.com/springboard today to learn more and give your career a boost to the next level.
  • Your host as usual is Tobias Macey, and today I’m interviewing Mike Del Balso about what is involved in operationalizing machine learning and his work at Tecton to provide that platform as a service.

Interview

  • Introductions
  • How did you get introduced to Python?
  • Can you start by describing what is encompassed by the term "Operational ML"?
    • What other approaches are there to building and managing machine learning projects?
    • How do these approaches differ from operational ML in terms of the use cases that they enable or the scenarios where they can be employed?
  • How would you characterize the current level of maturity for the average organization or enterprise in terms of their capacity for delivering ML projects?
  • What are the necessary components for an operational ML platform?
  • You helped to build the Michelangelo platform at Uber. How did you determine what capabilities were necessary to provide a unified approach for building and deploying models?
  • How did your work on Michelangelo inform your work on Tecton?
  • How does the use of a feature store influence the structure and workflow of a data team?
  • In addition to the feature store, what are the other necessary components of a full pipeline for identifying, training, and deploying machine learning models?
  • Once a model is in production, what signals or metrics do you track to feed into the next iteration of model development?
  • One of the common challenges in data science and machine learning is managing collaboration. How do tools such as feature stores or the Michelangelo platform address that problem?
  • What are the most interesting, unexpected, or challenging lessons that you have learned while building operational ML platforms?
  • What advice or recommendations do you have for teams who are trying to work with machine learning?
  • What do you have planned for the future of Tecton?

Keep In Touch

Picks

Closing Announcements

  • Thank you for listening! Don’t forget to check out our other show, the Data Engineering Podcast, for the latest on modern data management.
  • Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
  • If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@podcastinit.com with your story.
  • To help other people find the show, please leave a review on iTunes and tell your friends and co-workers.
  • Join the community in the new Zulip chat workspace at pythonpodcast.com/chat

Links

The intro and outro music is from Requiem for a Fish by The Freak Fandango Orchestra / CC BY-SA