Deep learning is a revolutionary category of machine learning that accelerates our ability to build powerful inference models. Along with that power comes a great deal of complexity: determining which neural architectures are best suited to a given task, engineering features, scaling computation, and more. Predibase is building on the successes of the Ludwig framework for declarative deep learning and Horovod for horizontally distributed model training. In this episode Travis Addair, CTO and co-founder of Predibase, explains how they are reducing the burden of model development even further with their managed service for declarative and low-code ML, and how they are integrating with the growing ecosystem of solutions for the full ML lifecycle.
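To make the "declarative ML" idea concrete, here is a rough sketch of the style Ludwig popularized (illustrative only, not Predibase's product API; the column names are hypothetical): the user declares the inputs and outputs of the model as configuration, and the framework chooses architectures, encoders, and training details.

```python
# A minimal sketch of a Ludwig-style declarative model configuration.
# The user declares what the model consumes and predicts; the framework
# fills in the neural architecture and training loop.
config = {
    "input_features": [
        {"name": "review_text", "type": "text"},  # hypothetical columns
        {"name": "stars", "type": "number"},
    ],
    "output_features": [
        {"name": "sentiment", "type": "category"},
    ],
}

# With Ludwig installed, training from this config would look roughly like:
#   from ludwig.api import LudwigModel
#   model = LudwigModel(config)
#   results = model.train(dataset="reviews.csv")
```

The appeal is that swapping an encoder or adding a feature is a one-line config change rather than a code rewrite, which is what makes the approach amenable to a managed, low-code platform.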
- Hello and welcome to Podcast.__init__, the podcast about Python and the people who make it great!
- When you’re ready to launch your next app or want to try a project you hear about on the show, you’ll need somewhere to deploy it, so take a look at our friends over at Linode. With their managed Kubernetes platform it’s easy to get started with the next generation of deployment and scaling, powered by the battle tested Linode platform, including simple pricing, node balancers, 40Gbit networking, dedicated CPU and GPU instances, and worldwide data centers. And now you can launch a managed MySQL, Postgres, or Mongo database cluster in minutes to keep your critical data safe with automated backups and failover. Go to pythonpodcast.com/linode and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
- Your host is Tobias Macey and today I’m interviewing Travis Addair about Predibase, a low-code platform for building ML models in a declarative format
- How did you get involved in machine learning?
- Can you describe what Predibase is and the story behind it?
- Who is your target audience and how does that focus influence your user experience and feature development priorities?
- How would you describe the semantic differences between your chosen terminology of "declarative ML" and the "autoML" nomenclature that many projects and products have adopted?
- Another platform that launched recently with a promise of "declarative ML" is Continual. How would you characterize your relative strengths?
- Can you describe how the Predibase platform is implemented?
- How have the design and goals of the product changed as you worked through the initial implementation and started working with early customers?
- The operational aspects of the ML lifecycle are still fairly nascent. How have you thought about the boundaries for your product to avoid getting drawn into scope creep while providing a happy path to delivery?
- Ludwig is a core element of your platform. What are the other capabilities that you are layering around and on top of it to build a differentiated product?
- In addition to the existing interfaces for Ludwig you created a new language in the form of PQL. What was the motivation for that decision?
- How did you approach the semantic and syntactic design of the dialect?
- What is your vision for PQL in the space of "declarative ML" that you are working to define?
- Can you describe the available workflows for an individual or team that is using Predibase for prototyping and validating an ML model?
- Once a model has been deemed satisfactory, what is the path to production?
- How are you approaching governance and sustainability of Ludwig and Horovod while balancing your reliance on them in Predibase?
- What are some of the notable investments/improvements that you have made in Ludwig during your work of building Predibase?
- What are the most interesting, innovative, or unexpected ways that you have seen Predibase used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on Predibase?
- When is Predibase the wrong choice?
- What do you have planned for the future of Predibase?
- From your perspective, what is the biggest barrier to adoption of machine learning today?
- Thank you for listening! Don’t forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. The Machine Learning Podcast helps you go from idea to production with machine learning.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you’ve learned something or tried out a project from the show then tell us about it! Email firstname.lastname@example.org with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
- Support Vector Machine
- Uber Michelangelo
- Spark MLlib
- Deep Learning
- NVIDIA Triton
- Weights & Biases
- Confusion Matrices
- Self-supervised Learning