The Python Podcast.__init__

The podcast about Python and the people who make it great


19 September 2021

Experimenting With Reinforcement Learning Using MushroomRL - E332

Summary

Reinforcement learning is a branch of machine learning and AI that holds a lot of promise for applications that need to evolve with changes to their inputs. To support the research happening in the field, including applications in robotics, Carlo D’Eramo and Davide Tateo created MushroomRL. In this episode they share how they have designed the project to be easy to work with, so that students can use it in their studies, and extensible enough to be used by businesses and industry professionals. They also discuss the strengths of reinforcement learning, how to design problems that can leverage its capabilities, and how to get started with MushroomRL for your own work.
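
For a concrete sense of what getting started looks like, here is a minimal sketch of the agent/environment/core loop in the spirit of the MushroomRL tutorials. It assumes a recent release of the library; import paths and the format of the evaluation dataset have shifted between versions, so treat the details as illustrative rather than canonical.

    # Minimal tabular Q-learning experiment, modeled on the MushroomRL tutorials.
    # Assumes a recent mushroom_rl release; module paths may differ across versions.
    from mushroom_rl.algorithms.value import QLearning
    from mushroom_rl.core import Core
    from mushroom_rl.environments import GridWorld
    from mushroom_rl.policy import EpsGreedy
    from mushroom_rl.utils.parameters import Parameter

    # Environment: a small grid world with a single goal state.
    mdp = GridWorld(width=3, height=3, goal=(2, 2), start=(0, 0))

    # Agent: tabular Q-learning with epsilon-greedy exploration.
    epsilon = Parameter(value=0.1)
    policy = EpsGreedy(epsilon=epsilon)
    learning_rate = Parameter(value=0.3)
    agent = QLearning(mdp.info, policy, learning_rate)

    # Core: runs the interaction loop between agent and environment.
    core = Core(agent, mdp)
    core.learn(n_steps=10000, n_steps_per_fit=1)  # train online, fitting after every step
    dataset = core.evaluate(n_episodes=10)        # roll out the learned policy for evaluation

The extensibility described in the summary shows up in this structure: the environment, policy, and learning algorithm are interchangeable components, so the same Core-driven loop can run anything from tabular methods to deep agents.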

Announcements

  • Hello and welcome to Podcast.__init__, the podcast about Python’s role in data and science.
  • When you’re ready to launch your next app or want to try a project you hear about on the show, you’ll need somewhere to deploy it, so take a look at our friends over at Linode. With the launch of their managed Kubernetes platform it’s easy to get started with the next generation of deployment and scaling, powered by the battle tested Linode platform, including simple pricing, node balancers, 40Gbit networking, dedicated CPU and GPU instances, and worldwide data centers. Go to pythonpodcast.com/linode and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
  • Your host as usual is Tobias Macey, and today I’m interviewing Davide Tateo and Carlo D’Eramo about MushroomRL, a library for building reinforcement learning experiments.

Interview

  • Introductions
  • How did you get introduced to Python?
  • Can you start by describing what reinforcement learning is and how it differs from other approaches for machine learning?
  • What are some example use cases where reinforcement learning might be necessary?
  • Can you describe what MushroomRL is and the story behind it?
    • Who are the target users of the project?
    • What are its main goals?
  • What are your suggestions to other developers for implementing a successful library?
  • What are some of the core concepts that researchers and/or engineers need to understand to be able to effectively use reinforcement learning techniques?
  • Can you describe how MushroomRL is architected?
    • How have the goals and design of the project changed or evolved since you began working on it?
  • What is the workflow for building and executing an experiment with MushroomRL?
    • How do you track the states and outcomes of experiments?
  • What are some of the considerations involved in designing an environment and reward functions for an agent to interact with? (see the sketch after this list)
  • What are some of the open questions that are being explored in reinforcement learning?
  • How are you using MushroomRL in your own research?
  • What are the most interesting, innovative, or unexpected ways that you have seen MushroomRL used?
  • What are the most interesting, unexpected, or challenging lessons that you have learned while working on MushroomRL?
  • When is MushroomRL the wrong choice?
  • What do you have planned for the future of MushroomRL?
  • How can the open-source community contribute to MushroomRL?
  • What kind of support are you willing to provide to users?
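
As a companion to the question above about designing environments and reward functions, here is a hedged sketch of a custom MushroomRL environment. The class name, dynamics, and reward are invented for illustration; the Environment/MDPInfo interface follows the library's custom-environment tutorial for recent releases, and details may vary between versions.

    # Hypothetical 1-D "reach the target" task, showing where the reward function
    # and termination condition live when defining a MushroomRL environment.
    # The task itself is invented purely for illustration.
    import numpy as np

    from mushroom_rl.core import Environment, MDPInfo
    from mushroom_rl.utils.spaces import Box, Discrete

    class ReachTarget(Environment):
        def __init__(self, target=5.0, horizon=100, gamma=0.99):
            self._target = target
            self._state = None

            # Observation: the agent's 1-D position; actions: step left or right.
            observation_space = Box(low=np.array([-10.0]), high=np.array([10.0]))
            action_space = Discrete(2)
            mdp_info = MDPInfo(observation_space, action_space, gamma, horizon)
            super().__init__(mdp_info)

        def reset(self, state=None):
            self._state = np.zeros(1) if state is None else state
            return self._state

        def step(self, action):
            # Dynamics: move one unit left or right, staying inside the observation bounds.
            direction = 1.0 if int(np.asarray(action).item()) == 1 else -1.0
            self._state = np.clip(self._state + direction, -10.0, 10.0)

            # Reward: negative distance to the target, so the agent is pushed toward it.
            distance = abs(float(self._state[0]) - self._target)
            reward = -distance
            absorbing = distance < 0.5  # episode ends once the target is reached
            return self._state, reward, absorbing, {}

The design consideration the question points at is visible in the last few lines: the reward shaping and the absorbing condition jointly determine what behavior the agent is actually optimizing for.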

Keep In Touch

Picks

Closing Announcements

  • Thank you for listening! Don’t forget to check out our other show, the Data Engineering Podcast for the latest on modern data management.
  • Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
  • If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@podcastinit.com with your story.
  • To help other people find the show, please leave a review on iTunes and tell your friends and co-workers.
  • Join the community in the new Zulip chat workspace at pythonpodcast.com/chat

Links

The intro and outro music is from Requiem for a Fish by The Freak Fandango Orchestra / CC BY-SA.

