Dependency Management Improvements In Pip's Resolver - Episode 264


Dependency management in Python has taken a long and winding path, which has led to the current dominance of Pip. One of the remaining shortcomings is the lack of a robust mechanism for resolving the package and version constraints that are necessary to produce a working system. Thankfully, the Python Software Foundation has funded an effort to upgrade the dependency resolution algorithm and user experience of Pip. In this episode the engineers working on these improvements, Pradyun Gedam, Tzu-Ping Chung, and Paul Moore, discuss the history of Pip, the challenges of dependency management in Python, and the benefits that surrounding projects will gain from a more robust resolution algorithm. This is an exciting development for the Python ecosystem, so listen now and then provide feedback on how the new resolver is working for you.

Did you know data science is a fast-growing career field, with 650% growth in jobs since 2012 and a median salary of around $125,000? Springboard has identified that data careers are going to shape the future, and has responded to that need by creating the Springboard School of Data: comprehensive, end-to-end data career programs that encompass data science, data analytics, data engineering, and machine learning.

Each Springboard course is 100% online and remote, and each course curriculum is tailored to fit the schedule of working professionals. This means flexible hours and a project-based methodology designed to provide real-world experience: every Springboard student graduates with a portfolio of projects to showcase their skills to potential employers. Springboard’s unique approach to learning is centered on the very simple idea that mentorship and one-on-one human support is the fastest and most efficient way to learn new skills. That’s why all of Springboard’s data courses are supported by a vast network of industry expert mentors, who are carefully vetted to ensure the right fit for each program. Mentors provide valuable guidance, coaching, and support to help keep Springboard students motivated through weekly 1:1 video calls for the duration of the program.

Before graduation, Springboard’s career services team supports students in their job search, helping prepare them for interviews and networking, and facilitating their transition into the tech or data industry. Springboard’s tuition-back guarantee allows students to secure the role of their dreams and invest in themselves without risk, meaning students are not charged if they don’t get a job offer in the field they studied. Springboard’s support does not end when students graduate: all Springboard graduates benefit from an extensive support network encompassing career services, 1:1 career coaching, networking tips, resume assistance, interview prep, and salary negotiation.

Since Springboard was founded in 2013, around 94% of eligible graduates have secured a job within one year, earning an average salary increase of $26,000. Want to learn more? Springboard is exclusively offering up to 20 scholarships of $500 to listeners of Podcast.__init__. Simply go to for more information.

Do you want to try out some of the tools and applications that you heard about on Podcast.__init__? Do you have a side project that you want to share with the world? With Linode’s managed Kubernetes platform it’s now even easier to get started with the latest in cloud technologies. With the combined power of the leading container orchestrator and the speed and reliability of Linode’s object storage, node balancers, block storage, and dedicated CPU or GPU instances, you’ve got everything you need to scale up. Go to today and get a $100 credit to launch a new cluster, run a server, upload some data, or… And don’t forget to thank them for being a long time supporter of Podcast.__init__!


  • Hello and welcome to Podcast.__init__, the podcast about Python and the people who make it great.
  • When you’re ready to launch your next app or want to try a project you hear about on the show, you’ll need somewhere to deploy it, so take a look at our friends over at Linode. With 200 Gbit/s private networking, node balancers, a 40 Gbit/s public network, fast object storage, and a brand new managed Kubernetes platform, all controlled by a convenient API you’ve got everything you need to scale up. And for your tasks that need fast computation, such as training machine learning models, they’ve got dedicated CPU and GPU instances. Go to to get a $20 credit and launch a new server in under a minute. And don’t forget to thank them for their continued support of this show!
  • You listen to this show because you love Python and want to keep your skills up to date, and machine learning is finding its way into every aspect of software engineering. Springboard has partnered with us to help you take the next step in your career by offering a scholarship to their Machine Learning Engineering career track program. In this online, project-based course every student is paired with a Machine Learning expert who provides unlimited 1:1 mentorship support throughout the program via video conferences. You’ll build up your portfolio of machine learning projects and gain hands-on experience in writing machine learning algorithms, deploying models into production, and managing the lifecycle of a deep learning prototype. Springboard offers a job guarantee, meaning that you don’t have to pay for the program until you get a job in the space. Podcast.__init__ is exclusively offering listeners 20 scholarships of $500 to eligible applicants. It only takes 10 minutes and there’s no obligation. Go to and apply today! Make sure to use the code AISPRINGBOARD when you enroll.
  • Your host as usual is Tobias Macey and today I’m interviewing Tzu-Ping Chung, Pradyun Gedam, and Paul Moore about their work to improve the dependency resolution capabilities of Pip and its user experience.


  • Introductions
  • How did you get introduced to Python?
  • Can you start by describing the focus of the work that you are doing?
    • What is the scope of the work, and what are the established criteria for when it is considered complete?
  • What is your history with working on the Pip source code and what interests you most about this project?
  • What are the main sources or manifestations of technical debt that exist in Pip as of today?
    • How does it currently handle dependency resolution?
  • What are some of the workarounds that developers have had to resort to in the absence of a robust dependency resolver in Pip?
  • How is the new dependency resolver implemented?
    • How has your initial design evolved or shifted as you have gotten further along in its implementation?
  • What are the pieces of information that the resolver will rely on for determining which packages and versions to install? (e.g. will it install setuptools > 45.x in a Python 2 virtualenv?)
  • What are the new capabilities in Pip that will be enabled by this upgrade to the dependency resolver?
  • What projects or features in the encompassing ecosystem will be unblocked with the introduction of this upgrade?
  • What are some of the changes that users will need to make to adopt the updated Pip?
  • How do you anticipate the changes in Pip impacting the viability or adoption of Python and its ecosystem within different communities or industries?
  • What are some of the additional changes or improvements that you would like to see in Pip or other core elements of the Python landscape?
  • What are some of the most interesting, unexpected, or challenging lessons that you have learned while working on these updates to Pip?

Keep In Touch


Closing Announcements

  • Thank you for listening! Don’t forget to check out our other show, the Data Engineering Podcast for the latest on modern data management.
  • Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
  • If you’ve learned something or tried out a project from the show then tell us about it! Email with your story.
  • To help other people find the show please leave a review on iTunes and tell your friends and co-workers
  • Join the community in the new Zulip chat workspace at


The intro and outro music is from Requiem for a Fish by The Freak Fandango Orchestra / CC BY-SA

Raw transcript:
Tobias Macey
Hello, and welcome to Podcast.__init__, the podcast about Python and the people who make it great. When you're ready to launch your next app or want to try a project you hear about on the show, you'll need somewhere to deploy it, so take a look at our friends over at Linode. With 200 gigabit private networking, node balancers, a 40 gigabit public network, fast object storage, and a brand new managed Kubernetes platform, all controlled by a convenient API, you've got everything you need to scale up. And for your tasks that need fast computation, such as training machine learning models or running your CI and CD pipelines, they've got dedicated CPU and GPU instances. Go to pythonpodcast.com/linode, that's L-I-N-O-D-E, today to get a $20 credit and launch a new server in under a minute. And don't forget to thank them for their continued support of this show. You listen to this show because you love Python and want to keep your skills up to date, and machine learning is finding its way into every aspect of software engineering. Springboard has partnered with us to help you take the next step in your career by offering a scholarship to their Machine Learning Engineering career track program. In this online, project-based course every student is paired with a machine learning expert who provides unlimited one-on-one mentorship support throughout the program via video conferences. You'll build up your portfolio of machine learning projects and gain hands-on experience in writing machine learning algorithms, deploying models into production, and managing the lifecycle of a deep learning prototype. Springboard offers a job guarantee, meaning that you don't have to pay for the program until you get a job in the space. Podcast.__init__ is exclusively offering listeners 20 scholarships of $500 to eligible applicants. It only takes 10 minutes and there's no obligation.
Go to pythonpodcast.com/springboard and apply today, and make sure to use the code AISPRINGBOARD when you enroll. Your host as usual is Tobias Macey, and today I'm interviewing Tzu-Ping Chung, Pradyun Gedam, and Paul Moore about their work to improve the dependency resolution capabilities of pip and its overall user experience. So starting with you, Tzu-Ping, can you introduce yourself?
Tzu-Ping Chung
So I'm probably most famous for being the author of MacDown, a Markdown editor on the Mac. Currently my occupation is as a freelancer based in Taiwan, and I was also the organizer for PyCon Taiwan for 2017 and '18. I'm also a Pipenv maintainer, although I've taken a step back recently because of the work I'm involved in here as a pip maintainer.
Tobias Macey
And Pradyun, how about yourself?
Pradyun Gedam
I'm Pradyun, as you said. I'm a college student still; I'm the youngest in the group today. I'm a pip maintainer, a moderator on PyPI, and a contributor to a whole bunch of open source software. And I guess another thing that's relevant might be that I'm a core dev on TOML. But yeah, that's a quick intro, I guess.
Tobias Macey
And Paul, can you introduce yourself as well?
Paul Moore
Hi, I'm Paul Moore. As has been said, I'm a pip maintainer; I've been a pip maintainer for a number of years now. I'm also a CPython core developer, and I'm the packaging BDFL-delegate for interoperability standards, which is way too many words. But basically what that means is that I work with people in putting together standards, like the metadata standards for Python packages, and I'm in overall charge of sort of running those discussions and basically signing off on the final decisions made in the PEPs that come out of it.
Tobias Macey
And going back to you, Tzu-Ping, do you remember how you first got introduced to Python?
Tzu-Ping Chung
Yeah, well, as is typical of my generation in Taiwan, I started on a fairly convoluted track. My first programming experience was in college, when I got my first ever notebook, which was an iBook G4, and at the time virtually nobody was using Macs, so Macs were lacking a lot of applications. So I taught myself programming, some of it in Python, but just basic scripts to automate my workflow. Then I graduated from college and found a job as an iOS programmer, because I used to program a lot of Objective-C. And one day my boss came up to me: hey, so the web guy just quit, do you know anything about Django? It's Python, right? Okay, I can try. And then like ten years later I'm writing 70 to 80% Python and I haven't written a line of iOS code since.
Tobias Macey
That's pretty awesome. How about yourself, Pradyun? Do you remember how you got introduced to Python?
Pradyun Gedam
I do, yes. I think it was about six years ago, back when I was in school, high school. My dad gave me a book called Python Programming by Wesley J. Chun, and I had too much free time as a kid. He basically handed me the book and said, hey, instead of playing that game, make one. And I was like, oh, okay, cool, picked it up, built a game or two, and I was like, hey, this is fun. Then I started diving deeper into the language itself, sort of seeing that CPython exists and trying to understand what that means, and failing at that, because I didn't know C back then. Obviously Python was my first language, and that's the only thing I knew. Then I realized I was using this tool called pip, and, well, then I dove into that, and stuff happened since then, and now I'm a maintainer.
Tobias Macey
And Paul, do you remember how you first got introduced to Python?
Paul Moore
I'm not sure my memory's that long. I got involved with Python back in the days of about Python 1.4. I had an old Acorn computer back in the day and it had no software at all on it, so I spent a lot of my time porting stuff to the Acorn. One of the things I looked at was Python, but I didn't manage to get anywhere with it. I came back to it when I got my first PC and got involved with the community. And I've never used Python in my work; it's always been a hobby for me. So I did a lot of playing around with stuff, got involved with community discussions, got involved with core Python discussions on things like imports. And that led to, I guess, things like zipimport, and then into packaging, and ultimately to pip, where there were things that I thought I could help with. I got involved, ultimately got invited to be a pip maintainer, and never looked back, really. So that's how I got where I am today.
Tobias Macey
And for people who weren't around in the earlier days of Python, I know that there was a hodgepodge of different approaches to handling packages and dependencies, and pip is one of the projects that came out of that and has become the standard for installing things, although there are some competing projects such as conda and easy_install. So I'm wondering if you can maybe give a little bit of the background and history of pip and how we ended up where we are today, with it being the main way that people interact with getting packages onto their system for working with their applications.
Paul Moore
Back in the day, with the very original versions of Python, pretty much everything third party just had a makefile; you installed it with a makefile and a C compiler, or you put things together yourself. It was very, very difficult. And as a Windows user it was awful, because everything was written for Unix and it was way too much work. The first big improvement was when distutils came along, which basically standardized the process for building stuff. That was a huge step forward: it meant that people could write their packages and have some level of confidence that anybody could build them. But it was still very manual in terms of putting things onto your system; distutils gave you the bits, but then actually installing them was still a problem. Pip came along around that time, and originally pip was very much about taking the source code, using distutils to build it, and installing it. So it was the first real sort of package manager in that sense, but in those days it was very much: here's a package, get it on your system. And it had capabilities to do things like uninstall, which was quite amazing in those days, because that was really user friendly. Around the same sort of time setuptools was invented, which was a slightly different approach involving trying to deal with managing multiple versions of packages on your system. It involved a lot of complexity and a lot of sort of cutting-edge technology to do clever stuff, but I think I'd be right in saying that a lot of what it did was fairly niche, and not many people needed it. So pip solved a lot of people's problems. Personally, for me, it didn't help a lot, because it was all building from source, and source was still built with Unix compilers and things like that.
I have no idea exactly when, but a few years afterwards down the line, Daniel Holth invented the wheel package format, which was a format for building binaries and then just installing them onto your system, and that was added into pip around that time too. So pip was able to take something that somebody else had built and just put it on your system. And it was at that point, I think, that pip really started taking off, because it made everything so much easier for people who didn't have the capability of building these things. In parallel, PyPI was coming along. There had been various attempts to build package repositories modeled on things like Perl's CPAN, but PyPI was the first real centralized place that people could put their packages, and pip was built to get packages from there. So the combination of the two was a step change for everybody in being able to get their packages all in one go. And that's really where pip and PyPI came from. Conda, which you mentioned, was a similar type of effort, but that was built very much by the scientific community to deal with a lot of the specific problems they had around mathematical and scientific software. That developed in parallel, and to this day it probably remains a sort of parallel ecosystem with a lot of similarities, but it's addressing a slightly different type of problem to pip, in that it's much more specialized. And I guess that's more or less where we are today. There are other things going on; there's been a lot of work done in recent years to try and standardize things, to try and make it easier for tools to work with each other rather than people having to choose one and stick with it. And that's where a lot of the standards work I'm involved in comes in. And that's probably it.
Tobias Macey
Yeah, I got involved, I think, somewhere in the early 2.x series of Python, maybe around 2.3. And I can recall having to figure out what Python eggs were, and ways to install them, and how to uninstall them when I got it wrong. And I know that a lot of the ecosystem around packaging for Python and dependency management has just grown up organically, which is where we end up today with the work that you're doing to try and improve the dependency resolution capabilities. So I'm wondering if you could just start by giving a bit of the background of the work that you're doing and the focus of where you're putting your efforts with this current body of work.
Pradyun Gedam
So the focus of the work that we're doing is fairly well scoped: we're replacing pip's current dependency resolution algorithm with something else that's not broken. That's sort of the core bit of what we're doing. The other part of it is that we have user experience experts who have been brought on as part of the funding we've raised, and they are working to collect user data and user information, analyze it, and sort of work with us as pip's maintainers and the broader community to improve the CLI, all the error messages, all the reporting, to be more useful for the users.
Tobias Macey
And in terms of the overall scope of the work, you said that it's fairly well specified. I'm wondering what the established criteria are for when it is considered complete and when it's ready to be handed off to the general Python community.
Pradyun Gedam
The criteria would probably be: writing the replacement and getting it feature complete, where one good metric would be, hey, it passes all of pip's tests. Then we're going to have a beta rollout and testing phase, where we're going to work with users. We have user experience folks who can do user testing and have expertise in that area, to collect feedback from users and see if the new resolver is actually doing the right thing for them, whether it's actually better than the existing resolver, and whether the workarounds people have adopted over the years to work around the broken behavior of the existing resolver still work, or have a clear path to being removed, and things like that. So in terms of criteria: getting it feature complete, rolling it out with a beta phase, then addressing all of that feedback, and making it generally available. Making it generally available is the "this is done" point.
Tobias Macey
And I know that there is a lot of technical debt that has accrued in pip over the years, and that there are some different pain points that people have had with it, dependency resolution being the main one. I know from looking through some of the posts and issues while I was preparing for this interview that there was a fair body of work that needed to be done in advance of even starting on the dependency resolution capabilities. So I'm wondering if you can talk a bit about some of that advance work that was necessary, some of the main ways that technical debt manifests in pip, and some of the ways that it has crept into the project over the years.
Pradyun Gedam
I guess a lot of it originates from the fact that pip was originally designed to just take every package as source code, build from there, and get to the installed state. Then wheels were added, and static metadata was a concept that was getting added into the codebase; that's a transition that's probably still happening. In addition to this, as with any software project, the technical debt sort of grew organically, and at no point did people go, oh, let's hold off for a moment and clean stuff up. I mean, that probably happened a bunch of times, and there have been major reorganizations, but none where it's, oh, let's remove this functionality and just keep the simple case; things like that never happened. So one of the things that we had to do was break up these God classes, these classes that did everything. There is a RequirementSet class in pip today which earlier used to do everything: building packages, fetching packages, dependency resolution, all of these happened within that one class, and it was a few thousand lines of code. So one of the things that had to be done was breaking this out into multiple pieces that each did one independent thing, and then bringing them together, not in the RequirementSet object, but in the install command, so that there are reusable components that we can use elsewhere. Similarly, there's an InstallRequirement class that does too many things, which we're slowly breaking up. And also unifying code flow across the codebase, so that we're not going through three different parts of the code for the same thing: three different approaches to building source distributions from directories, and things like that. We're still cleaning a lot of this up, but at this point we're reasonably good on that front.
The point where we decided we'd reasonably well separated the dependency resolution from the rest of the technical debt is when we went: here, we can probably expedite this work with some funding. And yeah, that happened.
Tzu-Ping Chung
So while Pradyun was busy doing the restructuring and refactoring stuff, I was a maintainer of Pipenv, and Pipenv was, and still is, using pip-tools as its dependency resolver. And pip-tools just uses pip's internal dependency resolver, which, as we have already discussed, is not very good. So me and two other maintainers decided: hey, let's just try to write a brand new one for Python, because there was no such thing at the time. So we set out to implement a simple backtracking resolver, and we called it resolvelib. But all of us got busy afterwards, and we never really quite finished the project; it just sat abandoned over there and never got integrated into Pipenv. And then one day this guy named Pradyun came up to me: hey, I was doing some resolver research and I wrote this thing called Zazo, and I figured out it's actually fairly similar to what you were doing. So that's basically how I got on board with this body of work. And maybe, Pradyun, you can take it from there.
Tobias Macey
And I'm just wondering what role each of you is playing in this body of work.
Pradyun Gedam
I got involved initially in this space, dependency resolution as a whole in Python packaging, as part of Google Summer of Code. I had been contributing to pip for a few months by then, and then Google Summer of Code happened and I realized I'm eligible, because I'm in college now. So I applied, we made the logistics of it work, and I ended up having three months of time where I could just work on pip, specifically with a focus on improving dependency resolution. That's around the time that I was working on paying down this technical debt; that's when it started, as well as writing Zazo, as Tzu-Ping pointed out. So at this point I've been involved in making pip's dependency resolver much better for like two and a half, three years now. I have a very deep understanding of pip's codebase and how it's quirky, and of how resolvelib works, and I've built these mental models over the years. So I'm sort of contributing that, and a bunch of code, toward making the resolver happen now. And I guess,
Paul Moore
over to Paul. Right, I've been involved as a pip maintainer for quite a while, and I did a lot of work on the PEP 517 build implementation for pip. I was made aware that there was an opportunity to get involved in doing some work on the resolver full time, and I basically grabbed the chance to do that, because it looked like really interesting work to get involved in. I've been picking up the resolver side of things from there, and I'm, I guess, just fitting in, doing chunks of the coding, implementing bits of the features that need to be added into the resolver implementation. I've also been trying to sort of keep an eye on the question of how this fits into the various standards that we've got; obviously I've got an interest in that. So things like how metadata is provided to pip, that's something I'm sort of trying to feed back into the wider community: how we manage the data that pip needs in order to do the resolution. So I guess that's where I fit in,
Pradyun Gedam
and also doing a big chunk of the implementation along with us.
Tobias Macey
And so, in terms of the current state of affairs, what are some of the ways that people might experience the broken dependency resolution in pip, and how is it currently implemented?
Paul Moore
I'll take this one. Basically, for a lot of cases the current resolver is fine. But what happens is that when a project has multiple dependencies, and particularly when you're trying to install multiple things at once and the dependencies are either in conflict or difficult to resolve, pip's current resolver takes the approach of effectively first come, first served. It goes through in order and says: I'm going to try and make this one work. Okay, that worked; on to the next one, on to the next one. If, as it goes further down the line, it finds something that conflicts with what it's already done, it doesn't backtrack and try again; it just carries on and finishes off as best it can. And the result is that, for that type of situation, what pip installs is actually broken. You may have project A that depends on version two of project B, and what pip actually installs is version three, and it says, sorry about that. And then you've got to dig yourself out of the mess, which isn't difficult to do, because what you can do is just not install the wrong version and install the right one instead. But for a user, it's knowing that's happening, it's finding out what needs to be done: the job that you wanted pip to do, you've got to do manually, and pin all your dependencies as a result of it.
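As a rough illustration of what Paul describes, here is a toy model of that first come, first served behaviour. This is not pip's actual code; the `legacy_resolve` helper and the package names are invented for the example.

```python
# Toy model (NOT pip's real code) of the legacy "first come, first served"
# strategy: requirements are processed in order, the first version pinned
# for a package wins, and later conflicting requirements are recorded but
# never trigger backtracking.

def legacy_resolve(requirements):
    installed = {}   # package name -> pinned version
    ignored = []     # conflicting requirements that were silently skipped
    for name, version in requirements:
        if name not in installed:
            installed[name] = version        # first pin wins
        elif installed[name] != version:
            ignored.append((name, version))  # no backtracking, just move on
    return installed, ignored

# One dependency chain asks for B 2.0 first; a later one asks for B 3.0.
installed, ignored = legacy_resolve([("B", "2.0"), ("B", "3.0")])
print(installed)  # {'B': '2.0'} -- the request for B 3.0 was dropped
```

The user then ends up doing by hand exactly what the loop skipped: noticing the entries in `ignored` and pinning everything explicitly.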
Tobias Macey
And then, as far as the workarounds or resolutions that people can apply manually, what are some of the common practices that people have had to resort to in the absence of a robust dependency resolver?
Paul Moore
A lot of it is that type of thing: it's pinning the requirements more tightly than they would like to, particularly in libraries, which is obviously bad; you don't want tight requirement pinning in libraries. People are doing things like installing without dependencies and then manually installing the dependencies. And there are a number of other projects which have been built up around the packaging ecosystem with the intention of doing the resolution outside of pip. Tools like pip-tools, for instance, do that. Poetry, the project management tool, was explicitly built with one of its goals being to try and do dependency management on top of what pip provides. So there are various different things people do, but at the end of the day it's trying to manually do stuff that pip ideally should be doing for them.
Tobias Macey
And in terms of the new dependency resolver. How are you approaching that implementation? And what are some of the constraints that exist within the Python ecosystem that are influencing the overall approach?
Pradyun Gedam
So in terms of the new resolver, one of the things that we're doing is we're using a reusable component called resolvelib, which defines an abstraction layer between the dependency resolution algorithm, the part that does "oh, let me fetch this one, let me see this one, let me backtrack on this choice" and all of that algorithmic logic, and the specific details like "here's how I get the dependencies of a package, here's how this package should be represented, this is what describes the package" and things like that. And by separating these details, we're allowing both of them to evolve independently, so we could slot in a new resolver at a future date that's somehow better than the one we already have, because there are various dependency resolution algorithms in the problem space of dependency resolution. So that's one of the design decisions: having an abstraction layer so that we can swap out the underlying resolver if we want to. The other part is that we are not trying to make the existing resolver morph into this new resolver. No, we're going to implement a new one all over again, because the structure of the code and the assumptions that the old resolver made, like "I'm never going to backtrack", are really baked into not just the data structures but the code structure itself. So it just makes a lot more sense to not bother dealing with that bit of debt, and to just rewrite it building on top of resolvelib. That way we don't actually have to write the dependency resolution algorithm; we only implement pip's own details, things like "here's how you get metadata", on top of resolvelib. So that's one of the design decisions that we made: to ignore the existing resolver in the implementation and make a new one. The other benefit of this is that we can have a nice rollout approach, which is that we can have both resolvers in there at the same time for users to test with, right? So that's useful for when we do the rollout.
Because we can just keep improving the new resolver until it's at feature parity with, or better than, the existing one, and then flip the switch once we know it's not going to be disruptive to do so.
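To make the abstraction layer concrete, here is a hypothetical, heavily simplified sketch of the idea behind resolvelib, not its actual API. The resolver core only talks to a "provider" object, so everything package-specific (how candidates are found, how dependencies are read) stays outside the algorithm, and either side can be swapped out. The `Provider` class, the index format, and `resolve` are all invented for this illustration.

```python
class Provider:
    """All package-specific knowledge lives here, behind a small interface."""

    def __init__(self, index):
        # index: {name: {version: [(dep_name, allowed_versions), ...]}}
        self.index = index

    def find_candidates(self, name, allowed):
        # Newest first, so the resolver prefers the latest matching version.
        return sorted((v for v in self.index.get(name, {}) if v in allowed),
                      reverse=True)

    def get_dependencies(self, name, version):
        return self.index[name][version]


def resolve(provider, requirements, pinned=None):
    """Backtracking core: it never looks inside packages except via the provider."""
    pinned = dict(pinned or {})
    if not requirements:
        return pinned
    (name, allowed), rest = requirements[0], list(requirements[1:])
    if name in pinned:  # already chosen: just check consistency
        return resolve(provider, rest, pinned) if pinned[name] in allowed else None
    for version in provider.find_candidates(name, allowed):
        result = resolve(provider,
                         rest + list(provider.get_dependencies(name, version)),
                         {**pinned, name: version})
        if result is not None:
            return result
    return None  # no candidate worked: the caller backtracks


# A tiny index where the newest "lib" conflicts with "plugin", forcing backtracking.
INDEX = {
    "app": {1: [("lib", {1, 2}), ("plugin", {1})]},
    "lib": {1: [], 2: []},
    "plugin": {1: [("lib", {1})]},
}

result = resolve(Provider(INDEX), [("app", {1})])
```

Here the resolver first tries lib 2 (the newest), discovers that plugin 1 needs lib 1, backtracks, and settles on lib 1. The algorithm never needed to know what a "package" is beyond what the provider told it, which is the separation Pradyun describes.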
Tzu-Ping Chung
So Paul, do you want to mention the resolution logic?
Paul Moore
One thing that people may or may not know is that dependency resolution is a fairly well known problem. There are lots of things that deal with it; the technical terms are SAT solvers and things like that. So there are libraries that do dependency resolution as a general problem, and many languages already use some form of that. The problem that we have in particular with Python and with pip is that most of those libraries, most of the sort of algorithms behind the process, work on the basis of assuming that they know all the details of the problem up front. So: I know what packages I'm being asked to install, what dependencies they have, what versions are available; that's all available to me, or at least easy to get. Unfortunately, due to the history of how Python packaging came about, a lot of that just isn't easy to get hold of for Python packages. So for example, if I'm trying to install a project that's sitting in a directory on my PC, I can't even necessarily know the name of that project without running through a build process. So asking questions like "does this project satisfy this version dependency" can involve wheeling out your C compiler, to an extent, and that means that all libraries that work on the basis that they can just freely go and check versions are going to perform really badly for Python and for pip. So one of the big challenges for us was to find a library and an algorithm that would minimize that impact, so that we would, as near as possible, only calculate what we needed to know and nothing extra. And that's where resolvelib really came into the equation, because the decision to use resolvelib was basically that it only gets information on demand, and we could use that to feed it Python data without having that excessive cost.
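Paul's point about on-demand information can be illustrated with a generator. This is a toy sketch, not pip code: `build_metadata` stands in for the expensive "run a build just to learn the metadata" step, and the counter shows that lazy iteration pays that cost only for candidates the resolver actually examines, never for the whole version list up front.

```python
builds = 0

def build_metadata(name, version):
    # Stand-in for the expensive step: running a build (possibly involving
    # a C compiler) just to learn a candidate's metadata.
    global builds
    builds += 1
    return {"requires_python": "3" if version >= 2 else "2"}

def candidates(name, versions):
    # A generator: metadata is computed only when the next candidate is
    # actually requested by the caller.
    for version in sorted(versions, reverse=True):
        yield version, build_metadata(name, version)

def first_compatible(name, versions, python_major):
    for version, meta in candidates(name, versions):
        if meta["requires_python"] == python_major:
            return version
    return None

chosen = first_compatible("example", [1, 2, 3], "3")
```

Because the newest version already matches, only one "build" ever runs; an eager design would have built all three versions before choosing.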
Tobias Macey
Another element that I'm sure factors into the overall complexity of this problem is the existence of self hosted or third party repositories as well, where you can't even necessarily rely on the capabilities of what's in the PyPI server, Warehouse. And so I'm wondering how that also influences your decisions on how to approach this problem, or the capabilities of dependency resolution in those contexts.
Paul Moore
Probably not as much as you might originally think. I mean, the worst case scenario for Python and for pip is installing a project from your local disk, just the source of the project that's in development. That's a key use that people have for pip. And basically, if we can cover that, then pretty much anything else is easier, because the biggest problem that we have in all cases is metadata. It's: what are the versions, what are the dependencies? And as I say, the good thing about the existing code base for pip was that we already had a lot of that machinery in place. The build process is already there; we just feed it a project and out pops a built version with the metadata we need. So the problem of having different sources of data does hit us, and it's something that people will have been aware of, when it comes to looking at what projects are valid: what, for example, wheels, what binaries are valid in a particular environment. One case that came up recently was when Setuptools dropped Python 2 support, and so in order to install Setuptools in a Python 2 environment, you have to pick an older version. Now, there's lots of ways of getting that data. Specifically, there are tags built into the wheel specification which say "this wheel is only usable on Python 3". That's great, but unfortunately, what that does is it causes pip to fall back to building from source. So just saying that your wheel is only available for Python 3 doesn't really help, because pip will then say, okay, I'll build from source. Within the standard Python metadata, there is a piece of metadata that says "this project works on Python versions X, Y, and Z", and that is what's used as a sort of final resort, because that's the metadata that we need to do a build in order to get.
So what will happen there is, if we build the project and it turns out to be for a version that doesn't match the current environment, the new resolver will at that stage backtrack and try again. So there'll be a bit of a cost in building, but that's it. The old resolver had a big problem here, because by the time it had built the project it had already committed, and so under the old resolver people ended up getting an incompatible version installed. And that is one of the things that we wanted the new resolver to fix. The other place where you can get the data: the Requires-Python metadata is also exposed on the package index, as part of the data that PyPI provides, and pip uses that to catch the problem early. That is great; it saves an awful lot of processing from pip's point of view, because it can discard a lot of things straight away. It's the one place, though, where if somebody is using a different piece of index software that doesn't support that data-requires-python tag, then they won't get that benefit, and as they will have seen with the current version of the resolver, pip will install things that potentially don't match. So that's probably the main place where the various different sources are going to make a noticeable difference.
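The data-requires-python tag Paul mentions is part of the "simple repository" page format (PEP 503). As a rough sketch of how an installer can use it to discard candidates before downloading anything, here is a stdlib-only parser over a made-up fragment of such a page. The `supports` check is grossly simplified (real tools use `packaging.specifiers`); it handles only `>=` specifiers, and the page content is invented for this example.

```python
from html.parser import HTMLParser

# A made-up fragment in the style of a PEP 503 "simple" index page.
PAGE = """
<a href="pkg-1.0.tar.gz">pkg-1.0</a>
<a href="pkg-2.0.tar.gz" data-requires-python="&gt;=3.5">pkg-2.0</a>
"""

class SimpleIndexParser(HTMLParser):
    """Collects (filename, requires-python) pairs from anchor tags."""

    def __init__(self):
        super().__init__()
        self.files = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            attrs = dict(attrs)
            self.files.append((attrs.get("href"),
                               attrs.get("data-requires-python")))

def supports(requires_python, python_version):
    # Grossly simplified specifier check: only handles ">=X.Y".
    if requires_python is None:
        return True
    if requires_python.startswith(">="):
        floor = tuple(int(part) for part in requires_python[2:].split("."))
        return python_version >= floor
    return True  # unknown specifier: assume compatible

parser = SimpleIndexParser()
parser.feed(PAGE)
on_py2 = [href for href, rp in parser.files if supports(rp, (2, 7))]
on_py3 = [href for href, rp in parser.files if supports(rp, (3, 8))]
```

An index server that omits the attribute simply yields `None` here, which is Paul's point: without that hint, pip has to download and build a candidate before it can rule it out.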
Pradyun Gedam
The other sort of important thing here is that we are not directly using all the information from PyPI. This goes into a bit of technical detail, but basically, there is a page that lists all the versions of a package that are available on PyPI. That page contains the data-requires-python information and some metadata about the package: here's the name of the file, here's what version it is, here's where you can get the file, and a little bit more. But, very importantly for dependency resolution, this does not include dependency data. So we do not rely on this page for getting "oh, Flask requires this bunch of packages". No, that's not what the resolver sees. What the resolver sees is: okay, these are the available versions of Flask, and one of them is what I'm going to get for Flask. And then when we get it locally is when we trigger the build to get dependencies. Because, essentially because of history, we have an executable file, setup.py, which needs to be executed to get canonical, definitely correct, in-all-cases metadata about a package. And that's the build step that Paul's talking about: we need to do the "setup.py, tell me about this package" step, which is expensive. And in dependency resolution, when getting dependencies is an expensive step, it's a slow process overall, and it takes a lot of optimization to make sure we're not going in the wrong direction. Because backtracking will mean "oh, okay, I made a bad choice, let me go and do all of that all over again with a different set of choices". Right now, we're not going to enhance any of the interoperability standards for basically letting package indexes, like PyPI and other third party package indexes, give this data to pip, partly because that requires writing a PEP and getting a lot of work done, because it's a standardization problem and there's a lot of use cases to consider. People use Python in all sorts of ways.
So until that happens, for now, what we're going to do is: pip is going to get the package, generate the metadata, and get the dependencies that way. One of the big speedups that we want to do after the initial rollout of the resolver is working on exactly that, getting that standardized so that we can make that work not just for PyPI, but for all third party package indexes. And to sort of hint at why we do this: we've been burned by having implementation-defined standards in the past, where "oh, this works because that's how it's been implemented, and what's implemented is how we do stuff", instead of going "this is how we do stuff, and let's implement that", going from a piece of text to code. So we've been moving away from that through the standardization efforts that the Python Packaging Authority and the broader Python community have been doing in this space. And yeah, so we are not going to introduce more implementation-specific, implementation-defined stuff while also working towards reducing it. So that's why we're not going to use information that PyPI does provide in a PyPI-specific API until we standardize it and make it possible for third party package indexes to expose the same information.
Tobias Macey
And going back to Paul's example of Setuptools, that's something that I've personally been burned by, because I've got a Python 2 project that I have to maintain at my work, and, you know, creating the virtualenv for Python 2, and then having it install Setuptools greater than 45, and then just having everything break and having to manually override that. So I'm definitely looking forward to this new resolver being mainlined into pip, so keep up the good work on that front. And then, in terms of the new capabilities that you're hinting at, and some of the new standards that you're looking forward to, what are some of the overall improvements in pip itself and some of the surrounding packaging ecosystem that you either anticipate or that are explicitly blocked by this work?
Pradyun Gedam
One of the big ones is better environment management in pip, which has been something that lots of folks have been asking for. The other is that the dependency resolution logic is now going to be in a shared library, and so not everybody in this space will be implementing their own; we'll have at least some way to do interoperability. Although this is implementation-defined, it's also a lot closer to the implementation. The other is that it will help simplify some of the other tooling in the ecosystem. So, as Tzu-Ping hinted, Pipenv depends on pip-tools, which depends on pip's internals. If pip's internal resolver improves, pip-tools improves, and Pipenv improves. Similarly, other projects that are using pip under them, for whatever reason, benefit from this work as well. And another thing that's been very common is being able to upgrade packages without breaking the existing packages. Like, I want a new version of Django, and oh no, now my extensions don't work, now my plugins don't work, now my other applications don't work. That should not happen, and that's one of the things that this resolver will enable, in some senses, although that's something pip should be doing already, but isn't.
Tobias Macey
So that's definitely something that I've been burned by as well, and I'm sure many people have also: running pip install -U for a particular package and having it bring along a half dozen other things that you didn't anticipate.
Pradyun Gedam
Yeah, that got partially fixed at some point, but it was more of a workaround: pip stopped upgrading everything itself, and only upgrades the things it needs to. But again, that wasn't ideal, and doing it properly, resolver-side, handles it much better and more correctly.
Tobias Macey
And Tzu-Ping, as somebody who is maintaining one of the projects that is dependent on pip, what are some of the improvements in Pipenv, or some of the other projects that you maintain, that you are excited to be able to get started with once the resolver ends up in mainline pip?
Tzu-Ping Chung
So, as Pradyun mentioned, Pipenv depends on pip-tools, which depends on pip. But one of the, I guess, optimizations pip is doing in its dependency resolution is that it only resolves the dependencies for the current environment. So for example, if you depend on Django 3.0 on Linux, but you for some reason want to depend on Django 3.1 only on Windows, then the current pip resolver, and even the new pip resolver, will only choose either 3.0 or 3.1, based on what platform you're currently on. But for Pipenv, or for example Poetry, or maybe pip-tools, the ideal scenario would be to generate what I call the abstract dependency tree. So you would need to generate a lock file that says: if Windows, install Django 3.1, otherwise Django 3.0. And this is one of the things I am looking forward to doing: while I'm learning from the work I'm doing in pip, I can translate all that knowledge into building a better abstract dependency resolver for Pipenv and other similar tools.
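The abstract dependency tree Tzu-Ping describes could look something like the lock-file sketch below: every platform branch is recorded, with a marker evaluated on the machine doing the install. The structure, the marker dictionaries, and `select_pins` are all invented for illustration; real environment markers follow PEP 508 and are far richer.

```python
# A hypothetical "abstract" lock file: every branch is recorded up front,
# and the marker is evaluated at install time on the target machine.
LOCK = [
    {"name": "django", "version": "3.1", "marker": {"sys_platform": "win32"}},
    {"name": "django", "version": "3.0", "marker": None},  # fallback branch
]

def select_pins(lock, environment):
    """Pick one pin per package: the first entry whose marker matches wins."""
    chosen = {}
    for entry in lock:
        if entry["name"] in chosen:
            continue  # a more specific branch already matched
        marker = entry["marker"]
        if marker is None or all(environment.get(k) == v
                                 for k, v in marker.items()):
            chosen[entry["name"]] = entry["version"]
    return chosen

on_windows = select_pins(LOCK, {"sys_platform": "win32"})
on_linux = select_pins(LOCK, {"sys_platform": "linux"})
```

The point of the design is that resolution happens once, on the locking machine, while the cheap marker evaluation happens on each installing machine, which is exactly what pip's current-environment-only resolver cannot produce on its own.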
Tobias Macey
And as part of the overall project, you mentioned that you have a user experience team who are conducting user interviews, and I'm wondering how some of that has fed back into your plans for this project, and some of the ways that it has influenced your original ideas or changed the direction that you're taking for the technical implementation.
Tzu-Ping Chung
Yes. If I start from resolvelib: because resolvelib was kind of developed in a vacuum, without an actual use case until pip decided to choose it, one of the things that resolvelib does not do well is error reporting, because we didn't have real world examples to report on. So one of the biggest advantages the user research team is feeding back to us is how people are thinking about pip's error reporting of dependencies, and how we can potentially do it better in the new resolver.
Paul Moore
I think one of the other things that has been very valuable, certainly speaking as a pip maintainer, is just simply getting a view of how people are using pip out there. I mean, obviously we see reports of what people are doing, but a lot of what we get in the normal course of events comes from people raising issues. So, optimistically, I assume that we're getting a fairly biased view of how bad pip is, from only ever seeing bug reports. And one of the things we've definitely got from talking to some end users, about how they use pip, what they use it for, how it works: I was actually quite surprised that a lot of people were finding that they wanted pip to be more strict, more definite, more precise in what it did. They weren't trying to do bizarre, weird and wonderful things, and they weren't trying to force pip out of its comfort zone. They were actually just trying to get on with fairly straightforward stuff, and they just wanted pip to continue doing what it did, but better. So that was a really positive bit of feedback, and also reassuring, because when you're trying to deal with a problem like dependency resolution, you spend your life mired in ridiculously complicated examples of 20 things all interdependent on each other, or conflicting, and how are we going to deal with this problem? And getting that step back, getting the feedback that most people have relatively straightforward environments with relatively straightforward problems they want to see fixed, and that if we do that, we've solved a lot of the problems, puts a great perspective on what we're trying to achieve, and makes it more achievable, ultimately.
Pradyun Gedam
Yeah, I just strongly agree with what Paul said. With a pip maintainer hat on, I think the most valuable bit of information has been some amount of improved visibility into how users are using pip, because, to put it mildly, a lot of people are using pip, and we genuinely don't see a good chunk of "hey, pip just worked", because the normal communication channels are biased towards "hey, pip did not work" or "hey, I want to do this other thing". So getting that sort of perspective, as Paul put it, has been super useful. The other bit has been some amount of a better, not sure what the word would be, mental model for approaching user facing changes, as part of learning about these topics from people who are experts in the space. At least for me personally, getting their take on how to handle some of these disruptive changes that we might make in the future, and the communication around those, and how to handle telling users about them, and so on, has been useful as well. So hopefully that translates into slightly nicer experiences for users moving forward.
Tobias Macey
And when it comes time for people to start using the new resolver, is there any work that they're going to have to do on their end, or changes to their workflow? Or is it something that will just land in mainline, and it should essentially be invisible to end users, other than the fact that they'll start getting fewer errors from their installs?
Paul Moore
It should be as simple as that. The plan for the rollout is that right now, the new resolver is available behind a flag in pip. So you can enable the new resolver by saying --unstable-feature=resolver, and you can see how it's working and how it's progressing; it's obviously only very much in an alpha state right now. At the point where it's finally released, that unstable feature flag will no longer be needed, and the new resolver will just be there, hopefully. And the goal is that that will have as little impact as possible. But obviously, as we talked about, people will have been working around existing issues; people may have breakages in their environment where pip has installed things with conflicts. So we're going through a process at the moment, and there's going to be a proper beta release of the new resolver, which will be publicized later in May, I believe. At that point, we're looking for people to actually try and exercise the new resolver, if possible, or more generally, run pip check on their environment to make sure that they don't have issues in their existing installation, which, when the new resolver comes along, it might say: I'm sorry, I don't see how you could possibly have this; I'm not going to deal with it. Because obviously, the new resolver is designed to avoid conflicts, and if you hand it an already conflicting environment, it's going to have problems. So a little bit of pre-work on the part of users to make sure their environments don't have such problems. And in terms of maintaining their projects, looking at how they're specifying their dependencies: are they doing anything at the moment to work around resolver issues, and what's the plan going to be for maybe getting rid of that? The workarounds should carry on working, but rather than being necessary, they'll change to being suboptimal.
So ideally, people should plan to phase them out once the new resolver is in place. So there are things people can do, and it will help both them and us to get the new resolver in place. But hopefully, the vast majority of people can just enjoy the benefits of pip not breaking their dependencies for them.
Tobias Macey
And then, in terms of the broader ecosystem, and the broader experience of people using pip and programming in Python, how do you anticipate these improvements in pip impacting the overall viability of the Python ecosystem, and its use within different communities or industries that might be shying away from it because of some challenges that they faced with dependency conflicts?
Paul Moore
Hopefully positively. I think if you take it from the other angle: at the moment, dependency resolution in Python is known to be not perfect, shall we say. So there will be people that are looking at their project and saying, should I use Python? We've got an awful lot of dependencies; is it going to be okay? And hopefully, having a better story around how the resolver works and dependency resolution will mean that people will be more confident in saying, yeah, I can choose Python, because dependencies aren't a problem. So, I mean, I don't want to make it sound like we're going to solve all the problems of the universe here, but a little bit more enthusiasm for Python, because it doesn't have as many concerns around it, will hopefully improve adoption. It would be nice to think that the improvement in supporting tools that we were talking about previously will just generally make for a nicer experience in the overall ecosystem, but that's obviously a little bit longer term. And I guess, on a very personal level, speaking with my standards-guy hat on, seeing how pip can use good dependency data to produce a good result will encourage the community to think more in terms of providing that data: publishing the metadata statically, not building it on the fly, not sort of trying to say, well, we need this here, we need that there. If we can put all of that in a form that pip can just grab in one go, that will be yet a further improvement; having that data available will be really useful. And I think the new resolver, in my mind, will be a good showcase of how more data and better metadata can improve the tools people are using, and so will bring people's thinking forward on that score.
Pradyun Gedam
Yeah. And in addition to the standardization, the other thing is the tertiary effect, which Paul's sort of de-emphasizing, of getting more consistent information, good information, good metadata from existing projects. Over time, I think in the longer term, that will be the bigger, more relevant effect of this, since it will push projects to be more correct about "hey, this is what I depend on". And it will help surface problems with the metadata itself, which right now pip masks completely; the new resolver will start surfacing those issues, and those will start getting fixed in the longer term. In the short term, they'll possibly be a bit of a pain point in the rollout. I am not sure which; that's what the rollout is for figuring out. But yeah, in the longer term, in general, it will push for nicer metadata, more static metadata, all of which will bring us more towards a nicer packaging experience than we have today.
Paul Moore
It's been a long road, and this is another step on that road. And I think, looking to the future, we will be continuing down that process for some time yet, but it's definitely another case of us going in the right direction, in my view.
Tobias Macey
In terms of your own goals, once this body of work is done, what are some of the additional changes or improvements that you would like to see, or be involved with, either in pip or some of the other core elements of the Python landscape?
Pradyun Gedam
So I think we've already mentioned a couple, one of them being static metadata: basically not having to do the "setup.py, give me information" step, and just going, okay, I can read this file and get that information. So, putting that metadata in pyproject.toml. Poetry is already doing that today, but we want to make it possible for it to be done by all the tools, right? So, having an interoperability standard about this. And there's some discussion happening with the various authors and parties involved; we're all discussing this right now, and I expect that will happen in the near future, although no promises on the timeline, because it's all done on a volunteer basis. The other, which we've already mentioned also, is being able to standardize a way for pip and other Python packaging installers and environment managers, or whatnot, to get dependency information from a package index, whether it's PyPI or someone's Artifactory instance or something else entirely: being able to get that information into the installer without needing to download, do a build, and look at the results. The third one, which is a lot broader, is to move further in the direction of having reusable libraries for doing these chunks of jobs in Python packaging. So, going from a wheel to installing the wheel on the system: oh, you don't really need pip to do that, it's a well defined step. Maybe we could have a library, that pip uses, that actually does this, and then other people can use that library too. Similarly, building packages: it's a well defined process; it's just that pip contains the only good implementation of it. Maybe if we make a common implementation that pip also uses, everybody can use just that step.
And sort of moving to these reusable libraries would decouple the ecosystem further from the implementation-defined details of pip, Setuptools, and so on, with those tools becoming more like wrappers around these libraries that handle all of the legacy options that they have, and so on and so forth. And then letting newer tooling evolve, potentially replacements, potentially improvements to pip, Setuptools, whatnot, themselves, that essentially allow for the evolution of the ecosystem as a whole. Because that's the goal, right? To make things better as we go, and these reusable libraries will help with that as well.
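As a hint of why "install a wheel" is a separable, well defined step: a wheel is, per the wheel specification (PEP 427), a zip archive with a defined layout, so the simplest possible "install" is an unpack. The sketch below builds a toy wheel-shaped zip and extracts it into a site-packages-like directory; a real installer also handles RECORD files, entry-point scripts, and compiled-file caching, none of which is shown here.

```python
import os
import tempfile
import zipfile

workdir = tempfile.mkdtemp()

# Build a minimal wheel-shaped archive (a real wheel carries more metadata).
wheel_path = os.path.join(workdir, "demo-1.0-py3-none-any.whl")
with zipfile.ZipFile(wheel_path, "w") as whl:
    whl.writestr("demo/__init__.py", "VERSION = '1.0'\n")
    whl.writestr("demo-1.0.dist-info/METADATA", "Name: demo\nVersion: 1.0\n")

# The simplest possible "install": unpack into a site-packages-like directory.
site_packages = os.path.join(workdir, "site-packages")
with zipfile.ZipFile(wheel_path) as whl:
    whl.extractall(site_packages)

installed = sorted(os.listdir(site_packages))
```

Because the format is fully specified, this step does not need to live inside pip at all, which is exactly the argument for factoring it out into a shared library.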
Tzu-Ping Chung
Yeah, I want to second Pradyun's last point about reusable libraries. A lot of pip's deficiencies come from the fact that pip is a ten year old project that was designed to do something other than what it is currently doing. Some of the newer packaging projects in Python are able to solve these problems better simply because they don't have the same historical baggage pip has. But right now, pip is the only implementation that can do everything pip can do. So by splitting out the reusable parts, it will be much easier for the community to come up with different solutions that fit the modern world better than pip, while pip can keep on doing what it does best, and supporting those people that are already using pip in their applications.
Pradyun Gedam
Paul, do you want to add something?
Paul Moore
I could! I could thank you guys for being my standards cheerleaders. I think everything said is essentially correct. With my role as standards person, that is absolutely my goal: to make sure that the ecosystem is built in such a way that if somebody wants to come along and write a specialized tool for their particular use, they don't have to reinvent all the wheels that pip has invented over time.
Pradyun Gedam
I mean, we already reinvented the wheel format, so we don't want to do more of that.
Tzu-Ping Chung
Pun intended, though, I'm sure.
Paul Moore
I promised my family I wasn't going to make bad puns on this podcast. You've just ruined it for me.
But yeah, I'm going to collapse in disarray now. Essentially, yes: let's standardize stuff, let's make it reusable, let's give the community a chance to innovate in ways that it can't when everything's getting measured against "I'd love to use your tool, but it doesn't do this particular bit of what pip does."
Tobias Macey
In terms of your experiences of working on this new dependency resolution algorithm, improving some of the overall implementation of pip, and paying down its technical debt, what are some of the most interesting or unexpected or challenging lessons that you've learned in the process?
Paul Moore
For me, I think I mentioned earlier that this is the first time I've ever actually been paid to work on Python. My day job does not involve Python, except in very minor ways. And for me, oddly enough, the thing that I found remarkable is how different it was working on pip as a full time, well, in my case part time, but nevertheless dedicated piece of work, rather than doing it as a hobby in my spare time. The ability to focus on the bigger problems, the ability to get deeply involved in addressing the difficult issues, rather than finding an hour or two one evening to knock off a couple of little bits that have been bugging me: it's been remarkable how much of a difference that's made. And I don't think I had realized that, even while I have said that one of the problems pip has is a lack of resources. We've got, I'm not quite sure exactly how many maintainers we've got now, five or six, all of whom are volunteers working on it in their spare time, and I don't think I'd realized how much of an impact that has on what we've been able to do with pip. If you look at a lot of the other language communities and their packaging solutions, a lot of them have funded work going on for them. If you look at Conda, within the Python environment, that has, I believe, corporate support. But what pip's doing is very much spare time. So the really unexpected side for me was just realizing how much more productive we could be if we could get people working on Python packaging as a dedicated task.
Pradyun Gedam
Yeah, one of the more interesting things for me personally is, like Paul, this is the first time I'm getting paid to work on a Python project. It just so happens that it's also the first time I'm getting paid to do work at all, because I'm fresh out of college, well, still in college. And I think, over the past few years, one of the things that's been super interesting to me as a contributor is just seeing how different the ecosystem funding situation is. Because if I look at other package managers, npm, Cargo, Bundler, all of those have some sort of big financial backing organization that's paying people full time to work on these things, and people coming from other toolings, other ecosystems, come in expecting pip to work the same way. And for a little bit there, the reality of it was: it was just a college student sitting in his dorm room, fixing those issues or responding to those issue comments. And it took me a while, after actually having started working on this, to go: wow, that was the situation this tooling was in, and that's not great. But the other part of it is, it's been awesome to actually have funding to do this stuff, right? This is the first time that pip's ever had funding go towards it, and kudos to everybody in the Python Software Foundation's Packaging Working Group who made this happen. Yeah, it's just a really different approach to working. The feedback loops are faster, at least compared to what they were earlier; they're not instant, or super fast, but it's really nice to have them be faster. The other thing I have probably learned would be just the sheer number of ways people use pip. In the three years that I've been contributing slash maintaining it, I had got a glimpse of it, but to actually have user surveys and see the results of those, and sort of get that information in data form rather than a qualitative form, has been really enlightening, to put it one way, I guess.
And the face to face time with other contributors has been really nice, because, as I said, there have not been many resources, so we had not actually spent time discussing issues over calls. Most of it had just been collaboration over comments on GitHub, and mailing lists, and discussions on Discourse. And to have the ability to go, "hey, do you have a minute to hop on a call and discuss this real quick?", and get a response within the half hour, has been basically amazing, compared to needing to wait three days to get a response to a detailed comment you made, because it was too long to read for someone in the ten minutes that they had that day.
Tzu-Ping Chung
Yeah, that's been a big change in this project. So, for background, I'm the only one of the three that was not a pip maintainer before joining the project, so my impression of the project is entirely opposite, in a way. This is not my first freelancing work, so I've been doing remote work for a while, but this is the first time I'm working with people in real time in different time zones, like literally half the globe away.
Pradyun Gedam
Almost everyone in the team is in a different time zone.
Tzu-Ping Chung
Yeah, I believe the four of us are in different time zones right now. Paul and Pradyun both mentioned that the feedback was fast. For me, the feedback loop initially was terribly slow, because everyone is in different time zones, they are all working part time, and because we are dealing with different parts of pip. Even though some of them have worked on pip for ten years, they don't necessarily have knowledge of all of the code in pip, so sometimes we need feedback from other pip maintainers. Compared to a corporate setting, where you can just walk by someone's desk, or say, hey, I'll drop by tomorrow morning and see what we can work out, that doesn't happen with this project. So this has been a revelation to me. I'm also very impressed by how everyone has managed this project with so many restrictions going on. It really takes a lot of discipline, independence, and self-awareness from everyone to push things forward.
Tobias Macey
All right. Are there any other aspects of the dependency resolution work that you're doing, or your experiences working on pip, or your experiences contributing to the overall Python community, that we didn't discuss that you'd like to cover before we close out the show?
Pradyun Gedam
I think one of the nice effects of this project actually happening, along with the PyPI rollout happening earlier, is that it acts as a showcase to the Python packaging ecosystem that, hey, we can actually get funding for open source projects. If we can show people that this money will be put to good use, it's possible for us to raise funding for targeted open source projects, which is a really good model for pushing open source tooling forward, because for the funders there's a clear expectation of what the funding should produce as an output, which, let's be honest, is how most funding works; no one's going to give you money to do just whatever you want. And on the other hand, that funding can be used to improve maintainer availability and developer time, and to improve the project not just in that one dimension but in other dimensions too: you can put that money towards general maintenance as well. We've had a release happen while I've been getting paid to work on pip, and this release went fairly smoothly compared to the first one, because I had a lot more time to pay attention to details. Things like that are reasons we should get funding into open source. And I think the model this project demonstrates, of taking a targeted project, raising funding for it, and getting the existing maintainers to work on it, is perhaps not sustainable, but it's definitely better than not having funding at all.
Paul Moore
I think one other thing I would say, and everybody says this, so I'm going to be thoroughly unoriginal: it's been really, really enjoyable working on this project. I think the people in the Python community, the people I've dealt with, make it so much fun. I've had a blast, and that is all down to the community as a whole: the support we get from the people using pip, the support we get from the people working with us. It's amazing. And that's something that's important to remember. We've got an amazing community, and it makes such a difference.
Pradyun Gedam
Yes, definitely. Definitely. It's been so nice to get random tweets from people saying, hey, this thing that didn't work in the alpha works now, yay. It makes your day; it makes you really happy that the stuff you're doing has almost immediate impact on users, and that they're super stoked about it and are motivating you, telling you, hey, good stuff. It's really nice. Sorry, I interrupted, TP.
Tzu-Ping Chung
No, I was just saying, I very much agree with what Paul said. I'm not going to say that it's been a blast all the way through, but maybe we can do it again, for something else.
Pradyun Gedam
How much of that is because we've had to deal with a lot of technical debt, though?
Tzu-Ping Chung
I don't know. Maybe it's just very nice to have someone to talk to when you're dealing with the technical debt, like, you're not alone in this; there are other people also suffering from the same problems as you are. Yeah, it's a very good experience.
Tobias Macey
All right. Well, for anybody who wants to follow along with the work that you're all doing, or get in touch, or get involved, I'll have you each add your preferred contact information to the show notes. And with that, I'll move us into the picks. How about we start with you, Tzu-Ping?
Tzu-Ping Chung
The first one is the Python Launcher from Brett Cannon, a core developer and also a member of the steering council. Python Launcher, python-launcher, I forget the exact name; I'll provide a link later. It's a project written in Rust: when you type py on your command line, it dynamically finds all the Pythons on your system and launches the best one for you, and you can supply flags to let it choose, say, Python 2 or Python 3.5 specifically. If you're familiar with Windows, Python there has had a similar thing called the Python Launcher for Windows, py.exe, which looks in the registry for installed Pythons and launches one for you based on the arguments you pass in. But we don't really have something similar on macOS or Linux, and that has been a problem in tutorials, because you need to provide different commands for different platforms. The Python Launcher is a project that aims to solve this problem, so we can have one canonical way to launch Python. The other pick is a book author, Joe Abercrombie. He writes fantasy novels; I've been reading his Shattered Sea trilogy, and it's been very interesting. I've also recently picked up the other series he has, whose name I forget, and I'm really looking forward to it. If you are into fantasy novels, I highly recommend his writing. And the final one is maybe anime in general. I know anime is not a big thing outside of East Asia, and a lot of people think that anime is for kids, like cartoons. But comparing anime to movies is kind of like comparing Python to Windows, which does not really make sense: one of them is an art form, and one of them is a medium. So if you have been staying home for too long and are looking for something to pass the time, I recommend maybe watching some more adult anime, and maybe you will find something suitable for you.
Tobias Macey
And Paul, do you have any picks this week?
Paul Moore
I guess one of the things that I've found really useful: as part of this project, I had to set up a brand new laptop, and there are a couple of really nice tools that I've enjoyed using for that. On the Python side, there's a project called pipx, which basically lets you set up Python projects that work as command line tools, things like Black, things like Nox and tox. You can set them up as little local commands; they've got their own isolated environments, so you don't have them installed within your own main environment, and they just run like independent commands. It's really great for managing all of those tools that turn a PC into a usable working environment. And the similar thing that I use: I'm based on Windows, and finding all the little tools, things like Unix utilities and command line tools that you want to use, can be awkward. There's a really nice package manager that I've discovered and use a lot called Scoop, which effectively manages installing and uninstalling all those little tools you want to get hold of, from a simple command line interface. It's really helped me set things up fast and not have to worry about, why am I on a PC that hasn't got such-and-such installed again? So those are really great; I've thoroughly enjoyed them. And I guess following on from the theme of, sort of, what do you do while you're in lockdown?
A guy I've read a lot of books by is Neil Gaiman, who I guess a lot of people will have heard of. His books, I think, are really good, and he's done a lot of work on TV as well. I've recently watched the TV series of Good Omens, which is a thing he did with Terry Pratchett, and which I thought was absolutely fantastic. I thoroughly enjoyed watching that, and I'd recommend it to anybody who enjoys either Terry Pratchett or Neil Gaiman.
Tzu-Ping Chung
Well worth a look. Paul just mentioned Scoop and pipx: I made a Scoop formula (recipe? I forget the term) for pipx, so you don't need to globally install pipx into your Python. Excellent.
Pradyun Gedam
Well, Paul picked pipx, which was going to be one of my picks, so I guess I'll keep going with the things that have kept me sane in this lockdown. Two of them have been mostly recreational, because luckily I've had enough work stuff going on to keep me occupied. The first is music by Chris Daughtry, who has been around for quite a while; he was an American Idol contestant in 2006, and so on. His band, Daughtry, is an alternative rock band, and they're a nice bunch of people with a really good discography. I don't know of a single song in there that I don't like, so that has kept me sane. The other thing has probably been Parkitect, which is a game where you build and manage an amusement park, basically. It's a really good way to put a whole bunch of creative energy in one place and build stuff that, at least, makes me feel happy. So that's another.
Tobias Macey
All right, and for my picks this week, I just recently started experimenting with the Language Server Protocol and its implementation in Emacs with lsp-mode. It's a way of sharing a lot of IDE-like behavior between different editors, so things like completions and linting, and being able to navigate to definitions or find references to different variables. It's a pretty great way of making your environment easier to manage and portable across different editors, so I've been enjoying that, and I recommend it for anybody who's using any sort of text editing environment. And with that, I will close out the show. I just want to thank you all for taking the time today to join me and share the work that you're doing on improving the dependency resolution in pip. It's something that is going to benefit everybody in the community, and something that I look forward to taking advantage of myself. So I appreciate all of your time and effort on that, and I hope you enjoy the rest of your day.
Paul Moore
Thank you. Thanks, Tobias.
Pradyun Gedam
Thanks, Tobias. Have a good day, folks.
Tzu-Ping Chung
Right. Bye bye. Bye, everyone.
Tobias Macey
Thank you for listening. Don't forget to check out our other show, the Data Engineering Podcast, at dataengineeringpodcast.com for the latest on modern data management, and visit the site at pythonpodcast.com to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show, then tell us about it: email hosts@podcastinit.com with your story. To help other people find the show, please leave a review on iTunes and tell your friends and coworkers.