• pjmlp 9 hours ago

    I really find it a bummer that there is such resistance to making JIT-enabled versions available for download alongside the other variants.

    Naturally I can easily compile my own Python 3.13 version, no biggie.

    However, in my experience this means that many people who could potentially try it out and give feedback don't bother and would rather wait.

    • Ralfp 6 hours ago

      The JIT is in a weird place right now: according to a recent PyCon US talk by its author, the tier 2 optimizer that prepares code for the JIT reduces performance by 20%, and the JIT just recovers that performance loss.

      A lot of the language is still not optimized by tier 2 [1], and even less of it has copy-and-patch templates for the JIT. And the JIT itself currently has some memory management issues to iron out.

      [1]: https://github.com/python/cpython/issues/118093

      The talk by Brandt Bucher was up, but it was made private:

      https://www.youtube.com/watch?v=wr0fVU3Ajwc

      • pjmlp 5 hours ago

        I have seen some stuff about that, however that is to be expected.

        Many of these kinds of changes take time and require multiple iterations.

        See Go or .NET tooling bootstrapping, all the years it took Maxine VM to evolve into GraalVM, Swift's evolution versus Objective-C, the Java/Kotlin AOT evolution story on Android, and so on.

        If only the people who care deeply enough to compile from source get to try out the JIT and give feedback, it will have even fewer people trying it out than bother with PyPy.

        • targafarian 2 hours ago

          On the other hand, if people who don't care enough to compile it for themselves try it out, the Python devs can be flooded with complaints and bug reports that effectively come down to it being in alpha.

          You get both sides (yes, you might limit some who would otherwise try it out).

          I think requiring people to compile to try out such a still-fraught, alpha-level feature isn't too onerous. (And that's only from official sources; third parties can offer compiled versions to their hearts' content!)

          • sevensor 3 hours ago

            > those that bother with PyPy

            Which itself needed to be compiled from source the first time I tried it. All the hours of Mandelbrot were worth the spectacular speedup.

            • pjmlp 2 hours ago

              Which was a thing several years ago, not in 2024.

              As mentioned I don't have any issues compiling it myself, more of an adoption kind of remark.

              • mattip an hour ago

                There are portable downloads of PyPy here https://pypy.org/download.html

          • pzo 8 hours ago

            Not sure if it will be distributed in Homebrew et al., but at least "Pre-built binaries marked as free-threaded can be installed as part of the official Windows and macOS installers" [0]

            [0] https://docs.python.org/3.13/whatsnew/3.13.html#free-threade...

            • pjmlp 8 hours ago

              Yes, that is my point: the GIL-free build is available, but not one with the JIT enabled.

              • nerdponx 8 hours ago

                Hopefully Macports will decide to offer it.

              • jononor 4 hours ago

                It will happen in one of the later releases. They might not be ready for widespread testing from people who are not willing to build from source, yet.

              • MrThoughtful 10 hours ago

                Removing the GIL sounds like it will make typical Python programs slower and will introduce a lot of complexity?

                What is the real world benefit we will get in return?

                In the rare case where I need to max out more than one CPU core, I usually implement that by having the OS run multiple instances of my program and put a bit of parallelization logic into the program itself. Like in the mandelbrot example the author gives, I would simply tell each instance of the program which part of the image it will calculate.
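
                A rough sketch of that pattern using the stdlib multiprocessing module in place of separate program instances; the image dimensions, bounds, and iteration cap below are invented for illustration.

```python
# Hedged sketch: split a Mandelbrot render across worker processes,
# each computing whole rows (all sizes here are illustrative).
from multiprocessing import Pool

WIDTH, HEIGHT, MAX_ITER = 80, 40, 50

def mandelbrot_row(y):
    # Escape-time counts for one row of the image.
    row = []
    for x in range(WIDTH):
        c = complex(3.5 * x / WIDTH - 2.5, 2.0 * y / HEIGHT - 1.0)
        z = 0j
        n = 0
        while abs(z) <= 2 and n < MAX_ITER:
            z = z * z + c
            n += 1
        row.append(n)
    return row

def render():
    # Each worker process takes rows; Pool.map reassembles
    # the results in their original order.
    with Pool() as pool:
        return pool.map(mandelbrot_row, range(HEIGHT))
```

                Each process works on its own slice, so no locking is needed; the only communication is the returned rows.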

                • tonygrue 10 hours ago

                  There is an argument that if you need in-process multithreading you should use a different language. But a lot of people need to use Python, because everything else they're doing is in Python.

                  There are quite a few common cases where in-process multithreading is useful. The main ones are where you have large inputs or large outputs to the work units. In-process is nice because you can move the input or output state to the work units instead of having to copy it.

                  One very common case is almost all GUI applications, where you want to be able to do all work on background threads and just move data back and forth from the coordinating UI thread. JavaScript's lack of support here, outside of a native language compiled into Emscripten, is one reason web apps are so hard to make jankless. The copies of data across web workers or Python processes are quite expensive as far as things go.

                  Once a week or so, I run into a high-compute Python scenario where the existing forms of multiprocessing fail me: large shared inputs, and/or I don't want the multiprocess overhead; but the GIL slows everything down.
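
                  For illustration, the large-shared-input case looks something like this with threads; the workload is invented, and on a GIL build the threads serialize, so real parallel speedup needs a free-threaded build.

```python
# Illustrative only: every thread reads the same in-memory input with
# zero copies, which is the advantage over multiprocessing described
# above. Parallel speedup requires a free-threaded (no-GIL) build.
from concurrent.futures import ThreadPoolExecutor

big_input = list(range(1_000_000))  # shared by all threads, never copied

def partial_sum(bounds):
    # Each thread sums its own slice of the shared input.
    lo, hi = bounds
    return sum(big_input[lo:hi])

def parallel_sum(workers=4):
    # Carve the index range into one chunk per worker thread.
    step = len(big_input) // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))
```

                  With processes instead of threads, `big_input` would have to be pickled to (or re-built in) every worker.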

                  • vlovich123 9 hours ago

                    > Where you want to be able to do all work on background threads and just move data back and forth from the coordinating ui thread. JavaScript’s lack of support here, outside of a native language compiled into emscripten, is one reason web apps are so hard to make jankless

                      I thought transferring array buffers through web workers didn't involve any copies if you actually transferred ownership:

                        worker.postMessage(view.buffer, [view.buffer]);
                    
                      I can understand that web workers might be more annoying to orchestrate than native threads and the like, but I'm not sure JavaScript lacks the primitives to make it possible. More likely it's really hard to have a pauseless GC for JS (Python predominantly relies on reference counting and uses the GC just to catch cycles).
                    • Etheryte 8 hours ago

                      This is true, but when do you really work with array buffers in JavaScript? The default choice for whatever it is that you're doing is almost always something else, save for a few edge cases, and then you're stuck trying to bend your business logic to a different data type.

                      • vlovich123 7 hours ago

                        That’s a choice you get to make and probably depends on your problem domain and other things. For example when I was writing R2 it was all ArrayBuffers up and down the stack. And you could use something like capnproto or flat buffers for managing your object graph within an array buffer. But yeah, being able to transfer custom object graphs would be more powerful.

                        • tombl 4 hours ago

                          Is this some internal cloudflare feature flag or can everybody pass ArrayBuffers zero-copy via service bindings?

                          (random question, totally understand if you're not the right person to ask)

                        • formerly_proven 6 hours ago

                          There is this assumption in these discussions that anything consuming significant CPU must necessarily have a simple interface that’s easy to reduce to a C-level ABI, like calling an ML library on an image, a numpy function on two big matrices or some encryption function. Therefore it is trivial to just move these to native code with an easy, narrow interface.

                          This assumption is incorrect. There are plenty of problems that consist entirely of business logic manipulating large and complex object graphs. “Just rewrite the hot function in rust, bro” and “just import multiprocessing, bro” are functionally identical to rewriting most of the application for these.

                          The performance work of the last few years, free threading and JIT are very valuable for these. All the rest is already written in C.

                          • wruza 3 hours ago

                            It's a good assumption though, because it kept the door closed to the absolutely nightmarish landscape of "multithreading for the masses". Those who opened it probably see it better, but, IMO and IME, it should have remained closed. Maybe they'll manage to handle it this time, but I'm 95% sure it's going to be yet another round of ass pain for the world of Python.

                            • pansa2 3 hours ago

                              > “Just rewrite the hot function in rust, bro” and “just import multiprocessing, bro” are functionally identical to rewriting most of the application for these.

                              Isn't "just use threads, bro" likely to be equally difficult?

                    • inoop 21 minutes ago

                      As always, it depends a lot on what you're doing, and a lot of people are using Python for AI.

                      One of the drawbacks of multi-processing versus multi-threading is that you cannot share memory (easily, cheaply) between processes. During model training, and even during inference, this becomes a problem.

                      For example, imagine a high volume, low latency, synchronous computer vision inference service. If you're handling each request in a different process, then you're going to have to jump through a bunch of hoops to make this performant. For example, you'll need to use shared memory to move data around, because images are large, and sockets are slow. Another issue is that each process will need a different copy of the model in GPU memory, which is a problem in a world where GPU memory is at a premium. You could of course have a single process for the GPU processing part of your model, and then automatically batch inputs into this process, etc. etc. (and people do) but all this is just to work around the lack of proper threading support in Python.
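
                      The shared-memory hoop mentioned above looks roughly like this with the stdlib multiprocessing.shared_memory module; the frame size and the checksum standing in for "inference" are invented, not from a real service.

```python
# Hedged sketch: the parent writes a large "image" into a named shared
# block and a worker process reads it without any socket copy.
from multiprocessing import Process, shared_memory

IMAGE_SIZE = 1920 * 1080 * 3  # one fake RGB frame

def checksum_worker(name, out_name):
    # Attach to the existing blocks by name; no pixel data is copied.
    shm = shared_memory.SharedMemory(name=name)
    out = shared_memory.SharedMemory(name=out_name)
    out.buf[0] = sum(shm.buf[:1000]) % 256  # stand-in for "inference"
    shm.close()
    out.close()

def run():
    image = shared_memory.SharedMemory(create=True, size=IMAGE_SIZE)
    result = shared_memory.SharedMemory(create=True, size=1)
    try:
        image.buf[:1000] = bytes(range(250)) * 4  # fill part of the frame
        p = Process(target=checksum_worker, args=(image.name, result.name))
        p.start()
        p.join()
        return result.buf[0]
    finally:
        image.close()
        image.unlink()
        result.close()
        result.unlink()
```

                      All the naming, attaching, and unlinking above is exactly the kind of bookkeeping that threads with a shared address space would make unnecessary.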

                      By the way, if anyone is struggling with these challenges today, I recommend taking a peek at nvidia's Triton inference server (https://github.com/triton-inference-server/server), which handles a lot of these details for you. It supports things like zero-copy sharing of tensors between parts of your model running in different processes/threads and does auto-batching between requests as well. Especially auto-batching gave us big throughput increase with a minor latency penalty!

                      • bdd8f1df777b 2 hours ago

                        The biggest use case (that I am aware of) of GIL-less Python is for parallel feeding data into ML model training.

                        * PyTorch currently uses `multiprocessing` for that, but it is fraught with bugs and falls short of the performance that ML training sorely needs (it can starve the GPU).

                        * Tensorflow just discards Python for data loading. Its data loaders are actually in C++ so it has no performance problems. But it is so inflexible that it is always painful for me to load data in TF.

                        Given how hot ML is, and how Python is currently the major language for ML, it makes sense for them to optimize for this.

                        • lifthrasiir 9 hours ago

                          > Removing the GIL sounds like it will make typical Python programs slower and will introduce a lot of complexity?

                          This was the original reason for CPython to retain the GIL for a very long time, and it was probably true for most of that time. That's why the eventual GIL removal had to be paired with other important performance improvements like the JIT, which was only implemented after some feasible paths were found and a big sponsor explicitly funded the work.

                          • klranh 3 hours ago

                            That is the official story. None of it has materialized so far.

                          • simonw 10 hours ago

                            My hunch is that in just a few years time single core computers will be almost extinct. Removing the GIL now feels to me like good strategic preparation for the near future.

                            • naming_the_user 10 hours ago

                              It depends what you mean by extinct.

                              I can't think of any actual computer outside of embedded that has been single core for at least a decade. The Core Duo and Athlon X2 were released almost 20 years ago now and within a few years basically everything was multicore.

                              (When did we get old?)

                              If you mean that single core workloads will be extinct, well, that's a harder sell.

                              • simonw 9 hours ago

                                Yeah, I just checked and even a Raspberry Pi has four cores these days. So I guess they went extinct a long time ago!

                                • poincaredisk 6 hours ago

                                  Yes, but:

                                  * Most of the programs I write are not (trivially) parallelizable, and the bottleneck is still single-core performance

                                  * There is more than one process at any time, especially on servers. Other cores are also busy and have their own work to do.

                                  • formerly_proven 6 hours ago

                                    Even many microcontrollers have multiple cores nowadays. It’s not the norm just yet, though.

                                • im3w1l 10 hours ago

                                  Single core computers yes. Single core containers though..

                                  • gomerspiles 9 hours ago

                                    Single core containers are also a terrible idea. Life got much less deadlocked as soon as there were 2+ processors everywhere.

                                    (Huh, people like hard OS design problems for marginal behavior? OSes had trouble adopting SMP but we also got to jettison a lot of deadlock discussions as soon as there was CPU 2. It only takes a few people not prioritizing 1 CPU testing at any layer to make your 1 CPU container much worse than a 2 VCPU container limited to a 1 CPU average.)

                                • pansa2 8 hours ago

                                  > What is the real world benefit we will get in return?

                                  If you have many CPU cores and an embarrassingly parallel algorithm, multi-threaded Python can now approach the performance of a single-threaded compiled language.

                                  • Certhas 8 hours ago

                                    The question really is if one couldn't make multiprocess better instead of multithreaded. I did a ton of MPI work with python ten years ago already.

                                    What's more I am now seeing in Julia that multithreading doesn't scale to larger core counts (like 128) due to the garbage collector. I had to revert to multithreaded again.

                                    • Sakos 7 hours ago

                                      I assume you meant you had to revert to multiprocess?

                                      • Certhas 43 minutes ago

                                        Yes exactly. Thanks.

                                    • 0x000xca0xfe 3 hours ago

                                      You could already easily parallelize with the multiprocessing module.

                                      The real difference is the lower communication overhead between threads vs. processes thanks to a shared address space.

                                      • bmitc an hour ago

                                        Easily is an overstatement. Multiprocessing is fraught with quirks.

                                        • 0x000xca0xfe 13 minutes ago

                                          Well I once had an analytics/statistics tool that regularly chewed through a couple GBs of CSV files. After enough features had been added it took almost 5 minutes per run which got really annoying.

                                          It took me less than an hour to add multiprocessing to analyze each file in its own process and merge the results together at the end. The runtime dropped to a couple seconds on my 24 thread machine.

                                          It really was much easier than expected. Rewriting it in C++ would have probably taken a week.
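
                                          The pattern described is roughly this; a sketch with invented demo files, not the original tool.

```python
# Hedged sketch: fan each CSV file out to its own worker process,
# then merge the per-file results at the end.
import csv
import os
import tempfile
from collections import Counter
from multiprocessing import Pool

def analyze_file(path):
    # Per-process work: count rows per category in one CSV file.
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.reader(f):
            counts[row[0]] += 1
    return counts

def analyze_all(paths):
    # One task per file; Counter.update merges the partial results.
    with Pool() as pool:
        merged = Counter()
        for partial in pool.map(analyze_file, paths):
            merged.update(partial)
        return merged

def write_demo_files():
    # Tiny stand-ins for the "couple GBs of CSV files".
    tmp = tempfile.mkdtemp()
    demo = [["a", "b", "a"], ["b"]]
    paths = []
    for i, labels in enumerate(demo):
        path = os.path.join(tmp, f"{i}.csv")
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            for label in labels:
                writer.writerow([label])
        paths.append(path)
    return paths
```

                                          Because each file is independent, there is no shared state to protect, which is what makes this one of the easy cases for multiprocessing.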

                                      • bmitc an hour ago

                                        That's not really correct. Python is by far the slowest mainstream language. It is embarrassingly slow. Furthermore, several mainstream compiled languages have been multicore-capable for decades. So comparing against a single-threaded language or program doesn't make sense.

                                        All this really means is that Python catches up on decades old language design.

                                        However, it simply adds yet another design input. Python's threading, multiprocessing, and asyncio paradigms were all developed to get around the limitations of Python's performance issues and the lack of support for multicore. So my question is, how does this change affect the decision tree for selecting which paradigm(s) to use?

                                      • Zyten 9 hours ago

                                        What you’re describing is basically using MPI in some way, shape or form. This works, but also can introduce a lot of complexity. If your program doesn’t need to communicate, then it’s easy. But that’s not the case for all programs. Especially once we’re talking about simulations and other applications running on HPC systems.

                                        Sometimes it’s also easier to split work using multiple threads. Other programming languages let you do that and actually use multiple threads efficiently. In Python, the benefit was just too limited due to the GIL.

                                        • carapace 27 minutes ago

                                          > What is the real world benefit we will get in return?

                                          None. I've been using Python "in anger" for twenty years and the GIL has been a problem zero times. It seems to me that removing the GIL will only make for more difficulty in debugging.

                                          • cma 10 hours ago

                                            There will be consumer chips with 64 cores before long

                                          • nmstoker 5 hours ago

                                            Just a minor correction: it's looking like the release will be 7th October, pushed back from 2nd October, for the reasons discussed here:

                                            https://discuss.python.org/t/incremental-gc-and-pushing-back...

                                            • drewsberry 3 hours ago

                                              Thanks for pointing this out, I hadn't seen that – I've just pushed an update to reflect this.

                                            • pininja 10 hours ago

                                              I remember first discussions about removing the GIL back in 2021 and a lot of initial confusion about what the implications would be. This is a great summary if, like me, you weren’t satisfied with the initial explanations given at the time.

                                              • ffsm8 7 hours ago

                                                2021 wasn't the first discussion about that.

                                                You can find forum and Reddit posts going back 15-20 years of people attempting to remove the GIL. Guido van Rossum made it a requirement that single-core performance cannot be hurt by removing it, and that made every previous attempt fail in the end.

                                                E.g. https://www.artima.com/weblogs/viewpost.jsp?thread=214235

                                                • pansa2 7 hours ago

                                                  Did this attempt manage to preserve single-threaded performance, or was the requirement dropped?

                                                  • rightbyte 7 hours ago

                                                    He folded.

                                                    The patches dropped some unrelated dead weight such that the effect is not as bad.

                                                    • klranh 3 hours ago

                                                      The effect is still a 20-50% slowdown for single-threaded code, even with the purported unrelated speedups meant to make the feature more palatable.

                                                      That is absolutely in the range of previous attempts, which were rejected! The difference here is that it goes in now to gratify Facebook.

                                              • bilsbie an hour ago

                                                This sounds silly but I’ve actually turned off garbage collection in short running, small memory programs and gotten a big speed boost.

                                                I wonder if that’s something they could automate? I’m sure there are some weird risks with that. Maybe a small program ends up eating all your memory in some edge case?
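
                                                 A minimal sketch of the manual version of that trick (the workload is invented): refcounting still reclaims most objects immediately, and only reference cycles leak, which a short-lived program never notices.

```python
# Hedged sketch: disable the cyclic collector for a short, small-memory
# run to avoid GC pauses during allocation-heavy work.
import gc

def build_records(n):
    # Allocation-heavy work that would normally trigger many GC passes.
    return [{"id": i, "payload": list(range(50))} for i in range(n)]

gc.disable()  # skip cycle collection for the rest of this run
data = build_records(10_000)
```

                                                 The risk mentioned above is real: a long-running program that creates reference cycles would grow without bound with the collector off.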

                                                • Arch-TK an hour ago

                                                  Reliable implementation most likely involves solving the halting problem.

                                                  • jordan_bonecut an hour ago

                                                    Disagree, there are practical trivial subsets of the halting problem to which I imagine many short-running scripts would conform.

                                                  • tln 28 minutes ago

                                                    Turn on GC after the first sbrk

                                                    • theossuary an hour ago

                                                      You'd run into the halting problem. Maybe for some small subset of programs it'd be possible to prove a short runtime, but in general it wouldn't be, so this type of automation wouldn't be possible.

                                                      It sounds like maybe you want GCs to be very tunable? That way, developers and operators can change how it runs for a given workload. That's actually one of the (few) reasons I like Java, its ability to monitor and tune its GC is awesome.

                                                      No one GC is going to be optimal for all workloads/usages. But it seems like the prevailing thought is to change the code to suit the GC where absolutely necessary, instead of tuning the GC to the workload. I'm not sure why that is?

                                                    • sandos 7 hours ago

                                                      So is it really impossible to optimize the no-GIL case further? 20% slower sounds like a lot.

                                                      • Ralfp 6 hours ago

                                                        Where does it say that? It's simply that Python releases features in yearly cycles, and that's what was completed in time for this release.

                                                        The idea is to let people experiment with no-GIL to see what it breaks, while maintainers and outside contractors improve the performance in future versions.

                                                        • klranh 3 hours ago

                                                          No, that was not the idea. The feature went in under the assumption that the single-thread slowdown would be offset by other minor speed improvements.

                                                          That was literally the official reason it was accepted. Now we have slowdowns ranging from 20-50% compared to Python 3.9.

                                                          What outside contractors would fix the issue? The Python ruling class has chased away most people who actually have a clue about the Python C-API, who are now replaced by people pretending to understand the C-API and generating billable hours.

                                                      • djoldman 3 hours ago

                                                        Slightly off topic: does anyone have a link to recent work done toward automatic parallelization?

                                                        (Write single threaded code and have a compiler create multithreaded code)

                                                        https://en.m.wikipedia.org/wiki/Automatic_parallelization_to...

                                                        • Vecr 2 hours ago

                                                          Rust's Rayon could probably have a mode with additional heuristics, so it wouldn't multi-thread if it guessed the memory usage increase or overhead would be too much.

                                                          Not really automatic, but for some iterator chains you can just slap an adapter on.

                                                          Currently you have to think and benchmark, but for some scripting type applications the increased overhead of the heuristics might be justified, as long as it was a new mode.

                                                        • bilsbie an hour ago

                                                          I wonder if they could now add a way for the interpreter to automatically find instances where it could run your code in parallel?

                                                          You’d think certain patterns could be probably safe and the interpreter could take the initiative.

                                                          Is there a term for this concept?

                                                          • bmitc 43 minutes ago

                                                            No way that's possible or even desirable with Python's OOP and mutable nature and scoping structure.

                                                          • William_BB 5 hours ago

                                                            Can someone explain this part:

                                                            > What happens if multiple threads try to access / edit the same object at the same time? Imagine one thread is trying to add to a dict while another is trying to read from it. There are two options here

                                                            Why not just ignore this, like C and C++ do? Worst case it's a data race; best case the programmer either adds a lock or writes a thread-safe dict themselves. What am I missing here?

                                                            • oconnor663 7 minutes ago

                                                              It's harder to ignore the problem in Python, because reference counting turns every read into a write, incrementing and then decrementing the refcount of the object you read. For example, calling "my_mutex.lock()" has already messed with the refcount on my_mutex before any locking happens. If races between threads could corrupt those refcounts, there's no way you could code around that. Right now the GIL protects those refcounts, so without a GIL you need a big change to make them all atomic.
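
                                                              A tiny illustration of that point, assuming a standard CPython build: merely aliasing an object bumps its refcount, so even "read-only" threads write to shared refcount fields.

```python
# Hedged demo: a plain read (creating an alias) changes the object's
# refcount, which is why refcount updates must be thread-safe.
import sys

obj = object()
before = sys.getrefcount(obj)  # getrefcount itself holds a temporary ref
alias = obj                    # "reading" obj creates one more reference
after = sys.getrefcount(obj)
```

                                                              The free-threaded build makes these updates atomic (with tricks like biased reference counting to keep the cost down).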

                                                              • 0xFEE1DEAD 4 hours ago

                                                                Let me preface this by saying I have no source to prove what I’m about to say, but Guido van Rossum aimed to create a programming language that feels more like a natural language without being just a toy language. He envisioned a real programming language that could be used by non-programmers, and for this to work it couldn’t contain the usual footguns.

                                                                One could argue that he succeeded, considering how many members of the scientific community, who don’t primarily see themselves as programmers, use Python.

                                                                • William_BB 3 hours ago

                                                                  All the answers were good, but I think this explained it the best. Thank you

                                                                • Numerlor 4 hours ago

                                                                  There is an expectation of not dealing with data races in Python code. Apart from Python being a language where people expect these things not to be an issue, it is also the behaviour with the GIL in place, so changing it would be a breaking change.

                                                                  • tliltocatl 4 hours ago

                                                                    Memory safety, heap integrity, and GC correctness, I guess. If you ignore data races the way C does, your language will be as unsafe as C, except it's worse, because at least C doesn't silently rearrange your heap in the background.

                                                                    • fulafel 3 hours ago

                                                                      We don't want Python programs to become the same kind of security swiss cheese as C/C++ programs.

                                                                      • LudwigNagasena 4 hours ago

                                                                        Why not just use C and C++ at that point? People use Python because they don't want to manage data races or re-implement a hash table.

                                                                        • William_BB 4 hours ago

                                                                          This was not a gotcha, sorry if it came across that way. It was a genuine question

                                                                      • zahsdga 5 hours ago

                                                                        The performance degradation with nogil is quoted as 20%. It can easily be as much as 50%.

                                                                        The JIT does not seem to help much. All in all a very disappointing release that may be a reflection of the social and corporate issues in CPython.

                                                                        A couple of people have discovered that they can milk CPython by promising features, silencing those who are not 100% enthusiastic and then underdeliver. Marketing takes care of the rest.

                                                                        • nmstoker 5 hours ago

                                                                          Could you clarify what/who you mean in the final sentence? It gives the impression you didn't take on board the article's mention of the three phases.

                                                                          • jononor 4 hours ago

                                                                            Why are you disappointed? Do you think the progress should be faster? That this work should never have been started? Or that they should have waited until it works better before integrating it into master and shipping it in a mainline release?

                                                                          • v3ss0n 7 hours ago

                                                                            An experimental JIT, while there is a totally stable, 4-100x faster, almost 100% compatible PyPy that already exists; all they needed to do was adopt it, but they didn't, due to some politics.

                                                                            Yeah, right..

                                                                            • qeternity 4 hours ago

                                                                              Python allows a lot of paradigms that are really difficult to JIT. I have personally seen many instances where PyPy is actually slower than CPython for a basic web app because of this.

                                                                              • v3ss0n an hour ago

                                                                                It is only slow when you use C extensions, and basic web apps don't use C extensions. If you are going to make such a claim, please provide evidence. We have a chat server in production written for PyPy with Tornado that sped up 20x over the CPython version, and we benchmarked it against a Go-based WebSocket server with channels and against Node.js: the PyPy implementation is 30% faster than Node and 10-20% faster than Go. A big plus is that the PyPy version drops memory usage by 80%: the CPython version uses around 200 MB of RAM, the PyPy version only about 40 MB.

                                                                                Under heavy load (a 10k-concurrent test) the PyPy and Go versions are stable, but the Node version sometimes stops responding and packet loss occurs.

                                                                              • ksec 6 hours ago

                                                                                I don't follow Python closely. So why isn't PyPy adopted?

                                                                                • v3ss0n an hour ago

                                                                                  The main reason GvR gave was "maybe this is the future of Python, but not now". At that time PyPy was already stable on 2.7.x, but then the Python 2/3 split happened: the PyPy team had to rewrite everything for Python 3, and it took them a while to catch up. I managed to talk with one ex-PyPy developer, who said the main reason GvR and the Python community don't promote PyPy much is NIH - not invented here - since PyPy is developed by a separate team of full-time computer scientists, researchers and PhDs, not by Guido's community. https://pypy.org/people.html

                                                                                  One problem with PyPy's popularity is that they don't do any advertisement or promotion, which I criticized in their community - they did finally move to GitHub.

                                                                                  The only other problem is CPython extensions, which are compatible but a little bit slower than on CPython - that's the only pain point we have, and it could be solved if there were more contributors and users on PyPy. Really, a Python written in Python should be the main implementation.

                                                                                  • dagw 5 hours ago

                                                                                    The biggest reason is that it isn't compatible with the Python C API. So any library that uses the C API would have to be rewritten.

                                                                                    • tekknolagi 5 hours ago

                                                                                      They have gotten much better about that recently and it's much much more compatible now.

                                                                                  • jononor 4 hours ago

                                                                                    Thankfully everyone is free to use PyPy, if they consider it the better solution!

                                                                                    • v3ss0n 13 minutes ago

                                                                                      Splitting efforts isn't good. PyPy is a decade-long effort at making a JIT for Python, and it was a big success. But it doesn't get the praise of the Python community, which keeps producing NIH solutions again and again (first Google tried and failed with Unladen Swallow; now the results of this experiment don't sound good either).

                                                                                      Why ignore millions of dollars and a decade of full-time PhD researchers' work, and do your own thing?

                                                                                      Yeah, NIH is a helluva drug.

                                                                                  • geor9e an hour ago

                                                                                    Is 3.13 bigger than 3.9?

                                                                                    • wruza 6 hours ago

                                                                                      It’s worth mentioning that there is a bit more overhead in using multiple processes as opposed to multiple threads, in addition to it being more difficult to share data.

                                                                                      There’s probably a whole generation of programmers (if not two) who don’t know the feeling of shooting yourself in the foot with multithreading. You spend a month on a prototype, then some more hacking it all together for semi-real-world situations, polishing the edges, etc. And then it falls flat on day 1 due to unexpected races. Not a bad thing in itself - transferrable experience is always valuable, and don’t worry, this one is: there are enough ecosystems where it’s not “difficult to share data”.
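                                                                                      The foot-gun above fits in a few lines. A minimal sketch (names illustrative) of an unsynchronized counter race and its locked fix:

```python
import threading

counter = 0
lock = threading.Lock()

def bump_unsafe(n):
    global counter
    for _ in range(n):
        counter += 1  # load, add, store: threads can interleave here and lose updates

def bump_safe(n):
    global counter
    for _ in range(n):
        with lock:  # serialize the read-modify-write
            counter += 1

def run(worker, n_threads=4, n=50_000):
    global counter
    counter = 0
    threads = [threading.Thread(target=worker, args=(n,)) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print(run(bump_safe))    # always 200000
print(run(bump_unsafe))  # can come up short: the classic day-1 race
```

                                                                                      With separate processes this race on shared Python objects simply cannot happen, which is the trade-off described above.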

                                                                                      • jononor 4 hours ago

                                                                                        This. Multi-threading is very prone to nasty, hard-to-reproduce bugs if used liberally in a codebase. It really should be used with care, and compartmentalized to areas where it demonstrably brings critical performance improvements.

                                                                                        • neonsunset 3 hours ago

                                                                                          Let’s not tout the abysmal state of multi-tasking in Python as an advantage.

                                                                                          Somehow, it’s perfectly fine in C#, Java, now Go, Rust and many other languages, with relatively low frequency of defects.

                                                                                          • jononor 3 hours ago

                                                                                            That is not at all what I did. But there are several languages/ecosystems which are very prone to such issues - notably C and C++. It is critically important (in my opinion) that Python does not regress to anywhere near that state.

                                                                                            This might seem like an invalid concern - after all, Python is a high-level, memory-safe language, right? The problem is that extensions (mostly in C or C++) are very common, and they rely on particular semantics - which are now changing. A vast number of Python programs make use of such extensions, probably the majority (even excluding the standard library). Some of the biggest challenges around nogil and multi-threading in Python are around extension/C-related stuff. It was notably also one of the main challenges faced by PyPy. So maybe it's actually a little bit tricky to get right - and not just the developers being stupid or lazy ;) I mean, in addition to the usual general trickiness of major changes to the core of an interpreter relied on by thousands of companies/projects, in a wide range of fields, built over decades, with varying levels of quality and maintenance...

                                                                                            • neonsunset 2 hours ago

                                                                                              Right, I should have phrased it better - did not intend to make it sound like criticism of your reply. Was just aiming at the tendency to dismiss valid concerns with "it's actually a good thing we don't have it" or "it can't be done well".

                                                                                              Of course changing the concurrency guarantees the code relies on and makes assumptions about is one of the most breaking changes that can be made to a language, with very unpleasant failure modes.

                                                                                              • jononor 2 hours ago

                                                                                                Understood. There has been some amount of that in the past. And probably this kind of pushback will rise up again as the work starts to materialize. I think some are a bit fearful of the potential bad consequences - and it remains unclear which will materialize and which will be non-issues. And of course some have other things they wish to see improved instead, cause they are satisfied with current state. Actually many will be quite happy with (or at least have accepted) the current state - cause those that were/are not probably do not use Python much!

                                                                                                What I see from the development team and close community so far has been quite trust building for me. Slow and steady, gradual integration and testing with feature flags, improving related areas in preparation (like better/simplified C APIs), etc.

                                                                                      • mg74 5 hours ago

                                                                                        The number one thing I wish were addressed in a future version of Python is the semantically significant whitespace.

                                                                                        Python is absolutely the worst language to work in with respect to code formatters. In any other language I can write my code, pressing enter or skipping enter however I want, and then the auto formatter just fixes it and makes it look like normal code. But in python, a forgotten space or an extra space, and it just gives up.

                                                                                        It wouldn't even take much - just add an "end" keyword, and the LSPs could take care of the rest.

                                                                                        GIL and JIT are nice, but please give me end.

                                                                                        • dbrueck 2 hours ago

                                                                                          Whitespace is semantically significant in nearly all modern programming languages, the difference is that with Python it is completely significant for both the humans and the tools - it is syntactically significant.

                                                                                          I've actively used Python for a quarter of a century (solo, with small teams, with large teams, and with whole dev orgs) and the number of times that not having redundant block delimiters has caused problems is vanishingly small and, interestingly, is on par with the number of times I've had problems with redundant block delimiters getting out of sync, i.e. some variation of

                                                                                            if (a > b)
                                                                                              i++;
                                                                                              j++;
                                                                                          
                                                                                          Anytime I switch from Python to another language for a while, one of the first annoying things is the need to add braces everywhere, and it really rubs because you are reminded how unnecessary they are.

                                                                                          Anyway, you can always write #end if you'd like. ;-)
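                                                                                          For contrast, a minimal sketch of the same shape in Python, where indentation itself is the block structure and an end marker is just a comment:

```python
# The C bug above cannot arise here: the unindented statement is
# unambiguously outside the if.
a, b, i, j = 2, 1, 0, 0
if a > b:
    i += 1
j += 1  # visibly not guarded by the if
print(i, j)  # 1 1

# And as noted, nothing stops an explicit end marker as a comment:
if a > b:
    i += 1
# end if
```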

                                                                                          • alfiopuglisi 5 hours ago

                                                                                            The day your wish is fulfilled is the day I stop working with Python. I can't stand all those useless braces everywhere; why are they there at all, since good practice mandates proper indentation anyway?

                                                                                            I am at the point where I prefer single quotes for strings instead of double quotes, just because they feel cleaner. And unfortunately PEP 8 sometimes mandates double quotes, for reasons unknown.

                                                                                            • eviks 39 minutes ago

                                                                                              Single quotes are also easier to type on the default layout, no Shift

                                                                                              • mg74 5 hours ago

                                                                                                No need for braces. Just add "end" to mark the end of a block, matching the existing block-starting keyword ":".

                                                                                                • pansa2 3 hours ago

                                                                                                  A while ago, when thinking about syntax design for a new language, I considered this combination (`:` and `end`, as opposed to `do` and `end` as used by Lua etc).

                                                                                                  Are there any languages that use it, or is Python unique in using `:` to begin a block?

                                                                                              • stavros 5 hours ago

                                                                                                I don't have this problem, and I've been writing Python for more than twenty years. Sure, I may have the occasional wrong space somewhere, but it's maybe a few times a month, whereas I'd otherwise have to type "end" for every single block.

                                                                                                • mg74 5 hours ago

                                                                                                  I don't think this is a problem anymore in today's world of LSPs and autoformatters. I almost never have to type "end" in Elixir, for instance; it is always autocompleted for me.

                                                                                                  • stavros 3 hours ago

                                                                                                    How does it know when to end a block?

                                                                                                • pansa2 5 hours ago

                                                                                                      >>> from __future__ import braces
                                                                                                      SyntaxError: not a chance
                                                                                                  • mg74 5 hours ago

                                                                                                    Thank you, but I'd rather not inject a tool that hasn't been updated in 6 years into my build chain. That's how we do things in the JavaScript world, and frankly it sucks.

                                                                                                    • benediktwerner 4 hours ago

                                                                                                      This is a joke that's actually built into Python. The __future__ module is where you can import/enable features that will become the default in future versions. The point it makes by raising "SyntaxError: not a chance" is that Python will never add braces.

                                                                                                      And IMO for good reason. It makes the code so much cleaner and it's not like it particularly takes effort to indent your code correctly, especially since any moderately competent editor will basically do it for you. Tbh I actually find it much less effort than typing braces.

                                                                                                  • marliechiller 5 hours ago

                                                                                                    This suggestion gives me chills. I literally never face this issue. Are you using vim? There's autoindent and smartindent features you can enable to help you.

                                                                                                    • mg74 5 hours ago

                                                                                                      Neovim + ruff lsp. I have gone through many formatters to try and get this better, but it is always worse than any other language where whitespace is not semantic.

                                                                                                      • zo1 an hour ago

                                                                                                        The one nice perk about the state of things at the moment in Python is that I can very easily filter out devs by their choice of Python IDE (or lack thereof).

                                                                                                  • punnerud 8 hours ago

                                                                                                    For those saying they will never use Python because it's slow: is the probability high that their "fast" language is not thread-safe?

                                                                                                    • bschwindHN an hour ago

                                                                                                      With Rust, code runs at native speed, multithreading is easy and safe, and the package manager doesn't suck. I will never use Python if I can help it, but not just because it's slow.

                                                                                                      • Quothling 7 hours ago

                                                                                                        How many teams actually decide against using Python because it's "slow"? I'll personally never really get why people prefer interpreted languages in the age of Go, but even if we go back a few years, you would just build the computation-heavy parts of your Python in C, just like you would with your C#, your Java, or whatever you use, when it was required. Considering Instagram largely ran/runs their back-end on Python, I'd argue that you can run whatever you want on Python.

                                                                                                        Maybe I've just lived a sheltered life, but I've never heard speed being used as a serious argument against Python. Well, maybe on silly discussions where someone really disliked Python, but anyone who actually cares about efficiency is using C.
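                                                                                                        The "heavy parts in C" pattern is visible even inside the standard library: built-ins implemented in C routinely beat equivalent pure-Python loops. A rough sketch (timings vary by machine):

```python
import time

def py_sum(n):
    total = 0
    for i in range(n):  # interpreted bytecode loop
        total += i
    return total

n = 1_000_000
t0 = time.perf_counter(); a = py_sum(n); t1 = time.perf_counter()
t2 = time.perf_counter(); b = sum(range(n)); t3 = time.perf_counter()  # loop runs in C inside CPython
assert a == b  # same result, very different cost
print(f"pure Python: {t1 - t0:.4f}s  built-in C sum: {t3 - t2:.4f}s")
```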

                                                                                                        • wongarsu 6 hours ago

                                                                                                          You occasionally hear stories about teams writing something in Python, then rewriting it in another language because Python turned out to be slow. I have one such story.

                                                                                                          With the excellent Python/Rust interop, there is now another great alternative to rewriting the heavy parts in C. But sometimes the performance-sensitive part spans most of your program.

                                                                                                          • tgv 4 hours ago

                                                                                                            > How many teams actually decide against using Python because it's "slow"?

                                                                                                            At least mine. Also because of the typing. It's probably improved, but I remember being very disappointed a few years ago when the bloody thing wouldn't correctly infer the type of zip(). And that's ignoring the things that'll violate the specified type when you interface with the outside world (APIs, databases).
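                                                                                                            For what it's worth, a quick sketch of the zip case as it stands today (inference behavior assumed for recent mypy/pyright, which report `list[tuple[str, int]]` here):

```python
# Runtime behavior shown; built-in generics in annotations need Python 3.9+.
names: list[str] = ["alice", "bob"]
nums: list[int] = [1, 2]

pairs = list(zip(names, nums))  # recent checkers infer list[tuple[str, int]]
print(pairs)  # [('alice', 1), ('bob', 2)]
```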

                                                                                                            > anyone who actually cares about efficiency is using C.

                                                                                                            Python is so much slower than e.g. Go, Java, C#, etc. There's no need to use C to get better performance. It's also very memory-hungry, certainly in comparison to Go.

                                                                                                            • neonsunset 3 hours ago

                                                                                                              Except “like you would do with your C#, your Java…” does not happen w.r.t. native components - you just write faster code, and in 98% of situations it's more than enough. Now, Java and C# differ from each other when it comes to reaching the top end of performance (Java has a better baseline, but C# can compete with C++ when optimized), but we're talking about a level far above the performance ceiling of Go.

                                                                                                              • imtringued 4 hours ago

                                                                                                                Python is a lot like PHP. A uniquely bad value proposition in almost all aspects.

                                                                                                                It is a slow interpreted language, but that isn't the only argument against it.

                                                                                                                It has abandoned backwards compatibility in the past, and there are still annoying people harassing you with obsolete Python versions.

                                                                                                                The language and syntax still lean heavily toward imperative/procedural code styles, and things like lambdas are third-class citizens syntax-wise.

                                                                                                                The strong reliance on C based extensions make CPython the only implementation that sees any usage.

                                                                                                                CPython is a pain to deploy cross-platform, because you also need a C compiler for every target platform.

                                                                                                                The concept behind venv is another uniquely bad design decision. By default, Python does the wrong thing, and you have to go out of your way and learn a new tool to avoid messing up your system.

                                                                                                                Then there are the countless half-baked community attempts to fix Python's problems. Half-baked because they randomly stop short of solving one crucial aspect, which gives room for dozens of other opportunistic developers to work on another incomplete solution.

                                                                                                                It was always a mystery to me that there are people who would voluntarily subject themselves to python.

                                                                                                              • continuational 8 hours ago

                                                                                                                Python isn't "thread safe", not even its lists are. What are you getting at?
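                                                                                                                  For illustration, the nuance is that in CPython (as an implementation detail of the GIL, not a language guarantee) individual list operations are atomic, while compound read-modify-write operations are not. A minimal sketch of the safe half:

```python
import threading

# list.append is a single operation on the list under the GIL, so no elements
# are lost here; compound updates like `lst[i] += 1` would NOT be safe.
lst = []

def appender(n):
    for i in range(n):
        lst.append(i)

threads = [threading.Thread(target=appender, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(lst))  # 40000: nothing lost, though the ordering is interleaved
```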

                                                                                                                • meindnoch 7 hours ago

                                                                                                                  Python is not thread safe.

                                                                                                                  • guenthert 4 hours ago

                                                                                                                    Most of us use Python, just not for stuff which needs to be fast.

                                                                                                                    Not sure, what you mean by thread-safe language, but one of the nice things about Java is that it made (safe) multi-threading comparatively easy.

                                                                                                                    • IshKebab 8 hours ago

                                                                                                                      What do you mean by thread safe exactly? Instead of Python I would use Typescript or Rust or possibly Go.