Leaving Meta and PyTorch (soumith.ch)
Submitted by saikatsg 10 hours ago
  • cs702 13 minutes ago

    Many of the comments here are judging PyTorch in hindsight, which is unfair.

    When Soumith Chintala co-launched PyTorch, and for many years after, the alternatives for fast, interactive, convenient development were much worse. There was no Jax.

    Every single AI researcher I know, including me, who tried PyTorch back then immediately wanted to switch to it, because it was so much better. Andrej Karpathy described what PyTorch felt like back then when he tweeted, in May 2017, "I've been using PyTorch a few months now and I've never felt better. I have more energy. My skin is clearer. My eyesight has improved."[a]

    THANK YOU SOUMITH for your hard work over all these years! Your hard work has made a difference for a huge number of people, including many of us here on HN.

    We wish you success in your future endeavors, whatever they turn out to be!

    Please ignore all the petty criticism.

    ---

    [a] https://x.com/karpathy/status/868178954032513024

    • utopiah 8 hours ago

      What I find most interesting about this is that it shows they believe there is nothing unique at Meta related to AI: no resource, whether people or computing power, that they can't get elsewhere for whatever they believe would be more interesting to them.

      I mention this because it feels analogous to military research, where people "dream" of how advanced the military is, how far ahead it is of public research... and yet, it seems to be a recurring myth they love to sustain.

      So the signal I get here is that AI "labs" in Big Tech have nothing worth waiting for around the corner; it's just more of the same, and boring for the people who stick around.

      • GuB-42 an hour ago

        About the military, from my limited experience, they are significantly behind the civilian state of the art, except for technology that has few applications outside of the military, like stealth.

        In fact everything secret tends to be behind. Secrecy is a huge burden, and seriously limits all forms of collaboration.

        In addition, because military projects are often big and highly politicized, you get all the inefficiencies that go with that. Classification is also convenient for hiding screwups and corruption.

        • dmix 26 minutes ago

          I just assume all government software is poorly written by huge consulting companies, like the famous FBI one https://en.wikipedia.org/wiki/Virtual_Case_File

          > a 318-page report [...] said the SAIC software was incomplete, inadequate and so poorly designed that it would be essentially unusable under real-world conditions. Even in rudimentary tests, the system did not comply with basic requirements

          I figured the reason Palantir was so successful was because it was a SV software company instead of a defense contractor dabbling in IT or specialized government consultancy.

          • FuriouslyAdrift 31 minutes ago

            Post Cold War, most militaries shifted to COTS and less boutique development. Turns out, you only need to put resources in a few places to stay ahead (stealth, sensing and measuring, space, hypersonics, drones, etc).

            It's MUCH cheaper and quicker.

            • shadowgovt 18 minutes ago

              The military doesn't have the luxury of things being unreliable. It puts a pressure on them that corporations don't necessarily have: they'd rather have a less-effective but proven system than a potentially-more-effective but riskier system (especially since each system they have comes with massive logistics support).

              Ironically, corporations can afford to take more risks of failure (financially and project-wise) than militaries because failure for them doesn't mean actual human death (and when it can, you see processes come in that look a lot more like military processes).

            • oxfordmale 3 hours ago

              I think you might be reading a bit too much into this.

              He’s been with Meta for 11 years and is likely in a very comfortable financial position, given the substantial stock options he’s received over that time.

              He also mentioned the arrival of a new child, and it’s well known that Meta's work-life balance isn’t always ideal.

              On top of that, Meta, like many major tech companies, has been shifting its focus toward LLM-based AI, moving away from more traditional PyTorch use cases.

              Considering all of this, it seems like a natural time for him to move on and pursue new, more exciting opportunities.

              • Anon1096 30 minutes ago

                > On top of that, Meta, like many major tech companies, has been shifting its focus toward LLM-based AI, moving away from more traditional PyTorch use cases.

                This is very wrong. Meta is on the forefront of recommendation algorithms and that's all done with traditional ML models made using PyTorch.

                • ralusek an hour ago

                  > toward LLM-based AI, moving away from more traditional PyTorch use cases

                  Wait, are LLMs not built with PyTorch?

                  • gordonhart an hour ago

                    GP is likely saying that “building with AI” these days is mostly prompting pretrained models rather than training your own (using PyTorch).

                    • SV_BubbleTime an hour ago

                      Everyone is fine-tuning constantly, though. Training an entire model in excess of a few billion parameters is pretty much on nobody's personal radar; you have a handful of well-funded groups using PyTorch to do that. The masses are still using PyTorch, just on small training jobs.

                      Building AI, and building with AI.

                      • gordonhart an hour ago

                        Fine-tuning is great for known, concrete use cases where you have the data in hand already, but how much of the industry does that actually cover? Managers have hated those use cases since the beginning of the deep learning era — huge upfront cost for data collection, high latency cycles for training and validation, slow reaction speed to new requirements and conditions.

                    • pseudocomposer an hour ago

                      Llama and Candle are a lot more modern for these things than PyTorch/libtorch, though libtorch is still the de-facto standard.

                  • HarHarVeryFunny 3 hours ago

                    > What I find most interesting with this is that it shows they believe there is nothing unique at Meta related to AI

                    Whether or not this is the case, I don't get this as being the reason for Soumith leaving - it sounds as if he is just ready for a change.

                    Still, it is noticeable that, with many of the AI companies claiming their version of "AGI" is just around the corner, developers and staff don't appear to be particularly excited about it (I assume they realize it is just hype, not some momentous advance around the corner), and leave to pursue different things: Mira Murati starting a fine-tuning company, Karpathy going back to education, others switching ship (typically from OpenAI to Anthropic), etc.

                    • moron4hire 3 hours ago

                      "Ready for change" is just the polite way to say, "I can't stand it here anymore. I'd rather roll the dice on a new place because reversion-to-mean means it's probably going to be better than whatever this has become."

                      There are a lot of things I don't like about my current job, but not enough for it to make sense to gamble on a new place. It's easier to push for change from my current position than to count on any new place being any better.

                      But if it gets worse and I do leave, I'll definitely be telling the interviewer, "I was just ready for a change."

                      • assemblyman 35 minutes ago

                        On the other hand, while I know nothing about Soumith, he clearly has enough financial runway (see my calc below) to not have to work again.

                        As far as I know, we all get one life. If one can help it (modulo other constraints), one should not get trapped by prestige, achievement, short-term admiration by others, impact, and other external-facing factors. To see an alternate reality, it helps to escape the bubble, for example by spending time in a completely different culture or environment where no one knows or cares about what one did.

                        I admire people taking such decisions. It's easy to be on autopilot in life. People who wear their success lightly are rare, but more philosophically aware, in my opinion at least. I wish him good luck!

                        • embedding-shape 2 hours ago

                          > is just the polite way to say

                          Can be*, that's not necessarily always true. I've quit jobs plenty of times without having any plan for the future or any particular dramatic reason for leaving, just "It's not as fun here anymore, despite this being a great place to work". I'm sure I'm not the only one who does so.

                          What I've never done, though, is leave a place without being 100% honest about exactly why I'm leaving. I won't say "I was just ready for a change" if that wasn't the reason; I have no reason not to be honest about why I'm leaving.

                          • ghaff 40 minutes ago

                              I've generally had 10+ year tenures, other than a return to school that was basically always in my plan, and the dot-bomb (leaving a company I wasn't really a fit with anyway). But, yeah, I've always been ready to move on at about that ten-year point, which is actually fairly long by a lot of people's standards in the tech industry.

                              I do disagree, though: unless there's some actionable change that would specifically benefit you, like more money, my answer outside of private conversations with people I know well is going to be some variant of "time for a change." Anything else just invites arguments and conversations I don't want to have.

                          • pelagicAustral 2 hours ago

                            I think age plays an important part in the decision to move away from a place. I think in your 20s or very early 30s you have far more leeway to kind of go away and start again, but a lot of the hope to actually be able to find that unicorn workplace fades away as you approach your late 30s. Once into your 40s, depending on your trade, you're dead on arrival unless you successfully manage to rebrand yourself as a consultant, whatever the fuck that means.

                            • aprilthird2021 2 hours ago

                                I do want to push back on this a little. People leave all the time for this "I wanna see what else is out there" reason, especially at such senior levels and with as much financial security as he inevitably has from working at Meta for 11 years. It is not always a gamble for many of them, and many of them are not so skeptical and cynical about the other places they could go and bring their expertise to.

                          • rtpg 7 hours ago

                            I don't think that's the read? Guy says he wants to work on something small. If you want to work on something big you probably want to be in a big corp to have the resources to do the big thing.

                            Also absolutely unknown if the "new thing" is AI-related at all!

                            • embedding-shape 3 hours ago

                              > If you want to work on something big you probably want to be in a big corp to have the resources to do the big thing.

                              If anything, the reverse seems to be true: if you want to work on something big, you want to be in a small company, sufficiently funded and filled with great people, yet not "big". That's when "something big" seems more likely to happen.

                              In contrast, as far as I can tell, the bigger a company gets, the less likely it is to actually come up with "something big". It seems like most of the time you need (creative) constraints for the results to end up actually innovative; otherwise you end up like IBM and Meta, throwing money at stuff and getting some results, but nothing really out of the ordinary considering what's happening elsewhere in their ecosystems.

                              • utopiah 7 hours ago

                                Well, he left, so whatever is coming next, AI-related or not, "small" or not (small for him might be reaching just a million people; he wrote that he "lead the software layer that powers the entire AI industry", so his notion of scale is probably unlike mine, maybe yours too), is more exciting to him than whatever he could do next with all of Meta's resources.

                                Edit: to be clear, I didn't mean to imply their next thing is AI related, solely that they obviously know more about AI at Meta than e.g. XR at Meta, just because that's their expertise.

                                • hombre_fatal 3 hours ago

                                  Your assumption is a bad read because it only works if his set of life priorities contains nothing else but maximizing his impact in the world of AI.

                                  If he has just one other priority in that set (which could still include a robotic min/max of AI impact), then your assumption fails.

                                • radicalbyte 4 hours ago

                                  It reads to me as if he was the victim of office politics and decided to say "fuck it" instead of being transferred to something else within Meta.

                                  • sheepscreek 2 hours ago

                                    Pretty crazy/bizarre that a VP/Fellow engineer would have so little say in what they do at Meta. In my mind, companies would do everything possible to retain them. They are a special and rare breed.

                                    • disgruntledphd2 4 hours ago

                                      > It reads to me as if he was the victim of office politics and decided to say "fuck it" instead of being transferred to something else within Meta.

                                      It looks like he'd already been transferred once (to Infra) and maybe didn't want to do it again.

                                  • ErroneousBosh 3 hours ago

                                    > where people "dream" of how advanced the military is

                                    If you've ever worked on "advanced military grade" equipment, you'd know better.

                                    It tends to be what you'd euphemistically call "well-proven technology", built down to a price by the lowest bidder, by comparatively unskilled labour.

                                    The most shocking thing about the "captured" Russian drones is they use name-brand Raspberry Pis inside. I'm prepared to bet the American versions use whatever AliExpress crap is on special this week. The UK stuff definitely does.

                                    • embedding-shape 3 hours ago

                                      Isn't that exactly the point parent was trying to make? Maybe I misunderstood their comment, but it seems like you're repeating what they said.

                                      • ErroneousBosh 3 hours ago

                                        Post cup-of-tea (not NATO-spec, just black thanks, tea with just tea in it) I realise you're correct.

                                        • utopiah 3 hours ago

                                          You read it right, I think they agree. Maybe when I wrote "dream" in quotes the irony was lost.

                                        • esseph an hour ago

                                          I mean, these things do exist. There are always tons of big and small tech projects floating around in the special operations community. Cutting-edge sets of hybrid night/thermal vision. Classified helicopters. Hand-built rifles with custom cartridges. Classified medical tech. Advanced fixed-wing aircraft with unique capabilities. Advanced dive gear. So on.

                                          "Big Army" doesn't see that stuff for decades, if ever, and mostly never due to cost. And I'm not even getting into classified submarine and nuclear tech, fixed wing drones and aircraft flying at night out of known test facilities, etc.

                                          There's tons of actually advanced tech out there in military circles.

                                        • nrjames 4 hours ago

                                          If you can afford to support yourself, which I'm sure he can, there's a serenity to working on small projects that are not in the public eye. It may simply be that he craves some quiet time that enables him to focus on his family and himself.

                                          • KaiserPro an hour ago

                                            I don't think that's what is being said.

                                            Having friends who are at or near both FAIR and other AI parts of Meta, resources are not the issue, anymore at least (there had been a massive squeeze for the last two years, though). But PyTorch and FAIR use(d) an AWS-based cluster. (PyTorch is used everywhere else inside Facebook, though. Well, not everywhere...)

                                            There is/are plenty of interesting things happening at big tech, and Meta specifically. If you like computer vision, then Meta is pretty much still the world leader. Much as it pains me to say it.

                                            • reactordev 6 hours ago

                                              Negative, what you should have taken away is that it's the people. He mentions standing up clusters. Small shops can't afford clusters. Ignore the technical aspect of this article and read it for what it is: a thank-you note to the people he has worked with on amazing projects. Research in a bubble of 1 isn't very useful. Research in a small team with a Meta budget is extremely useful. With the right people.

                                              • groundzeros2015 22 minutes ago

                                                Or he just takes for granted the resources he has.

                                                • jansan 7 hours ago

                                                  > I mention this because it feels analogous to military research, where people "dream" of how advanced the military is, how forward they are compared to public research... and yet, it seems to be a recurring myth they love to sustain.

                                                  I don't think that you can read this from the blog post at all, but it gives me a chuckle to think how the quest for AGI at Meta may be "The Men Who Stare at Goats" all over again.

                                                  • utopiah 7 hours ago

                                                    I'm totally speculating. I have no extra information there.

                                                    It just makes me think of all the staff, technical staff, that left OpenAI recently. Altman was making grand claims about what was coming next.

                                                    Well, we know what followed; namely, I don't think any researcher who left knowing what was in the pipeline feels like they missed much in terms of access.

                                                    • utopiah 7 hours ago

                                                      Just checked BTW and... the premise looks fun but the score is too low https://www.rottentomatoes.com/m/men_who_stare_at_goats was it actually good as a movie, not just the idea behind it?

                                                      • jansan 6 hours ago

                                                        It's more the idea behind it. Considering the great cast, the movie could have been much better.

                                                        • vintermann 6 hours ago

                                                          The non-fiction book behind it is probably a better comparison than the film adaptation, if you think Meta are doing goat-staring (I don't think they're especially bad on this issue compared to their rivals).

                                                  • jmward01 4 minutes ago

                                                    All I can say is 'thanks!'. It does take a team, and a community, but individuals matter. I use PyTorch daily; it has made it possible for me to play with ideas that I would otherwise have only dreamed of. It is a big part of my life, so thanks for your contributions and best of luck on the next thing!

                                                    • vintermann 9 hours ago

                                                      That man has an infectious enthusiasm. I remember the DCGAN paper inspired me to try getting the (Lua) Torch code to work, and I tried it on the Oxford flowers dataset early on. It worked surprisingly well, and Soumith Chintala even shared it around on social media, surprised at how well it worked on such a small dataset. Of course, back then we didn't really appreciate the problem of mode collapse.

                                                      Pytorch and old Lua Torch were a pleasure to work with compared to the contemporary Tensorflow. Lots of S.C's code was copied around liberally, it had its quirks (I remember the DCGAN code had a pretty odd way of doing parameter passing) but it was also really easy to understand and made random people like me feel like we had suddenly stumbled onto something crazy powerful (which we had!). It was wonderfully hackable.

                                                      • mmaunder 25 minutes ago

                                                        “What's next for me? Something small. Something new. Something I don't fully understand yet. Something uncomfortable. I could have moved to something else inside Meta. But I needed to know what's out there. I needed to do something small again. I couldn't live with the counterfactual regret of never trying something outside Meta.”

                                                        Shades of Siddhartha. Back to the forest.

                                                        • odyssey7 33 minutes ago

                                                          > It's taught in classrooms from MIT to rural India. The tools I dreamed about making accessible? They are. The barrier to entry I wanted to lower? It's almost gone.

                                                          I have an ironic sense that there are classrooms in rural India with better pedagogy and lower barriers to entry than some of our elite engineering programs.

                                                          • john01dav 23 minutes ago

                                                            Many elite engineering programs in the United States (I don't know if this is what you mean by "our") are elite solely due to social status (ranking outfits need to publish rankings that feel right, or they're ignored, and they accept bribes to rank specific programs) and research output, with little to do with quality of pedagogy. Instead, pedagogy is generally poor because the elite researchers usually view teaching as a chore, and many don't have any real skill in it either.

                                                          • qmatch 9 hours ago

                                                              As a loyal JAX user, I hope they can play catch-up. PyTorch has dominated the AI scene since TF1 fumbled the ball at the 10-yard line. What Matt Johnson has done turning Autograd into JAX is hopefully going to be worthy of as much praise as what Soumith has received.

                                                            • n_u 9 hours ago

                                                                > PyTorch has dominated the AI scene since TF1 fumbled the ball at the 10-yard line

                                                              can you explain why you think TensorFlow fumbled?

                                                              • probably_wrong 3 hours ago

                                                                I see good answers already, but here's a concrete example:

                                                                  In my University we had to decide between the two libraries so, as a test, we decided to write a language model from scratch. The first minor problem with TF was that (if memory serves me right) you were supposed to declare your network "backwards" - instead of saying "A -> B -> C" you had to declare "C(B(A))". The major problem, however, was that there was no way to add debug messages - either your network worked or it didn't. To make matters worse, the "official" TF tutorial on how to write a Seq2Seq model didn't compile because the library had changed, but the bug reports for that were met for years with "we are changing the API so we'll fix the example once we're done".

                                                                  PyTorch, by comparison, had the advantage of a Python-based interface - you simply defined classes like you always did (including debug statements!), connected them as variables, and that was that. So when I and my beginner colleagues had to decide which library to pick, "the one that's not a nightmare to debug" sounded much better than "the one that's more efficient if you have several billion training datapoints and a cluster". My colleagues and I then went on to become professionals, and we all brought PyTorch with us.
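
                                                                  To make that concrete, here is a minimal, made-up sketch of the PyTorch style being described (assuming torch is installed; the toy model is hypothetical, not our actual code):

                                                                      import torch
                                                                      import torch.nn as nn

                                                                      class TinyLM(nn.Module):  # hypothetical toy model
                                                                          def __init__(self, vocab=100, dim=32):
                                                                              super().__init__()
                                                                              self.embed = nn.Embedding(vocab, dim)
                                                                              self.rnn = nn.GRU(dim, dim, batch_first=True)
                                                                              self.head = nn.Linear(dim, vocab)

                                                                          def forward(self, tokens):
                                                                              x = self.embed(tokens)
                                                                              print("after embedding:", x.shape)  # ordinary print debugging, mid-network
                                                                              out, _ = self.rnn(x)
                                                                              return self.head(out)

                                                                      logits = TinyLM()(torch.randint(0, 100, (4, 16)))  # batch of 4 sequences of length 16
                                                                      print("logits:", logits.shape)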

                                                                • jszymborski 2 hours ago

                                                                    The inability to use print debugging to tell me the dimensions of my hidden states was 100% why TF was hard for me to use as a greenhorn MSc student.

                                                                  Another consequence of this was that PyTorch let you use regular old Python for logic flow.
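
                                                                    For example (a toy sketch, assuming torch is installed; the early-exit model is made up), a forward pass can branch and loop with ordinary Python, and autograd just follows whatever actually ran:

                                                                        import torch
                                                                        import torch.nn as nn

                                                                        class EarlyExit(nn.Module):  # hypothetical example, just to show control flow
                                                                            def __init__(self, dim=8, depth=6):
                                                                                super().__init__()
                                                                                self.layers = nn.ModuleList(nn.Linear(dim, dim) for _ in range(depth))

                                                                            def forward(self, x, budget):
                                                                                # plain Python for/if decide the graph, per call
                                                                                for i, layer in enumerate(self.layers):
                                                                                    if i >= budget:
                                                                                        break
                                                                                    x = torch.relu(layer(x))
                                                                                return x

                                                                        print(EarlyExit()(torch.randn(2, 8), budget=3).shape)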

                                                                • stared 3 hours ago

                                                                  In 2018, I co-wrote a blog post with the inflammatory title “Don’t use TensorFlow, try PyTorch instead” (https://news.ycombinator.com/item?id=17415321). As it gained traction here, it was changed to “Keras vs PyTorch” (some edgy things that work for a private blog are not good for a corporate one). Yet the initial title stuck, and you can see it resonated well with the crowd.

                                                                  TensorFlow (while a huge step on top of Theano) had issues with a strange API, mixing needlessly complex parts (even for the simplest layers) with magic-box-like optimization.

                                                                  There was Keras, which I liked and used before it was cool (when it still supported the Theano backend), and it was the right decision for TF to incorporate it as the default API. But it was 1–2 years too late.

                                                                  At the same time, I initially looked at PyTorch as some intern’s summer project porting from Lua to Python. I expected an imitation of the original Torch. Yet the more it developed, the better it was, with (at least to my mind) the perfect level of abstraction. On the one hand, you can easily add two tensors, as if it were NumPy (and print its values in Python, which was impossible with TF at that time). On the other hand, you can wrap anything (from just a simple operation to a huge network) in an nn.Module. So it offered this natural hierarchical approach to deep learning. It offered building blocks that can be easily created, composed, debugged, and reused. It offered a natural way of picking the abstraction level you want to work with, so it worked well for industry and experimentation with novel architectures.

                                                                  So, while in 2016–2017 I was using Keras as the go-to for deep learning (https://p.migdal.pl/blog/2017/04/teaching-deep-learning/), in 2018 I saw the light of PyTorch and didn’t feel a need to look back. In 2019, even for the intro, I used PyTorch (https://github.com/stared/thinking-in-tensors-writing-in-pyt...).

                                                                  • stared 3 hours ago

                                                                    Actually, I opened “Teaching deep learning” and smiled as I saw how it evolved:

                                                                    > There is a handful of popular deep learning libraries, including TensorFlow, Theano, Torch and Caffe. Each of them has Python interface (now also for Torch: PyTorch)

                                                                    > [...]

                                                                    > EDIT (July 2017): If you want a low-level framework, PyTorch may be the best way to start. It combines relatively brief and readable code (almost like Keras) but at the same time gives low-level access to all features (actually, more than TensorFlow).

                                                                    > EDIT (June 2018): In Keras or PyTorch as your first deep learning framework I discuss pros and cons of starting learning deep learning with each of them.

                                                                  • HarHarVeryFunny 4 hours ago

                                                                    The original TensorFlow had an API similar to the original Lua-based Torch (the predecessor to PyTorch) that required you to first build the network, node by node, then run it. PyTorch used a completely different, and much more convenient approach, where the network is built automatically for you just by running the forward pass code (and will then be used for the backward pass), using both provided node types and arbitrary NumPy compatible code. You're basically just writing differentiable code.

                                                                      This new PyTorch approach was eventually supported by TensorFlow as well ("eager mode"), but the PyTorch approach was such a huge improvement that there had been an immediate shift by many developers from TF to PyTorch, and TF never seemed able to regain the momentum.

                                                                    TF also suffered from having a confusing array of alternate user libraries built on top of the core framework, none of which had great documentation, while PyTorch had a more focused approach and fantastic online support from the developer team.
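
                                                                      A minimal illustration of that define-by-run idea (a sketch, assuming torch is installed): the graph is recorded as ordinary Python executes, including data-dependent branches, and is then used for the backward pass.

                                                                          import torch

                                                                          x = torch.randn(3, requires_grad=True)
                                                                          # the branch taken depends on the data; the recorded graph does too
                                                                          if x.sum() > 0:
                                                                              y = (x * x).sum()
                                                                          else:
                                                                              y = x.abs().sum()
                                                                          y.backward()  # differentiates whatever code actually ran
                                                                          print(x.grad)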

                                                                    • Gazoche 5 hours ago

                                                                        I'm no machine learning engineer, but I dabbled professionally with both frameworks a few years ago and the developer experience didn't even compare. The main issue with TF was that you could only choose between a powerful but incomprehensible, poorly documented [1], ultra-verbose and ever-changing low-level API, and an abstraction layer (Keras) that was too high-level to be really useful.

                                                                      Maybe TF has gotten better since but at the time it really felt like an internal tool that Google decided to just throw into the wild. By contrast PyTorch offered a more reasonable level of abstraction along with excellent API documentation and tutorials, so it's no wonder that machine learning engineers (who are generally more interested in the science of the model than the technical implementation) ended up favoring it.

                                                                      [1] The worst part was that Google only hosted the docs for the latest version of TF, so if you were stuck on an older version (because, oh I don't know, you wanted a stable environment to serve models in production), well tough luck. That certainly didn't gain TF any favors.

                                                                      • morshu9001 an hour ago

                                                                        I just remember TF1 being super hard to use as a beginner and Google repeatedly insisting it had to be that way. People talk about the layering API, but it's more than that, everything about it was covered with sharp edges.

                                                                        • zapnuk 9 hours ago

                                                                          For me it was about 8 years ago. Back then TF was already bloated but had two weaknesses. Their bet on static compute graphs made writing code verbose and debugging difficult.

                                                                          The few people I knew back then used Keras instead. I switched to PyTorch for my next project, which was more "batteries included".

                                                                          • htrp 30 minutes ago

                                                                            Greenfielding TF2.X and not maintaining 1.X compatibility

                                                                            • michaelt 7 hours ago

                                                                              Imagine a total newbie trying to fine-tune an image classifier, reusing some open source example code, about a decade ago.

                                                                              If their folder of 10,000 labelled images contains one image that's a different size to the others, the training job will fail with an error about unexpected dimensions while concatenating.

                                                                              But it won't be able to say the file's name, or that the problem is an input image of the wrong size. It'll just say it can't concatenate tensors of different sizes.

                                                                              An experienced user will recognise the error immediately, and will have run a data cleansing script beforehand anyway. But it's not experienced users who bounce from frameworks, it's newbies.
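
                                                                              (For what it's worth, the pre-flight check is only a few lines; a rough sketch, assuming Pillow is installed and the hypothetical folder images/ holds the JPEGs:)

                                                                                  from pathlib import Path
                                                                                  from PIL import Image

                                                                                  # bucket files by image size so the single odd-sized file is easy to spot
                                                                                  sizes = {}
                                                                                  for path in Path("images").glob("*.jpg"):
                                                                                      with Image.open(path) as im:
                                                                                          sizes.setdefault(im.size, []).append(path.name)

                                                                                  for size, names in sizes.items():
                                                                                      print(size, len(names), names[:3])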

                                                                              • mschuster91 7 hours ago

                                                                                > An experienced user will recognise the error immediately, and will have run a data cleansing script beforehand anyway. But it's not experienced users who bounce from frameworks, it's newbies.

                                                                                Even seasoned developers will bounce away from frameworks or libraries - no matter if old dogs or the next hot thing - if the documentation isn't up to speed, or if simple, common tasks require wading through dozens of pages of documentation.

                                                                                Writing good documentation is hard enough, writing relevant "common usage examples" is even harder... but keeping them up to date and working is a rarely seen art.

                                                                                And the greatest art of all of it is logging. Soooo many libraries refuse to implement detailed structured logging in internal classes (despite particularly Java and PHP offering very powerful mechanisms), making it much more difficult to troubleshoot problems in the field.

                                                                              • qmatch 7 hours ago

                                                                                I personally believe TF1 was serving the need of its core users. It provided a compileable compute graph with autodiff, and you got very efficient training and inference from it. There was a steep learning curve, but if you got past it, things worked very very well. The distributed TF never really took off—it was buggy, and I think they made some wrong early bets in the design for performance reasons that should have been sacrificed in favor of simplicity.

                                                                                I believe some years after the TF1 release, they realized the learning curve was too steep and they were losing users to PyTorch. I think the Cloud team was also attempting to sell customers on their amazing DL tech, which was falling flat. So they tried to keep the TF brand while totally changing the product under the hood by introducing imperative programming and gradient tapes. They killed TF1, upsetting those users, while not having a fully functioning TF2, all the while having plenty of documentation pointing to TF1 references that didn't work. Any new grad student made the simple choice of using a tool that was user-friendly and worked, which was PyTorch. And most old TF1 users hopped on the bandwagon.

                                                                                • tdullien 6 hours ago

                                                                                  I only remember 2015 TF and I was wondering: why would I use Python to assemble a computational graph when what I really want is to write code and then differentiate through it?

                                                                                • intermerda 9 hours ago

                                                                                  Do you have experience in both JAX and PyTorch? Why do you prefer JAX?

                                                                                  • cl3misch 6 hours ago

                                                                                    Not OP. I prefer JAX for non-AI tasks in scientific computing because of the different mental model than PyTorch. In JAX, you think about functions and gradients of functions. In PyTorch you think about tensors which accumulate a gradient while being manipulated through functions. JAX just suits my way of thinking much better.

                                                                                    I also like that jax.jit forces you to write "functional" functions free of side effects or inplace array updates. It might feel weird at first (and not every algorithm is suited for this style) but ultimately it leads to clearer and faster code.

                                                                                    I am surprised that JIT in PyTorch gets so little attention. Maybe it's less impactful for PyTorch's usual use case of large networks, as opposed to general scientific computing?
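
                                                                                    A small sketch of that mental model (assuming jax is installed; the loss function here is made up): the loss is a pure function, and grad/jit just return new functions.

                                                                                        import jax
                                                                                        import jax.numpy as jnp

                                                                                        def loss(w, x, y):
                                                                                            # a pure function of its inputs, no hidden state
                                                                                            return jnp.mean((x @ w - y) ** 2)

                                                                                        grad_loss = jax.jit(jax.grad(loss))  # gradient w.r.t. the first argument, compiled

                                                                                        w = jnp.ones(3)
                                                                                        x = jnp.arange(12.0).reshape(4, 3)
                                                                                        y = jnp.ones(4)
                                                                                        print(grad_loss(w, x, y))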

                                                                                    • imtringued 5 hours ago

                                                                                      >I also like that jax.jit forces you to write "functional" functions free of side effects or inplace array updates. It might feel weird at first (and not every algorithm is suited for this style) but ultimately it leads to clearer and faster code.

                                                                                      It's not weird. It's actually the most natural way of doing things for me. You just write down your math equations as JAX and you're done.

                                                                                      • Majromax 2 hours ago

                                                                                        > You just write down your math equations as JAX and you're done.

                                                                                        It's natural when your basic unit is a whole vector (tensor), manipulated by some linear algebra expression. It's less natural if your basic unit is an element of a vector.

                                                                                        If you're solving sudoku, for example, the obvious 'update' is in-place.

                                                                                        In-place updates are also often the right answer for performance reasons, such as writing the output of a .map() operation directly to the destination tensor. Jax leans heavily on compile-time optimizations to turn the mathematically-nice code into computer-nice code, so the delta between eager-Jax and compiled-Jax is much larger than the delta between eager-Pytorch and compiled-Pytorch.
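
                                                                                        A side-by-side of the two styles (a small sketch, assuming both torch and jax are installed):

                                                                                            import torch
                                                                                            import jax.numpy as jnp

                                                                                            t = torch.zeros(4)
                                                                                            t[1] = 5.0              # PyTorch: plain in-place write

                                                                                            a = jnp.zeros(4)
                                                                                            a = a.at[1].set(5.0)    # JAX: functional update, returns a new array
                                                                                            print(t, a)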

                                                                                    • havercosine 4 hours ago

                                                                                      Not OP. I have production/scale experience in PyTorch and toy/hobby experience in JAX. I wish I had the time or liberty to use JAX more. It consists of a small, orthogonal set of ideas that combine like Lego blocks. I can attempt to reason from first principles about performance. The documentation is super readable and strives to make you understand things.

                                                                                      JAX seems well engineered. One would argue so was TensorFlow. But the ideas behind JAX were built outside Google (autograd), so it has struck the right balance by staying close to idiomatic Python/NumPy.

                                                                                      PyTorch is where the tailwinds are, though. It is a wildly successful project which has acquired a ton of code over the years, so it is a little harder to figure out how something works (say, torch.compile) from first principles.

                                                                                  • aabhay 8 hours ago

                                                                                    For anyone that’s curious, the underlying Torch library is also a joy to work with, as are the many other torch bindings. For example, Rust has tch and Burn which both work with libtorch.

                                                                                    PyTorch of course has the benefit of being dynamically debuggable. Can't forget the first time I breakpointed my PyTorch model and wrote PyTorch calls in the terminal to inspect its behavior. That's still something I miss a lot now that I'm working with only "fast" compiled code.

                                                                                    • lysecret 7 hours ago

                                                                                      I wrote some truly awful code back in the day because of that, but god, it was glorious.

                                                                                    • chopete3 9 hours ago

                                                                                        > Every major AI company and hardware vendor are on a speed dial. This kind of power is really hard to give up. But curiosity ultimately won out in my head.

                                                                                        A simple feeling has such power. May he get an opportunity to create one more powerful tool before retiring.

                                                                                      • Lord-Jobo an hour ago

                                                                                        If the curiosity dies, the entire thing crumbles.

                                                                                        The second I stop being curious I stop finding new and exciting things to do, and I stop feeling fulfillment. It’s one of the biggest signs saying “it’s time to move on”.

                                                                                          I feel so strongly for the people who can't afford that luxury. I've been there: unfulfilling jobs for years because of bills or résumé building.

                                                                                      • ergocoder 6 hours ago

                                                                                        I wonder how much this guy has earned from Meta in total. Would it reach $100M?

                                                                                        • stephenlf 4 hours ago

                                                                                            Considering Meta was trying to poach AI talent for $250M, I wouldn't be surprised if this guy has his own 8-figure income.

                                                                                          • assemblyman an hour ago

                                                                                              If someone made $2 million/year over 10 years, after taxes it would be about $1 million/year (NYC has local taxes too). Let's say all of it was saved and invested in the SP500 or Meta.

                                                                                            SP500: tripled over 10 years i.e. ~12% a year. Reinvesting dividends gives ~14% a year

                                                                                            Meta: 8x over 10 years i.e. ~23% a year.

                                                                                              If growth and compensation/savings were uniform over the years, the total portfolio (in units of one year's $1M savings) would be the geometric series

                                                                                              1 (this year) + (1+r) (previous year) + (1+r)^2 (the year before that) + ... = ((1+r)^11 - 1)/r

                                                                                              since each year's contribution compounds for a different amount of time. Plugging in:

                                                                                              SP500: r = 14% -> ~$23M. Meta: r = 23% -> ~$38M.

                                                                                              Now, it's entirely possible that the compensation for a position like this runs into the tens of millions, and one can easily account for non-uniform compensation.

                                                                                              Even in NYC, actually even in Manhattan, $10M is more than comfortable for retirement. It lets you draw $300-$400K a year (3-4%, adjusted for inflation annually). If one is taking a short sabbatical, then it's a no-brainer.
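
                                                                                              (A quick script to check the arithmetic above, under the same uniform-growth assumptions:)

                                                                                                  # 11 annual contributions of $1M after tax, each compounding at rate r
                                                                                                  def portfolio_millions(r, years=11):
                                                                                                      return sum((1 + r) ** k for k in range(years))

                                                                                                  print(portfolio_millions(0.14))  # ~23 -> ~$23M at 14%/yr (SP500 w/ dividends)
                                                                                                  print(portfolio_millions(0.23))  # ~38 -> ~$38M at 23%/yr (Meta)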

                                                                                        • shevy-java 4 hours ago

                                                                                          To me it sounds as if he is trying to open a new chapter in his life. Good for him, but I wonder if everything was really as smooth as described. People often write as if everything were perfect on their blogs. Well, it could be. But it could also be that not everything was perfect and it simply went undescribed on the blog.

                                                                                          • sumedh 6 hours ago

                                                                                            His homepage says he wants to build a robot. So he is probably going to work with robots for his next role.

                                                                                            He is an investor in Anthropic; I didn't know you could do that while working for Meta.

                                                                                            • geodel 27 minutes ago

                                                                                              Could be Meta is quite liberal in this area. Or it could be one of those "For my friend, anything; for everyone else, it's corporate policy."

                                                                                            • ninjagoo 4 hours ago

                                                                                              Soumith's 2nd release? https://github.com/pytorch/pytorch/releases/tag/v0.1.1

                                                                                              Also, looking at the contribution history for a long career is very interesting; reflects the changing roles over time https://github.com/soumith

                                                                                              • gdiamos 7 hours ago

                                                                                                This is the end of an era. Amazing work soumith.

                                                                                                • irthomasthomas 7 hours ago

                                                                                                  Counterfactual Regret Minimization irl

                                                                                                  • philipwhiuk 5 hours ago

                                                                                                    There's no context around 'FAIR' - is it https://www.go-fair.org/?

                                                                                                  • mxkopy 9 hours ago

                                                                                                    PyTorch is one of those tools that’s so simple and easy to take apart that you feel like you might’ve been able to make it yourself. I can’t imagine how much engineering effort was behind all those moments where I thought to myself, “of course it should work like that, how can it be any other way?”

                                                                                                    • TechnicolorByte 9 hours ago

                                                                                                      Can anyone recommend a technical overview describing the design decisions PyTorch made that led it to win out?

                                                                                                      • GistNoesis 7 hours ago

                                                                                                        PyTorch's choice of a dynamic computation graph [1] made it easier to debug and implement, leading to higher adoption, even though running speed was initially slower (and therefore training cost higher).

                                                                                                        Other decisions follow from this one.

                                                                                                        TensorFlow started static and had to move to dynamic at version 2.0, which broke everything: fragmentation between TensorFlow 1, TensorFlow 2, Keras, and JAX.

                                                                                                        PyTorch's compilation of this computation graph then erased TensorFlow's remaining edge.

                                                                                                        Is the battle over? From a purely computational point of view, the PyTorch solution is very far from optimal, and billions of dollars of electricity and GPUs are burned every year, but the major players are happy with circular deals to entrench their positions. So at the pace of current AI code development, PyTorch is probably one or two years from being old history.

                                                                                                        [1] https://www.geeksforgeeks.org/deep-learning/dynamic-vs-stati...
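
                                                                                                        (Both styles end up in one toy sketch, assuming torch 2.x is installed: the graph is traced eagerly as the function runs, and torch.compile can then optimize it after the fact.)

                                                                                                            import torch

                                                                                                            def f(x):
                                                                                                                return torch.sin(x) ** 2 + torch.cos(x) ** 2  # built step by step, eagerly

                                                                                                            cf = torch.compile(f)  # optional: compile the traced graph for speed
                                                                                                            print(f(torch.randn(8)), cf(torch.randn(8)))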

                                                                                                        • saagarjha 5 hours ago

                                                                                                          Someone’s got to prototype the next generation of architectures.

                                                                                                          • Uehreka 7 hours ago

                                                                                                            > at the pace of current AI code development, probably one or two years before Pytorch is old history.

                                                                                                            Ehhh, I don’t know about that.

                                                                                                            Sure, new AI techniques and new models are coming out pretty fast, but when I go to work with a new AI project, they’re often using a version of PyTorch or CUDA from when the project began a year or two ago. It’s been super annoying having to update projects to PyTorch 2.7.0 and CUDA 12.8 so I can run them on RTX 5000 series GPUs.

                                                                                                            All this to say: If PyTorch was going to be replaced in a year or two, we’d know the name of its killer by now, and they’d be the talk of HN. Not to mention that at this point all of the PhDs flooding into AI startups wrote their grad work in PyTorch, it has a lot of network lock-in that an upstart would have to overcome by being way better at something PyTorch can never be good at. I don’t even know what that would be.

                                                                                                            Bear in mind that it took a few years for Tensorflow to die out due to lock in, and we all knew about PyTorch that whole time.

                                                                                                            • GistNoesis 4 hours ago

                                                                                                              > a lot of network lock-in that an upstart would have to overcome by being way better at something PyTorch can never be good at

                                                                                                              The cost of migrating higher-level code to a newer framework is going to zero. You ask your favorite agent (or intern) to port it and check that the migration is exact. We already see this in the multitude of deep-learning frameworks.

                                                                                                              The day an optimization trick appears that PyTorch can't do but another framework can, one that reduces your training cost 10x, PyTorch is going the way of the dodo.

                                                                                                              The day an architecture that can't be implemented in PyTorch gets superior performance, it's bye-bye Python.

                                                                                                              We see this with architectures that require real-time rendering, like Gaussian Splatting (Instant NeRF), or the caching strategies for LLM sequence generation.

                                                                                                              PyTorch has 3 main selling points:

                                                                                                              - Abstracting away GPU (or device) specific code, which is needed because of Nvidia's mess: custom optimized kernels, which you are forced to adapt to if you don't want to write them yourself.

                                                                                                              This matters less if you don't mind writing optimized kernels because the machine writes them; or if you don't need CUDA because you can't use Nvidia hardware (for example, you are in China); or if you use custom silicon, like Groq, and need your own kernels anyway.

                                                                                                              - Automatic differentiation. This is one of PyTorch's weak points, because they went for easy instead of optimal and shut themselves off from some architectures. A language like Julia, thanks to dynamic low-level compilation, can do things PyTorch won't even dream about (though Julia has its own problems, mainly around memory allocations). With PyTorch's introduction of the "scan" function [2], we have come full circle back to Theano, TensorFlow's/Keras's ancestor; scan is usually where the weak automatic-differentiation strategy PyTorch chose becomes painful.

                                                                                                              The optimal solution, as all physics PhDs who have written simulations know, is writing custom adjoint code, either via 'source code transformation' or symbolically: it's not hard but very tedious, so it's now a great fit for your LLM (or intern, or PhD candidate running 'student gradient descent'), provided you prove or check that your gradient calculation is correct. (A sketch of such a check follows the list below.)

                                                                                                              - Cluster orchestration and serialization: a model can be shared with fewer security risks than arbitrary source code, because you only share weights, and a model can be split between machines dynamically. But this is also a big weakness, because your code rusts as you become dependent on versioning: you are locked to the specific version your model was trained on. (A weights-only sketch also follows below.)

                                                                                                              [2] "https://docs.pytorch.org/xla/master/features/scan.html

                                                                                                              • morshu9001 36 minutes ago

                                                                                                                What would stop PyTorch from implementing whatever optimization trick becomes important? Even if it requires a different API.

                                                                                                          • huevosabio 9 hours ago

                                                                                                            I don’t know the full list, but back when it came out, TF felt like a crude set of bindings to the underlying C++/CUDA workhorse. PyTorch felt, in contrast, Pythonic. It was much closer in feeling to NumPy.

                                                                                                            • puttycat 9 hours ago

                                                                                                              I think it was mostly the eager evaluation that made it possible to debug every step in the network's forward/backward passes. TensorFlow didn't have that at the time, which made debugging practically impossible.
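
                                                                                                              (A tiny illustration of what eager evaluation buys you, in standard PyTorch; the toy forward function is hypothetical.)

                                                                                                                  import torch

                                                                                                                  # Because PyTorch runs eagerly, intermediates are ordinary Python
                                                                                                                  # values: you can print them, or pause mid-forward with a debugger.
                                                                                                                  def forward(x, w):
                                                                                                                      h = x @ w
                                                                                                                      print("mean activation:", h.mean().item())  # inspect a live tensor
                                                                                                                      # import pdb; pdb.set_trace()               # or stop right here
                                                                                                                      return torch.relu(h)

                                                                                                                  x, w = torch.randn(4, 3), torch.randn(3, 2, requires_grad=True)
                                                                                                                  y = forward(x, w)
                                                                                                                  y.sum().backward()   # the backward pass is eager too
                                                                                                                  print(w.grad)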

                                                                                                              • albanD 2 hours ago

                                                                                                                I would highly recommend the podcast by ezyang https://pytorch-dev-podcast.simplecast.com/ for a collection of design discussions on the different parts of the library.

                                                                                                                • mxkopy 7 hours ago

                                                                                                                  I’m not sure if such an overview exists, but when Caffe2 was still a thing and JAX was a big contender, dynamic vs. static computational graphs seemed to be a major focus point for people ranking the frameworks.

                                                                                                              • numice 7 hours ago

                                                                                                                I read one post on his blog and learned that Adam Paszke reached out to the author and got an internship. I wonder if it was really that easy to get an internship at FAIR; I thought they hired only PhDs.

                                                                                                                • nswizzle31 3 hours ago

                                                                                                                  I was pretty involved in the PyTorch ecosystem in the early days around 2016 and Adam was nothing short of a genius and prolific developer whose contributions to the codebase and community were immense. I think he was like an undergrad in Poland at the time. My understanding is that his contributions came before the internship, but I don’t know.

                                                                                                                  My memory is that Soumith was really open to other people’s contributions and questions, no matter their credentials. He was a great leader who felt approachable to the open-source community.

                                                                                                                  • vintermann 6 hours ago

                                                                                                                    I didn't know that. Soumith Chintala certainly paid it forward. He was very helpful and supportive of random strangers (like me!) in the early pytorch days. I count him with Andrej Karpathy and Chris Olah as one of the people who really made machine learning accessible to regular software engineers.

                                                                                                                    • chrneu 5 hours ago

                                                                                                                      You can't do anything if you never try.

                                                                                                                    • jsrozner 5 hours ago

                                                                                                                      Is this also partially AI generated? What's with the repeated short phrases? Is this just everyone's style now?

                                                                                                                      • Cthulhu_ 5 hours ago

                                                                                                                        You're asking a lot of questions, but are you willing to think about it? For one, no, it's not "everyone's style", because you wouldn't have asked whether it was; you'd know.

                                                                                                                        • kelvinjps10 2 hours ago

                                                                                                                          The writing on this post felt very human to me.

                                                                                                                        • isusmelj 6 hours ago

                                                                                                                          Very proud as a Swiss that Soumith has a .ch domain!

                                                                                                                          • roflmaostc 5 hours ago

                                                                                                                            Probably because his first name is Chintala

                                                                                                                            • spprashant 5 hours ago

                                                                                                                              That'd be his last name

                                                                                                                              • roflmaostc 3 hours ago

                                                                                                                                true haha

                                                                                                                          • kleiba 6 hours ago

                                                                                                                            You forgot to thank Jürgen. /scnr

                                                                                                                            • hshdhdhehd 7 hours ago

                                                                                                                              Nice, that is the dream career!

                                                                                                                              • CommenterPerson 2 hours ago

                                                                                                                                Firstly, good work.

                                                                                                                                Ironically, one HN front-page item today is this: "Meta projected 10% of 2024 revenue came from scams and banned goods, Reuters reports"

                                                                                                                                Glad you're leaving, hopefully you're in a good place financially. Take a page from Bill Gates and work on something that attempts to improve society. Stay away from surveillance capitalism and enshittification.

                                                                                                                                • xpe 4 hours ago

                                                                                                                                  It is notable (but perhaps not surprising) that this is mostly about the people and the work itself. The writing is silent on the downstream impacts on the world. In contrast, there are fields (global development, medicine, etc.) where people tend to focus on the impact on humanity (especially when reaching a milestone in their career).

                                                                                                                                  • perfmode 9 hours ago

                                                                                                                                    Respect.

                                                                                                                                    • msmd74 9 hours ago

                                                                                                                                      Sounds like you had a momentous run.

                                                                                                                                      If you take advice from reformed Internet trolls, consider turning off all your devices and trying to give yourself at least a week, but ideally a month offline staring at your new baby. You'll never get that time back and there's nothing your brain will appreciate more than loading up those memories as they grow.

                                                                                                                                      Good luck.

                                                                                                                                      • BoredPositron 9 hours ago

                                                                                                                                        The last few years must have been incredibly exhausting. Thanks for your work, good luck, and 73.

                                                                                                                                        • deaux an hour ago

                                                                                                                                          > To Mark Zuckerberg and Mike Schroepfer, who believed that open-sourcing is fundamentally important and is a sound business strategy. This is so hard to understand for most people within the course of business, but we’ve run lock-step on this strategy without ever having to discuss it. Without you two, neither FAIR nor PyTorch would’ve happened. And those mean so much to me.

                                                                                                                                          Disgusting. Absolutely disgusting. I'm so done with people like this. People who only care about their specific interest and have zero care for anything else. It's on the level of hypothetical vegetarians thanking Hitler for being vegetarian and for how much he did for their cause. Mark Zuckerberg gives absolutely zero shits about open source. None. He cares about personal short-term power and money and is willing to do anything to further those. He's a psychopath, as are the large majority of big tech CEOs, as that is what modern US society rewards and why it's collapsing at an incredible rate.

                                                                                                                                          I'm so sick of it all. There is as much evidence for my above claims as for the holocaust. Denying any of it, or sticking your head in the sand, is on the level of denying the holocaust. Fuck, it's worse if you're in tech and are living in 2025. At least the holocaust happened more than 80 years ago in Europe, rather than over the last 20 years in the country you're working at.

                                                                                                                                          But but my PyTorch!! And lowering the access barriers!! Sure. Like how colonization and slavery also left some infrastructure behind in those countries. Better go raving about those organizations as well.

                                                                                                                                          But but Godwin's law!! It's a meaningless law. The way it's used here, the parallels, are accurate. Go substitute every comparison with Stalin-era USSR analogues if you must so the "law" doesn't hold or something.

                                                                                                                                          But but the PyTorch work doesn't help all the hurt, pain, societal damage and actual lives that the Meta Party costs in any way or form! Mate. If Zuckerberg did not think the PyTorch endeavor helped in his sole goal of personal power, he would not be doing it. End of. Thus it's bad. To think the positive externalities outweigh that is arrogant and purely based on self-interests.

                                                                                                                                          But but it's just one sentence in a piece!! No. The piece makes it absolutely clear as day that the author genuinely feels this way about the organization he worked for, and has absolutely zero care in the world to understand what the Party is actually doing.

                                                                                                                                          But but most of modern humankind is like this! No, this person is worse than average in this respect. But to an extent, yes. It makes me want to stop living. I don't care anymore whether this breaks any rules or whatever. I'm right and I'm tired.

                                                                                                                                          • ishouldbework 5 hours ago

                                                                                                                                            Look, I get that some pages require JavaScript, but

                                                                                                                                                <style class="fallback">body{visibility:hidden;white-space:pre;font-family:monospace}</style>
                                                                                                                                            
                                                                                                                                            which is then unset by JS, with no <noscript> anywhere, is just... I just get a white page.

                                                                                                                                            Changing it to

                                                                                                                                                <style class="fallback">body{white-space:pre-wrap;font-family:monospace}</style>
                                                                                                                                            
                                                                                                                                            gives a perfectly readable page, so it seems a bit... pointless.