• vighneshiyer 9 months ago

    This work from Google (original Nature paper: https://www.nature.com/articles/s41586-021-03544-w) has been credibly criticized by several researchers in the EDA CAD discipline. These papers are of interest:

    - A rebuttal by a researcher within Google who wrote this at the same time as the "AlphaChip" work was going on ("Stronger Baselines for Evaluating Deep Reinforcement Learning in Chip Placement"): http://47.190.89.225/pub/education/MLcontra.pdf

    - The 2023 ISPD paper from a group at UCSD ("Assessment of Reinforcement Learning for Macro Placement"): https://vlsicad.ucsd.edu/Publications/Conferences/396/c396.p...

    - A paper from Igor Markov which critically evaluates the "AlphaChip" algorithm ("The False Dawn: Reevaluating Google's Reinforcement Learning for Chip Macro Placement"): https://arxiv.org/pdf/2306.09633

    In short, the Google authors did not fairly evaluate their RL macro placement algorithm against other SOTA algorithms: rather, they claim to perform better than a human at macro placement, which falls far short of what mixed-size placement algorithms are capable of today. The RL technique also requires significantly more compute than other algorithms, and ultimately it learns a surrogate function for placement iteration rather than any novel representation of the placement problem itself.

    In full disclosure, I am quite skeptical of their work and wrote a detailed post on my website: https://vighneshiyer.com/misc/ml-for-placement/

    • negativeonehalf 9 months ago

      FD: I have been following this whole thing for a while, and I personally know a number of the people involved.

      The AlphaChip authors address criticism in their addendum, and in a prior statement from the co-lead authors: https://www.nature.com/articles/s41586-024-08032-5 , https://www.annagoldie.com/home/statement

      - The 2023 ISPD paper didn't pre-train at all. This means no learning from experience, for a learning-based algorithm. I feel like you can stop reading there.

      - The ISPD paper and the MLcontra paper both used much larger, older technology nodes, which have pretty different physical properties. TPU is on a sub-10nm technology node, whereas ISPD uses 45nm and 12nm. These are really different from a physical design perspective. Even worse, MLcontra uses a truly ancient benchmark with a >100nm technology node.

      Markov's paper just summarizes the other two.

      (Incidentally, none of ISPD / MLcontra / Markov were peer reviewed - ISPD 2023 was an invited paper.)

      There's a lot of other stuff wrong with the ISPD paper and the MLcontra paper - happy to go into it - and a ton of weird financial incentives lurking in the background. Commercial EDA companies do NOT want a free open-source tool like AlphaChip to take over.

      Reading your post, I appreciate the thoroughness, but it seems like you are too quick to let ISPD 2023 off the hook for failing to pre-train and using less compute. The code for pre-training is just the code for training --- you train on some chips, and you save and reuse the weights between runs. There's really no excuse for failing to do this, and the original Nature paper described at length how valuable pre-training was. Given how different TPU is from the chips they were evaluating on, they should have done their own pre-training, regardless of whether the AlphaChip team released a pre-trained checkpoint on TPU.
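
      To make that concrete, here is a minimal sketch of what "pre-training is just training plus saving the weights" means in practice. The function and class names here are hypothetical placeholders, not the actual circuit_training API:

          # Minimal sketch (hypothetical helpers, not the real circuit_training API):
          # "pre-training" is ordinary training whose weights are saved and reused.
          def pretrain(policy, train_step, training_netlists, out_path="pretrained.ckpt"):
              # policy and train_step are stand-ins for the RL policy network
              # and its training loop; the point is only the save/reload step.
              for netlist in training_netlists:
                  train_step(policy, netlist)
              policy.save(out_path)            # keep the weights for later runs

          def finetune_on_new_chip(policy, train_step, netlist, ckpt="pretrained.ckpt"):
              policy.load(ckpt)                # start from prior experience...
              train_step(policy, netlist)      # ...then fine-tune on the unseen chip
              return policy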

      (Using less compute isn't just about making it take longer - ISPD 2023 used half as many GPUs and 1/20th as many RL experience collectors, which may screw with the dynamics of the RL job. And... why not just match the original authors' compute, anyway? Isn't this supposed to be a reproduction attempt? I really do not understand their decisions here.)

      • isotypic 9 months ago

        Why does pretraining or not matter in the ISPD 2023 paper? The circuit_training repo, as noted in the rebuttal of the rebuttal by the ISPD 2023 paper authors, claims training from scratch is "comparable or better" than fine-tuning the pre-trained model. So no matter your opinion on the importance of the pretraining step, this result isn't replicable, at which point the ball is in Google's court to release code/checkpoints to show otherwise.

        • negativeonehalf 9 months ago

          The quick-start guide in the repo said you don't have to pre-train for the sample test case, meaning you can validate your setup without pre-training. That does not mean you don't need to pre-train! Again, the paper talks at length about the importance of pre-training.

          • marcinzm 9 months ago

            This is what the repo says:

            > Results
            > Ariane RISC-V CPU
            > View the full details of the Ariane experiment on our details page. With this code we are able to get comparable or better results training from scratch as fine-tuning a pre-trained model.

            The paper includes a graph showing that it takes longer for Ariane to train without pre-training; however, the results in the end are the same.

            • negativeonehalf 9 months ago

              See their ISPD 2022 paper, which goes into more detail about the value of pre-training (e.g. Figure 7): https://dl.acm.org/doi/pdf/10.1145/3505170.3511478

              Sometimes training from scratch is able to match the results of pre-training, given ~5X more time to converge. Other times, though, it never does as well as a pre-trained model, converging to a worse final result.

              This isn't too surprising -- the whole point of the method is to be able to learn from experience.

            • anna-gabriella 9 months ago

              That does not mean you need to pre-train either. Common sense, no?

          • wegfawefgawefg 9 months ago

            In reinforcement learning, pre-training reduces peak performance. We can argue about this, but it is not, on its own, a strong enough point to stop reading.

            • 317070 9 months ago

              Do you have a citation for this? I did my PhD on this topic 8 years ago, and I haven't closely followed the field since. I'm curious to learn more.

              • wegfawefgawefg 9 months ago

                Not off the top of my head, but back when self-play was first being figured out, the competing strategy was behavioural cloning, and there was some flirting with bootstrapping self-play with initial behavioural cloning. It would always bias the policy and reduce exploration, and you would end up with a worse final policy. Best to train from scratch. All the top RL papers did no behavioural pretraining and beat the ones that did by many orders of magnitude on scores.

                We are going to relearn this lesson with ambulation and grasping, as all the large companies try to make useful robots from human shadowing to reduce the gigantic sample-size burden of self-play. Likely, after the initial years, computers will get a couple more doublings in compute per watt, and we will see the fully self-trained models take over those domains as the old human-data-biased models get thrown out.

              • negativeonehalf 9 months ago

                See this ISPD 2022 paper where the AlphaChip authors dive more into the value of pre-training (Figure 7, Figure 8): https://dl.acm.org/doi/pdf/10.1145/3505170.3511478

              • clickwiseorange 9 months ago

                Oh, man... this is the same old stuff from the 2023 Anna Goldie statement (is this Anna Goldie's comment?). This was all addressed by Kahng in 2023 - no valid criticisms. Where do I start?

                Kahng's ISPD 2023 paper is not in dispute - no established experts objected to it. The Nature paper is in dispute. Dozens of experts objected to it; Kahng, Cheng, Markov, Madden, Lienig, and Swartz objected publicly.

                The fact that Kahng's paper was invited doesn't mean it wasn't peer reviewed. I checked with ISPD chairs in 2023 - Kahng's paper was thoroughly reviewed and went through multiple rounds of comments. Do you accept it now? Would you accept peer-reviewed versions of other papers?

                Kahng is the most prominent active researcher in this field. If anyone knows this stuff, it's Kahng. There were also five other authors in that paper, including another celebrated professor, Cheng.

                The pre-training thing was disclaimed in the Google release. No code, data or instructions for pretraining were given by Google for years. The instructions said clearly: you can get results comparable to Nature without pre-training.

                The "much older technology" is also a bogus issue because the HPWL scales linearly and is reported by all commercial tools. Rectangles are rectangles. This is textbook material. But Kahng etc al prepared some very fresh examples, including NVDLA, with two recent technologies. Guess what, RL did poorly on those. Are you accepting this result?

                The bit about financial incentives and open-source is blatantly bogus, as Kahng leads OpenROAD - the main open-source EDA framework. He is not employed by any EDA companies. It is Google who has huge incentives here, see Demis Hassabis tweet "our chips are so good...".

                The "Stronger Baselines" matched compute resources exactly. Kahng and his coauthors performed fair comparisons between annealing and RL, giving the same resources to each. Giving greater resources is unlikely to change results. This was thoroughly addressed in Kahng's FAQ - if you only could read that.

                The resources used by Google were huge. Cadence tools in Kahng's paper ran hundreds times faster and produced better results. That is as conclusive as it gets.

                It doesn't take a Ph.D. to understand fair comparisons.

                • negativeonehalf 9 months ago

                  For AlphaChip, pre-training is just training. You train, and save the weights in between. This has always been supported by Google's open-source repository. I've read Kahng's FAQ, and he fails to address this, which is unsurprising, because there's simply no excuse for cutting out pre-training for a learning-based method. In his setup, every time AlphaChip sees a new chip, he re-randomizes the weights and makes it learn from scratch. This is obviously a terrible move.

                  HPWL (half-perimeter wirelength) is an approximation of wirelength, which is only one component of the chip floorplanning objective function. It is relatively easy to crunch all the components together and optimize HPWL --- minimizing actual wirelength while avoiding congestion issues is much harder.
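
                  For reference, HPWL is simple to compute: for each net, take the half-perimeter of the bounding box around its pins, then sum over all nets. A minimal sketch, assuming a netlist is represented as a list of nets, each a list of (x, y) pin coordinates:

                      def hpwl(nets):
                          # nets: iterable of nets, each an iterable of (x, y) pin coordinates
                          total = 0.0
                          for pins in nets:
                              xs = [x for x, _ in pins]
                              ys = [y for _, y in pins]
                              # half-perimeter of the bounding box enclosing the net's pins
                              total += (max(xs) - min(xs)) + (max(ys) - min(ys))
                          return total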

                  Simulated annealing is good at quickly converging on a bad solution to the problem, with relatively little compute. So what? We aren't compute-limited here. Chip design is a lengthy, expensive process where even a few-percent wirelength reduction can be worth millions of dollars. What matters is the end result, and ML has SA beat.

                  (As for conflict of interest, my understanding is Cadence has been funding Kahng's lab for years, and Markov's LinkedIn says he works for Synopsys. Meanwhile, Google has released a free, open-source tool.)

                  • clickwiseorange 9 months ago

                    It's not that one needs an excuse. The Google CT repo said clearly that you don't need to pretrain. "Supported" usually includes at least an illustration and some scripts to get it going - there was no such thing there before Kahng's paper. Pre-training was not recommended and was not supported.

                    Everything optimized in Nature RL is an approximation. HPWL is where you start, and RL uses it in the objective function too. As shown in "Stronger Baselines", RL loses a lot on HPWL - so much that nothing else can save it. If your wires are very long, you need routing tracks to route them, and you end up with congestion too.

                    SA consistently produces better solutions than RL for various time budgets. That's what matters. Both papers have shown that SA produces competent solutions. You give SA more time, you get better solutions. In a fair comparison, you give equal budgets to SA and RL. RL loses. This was confirmed using Google's RL code and two independent SA implementations, on many circuits. Very definitively. No, ML did not have SA beat - please read the papers.

                    Cadence hasn't funded Kahng for a long time. In fact, Google funded Kahng more recently, so he has all the incentives to support Google. Markov's LinkedIn page says he worked at Google before. Even Chatterjee, of all people, worked at Google.

                    Google's open-source tool is a head fake, it's practically unusable.

                    • negativeonehalf 9 months ago

                      The Nature paper describes the importance of pre-training repeatedly. The ability to learn from experience is the whole point of the method. Pre-training is just training and saving the weights -- this is ML 101.

                      I'm glad you agree that HPWL is a proxy metric. Optimizing HPWL is a fun applied math puzzle, but it's not chip design.

                      I am unaware of a single instance of someone using SA to generate real-world, usable macro layouts that were actually taped out, much less for modern chip design. This is in part due to SA's struggles to manage congestion, which result in unusable layouts. SA converges quickly to a bad solution, but this is of little practical value.

                      • clickwiseorange 9 months ago

                        1. The Nature paper said one thing, the code did something else, as we've discovered. The RL method does some training as it goes. So, pre-training is not the same as training. Hence "pre". Another problem with pretraining in Google's work is data contamination - we can't compare test and training data. The Google folks admitted to training and testing on different versions of the same design. That's bad. Rejection-level bad.

                        2. HPWL is indeed a nice, simple objective. So nice that Jeff Dean's recent talks use it. It is chip design. All commercial circuit placers, without exception, optimize it and report it. All EDA publications report it. Google's RL optimized HPWL + density + congestion.

                        3. This shows you aren't familiar with EDA. Simulated Annealing was the king of placement from the mid-1980s to the mid-1990s. Most chips were placed by SA. But you don't have to go far - as I recall, the Nature paper says they used SA to postprocess macro placements.

                        SA can indeed find mediocre solutions quickly, but it keeps on improving them, just like RL. Perhaps you aren't familiar with SA. I am. There are provable results showing that SA finds an optimal solution if given enough time. Not so for RL.
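
                        For readers unfamiliar with it, here is a minimal simulated-annealing sketch for placement (illustrative only: cost is a stand-in for HPWL plus density/congestion penalties, and perturb is a stand-in for a move such as swapping or shifting one macro):

                            import math, random

                            def anneal(placement, cost, perturb, t0=1.0, t_min=1e-4, alpha=0.95, moves=1000):
                                # Classic SA: always accept improving moves; accept worsening moves
                                # with probability exp(-delta / t). The temperature decays, so the
                                # search explores early and refines late.
                                best = cur = placement
                                t = t0
                                while t > t_min:
                                    for _ in range(moves):
                                        cand = perturb(cur)
                                        delta = cost(cand) - cost(cur)
                                        if delta < 0 or random.random() < math.exp(-delta / t):
                                            cur = cand
                                            if cost(cur) < cost(best):
                                                best = cur
                                    t *= alpha  # cooling schedule
                                return best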

                        • AshamedCaptain 9 months ago

                          SA and HPWL are most definitely used to this day for the chips that power the GPUs used for "ML 101". But frankly this has the same value as saying "some sorting algorithm is used somewhere" -- they're well-entrenched basics of the field. To claim that SA produces "bad congestion" is like claiming that using steel pans produces bad cooking -- it needs a shitton of context and qualification, since you cannot generalize this way.

                        • foobarqux 9 months ago

                          I think clicking the time stamp field in the comment will allow you to get a reply box.

                        • sijnapq 9 months ago

                          [flagged]

                        • smokel 9 months ago

                          Wow, you seem to be pretty invested in this topic. Care to clarify?

                          • anna-gabriella 9 months ago

                            Reposting, as someone is flagging my comments.

                            > People in the know are following this topic - big-wow surprise!

                            • anna-gabriella 9 months ago

                              [flagged]

                              • smokel 9 months ago

                                As someone unfamiliar with the topic but trying to piece together information, I have to admit that they do a better job of convincing me of the potential of reinforcement learning in chip design.

                                As with most, if not all, applications of reinforcement learning, there are always traditional algorithms that outperform it. But that does not mean the approach lacks promise, or that it isn't at least interesting.

                                Sure, the paper might have polished up some results, but if that is the case, it is better addressed through the appropriate channels. Engaging in public criticism does not build much trust, at least not with this curious observer.

                                • anna-gabriella 9 months ago

                                  Hm... I also had to piece things together and agree that Google PR is pretty slick. The approach had promise 3-4 years ago, but the science seems clear now. Google is avoiding tests on shared chip designs but claims a breakthrough. There is no breakthrough as everyone is still using Cadence or Synopsys software tools.

                                  Maybe you can make RL work for chip design at some point, but if the paper "polished up some results", why is it still getting any respect? You are right about "appropriate channels" - that's what Chatterjee and Kahng tried, but Chatterjee was fired by Google as a whistleblower (red flag!) while Kahng is getting flak even in these comments (another red flag!). Where would you look next as an independent observer?

                            • djmips 9 months ago

                              > Kahng is the most prominent active researcher in this field. If anyone knows this stuff, it's Kahng.

                                This reads like a textbook example of the logical fallacy of appeal to authority.

                              • ok_dad 9 months ago

                                  For that, the GP would have had to appeal only to the expert's opinion, with no actual evidence; but the GP actually gave a lot of evidence of the researcher's expertise, in the form of peer-reviewed papers and other links. That's not an appeal to authority at all.

                                • anna-gabriella 9 months ago

                                  [flagged]

                              • dogleg77 9 months ago

                                The problem with the Google Nature paper is that its results were not reproduced outside Google. You can attack attempts to reproduce but that only reinforces the point: those claimed results cannot be trusted.

                                Other commenters already addressed the pre-training issue. Please kindly include a link to Kahng's 2023 discussion addressing your complaints. Otherwise, you are unfairly supporting those people you know.

                                Kahng's placer is open-source and was used in the Nature paper. It does not make sense to accuse Kahng of colluding with companies against open-source.

                                • negativeonehalf 9 months ago

                                  For a more thorough discussion on pre-training, see this ISPD 2022 paper by the AlphaChip people: https://dl.acm.org/doi/pdf/10.1145/3505170.3511478

                                  As for external usage of the method - MediaTek is one of the largest chip design companies in the world, and they built on AlphaChip. There's a quote from a MediaTek SVP at the bottom of the GDM blog post:

                                  "AlphaChip's groundbreaking AI approach revolutionizes a key phase of chip design. At MediaTek, we've been pioneering chip design's floorplanning and macro placement by extending this technique in combination with the industry's best practices. This paradigm shift not only enhances design efficiency, but also sets new benchmarks for effectiveness, propelling the industry towards future breakthroughs."

                                  • dogleg77 9 months ago

                                    Science is not done by quotes from VPs, and we don't know how MediaTek used these methods. Also, would you like to hear from VPs who wasted their company resources on Google RL and gave up?

                                    The more marketing claims we see, the less compelling the Google story is.

                                    Your perseverance is as admirable as it is suspicious. You are the lonely voice here defending the Google announcement.

                                    • negativeonehalf 9 months ago

                                      Unfortunately, there aren't publicly available benchmarks for modern technology node sizes, at least not that I'm aware of. Kahng compared on 45nm and 12nm chips, which are very different from a physical design perspective from the 4nm technology node size used by Dimensity 5G, or the sub-10nm technology node size of TPU. "MLContra" used a >100nm technology node size, which is just crazy.

                                      Even if the AlphaChip authors redid Kahng's study properly, this still wouldn't give us useful information -- what matters is AlphaChip's ability to optimize chips in a real-life, production setting, for modern chips, where millions of dollars are on the line.

                                      • dogleg77 9 months ago

                                        Yes, that's unfortunate. In that case, the Google Nature paper doesn't belong in Nature. Google is definitely free to claim that they revolutionized their business, but without scientific evidence this is just marketing.

                                    • sijnapq 9 months ago

                                      [dead]

                                  • sijnapq 9 months ago

                                    [flagged]

                                  • porphyra 9 months ago

                                    The Deepmind chess paper was also criticized for unfair evaluation, as they were using an older version of Stockfish for comparison. Apparently, the gap between AlphaZero and that old version of Stockfish (about 50 elo iirc) was about the same as the gap between consecutive versions of Stockfish.

                                    • lacker 9 months ago

                                      Indeed, six years later, the AlphaZero algorithm is not the best-performing algorithm for chess. LCZero (which uses the AlphaZero algorithm) won some TCECs after it came out, but for the past few years Stockfish (which does not) has been winning consistently.

                                      https://en.wikipedia.org/wiki/Top_Chess_Engine_Championship

                                      So perhaps the critics had a point there.

                                      • vlovich123 9 months ago

                                        There's a lot of co-development happening in the space, where positions are evaluated by Leela and then used to train the NNUE net within Stockfish. And Leela comes from AlphaZero. So basically AlphaZero was directly responsible for opening up new avenues of research that let a more specialized chess engine reach levels it could not have reached without it.

                                        > Generally considered to be the strongest GPU engine, it continues to provide open data which is essential for training our NNUE networks. They released version 0.31.1 of their engine a few weeks ago, check it out!

                                        [1]

                                        I'd say the impact AlphaZero has had on chess and Go can't be overstated, considering it's a general algorithm that at worst is highly competitive with purpose-built engines. And that's ignoring the actual point of why DeepMind is doing any of this, which is AGI (that's why they're not constantly trying to compete with existing engines).

                                        [1] https://lichess.org/@/StockfishNews/blog/stockfish-17-is-her...

                                        • theodorthe5 9 months ago

                                          Do you understand that Stockfish closed the gap only after it, too, started using NNs in its evaluation function? And that was a direct consequence of AlphaZero research.

                                      • Workaccount2 9 months ago

                                        To be fair, some of these criticisms are a few years old. Normally that would be fair game, but the progress in AI has been breakneck. Criticisms of other AI tech from 2021 or 2022 are pretty dated today.

                                        • jeffbee 9 months ago

                                          It certainly looks like the criticism at the end of the rebuttal that DeepMind has abandoned their EDA efforts is a bit stale in this context.

                                          • anna-gabriella 9 months ago

                                            Dated or not, if half of the criticisms are right, the original paper may need to be retracted. No progress on RL for chip design has been published by Google since 2022, as far as I can tell. So it looks like most, if not all, criticisms remain valid.

                                            • joshuamorton 9 months ago

                                              > No progress on RL for chip design was published by Google since 2022, as far as I can tell.

                                              This makes sense given that both authors of the paper left Google in 2022. And one no longer seems to work in the chip design space, plausibly because of the bullying by entrenched folks.

                                              Then again, since rejoining Google the other author has produced around one patent per month in chip design with RL in 2023 and 2024, so perhaps they feel there is a marketable tool here that they don't want to share.

                                              • undefined 9 months ago
                                                [deleted]
                                            • jeffbee 9 months ago

                                              It seems like this is multiple parties pursuing distinct arguments. Is Google saying that this technique is applicable in the way that the rebuttals are saying it is not? When I read the paper and the update I did not feel as though Google claimed that it is general, that you can just rip it off and run it and get a win. They trained it to make TPUs, then they used it to make TPUs. The fact that it doesn't optimize whatever "ibm14" is seems beside the point.

                                              • clickwiseorange 9 months ago

                                                Good question. It's not just ibm14 - everything people outside Google tried shows that RL is much worse than prior methods: NVDLA, BlackParrot, etc. There is a strong possibility that Google pre-trained RL on certain TPU designs, then tested on them, and submitted that to Nature.

                                              • smokel 9 months ago

                                                I don't really understand all the fuss about this particular paper. Nearly all papers on AI techniques are pretty much impossible to reproduce, due to details that the authors don't understand or are trying to cover up.

                                                This is what you get if you make academic researchers compete for citation counts.

                                                Pretraining seems to be an important aspect here, and it makes sense that such pretraining requires good examples, which, unfortunately for the free-lunch crowd, are not available to the public.

                                                That's what you get when you let big companies do fundamental research. Would it be better if the companies did not publish anything about their research at all?

                                                It all feels a bit unproductive to attack one another.

                                                • gabegobblegoldi 9 months ago

                                                  [dead]

                                                • gdiamos 9 months ago

                                                  Criticism is an important part of the scientific process.

                                                  Whichever approach ends up winning is improved by careful evaluation and replication of results.

                                                  • s-macke 9 months ago

                                                    When I first read about AlphaChip yesterday, my first question was how it compares to other optimization algorithms such as genetic algorithms or simulated annealing. Thank you for confirming that my questions are valid.

                                                    • nemonemo 9 months ago

                                                      What is your opinion of the addendum? I think the addendum and the pre-trained checkpoint are the substance of the announcement, and it is surprising to see little mention of those here.

                                                      • dogleg77 9 months ago

                                                        Fair point. I looked at the addendum, and it doesn't really address the critiques. They show an "ablation study" on one additional circuit without describing how big it is, etc. The few numbers they give suggest that this circuit is unlike those in the Nature paper: possibly a newer technology (good) but much smaller in the total length of wires and hence components (bad!). They are trying to debunk the Kahng work, but they aren't addressing many other complaints. Without showing results on public benchmarks in a reproducible way, they are just rehashing some excuses. Maybe their results are no good, maybe they stopped working on this. But with claimed runtimes under 6 hours, they don't need 3 years to add thorough benchmarking results. Anyone who designs competitive chips is doing benchmarking, but Google isn't. Draw your own conclusions.

                                                      • bsder 9 months ago

                                                        EDA claims in the digital domain are fairly easy to evaluate. Look at the picture of the layout.

                                                        When you see a chip that has the datapath identified and laid out properly by a computer algorithm, you've got something. If not, it's vapor.

                                                        So, if your layout still looks like a random rat's nest? Nope.

                                                        If even a random person can see that your layout actually follows the obvious symmetric patterns from bit 0 to bit 63, maybe you've got something worth looking at.

                                                      Analog/RF is a little tougher to evaluate because the smaller number of building blocks means you can use Moore's Law to brute-force things much more exhaustively, but if things "look pretty" then you've got something. If it looks weird, you don't.

                                                        • glitchc 9 months ago

                                                          That doesn't mean the fabricated netlist doesn't work. I'm not supporting Google by any means, but the test should be: Does it fabricate and function as intended? If not, clearly gibberish. If so, we now have computers building computers, which is one step closer to SkyNet. The truth is probably somewhere in between. But even if some of the samples, with the terrible layouts, are actually functional, then we might learn something new. Maybe the gibberish design has reduced crosstalk, which would be fascinating.

                                                      • lordswork 9 months ago

                                                      Some interesting context on this work: two researchers were bullied by a senior researcher (who has since been terminated himself) to the point of leaving Google for Anthropic: https://www.wired.com/story/google-brain-ai-researcher-fired...

                                                        They must feel vindicated by their work turning out to be so fruitful now.

                                                        • gabegobblegoldi 9 months ago

                                                        Vindicated indeed. The senior researcher and others on the project were bullied by the two researchers for raising concerns of fraud [1]. He filed a lawsuit against Google that contains a lot of detailed allegations of fraud [2].

                                                          [1] https://www.theregister.com/AMP/2023/03/27/google_ai_chip_pa...

                                                          [2] https://regmedia.co.uk/2023/03/26/satrajit_vs_google.pdf

                                                          • negativeonehalf 9 months ago

                                                            You are now using multiple new accounts based on the name of one of the authors (Anna Goldie) and her husband (Gabriel). First this one ('gabegobblegoldi'), and then 'anna-gabriella'.

                                                            I think it is time for you to take a deep breath and think about what you are doing and why.

                                                          You seem to be obsessed with the idea that this work is overrated. MediaTek and Google don't think so, and use it in production for their chips, including TPU, Dimensity, Axion, and others. If you're right and they're wrong, using this method loses them money. If it's the other way around, then using this method makes them money.

                                                            Please read PG's post and ask yourself if it applies to you: https://www.paulgraham.com/fh.html

                                                            Chatterjee settled his case. He has moved on. This is not some product being sold -- it is a free, open-source tool. People who see value in it use it; others don't, and so they don't. This is how it always works, and it's fine.

                                                            • anna-gabriella 9 months ago

                                                              [flagged]

                                                              • negativeonehalf 9 months ago

                                                                [flagged]

                                                                • anna-gabriella 9 months ago

                                                                  [flagged]

                                                                  • undefined 9 months ago
                                                                    [deleted]
                                                                    • sijnapq 9 months ago

                                                                      [dead]

                                                                  • sijnapq 9 months ago

                                                                    [dead]

                                                                    • sijnapq 9 months ago

                                                                      [dead]

                                                                  • clickwiseorange 9 months ago

                                                                  It's actually not clear who was bullied. The two researchers ganged up on Chatterjee and got him fired because he used the word "fraud" - wrongful termination of a whistleblower. Google only recently settled with Chatterjee for an undisclosed amount.

                                                                  • hinkley 9 months ago

                                                                    TSMC made a point of calling out that their latest generation of software for automating chip design has features that allow you to select logic designs for TDP over raw speed. I think that’s our answer to keep Dennard scaling alive in spirit if not in body. Speed of light is still going to matter, so physical proximity of communicating components will always matter, but I wonder how many wins this will represent versus avoiding thermal throttling.

                                                                    • therealcamino 9 months ago

                                                                    EDA software has long allowed trading off power, delay, and area during optimization. But TSMC doesn't produce those tools, as far as I'm aware.

                                                                      • hinkley 9 months ago

                                                                        https://www.tsmc.com/english/dedicatedFoundry/oip/eda_allian...

                                                                      They don't produce the tools, but the tools are tailored for them just the same. "We have" doesn't have to mean "we made". They don't say it as such here, but elsewhere they refer to the IP they can make available, which can also be made in-house or cross-licensed and still count as "we have".

                                                                        • therealcamino 9 months ago

                                                                          Used in that sense, the same software could be called Samsung's and Intel's and any other foundry's, since it is qualified for use with those processes as well. But that's not really the main point I was making, which was that there have been 20+ years of cooperative effort in both process design and EDA software to optimize for power and trade it off against other optimization goals. While there are design and packaging approaches that are only coming into use because of "end of Moore's law, what do we do now" reactions, and some may have power implications, power optimization predates that by a good while.

                                                                    • pfisherman 9 months ago

                                                                      Questions for those in the know about chip design. How are they measuring the quality of a chip design? Does the metric that Google is reporting make sense? Or is it just something to make themselves look good?

                                                                      Without knowing much, my guess is that “quality” of a chip design is multifaceted and heavily dependent on the use case. That is the ideal chip for a data center would look very different from those for a mobile phone camera or automobile.

                                                                    So, again: what does "better" mean in the context of this particular problem / task?

                                                                      • Drunk_Engineer 9 months ago

                                                                        I have not read the latest paper, but their previous work was really unclear about metrics being used. Researchers trying to replicate results had a hard time getting reliable details/benchmarks out of Google. Also, my recollection is that Google did not even compute timing, just wirelength and congestion; i.e. extremely primitive metrics.

                                                                        Floorplanning/placement/synthesis is a billion dollar industry, so if their approach were really revolutionary they would be selling the technology, not wasting their time writing blog posts about it.

                                                                        • rossjudson 9 months ago

                                                                          Like when Google wasted its time writing publicly about Spanner?

                                                                          https://research.google/pubs/spanner-googles-globally-distri...

                                                                          or Bigtable?

                                                                          https://research.google/pubs/bigtable-a-distributed-storage-...

                                                                          or GFS?

                                                                          or MapReduce?

                                                                          or Borg?

                                                                          or...I think you get the idea.

                                                                          • thenoblesunfish 9 months ago

                                                                            I am not sure these publications were intended to generate sales of these technologies. My assumption is that they mostly help the company in terms of recruitment. This lets potential employees see cool stuff Google is doing, and see them as an industry leader.

                                                                            • vlovich123 9 months ago

                                                                            Spanner is literally a Google Cloud product you can buy, never mind that it underpins a good amount of Google tech internally. The same is true of the other stuff. Dismissing it as a recruitment tool indicates you haven't worked at Google and don't really know much about their product lines.

                                                                              • ebalit 9 months ago

                                                                                He didn't say that Spanner is only a recruitment tool but that the blog posts about Spanner (and other core technologies of Google) might be.

                                                                                • vlovich123 9 months ago

                                                                                More people see the blog posts, as they're a gentler introduction than the paper itself. Sure, they might generate interest in Google, but they also generate interest in looking further into the research. They are not for selling the tech, but I'm not sure the impact is just recruitment, even if that's how Google justified the work to itself.

                                                                            • bushbaba 9 months ago

                                                                            The Spanner research paper was in 2012. Bigtable was in 2006. GFS in 2003. The last decade has been a 'lost decade' for Google. Not much innovation, to be honest.

                                                                              • bombita 9 months ago

                                                                              "Attention Is All You Need" is from 2017... https://arxiv.org/abs/1706.03762

                                                                                • Kubuxu 9 months ago

                                                                                They thought it was a dead end, that is why they released it :P

                                                                            • IshKebab 9 months ago

                                                                              > Floorplanning/placement/synthesis is a billion dollar industry

                                                                              Maybe all together, but I don't think automatic placement algorithms are a billion dollar industry. There's so much more to it than that.

                                                                              • Drunk_Engineer 9 months ago

                                                                                Yes in combination. Customers generally buy these tools as a package deal. If the placer/floorplanner blows everything else out of the water, then a CAD vendor can upsell a lot of related tools.

                                                                              • negativeonehalf 9 months ago

                                                                                The original paper reports P&R metrics (WNS, TNS, area, power, wirelength, horizontal congestion, vertical congestion) - https://www.nature.com/articles/s41586-021-03544-w

                                                                                (no paywall): https://www.cl.cam.ac.uk/~ey204/teaching/ACS/R244_2021_2022/...

                                                                                • Drunk_Engineer 9 months ago

                                                                                  From what I saw in the rebuttal papers, the Google cost-function is wirelength based. You can still get good TNS from that if your timing is very simplistic -- or if you choose your benchmark carefully.

                                                                                  • negativeonehalf 9 months ago

                                                                                    They optimize using a fast heuristic based on wirelength, congestion, and density, but they evaluate with full P&R. It is definitely interesting that they get good timing without explicitly including it in their reward function!
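
                                                                                A rough sketch of the kind of proxy objective being described, with illustrative placeholder weights and helper functions (not the paper's actual formulation):

                                                                                    def proxy_reward(placement, wirelength, congestion, density, lam=0.01, gam=0.01):
                                                                                        # Negative weighted sum of the cheap proxy metrics; the expensive
                                                                                        # full-P&R metrics (WNS/TNS, power, etc.) are measured afterwards.
                                                                                        # lam and gam are illustrative weights, not the paper's values.
                                                                                        return -(wirelength(placement)
                                                                                                 + lam * congestion(placement)
                                                                                                 + gam * density(placement))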

                                                                                    • gabegobblegoldi 9 months ago

                                                                                      Interesting == Suspicious? I think this is a big red flag to those in the know.

                                                                                      • ithkuil 9 months ago

                                                                                        Yeah; it means the heuristic they use is a good one

                                                                                      • clickwiseorange 9 months ago

                                                                                        The odd thing is that they don't compute timing in RL, but claim that somehow TNS and WNS improved. Does anyone believe this? With five circuits and three wins, the results are a coin toss.

                                                                                      • sijnapq 9 months ago

                                                                                        [dead]

                                                                                    • q3k 9 months ago

                                                                                      This is just floorplanning, which is a problem with fairly well defined quality metrics (max speed and chip area used).

                                                                                      • Drunk_Engineer 9 months ago

                                                                                        Oh man, if only it were that simple. A floorplanner has to guestimate what the P&R tools are going to do with the initial layout. That can be very hard to predict -- even if the floorplanner and P&R tool are from the same vendor.

                                                                                    • thesz 9 months ago

                                                                                    Eurisko [1], if I remember correctly, was once used to perform a placement-and-route task and was pretty good at it.

                                                                                    [1] https://en.wikipedia.org/wiki/Eurisko

                                                                                    What's more, Eurisko was then used to design a game fleet of battle spaceships for Traveller TCS. And Eurisko used symmetry-based placement, learned from VLSI design, in the design of that spaceship fleet.

                                                                                    Can AlphaChip's heuristics be used anywhere else?

                                                                                      • gabegobblegoldi 9 months ago

                                                                                        Doesn’t look like it. In fact the original paper claimed that their RL method could be used for all sorts of combinatorial optimization problems. Yet they chose an obscure problem in chip design and showed their results on proprietary data instead of standard public benchmarks.

                                                                                      Instead they could have demonstrated their amazing method on any number of standard NP-hard optimization problems, e.g. traveling salesman, bin packing, ILP, etc., where we can generate tons of examples and easily verify whether or not it produces better results than other solvers.

                                                                                        This is why many in the chip design and optimization community felt that the paper was suspicious. Even with this addendum they adamantly refuse to share any results that can be independently verified.

                                                                                        • AshamedCaptain 9 months ago

                                                                                          > Yet they chose an obscure problem in chip design

                                                                                          It is not obscure (in chip design). If anything it is one of the most easily reachable problems. Almost every other PhD student in the field has implemented a macro placer, even if just for fun, and there are frequent academic competitions. A lot of design houses also roll their own macro placers since it's not a difficult problem and generally adding a bit of knowledge of your design style can help you gain an extra % over the generic commercial tools.

                                                                                          It does not surprise me at all that they decided to start with this for their foray into chip EDA. It's the minimum effort route.

                                                                                          • gabegobblegoldi 9 months ago

                                                                                            Sorry. I meant obscure relative to the large space of combinatorial optimization problems not just chip design.

                                                                                            Most design houses don’t write their own macro placers but customize commercial flows for their designs.

                                                                                            The problem with macro placement as an RL technology demonstrator is that to evaluate quality you need to go through large parts of the design flow which involves using other commercial tools. This makes it incredibly hard to evaluate superiority since all those steps and tools add noise.

                                                                                            Easier problems would have been to use RL to minimize the number of gates in a logic circuit or just focus on placement with half perimeter wirelength (I think this is what you mean with your grad student example). Essentially solving point problems in the design flow and evaluating quality improvements locally.

                                                                                            They evaluated quality globally and only globally and that destroys credibility in this business due to the noise involved unless you have lots of examples, can show statistical significance, and (unfortunately for the authors) also local improvements.

                                                                                            That’s what the follow on studies did and that’s why the community has lost faith in this particular algorithm.

                                                                                            • AshamedCaptain 9 months ago

                                                                                              > Most design houses don’t write their own macro placers but customize commercial flows for their designs.

                                                                                            Most I don't know, but all the mid-to-large ones have automated macro placers. Obviously, the output is introduced into the commercial flow, generally by setting placement constraints. The larger houses go much further and may even override specific parts of the flow, but not basing it on a commercial flow is out of the question right now.

                                                                                              > The problem with macro placement as an RL technology demonstrator is that to evaluate quality you need to go through large parts of the design flow which involves using other commercial tools.

                                                                                            Not really, no more than for any other optimization, such as the frontend work I'm more familiar with. If you don't want to go through the full design flow (which I agree introduces noise more than anything else), then benchmark your floorplans on some easily calculable metric (e.g., HPWL). Likewise, if you want to test the quality of some logic simplification, _in theory_ you'd also have to go through the entire flow (backend included), but no one does that; you just evaluate some easily calculable metric, e.g. number of gates. These distinctions are traditional more than anything else.

                                                                                            Academic macro placers generally have limited access to commercial flows (due either to licensing issues or computing resource availability), so it is rather common to benchmark them on other metrics. Google's paper tried to be too smart for its own good, and is therefore incomparable to anything academic.

                                                                                              • gabegobblegoldi 9 months ago

                                                                                                Thanks.

                                                                                      • AshamedCaptain 9 months ago

                                                                                        What is Google doing here? At best, the quality of their "computer chip design" work can be described as "controversial" https://spectrum.ieee.org/chip-design-controversy . What is there to gain by just making a PR now without doing anything new?

                                                                                        • negativeonehalf 9 months ago

                                                                                          In the blog post, they announce MediaTek's widespread usage, the deployment in multiple generations of TPU with increasing performance each generation, Axion, etc.

                                                                                          Chips designed with the help of AlphaChip are in datacenters and Samsung phones, right now. That's pretty neat!

                                                                                          • sijnapq 9 months ago

                                                                                            [dead]

                                                                                        • yeahwhatever10 9 months ago

                                                                                          Why do they keep saying "superhuman"? Algorithms are used for these tasks, humans aren't laying out trillions of transistors by hand.

                                                                                          • fph 9 months ago

                                                                                            My state-of-art bubblesort implementation is also superhuman at sorting numbers.

                                                                                            • xanderlewis 9 months ago

                                                                                              Nice. Do you offer API access for a monthly fee?

                                                                                              • int0x29 9 months ago

                                                                                                  I'll need seven 5-gigawatt datacenters in the middle of major urban areas or we might lose the Bubble Sort race with the Chinese.

                                                                                                • dgacmu 9 months ago

                                                                                                  Surely you'll be able to reduce this by getting TSMC to build new fabs to construct your new Bubble Sort Processors (BSPs).

                                                                                                  • qingcharles 9 months ago

                                                                                                    I'll give you US$7Tn in investment. Just don't ask where it's coming from.

                                                                                                    • gattr 9 months ago

                                                                                                      Surely a 1.21-GW datacenter would suffice!

                                                                                                      • therein 9 months ago

                                                                                                          Have we decided when we are deprecating it? I'm already cultivating another team in a remote location to work on a competing product that we will fold into Google Cloud a month before deprecating this one.

                                                                                                    • HPsquared 9 months ago

                                                                                                      Nice. Still true though! We are in the bubble sort era of AI.

                                                                                                      • kevindamm 9 months ago

                                                                                                        When we get better quantum computers we can start using spaghetti sort.

                                                                                                    • jeffbee 9 months ago

                                                                                                      This is floorplanning the blocks, not every feature. We are talking dozens to hundreds of blocks, not billions-trillions of gates and wires.

                                                                                                      I assume that the human benchmark is a human using existing EDA tools, not a guy with a pocket protector and a roll of tape.

                                                                                                      • yeahwhatever10 9 months ago

                                                                                                        Floorplanning algorithms and solvers already exist https://limsk.ece.gatech.edu/course/ece6133/slides/floorplan...

                                                                                                        • jeffbee 9 months ago

                                                                                                          The original paper from DeepMind evaluates what they are now calling AlphaChip versus existing optimizers, including simulated annealing. They conclude that AlphaChip outperforms them with much less compute and real time.

                                                                                                          https://www.cl.cam.ac.uk/~ey204/teaching/ACS/R244_2021_2022/...
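
                                                                                                            For a sense of what the simulated-annealing baseline in that comparison looks like mechanically: perturb a placement, keep improving moves, and accept worsening moves with a temperature-dependent probability. A toy sketch (the macros, nets, grid, and cooling schedule below are made up for illustration, not taken from the paper):

                                                                                                                import math, random

                                                                                                                # Toy simulated-annealing macro placer (illustrative only; the macros, nets,
                                                                                                                # grid, and cooling schedule are invented, not taken from any paper).
                                                                                                                MACROS = ["m0", "m1", "m2", "m3"]
                                                                                                                NETS = [("m0", "m1"), ("m1", "m2"), ("m2", "m3"), ("m3", "m0")]
                                                                                                                GRID = 16

                                                                                                                def wirelength(pos):
                                                                                                                    # For two-pin nets, HPWL reduces to the Manhattan distance between the pins.
                                                                                                                    return sum(abs(pos[a][0] - pos[b][0]) + abs(pos[a][1] - pos[b][1]) for a, b in NETS)

                                                                                                                random.seed(0)
                                                                                                                pos = {m: (random.randrange(GRID), random.randrange(GRID)) for m in MACROS}
                                                                                                                temp = 10.0
                                                                                                                for _ in range(20000):
                                                                                                                    m = random.choice(MACROS)
                                                                                                                    old_xy, old_cost = pos[m], wirelength(pos)
                                                                                                                    pos[m] = (random.randrange(GRID), random.randrange(GRID))  # propose a random move
                                                                                                                    delta = wirelength(pos) - old_cost
                                                                                                                    # Keep improving moves; keep worsening moves with Boltzmann probability.
                                                                                                                    if delta > 0 and random.random() >= math.exp(-delta / temp):
                                                                                                                        pos[m] = old_xy  # reject the move
                                                                                                                    temp *= 0.9995  # cool down
                                                                                                                print(wirelength(pos), pos)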

                                                                                                          • hulitu 9 months ago

                                                                                                            > They conclude that AlphaChip outperforms them with much less compute and real time.

                                                                                                            Of course they do. I'm waiting for their products.

                                                                                                            • foobarian 9 months ago

                                                                                                              Randomized algorithms strike again!

                                                                                                              • sudosysgen 9 months ago

                                                                                                                  This is more amortized optimization/reinforcement learning than randomized algorithms.

                                                                                                        • thomasahle 9 months ago

                                                                                                                Believe it or not, there was a time when algorithms were worse than humans at laying out transistors, in particular at the higher-level design decisions.

                                                                                                          • justsid 9 months ago

                                                                                                                  That’s somewhat still the case; humans could do a much better job at efficient layout. The problem is that humans don’t scale as well; laying out billions of transistors is hard for humans. But computers can do it if you forgo some efficiency by switching to standard cells and then throw compute at the problem.

                                                                                                          • epistasis 9 months ago

                                                                                                            Google is good at many things, but perhaps their strongest skill is media positioning.

                                                                                                            • jonas21 9 months ago

                                                                                                              I feel like they're particularly bad at this, especially compared to other large companies.

                                                                                                              • pinewurst 9 months ago

                                                                                                                Familiarity breeds contempt. They've been pushing the Google==Superhuman thing since the Internet Boom with declining efficacy.

                                                                                                                • undefined 9 months ago
                                                                                                                  [deleted]
                                                                                                              • lordswork 9 months ago

                                                                                                                The media hates Google.

                                                                                                                • epistasis 9 months ago

                                                                                                                        It's a love/hate relationship. Which benefits Google and the media greatly.

                                                                                                              • deelowe 9 months ago

                                                                                                                I read the paper. Superhuman is a metric they defined in the paper which has to do with how long it takes a human to do certain tasks.

                                                                                                                • anna-gabriella 9 months ago

                                                                                                                  Does this make any sense, really? - Define some common words and then let the media run wild with them. How about we redefine "better" and "revolutionize"? Oh, wait, I think people are doing that already...

                                                                                                                • negativeonehalf 9 months ago

                                                                                                                  Prior to AlphaChip, macro placement was done manually by human engineers in any production setting. Prior algorithmic methods especially struggled to manage congestion, resulting in chips that weren't manufacturable.

                                                                                                                  • AshamedCaptain 9 months ago

                                                                                                                    > macro placement was done manually by human engineers in any production setting

                                                                                                                    To quote certain popular TV series .... Sorry, are you from the past? Do your "production" chips only have a couple dozen macros or what?

                                                                                                                  • jayd16 9 months ago

                                                                                                                    "superhuman or comparable"

                                                                                                                    What nonsense! XD

                                                                                                                  • Upvoter33 9 months ago

                                                                                                                          To me, there is an underlying issue: why are so many DeepX papers being sent to Nature, instead of appropriate CS forums? If you are doing better work in chip design, send it to ISPD or ISCA or whatever, and then you will get the types of reviews needed for this work. I have no idea what Nature does with a paper like this.

                                                                                                                    • negativeonehalf 9 months ago

                                                                                                                      Chips are the limiting factor for AI, and now we have AIs making chips better than human engineers. This feels like an infinite compute cheat code, or at least a way to get us very, very quickly to the physical optimum.

                                                                                                                      • pptr 9 months ago

                                                                                                                        It's 6% shorter wire length. Hardly an infinite compute glitch.

                                                                                                                        • negativeonehalf 9 months ago

                                                                                                                          6% is just the latest one - this is a real-deal engineering task in the chip design process, that an AI can do better than a human expert, and the gap is growing with time. I'm sure there's a limit, but we don't know what it is yet, especially as they hand over more of the chip design process to AI.

                                                                                                                      • cobrabyte 9 months ago

                                                                                                                        I'd love a tool like this for PCB design/layout

                                                                                                                        • onjectic 9 months ago

                                                                                                                          First thing my mind went to as well, I’m sure this is already being worked on, I think it would be more impactful than even this.

                                                                                                                          • bgnn 9 months ago

                                                                                                                            why do you think that?

                                                                                                                            • bittwiddle 9 months ago

                                                                                                                              Far more people / companies are designing PCBs than there are designing custom chips.

                                                                                                                              • foota 9 months ago

                                                                                                                                I think the real value would be in ease of use. I imagine the top N chip creators represent a fair bit of the marginal value in pushing the state of the art forward. E.g., for hobbyists or small shops, there's likely not much value in tiny marginal improvements, but for the big ones it's worth the investment.

                                                                                                                          • dsv3099i 9 months ago
                                                                                                                          • dreamcompiler 9 months ago

                                                                                                                            Looks like this is only about placement. I wonder if it can be applied to routing?

                                                                                                                            • amelius 9 months ago

                                                                                                                              Exactly what I was thinking.

                                                                                                                              Also: when is this coming to KiCad? :)

                                                                                                                              PS: It would also be nice to apply a similar algorithm to graph drawing (e.g. trying to optimize for human readability instead of electrical performance).

                                                                                                                              • tdullien 9 months ago

                                                                                                                                The issue is that in order to optimize for human readability you'll need a huge number of human evaluations of graphs?

                                                                                                                                • amelius 9 months ago

                                                                                                                                  Maybe start with minimization of some metric based on number of edge crossings, edge lengths and edge bends?
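
                                                                                                                              A crude version of that metric is easy to write down; the sketch below scores a layout by counting straight-line edge crossings plus total edge length (weights and example data are arbitrary, just to illustrate the idea):

                                                                                                                                  from itertools import combinations

                                                                                                                                  # Crude graph-drawing "readability" cost: weighted sum of straight-line edge
                                                                                                                                  # crossings and total edge length. Weights and example data are arbitrary.
                                                                                                                                  def _orient(o, a, b):
                                                                                                                                      # Sign of the cross product (a - o) x (b - o).
                                                                                                                                      return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

                                                                                                                                  def crosses(p1, p2, p3, p4):
                                                                                                                                      # True if the open segments p1-p2 and p3-p4 properly intersect.
                                                                                                                                      return (_orient(p1, p2, p3) * _orient(p1, p2, p4) < 0 and
                                                                                                                                              _orient(p3, p4, p1) * _orient(p3, p4, p2) < 0)

                                                                                                                                  def readability_cost(pos, edges, w_cross=10.0, w_len=1.0):
                                                                                                                                      length = sum(((pos[a][0] - pos[b][0]) ** 2 + (pos[a][1] - pos[b][1]) ** 2) ** 0.5
                                                                                                                                                   for a, b in edges)
                                                                                                                                      crossings = sum(crosses(pos[a], pos[b], pos[c], pos[d])
                                                                                                                                                      for (a, b), (c, d) in combinations(edges, 2)
                                                                                                                                                      if len({a, b, c, d}) == 4)  # skip edges sharing a node
                                                                                                                                      return w_cross * crossings + w_len * length

                                                                                                                                  layout = {"a": (0, 0), "b": (2, 2), "c": (0, 2), "d": (2, 0)}
                                                                                                                                  print(readability_cost(layout, [("a", "b"), ("c", "d")]))  # one crossing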

                                                                                                                            • ilaksh 9 months ago

                                                                                                                              How far are we from memory-based computing going from research into competitive products? I get the impression that we are already well past the point where it makes sense to invest very aggressively to scale up experiments with things like memristors. Because they are talking about how many new nuclear reactors they are going to need just for the AI datacenters.

                                                                                                                              • mikewarot 9 months ago

                                                                                                                                The cognitive mismatch between Von Neumann's folly and other compute architectures is vast. He slowed down the ENIAC by 66% when he got ahold of it.

                                                                                                                                We're in the timeline that took the wrong path. The other world has isolinear memory, which can be used for compute, or as memory, down to the LUT level. Everything runs at a consistent speed, and faulty LUTs can be routed around easily.

                                                                                                                                • sroussey 9 months ago

                                                                                                                                  The problem is that the competition (our current von neumann architecture) has billions of dollars of R&D per year invested.

                                                                                                                                  Better architectures without the yearly investment train will no longer be better quite quickly.

                                                                                                                                  You would need to be 100x to 1000x better in order to pull the investment train onto your tracks.

                                                                                                                                  Doing that has been impossible for decades.

                                                                                                                                  Even so, I think we will see such a change in my lifetime.

                                                                                                                                  AI could be that use case that has a strong enough demand pull to make it happen.

                                                                                                                                  We will see.

                                                                                                                                  • therealcamino 9 months ago

                                                                                                                                    If you don't worry about the programming model, it's pretty easy to be way better than existing methodologies in terms of pure compute.

                                                                                                                                    But if you do pay attention to the programming model, they're unusable. You'll see that dozens of these approaches have come and gone, because it's impossible to write software for them.

                                                                                                                                    • sanxiyn 9 months ago

                                                                                                                                      GPGPU is instructive. It is not easy, but possible to write software for it. That's why it succeeded.

                                                                                                                                    • ilaksh 9 months ago

                                                                                                                                      I think it's just ignorance and timidity on the part of investors. Memristor or memory-computing startups are surely the next trend in investing within a few years.

                                                                                                                                      I don't think it's necessarily demand or any particular calculation that makes things happen. I think people including investors are just herd animals. They aren't enthusiastic until they see the herd moving and then they want in.

                                                                                                                                      • foota 9 months ago

                                                                                                                                        I don't think it's ignorant to not invest in something that has a decade long path towards even having a market, much less a large market.

                                                                                                                                        • ilaksh 9 months ago

                                                                                                                                          I have seen at least one experiment running a language model or other neural network on (small scale) memory-based computing substrates. That suggests less than 1-2 years to apply them immediately to existing tasks once they are scaled up in terms of compute capacity.

                                                                                                                                          • sroussey 9 months ago

                                                                                                                                    Many more years than that. And it must be general enough. Otherwise you optimize for A in hardware, and five years later, when the chips are finally in production, A is no longer relevant and everyone has moved to B.

                                                                                                                                            • foota 9 months ago

                                                                                                                                              I would have assumed it would take many years longer than that to scale something like this up, based on how long it takes traditional CPU manufacturers to design state of the art chips and manufacturing processes.

                                                                                                                                      • HPsquared 9 months ago

                                                                                                                                        And think of the embedded applications.

                                                                                                                                      • ninetyninenine 9 months ago

                                                                                                                                        What occupation is there that is purely intellectual that has no chance of an AI ever progressing to a point where it can take it over?

                                                                                                                                        • Zamiel_Snawley 9 months ago

                                                                                                                                          I think only sentimentality can prevent take over by a sufficiently competent AI.

                                                                                                                                          I don’t want art that wasn’t made by a human, no matter how visually stunning or indistinguishable it is.

                                                                                                                                          • ninetyninenine 9 months ago

                                                                                                                                            Discounting fraud... what if the AI produces something genuinely better. Genuinely moving you to tears? What then?

                                                                                                                                            Imagine your favorite movie, the most moving book. You read it, it changed you, then you found out it was an AI that generated it in a mere 10 seconds.

                                                                                                                                      Artificial sentimentality is useless in the face of reality. That human endeavor is simply data points along a multi-dimensional best-fit curve.

                                                                                                                                            • Zamiel_Snawley 9 months ago

                                                                                                                                              That’s a challenging hypothetical.

                                                                                                                                              I think it would feel hollowed out, disingenuous.

                                                                                                                                              It feels too close to being a rat with a dopamine button, meaningless hedonism.

                                                                                                                                        I haven’t thought it through particularly thoroughly though; I’d be interested in hearing other opinions. These philosophical questions quickly approach unanswerable.

                                                                                                                                              • ninetyninenine 9 months ago

                                                                                                                                                >These philosophical questions quickly approach unanswerable.

                                                                                                                                                With the current trendline of AI progress in the last decade the question has a high possibility of being answered by being actualized in reality.

                                                                                                                                                It's not a random question either. With AI quickly entrenching itself into every aspect of human creation from art, music, to chip design, this is all I can think about.

                                                                                                                                          • alexyz12 9 months ago

                                                                                                                                            anything that needs very real-time info. AI's will always be limited by us feeding them info, or them collecting it themselves. But humans can travel to more places than an AI can, until robots are everywhere too I suppose

                                                                                                                                          • mirchiseth 9 months ago

                                                                                                                                            I must be old because first thing I thought reading AlphaChip was why is deepmind talking about chips in DEC Alpha :-) https://en.wikipedia.org/wiki/DEC_Alpha.

                                                                                                                                            • sedatk 9 months ago

                                                                                                                                              I first used Windows NT on a PC with a DEC Alpha AXP CPU.

                                                                                                                                              • lamontcg 9 months ago

                                                                                                                                                I miss Digital Unix, too (I don't really miss the "Tru64" rebrand...)

                                                                                                                                                • kQq9oHeAz6wLLS 9 months ago

                                                                                                                                                  Same!

                                                                                                                                                  • mdtancsa 9 months ago

                                                                                                                                                    haha, same!

                                                                                                                                                  • red75prime 9 months ago

                                                                                                                                                    I hope I'll still be alive when they'll announce AlephZero.

                                                                                                                                                    • QuadrupleA 9 months ago

                                                                                                                                                      How good are TPUs in comparison with state of the art Nvidia datacenter GPUs, or Groq's ASICs? Per watt, per chip, total cost, etc.? Is there any published data?

                                                                                                                                                    • FrustratedMonky 9 months ago

                                                                                                                                          So AI designing its own chips. Now that is moving towards exponential growth. Like at the end of the movie "Colossus".

                                                                                                                                          Forget LLMs. What DeepMind is doing seems more like how an AI will rule in the world: building real-world models and applying game logic like winning.

                                                                                                                                          LLMs will just be the text/voice interface to what DeepMind is building.

                                                                                                                                                      • anna-gabriella 9 months ago

                                                                                                                                                        I can tell you get excited by SciFi, that's where Google's work belongs - people have been unable to reproduce it outside Google by a long shot.

                                                                                                                                                        • FrustratedMonky 9 months ago

                                                                                                                                            AlphaGo was not sci-fi. And that was in 2016.

                                                                                                                                                          Protein Folding? That was against a defined data set and other organizations.

                                                                                                                                            Nobody can reproduce it? Isn't that the definition of a competitive advantage?

                                                                                                                                                          They are building something others can't, and that is bad? That is what companies do.

                                                                                                                                                          • anna-gabriella 9 months ago

                                                                                                                                                            We are discussing AlphaChip in 2024, not AlphaGo from 2016. I don't know much about protein folding (there were some controversies there, but that's not relevant). Neither of these has been related to product claims.

                                                                                                                                                            As for "nobody can re-produce", no, that's not the definition. Imaginary things are not competitive advantage. They are exaggerating, and that's bad. But yeah, that's what companies do, you are right.

                                                                                                                                                            • FrustratedMonky 9 months ago

                                                                                                                                                              "Imaginary things"

                                                                                                                                                              I get the impression you just aren't keeping up with DeepMind.

                                                                                                                                              They have made huge breakthroughs in science, and they publish their results in Nature. Just because the parent company Google had some bad demos doesn't mean it is all bunk.

                                                                                                                                              So I guess if you are of the ilk that just doesn't trust anything anymore, that there are no peer reviews, that all science is a fraud, then I really can't help that.

                                                                                                                                                              • anna-gabriella 9 months ago

                                                                                                                                                                DeepMind made huge breakthroughs, agreed. AlphaGo beat a sitting go champion, which was very cool. AlphaFold solved a large number of proteins with verified results. Are we clear on this? Hope you are taking back your ad hominem.

                                                                                                                                                                The team that did RL for chips work was at GoogleBrain, and you already pointed out that Google had bad demos. The fact that this team was absorbed into DeepMind does not magically rub the successes of DeepMind onto them.

                                                                                                                                                                The RL for chips results were nothing like AlphaGo. Imagine if AlphaGo claimed to beat unknown go players, you would laugh. But the Nature paper on RL for chips claims to outperform unknown chip engineers. Also, imagine if AlphaFold claimed to fold only proprietary proteins. The Nature paper on RL for chips reports results on a small set of proprietary chip blocks (they released one design, and the results are not great on that one). That's where imaginary results come up. One of these things is not like the others.

                                                                                                                                                                • FrustratedMonky 9 months ago

                                                                                                                                                  And AlphaStar. And recently it got a silver medal in the Geometry Olympiad. It didn't beat all humans, but got a silver in one of those tasks that seemingly would remain in the domain of humans for a while, like Go was once considered.

                                                                                                                                                  Really, I wasn't arguing about the chips so much. I mentioned DeepMind and you said I must like sci-fi, so I assumed you were implying that DeepMind's results were not that extraordinary.

                                                                                                                                                                  And, I can't keep up with the internal re-orgs now that DeepMind was merged with the other groups at Google. So Maybe I am assuming too much, if this wasn't the same DeepMind group. -- Though I think when companies merge groups like this, they are definitely hoping some 'magic success rubs off on them'.

                                                                                                                                                  I guess for the chip design, is your argument that it was only compared against a generic human engineer? So if they set up a competition against some humans, would that satisfy your issue with the results?

                                                                                                                                                  So my original, more flippant post was sci-fi; it's just that things are changing fast enough that the lines have blurred, and DeepMind has real results that aren't sci-fi:

                                                                                                                                                  Take games as simplified world models: DeepMind has made a lot more progress in winning games than other companies. Then take some of the other companies that have had breakthroughs in video-to-real-world models, where the video can be broken down into categories, and those can be fed into a 'game' function. Now put that on a loop (like the default mode network in the brain) and in a robot body (so it has an embodied, subjective experience of the world, where there are consequences to actions), and I am making a bit of a sci-fi leap that you can get human behavior. And if they can then make the leap to designing chips, they can reach that hockey stick of increasing intelligence.

                                                                                                                                                  So, I guess I am making a sci-fi leap. But the actual results from DeepMind already seem sci-fi-like and yet are real. So are we really that far away, given that things we thought would take hundreds of years are falling by the wayside?

                                                                                                                                                  OK, I take back the ad hominem, but it is hard to tell on the internet. It seemed like you were questioning verified results, and you must know that there is a large contingent on the internet that casts doubt on all science. Once someone goes down that path it is easier to ignore them.

                                                                                                                                                                • undefined 9 months ago
                                                                                                                                                                  [deleted]
                                                                                                                                                        • ur-whale 9 months ago

                                                                                                                                          Seems to me the article is claiming a lot of things, but is very light on actual comparisons that matter to you and me, namely: how do those fabled AI-designed chips compare to their competition?

                                                                                                                                          For example, how much better are these latest-gen TPUs when compared to Nvidia's equivalent offering?

                                                                                                                                                          • gabegobblegoldi 9 months ago

                                                                                                                                                            Good question. I thought the tpus were a way for Google to apply pricing pressure to nvidia by having an alternative. They are not particularly better (it’s hard to get utilization), and I believe Google continues to be a big buyer of nvidia chips.

                                                                                                                                                          • undefined 9 months ago
                                                                                                                                                            [deleted]
                                                                                                                                                            • colesantiago 9 months ago

                                                                                                                                                              A marvellous achievement from DeepMind as usual, I am quite surprised that Google acquired them for a significant discount of $400M, when I would have expected it to be in the range of $20BN, but then again Deepmind wasn’t making any money back then.

                                                                                                                                                              • dharma1 9 months ago

                                                                                                                                                It was very early. Probably one of their all-time best acquisitions, in addition to YouTube.

                                                                                                                                                Re: using RL and other types of AI assistance for chip design, Nvidia and others are doing this too.

                                                                                                                                                                • sroussey 9 months ago

                                                                                                                                                                  Applied Semantics for $100m which gave them their advertising business seems like their best deal.

                                                                                                                                                                  • hanwenn 9 months ago

                                                                                                                                                                    don't forget Android.

                                                                                                                                                                  • undefined 9 months ago
                                                                                                                                                                    [deleted]
                                                                                                                                                                  • loandbehold 9 months ago

                                                                                                                                                    Every generation of chips is used to design the next generation. That seems to be the root of the exponential growth in Moore's law.

                                                                                                                                                                    • AshamedCaptain 9 months ago

                                                                                                                                                                      I'm only tangential to the area, but my impression over the decades is that what is going to happen is that, eventually, designing the next generation is going to require more resources than the current generation can provide, thereby putting a hard stop at the exponential growth stage.

                                                                                                                                                                      I'd even dare to claim we are already at the point where the growth has stopped, but even then you will only see the effect in a decade or so as there are still many small low-hanging fruits you can fix, but no big improvements.

                                                                                                                                                                      • negativeonehalf 9 months ago

                                                                                                                                                                        Definitely a big part of it. Chips enable better EDA tools, which enable better chips. First it was analytic solvers and simulated annealing, now ML. Exciting times!

                                                                                                                                                                        • bgnn 9 months ago

                                                                                                                                                                          That's wrong. Chip design and Moore's law have nothing to do with each other.

                                                                                                                                                                          • smaddox 9 months ago

                                                                                                                                                                            To clarify what the parent is getting at: Moore's law is an observation about the density (and, really about the cost) of transistors. So it's about the fabrication process, not about the logic design.

                                                                                                                                                                            Practically speaking, though, maintaining Moore's law would have been economically prohibitive if circuit design and layout had not been automated.

                                                                                                                                                                            • bgnn 9 months ago

                                                                                                                                                      That's true. The impact on design is the reverse of the post I replied to, though. Since we got more density, we had more compute available to automate more, which made it economically viable. Every generation had enough compute to design the next generation. Now that device scaling has stagnated, we have more (financially viable) compute available to us than before, relative to design complexity. This is why AI-generated floorplans are becoming viable, I think. I'm not sure it would have been the same if device scaling were still continuing at its peak.

                                                                                                                                                      I want to emphasize the biggest barrier to IC design for outsiders: prohibitively expensive software licenses. IC design software costs are much higher than compute and production costs, and are often a similar order of magnitude to, but definitely higher than, engineer salaries. This is because of the monopoly of the 3 big companies (Synopsys, Cadence and Mentor Graphics). What excites me the most about stuff like the OP isn't the AI; everyone is doing that. It's the promise of more competition and even open-source tool options. In the good old days companies used to have their in-house tools. They were all sacrificed (and pretty much none made open source) because investors thought it wasn't a core business, so it was inefficient. Now even Nvidia or Apple have no alternative.

                                                                                                                                                                        • bankcust08385 9 months ago

                                                                                                                                                                          Technology singularity is around the corner as soon as the chips (mostly) design themselves. There will be a few engineers, zillions of semiskilled maintenance people making a pittance, and most of the world will be underemployed or unemployed. Technical people better understand this and unionize or they will find themselves going the way of piano tuners and Russian physicists. Slow boiling frog...

                                                                                                                                                                          • amelius 9 months ago

                                                                                                                                                                            Can this be abstracted and generalized into a more generally applicable optimization method?

                                                                                                                                                                            • kayson 9 months ago

                                                                                                                                                                              I'm pretty sure Cadence and Synopsys have both released reinforcement-learning-based placing and floor planning tools. How do they compare...?

                                                                                                                                                                              • RicoElectrico 9 months ago

                                                                                                                                                                                Synopsys tools can use ML, but not for the layout itself, rather tuning variables that go into the physical design flow.

                                                                                                                                                                                > Synopsys DSO.ai autonomously explores multiple design spaces to optimize PPA metrics while minimizing tradeoffs for the target application. It uses AI to navigate the design-technology solution space by automatically adjusting or fine-tuning the inputs to the design (e.g., settings, constraints, process, flow, hierarchy, and library) to find the best PPA targets.
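
                                                                                                                                                            Mechanically, that style of tool is a black-box optimizer over flow knobs. A toy sketch of the idea using plain random search (the knob names, run_flow stand-in, and scoring are entirely made up; this is not the interface of DSO.ai or any other commercial tool):

                                                                                                                                                                import random

                                                                                                                                                                # Toy black-box tuner over physical-design flow knobs. `run_flow` stands in for
                                                                                                                                                                # launching the real flow and reading back a PPA score; the knob names and the
                                                                                                                                                                # scoring are hypothetical, not the interface of any commercial tool.
                                                                                                                                                                KNOBS = {
                                                                                                                                                                    "target_density": [0.55, 0.65, 0.75],
                                                                                                                                                                    "clock_uncertainty_ps": [20, 40, 60],
                                                                                                                                                                    "effort": ["medium", "high"],
                                                                                                                                                                }

                                                                                                                                                                def run_flow(settings):
                                                                                                                                                                    # Placeholder cost: pretend lower density and higher effort help.
                                                                                                                                                                    return (settings["target_density"] * 100
                                                                                                                                                                            + settings["clock_uncertainty_ps"] * 0.1
                                                                                                                                                                            + (0 if settings["effort"] == "high" else 5)
                                                                                                                                                                            + random.uniform(0, 1))  # flow noise

                                                                                                                                                                random.seed(1)
                                                                                                                                                                best = None
                                                                                                                                                                for _ in range(50):
                                                                                                                                                                    trial = {k: random.choice(v) for k, v in KNOBS.items()}
                                                                                                                                                                    score = run_flow(trial)
                                                                                                                                                                    if best is None or score < best[0]:
                                                                                                                                                                        best = (score, trial)
                                                                                                                                                                print(best)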

                                                                                                                                                                                • negativeonehalf 9 months ago

                                                                                                                                                                                  Unfortunately, commercial EDA companies generally have restrictive licensing agreements that prohibit direct public comparison.

                                                                                                                                                                                  Still, the fact that Google uses it for TPU is pretty telling - this is a multi-billion dollar, mission-critical chip design effort, and there's no way they'd make TPU worse just to prop up a research paper. MediaTek's production use is also a good indicator.

                                                                                                                                                                                  • hulitu 9 months ago

                                                                                                                                                                                    They don't. You cannot compare reality (Cadence, Synopsys) with hype (Google).

                                                                                                                                                                                    • pelorat 9 months ago

                                                                                                                                                                                      So you're basically saying that Google should have used existing tools to lay out their chip designs, instead of their ML solution, and that these existing tools would have produced even better chips than the ones they are actually manufacturing?

                                                                                                                                                                                      • hulitu 9 months ago

                                                                                                                                                                                        > So you're basically saying that Google should have used existing tools to lay out their chip designs, instead of their ML solution

                                                                                                                                                                                        Did they test their ML solution? With real-world chips? Are there any "benchmarks" that show that their chips perform better?

                                                                                                                                                                                        • dsv3099i 9 months ago

                                                                                                                                                                                          It’s more like no one outside of Google has been able to reproduce Google’s results. And not for lack of trying. So if you’re outside of Google, at this moment, it’s vapor.

                                                                                                                                                                                    • bachback 9 months ago

                                                                                                                                                                                      DeepMind is producing science vapourware while OpenAI is changing the world.

                                                                                                                                                                                        • idunnoman1222 9 months ago

                                                                                                                                                                                          So one other designer plus Google is using AlphaChip for their layouts? Not sure about that title; call me when AMD and Nvidia are using it.

                                                                                                                                                                                          • 7e 9 months ago

                                                                                                                                                                                            Did it, though? Google’s chips still aren’t very good compared with competitors.

                                                                                                                                                                                            • negativeonehalf 9 months ago

                                                                                                                                                                                              There's a lot of... passionate discussion in this thread, but we shouldn't lose sight of the big picture -- Google has used AlphaChip in multiple generations of TPU, their flagship AI accelerator. This is a multi-billion dollar project that is strategically critical for the success of the company. The idea that they're secretly making TPUs worse in order to prop up a research paper is just absurd. Google has even expanded their use of AlphaChip to other chips (e.g. Axion).

                                                                                                                                                                                              Meanwhile, MediaTek built on AlphaChip and is using it widely, and announced that it was used to help design Dimensity 5G (4nm technology node size).

                                                                                                                                                                                              I can understand that, when this open-source method first came out, there were some who were skeptical, but we are way beyond that now -- the evidence is just overwhelming.

                                                                                                                                                                                              I'm going to paste here the quotes from the bottom of the blog post, as it seems like a lot of people have missed them:

                                                                                                                                                                                              “AlphaChip’s groundbreaking AI approach revolutionizes a key phase of chip design. At MediaTek, we’ve been pioneering chip design’s floorplanning and macro placement by extending this technique in combination with the industry’s best practices. This paradigm shift not only enhances design efficiency, but also sets new benchmarks for effectiveness, propelling the industry towards future breakthroughs.” --SR Tsai, Senior Vice President of MediaTek

                                                                                                                                                                                              “AlphaChip has inspired an entirely new line of research on reinforcement learning for chip design, cutting across the design flow from logic synthesis to floor planning, timing optimization and beyond. While the details vary, key ideas in the paper including pretrained agents that help guide online search and graph network based circuit representations continue to influence the field, including my own work on RL for logic synthesis. If not already, this work is poised to be one of the landmark papers in machine learning for hardware design.” --Siddharth Garg, Professor of Electrical and Computer Engineering, NYU

                                                                                                                                                                                              "AlphaChip demonstrates the remarkable transformative potential of Reinforcement Learning (RL) in tackling one of the most complex hardware optimization challenges: chip floorplanning. This research not only extends the application of RL beyond its established success in game-playing scenarios to practical, high-impact industrial challenges, but also establishes a robust baseline environment for benchmarking future advancements at the intersection of AI and full-stack chip design. The work's long-term implications are far-reaching, illustrating how hard engineering tasks can be reframed as new avenues for AI-driven optimization in semiconductor technology." --Vijay Janapa Reddi, John L. Loeb Associate Professor of Engineering and Applied Sciences, Harvard University

                                                                                                                                                                                              “Reinforcement learning has profoundly influenced electronic design automation (EDA), particularly by addressing the challenge of data scarcity in AI-driven methods. Despite obstacles including delayed rewards and limited generalization, research has proven reinforcement learning's capability in complex electronic design automation tasks such as floorplanning. This seminal paper has become a cornerstone in reinforcement learning-electronic design automation research and is frequently cited, including in my own work that received the Best Paper Award at the 2023 ACM Design Automation Conference.” --Professor Sung-Kyu Lim, Georgia Institute of Technology

                                                                                                                                                                                              "There are two major forces that are playing a pivotal role in the modern era: semiconductor chip design and AI. This research charted a new path and demonstrated ideas that enabled the electronic design automation (EDA) community to see the power of AI and reinforcement learning for IC design. It has had a seminal impact in the field of AI for chip design and has been critical in influencing our thinking and efforts around establishing a major research conference like IEEE LLM-Aided Design (LAD) for discussion of such impactful ideas." --Ruchir Puri, Chief Scientist, IBM Research; IBM Fellow

                                                                                                                                                                                              • DrNosferatu 9 months ago

                                                                                                                                                                                                Yet, their “frontier” LLM lags all the others…

                                                                                                                                                                                                • abc-1 9 months ago

                                                                                                                                                                                                  Why aren’t they using this technique to design better transformer architectures or completely novel machine learning architectures in general? Are plain or mostly plain transformers really peak? I find that hard to believe.

                                                                                                                                                                                                  • jebarker 9 months ago

                                                                                                                                                                                    Because chip placement and the design of neural network architectures are entirely different problems, this solution won't magically transfer from one to the other.

                                                                                                                                                                                                    • abc-1 9 months ago

                                                                                                                                                                                                      And AlphaGo is trained to play Go? The point is training a model through self play to build neural network architectures. If it can play Go and architect chip placements, I don’t see why it couldn’t be trained to build novel ML architectures.
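                                                                                                                                                                                      To be concrete about what I mean: self-play doesn't map cleanly onto architecture design (there's no opponent), but RL-driven architecture search is the closest analogue of what I'm proposing. Below is a minimal, made-up sketch of that loop: a controller samples architecture choices, a reward comes back, and REINFORCE nudges the sampling distribution. The choice lists, the fake evaluate() reward, and all constants are invented for illustration.

                                                                                                                                                                                          import math
                                                                                                                                                                                          import random

                                                                                                                                                                                          # Hypothetical per-architecture decisions the controller samples.
                                                                                                                                                                                          CHOICES = {
                                                                                                                                                                                              "num_layers":  [4, 8, 12],
                                                                                                                                                                                              "hidden_size": [256, 512, 1024],
                                                                                                                                                                                              "block_type":  ["attention", "conv", "mlp"],
                                                                                                                                                                                          }

                                                                                                                                                                                          def softmax(xs):
                                                                                                                                                                                              m = max(xs)
                                                                                                                                                                                              exps = [math.exp(x - m) for x in xs]
                                                                                                                                                                                              total = sum(exps)
                                                                                                                                                                                              return [e / total for e in exps]

                                                                                                                                                                                          def evaluate(arch):
                                                                                                                                                                                              # Stand-in for training the sampled architecture and measuring
                                                                                                                                                                                              # validation accuracy (the expensive part in real search).
                                                                                                                                                                                              # Fabricated so the loop is runnable.
                                                                                                                                                                                              base = {"attention": 0.3, "conv": 0.2, "mlp": 0.1}[arch["block_type"]]
                                                                                                                                                                                              return base + arch["num_layers"] / 24 + arch["hidden_size"] / 2048 + random.gauss(0, 0.02)

                                                                                                                                                                                          def search(steps=200, lr=0.1):
                                                                                                                                                                                              # One logit vector per decision; REINFORCE pushes probability
                                                                                                                                                                                              # toward choices that beat a running reward baseline.
                                                                                                                                                                                              logits = {k: [0.0] * len(v) for k, v in CHOICES.items()}
                                                                                                                                                                                              baseline = 0.0
                                                                                                                                                                                              for _ in range(steps):
                                                                                                                                                                                                  arch, picked = {}, {}
                                                                                                                                                                                                  for k, options in CHOICES.items():
                                                                                                                                                                                                      probs = softmax(logits[k])
                                                                                                                                                                                                      i = random.choices(range(len(options)), weights=probs)[0]
                                                                                                                                                                                                      arch[k], picked[k] = options[i], i
                                                                                                                                                                                                  reward = evaluate(arch)
                                                                                                                                                                                                  baseline = 0.9 * baseline + 0.1 * reward
                                                                                                                                                                                                  advantage = reward - baseline
                                                                                                                                                                                                  for k in CHOICES:
                                                                                                                                                                                                      probs = softmax(logits[k])
                                                                                                                                                                                                      for i, p in enumerate(probs):
                                                                                                                                                                                                          grad = (1.0 if i == picked[k] else 0.0) - p  # d(log-prob)/d(logit)
                                                                                                                                                                                                          logits[k][i] += lr * advantage * grad
                                                                                                                                                                                              # Return the most probable choice for each decision.
                                                                                                                                                                                              return {k: CHOICES[k][max(range(len(v)), key=v.__getitem__)] for k, v in logits.items()}

                                                                                                                                                                                      Real architecture-search work replaces the fabricated reward with actual training runs, which is exactly why it is so much more expensive than this sketch suggests.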

                                                                                                                                                                                                      • jebarker 9 months ago

                                                                                                                                                                                        Sure, they could choose to work on that problem. But why do you think that's a more important/worthwhile problem than chip design or any other problem they might choose to work on? My point was that it's not trivial to make self-play work for some other problem, so given all the problems in the world, why did you single out neural network architecture design? Especially since it's not the transformer architecture that is really holding back AI progress.

                                                                                                                                                                                                        • abc-1 9 months ago

                                                                                                                                                                                                          Recursive self improvement

                                                                                                                                                                                                  • mikewarot 9 months ago

                                                                                                                                                                                                    I understand the achievement, but can't square it with my belief that uniform systolic arrays will prove to be the best general purpose compute engine for neural networks. Those are almost trivial to route, by nature.
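                                                                                                                                                                                    To make the "trivial to route" point concrete, here is a purely illustrative toy (mine, not from any real TPU code): a cycle-by-cycle simulation of an output-stationary systolic array doing a matrix multiply. Every processing element reads only from its left and top neighbours, which is why the physical wiring of such arrays is a regular grid and the routing problem is so tame.

                                                                                                                                                                                        def systolic_matmul(A, B):
                                                                                                                                                                                            # Toy output-stationary systolic array computing C = A @ B.
                                                                                                                                                                                            # a_reg/b_reg are per-PE registers; operands hop one PE per
                                                                                                                                                                                            # cycle, rightwards for A and downwards for B.
                                                                                                                                                                                            n, k, m = len(A), len(A[0]), len(B[0])
                                                                                                                                                                                            C = [[0] * m for _ in range(n)]
                                                                                                                                                                                            a_reg = [[0] * m for _ in range(n)]
                                                                                                                                                                                            b_reg = [[0] * m for _ in range(n)]
                                                                                                                                                                                            for t in range(n + m + k):  # enough cycles to drain the pipeline
                                                                                                                                                                                                # Sweep bottom-right to top-left so each PE sees its
                                                                                                                                                                                                # neighbours' values from the previous cycle (registered,
                                                                                                                                                                                                # not combinational).
                                                                                                                                                                                                for i in reversed(range(n)):
                                                                                                                                                                                                    for j in reversed(range(m)):
                                                                                                                                                                                                        a_in = a_reg[i][j - 1] if j > 0 else (A[i][t - i] if 0 <= t - i < k else 0)
                                                                                                                                                                                                        b_in = b_reg[i - 1][j] if i > 0 else (B[t - j][j] if 0 <= t - j < k else 0)
                                                                                                                                                                                                        C[i][j] += a_in * b_in  # accumulate in place (output-stationary)
                                                                                                                                                                                                        a_reg[i][j], b_reg[i][j] = a_in, b_in
                                                                                                                                                                                            return C

                                                                                                                                                                                        # Sanity check: matches an ordinary matrix multiply.
                                                                                                                                                                                        assert systolic_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]) == [[19, 22], [43, 50]]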

                                                                                                                                                                                                    • ilaksh 9 months ago

                                                                                                                                                                                                      Isn't this already the case for large portions of GPUs? Like, many of the blocks would be systolic arrays?

                                                                                                                                                                                                      I think the next step is arrays of memory-based compute.

                                                                                                                                                                                                      • mikewarot 9 months ago

                                                                                                                                                                                        Imagine a bit-level systolic array: just a sea of LUTs, with latches to allow the magic of graph coloring to remove all timing concerns by clocking everything in two phases (a toy sketch of the two-phase idea is below).

                                                                                                                                                                                        GPUs still treat memory as separate from compute; they just have wider bottlenecks than CPUs.
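                                                                                                                                                                                        Here is a minimal, made-up sketch of that two-phase scheme: colour the grid like a checkerboard, and on each phase only cells of one colour update, reading the latched values of their opposite-colour neighbours. Because the two colours never update at the same time, there are no combinational loops or same-phase races to analyse. The grid size, the 4-input LUTs, and the random contents are all invented for illustration.

                                                                                                                                                                                            import random

                                                                                                                                                                                            def make_fabric(n):
                                                                                                                                                                                                # One bit of state per cell plus a 16-entry lookup table over
                                                                                                                                                                                                # the four neighbours' bits (a stand-in for a real LUT).
                                                                                                                                                                                                bits = [[random.randint(0, 1) for _ in range(n)] for _ in range(n)]
                                                                                                                                                                                                luts = [[[random.randint(0, 1) for _ in range(16)] for _ in range(n)] for _ in range(n)]
                                                                                                                                                                                                return bits, luts

                                                                                                                                                                                            def half_cycle(bits, luts, phase):
                                                                                                                                                                                                # Only cells whose checkerboard colour matches `phase` update;
                                                                                                                                                                                                # their four neighbours are always the other colour, so they
                                                                                                                                                                                                # read values latched on the previous half-cycle.
                                                                                                                                                                                                n = len(bits)
                                                                                                                                                                                                nxt = [row[:] for row in bits]
                                                                                                                                                                                                for i in range(n):
                                                                                                                                                                                                    for j in range(n):
                                                                                                                                                                                                        if (i + j) % 2 != phase:
                                                                                                                                                                                                            continue
                                                                                                                                                                                                        up, down = bits[(i - 1) % n][j], bits[(i + 1) % n][j]
                                                                                                                                                                                                        left, right = bits[i][(j - 1) % n], bits[i][(j + 1) % n]
                                                                                                                                                                                                        addr = (up << 3) | (down << 2) | (left << 1) | right
                                                                                                                                                                                                        nxt[i][j] = luts[i][j][addr]
                                                                                                                                                                                                return nxt

                                                                                                                                                                                            bits, luts = make_fabric(8)
                                                                                                                                                                                            for t in range(10):  # alternate the two clock phases
                                                                                                                                                                                                bits = half_cycle(bits, luts, t % 2)

                                                                                                                                                                                        Routing such a fabric is as regular as the systolic case above: every wire goes to an immediate neighbour.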