Battery efficiency comes from a million little optimizations in the technology stack, most of which come down to using the CPU as little as possible. As such, the instruction set architecture and process node aren't usually that important for your battery life.
If you fully load the CPU and calculate how much energy an AI 340 needs to perform a fixed workload and compare that to an M1, you'll probably find similar results, but that only matters for your battery life if you're doing things like Blender renders, big compiles or gaming.
Take for example this battery life gaming benchmark for an M1 Air: https://www.youtube.com/watch?v=jYSMfRKsmOU. 2.5 hours is about what you'd expect from an x86 laptop, possibly even worse than the fw13 you're comparing here. But turn down the settings so that the M1 CPU and GPU are mostly idle, and bam you get 10+ hours.
Another example would be a ~5 year old mobile Qualcomm chip. It's on a worse process node than an AMD AI 340, much much slower and with significantly worse performance per watt, and yet it barely gets hot and sips power.
All that to say: M1 is pretty fast, but the reason the battery life is better has to do with everything other than the CPU cores. That's what AMD and Intel are missing.
> If I open too many tabs in Chrome I can feel the bottom of the laptop getting hot, open a YouTube video and the fans will often spin up.
It's a fairly common issue on Linux to be missing hardware acceleration, especially for video decoding. I had to enable GPU video decoding on my FW16 and haven't noticed the fans on YouTube since.
> All that to say: M1 is pretty fast, but the reason the battery life is better has to do with everything other than the CPU cores. That's what AMD and Intel are missing.
This isn't true. Yes, uncore power consumption is very important, but so is CPU load efficiency. The faster the CPU can finish a task, the faster it can go back to sleep, aka race to sleep. Apple Silicon is 2-4x more efficient than AMD and Intel CPUs under load while also having a higher top-end speed.
Another thing that makes Apple laptops feel way more efficient is that they use a true big.LITTLE design, while AMD and Intel's little cores are actually designed for area efficiency rather than power efficiency. In the case of Intel, they stuff in as many little cores as possible to win MT benchmarks. In real-world applications, the little cores are next to useless because most applications prefer a few fast cores over many slow cores.
A huge reason for the low power usage is the iPhone.
Apple spent years incrementally improving the efficiency and performance of their chips for phones. Intel and AMD were more desktop based, so power efficiency wasn't the goal. When Apple's chips got so good they could transition into laptops, x86 wasn't in the same ballpark.
Also, the iPhone is the most lucrative product of all time (I think) and Apple poured a tonne of that money into R&D and into taking the top engineers from Intel, AMD, and ARM, building one of the best silicon teams.
> All that to say: M1 is pretty fast, but the reason the battery life is better has to do with everything other than the CPU cores. That's what AMD and Intel are missing.
A good demonstration is the Android kernel. By far the biggest difference between it and the stock Linux kernel is power management. Many subsystems down to the process scheduler are modified and tuned to improve battery life.
And the more relevant case for laptops is macOS, which is heavily optimized for battery life and power draw in ways that Linux just isn't, neither is Windows. A lot of the problems here can't actually be fixed by intel, amd, or anyone designing x86 laptops because getting that level of efficiency requires the ability to strongly lead the app developer community. It also requires highly competent operating system developers focusing on the issue for a very long time, and being able to co-design the operating system, firmware and hardware together. Microsoft barely cares about Windows anymore, the Linux guys only care about servers since forever, and that leaves Apple alone in the market. I doubt anything will change anytime soon.
Power efficiency is very important for servers too, for cost instead of battery life. But energy is energy, so I suspect the extra power draw is in userland systems that are specific to the desktop, like desktop environments. Using a simpler desktop environment may therefore be worthwhile.
It's important, but less so relative to performance. Perf/watt thinking has a much longer history in the mobile and laptop spaces. Even in servers, most workloads haven't migrated to ARM.
I used Ubuntu around 2015 - 2018 and got hit with a nasty defect around gnome online accounts integrations (please correct me if the words are wrong here). For some reason, it got stuck in a loop or a bad state on my machine. I have since then decided that I will never add any of my online accounts, Facebook, Google, or anything to Gnome.
> It's a fairly common issue on Linux to be missing hardware acceleration, especially for video decoding.
To be fair, usually Linux itself has hardware acceleration available, but browser vendors tend to disable GPU rendering except on controlled/known perfectly working combinations of OS/hardware/drivers, and they do much less testing on Linux. In most cases you can force-enable GPU rendering in about:config, try it out yourself, and leave it on unless you get recurring crashes.
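For anyone who wants to check: here is roughly what that looks like on a typical setup (assuming an AMD/Intel iGPU with Mesa and Firefox; package names and prefs vary by distro and browser version):

    # check that a VA-API driver is installed and lists decode profiles (H.264/VP9/AV1)
    sudo apt install vainfo mesa-va-drivers   # Debian/Ubuntu package names
    vainfo

    # Firefox: in about:config set
    #   media.ffmpeg.vaapi.enabled = true
    # then the Graphics section of about:support should list HARDWARE_VIDEO_DECODING as available

Chromium-based browsers have their own flags for the same thing, and they change between versions, so check your browser's documentation.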
Hell, Apple CPUs are even optimized for Apple software patterns like Retain/Release calls on objects. It seems if you want optimal performance and power efficiency, you need to own both hardware and software.
Looks like general-purpose CPUs are on the losing side.
Maybe Intel should invent desktop+mobile OS and design bespoke chips for those.
> It seems if you want optimal performance and power efficiency, you need to own both hardware and software.
Does Apple optimize the OS for its chips and vice versa? Yes. However, Apple Silicon hardware is just that good and that far ahead of x86. Here's an M4 Max running macOS running Parallels running Windows compared to the fastest AMD laptop chip: https://browser.geekbench.com/v6/cpu/compare/13494385?baseli...
M4 Max is still faster even with 14 out of 16 possible cores being used. You can't chalk that up to optimizations anymore because Windows has no Apple Silicon optimizations.
I disable CPU turbo boost on Linux. Fans rarely start on the laptop and the system is generally cool. Even working on development and compilation I rarely need the extra perf. For my 10-year-old laptop I cap the max clock to 95% too, to stop the fans from always starting. YMMV
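For reference, this is just a sysfs toggle on Linux; the exact path depends on the cpufreq driver (intel_pstate vs acpi-cpufreq/amd-pstate), so treat these as a sketch:

    # Intel (intel_pstate driver):
    echo 1 | sudo tee /sys/devices/system/cpu/intel_pstate/no_turbo

    # AMD / generic (acpi-cpufreq, recent amd-pstate):
    echo 0 | sudo tee /sys/devices/system/cpu/cpufreq/boost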
Sounds like death by (2^10 - 24) cuts for the x86 architecture.
Turning down the settings will get you a worse experience, especially if you turn them down so far that the CPU and GPU are "mostly idle". Not comparable.
> most of which comes down to using the CPU as little as possible.
At least on mobile platforms Apple advocates the other way, with race to sleep: do the calculation as fast as you can with the powerful cores so that the whole chip can go back to sleep earlier and take naps more often.
Intel promoted the same idea under the name HUGI (Hurry Up and Go Idle) about 15 years ago when ultrabooks were the new hot thing.
But when Apple says it, software devs actually listen.
They're big, expensive chips with a focus on power efficiency. AMD and Intel's chips that are on the big and expensive side tend toward being optimized for higher power ranges, so they don't compete well on efficiency, while their more power efficient chips tend toward being optimized for size/cost.
If you're willing to spend a bunch of die area (which directly translates into cost) you can get good numbers on the other two legs of the Power-Performance-Area triangle. The issue is that the market position of Apple's competitors is such that it doesn't make as much sense for them to make such big and expensive chips (particularly CPU cores) in a mobile-friendly power envelope.
Per core, Apple’s Performance cores are no bigger than AMD’s Zen cores. So it’s a myth that they’re only fast and efficient because they are big.
What makes Apple Silicon chips big is that they bolt a fast GPU onto them. If you include the die of a discrete GPU alongside an x86 chip, the total would be the same size as or bigger than an M-series chip.
You can look at Intel’s Lunar Lake as an example where it’s physically bigger than an M4 but slower in CPU, GPU, NPU and has way worse efficiency.
Another comparison is AMD Strix Halo. Despite being ~1.5x bigger than the M4 Pro, it has worse efficiency, ST performance, and GPU performance. It does have slightly more MT.
Is it not true that the instruction decoder is always active on x86, and is quite complex?
Such a decoder is vastly less sophisticated with AArch64.
That is one obvious architectural drawback for power efficiency: a legacy instruction set with variable word length, two FPUs (x87 and SSE), 16-bit compatibility with segmented memory, and hundreds of otherwise unused opcodes.
How much legacy must Apple implement? Non-kernel AArch32 and Thumb2?
Yet when you put them side by side, Intel and AMD feel so slow and sluggish in multitasking scenarios while the M series feels smooth. And unlike Windows and Linux, you don't feel like it needs to be rebooted every few hours.
I've used macOS and the whole experience is just superior to any Linux distro. And both Windows and Linux are trash at battery life.
Edit: Linux users seem to have their panties in a twist.
I don't think there is a single thing you can point to. But overall Apple's hardware/software is highly optimized, closely knit, and each component is in general the best the industry has to offer. It is sold cheap as they make money on volume and an optimized supply chain.
Framework does not have the volume, it is optimized for modularity, and the software is not as optimized for the hardware.
As a general-purpose computer Apple is impossible to beat, and it will take a paradigm shift for that to change (a completely new platform, similar to the introduction of the smartphone). Framework has its place as a specialized device for people who enjoy flexible hardware and custom operating systems.
When one controls the OS and much of the delivery chain, it is not unthinkable to decide to throw some billions of $$$ at creating a chip optimized to serve exactly your needs.
So this is precisely what Apple did, and we can argue it was long time in the making. The funny part is that nobody expected x86 to make way for ARM chips, but perhaps this was a corporate bias stemming from Intel marketing, which they are arguably very good at.
> It is sold cheap as they make money on volume and an optimized supply chain.
What about all the money that they make from abusive practices like refusing to integrate with competitors' products thus forcing you to buy their ecosystem, phoning home to run any app, high app store fees even on Mac OS, and their massive anti repair shenanigans?
Macs today are not designed to be easily repairable but instead to be lighter and otherwise better integrated - I believe that is a consequence of consumer preferences and not shady business practices.
As for the services - it is a bit off topic as I believe Apple makes a profit on their macs alone ignoring their services business. But in general I have less of a problem with a subscription / fee-driven services business compared to an advertisement-based one. And as for the fee / alternative payment controversy (epic vs apple etc.) this is something that is relevant if you are a big brand that can actually market on your own / build an alternative shop infrastructure. For small time developers the marketing and payment infrastructure the apple app store offers is a bargain.
I am pretty sure it is a consequence of consumer preference. I can see it from my own behaviour - I am a power user of all things computing and it has been decades since I upgraded a harddisk.
MacBooks are among the heaviest laptops you can buy. I think they are doing it for the premium feel - it is extremely sturdy. I recently got some random Lenovo YOGA for Linux to go alongside my MacBook and it weighs less, is as thin and even has a dedicated GPU - while having two user-replaceable M.2 slots. It is also very sturdy, but not as sturdy as MacBooks.
What I am saying is that Apple could for sure fit replaceable drives without any hit to size or weight. But their Mac strategy is to price based on disk size and make repairs expensive so you buy a new machine. I don't complain; it is the reason the cheapest MacBook Air is the best laptop deal.
But let's stop this marketing story that it's their engineering genius and not their market strategy.
> MacBooks are among the heaviest laptops you can buy. I think they are doing it for the premium feel - it is extremely sturdy.
Yes, because of the metal enclosure, while nearly all Windows laptop makers use plastic. Macs are usually the thinnest laptops in their class though.

That's a Chrome problem, especially on extra powerful processors like Strix Halo. Apple is very strict about power consumption in the development of Safari, but Chrome is designed to make use of all unallocated resources. This works great on a desktop computer, making it faster than Safari, but the difference isn't that significant and it results in a lot of power draw on mobile platforms. Many simple web sites will peg a CPU core even when not in focus, and it really adds up with multiple tabs open.
It's made worse on the Strix Halo platform, because it's a performance first design, so there's more resource for Chrome to take advantage of.
The closest browser to Safari that works on Linux is Falkon. Its compatibility is even lower than Safari's, so there are a lot of sites where you can't use it, but on the ones where you can, your battery usage can be an order of magnitude less.
I recommend using Thorium instead of Chrome; it's better but it's still Chromium under the hood, so it doesn't save much power. I use it on pages that refuse to work on anything other than Chromium.
Chrome doesn't let you suspend tabs, and as far as I could find there aren't any plugins to do so; it just kills the process when there aren't enough resources and reloads the page when you return to it. Linux does have the ability to suspend processes, and you can save a lot of battery life, if you suspend Chrome when you aren't using it.
I don't know of any GUI for it, although most window managers make it easy to assign a keyboard shortcut to a command. Whenever you aren't using Chrome but don't want to deal with closing it and re-opening it, run the following command (and ignore the name, it doesn't kill the process):
killall -STOP google-chrome
When you want to go back to using it, run:
killall -CONT google-chrome
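If you do this often, a tiny toggle script is handy to bind to a shortcut. An untested sketch, assuming the process name is google-chrome (it may differ per distro/packaging):

    #!/bin/sh
    # toggle-chrome.sh - freeze Chrome if it's running, thaw it if it's frozen
    if ps -o stat= -C google-chrome | grep -q T; then
        killall -CONT google-chrome   # processes are in stopped (T) state -> resume
    else
        killall -STOP google-chrome   # processes are running -> freeze
    fi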
This works for any application, and the RAM usage will remain the same while suspended, but it won't draw power reading from or writing to RAM, and its CPU usage will drop to zero. The windows will remain open, and the window manager will handle them normally, but what's inside won't update, and clicks won't do anything until resumed.

AFAICT the comparisons to Safari are no longer true:
https://birchtree.me/blog/everyone-says-chrome-devastates-ma...
That might be different on other platforms
I think the GP is talking about linux specifically. On a Mac I can see that Chrome disables unused tabs (mouse over says "Inactive tab, xxx MB freed up")
Like a few other comments have mentioned, AMD's Strix Halo / AI Max 380 and above is the chip family that is closest to what Apple has done with the M series. It has integrated memory and decent GPU. A few iterations of this should be comparable to the M series (and should make local LLMs very feasible, if that is your jam.)
On Cinebench 2025 single threaded, M4 is roughly 4x more efficient and 50% faster than Strix Halo. These numbers can be verified by googling Notebookcheck.
How many iterations to match Apple?
Yes and no. I have a MacBook Pro M4 and a ZBook G1a (AI Max 395+, i.e. Strix Halo).
In day-to-day usage the Strix Halo is significantly faster, especially when large-context LLMs and games are involved - but also in typical stuff like Lightroom (GPU heavy) etc.
On the flip side the M4 battery life is significantly longer (but also the MBP is approx 1/4 heavier).
For what it's worth I also have a T14 with a Snapdragon X Elite, and while its battery is closer to a MBP's, it's just kinda slow and clunky.
So my best machine right now is the x86 one, actually!
> Yes and no. I have a MacBook Pro M4 and a ZBook G1a (AI Max 395+, i.e. Strix Halo).
You're comparing the base M4 to a full-fat Strix Halo that costs nearly $4,000. You can buy the base M4 chip in a Mac Mini for $500 on sale. A better comparison would be the M4 Max at that price.

Here's a comparison I did between Strix Halo, M4 Pro, M4 Max: https://imgur.com/a/yvpEpKF
As you can see, Strix Halo is behind M4 Pro in performance and severely behind in efficiency. In ST, M4 Pro is 3.6x more efficient and 50% faster. It's not even close to the M4 Max.
> (but also the MBP is approx 1/4 heavier)
Because it uses a metal enclosure.

I also have a Strix Halo ZBook G1a and I am quite disappointed in the idle power consumption, as it hovers around 8W.
Adding to that, it is very picky about which power brick it accepts (not every 140W PD-compliant charger works) and the one that comes with the laptop is bulky and heavy. I am used to plugging my laptop into whatever USB-C PD adapter is around, down to 20W phone chargers. Having the ZBook refuse to charge on them is a big downgrade for me.
I am also in search of a good portable brick to replace the 140W one. I found the 100W Anker Prime works well. And surprisingly there is an almost identical 3-port Baseus 100W GaN brick at half the price. For some reason it is hard to come by (they have a few other 100W bricks that are not as portable); I think it might be discontinued.
>How many iterations to match Apple?
Until AMD can build a tailor-made OS for their chips and build their own laptops.
Here's an M4 Max running macOS running Parallels running Windows compared to AMD's very best laptop chip:
https://browser.geekbench.com/v6/cpu/compare/13494385?baseli...
M4 Max is still faster. Note that the M4 Max is only given 14 out of 16 cores, likely reserving 2 of them for macOS.
How do you explain this when Windows has zero Apple Silicon optimizations?
Maybe Geekbench is not a good benchmark?
Geekbench is the closest thing to a good benchmark that's usable across generations and architectures.
Maybe it is? Cinebench favors Apple even more.
GB correlates highly with SPEC.
> It has integrated memory
And it has had that for many years. Even Apple had that with the Apple II.
A lot of insightful comments already, but there are two other tricks I think Apple is using: (1) the laptops can get really hot before the fans turn on audibly and (2) the fans are engineered to be super quiet. So even if they run on low RPM, you won't hear them. This makes the M-series seem even more efficient than they are.
Also, especially the MacBook Pros have really large batteries, on average larger than the competition. This increases the battery runtime.
The MacBook Air doesn't even have a fan. I don't think you could build a fan-less x86 laptop.
Sure you can. There are a bunch listed in this article: https://www.ultrabookreview.com/6520-fanless-ultrabooks/
Fanless x86 desktops are a thing too, in the form of thin clients and small PCs intended for business use. I have a few HP T630s I use as servers (I have used them as desktop PCs too, but my tab-hoarding habit makes them throttle a bit too much for my use - they'd be fine for a lot of people).
Do you have a version of that web page for people who want to run Linux? That'd be particularly helpful.
> I don't think you could build a fan-less x86 laptop.
Sure you can, they’re readily available on the market, though not especially common.
But even performance laptops can often be run without spinning their fans up at all. Right now, the ambient temperature where I live is around 28°, and my four-year-old Ryzen 5800HS laptop hasn’t used its fan all day, though for a lot of that time it will have been helped by a ceiling fan. But even away from a fan for the last half hour, it sits in my lap only warm, not hot. It’s easy enough to give it a load it’ll need to spin the fan up for, but you can also limit it so it will never need its fan. (In summer when the ambient temperature is 10°C higher every day, you’ll want to use its fan even when idling, and it’ll be hard to convince it not to spin them up.)
x86-64 devices that don't even have fans won't ever have such powerful CPUs, and historically they have always been very underpowered. Like only 60% of my 5800HS's single-threaded benchmark score and only 20% of its multithreaded. But at under 20% of the peak power consumption.
Sure, I have one sitting on my desk right now. It uses an Intel Core m3, and it's 7.5 years old, so it can't exactly be described as high performance, but it has a fantastic 3200x1800 screen and 8GB of RAM, and since I do all my number-crunching on remote servers it has been absolutely perfect. Unfortunately, the 7.5-year-old battery no longer lasts the whole day (it'll do more like 2 hours, or 1 hour running Zoom/Teams). It has a nice rigid all-metal construction and no fan. I'm looking around for a replacement but not finding much that makes sense.
You can, the thing is you have to build it out of a solid piece of metal. Either that's patented by Apple or it is too expensive for x86 system builders.
If I recall correctly Apple had to buy enormous numbers of CNC machines in order to build laptops that way. It was considered insane by the industry at the time.
Now it makes complete sense. Sort of like how crowbarring a computer into a laptop form factor was considered insane back in the early 90s.
Yup. The original article is gone, however there is the key excerpt in an old HN thread: https://news.ycombinator.com/item?id=24532257
Apple, unlike most if not all large companies (which are run by MBA beancounter morons), holds insanely large amounts of cash. That is how they can go and buy up entire markets of vendors - CNC mills, TSMC's entire production capacity for a year or two, specialized drills, god knows what else.
They effectively price out all potential competitors at once for years at a time. Even if Microsoft or Samsung would want to compete with Apple and make their own full aluminium cases, LED microdots or whatever - they could not because Apple bought exclusivity rights to the machines necessary.
Of course, there's nothing stopping Microsoft or Samsung to do the same in theory... the problem these companies have is that building the war chest necessary would drag down their stonk price way too much.
One downside of Framework is that they use socketed DDR modules instead of soldered LPDDR. This means you can upgrade or replace the RAM, but it also means the memory is much slower and more power hungry.
It's also probably worth putting the laptop in "efficiency" mode (15W sustained, 25W boost per Framework). The difference in performance should be fairly negligible compared to balanced mode for most tasks and it will use less energy.
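For what it's worth, on Linux that's roughly the following (a sketch: it assumes power-profiles-daemon is installed, ryzenadj support varies by chip/BIOS, and the numbers just mirror Framework's stated 15W/25W figures):

    # via power-profiles-daemon (default on recent Fedora/Ubuntu):
    powerprofilesctl set power-saver

    # or set the package power limits directly with ryzenadj (values in mW):
    sudo ryzenadj --stapm-limit=15000 --slow-limit=15000 --fast-limit=25000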
Hopefully Framework will move to https://en.wikipedia.org/wiki/CAMM_(memory_module) in the future. But it'd have to become something that's widely available and readily purchased.
Isn't Ryzen AI (Strix Point?) using similar non-upgradeable LPDDR?
Framework does not have any design with those LPDDR packages.
https://frame.work/desktop?tab=specs
"LPDDR5x-8000"
Apple tailors their software to run optimally on their hardware. Other OSs have to work on a variety of platforms. Therefore limiting the amount of hardware specific optimizations.
Well I don’t think so.
First, OP is talking about Chrome, which is not Apple software. And I can testify that I observed the same behavior with other software that is really not optimized for macOS, or at all. JetBrains IDEs are fast on M*.
Also, processor manufacturers are contributors to the Linux kernel and have an economic interest in having Linux run as fast as possible on their platforms if they want to sell them to datacenters.
I think it's something else. Probably the unified memory?
Chrome uses tons of APIs from macOS, and all that code is very well optimized by Apple.
I remember disassembling Apple’s memcpy function on ARM64 and being amazed at how much customization they did just for that little function to be as efficient as possible for each length of a (small) memory buffer.
memcpy (and the other string routines) are some of the library functions that most benefit from heavy optimisation and tuning for specific CPUs -- they get hit a lot, and careful adjustment of the code can get major performance wins by ensuring that the full memory bandwidth of the CPU is being used (which may involve using specific load instructions, deciding whether using the simd registers is better or not, and so on). So everybody who cares about performance optimises these routines pretty carefully, regardless of toolchain/OS. For instance the glibc versions are here:
https://github.com/bminor/glibc/tree/master/sysdeps/aarch64/...
and there are five versions specialised for either specific CPU models or for available architecture features.
This argument never passes the sniff test.
You can run Linux on a MacBook Pro and get similar power efficiency.
Or run third party apps on macOS and similarly get good efficiency.
Unfortunately, contrary to popular belief, you cannot run Linux natively on recent MacBooks (M4) today.
Depends what "natively" means. You can virtualize Linux through several means such as Virtual Box.
> You can run Linux on a MacBook Pro and get similar power efficiency.
What? No. Asahi is spectacular for what it accomplished, but battery life is still far worse than macOS.
I am not saying that it is only software. It's everything from hardware to a gazillion optimizations in macOS.
It’s worse at switching power states, but at a given power state it is within the ball park of macOS power use.
The things where it lags are anything that use hardware acceleration or proper lowering to the lower power states.
The fastest and most efficient Windows laptop in the world is an M4 MacBook running Parallels.
How does it compare with VMWare? I’d rather not use Parallels…
edit: whoever downvoted - please explain, what's wrong with preferring VMWare? also, for me, historically (2007-2012), it's been more performant, but didn't use it lately.
Looks about the same between Parallels and VMWare: https://browser.geekbench.com/v6/cpu/compare/13494570?baseli...
Also, here's proof that M4 Max running Parallels is the fastest Windows laptop: https://browser.geekbench.com/v6/cpu/compare/13494385?baseli...
M4 Max is running macOS running Parallels running Windows and is only using 14 out of 16 possible cores and it's still faster than AMD's very best laptop chip.
No, it's not, it's absolutely the hardware. The vertical integration surely doesn't hurt, but third-party software runs very fast and efficient on M-series too, including Asahi Linux.
Does Asahi Linux now run efficiently? I tried it on M1 about two years ago. Battery life was maybe 30% of what you get on macOS.
I think this is partially down to Framework being a very small and new company that doesn't have the resources to make the best use of every last coulomb, rather than an inherent deficiency of x86. The larger companies like Asus and Lenovo are able to build more efficient laptops (at least under Windows), while Apple (having very few product SKUs and full vertical integration) can push things even further.
notebookcheck.com does pretty comprehensive battery and power efficiency testing - not of every single device, but they usually include a pretty good sample of the popular options.
Framework is a bit behind the others in terms of cooling, apparently due to compromises needed to achieve modularity. However, a well-tuned Ryzen U in the latest ThinkPads is not that far from M chips in terms of computing power per Watt according to some benchmarks.
Most Linux distributions are not well tuned, because this is too device-specific. Spending a few minutes writing custom udev rules, with the aid of powertop, can reduce heat and power usage dramatically. Another factor is Safari, which is significantly more efficient than Firefox and Chromium. To counter that, using a barebones setup with few running services can get you quite far. I can get more than 10 hours of battery from a recent ThinkPad.
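If anyone wants to try before writing rules by hand, powertop itself will show and apply most of the relevant toggles (the usual caveat: --auto-tune also enables USB autosuspend, see the comments below):

    sudo powertop               # the "Tunables" tab lists each setting as Good/Bad
    sudo powertop --auto-tune   # applies all of its suggestions in one go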
+1 on powertop, I have used it successfully for tuning old Macs that I have upcycled with Linux and the difference is night and day.
powertop helps a lot, I went from 3-4 hours to 6-7 hours on a ThinkPad. That said, it's not something you would want to bother a regular user with. E.g. enabling powertop's optimizations will enable USB autosuspend, and this adds a delay every darn time you haven't touched your USB keyboard or mouse for a second. So you end up writing udev rules that exclude certain HID devices (example below), or using different settings for when the laptop is on power or not, etc.
These are the kinds of optimizations that macOS does out of the box and you cannot expect most Linux users to do (which is one of the reasons battery life is so bad on Linux out-of-the-box).
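For the curious, the udev rule mentioned above is a one-liner per device. The vendor/product IDs here are just placeholders - grab your own from lsusb:

    # /etc/udev/rules.d/50-keep-hid-awake.rules
    # keep this particular USB receiver always powered (disable runtime autosuspend)
    ACTION=="add", SUBSYSTEM=="usb", ATTR{idVendor}=="046d", ATTR{idProduct}=="c52b", ATTR{power/control}="on"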
I tend to think it's putting the memory on the package. Putting the memory on the package has given the M1 over 400GB/s, which is a good 4x what you get from a usual dual-channel x86-64 CPU, and the latency is half that of going out to a DRAM slot. That is drastic, and I remember when the northbridge was first folded into the CPU by AMD with the Athlon and it had a similarly big improvement in performance. It also reduces power consumption a lot.
The cost is flexibility and I think for now they don't want to move to fixed RAM configurations. The X3D approach from AMD gets a good bunch of the benefits by just putting lots of cache on board.
Apple got a lot of performance out of not a lot of watts.
One other possibility on power saving is the way Apple ramps the clock speed. It's quite slow to increase from its 1GHz idle to 3.2GHz - about 100ms, and it doesn't even start for 40ms. With tiny little bursts of activity like web browsing and such, this slow transition likely saves a lot of power at a cost of absolute responsiveness.
> and the latency is half that of going out to a DRAM slot.
No, it's not. DRAM latency on Apple Silicon is significantly higher than on the desktop, mainly because they use LPDDR which has higher latencies.
I was going to mention this as well.
Source: chipsandcheese.com memory latency graphs
A small reason for less power consumption with on-package RAM is that you don't need active termination, which does use a few watts of power. It isn't the main reason that the Macs use less power, though.
> this slow transition likely saves a lot of power at a cost of absolute responsiveness.
Not necessarily. Running longer at a slower speed may consume more energy overall, which is why "race to sleep" is a thing. Ideally the clock would be completely stopped most of the time. I suspect it's just because Apple are more familiar with their own SoC design and have optimised the frequency control to work with their software.
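A back-of-the-envelope with made-up numbers shows why the fixed platform power matters here:

    Assume the rest of the platform (screen, RAM, uncore) burns 2 W regardless.
    Fast: core at 3 GHz / 1.0 V ~ 3 W, job takes 1 s -> (3 + 2) x 1 = 5 J
    Slow: core at 1 GHz / 0.7 V ~ 0.5 W, job takes 3 s -> (0.5 + 2) x 3 = 7.5 J

The slow run wins on core energy alone (1.5 J vs 3 J) but loses overall, because everything else stays awake three times as long. Which way it goes depends entirely on how big that fixed share is and how quickly the chip can actually drop into a deep idle state.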
Memory bandwidth is not what makes the CPU fast and efficient. The CPU doesn't even have access to the full Apple Silicon bandwidth.
On package memory increases efficiency, not speed.
However, most of the speed and efficiency advantages are in the design.
On the efficiency side, there's a big difference in the OS department. The recently released Lenovo Legion Go S handheld has both SteamOS (which is Arch, btw) and Windows 11 versions, allowing a direct comparison of the efficiency of AMD's Z1E chip under load with a limited TDP. And the difference is huge: with SteamOS the fps is significantly higher and at the same time the battery lasts a lot longer.
Windows does a lot of useless crap in the background that kills battery and slows down user-launched software
I've been thinking a lot about getting something from Framework, as I like their ethos around repairability. However, I currently have an M1 Pro which works just fine, so I've been kicking the can down the road while worrying that it just won't be up to par in terms of what I'm used to from Apple. Not just the processor, but everything. Even in the Intel Mac days, I ended up buying an Asus Zephyrus G14, which had nothing but glowing reviews from everyone. I hated it and sold it within 6 months. There is a level of polish that I haven't seen on any x86 laptop, which makes it really hard for me to venture outside of Apple's sandbox.
I recently upgraded from an M1 MacBook Pro 15", which I was pretty happy with, to the M4 Max 16". I've been extremely impressed with the new laptop. The key metric I use to judge performance is build speed for our main project. It's a thing I do a few dozen times per day. The M1 took about four minutes to run our integration tests. I should add that those tests run in parallel and make heavy use of Docker. There are close to 300 integration tests and a few unit tests. Each of those hits the database, Redis, and Elasticsearch. The M4 dropped that to 40 seconds. Each individual test might take a few seconds. It seems to be benefiting a lot from both the faster CPU with lots of cores and the increased amount of memory and memory bandwidth. Whatever it is, I'm seriously impressed with this machine. It costs a lot new, but on a three-year lease it boils down to about 100 euros per month. Totally worth it for me. And I'm kind of kicking myself for not upgrading earlier.
Before the M1, I was stuck using an Intel Core i5 running Arch Linux. My Intel Mac managed to die months before the M1 came out. Let's just say that the M1 really made me appreciate how stupidly slow that Intel hardware was. I was losing lots of time doing builds. The laptop would be unusable during those builds.
Life is too short for crappy hardware. From a software point of view, I could live with Linux but not with Windows. But the hardware is a show stopper currently. I need something that runs cool and yet does not compromise on performance. And all the rest (non-crappy trackpad, amazingly good screen, cool to the touch, good battery life, etc.). And manages to look good too. I'm not aware of any windows/linux laptop that does not heavily compromise on at least a few of those things. I'm pretty sure I can get a fast laptop. But it'd be hot and loud and have the unusable synaptics trackpad. And a mediocre screen. Etc. In short, I'd be missing my mac.
Apple is showing some confidence by just designing a laptop that isn't even close to being cheap. This thing was well over 4K euros. Worth every penny. There aren't a lot of intel/amd laptops in that price class. Too much penny pinching happening in that world. People think nothing of buying a really expensive car to commute to work. But they'll cut on the thing that they use the whole day when they get there. That makes no sense whatsoever in my view.
Considering the amount of engineering that goes into Apple's laptops, and compared to other professional tools, 4000 EUR is extremely cheap. Other tradespeople have to spend 10x more.
Most manufacturers just don't give a shit. Had the exact same experience with a well-reviewed Acer laptop a while back, ended up getting rid of it a few months in because of constant annoyances, replaced with a MacBook Air that lasted for many years. A few years back, I got one of the popular Asus NUCs that came without networking drivers installed. I'm guessing those were on the CD that came with it, but not particularly helpful on a PC without a CD drive. The same SKU came with a variety of networking hardware from different manufacturers, without any indication of which combination I had, so trial and error it was. Zero chance non-techy people would get either working on their own.
> There is a level of polish
Yeah, those glossy mirror-like displays in which you see yourself much better than the displayed content are polished really well
Having used both types extensively: my Dell matte display diffuses the reflections so badly that you can't see a damn thing. The one that replaced it was even worse.
I’ll take the apple display any day. It’s bright enough to blast through any reflections.
I had a 2020 Zephyrus G14 - also bought it largely because of the reviews.
First two years it was solid, but then weird stuff started happening like the integrated GPU running full throttle at all times and sleep mode meaning "high temperature and fans spinning to do exactly nothing" (that seems to be a Windows problem because my work machine does the same).
Meanwhile the manufacturer, having released a new model, lost interest, so no firmware updates to address those issues.
I currently have the Framework 16 and I'm happy with it, but I wouldn't recommend it by default.
I for one bought it because I tend to damage stuff like screens and ports and it also enables me to have unusual arrangements like a left-handed numpad - not exactly mainstream requirements.
I suspect the majority of people who recommend particular x86 laptops have only had x86 laptops. There’s a lot of disparity in quality between brands and models.
Apple is just off the side somewhere else.
> "There is a level of polish that I haven’t seen on any x86 laptop, which makes it really hard for me to venture outside of Apple’s sandbox."
Hah, it's exactly the other way around for me; I can't stand Apple's hardware. But then again I never bought anything Asus... let alone gamer laptops.
What exactly is wrong with Apple hardware?
For me, the keyboards in the UK have an awful layout.
Not sure why they can follow ANSI in the US but not ISO here. I just have to override the layout and ignore the symbols.
I very much prefer pen-enabled detachables, a much better form factor than the outdated classic laptop, with a focus on general-purpose computing, such as HP's ZBook x2 G4 detachable workstation. The ideal machine would be a second iteration of that design, just updated to be smaller as well as more performant and repairable. Of course that's not gonna happen, as there's, apart from legal issues, no money in it.
Apple on the other hand doesn't offer such machines... actually never has. To me, prizing maintainability, expandability, modularity, etc., their laptops are completely undesirable even within the confines of their outdated form factor; their efficient performance is largely irrelevant, and their tablets are much too enshittified to warrant consideration. And that's before we get into the OS and ecosystem aspects. :)
> using the Framework feels like using an older Intel based Mac
Your memory serves you wrong. The experience with Intel-based Macs was much worse than with recent AMD chips.
I considered getting a personal MBP (I have an M3 from work), but picked up a Framework 13 with the AMD Ryzen 7 7840U. I have Pop!_OS on it, and while it isn't quite as impressive as the MBP, it is radically better than other Windows/Linux laptops I have used lately. Battery life is quite good, ~5hr or so - not quite on par with the MBP, but still good enough that I don't really have any complaints (and being able to upgrade RAM/SSD/even the mobo is worth some tradeoff to me, whereas my employers will just throw my MBP away in a few years).
> "[...] battery life is quite good, ~5hr or so [...]"
You call five hours good?! Damn... For productivity use, I'd never buy anything below shift-endurance (eight hours or more).
Depends on what you do at work, 5 hours of continuous editing video is pretty good.
5 hours seems a lot worse than the ~10 hours I get on my M4 Air.
I get like 3 hours on my MBP when I use it. MacBooks have better runtime only when they are mostly idle, not when you fully load them.
Can confirm, when developing software (a big project at $JOB) getting 3h out of a M3 MBP is a good day. IDE, build, test and crowdstrike are all quite power hungry.
I wonder how much of that is crowdstrike. At $LASTJOB my Mac was constantly chugging due to some mandated security software. Battery life on that computer was always horrible compared to a personal MB w/o it.
Exactly. Antiviruses are evil in this sense - crippling battery life significantly.
Wherever possible, I send “pkill -STOP” to all those processes, and stall them and thus save battery…
> crowdstrike
It is incredible that crowdstrike is still operating as a business.
It is also hard to understand why companies continue to deploy shoddy, malware-like "security" software that decreases reliability while increasing the attack surface.
Basically you need another laptop just to run the "security" software.
Allegedly, CrowdStrike is S-tier EDR. Can't blame security folks for wanting to have it. The performance and battery tax is very real though.
Ever since Crowdstrike fucked up and took out $10 billion worth of Windows PCs with a bad patch, most of the security folks I know have come around to the view that it is an overall liability. Something lighter-touch carries less risk, even if it isn't quite as effective.
I concur.
The only portable M device I heavily used on the go was my iPad Pro.
That thing could survive for over a week if not or lightly used. But as soon as you open Lightroom to process photos, the battery would melt away in an hour or two.
At a certain point it's not like it matters. If you're working for 5 hours, let alone 10, you will almost certainly be able to plug in during that time.
It’s true for me. I need a portable workstation more than a mobile laptop, as long as it survives train travels (most have power outlets now), moving between buildings/rooms or the occasional meeting with a customer +presentation it is enough for me.
But I can imagine some people have different needs and may not have access to (enough) power outlets. Some meeting/conference rooms had only a handful of outlets for dozens of people. Definitely nice to survive light office work for a full working day.
Curious if the suspend / hibernate "just works" when you close the lid?
I feel like I've tried several times to get this working in both Linux and Windows on various laptops and have never actually found a reliable solution (often resulting in having a hot and dead laptop in my backpack).
I have an Intel Framework running Fedora. I have found that Intel's s0 sleep just uses way too much battery. I'd expect that in sleep mode it should last a week and still be above 50% power, but that is definitely not the case.
I ended up moving to hybrid, where it suspends for an hour allowing immediate wake up then hibernates completely. It’s a decent compromise and I’ve never once had an issue with resume from suspend or hibernate, nor have I ever had an issue with it randomly waking up and frying itself in a backpack or unexpectedly having a dead battery.
My work M1 is still superior in this regard but it is an acceptable compromise.
I learned that even though I run Ubuntu, the Arch wiki has good info on the proper commands to run to configure this behavior on my machine.
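For anyone on a systemd distro, the hybrid behaviour described above is roughly this (a sketch: option names are the current systemd ones, and hibernation needs swap at least the size of RAM):

    # /etc/systemd/sleep.conf
    [Sleep]
    HibernateDelaySec=60min

    # /etc/systemd/logind.conf
    [Login]
    HandleLidSwitch=suspend-then-hibernate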
It does! The only thing wasn't working out of the box, so to speak, was the fingerprint reader, I had to do a little config to get it going.
I’m sure it’s great.
As a layman there’s no way I’m running something called “Pop!_OS” versus Mac OS.
How'd you get here - "as a layman"?
Meh, it's kind of a silly name, sure, but it's one of the few distros backed by an actual vendor (System76) who isn't just trying to sucker you into buying something. As a result it has a nice level of polish and function.
I like macOS fine, I have been using Macs since 1984 (though things like SIP grate).
You're missing out. I've daily-driven both; modern macOS feels like a Fisher-Price operating system by comparison.
You can probably install Asahi Linux on that M1 pro and do comparative benchmarks. Does it still feel different? (serious question)
I think it is getting close: [0]
(Edit, I read lower in the thread that the software platform also needs to know how to make efficient use of this performance per watt, ie, by not taking all the watts you can get.)
[0] https://www.phoronix.com/review/ryzen-ai-max-395-9950x-9950x...
I may be out of date or wrong, but I recall when the M1 came out there were some claims that x86 could never catch up, because there is an instruction decoding bottleneck (instructions are all variable size) which the M1 does not have, or can handle in parallel. Because of that bottleneck x86 needs to use other tricks to get speed, and those run hot.
ARM instructions are fixed size, while x86 are variable. This makes a wide decoder fairly trivial for ARM, while it is complex and difficult for x86.
However, this doesn't really hold up as the cause for the difference. The Zen4/5 chips, for example, source the vast majority of their instructions out of their uOp trace cache, where the instructions have already been decoded. This also saves power - even on ARM, decoders take power.
People have been trying to figure out the "secret sauce" since the M chips have been introduced. In my opinion, it's a combination of:
1) The apple engineers did a superb job creating a well balanced architecture
2) Being close to their memory subsystem with lots of bandwidth and deep buffers so they can use it is great. For example, my old M2 Pro macbook has more than twice the memory bandwidth than the current best desktop CPU, the zen5 9950x. That's absurd, but here we are...
3) AMD and Intel heavily bias on the costly side of the watts vs performance curve. Even the compact zen cores are optimized more for area than wattage. I'm curious what a true low power zen core (akin to the apple e cores) would do.
When limited to 5 watts, the Ryzen HX 370 works pretty darn well. In some low-power use cases, my GPD Pocket 4 is more power efficient than my M3 MBA.
We will need some citations on that as the GPD Pocket 4 isn't even the most power efficient pocket pc.
Closest I've seen is an uncited Reddit thread talking about usb c charging draw when running a task, conflating it with power usage.
We are going to need to see some numbers for your claim. That’s not believable.
A 8.8" screen takes a lot less power.
When you say efficiency, I assume you’re factoring in performance of the device as well?
Maybe run Geekbench 6 and see.
How about single-core performance?
Zens don't have a trace cache, just an uop cache.
But is the uOp trace cache free? It surely doesn’t magically decode and put stuff in there without cost
For sure. For what it's worth though, I have run across several references to ARM also implementing uop caches as a power optimization versus just running the decoders, so I'm inclined to say that whatever its cost, it pays for itself. I am not a chip designer though!
They can always catch up, it may just take a while. x86's variable size instructions have performance advantages because they fit in cache better.
ARM has better /security/ though - not only does it have more modern features, but x86's variable-length instructions also mean code can be reinterpreted by jumping into the middle of an instruction.
No one ever said that. The M1 was not the fastest laptop when it was introduced. It was a nice balance of speed/battery life/heat
Backward compatibility.
Intel provides processors for many vendors and many OS. Changing to a new architecture is almost impossible to coordinate. Apple doesn't have this problem.
Actually, in the 90s Intel and Microsoft wanted to move to a RISC architecture but Compaq forced them to stay on x86.
Thanks for the honest review! I have two Intel ThinkPads (2018 and 2020) and I've been eying the Framework laptops for a few years as a potential replacement. It seems they do keep getting better, but I might just wait another year. When will x86 have the "alien technology from the future" moment that M1 users had years ago already?
Macbooks are more like "phone/tablet hardware evolved into desktop" mindset (low power, high performance). x86 hardware is the other way around (high power, we'll see about performance).
That being said, my M2 beats the ... out of my twice-as-expensive work laptop when compiling an Arduino project. Literal jaw drop the first time I compiled on the M2.
Honestly, I have serious FOMO about this. I am never going to run a Mac (or worse: Windows) I'm 100% on Linux, but I seriously hate it that I can't reliably work at a coffee shop for five hours. Not even doing that much other than some music, coding, and a few compiles of golang code.
My Apple friends get 12+ hrs of battery life. I really wish Lenovo+Fedora or whoever would get together and make that possible.
> I'm 100% on Linux, but I seriously hate it that I can't reliably work at a coffee shop for five hours. Not even doing that much other than some music, coding, and a few compiles of golang code.
Don't you drink any coffee in the coffee shop? I hope you do. But, still, being there for /five/ hours is excessive.
> work at a coffee shop
That doesn't sound super secure to me.
> for five hours.
My experience with anything that is not designed to be an office is that it will be uncomfortable in the long run. I can't see myself working for 5 hours in that kind of place.
Also it seems it is quite easily solved with an external battery pack. They may not last 12 hours, but they should last 4 to 6 hours without a charge in power-saving mode.
> I seriously hate it that I can't reliably work at a coffee shop for five hours
just... take your charger...
They’re relatively heavy, take up space and there’s no guarantee there will be an outlet near your table. When connected, the laptop becomes more difficult to move or pack. It’s all doable but also slightly less convenient.
Software.
If you actually benchmark said chips in a computational workload I'd imagine the newer chip should handily beat the old M1.
I find both windows and Linux have questionable power management by default.
On top of that, snappiness/responsiveness has very little to do with the processor and everything to do with the software sitting on top of it.
> If I open too many tabs in Chrome I can feel the bottom of the laptop getting hot, open a YouTube video and the fans will often spin up.
Is that your metric of performance? If so...
$ sudo cpufreq-set -u 50MHz
done!

They are pretty similar when comparing the latest AMD and Apple chips on the same node. Apple's buying power means they get new nodes earlier than AMD, usually by 6-9 months.
Windows on the other hand is horribly optimized, not only for performance, but also for battery life. You see some better results from Linux, but again it takes a while for all of the optimizations to trickle down.
The tight optimization between the chip, operating system, and targeted compilation all come together to make a tightly integrated product. However comparing raw compute, and efficiency, the AMD products tend to match the capacity of any given node.
Does the M series have a flat memory model? If so, I believe that may be the difference. I'm pretty sure the entire x86 family still pages RAM access which (at least) quadruples activity on the various busses and thus generates far more heat and uses more energy.
I'm not aware of any CPU invented since the late eighties that doesn't have paged virtual memory. Am I misunderstanding what you mean? Can you expand on where you are getting the 4x number from?
I doubt any CPU has more levels of address translation, caching, and other layers of memory access indirection than AMD/Intel 64 at this point.
That's an interesting question about the number of levels of address translation. Does anyone have numbers for that, and how much latency and energy an extra layer costs?
x86 has long been the industry standard and can't be removed, but Apple could move away from it because they control both hardware and software.
I always thought it's Apple's on-package DRAM latency that contributes to its speed relative to x86 especially for local LLM (generative but not necessarily training) usage but with the answers here I'm not so sure.
> a number of Dockers containers running simultaneously and I never hear the fans, battery life has taken a bit of a hit but it is still very respectable.
Note those docker containers are running in a linux VM!
Of course they are on Windows (WSL2) as well.
Docker has got to be one of the worst energy consumption offenders, given that it's running in a heavy VM under a non-Linux OS for most developers while people think it's lightweight. It doesn't help on the high-performance side either, esp. ML. Might just be the wrong abstraction, driven by "cloud vendors" for conveniently (for them!) farming overcommitted servers with ill-partitioned, mostly-idle vibe "microservices."
No incentive. x86 users come to the table with a heatsink in one hand and a fan in the other, ready to consume some watts.
How much do you like the rest of the hardware? What price would seem OK for decent GUI software that runs for a long time on battery?
I'm learning x86 in order to build nice software for the Framework 12 with the i3-1315U (Raptor Lake). Going into the optimization manuals for Intel's E-cores (apparently Atom) and AMD's 5c cores. The efficiency cores on the M1 MacBook Pro are awesome. Getting Debian or Ubuntu with KDE to run this on a FW12 will be mind-boggling.
I think the Ryzen ai max 395+ gets really close in terms of performance per watt.
It isn't.
In single threaded CPU performance, M4 Pro is roughly 3.6x more efficient while also being 50% faster.
In my opinion AMD is on a good path to having at least comparable performance to MacBooks by copying Apple's architectural decisions. Unfortunately, their jump onto the latest AI hype train did not suit them well for efficiency: the Ryzen 7840U was significantly more efficient than the Ryzen AI 7 350 [1]
However, with AMD Strix Halo aka the AMD Ryzen AI Max+ 395 (PRO), there are notebooks like the ZBook Ultra G1a and tablets like the Asus ROG Flow Z13 that come close to the MacBook power/performance ratio[2], due to the fact that they use high-bandwidth soldered-on memory, which allows for GPUs with shared VRAM similar to Apple's strategy.
Framework did not manage to put this thing in a notebook yet, but shipped a desktop variant. They also pointed out that there was no way to use LPCAMM2 or any other modular RAM tech with that machine, because it would have slowed it down / increased latencies to an unusable state.
So I'm pretty sure the main reason for Apple's success is the deeply integrated architecture, and I'm hopeful that AMD's next-generation Strix Halo APUs might provide this with higher efficiency, and that Framework adopts these chips in their notebooks. Maybe they just did in the 16?! Let's wait for this announcement: https://www.youtube.com/watch?v=OZRG7Og61mw
Regarding the deeply thought-through integration, there is a story I often tell: Apple used to make iPods. These had support for audio playback control with their headphone remotes (e.g. EarPods), which are still available today. These used a proprietary ultrasonic chirp protocol[3] to identify Apple devices and supported volume control and complex playback control actions. You could even navigate through menus via VoiceOver by long-pressing and then using the volume buttons to navigate. To this day, with their USB-C-to-audio-jack adapters, these still work on nearly every Apple device released after 2013, and the wireless earbuds also support parts of this. Android has tried to copy this tiny little engineering wonder, but to this day they have not managed to get it working[4]. They instead focus on their proprietary "long press should start 'Hey Google'" thing, which is ridiculously hard to intercept/override in officially published Android apps... what a shame ;)
1: https://youtu.be/51W0eq7-xrY?t=773
2: https://youtu.be/oyrAur5yYrA
AMD's Strix Halo is still significantly behind the M4 in performance and efficiency. Not even close.
> If I open too many tabs in Chrome I can feel the bottom of the laptop getting hot, open a YouTube video and the fans will often spin up.
I've got the Framework 13 with the Ryzen 5 7640U and I routinely have dozens of tabs open, including YouTube videos, docker containers, handful of Neovim instances with LSPs and fans or it getting hot have never been a problem (except when I max out the CPU with heavy compilation).
The issue you're seeing isn't because x86 is lacking but something else in your setup.
I don't know, but I suspect the builds of the programs you're using play a huge factor in this. Depending on the Linux distro and package management you're using, you just might not be getting programs that are compiled with the latest x86_64 optimizations.
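One quick way to sanity-check this (assuming glibc 2.33+ and GCC 11+; paths differ per distro):

    # the dynamic loader reports which x86-64 feature levels your CPU supports:
    /lib64/ld-linux-x86-64.so.2 --help | grep x86-64-v
    #   e.g. "x86-64-v3 (supported, searched)" means AVX2-era builds can be used

    # when compiling yourself, opt in explicitly:
    gcc -O2 -march=x86-64-v3 -o myprog myprog.c

Most distro packages are still built for the baseline x86-64 level, though some distros ship optimized variants via the glibc-hwcaps directories.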
s/x84/x86/
>s/x84/x86/
TIL:
https://en.wikipedia.org/wiki/Monopole_(company)#Racing_cars
I was kind of hoping that there was some little-known x84 standard that never saw the light of day, but instead all I found was classic French racing cars.
One is built from the ground up more recently than the other.
Looking beyond Apple/Intel, AMD recently came out with a CPU that shares memory between the GPU and CPU like the M-series processors.
The Framework is a great laptop - I'd love to drop a mac motherboard into something like that.
M1’s efficiency/thermals performance comes from having hardware-accelerated core system libraries.
Imagine that you made an FPGA do x86 work, and then you wanted to optimize libopenssl, or libgl, or libc. Would you restrict yourself to only modifying the source code of the libraries but not the FPGA, or would you modify the processor to take advantage of new capabilities?
For made-up example, when the iPhone 27 comes out, it won’t support booting on iOS 26 or earlier, because the drivers necessary to light it up aren’t yet published; and, similarly, it can have 3% less battery weight because they optimized the display controller to DMA more efficiently through changes to its M6 processor and the XNU/Darwin 26 DisplayController dylib.
Neither Linux, Windows, nor Intel have shown any capability to plan and execute such a strategy outside of video codecs and network I/O cards. GPU hardware acceleration is tightly controlled and defended by AMD and Nvidia, who want nothing to do with any shared strategy, and neither Microsoft nor Linux generally have shown any interest whatsoever in hardware-accelerating the core system to date, though one could theorize that the Xbox is exempt from that, especially given the Pluton chip.
I imagine Valve will eventually do this, most likely working with AMD to get custom silicon that implements custom hardware accelerations inside the Linux kernel that are both open source for anyone to use, and utterly useless since their correct operation hinges on custom silicon. I suspect Microsoft, Nintendo, and Sony already do this with their gaming consoles, but I can’t offer any certainty on this paragraph of speculation.
x86 isn’t able to keep up because x86 isn’t updated annually across software and hardware alike. M1 is what x86 could have been if it were versioned and updated, without backwards compatibility, as often as Arm is. It would be like saying “Intel’s 2026 processors all ship with AVX-1024 and hardware-accelerated DMA, and the OS kernel (and apps that want the full performance gains) must be compiled for its new ABI to boot on it”. The wreckage across the x86 ecosystem would be immense, and Microsoft would boycott them outright to try and protect itself from having to work harder to keep up, just like Adobe did with the Apple M1, at least until their userbase started canceling subscriptions en masse.
That’s why there are so many Arm Linux architectures: for Arm, this is just a fact of everyday life, and that’s what gave the M1 such a leg up over x86: not having to support anything older than your release date means you can focus on the sort of boring incremental optimizations that wouldn’t be permissible in the “must run assembly code written twenty years ago” environment assumed by Linux/Windows today.
This isn't really true. Linux doesn't use any magic accelerators yet it runs very fast on Apple Silicon. It's just the best processor.
P/E cores do benefit from software tuning, but aside from that it's almost all hardware.
The GPU is significantly different from other desktop GPUs, but it's in principle like other mobile GPUs, so I'm not sure how much better Linux could be adapted there.
iOS 26 comes out this year.
macOS releases still work fine on Intel-based Macs.
My M1 MacBook Pro, which I used at work for several months until the Ubuntu Ryzen 7 7840U P14s w/32GB RAM arrived, didn't seem particularly amazing.
The only really annoying thing I've found with the P14s is the Crowdstrike junk killing battery life when it pins several cores at 100% for an hour. That never happened in MacOS. These are corporate-managed devices I have no say in, and the Ubuntu flavor of the corporate malware is obviously far worse implemented in terms of efficiency and impact on battery life.
I recently built myself a 7970X Threadripper and it's quite good perf/$ even for a Threadripper. If you build a gaming-oriented 16c ryzen the perf/$ is ridiculously good.
No personal experience here with Frameworks, but I'm pretty sure Jon Blow had a modern Framework laptop he was ranting a bunch about on his coding live streams. I don't have the impression that Framework should be held as the optimal performing x86 laptop vendor.
> That never happened in MacOS
Oh you've gotten lucky then. Or somehow disabled crowdstrike.
Most probably it is not impacting Microsoft's sales?
All Ryzen mobile chips (so far) use a homogeneous core layout. If heat/power consumption is your concern, AMD simply hasn't caught up to the big.LITTLE architecture Intel and Apple use.
In terms of performance though, those N4P Ryzen chips have knocked it out of the park for my use-cases. It's a great architecture for desktop/datacenter applications, still.
Sort of. Technically the Ryzen AI 5 340 has 3 Zen 5 cores and 3 Zen 5c cores. The two core types are more similar to each other than Apple's or Intel's performance/efficiency cores are, but the 5c cores are still more power efficient.
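If you're curious which cores are the compact ones on a Linux install, one rough way to spot them is by their lower maximum boost clocks. A sketch, assuming the usual sysfs cpufreq layout (paths can vary by kernel and driver):

    #include <stdio.h>

    /* Rough sketch: print each logical CPU's maximum frequency from sysfs.
     * On a mixed Zen 5 / Zen 5c part the compact cores show a noticeably
     * lower boost clock. Assumes the standard cpufreq sysfs layout. */
    int main(void) {
        for (int cpu = 0; ; cpu++) {
            char path[128];
            snprintf(path, sizeof path,
                     "/sys/devices/system/cpu/cpu%d/cpufreq/cpuinfo_max_freq", cpu);
            FILE *f = fopen(path, "r");
            if (!f) break;                 /* no more CPUs (or no cpufreq driver) */
            long khz = 0;
            if (fscanf(f, "%ld", &khz) == 1)
                printf("cpu%-3d max %.2f GHz\n", cpu, khz / 1e6);
            fclose(f);
        }
        return 0;
    }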
> I am sorely disappointed, using the Framework feels like using an older Intel based Mac. If I open too many tabs in Chrome I can feel the bottom of the laptop getting hot, open a YouTube video and the fans will often spin up.
A big thing is storage. Apple uses extremely fast storage directly attached to the SoC and physically very very close. In contrast, most x86 systems use storage that's socketed (which adds physical signal runtime) and that goes via another chip (southbridge). That means, unlike Mac devices that can use storage as swap without much practical impact, x86 devices have a serious performance penalty.
Another part of the issue when it comes to cooling is that Apple is virtually the only laptop manufacturer that makes solid full aluminium frames, whereas most x86 laptops are made out of plastic and, for higher-end ones, magnesium alloy. That gives Apple the advantage of being able to use the entire frame to cool the laptop, allowing far more thermal input before saturation occurs and the fans have to activate.
> A big thing is storage. Apple uses extremely fast storage directly attached to the SoC and physically very very close. In contrast, most x86 systems use storage that's socketed (which adds physical signal runtime) and that goes via another chip (southbridge).
Why would PCIe SSDs need to go through a southbridge? The CPU itself provides PCIe lanes that can be used directly.
> That means, unlike Mac devices that can use storage as swap without much practical impact, x86 devices have a serious performance penalty.
Swap is slow on all hardware. No SSD comes close to the speed of RAM - not even Apple's. Latency is also significantly worse when you trigger a page fault and then need to wait for the page to load from disk before the thread can resume execution.
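You can see the cost directly by counting major page faults, i.e. the ones that had to wait on disk. A rough sketch (the 4 GiB size is an arbitrary assumption; on a machine with plenty of free RAM you will simply see zero major faults):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/time.h>
    #include <sys/resource.h>

    /* Sketch: touch a large allocation, then re-touch it and report how many
     * major page faults (faults serviced from disk, i.e. swap) the second
     * pass caused. Only interesting if the buffer exceeds available RAM. */
    int main(void) {
        size_t sz = (size_t)4 * 1024 * 1024 * 1024;   /* 4 GiB, adjust as needed */
        char *buf = malloc(sz);
        if (!buf) { perror("malloc"); return 1; }

        memset(buf, 1, sz);                           /* first pass: populate pages */

        struct rusage before, after;
        getrusage(RUSAGE_SELF, &before);
        for (size_t i = 0; i < sz; i += 4096)         /* second pass: re-touch */
            buf[i]++;
        getrusage(RUSAGE_SELF, &after);

        printf("major faults during re-touch: %ld\n",
               after.ru_majflt - before.ru_majflt);
        free(buf);
        return 0;
    }

Each of those faults stalls the thread for an SSD read, on the order of tens of microseconds, versus roughly 100 ns for DRAM, so swap is about three orders of magnitude slower no matter whose SSD it is.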
> The CPU itself provides PCIe lanes that can be used directly.
It does, but if you look at the mainboard manuals of computers, usually it's 32 lanes of which 16 go to the GPU slot and 16 to the southbridge, so no storage directly attached to the CPU. Laptops are just as bad.
Intel has always done price segmentation with the number of PCIe lanes exposed to the world.
Threadripper AMD CPUs are a different game, but I'm not aware of anyone, even "gamer" laptops, sticking such a beast into a portable device.
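Either way, it's easy to check where a given SSD actually hangs off on your own system. A sketch, assuming Linux and a drive named nvme0n1 (adjust the name): resolve the block device's sysfs link and look at the PCI path it lands in.

    #include <stdio.h>
    #include <unistd.h>
    #include <limits.h>

    /* Sketch: print the PCI device path behind an NVMe block device.
     * The device name is an assumption; adjust for your system. */
    int main(void) {
        const char *link = "/sys/block/nvme0n1";
        char target[PATH_MAX];
        ssize_t n = readlink(link, target, sizeof target - 1);
        if (n < 0) { perror("readlink"); return 1; }
        target[n] = '\0';
        printf("%s -> %s\n", link, target);
        return 0;
    }

A device sitting directly under a root port off pci0000:00 suggests CPU-attached lanes; extra bridge hops in the middle suggest a detour through the chipset.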
> Latency is also significantly worse when you trigger a page fault and then need to wait for the page to load from disk before the thread can resume execution.
Indeed, but the difference in performance between an 8GB Windows laptop and an 8GB M-series Apple laptop is noticeable, even if all it's running is the base OS and Chrome with a few dozen tabs.
There is one positive to all of this. Finally, we can stop listening to people who keep saying that Apple Silicon is ahead of everyone else because they have access to better process. There are now chips on better processes than M1 that still deliver much worse performance per watt.
Go down the rabbit hole of broken compiler settings for Debian default builds if you want to see how much low-hanging fruit we still have.
Who here would be interested in testing a distro like debian with builds optimized for the Framework devices?
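One middle ground that doesn't require per-device builds is function multi-versioning, where the compiler emits several variants of a hot function and picks one at load time. A hedged sketch using GCC/Clang target_clones (the dot function is a made-up stand-in for a hot loop):

    #include <stdio.h>

    /* Sketch of function multi-versioning: the compiler emits a baseline
     * x86-64 version and an AVX2 version of this function and selects the
     * best one at load time via an ifunc resolver. */
    __attribute__((target_clones("default", "avx2")))
    double dot(const double *a, const double *b, int n) {
        double s = 0.0;
        for (int i = 0; i < n; i++)
            s += a[i] * b[i];
        return s;
    }

    int main(void) {
        double a[1024], b[1024];
        for (int i = 0; i < 1024; i++) { a[i] = i; b[i] = 2.0 * i; }
        printf("%f\n", dot(a, b, 1024));
        return 0;
    }

Distros could get much of the benefit this way without raising the baseline, at the cost of bigger binaries, and only where the hot paths are annotated (or where libraries like glibc already do their own runtime dispatch).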
Should .. should I install gentoo?
The answer is always yes, continuously.
Because of a random anecdote on Hacker News?
Not sure why you'd think that, comparing a heterogeneous core architecture to a homogeneous one. Mobile Ryzen chips aren't designed for power efficiency; if you want a "fair" comparison, then pull up a big.LITTLE x86 chip or benchmark Apple's performance cores against AMD's mobile chips.
Once you normalize for either efficiency cores or performance cores, you'll quickly realize that the node lead is the largest advantage Apple had. Those guys were right, the writing was on the wall in 2019.
I guess that’s the new excuse. Except it doesn’t work. I can take all the efficiency cores on my M1 laptop offline and still run circles around the new x86 stuff in performance per watt.
Well don't just tell me about it, show me. Link the Geekbench results when it's done running.
RISC vs CISC. Why you think a mainframe is so fast?
ARM is great. Those M chips are the only thing I could buy used and put Linux on.
> RISC vs CISC. Why you think a mainframe is so fast?
This hasn't been true for decades. Mainframes are fast because they have proprietary architectures that are purpose-built for high throughput and redundancy, not because they're RISC. The pre-eminent mainframe architecture these days (z/Architecture) is categorized as CISC.
Processors are insanely complicated these days. Branch prediction, instruction decoding, micro-ops, reordering, speculative execution, cache tiering strategies... I could go on and on but you get the idea. It's no longer as obvious as "RISC -> orthogonal addressing and short instructions -> speed".
> The pre-eminent mainframe architecture these days (z/Architecture) is categorized as CISC.
Very much so. It's largely a register-memory (and indeed memory-memory) rather than load-store architecture, and a direct descendant of the System/360 from 1964.
Everything is RISC after it gets decoded. It isn’t 1990 anymore. The decoder costs maybe 1% performance.
I thought people stopped believing this around 2005 when Apple users finally had to admit that PPC was behind x86.
Even though that was the case for most of the PPC Mac era (I owned two during those years).
RISC lost its meaning once SPARC added an integer multiply instruction.
It especially doesn't matter because the latest x86 update adds a mode that turns it into ARM.
https://www.intel.com/content/www/us/en/developer/articles/t...
At least my G5 helped keep my room warm in the winter.
It's fun watching things swing back and forth over time. I remember having those Sun mini-fridge-sized servers, all running RISC SPARC-based CPUs if I remember correctly. I wonder if there would be some merit in RISC-based Linux servers; maybe the power consumption is lower? I forget the pros/cons of RISC vs CISC CPUs.