• metadat 2 days ago

    It's going to be rough without Anandtech reporting anymore. I wonder if a new outlet will spring up to fill the void.

    https://news.ycombinator.com/item?id=41399872

    Here's hoping this PM9E1 drive makes it into the Samsung EVO 9x series drives.

    I'm curious why the capacity only goes to 4TB; aren't there a bunch of 8TB NVMe drives out there? When will we see consumer-grade 16TB SSDs? Capacity doesn't seem to have increased in more than half a decade.

    • Panzer04 2 days ago

      4TB seems like the upper end for most normal consumers, I would hazard. We had 1-2TB HDDs a decade ago, and there's been little reason to go higher in the consumer space. Arguably the fact that SSDs are only now getting cheap enough at those capacities might have limited it, but even so I think we're running out of things that consume that much space.

      Video and pictures are the main culprits (even in games), but 4K is likely to be the upper end of consumer usage for the foreseeable future, photos have been 20-40MP for a decade, and the perceptible quality benefits from going higher are fairly minimal. We can always use more space, but from a practical perspective there's not the same explosion in space requirements from everything else scaling to use it, I'd say.
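
      A quick back-of-envelope supports that; the bitrate and file-size figures below are my own rough assumptions, not numbers from the article:

        # Rough space math, assuming ~50 Mbit/s consumer 4K video
        # and ~25 MB per 40MP JPEG (both are ballpark assumptions).
        GB = 1e9
        four_k_gb_per_hour = 50e6 / 8 * 3600 / GB   # ~22.5 GB per hour of footage
        photo_gb = 25e6 / GB                        # ~0.025 GB per photo

        capacity_gb = 4000  # a 4TB drive
        print(capacity_gb / four_k_gb_per_hour)     # ~178 hours of 4K video
        print(capacity_gb / photo_gb)               # ~160,000 photos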

      • bee_rider 2 days ago

        Videogames seem to be continuing to bloat.

        • ErikBjare 2 days ago

          You don't need them on an expensive Gen5 drive.

          • bee_rider a day ago

            I think videogames are the most bandwidth-hungry workload that most normal people have, so I think it would be hard to justify this sort of drive otherwise.

          • nxicvyvy a day ago

            8TB needs a double-sided PCB, and those tend not to fit in a bunch of laptops, would be my guess.

          • pixl97 2 days ago

            The question is whether consumers are willing to pay the prices of the larger SSDs. I consider myself a prosumer and haven't needed that much fast SSD myself.

            • xelamonster 2 days ago

              For some reason 8TB drives are consistently a worse value. I did need that much fast SSD but ended up getting two 4TB M.2 drives because it was significantly cheaper.

              • metadat 2 days ago

                Me neither :)

                But it'd be nice to ditch the magnetic storage someday.

                • hakfoo 2 days ago

                  Going all-flash was compelling for a while, but it seems like SSDs have failed to deliver on some important promises:

                  * The whole "it's SLC cache until you fill it up, then it drops to pretty mediocre performance" thing is frightening, because I suspect a lot of reviews aren't sufficiently battering the drive to report on this. I gather this is a bigger problem as they move to TLC and QLC and beyond, plus whatever corner-cutting they can do in the controller. TBH, I'd love to see sanctioned first-party tooling to manage my overprovisioning and caching strategy, so if I want to spend $130 to turn a 2TB drive into a really overbuilt 512GB drive, let me (rough numbers on that below the list).

                  * The "we don't guarantee it will maintain data if left unpowered for 3 months" story doesn't make it a great choice for cold storage/intermittent access. If you just plug in a drive once a year to back up your tax returns, I'm not sure you want an SSD for it.
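
                  To put rough numbers on the overprovisioning idea, here's a minimal sketch; the raw/exposed capacities are illustrative assumptions, not any vendor's spec:

                    # Spare area = NAND the controller can use for wear
                    # leveling and SLC caching; numbers are hypothetical.
                    def spare_ratio(raw_gb, exposed_gb):
                        return (raw_gb - exposed_gb) / exposed_gb

                    print(spare_ratio(2048, 1920))  # typical factory OP on a "2TB" drive: ~7%
                    print(spare_ratio(2048, 512))   # same NAND exposed as 512GB: ~300% spare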

                  I ended up setting up a NAS with a cheap spinning-rust drive, figuring that gives me a different reliability profile than the flash for that tier of backup.

                  • zamadatix 2 days ago

                    I went all-flash for my NAS last round. Here's why:

                    I found the sustained-write concern to be a bit of a storm in a teacup. I went for budget drives (MP34), and the sustained performance of a single drive is still greater than the maximum a spinning disk could deliver over its SATA 3 interface. On top of that, random performance in that state is still orders of magnitude better, and two drives' worth of sustained writes is enough to saturate a 10G link. Between all that, it feels a bit silly to shy away from SSDs just because they are only significantly faster in sustained workloads instead of monumentally faster.
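
                    Rough numbers behind that reasoning (the post-cache figure is my own ballpark for budget TLC, not a measurement of the MP34):

                      # Ballpark throughput figures in GB/s (assumptions, not benchmarks).
                      sata3_max      = 0.6    # practical ceiling of a SATA 3 link
                      ten_gbe        = 1.25   # line rate of a 10G link
                      ssd_post_cache = 0.8    # budget TLC after the SLC cache runs out

                      print(ssd_post_cache > sata3_max)     # True: still beats the SATA ceiling
                      print(2 * ssd_post_cache >= ten_gbe)  # True: two drives saturate 10GbE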

                    I also found that actual retention tests people have run on SSDs show no data loss on a shelf for 4x-10x that timeframe; not that I plan on buying a NAS and only powering it on once a quarter anyway. Particularly with the power savings and the lack of spin-up/spin-down cycles or related noise, I've just been leaving this one running 24/7.

                    The other big benefit I found is that the spinning disks commonly used in NAS builds have workload (write) limits that are often worse than the write limits of SSDs. On SSDs there's also no operational concern about the impact of a read-heavy workload (https://serverfault.com/questions/582170/limits-on-read-tran...), so you can run your pool scrub much more often without lifespan concerns.

                    The cost downside remains, of course. $/GB has only gone back up since I built the all-flash NAS.

            • rapjr9 17 hours ago

              Why don't all drive makers (both solid state and rotating) use a RAID-like structure to offer drives with any speed or reliability level that buyers want? Seems like it could be much more efficient to put RAID into the drive than to wrap it around multiple physical drives. Maybe it would actually decrease reliability (you lose all the storage if part of it goes out and you can't replace component drives)? It seems like it would be a way to get large permanent storage that is as fast as SRAM, which has been a holy grail in computing for a long time, to get past the CPU I/O bandwidth bottleneck presented by slower drives.
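
              For what it's worth, SSD controllers already do something RAID-0-like internally by striping across NAND channels; here's a toy sketch of that striping idea (purely illustrative, nothing to do with real firmware):

                # Toy RAID-0-style striping of a write across NAND channels.
                def stripe(data: bytes, channels: int, page: int = 4096):
                    lanes = [bytearray() for _ in range(channels)]
                    for i in range(0, len(data), page):
                        lanes[(i // page) % channels] += data[i:i + page]
                    return lanes

                lanes = stripe(b"x" * 64 * 4096, channels=8)
                print([len(lane) for lane in lanes])  # each channel gets 1/8 of the pages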

              • Aerroon 2 days ago

                >Comparatively, we now see the Gen 5 Samsung PM9E1 achieving a whopping 14.5 GB/s read and 13 GB/s write

                Isn't this comparable to DDR3 memory?
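
                For reference, single-channel DDR3 peak bandwidth lands in roughly the same range (a quick check using standard DDR3 speed grades; latency and random access are a very different story):

                  # Peak DDR3 bandwidth per channel: transfer rate (MT/s) x 8 bytes.
                  for mts in (1333, 1600, 1866):
                      print(mts, mts * 8 / 1000, "GB/s")  # ~10.7, 12.8, 14.9 GB/s

                  # vs. the PM9E1's quoted sequential figures: 14.5 GB/s read, 13 GB/s write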

                I wonder if at some point we'll have GPUs extend their memory with something like a RAID array of SSDs.

                • Panzer04 2 days ago

                  SSDs still degrade, though. Optane was mooted for something like this, but it ended up being too expensive and not good enough at either role (and ultimately unprofitable) in the end.

                  Pushing 10GB/s into an SSD with 1000TBW of write endurance would kill it in ~100,000s, or a little over a day of continuous use - and I'd expect a GPU would probably come pretty close to that.
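
                  The arithmetic, spelled out (same figures as above):

                    # Sanity check on the endurance math.
                    tbw_bytes  = 1000e12   # 1000 TBW rating
                    write_rate = 10e9      # 10 GB/s sustained writes
                    seconds = tbw_bytes / write_rate
                    print(seconds, seconds / 86400)  # 100000.0 s, ~1.16 days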

                • jiggawatts 2 days ago

                  The IT industry as a whole still hasn't quite internalised that servers now have dramatically worse I/O performance than the endpoints they are serving.

                  For example, a project I'm working on right now is a small data warehouse (~100GB). The cloud VM it is running on provides only 5,000 IOPS with a relatively high latency (>1ms).

                  The laptops that pull data from it all have M.2 drives with 200K IOPS, 0.05ms latency, and gigabytes per second of read bandwidth.

                  It's dramatically faster to just zip up the DB, download it, and then manipulate it locally. This includes the download time!
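
                  A quick model of why the download wins (every constant here is my own guess at a typical setup, not a figure from the project):

                    # Back-of-envelope: scan 100GB via the cloud disk vs. download + local scan.
                    db_bytes = 100e9
                    page     = 8192                        # assumed DB page size

                    cloud_scan = db_bytes / (5_000 * page) # 5,000 IOPS of random-ish reads
                    download   = db_bytes / (1e9 / 8)      # assumed 1 Gb/s office link
                    local_scan = db_bytes / 3e9            # assumed ~3 GB/s local NVMe

                    print(cloud_scan / 60)                 # ~41 minutes
                    print((download + local_scan) / 60)    # ~14 minutes, download included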

                  The cheapest cloud instance that even begins to outperform local compute is about $30K/month, and would be blown out of the water by this new Samsung drive anyway. I don't know what it would cost to exceed 15GB/s read bandwidth... but I'm guessing: "Call us".

                  Back in the Good Old Days, PCs and laptops would have a single 5400 RPM drive with maybe 200 IOPS and servers would have a RAID at a minimum. Typically they'd have many 10K or 15K RPM drives, often with a memory or flash cache. The client-to-server performance ratio was at least 1-to-10, typically much higher. Now it's more like 10-to-1 the other way, and sometimes as bad as 1000-to-1.

                  • benlivengood 2 days ago

                    Which cloud is $30K/month for 200K IOPS? GCE looks like it should be under $200/month for local SSD exceeding 200K IOPS for reads and writes. EC2 i3.xlarge looks about the same price and performance.

                    • jiggawatts 2 days ago

                      Is that persistent storage or just a “temp” disk?

                      • benlivengood a day ago

                        It's as persistent as a laptop I'd say. GCP at least will live-migrate the data for maintenance events.

                        Distributed consistent storage on top of local SSDs is 3x to 5x as expensive depending on your redundancy requirements in-region, another 2x that with regional replication. Fast IO is available cheaply in the cloud if you handle your own clustering.