Why are my ZFS disks so noisy? (allthingsopen.org) | Submitted by todsacerdoti a year ago
  • paol a year ago

    I know the article is just using the noise thing as an excuse to deep dive into zfs and proxmox, which is cool, but if what you really care about is reducing noise I thought I'd leave some practical advice here:

    1. Most hard drive noise is caused by mechanical vibrations being transmitted to the chassis the drive is mounted on.

    2. Consequently the most effective way to reduce noise is to reduce the mechanical coupling in the drive mounting mechanism. Having the drives in a noise-isolating case is helpful too, but only as a secondary improvement. Optimizing the drive mounting should really be the first priority.

    3. If space isn't a concern the optimal thing is to have a large case (like an ATX or larger) with a large number of HDD bays. The mounting should use soft rubber or silicone grommets. Some mounting systems can work with just the grommets, but systems that use screws are ok too as long as the screw couples to the grommet, not the chassis. In a good case like this any number of hard drives can be made essentially inaudible.

    4. If space is a concern, a special-purpose "NAS-like" case (example: the Jonsbo N line of cases) can approach the size of consumer NAS boxes. The lack of space makes optimal acoustics difficult, but it will still be a 10x improvement over typical consumer NASes.

    5. Lastly, what you shouldn't ever do is get one of those consumer NAS boxes. They are made with no concern for noise at all, and manufacturing cheapness constraints tend to make them literally pessimal at it. I had a QNAP I got rid of that couldn't have been more effective at amplifying drive noise if it had been designed for that on purpose.

    • cjs_ac a year ago

      To add to this, I mounted an HDD to a case using motherboard standoffs in a place that was obviously intended for SSDs. Not only was it very loud, the resonance between the disk and the case also broke the disk after six months.

    • Larrikin a year ago

      > Lastly, what you shouldn't ever do is get one of those consumer NAS boxes. They are made with no concern for noise at all, and manufacturing cheapness constraints tend to make them literally pessimal at it. I had a QNAP I got rid of that couldn't have been more effective at amplifying drive noise if it had been designed for that on purpose.

      Is there any solution that lets me mix and match drive sizes as well as upgrade? I'm slowly getting more and more into self-hosting as much of my digital life as possible, so I don't want to be dependent on Synology, but they offered a product that let me go from a bunch of single drives with no redundancy to a setup where I could repurpose those drives, swap them out, and most importantly grow. As far as I can tell there's no open source equivalent. As soon as I've set up a file system with the drives I already have, the only solution is to buy the same number of drives with more space once I run out.

      • devilbunny a year ago

        And I've never used a QNAP, but I'm on my second Synology, and their drive carriages all use rubber/silicone grommets to isolate drive vibration from the case. It's not silent - five drives of spinning rust will make some noise regardless - but it sits in a closet under my stairs that backs up to my media cabinet, and even in the closet you have to be within a few feet to hear it over the background noise of the house.

        I don't use any of their "personal cloud" stuff that relies on them. It's just a Linux box with some really good features for drive management and package updates. You can set up and maintain any other services you want without using their manager.

        The ease with which I could set it up as a destination for Time Machine backups has absolutely saved my bacon on at least one occasion. My iMac drive fell to some strange data corruption and would not boot. I booted to recovery, pointed it at the Synology, and aside from the restore time, I only lost about thirty minutes' work. The drive checked out fine and is still going strong. Eventually it will die, and when it does I'll buy a new Mac and tell it to restore from the Synology. I have double-disk redundancy, so I can lose any two of five drives with no loss of data so long as I can get new drives to my house and striped in before a third fails. That would take about a week, so while it's possible, it's unlikely.

        If I were really paranoid about that, I'd put together a group buy for hard drives from different manufacturers, different runs, different retailers, etc., and then swap them around so none of us were using drives that were all from the same manufacturer, factory, and date. But I'm not that paranoid. If I have a drive go bad, and it's one that I have more than one of the same (exact) model, I'll buy enough to replace them all, immediately replace the known-bad one, and then sell/give away the same-series.

        • charrondev a year ago

          So I’ve got a setup like this:

          It’s an 8Bay Synology 1821+. Cost about $1300 for the machine, 32GB of ECC memory, and the 10gbe network card.

          I have four 8TB drives in a btrfs volume with one-drive redundancy, giving me 21TB of space.

          All the important stuff also gets backed up to another 8TB drive periodically and sent to Glacier.

          Synology's SHR-1 setup seems to work like RAID 5 with a bit more flexibility, so I can add more drives to the array as long as they are 8TB or larger.

          The Docker manager seems to work pretty well. I run a few services there and mount certain volumes into them. A few DNS records and some entries in the reverse proxy in its control panel, and you can run whatever you want.

          Most critically, power draw is very low and it's very quiet, which was an important consideration for me.

          • kccqzy a year ago

            I might be misunderstanding your needs but my home server uses just LVM. When I run out of disk space, I buy a new drive, use `pvcreate` followed by `vgextend` and `lvextend`.
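
            A minimal sketch of that flow, assuming an existing volume group vg0 and a logical volume lv0 with an ext4 filesystem on it (all names are placeholders):

              # new disk shows up as /dev/sdX (placeholder)
              pvcreate /dev/sdX                    # initialize it as an LVM physical volume
              vgextend vg0 /dev/sdX                # add it to the existing volume group
              lvextend -l +100%FREE /dev/vg0/lv0   # grow the logical volume into the new space
              resize2fs /dev/vg0/lv0               # grow the filesystem to match (xfs_growfs for XFS)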

            • tharkun__ a year ago

              This.

              I've been running LVM and Linux software RAID for like 20 years now.

              The only limits (for me at least) are:

                  The smallest device in a RAID determines the size of that array. But that's fine since I then LVM them together anyhow. It does let you mix and match and upgrade, though really I always just buy two drives; it helped when starting out, and I experimented with just LVM without RAID too.
              
                  I have to know RAID and LVM instead of trusting some vendor UI. That's a good thing. I can fix stuff in case it were to break.
              
                  I found that as drives grew to terabytes it was better to have multiple smaller partitions as the RAID devices, even when on the same physical drive: faster rebuild in case of a random read error. I use RAID 1. YMMV.
              
              I still have the same LVM partitions / data that I had 20 years ago, but also not: all the hardware underneath has changed multiple times, especially drives. I still use HDDs and used to have root on RAID+LVM too, but have switched to a single SSD. I reinstalled the OS for that part, but the LVM+RAID setup and its data stayed intact. If anything ever happens to the SSD with the OS, I don't care. I'll buy a new one, install an OS and I'm good to go.
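
              Roughly, that layering looks like this (device names and sizes are made up; two mirrored partition pairs pooled together with LVM):

                # mirror matching partitions from two drives
                mdadm --create /dev/md10 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
                mdadm --create /dev/md11 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
                # hand the RAID1 arrays to LVM and pool them into one volume group
                pvcreate /dev/md10 /dev/md11
                vgcreate datavg /dev/md10 /dev/md11   # or vgextend an existing VG
                lvcreate -l 100%FREE -n data datavg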
            • Marsymars a year ago

              > Is there any solution that lets me mix and match drive sizes as well as upgrade?

              Probably more than one, but on my non-Synology box I use SnapRAID, which can take any number/size of drives. The downside is that it isn't realtime; you have to schedule a process to sync your parity: http://www.snapraid.it/
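
              For reference, a minimal snapraid.conf sketch (mount points are placeholders), plus the scheduled sync:

                # /etc/snapraid.conf
                parity  /mnt/parity1/snapraid.parity
                content /var/snapraid/snapraid.content
                content /mnt/disk1/snapraid.content
                data d1 /mnt/disk1/
                data d2 /mnt/disk2/
                # parity only updates when a sync runs, e.g. nightly from cron:
                #   0 3 * * * /usr/bin/snapraid sync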

              • mrighele a year ago

                > As soon as I've set up a file system with the drives I already have, the only solution is to buy the same number of drives with more space once I run out.

                Recent versions of ZFS support raidz expansion [1], which lets you add extra disks to a raidz1/2/3 pool. It has a number of limitations - for example, you cannot change the type of pool (mirror to raidz1, raidz1 to raidz2, etc.) - but if you plan to expand your pool one disk at a time it can be useful. Just remember that 1) old data will not take advantage of the extra disk until you copy it around, and 2) the size of the pool is limited by the size of the smallest disk in the pool.

                [1] https://github.com/openzfs/zfs/pull/15022
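
                If I remember the syntax right (OpenZFS 2.3+), expansion is a single attach against the existing raidz vdev; pool and device names here are placeholders:

                  # grow an existing raidz1 vdev by one disk
                  zpool attach tank raidz1-0 /dev/sdX
                  zpool status tank   # shows the expansion/reflow progress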

                • edmundsauto a year ago

                    I started thinking about this a year ago. Unraid works great for me. Just bought another 32TB to extend to 104TB of usable space, with 8TB to 20TB drives. It's a JBOD with a dual-parity setup; the next upgrade path requires a disk shelf, but hopefully that won't be for a couple of years.

                  • kiney a year ago

                    BTRFS

                  • magicalhippo a year ago

                      I recently upgraded my home NAS from a Fractal Define R4 to a Define 7 XL. The R4 had the rubber grommets, but the hot-swappable trays were just held in by spring force, so they rattled a lot.

                    The Define 7 has the same grommet system, but the trays can be fastened by screws to the support rails.

                    The difference in noise was significant. Even though I went from 6 to 10 disks it's much more quiet now.

                    • Marsymars a year ago

                      I went down this rabbit hole of sound reduction, but then I bought a house and put all my NAS and NAS-like things in my basement storage room where I can’t hear them from anywhere else in the house.

                      • gymbeaux a year ago

                        I find NASes to be a waste of money except for the “no one ever got fired for….” aspect in an enterprise environment. $600 for a NAS with a Celeron and 8GB of RAM is absurd.

                        • devilbunny a year ago

                          Value of your time and effort maintaining it is not zero.

                          I used to play with stuff like this. It was fun when I was single and had lots of free time. I don't play with it anymore. If I pay someone $500 over nominal value to provide me with 8-9 years of support for security updates, etc., and I just install their packages... that's worth it to me. My first Syno was a DS412+ and my second was a DS1621+. Nine years between introduction of the two. The 412+ is still running just fine at a friend's house. I gave it to him with ~12 TB total drive space, said just help me next time I need something done with car audio (he's a DJ and knows cars) and we're square.

                          He's happy, I'm happy. I go set up his network, he installs my head unit. We both win by doing what we're good at and letting someone else use their expertise instead of learning a lot of stuff we will almost never use again.

                        • theodric a year ago

                           I assembled a fanless all-flash NAS in a Jonsbo N1 last year, and it's still working pretty well https://github.com/theodric/NASty

                          • PeterStuer a year ago

                            "The mounting should use soft rubber or silicon grommets"

                            Suspension of the drives with elastic bands used to be popular in the silent PC community.

                            • kaliszad a year ago

                              You clearly haven't read the full article as Jim Salter writes about the mechanical stuff at the end of the article.

                              Also, you want to reduce vibrations because of this: https://www.youtube.com/watch?v=tDacjrSCeq4 (Shouting in the datacenter)

                              • naming_the_user a year ago

                                 Yeah, the article title seemed kind of weird to me. I have a ZFS NAS; it's just a bunch of drives in an ATX case with (what I'd consider nowadays to be) the standard rubber grommets.

                                I mean, you can hear it, but it's mostly just the fans and drives spinning, it's not loud at all.

                                 The recommendations seem reasonable, but for noise? If it's noisy, something is probably wrong, I think.

                                • nicolaslem a year ago

                                   I totally understand the article title: I have a ZFS NAS that makes the same kind of noise as described there. Roughly every five seconds the drives make a sound that is different from the background hum of a running computer. In a calm environment this is very distracting. I even had a guest sleeping in an adjacent room complain about it once.

                                  • ssl-3 a year ago

                                    That's a tunable in ZFS.

                                    vfs.zfs.txg.timeout defaults to 5 seconds, but it can be set (much) higher if you wish.

                                    I don't care if I lose up to a minute or two of work instead of <=5 seconds in the face of an unplanned failure, so I set it to a couple of minutes on my desktop rig years ago and never looked back.

                                    AFAIK there's also no harm in setting it both dynamically and randomly. I haven't tried it, but periodically setting vfs.zfs.txg.timeout to a random value between [say] 60 and 240 seconds should go a long ways towards making it easier to ignore by breaking up the regularity.

                                    (Or: Quieter disks. Some of mine are very loud; some are very quiet. Same box, same pool, just different models.

                                    Or: Put the disks somewhere else, away from the user and the sleeping guests.)
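
                                     A sketch of the randomized-timeout idea; vfs.zfs.txg.timeout is the FreeBSD sysctl, and on Linux the same knob is the zfs_txg_timeout module parameter (the values and the bash loop are illustrative and untested):

                                       # FreeBSD: one-off change
                                       sysctl vfs.zfs.txg.timeout=120
                                       # Linux equivalent
                                       echo 120 > /sys/module/zfs/parameters/zfs_txg_timeout
                                       # re-randomize every few minutes (bash, untested)
                                       while true; do
                                           sysctl vfs.zfs.txg.timeout=$(( RANDOM % 181 + 60 ))   # 60..240 s
                                           sleep 300
                                       done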

                                    • ndiddy a year ago

                                      This is likely a different problem than the article describes. Most newer hard drives will move the actuator arm back and forth every few seconds when the drive is inactive. It has to do with evenly distributing the lubrication on the arm to increase the life of the drive.

                                  • patrakov a year ago

                                     No. The most effective way to remove HDD noise is to remove HDDs and add SSDs. I haven't had any HDDs since 2016.

                                    P.S. I also talked to a customer in the past who stored their backups in an SSD-only Ceph cluster. They were citing higher reliability of SSDs and higher density, which was important because they had very limited physical space in the datacenter. In other words, traditional 3.5" HDDs would not have allowed them to store that much data in that many rack units.

                                    • toast0 a year ago

                                      SSDs are great. Quieter, can be denser, faster, available in small sizes for small money, more reliable, etc.

                                      But they're not great for low cost bulk storage. If you're putting together a home NAS, you probably want to do well on $/TB and don't care so much about transfer speeds.

                                      But if you've found 10TB+ ssds for under $200, let us know where to find them.

                                      • sobriquet9 a year ago

                                         They also lose data, especially large files you rarely touch, like family videos. Bit rot on SSDs is real. I back up to HDDs now.

                                      • Guvante a year ago

                                        A 20 TB HDD is <$400

                                        An 8 TB SSD is >$600

                                         $80/TB vs $20/TB is a fourfold increase.

                                         Also, a 16 TB SSD is $2,000, so more like a 5x increase in a data center setup.

                                        • magicalhippo a year ago

                                          The 4TB M.2 SSDs are getting to a price point where one might consider them. The problem is that it's not trivial to connect a whole bunch of them in a homebrew NAS without spending tons of money.

                                          Best I've found so far is cards like this[1] that allow for 8 U.2 drives, and then some M.2 to U.2 adapters like this[2] or this[3].

                                          In a 2x RAID-Z1 or single RAID-Z2 setup that would give 24TB of redundant flash storage for a tad more than a single 16TB enterprise SSD.

                                          [1]: https://www.aliexpress.com/item/1005005671021299.html

                                          [2]: https://www.aliexpress.com/item/1005005870506081.html

                                          [3]: https://www.aliexpress.com/item/1005006922860386.html

                                          • bpye a year ago

                                            On AM5 you can do 6 M.2 drives without much difficulty, and with considerably better perf. Your motherboard will need to support x4/x4/x4/x4 bifurcation on the x16 slot, but you put 4 there [0], and then use the two on board x4 slots, one will use the CPU lanes and the other will be connected via the chipset.

                                            [0] - https://www.aliexpress.com/item/1005002991210833.html

                                            • Fnoord a year ago

                                               You can do without bifurcation if you use a PCIe switch such as [1]. This is more expensive but can also achieve more speed, and will work in machines without bifurcation. The downside is that it uses more watts.

                                              [1] https://www.aliexpress.com/item/1005001889076788.html

                                              • magicalhippo a year ago

                                                The controller I linked to in my initial post does indeed contain a PCIe switch, which is how it can connect 8 PCIe devices to a single x16 slot.

                                                • bpye a year ago

                                                  Right, and whilst 3.0 switches are semi-affordable, 4.0 or 5.0 costs significantly more, though how much that matters obviously depends on your workload.

                                                  • magicalhippo a year ago

                                                     True. I think a switch which could do for example PCIe 5.0 on the host side and 3.0 on the device side would be sufficient for many cases, as one lane of 5.0 can serve all four lanes of a 3.0 NVMe. But I realize we probably won't see that.

                                                    Perhaps it will be realized with higher PCIe versions, given how tight signalling margins will get. But the big guys have money to throw at this so yeah...

                                        • umbra07 a year ago

                                          I can buy a 16TB refurbished enterprise drive with warranty for less than a hundred.

                                          • 7bit a year ago

                                            > The most effective way to remove HDD noise is to remove HDDs and add SSDs.

                                            Lame

                                        • hamandcheese a year ago

                                          I don't know if this exists or not, but I'd like to try something like a fuse filesystem which can transparently copy a file to a fast scratch SSD when it is first accessed.

                                          I have a somewhat large zfs array and it makes consistent noise as I stream videos from it. The streaming is basically a steady trickle compared to what the array is capable of. I'd rather incur all the noise up front, as fast as possible, then continue the stream from a silent SSD.

                                          • Mayzie a year ago

                                            > I don't know if this exists or not, but I'd like to try something like a fuse filesystem which can transparently copy a file to a fast scratch SSD when it is first accessed.

                                            You may be interested in checking out bcache[1] or bcachefs[2].

                                            [1] https://www.kernel.org/doc/html/latest/admin-guide/bcache.ht...

                                            [2] https://bcachefs.org/
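
                                             For bcache specifically, the basic shape is roughly this (device names are placeholders; see the kernel doc above for details):

                                               # format the SSD as cache and the HDD/array as backing device; they attach automatically
                                               make-bcache -C /dev/nvmeXn1 -B /dev/sdX
                                               # the cached device then shows up as /dev/bcache0; put a filesystem on it
                                               mkfs.ext4 /dev/bcache0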

                                            • theblazehen a year ago

                                              lvm-cache works as well, if you're already using LVM.

                                              https://github.com/45Drives/autotier is exactly what they were asking for as well
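
                                               A minimal lvmcache sketch, assuming the SSD has already been added as a PV to the same volume group (names are placeholders):

                                                 # carve a cache volume out of the SSD PV
                                                 lvcreate -n fastcache -L 100G vg0 /dev/nvmeXn1
                                                 # attach it as a cache to the existing slow LV
                                                 lvconvert --type cache --cachevol fastcache vg0/data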

                                              • magicalhippo a year ago

                                                I've done some testing with using ZFS on top of LVM with dm-writecache.

                                                Worked well enough on the small scale, but sadly haven't had the time or hardware to test it in a more production-like environment.

                                                 Also, it was starting to feel a bit like a Jenga tower, increasing the chances of bugs and other weird issues striking.

                                                • kaliszad a year ago

                                                  I wouldn't recommend combining those two. It's only begging for problems.

                                                  • magicalhippo a year ago

                                                    Yeah that's my worry. Still, got 6x old 3TB disks that still work and a few spare NVMEs, so would be fun to try it for teh lulz.

                                                    • kaliszad a year ago

                                                       Rather, build an L2ARC or metadata special-device hybrid setup using ZFS, or skip ZFS and go for lvmcache/mdadm-style RAID with XFS or something.

                                                      • magicalhippo a year ago

                                                        But L2ARC only helps read speed. The idea with dm-writecache is to improve write speed.

                                                        I started thinking about this when considering using a SAN for the disks, so that write speed was limited by the 10GbE network I had. A local NVMe could then absorb write bursts, maintaining performance.

                                                        That said, it's not something I'd want to use in production that's for sure.

                                                        There was some work being done on writeback caching for ZFS[1], sadly it seems to have remained closed-source.

                                                        [1]: https://openzfs.org/wiki/Writeback_Cache

                                                        • kaliszad a year ago

                                                           That's what the SLOG is for, if the writes are synchronous. If you have many small files or want to optimize metadata speed, look at the metadata special device, which can also store small files up to a configurable size.

                                                           ZFS of course has its limits too. But in my experience I feel much more confident (re)configuring it. You can tune the real-world performance well enough, especially if you can utilize some of the advanced features of ZFS like snapshots/bookmarks plus zfs send/recv for backups. With LVM/XFS you can certainly hack something together that will work pretty reliably too, but with ZFS it's all integrated and well tested (because it is a common use case).
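
                                                           For reference, the special vdev setup mentioned above looks roughly like this (pool, dataset and device names are placeholders; the special vdev should be mirrored, since losing it loses the pool):

                                                             # add a mirrored special vdev for metadata (and optionally small blocks)
                                                             zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1
                                                             # also store blocks up to 32K of this dataset on the special vdev
                                                             zfs set special_small_blocks=32K tank/media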

                                                          • magicalhippo a year ago

                                                            As I mentioned in my other[1] post, the SLOG isn't really a write-back cache. My workloads are mostly async, so SLOG wouldn't help unless I force sync=always which isn't great either.

                                                            I love ZFS overall, it's been rock solid for me in the almost 15 years I've used it. This is just that one area where I feel could do with some improvements.

                                                            [1]: https://news.ycombinator.com/item?id=41670945

                                              • hamandcheese a year ago

                                                My assumption with bcache is that it operates on blocks rather than entire files. Am I wrong?

                                                • LtdJorge a year ago

                                                  Yeah, bcache is exactly that

                                                • mystified5016 a year ago

                                                  Bcachefs.

                                                  You can do all sorts of really neat things. You can define pools of drives at different cache levels. You can have a bunch of mechanical drives for deep storage, some for hot storage, SSD to cache recently read files, then write-through from the SSD down to mechanical drives, either immediately or after a delay.

                                                  It's pretty much everything I could wish for from a filesystem, though I haven't actually taken the time to try it out yet. AFAIK it's still somewhat experimental, more or less in beta.
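
                                                   From memory of the bcachefs docs, the tiering is declared at format time via targets, something like this (devices and labels are placeholders, and as noted it's still experimental):

                                                     bcachefs format \
                                                       --label=ssd.ssd1 /dev/nvme0n1 \
                                                       --label=hdd.hdd1 /dev/sda \
                                                       --label=hdd.hdd2 /dev/sdb \
                                                       --foreground_target=ssd \
                                                       --promote_target=ssd \
                                                       --background_target=hdd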

                                                  • cassianoleal a year ago

                                                    Is it fully merged to the kernel? I remember a few weeks ago some drama between Kent Overstreet and Linus about it but I didn't go into details. Has that been resolved?

                                                    Edit: or maybe it was some drama over userspace tooling, I can't remember tbh.

                                                    • MattTheRealOne a year ago

                                                       Bcachefs has been fully merged into the kernel since 6.7. The drama was around Overstreet trying to merge significant code changes into a release-candidate kernel that should only be receiving minor bug fixes at that stage of the development process. It was a developer communication issue, not anything that impacts users. The changes will just have to wait until the next kernel version.

                                                  • ThePowerOfFuet a year ago

                                                    Bcachefs has not proven that it can be trusted to not eat data, and has also recently undergone an on-disk layout change which further demonstrates its lack of maturity.

                                                  • hnlmorg a year ago

                                                     I'm not aware of anything that directly matches your description; however, all major operating systems do cache filesystem objects in RAM. So if you pre-read the file, it should be read back from cache when you come to stream it.

                                                    Additionally, ZFS supports using SSDs to supplement the cache.

                                                    • dialup_sounds a year ago

                                                      Reminded of this -- ZFS on Apple's Fusion Drive

                                                      http://jolly.jinx.de/teclog/2012.10.31.02-fusion-drive-loose...

                                                      • throw0101d a year ago

                                                        > I don't know if this exists or not, but I'd like to try something like a fuse filesystem which can transparently copy a file to a fast scratch SSD when it is first accessed.

                                                        ZFS has caching for writes (SLOG)[0][1] and reads (L2ARC),[2][3] which was introduced many years ago when HDDs were cheap and flash was still very, very expensive:

                                                        * https://www.brendangregg.com/blog/2009-10-08/hybrid-storage-...

                                                        [0] https://openzfs.github.io/openzfs-docs/man/master/7/zpoolcon...

                                                        [1] https://openzfs.github.io/openzfs-docs/man/master/8/zpool-cr...

                                                        [2] https://openzfs.github.io/openzfs-docs/man/master/7/zpoolcon...

                                                        [3] https://openzfs.github.io/openzfs-docs/man/master/8/zpool-ad...
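
                                                         Adding them to an existing pool is a one-liner each; a sketch with placeholder pool/device names:

                                                           # SLOG: mirrored, since it briefly holds not-yet-committed sync writes
                                                           zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1
                                                           # L2ARC read cache: a single device is fine, losing it is harmless
                                                           zpool add tank cache /dev/nvme2n1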

                                                        • aidenn0 a year ago

                                                          I should point out that the SLOG only caches synchronous writes, which are written twice with ZFS.

                                                           Also, the L2ARC is great, but it does still have RAM overhead. There are also useful tunables. I had a workload on a RAM-limited machine where directory walking was common but data reads were fairly random, and an L2ARC configured for metadata only sped it up by a large amount.
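
                                                           The metadata-only L2ARC is a per-dataset property, e.g. (placeholder names):

                                                             # only cache metadata (directory walks etc.) in the L2ARC for this dataset
                                                             zfs set secondarycache=metadata tank/archive
                                                             # secondarycache accepts: all | none | metadata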

                                                          • kaliszad a year ago

                                                             You can also have a dedicated special vdev for metadata only and/or small files. ZFS can pull off many tricks if you configure it well. Of course, the L2ARC is better if you don't trust the caching device that much (e.g. you only have a single SSD).

                                                            • flemhans a year ago

                                                              You can also configure an L2ARC to only hold metadata, which is safer than a metadata device.

                                                          • magicalhippo a year ago

                                                            > ZFS has caching for writes

                                                            Not really.

                                                            It will accumulate synchronous writes into the ZIL, and you put the ZIL on a fast SLOG vdev. But it will only do so for a limited amount of time/space, and is not meant as a proper write-back cache but rather as a means to quickly service synchronous writes.

                                                            By default asynchronous writes do not use the ZIL, and hence SLOG vdev at all. You can force it to, but that can also be a bad idea unless you have Optane drives as you're then bottlenecked by the ZIL/SLOG.

                                                            • telgareith a year ago

                                                              "Fast" is meaningless.

                                                               A SLOG needs good throughput and low latency at a queue depth of 1.

                                                          • anotherhue a year ago

                                                             mpv has --cache-size (or something) that you can set to a few GB. If you run out of RAM it should swap to your SSD.

                                                            Edit: demuxer-max-bytes=2147483647

                                                            • 3np a year ago

                                                              It may not be 100% what you're looking for and will probably not make your drives silent while streaming but putting L2ARC on that SSD and tweaking prefetch might get you a good way there.

                                                              Another much simpler filesystem-agnostic alternative would be to copy it over to the SSD with a script and commence streaming from there. You'll have to wait for the entire file to copy for the stream to start, though. I think some streaming servers may actually support this natively if you mount /var and/or /var/tmp on the faster drive and configure it to utilize it as a "cache".

                                                              • spockz a year ago

                                                                 Just to test whether your OS setup etc. is already up to par, try reading the whole file, e.g. by calculating the hash with something like `md5` (yes, md5 is not secure, I know). This should put the file mostly in the OS cache. But with video files being able to hit more than 50GiB in size these days, you need quite a lot of RAM to keep it all in cache. Maybe you can set the SSD as a scratch disk for swap? I'm not sure how/if you can tweak what it is used for.

                                                                As a sibling says, ZFS should support this pretty transparently.

                                                                • undefined a year ago
                                                                  [deleted]
                                                                • toast0 a year ago

                                                                  If you've got enough ram, you might be able to tune prefetching to prefetch the whole file? Although, I'm not sure how tunable that actually is.

                                                                  • iforgotpassword a year ago

                                                                    Yes, I think nowadays you do this with

                                                                      blockdev --setra <num_sectors> /dev/sdX
                                                                    
                                                                    But I feel like there was a sysctl for this too in the past. I used it back in the day to make the HDD in my laptop spin down immediately after a new song started playing in rhythmbox by setting it to 16MB.
                                                                  • madeofpalk a year ago

                                                                    This is essentially what macOS's Fusion Drive is/was https://en.wikipedia.org/wiki/Fusion_Drive

                                                                     I'm unsure if they ship any Macs with these anymore. I guess not, since the Apple Silicon iMacs don't have spinning hard drives?

                                                                    • _hyn3 a year ago

                                                                         cat filename > /dev/null
                                                                      
                                                                      Reads the entire file into the OS buffer.
                                                                      • sulandor a year ago

                                                                        add this to mpv.conf

                                                                           cache=yes                                                                                                                                                                     
                                                                           demuxer-max-bytes=5G                                                                                                                                                          
                                                                           demuxer-max-back-bytes=5G
                                                                        • 2OEH8eoCRo0 a year ago

                                                                          You could use overlayfs with the upper layer on the SSD "scratch" and trigger a write operation
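
                                                                           A sketch of that arrangement, with placeholder mount points (upperdir and workdir must be on the same SSD filesystem):

                                                                             # HDD array below, SSD scratch on top
                                                                             mount -t overlay overlay \
                                                                               -o lowerdir=/mnt/array/media,upperdir=/mnt/ssd/upper,workdir=/mnt/ssd/work \
                                                                               /mnt/media
                                                                             # a write to a file (even a touch) triggers copy-up to the SSD, so later reads come from there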

                                                                          • naming_the_user a year ago

                                                                            Depending on the file-size I wonder if it'd be better to just prefetch the entire file into RAM.

                                                                            • fsckboy a year ago

                                                                               good idea, but that's not his problem. he needs a media player that will sprint through to the end of the file when he first opens it. They don't do that cuz they figure they might be streaming from the net so why tax that part of the system.

                                                                              • hamandcheese a year ago

                                                                                Thank you, yes this is my problem which most commenters here are missing.

                                                                                I'm using Jellyfin server and Infuse on Apple TV, so I don't have a great way to force-slurp a file on first read, at least not without patching Jellyfin. And I'm not super eager to learn C#.

                                                                                Even if I were streaming to mpv, most of my lan is only gigabit which is much less than what my storage array can put out.

                                                                                • throw73737 a year ago

                                                                                  VLC has all sorts of prefetch settings.

                                                                                  • usefulcat a year ago

                                                                                    With ZFS that wouldn't necessarily help. I've been using ZFS at work for many years with mostly large files. Even if I repeatedly read the same file as fast as possible, ZFS will not cache the entire thing in memory (there's no shortage of RAM so that's not it). This is unlike most linux filesystems, which are usually pretty aggressive with caching.

                                                                                    Maybe there is some combination of settings that will get it to cache more aggressively; just saying that it's not a given that it will do so.

                                                                                    • kaliszad a year ago

                                                                                      The ZFS ARC cache tries to keep a balance between most frequently and least recently used data. Also, by default, ZFS ARC only fills out to a maximum of about half the available RAM. You can change that at runtime (by writing the size in bytes to

                                                                                        /sys/module/zfs/parameters/zfs_arc_max
                                                                                      
                                                                                      or setting the module e.g. in

                                                                                         /etc/modprobe.d/zfs.conf
                                                                                      
                                                                                      to something like this

                                                                                        options zfs zfs_arc_max=<size in bytes>
                                                                                      
                                                                                      ). But be careful, as the ZFS ARC does not play that nice with the OOM killer.
                                                                                • nicman23 a year ago

                                                                                   i mean you can do that with a simple wrapper

                                                                                   have an SSD mounted at e.g. /tmp/movies

                                                                                   and create a script in .bin/ (or whatever):

                                                                                     #!/bin/sh
                                                                                     # copy the file(s) onto the SSD scratch dir first, then play from there
                                                                                     tmp_m="$(mktemp -d /tmp/movies/XXXXXX)"
                                                                                     cp "$@" "$tmp_m"
                                                                                     mpv "$tmp_m"/*
                                                                                     rm -rf "$tmp_m"

                                                                                  please note i have not tried the script but it probably works

                                                                                • m463 a year ago

                                                                                  I run proxmox, and ever since day 1 I noticed it hits the disk.

                                                                                  a LOT.

                                                                                   I dug into it and even without ANY VMs or containers running, it writes a bunch of stuff out every second.

                                                                                  I turned off a bunch of stuff, I think:

                                                                                    systemctl disable pve-ha-crm
                                                                                    systemctl disable pve-ha-lrm
                                                                                  
                                                                                  But stuff like /var/lib/pve-firewall and /var/lib/rrdcached was still written to every second.

                                                                                  I think I played around with commit=n mount and also

                                                                                  The point of this is - I tried running proxmox with zfs, and it wrote to the disk even more often.

                                                                                  maybe ok for physical hard disks, but I didn't want to burn out my ssd immediately.

                                                                                  for physical disks it could be noisy

                                                                                  • jftuga a year ago

                                                                                    Is there an online calculator to help you find the optimal combination of # of drives, raid level, and block size?

                                                                                     For example, I'm interested in setting up a new RAID-Z2 pool of disks and would like to minimize noise and number of writes. Should I use 4 drives or 6? Also, what would be the optimal block size(s) in this scenario?

                                                                                  • hi-v-rocknroll a year ago

                                                                                    My 4U JBOD NAS box noise is dominated by the goddamn 1U-style jet engine fans.

                                                                                    45 helium HDDs themselves are relatively quiet.

                                                                                     PS: I ditched non-Solaris ZFS several years ago after ZoL destroyed itself, became unable to mount read-write, and the community shrugged at the glaring fragility. XFS + mdadm (RAID 10) are solid and work. Boring and reliable get less press, but I like working over not working. Maybe folks here run Sun Thumpers at home, which would be a form of ZFS that works.
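
                                                                                     For reference, the XFS + mdadm RAID 10 setup is just a couple of commands (placeholder devices):

                                                                                       # four-disk RAID 10 array, then XFS on top
                                                                                       mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[abcd]1
                                                                                       mkfs.xfs /dev/md0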

                                                                                    • kaliszad a year ago

                                                                                       Now the common code base of upstream OpenZFS is the FreeBSD/Linux code. Things have changed a lot with OpenZFS 2.0 AFAIK.

                                                                                      Yeah, server vendors are a bit crazy with small jet engine fans. A 4U chassis could easily house 80, 92 or even 120 mm fans which could spin much slower with a much higher air flow. That would of course also be much more efficient.

                                                                                    • undefined a year ago
                                                                                      [deleted]
                                                                                      • nubinetwork a year ago

                                                                                         I don't think you can really get away from the noise; ZFS writes to disk every 5 seconds pretty much all the time...

                                                                                        • Maledictus a year ago

                                                                                           I expected a way to find out what the heck the system is sending to those disks. Like per process, and what the kernel/ZFS is adding.

                                                                                          • M95D a year ago

                                                                                            It can be done with CONFIG_DM_LOG_WRITES and another set of drives, but AFAIK, it needs to be set up before (or more exactly under) zfs.

                                                                                            • hi-v-rocknroll a year ago

                                                                                              I'd have a look at jobs like scrubbing.

                                                                                              • Sesse__ a year ago

                                                                                                blktrace is fairly useful in this regard.

                                                                                              • undefined a year ago
                                                                                                [deleted]