• pjmlp a year ago

    For proper statistics, use VisualVM or Flight Recorder, if you are using an OpenJDK-derived JVM implementation.

    Also note that not all JVMs are made alike, and there are plenty to choose from.

    • hashmash a year ago

      When using the `-XX:+PerfDisableSharedMem` workaround, VisualVM cannot attach to the running process anymore.
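
      For reference, the workaround is just a launch flag (the jar name below is only a placeholder); it keeps the perf counters in ordinary memory instead of the memory-mapped hsperfdata file under /tmp, which is presumably why tools that discover local JVMs through that file stop seeing the process:

          java -XX:+PerfDisableSharedMem -jar app.jar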

    • sltkr a year ago

      > The pauses occur even [..] if you call mlock

      I wonder how this is even possible. The only scenario I can think of involves a page fault on the page table itself (i.e., the page is locked into memory, but a page fault occurs during virtual-to-physical address translation). Does anyone know the real reason?

      • survivedurcode a year ago

        Probably because mapped pages, even if they are locked into memory, are not allowed to stay dirty forever. Does this help? https://stackoverflow.com/a/11024388 (In contrast, if you mlocked the pages but never wrote to them, you probably would not encounter read pauses.)

      • cogman10 a year ago

        > in /tmp

        Why is `/tmp` on disk and not a tmpfs mount?

        • sltkr a year ago

          There is no law that says /tmp must be on tmpfs, and historically this wasn't done, because tmpfs is limited in size to some fraction of the system's RAM, while /tmp may be used to store much larger files.

          For example, GNU sort can sort arbitrarily large input files, which is implemented by splitting the input into sorted chunks that are written to a temporary directory, /tmp by default. But this is based on the assumption that /tmp can store files significantly larger than what fits in memory; otherwise the point is moot. So using tmpfs makes /tmp useless for this type of operation.

          In the end, it's a trade-off between performance and disk space. I also prefer to mount /tmp on tmpfs for performance reasons, but you should not assume that this is the case on all systems.

          • aidenn0 a year ago

            While I run /tmp on disk, I should point out that tmpfs is not limited to the size of RAM; contents of tmpfs can be swapped out just like any other memory allocation.

            • funcDropShadow a year ago

              That is one of the reasons why we should still have swap space.

          • aidenn0 a year ago

            Why would I want it on tmpfs? The only advantage I see is slightly improved boot times (/tmp is typically cleared on boot, which is obviously not necessary for tmpfs).

            • hinkley a year ago

              Slightly simpler handling for Docker containers, particularly if you run multiple copies of the same image on one box (blue-green deploys, process-per-CPU programming languages, etc.).

              • TacticalCoder a year ago

                > Why would I want it on tmpfs?

                /tmp on tmpfs is now the default in several distros. Not that this answers your question.

            • ta988 a year ago

              Is it still the case?

              • ackfoobar a year ago

                Probably yes.

                https://bugs.openjdk.org/browse/JDK-8076103

                Closed with "Won't Fix".

              • lbalazscs a year ago

                In 2015 there was no ZGC. Today ZGC (an optional garbage collector optimized for latency) guarantees that there will be no GC pauses longer than a millisecond.

                • survivedurcode a year ago

                  I would double-check your answer. These are pauses due to time spent writing to diagnostic outputs; they are not traditional collection pauses. This affects both jstat and GC log writes (i.e., GC log writes will block the app in just the same way).
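
                  To make that concrete, with unified logging (JDK 9+) the GC log destination is an explicit path, so you can at least point it at fast local storage or a tmpfs such as /run rather than a slow disk (the path and jar name below are only examples):

                      java -Xlog:gc*:file=/run/myapp/gc.log -jar app.jar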

                  • pjmlp a year ago

                    Which is why for anything serious one should be using Flight Recorder instead.

                    • funcDropShadow a year ago

                      Or /tmp should be a tmpfs as it is on most current Linux distributions.

                  • esaym a year ago

                    These modern garbage collectors are not simply free, though. I got bored last year and went on a deep dive into GC params for Minecraft. For my needs I ended up with: -XX:+UseParallelGC -XX:MaxGCPauseMillis=300 -Xmx2G -Xms768M

                    When flying around in spectator mode, you'd see 3 to 4 processes using 100% CPU. Changing to more modern collectors just added more load to the system. ZGC was the worst, with 16+ processes all using 100% CPU. With ParallelGC, yes, you'll get the occasional pause, but at least my laptop isn't burning hot.

                    • plandis a year ago

                      Yes, no GC is free (well, perhaps Epsilon comes close :)

                      It's a low-pause GC, so latencies, particularly tail latencies, can be more predictable and bounded. The trade-off is that it uses more CPU time and memory in order to operate.

                      • mike_hearn a year ago

                        Minecraft really needs generational ZGC (totally brand new) because Minecraft generates garbage at prodigious rates and non-generational GC collects less garbage per unit time.
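
                        For anyone wanting to try it: generational ZGC shipped as an opt-in mode in JDK 21, so enabling it is just a pair of flags (the jar name is a placeholder):

                            java -XX:+UseZGC -XX:+ZGenerational -jar server.jar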

                        • namibj a year ago

                          You'll need more spare heap for ZGC.

                          • ackfoobar a year ago

                            And using generational ZGC will probably lower CPU usage a lot.

                          • tuna74 a year ago

                            Yes, this is why GCs work so badly for 3D games, since you are usually limited by memory bandwidth and latency, especially on systems with unified RAM (no separate GPU RAM).

                          • kanzenryu2 a year ago

                            Sadly, in many cases no; it's not magic. This nirvana is restricted to cases where there is CPU bandwidth available (e.g. some cores idle) and plenty of free RAM. When either CPU or RAM is less plentiful... hello pauses, my old friend.

                            • sunshowers a year ago

                              This is why memory-bound services generally use languages without mandatory GC. Tail latency is a killer.

                              Rust's memory management does have some issues in practice (large synchronous drops), but they're relatively minor and easily addressed compared to mandatory GC.

                              • foobarchu a year ago

                                In cases where Java is unavoidable and you're working with large blocks, it is possible to sort of skirt around the GC with certain kinds of large buffers that live outside the heap.

                                I've used these to great success when I had multiple long-lived gigabyte+ arrays. Without off-heap memory, these tended to really slow the GC down (to be fair, I didn't have top-of-the-line GC algorithms, because the OpenJ9 JVM had been mandated).

                                • pkolaczk a year ago

                                  Managing off-heap memory in Java is a pain, even worse than manual memory management in C. Unlike C++ and Rust, Java offers no tools for manual memory management, and its idioms, like frequent use of exceptions, make writing such code extremely error-prone.

                                  • foobarchu a year ago

                                    ByteBuffers and direct memory make it possible.

                                    https://docs.oracle.com/javase/8/docs/api/java/nio/ByteBuffe...

                                    But it is a pain, and only really useful if you have a big, long-lived object. In my case it was loading massive arrays into memory for access by the API server frontend. They needed to be completely overwritten once an hour, and it turns out that allocating 40% of system memory and then immediately releasing another 40% back to the GC at once is a good recipe for long pauses or high CPU use.
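
                                    For anyone curious, a minimal sketch of the direct-buffer approach (the 1 GiB size and class name are made up for illustration):

                                        import java.nio.ByteBuffer;

                                        class DirectBufferSketch {
                                            public static void main(String[] args) {
                                                // Reserve 1 GiB outside the Java heap; the GC never scans or copies it.
                                                // Total direct memory is capped by -XX:MaxDirectMemorySize rather than -Xmx.
                                                ByteBuffer data = ByteBuffer.allocateDirect(1 << 30);

                                                // Access is by absolute offset, not through object references.
                                                data.putLong(0, 42L);
                                                System.out.println(data.getLong(0)); // prints 42
                                            }
                                        }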

                            • hawk_ a year ago

                                ZGC doesn't remove safepoint requests on threads, which are the root cause. "Guarantees" here come with very heavy quotes.

                              • funcDropShadow a year ago

                                  But it reduces the number of safepoint requests by doing more work in parallel with the running application.

                              • hinkley a year ago

                                  A GC implementation that avoids ineffective GC activity is less affected by the cost of statistics gathering (no news is good news), but it is still affected.

                              • ahoka a year ago

                                Not with Linux 5.x AFAIK.

                              • jakewins a year ago

                                  Man, I remember being bitten by this when migrating to AWS. It had snuck through on fast on-prem disks, but as soon as /tmp was on EBS, oh boy, it was a doozy.

                                • opentokix a year ago

                                    Using eBPF, perf, and flame graphs would have let him find this in a couple of hours. That wasn't available to him in 2015, though.

                                  • hinkley a year ago

                                      Stuff like this is why, back when I still wrote Java, we only wanted to turn on JVM telemetry on production boxes if they were canaries. Slowness you can work around by deploying more copies, but jitter is not something you can do much about.

                                    • smrtinsert a year ago

                                      Is this account a submission bot of some sort?

                                      • throwaway04324 a year ago

                                          The account seems to be connected to a real person, but it has a high number of submissions (over 350 in the past 30 days).

                                        • geodel a year ago

                                          Also spending a lot cause higher credit card bills.