Seconds Since the Epoch (aphyr.com)
Submitted by zdw 19 hours ago
  • cbarrick 3 hours ago

    > There’s an ongoing effort to end leap seconds, hopefully by 2035.

    I don't really like this plan.

    The entire point of UTC is to be some integer number of seconds away from TAI to approximate mean solar time (MST).

    If we no longer want to track MST, then we should just switch to TAI. Having UTC drift away from MST leaves it in a bastardized state where it still has historical leap seconds that need to be accounted for, but those leap seconds no longer serve any purpose.

    • phicoh 6 minutes ago

      There is no such thing as TAI. TAI is what you get if you start with UTC and then add back the number of leap seconds you care about. TAI is not maintained as some sort of separate standard quantity.

      In most (all?) countries, civil time is based on UTC. Nobody is going to shift all clocks in the world forward by about half a minute because it is somewhat more pure.

      GPS time also has an offset compared to TAI. Nobody cares about that. Just like nobody really cares about the Unix epoch. As long as results are consistent.
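
      For what it's worth, converting between the scales is just constant arithmetic between leap second announcements; a minimal sketch in Python (helper names are mine, offsets valid since the 2016-12-31 leap second):

          # Offsets between the time scales, valid since the 2016-12-31 leap
          # second and until the next one is announced.
          TAI_MINUS_UTC = 37  # TAI runs ahead of UTC (10 s initial offset + 27 leaps)
          TAI_MINUS_GPS = 19  # fixed: GPS matched UTC at its 1980-01-06 epoch

          def utc_to_tai(utc: float) -> float:
              return utc + TAI_MINUS_UTC

          def tai_to_gps(tai: float) -> float:
              return tai - TAI_MINUS_GPS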

    • ascorbic an hour ago

      I just finished reading "A Deepness in the Sky", a 2000 SF book by Vernor Vinge. It's a great book with an unexpected reference to seconds since the epoch.

      >Take the Traders' method of timekeeping. The frame corrections were incredibly complex - and down at the very bottom of it was a little program that ran a counter. Second by second, the Qeng Ho counted from the instant that a human had first set foot on Old Earth's moon. But if you looked at it still more closely ... the starting instant was actually about fifteen million seconds later, the 0-second of one of Humankind's first computer operating systems.

      • jvanderbot an hour ago

        That is one of my favorite books of all time. The use of subtle software references is really great.

        I recommend the Bobiverse series for anyone who wants more "computer science in space", or Permutation City for anyone who wants more "exploration of humans + simulations and computers".

      • colanderman 3 hours ago

        Note also that the modern "UTC epoch" is January 1, 1972. Before this date, UTC used a different second than TAI: [1]

        > As an intermediate step at the end of 1971, there was a final irregular jump of exactly 0.107758 TAI seconds, making the total of all the small time steps and frequency shifts in UTC or TAI during 1958–1971 exactly ten seconds, so that 1 January 1972 00:00:00 UTC was 1 January 1972 00:00:10 TAI exactly, and a whole number of seconds thereafter. At the same time, the tick rate of UTC was changed to exactly match TAI. UTC also started to track UT1 rather than UT2.

        So Unix times in the years 1970 and 1971 do not actually match UTC times from that period. [2]

        [1] https://en.wikipedia.org/wiki/Coordinated_Universal_Time#His...

        [2] https://en.wikipedia.org/wiki/Unix_time#UTC_basis

        • weinzierl 29 minutes ago

          A funny consequence of this is that there are people alive today who do not know (and never will know) their exact age in seconds[1].

          This is true even if we assume the time on the birth certificate was precise down to the second. It is because what was considered the length of a second during part of their life varied significantly from what we (usually) consider a second now.

          [1] Second as in the SI second: 9192631770/s being the unperturbed ground-state hyperfine transition frequency of the caesium-133 atom

        • sarusso 24 minutes ago

          I remember hearing at a conference about 10 years ago that Google does not make use of leap seconds. Instead, they spread them across regular seconds (they modified their NTP servers). I quickly searched online and found the original article [1].

          [1] https://googleblog.blogspot.com/2011/09/time-technology-and-...
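
          The idea is easy to sketch: instead of stepping the clock, the server slews it across a window around the leap. A toy Python version of the 24-hour linear smear Google later standardized on (their 2011 post describes an earlier cosine-shaped variant; this is an illustration, not their production algorithm):

              # Toy 24-hour linear leap smear, centered on a positive leap second.
              # `t` is a continuous count of SI seconds with no leap handling.
              LEAP = 1483228800   # 2017-01-01T00:00:00Z, just after a real leap second
              WINDOW = 86400.0    # smear across 24 hours centered on the leap

              def smeared_time(t: float) -> float:
                  start = LEAP - WINDOW / 2
                  if t <= start:
                      return t
                  elapsed = min(t - start, WINDOW)
                  # Each smeared second is stretched by 1/86400 (~11.6 ppm), so
                  # the clock exits the window exactly one second behind.
                  return t - elapsed / WINDOW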

          • move-on-by 17 hours ago

            Without fail, if I read about time keeping, I learn something new. I had always thought of Unix time as the simplest way to track time (as long as you consider rollovers). I knew of leap seconds, but somehow didn't think they applied here. Clearly I hadn't thought about it enough. Good post.

            I also read the link for “UTC, GPS, LORAN and TAI”. It’s an interesting contrast that GPS time does not account for leap seconds.

            • foobar1962 14 hours ago

              Saying that something happened x-number of seconds (or minutes, hours, days or weeks) ago (or in the future) is simple: it’s giving that point in time a calendar date that’s tricky.

              • miki123211 an hour ago

                > Saying that something happened x-number of [...]days or weeks) ago in the future) is simple

                It's not, actually. Does 2 days and 1 hour ago mean 48, 49 or 50 hours, if there was a daylight saving jump in the meantime? If it's 3PM and something is due to happen in 3 days and 2 hours, the user is going to assume and prepare for 5PM, but what if there's a daylight saving jump in the meantime? What happens to "in 3 days and 2 hours" if there's a leap second happening tomorrow that some systems know about and some don't?

                You rarely want to be thinking in terms of deltas when considering future events. If there is an event that you want to happen on Jan 1, 2030 at 6 PM CET, there is no way to express that as a number of seconds between now and then, because you don't know whether the EU abolishes DST between now and 2030 or not.

                To reiterate this point, there is no way to make an accurate, constantly decreasing countdown of seconds to 6 PM CET on Jan 1, 2030, because nobody actually knows when that moment is going to happen yet.
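
                With Python's zoneinfo you can see exactly where the unknowable part lives; the arithmetic is trivial, but it bakes in whatever tzdata rules are installed today (a sketch, with the event and zone chosen for illustration):

                    from datetime import datetime
                    from zoneinfo import ZoneInfo  # Python 3.9+, reads the tz database

                    # "6 PM on Jan 1, 2030 in CET" is a civil time, not an instant.
                    event = datetime(2030, 1, 1, 18, 0, tzinfo=ZoneInfo("Europe/Paris"))

                    def seconds_until(e: datetime) -> float:
                        # The civil-to-instant conversion happens here, at query time.
                        # If the EU changes its UTC offset before 2030, a tzdata
                        # update silently changes this result.
                        return (e - datetime.now(ZoneInfo("UTC"))).total_seconds()

                    print(seconds_until(event))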

                • Izkata an hour ago

                  You ignored the last part of their comment. All your examples are things they did say are hard.

                  Also natural events are the other way around, we can know they're X in the future but not the exact calendar date/time.

                  • PaulDavisThe1st 28 minutes ago

                    No. The problems begin because GP included the idea of saying "N <calendar units> in the future".

                    If the definition of a future time was limited to hours, minutes and/or seconds, then it would be true that the only hard part is answering "what calendrical time and date is that?"

                    But if you can say "1 day in the future", you're already slamming into problems before even getting to ask that question.

                • GolDDranks 7 hours ago

                  But because of the UNIX timestamp "re-synchronization" to the current calendar date, you can't use UNIX timestamps for those "delta seconds" calculations if you care about the _actual_ number of seconds since something happened.

                  • wodenokoto 9 hours ago

                    Simple as long as your precision is milliseconds and you don't account for space travel.

                    We can measure the difference in the rate of time between a valley and a mountain ("just" take an atomic clock up a mountain and wait for a bit, then bring it back to your lab, where the other atomic clock is now out of sync).
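
                    To first order the mountain clock runs fast by g*dh/c^2, which is easy to put numbers on (a back-of-the-envelope sketch):

                        # Gravitational time dilation between a valley and a mountaintop,
                        # first order: the higher clock runs fast by g*dh/c^2.
                        g = 9.81            # m/s^2
                        c = 299_792_458     # m/s
                        dh = 1000.0         # a 1 km difference in elevation

                        rate = g * dh / c**2
                        print(rate)                          # ~1.1e-13 fractional offset
                        print(rate * 86400 * 1e9, "ns/day")  # ~9 ns gained per day up top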

                  • mytailorisrich 3 hours ago

                    I have come to the conclusion that TAI is the simplest and that anything else should only be used by conversion from TAI when needed (e.g. representation or interoperability).

                  • prmph 44 minutes ago

                    The more I learn about the computation of time, the more unbelievably complex getting it right seems. I thought I was pretty sophisticated in my view of time handling, but just in the last couple of months there have been a series of posts on HN that have opened my eyes even more to how leaky this abstraction of computer time is.

                    Pretty soon we'll have to defer to deep experts and fundamental libraries to do anything at all with time in our applications, a la security and cryptography.

                    • DrBazza 8 hours ago

                      There's a certain exchange out there that I wrote some code for recently, which runs on top of VAX, or rather OpenVMS, and has an epoch of November 17, 1858: the first time I've seen a non-Unix epoch mentioned in my career. Fortunately, it is abstracted to the Unix epoch in the code I was using.

                      • pavlov 8 hours ago

                        Apparently the 1858 epoch comes from an astronomy standard calendar called the Julian Day, where day zero was in 4713 BC:

                        https://www.slac.stanford.edu/~rkj/crazytime.txt

                        To make these dates fit in computer memory in the 1950s, they offset the calendar by 2.4 million days, placing day zero on November 17, 1858.
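
                        The bookkeeping is simple; the extra 0.5 in the offset also moves the day boundary from astronomers' noon to civil midnight (a sketch, using the standard constants: the Unix epoch is JD 2440587.5, and MJD = JD - 2400000.5):

                            # Julian Date and Modified Julian Date from a Unix timestamp.
                            def unix_to_jd(t: float) -> float:
                                return t / 86400.0 + 2440587.5  # 1970-01-01T00:00Z as a JD

                            def unix_to_mjd(t: float) -> float:
                                return unix_to_jd(t) - 2400000.5

                            print(unix_to_mjd(0))  # 40587.0; MJD day 0 is 1858-11-17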

                        • spiffytech 6 hours ago

                          There's an old Microsoft tale related to the conflict between Excel's epoch of Jan 1 1900 vs Basic's Dec 31 1899:

                          https://www.joelonsoftware.com/2006/06/16/my-first-billg-rev...

                          • evmar 2 hours ago

                            Another common computing system to be aware of: the Windows epoch is 1-Jan-1601.

                          • schneehertz 14 hours ago

                            This means that some time points cannot be represented by POSIX timestamps, and some POSIX timestamps do not correspond to any real time

                            • paulddraper 2 minutes ago

                              No.

                              If you sat down and watched https://time.is/UTC you would not see any paused or skipped seconds.

                              Sort of. What you would find is that occasionally seconds would become imperceptibly slower.

                              Like 0.001% slower for around a day.

                              • GolDDranks 7 hours ago

                                What are POSIX timestamps that don't correspond to any real time? Or do you mean in the future if there is a negative leap second?

                                • growse 11 hours ago

                                  This has always been true. Pre 1970 is not defined in Unix time.

                                  • usrnm 7 hours ago

                                    Why? time_t is signed

                                    • growse 7 hours ago

                                      From IEEE 1003.1 (and TFA):

                                      > If year < 1970 or the value is negative, the relationship is undefined.

                                      • 8n4vidtmkvmk an hour ago

                                        Probably because the Gregorian calendar didn't always exist. How do you map an int to a calendar that doesn't exist?

                                        • layer8 3 hours ago

                                          In addition to being formally undefined (see sibling comment), APIs sometimes use negative time_t values to indicate error conditions and the like.

                                      • marcosdumay 5 hours ago

                                        Well, at least there isn't any POSIX timestamp that corresponds to more than one real time point. So, it's better than the one representation people use for everything.

                                        • brianpan 3 hours ago

                                          Not yet.

                                      • jonnycomputer 15 hours ago

                                        I think this article ruined my Christmas. Is nothing sacred? Seconds should be seconds since the epoch. Why should I care if it drifts off the solar day? Let seconds-since-epoch-to-date-representation converters be responsible for making the correction. What am I missing?

                                        • christina97 15 hours ago

                                          The way it is is really how we all want it. 86400 seconds = 1 day. And we operate under the assumption that midnight UTC is always a multiple of 86400.

                                          We don’t want every piece of software to start hardcoding leap second introductions and handling smears and requiring a way to update it within a month when a new leap second is introduced.

                                          You never worried or thought about it before, and you don’t need to! It’s done in the right way.

                                          • lmm 14 hours ago

                                            > We don’t want every piece of software to start hardcoding leap second introductions and handling smears and requiring a way to update it within a month when a new leap second is introduced.

                                            That kind of thing is already needed for timezone handling. Any piece of software that handles human-facing time needs regular updates.

                                            I think it would make most of our lives easier if machine time was ~29 seconds off from human time. It would be a red flag for carelessly programmed applications, and make it harder to confuse system time with human-facing UK time.

                                            • withinboredom 26 minutes ago

                                              You can set your OS to any timezone you want to. If you want it to be 29 seconds off, go for it. The tz database is open source.

                                            • demurgos an hour ago

                                              I don't want it this way: it mixes a data model concern (timestamps) with a UI concern (calendars). As others have said, it would be much better if we used TAI and handled leap seconds at the same level as timezones.

                                              • turminal 14 hours ago

                                                But most software that would need to care about that already needs to care about timezones, and those already need to be regularly updated, sometimes with not much more than a month's notice.

                                                • dmoy 11 hours ago

                                                  I will never forgive Egypt for breaking my shit with a 3 day notice (what was it like 10 years ago?).

                                                  Thankfully for me it was just a bunch of non-production-facing stuff.

                                                  • kragen 4 hours ago

                                                    Was this Morsy's government or Sisi's? If it's Morsy's government you're holding a grudge against, I have some good news for you. (Presumably you're not holding that grudge against random taxi drivers and housewives in Alexandria.)

                                            • xpe 5 hours ago

                                              Is there a synchronized and monotonically increasing measure of time to be found?

                                              • kevindamm 5 hours ago

                                                Not really. GPS time comes close (at least, it avoids leap seconds and DST) but you still have technical issues like clock drift.

                                              • calrain 16 hours ago

                                                When storing dates in a database I always store them in Unix Epoch time and I don't record the timezone information on the date field (it is stored separately if there was a requirement to know the timezone).

                                                Should we instead be storing time stamps in TAI format, and then use functions to convert time to UTC as required, ensuring that any adjustments for planetary tweaks can be performed as required?

                                                I know that timezones are a field of landmines, but again, that is a human construct where timezone boundaries are adjusted over time.

                                                It seems we need to anchor on absolute time, and then render that out to whatever local time format we need, when required.

                                                • lmm 14 hours ago

                                                  > Should we instead be storing time stamps in TAI format, and then use functions to convert time to UTC as required, ensuring that any adjustments for planetary tweaks can be performed as required?

                                                  Yes. TAI or similar is the only sensible way to track "system" time, and a higher-level system should be responsible for converting it to human-facing times; leap second adjustment should happen there, in the same place as time zone conversion.

                                                  Unfortunately Unix standardised the wrong thing and migration is hard.

                                                  • beng-nl 10 hours ago

                                                    I wish there were a TAI timezone: just unmodified, unleaped, untimezoned seconds, forever, in both directions. I was surprised it doesn’t exist.

                                                    • maxnoe 8 hours ago

                                                      TAI is not a time zone. Timezones are a concept of civil timekeeping that is tied to the UTC time scale.

                                                      TAI is a separate time scale and it is used to define UTC.

                                                      There is now CLOCK_TAI in Linux [1], tai_clock [2] in c++ and of course several high level libraries in many languages (e.g. astropy.time in Python [3])
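
                                                      For example, on Linux you can read it directly (Python 3.9+; this assumes the kernel's TAI offset has been set by an NTP daemon such as chrony, otherwise the difference prints as 0):

                                                          import time

                                                          # Compare CLOCK_TAI with CLOCK_REALTIME (UTC-based).
                                                          tai = time.clock_gettime(time.CLOCK_TAI)
                                                          utc = time.clock_gettime(time.CLOCK_REALTIME)
                                                          print(f"TAI - UTC = {tai - utc:.0f} s")  # 37 if configured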

                                                      There are three things you want in a time scale:

                                                      1. Monotonically increasing
                                                      2. Ticking with a fixed frequency, i.e. an integer multiple of the SI second
                                                      3. Aligned with the solar day

                                                      Unfortunately, as always, you can only choose two out of the three.

                                                      TAI is 1 + 2: atomic clocks using the caesium standard, ticking at the frequency that is the definition of the SI second, forever increasing.

                                                      Then there is UT1, which is 1 + 3 (at least as long as no major disaster happens...). It is purely the orientation of the Earth, measured with radio telescopes.

                                                      UTC is 2 + 3, defined with the help of both. It ticks the SI seconds of TAI, but leap seconds are inserted at two possible time slots per year to keep it within 1 second of UT1. The last part is under discussion to be changed to a much longer time, practically eliminating future leap seconds.

                                                      The issue then is that POSIX chose the wrong standard for numerical system clocks. And now it is pretty hard to change and it can also be argued that for performance reasons, it shouldn't be changed, as you more often need the civil time than the monotonic time.

                                                      The remaining issues are:

                                                      * On many systems, it's simple to get TAI
                                                      * Many software systems do not accept the complexity of this topic and instead just return the wrong answer using simplified assumptions, e.g. of no leap seconds in UTC
                                                      * There is no standardized way to handle leap seconds in the Unix time stamp, so on days around the introduction of a leap second, the relationship between the Unix timestamp and the actual UTC or TAI time is not clear; several versions exist, and that results in uncertainty of up to two seconds
                                                      * There might be a negative leap second one day, and nothing is ready for it

                                                      [1] https://www.man7.org/linux/man-pages/man7/vdso.7.html
                                                      [2] https://en.cppreference.com/w/cpp/chrono/tai_clock
                                                      [3] https://docs.astropy.org/en/stable/time/index.html

                                                      • beng-nl 5 hours ago

                                                        Thank you ; it’s kind of you to write such a thoughtful, thorough reply.

                                                        In my original comment, when I wrote timezone, I actually didn’t really mean one of many known civil timezones (because it’s not), but I meant “timezone string configuration in Linux that will then give TAI time, ie stop adjusting it with timezones, daylight savings, or leap seconds”.

                                                        I hadn’t heard of the concept of timescale.

                                                        Personally I think item (3) is worthless for computer (as opposed to human-facing) timekeeping.

                                                        Your explanation is very educational, thank you.

                                                        That said, you say it’s simple to get TAI, but that’s within a programming language. What we need is a way to explicitly specify the meaning of a time (timezone but also timescale, I’m learning), and that that interpretation is stored together with the timestamp.

                                                        I still don’t understand why a TZ=TAI would be so unreasonable or hard to implement as a shorthand for this desire..

                                                        I’m thinking particularly of it being attractive for logfiles and other long term data with time info in it.

                                                        • dfc 3 hours ago

                                                          Why do you think a time scale has to be aligned with solar day? Are you an astronomer or come from an astronomy adjacent background?

                                                          • wrs 3 hours ago

                                                            Of all the definitions and hidden assumptions about time we’re talking about, possibly the oldest one is that the sun is highest at noon.

                                                          • lmm 7 hours ago

                                                            > you more often need the civil time than the monotonic time

                                                            I don't think that's true? You need to time something at the system level (e.g. measure the duration of an operation, or run something at a regular interval) a lot more often than you need a user-facing time.

                                                      • christina97 15 hours ago

                                                        No, almost always no. Most software is written to paper over leap seconds: it really only happens at the clock synchronization level (chrony, for example, implements leap second smearing).

                                                        All your cocks are therefore synchronized to UTC anyway: it would mean you’d have to translate from UTC to TAI when you store things, then undo when you retrieve. It would be a mess.

                                                        • growse 11 hours ago

                                                          Smearing is alluring as a concept right up until you try and implement it in the real world.

                                                          If you control all the computers that all your other computers talk to (and also their time sync sources), then smearing works great. You're effectively inventing your own standard to make Unix time monotonic.

                                                          If, however, your computers need to talk to someone else's computers and have some sort of consensus about what time it is, then the chances are your smearing policy won't match theirs, and you'll disagree on _what time it is_.

                                                          Sometimes these effects are harmless. Sometimes they're unforeseen. If mysterious, infrequent buggy behaviour is your kink, then go for it!

                                                          • ratorx 9 hours ago

                                                            Using time to sync between computers is one of the classic distributed systems problems. It is explicitly recommended against. The number of errors in the regular time stack means that you can't really rely on time being accurate, regardless of leap seconds.

                                                            Computer clock speeds are not really that consistent, so “dead reckoning” style approaches don’t work.

                                                            NTP can only really sync to ~millisecond precision at best. I’m not aware of the state-of-the-art, but NTP errors and smearing errors in the worst case are probably quite similar. If you need more precise synchronisation, you need to implement it differently.

                                                            If you want 2 different computers to have the same time, you either have to solve it at a higher layer by introducing an ordering on events (or equivalent), or use something like atomic clocks.
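
                                                            The classic "higher layer" tool is a logical clock, which orders events by causality and ignores wall time entirely. A minimal Lamport clock sketch (illustrative, not a production design):

                                                                # Minimal Lamport clock: event ordering without trusting wall time.
                                                                class LamportClock:
                                                                    def __init__(self):
                                                                        self.t = 0

                                                                    def tick(self):
                                                                        # A local event, or the timestamp attached to a send.
                                                                        self.t += 1
                                                                        return self.t

                                                                    def observe(self, msg_t):
                                                                        # Merge the timestamp carried by a received message.
                                                                        self.t = max(self.t, msg_t) + 1
                                                                        return self.t

                                                                a, b = LamportClock(), LamportClock()
                                                                stamp = a.tick()         # a sends a message carrying stamp 1
                                                                print(b.observe(stamp))  # 2: ordered after the send, no wall clock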

                                                            • growse 8 hours ago

                                                              Fair, it's often one of those hidden, implicit design assumptions.

                                                              Google explicitly built Spanner (?) around the idea that you can get distributed consistency and availability iff you control All The Clocks.

                                                              Smearing is fine, as long as its interaction with other systems is thought about (and tested!). Nobody wants a surprise (yet actually inevitable) outage at midnight on New Year's Day.

                                                              • gpderetta 8 hours ago

                                                                In practice, with GPS clocks and PTP you can get very good precision, into the microseconds.

                                                                • withinboredom 18 minutes ago

                                                                  Throw in chrony and you can get nanoseconds.

                                                            • sadeshmukh 5 hours ago

                                                              That's quite the typo

                                                              • halper 2 hours ago

                                                                Close to the poles, I'd say the assumption that the cocks be synchronised with UTC is flawed. Had we had cocks, I am afraid they'd be oversleeping at this time of year.

                                                            • semiquaver 16 hours ago

                                                              > and I don't record the timezone information on the date field

                                                              Very few databases actually make it possible to preserve the time zone in a timestamp column. Typically the DB either has no concept of time zone for stored timestamps (e.g. SQL Server) or has "time zone aware" timestamp column types where the input is converted to UTC and the original zone discarded (MySQL, Postgres).

                                                              Oracle is the only DB I’m aware of that can actually round-trip nonlocal zones in its “with time zone” type.

                                                            • hx8 16 hours ago

                                                              Maybe, it really depends on what your systems are storing. Most systems really won't care if you are one second off every few years. For some calculations being a second off is a big deal. I think you should tread carefully when adopting any format that isn't the most popular and have valid reasons for deviating from the norm. The simple act of being different can be expensive.

                                                              • wodenokoto 8 hours ago

                                                                Use your database's native date-time field.

                                                                • SoftTalker an hour ago

                                                                  Seconded. Don't mess around with raw timestamps. If you're using a database, use its date-time data type and functions. They will be much more likely to handle numerous edge cases you've never even thought about.

                                                              • ZeroCool2u 2 hours ago

                                                              More often than I care to admit, I yearn for another of Aphyr's programming interview short stories. Some of my favorite prose, and incredibly in-depth programming.

                                                                • vendiddy 14 hours ago

                                                                  Maybe a naive question but why wasn't the timestamp designed as seconds since the epoch with zero adjustments?

                                                                  Everything would be derived from that.

                                                                  I suppose it would make some math more complex but overall it feels simpler.

                                                                  • growse 11 hours ago

                                                                    With hindsight, we'd do lots of things differently :)

                                                                    I guess they just didn't foresee the problem, or misjudged the impact. I can imagine it being very "let's kick that problem down the road and just do a simple thing for now" approach.

                                                                    • wat10000 4 hours ago

                                                                      UNIX systems at the time probably didn’t care about accuracy to the second being maintained over rare leap second adjustments.

                                                                      Random example, the wonderful RealTime1987A project (https://bmonreal.github.io/RealTime1987A/) talks about detecting neutrinos from the supernova, and what information can be inferred from the timing of the detections. A major source of that data is the Kamiokande-II experiment. The data was recorded to tape by a PDP-11, timestamped by its local clock. That clock was periodically synced with UTC with a sophisticated high-tech procedure that consisted of an operator making a phone call to some time service, then typing the time into the computer. As such, the timestamps recorded by this instrument have error bars of something like +/- one minute.

                                                                      If that’s the sort of world you’re in, trying to account for leap seconds probably seems like a complete waste of effort and precious computer memory.

                                                                    • fragmede 7 hours ago

                                                                      Arguably it's worse if 00:33 on 2024.12.26 has to get run through another function to get the true value of 2024.12.25 T 23:59.

                                                                      The problem is leap seconds. Software just wasn't designed to handle 86401 seconds in a day, which caused incidents at Google, Cloudflare, Qantas, and others. Worried that resolving all possible bugs related to days with 86401 seconds in them was going to be impossible to get right, Google decided to smear that leap second so that the last "second" isn't.

                                                                      And if you've not seen it, there's the falsehoods programmers believe about time article.

                                                                    • sevensor 17 hours ago

                                                                      What I don’t understand is why we would ever assume two clocks in two different places could be compared in a non approximate way. Your clock, your observations of the world state, are always situated in a local context. In the best of all possible cases, the reasons why your clock and time reports from other clocks differ are well understood.

                                                                      • wat10000 5 hours ago

                                                                        GPS depends on widely separated (several times the diameter of Earth) clocks agreeing with each other down to the nanosecond.

                                                                        • withinboredom 14 minutes ago

                                                                          and moving at such high speeds that relativity factors into the equations.

                                                                        • vpaulus 12 hours ago

                                                                          I believe it has some advantages that, while you are waiting at the train station, your clock shows exactly the same time as the train conductor's several miles away from you.

                                                                          • sevensor 9 hours ago

                                                                            Surely not! We could be a whole minute off and I’d still be standing on the platform when the train arrived.

                                                                            • dibujaron 5 hours ago

                                                                              In the US or parts of Europe you could wait there for 10 minutes past the scheduled time and barely notice. In Japan, if the train clock disagreed with the station clock by 30s, causing the train to arrive 30s late, they'd have to write all of the passengers' excuse notes for why they were late to work.

                                                                          • ses1984 15 hours ago

                                                                            I think something like the small angle approximation applies. There are plenty of applications where you can assume clocks are basically in the same frame of reference because relativistic effects are orders of magnitude smaller than your uncertainty.

                                                                            • christina97 15 hours ago

                                                                              The approximation error is so small that you can often ignore it. Hence the concept of exact time.

                                                                              Eg in most computing contexts, you can synchronize clocks close enough to ignore a few nanos difference.

                                                                              • Asraelite 13 hours ago

                                                                                How? Unless you have an atomic clock nearby, they will very quickly drift apart by many nanoseconds again. It's also impossible to synchronize to that level of precision across a network.

                                                                                • prerok 9 hours ago

                                                                                  The Precision Time Protocol is intended to solve this problem:

                                                                                  https://en.m.wikipedia.org/wiki/Precision_Time_Protocol

                                                                                  It does require hardware support, though.

                                                                                  • AlotOfReading 12 hours ago

                                                                                    It's not only possible, you can demonstrate it on your phone. Check the GPS error on your device in a clear area. 1 ft of spatial error is roughly 1ns timing error on the signal (assuming other error sources are zero). Alternatively, you can just look at the published clock errors: http://navigationservices.agi.com/GNSSWeb/PAFPSFViewer.aspx

                                                                                    All the satellites in all of the GNSS constellations are synchronized to each other and every device tracking them to within a few tens of nanoseconds. Yes, atomic clocks are involved, but none of them are corrected locally and they're running at a significantly different rate than "true" time here on earth.

                                                                                    • Asraelite 11 hours ago

                                                                                      That's true, but it's not really the situation I'm thinking of. Your phone is comparing the differences between the timestamps of multiple incoming GNSS signals at a given instant, not using them to set its local clock for future reference.

                                                                                      A better analogy to practical networked computing scenarios would be this: receive a timestamp from a GNSS signal, set your local clock to that, wait a few minutes, then receive a GNSS timestamp again and compare it to your local clock. Use the difference to measure how far you've travelled in those few minutes. If you did that without a local atomic clock then I don't think it would be very accurate.

                                                                                      • wat10000 5 hours ago

                                                                                        Basic hardware gets you a precise GNSS time once per second. Your local clock won’t drift that much in that time, and you can track and compensate for the drift. If you’re in a position to get the signal and have the hardware, then you can have very accurate clocks in your system.

                                                                                        • AlotOfReading 10 hours ago

                                                                                          That's a common way of doing high precision time sync, yes. It's slightly out of phone budget/form factor, but that's what a GPSDO does.

                                                                                          The receiver in your phone also needs pretty good short term stability to track the signal for all of the higher processing. It'd be absolutely fine to depend on PPS output with seconds or minutes between measurements.

                                                                                      • mgaunard 12 hours ago

                                                                          White Rabbit achieves sub-nanosecond time synchronization over a network.

                                                                                        • Asraelite 12 hours ago

                                                                                          Oh wow, that's impressive. Is that over a standard internet connection? Do they need special hardware?

                                                                                          • mgaunard 11 hours ago

                                                                              It does require a special switch, yes.

                                                                                    • pavel_lishin 17 hours ago

                                                                                      camera cuts across to Newton, seething on his side of the desk, his knuckles white as the table visibly starts to crack under his grip

                                                                                    • mgaunard 12 hours ago

                                                                                      The timestamps given in the article seem completely wrong? Also, where would 29 even come from?

                                                                                      The offset between UTC and TAI is 37 seconds.

                                                                                      • possiblywrong 7 hours ago

                                                                                        You are correct. The first example time in the article, "2024-12-25 at 18:54:53 UTC", corresponds to POSIX timestamp 1735152893, not 1735152686. And there have been 27 leap seconds since the 1970 epoch, not 29.

                                                                                        • Retr0id 7 hours ago

                                                                                          I'm also not sure where 29 came from, but the expected offset here is 27 - there have been 27 UTC leap seconds since the unix epoch.

                                                                                        • maxbond 15 hours ago

                                                                                          > ((tm_year - 69) / 4) * 86400

                                                                                          Seems like there's another corner cut here, where the behavior of leap years at the end of a century (where they're skipped if a year is divisible by 100 unless it's divisible by 400) is not accounted for.

                                                                                          I suppose using Unix time for dates in the far future isn't a good idea. I guess I'll file that away.

                                                                                          (For the curious, the way this seems to work is that it's calibrated to start ticking up in 1973 and every 4 years thereafter. This is integer math, so fractional values are rounded off. 1972 was a leap year. From March 1st to December 31st 1972, the leap day was accounted for in `tm_yday`. Thereafter it was accounted for in this expression.)
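
                                                                    You can watch the cut corner numerically. Below is the quoted formula wrapped in Python (the posix_1988 name is mine, and // stands in for C's truncating division), checked against the proleptic Gregorian count; the two diverge starting at 2101-01-01, once the formula credits 2100's phantom leap day:

                                                                        import calendar

                                                                        def posix_1988(tm_year, tm_yday, tm_hour=0, tm_min=0, tm_sec=0):
                                                                            # POSIX.1-1988 formula: only the /4 leap rule, with no
                                                                            # /100 or /400 century correction.
                                                                            return (tm_sec + tm_min * 60 + tm_hour * 3600
                                                                                    + tm_yday * 86400 + (tm_year - 70) * 31536000
                                                                                    + ((tm_year - 69) // 4) * 86400)

                                                                        # 2101-01-01: tm_year = 201 (years since 1900), tm_yday = 0.
                                                                        old = posix_1988(201, 0)
                                                                        new = calendar.timegm((2101, 1, 1, 0, 0, 0))
                                                                        print(old - new)  # 86400: a Feb 29, 2100 that never happens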

                                                                                          • jwilk 10 hours ago

                                                                                            > the behavior of leap years at the end of a century (where they're skipped if a year is divisible by 100 unless it's divisible by 400) is not accounted for.

                                                                                            The article cites the original edition of POSIX from 1988.

                                                                                            The bug in question was fixed in the 2001 edition:

                                                                                            https://pubs.opengroup.org/onlinepubs/007904975/basedefs/xbd...

                                                                                            • growse 11 hours ago

                                                                                              > I suppose using Unix time for dates in the far future isn't a good idea. I guess I'll file that away.

                                                                                              Not just Unix time, converting future local time to UTC and storing that is also fraught with risk, as there's no guarantee that the conversion you apply today will be the same as the one that needs to be applied in the future.

                                                                                              Often (for future dates), the right thing to do is to store the thing you were provided (e.g. a local timestamp + the asserted local timezone) and then convert when you need to.

                                                                                              (Past dates have fewer problems converting to UTC, because we don't tend to retroactively change the meaning of timezones).

                                                                                            • tw1984 14 hours ago

                                                                    There is literally no easy and safe way to actually handle leap seconds. What happens when they need to remove one second? Even for the easier case of an inserted leap second, you can smear it, but what happens if there are multiple systems each smearing it at different rates? I'd strongly argue that you pretty much have to reboot all your time-critical and mission-critical systems during the leap second to be safe.

                                                                    The issue is so widespread and complicated that they decided to stop introducing extra leap seconds so people can come up with something better in the coming decades - probably way later than the arrival of AGI.

                                                                                              • zaran 12 hours ago

                                                                                                I wonder if the increasing number of computers in orbit will mean even more strange relativistic timekeeping stuff will become a concern for normal developers - will we have to add leap seconds to individual machines?

                                                                      Back of the envelope says ~100 years in low Earth orbit will cause a difference of 1 second.
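
                                                                      The envelope, roughly (ISS-like orbit assumed; orbital speed slows the clock, altitude speeds it up):

                                                                          # Net relativistic drift for an ISS-like orbit, first order.
                                                                          GM = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
                                                                          c = 299_792_458.0    # m/s
                                                                          R = 6.371e6          # Earth radius, m
                                                                          r = R + 420e3        # ~420 km altitude

                                                                          velocity_term = (GM / r) / (2 * c**2)   # speed: clock runs slow
                                                                          gravity_term = GM / c**2 * (1/R - 1/r)  # altitude: clock runs fast

                                                                          net = velocity_term - gravity_term      # ~2.8e-10 net slowdown
                                                                          print(1 / (net * 86400 * 365.25))       # ~110 years per lost second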

                                                                                                • gavinsyancey 12 hours ago

                                                                                                  Most of those probably don't/won't have clocks that are accurate enough to measure 1 second every hundred years; typical quartz oscillators drift about one second every few weeks.

                                                                                                  • Rastonbury 9 hours ago

                                                                          For GPS at least it is accounted for: 38 microseconds per day. They have atomic clocks accurate to something like 0.4 milliseconds over 100 years. The frequencies they tick at are deliberately set differently from clocks on Earth, and they are constantly synchronised.

                                                                                                  • SerCe 16 hours ago

                                                                                                    Working with time is full of pitfalls, especially around clock monotonicity and clock synchronisation. I wrote an article about some of those pitfalls some time ago [1]. Then, you add time zones to it, and you get a real minefield.

                                                                                                    [1]: https://serce.me/posts/16-05-2019-the-matter-of-time

                                                                                                    • mmooss 16 hours ago

                                                                                                      You are a developer who works with time and you named your file, "16-05-2019-the-matter-of-time"? :)

                                                                                                      • eru 14 hours ago

                                                                                                        What's wrong with that?

                                                                                                        • lysium 9 hours ago

                                                                        That's not a standard format. ISO format is yyyy-mm-dd. It also sorts nicely by time if you sort alphabetically.

                                                                                                          • BrandoElFollito 12 hours ago

                                                                                                            They wrote it on the 16th of May, or the 5th of Bdrfln, we will never know.

                                                                                                            • eru 12 hours ago

                                                                                                              Perhaps it's just named for that date, and not written then?

                                                                                                              In any case, dates only have to make sense in the context they are used.

                                                                                                              Eg we don't know from just the string of numbers whether it's Gregorian, Julian, or Buddhist or Japanese etc calendar.

                                                                                                              • gsich 6 hours ago

                                                                                                                Assuming Gregorian is a sane choice.

                                                                                                                • BrandoElFollito 9 hours ago

                                                                            Who knows, it may not even be a date.

                                                                                                                  But seriously, https://xkcd.com/1179/

                                                                                                            • SerCe 10 hours ago

                                                                                                              Yeah, sorry mate, it can be confusing, will use unix epoch next time.

                                                                                                              • sandblast 5 hours ago

                                                                                                                Why the snarkiness? Don't you acknowledge that YYYY-MM-DD is strictly superior to DD-MM-YYYY?

                                                                                                                • mmooss an hour ago

                                                                                                                  lol. Great article, btw; thanks. I submitted it:

                                                                                                                  https://news.ycombinator.com/item?id=42516811

                                                                                                            • ck2 2 hours ago

                                                                      I would not be on a plane, or maybe even an elevator, in mid-January 2038.

                                                                      If it can do this to Cloudflare, imagine everything left on legacy signed 32-bit integers:

                                                                                                              https://blog.cloudflare.com/how-and-why-the-leap-second-affe...

                                                                                                              • christina97 15 hours ago

                                                                        A lot of people seem to miss the point of the article.

                                                                                                                Suppose you had a clock that counted seconds (in the way we understand seconds, moving forward one unit per second). If you looked at it in a few days at midnight UTC on NYE (according to any clock), it would not be a multiple of 86400 (number of seconds per day). It would be off by some 29 seconds due to leap seconds. In that way, Unix time is not seconds since the epoch.

                                                                                                                • umanwizard 15 hours ago

                                                                                                                  You have it backwards. If you look at it at midnight UTC (on any day, not just NYE) it WOULD be an exact multiple of 86400. (Try it and see.)

                                                                        Because of leap seconds, a clock counting real, physical seconds would disagree: midnight UTC tonight is in fact NOT a multiple of 86,400 real, physical seconds since midnight UTC on 1970-01-01.
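
                                                                        The "try it" part is a one-liner:

                                                                            import calendar

                                                                            # Any UTC midnight is an exact multiple of 86400 in Unix time,
                                                                            # because Unix time pretends the 27 leap seconds never happened.
                                                                            t = calendar.timegm((2025, 1, 1, 0, 0, 0))
                                                                            print(t, t % 86400)  # 1735689600 0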

                                                                                                                  • jacobgkau 12 hours ago

                                                                                                                    He didn't have it backwards, he was saying the same thing as you. He said, "suppose you had a clock that counted seconds," then described how it would work (it would be a non-multiple) if that was the case, which it isn't. You ignored that his description of the behavior was part of a hypothetical and not meant to describe how it actually behaves.

                                                                                                                    • umanwizard 6 hours ago

                                                                                                                      You’re absolutely right — not sure how I misinterpreted that so badly.

                                                                                                                • nubinetwork 7 hours ago

                                                                    Isn't this the point of the tz files shipped on every Linux system? If the crappy online converters only do the basic math formula, of course it's going to be off a little...

                                                                                                                  • silisili 17 hours ago

                                                                                                                    > People, myself included, like to say that POSIX time, also known as Unix time, is the number of seconds since the Unix epoch, which was 1970-01-01 at 00:00:00.

                                                                                                                    > This is not true. Or rather, it isn’t true in the sense most people think.

                                                                        I find that assertion odd, because it works exactly as I'd assumed. Though, to be fair, I'm not thinking in the scientific sense that the author may be.

                                                                                                                    If we think of a second as a tick of some amount of time, it makes sense to just count up once each tick. That scientists inject a second here or there wouldn't interfere with such logic.

                                                                                                                    All of that said, the leap second is going away anyways, so hopefully whatever replaces it is less troublesome.

                                                                                                                    • apgwoz 16 hours ago

                                                                          The leap second in Unix time is supposed to wait a second and pretend it never happened. I can see why a longer second could be trouble, but also… if you knew it was coming, you could make every nanosecond last two and lessen the impact, since time would always remain monotonic.

                                                                                                                    • lmm 14 hours ago

                                                                                                                      > If we think of a second as a tick of some amount of time, it makes sense to just count up once each tick.

                                                                                                                      It would, but Unix timestamps don't. It works exactly the opposite of how you assume.

                                                                                                                      • silisili 14 hours ago

                                                                                                                        Explain?

                                                                                                                        The article is claiming POSIX ignores injected leap seconds.

                                                                                                                        • lmm 12 hours ago

                                                                                                                          The article is needlessly unclear, but the specification given in the second blockquote is the one that is actually applied, and a simpler way of explaining it is: POSIX time() returns 86400 * [the number of UTC midnights since 1970-01-01T00:00:00] + [the number of seconds since the last UTC midnight].
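
                                                                                                                          That formula is easy to check. A minimal sketch in Python, assuming the UTC calendar date and time of day are already known; toordinal() is used only to count midnights since the epoch:

                                                                                                                              from datetime import date

                                                                                                                              EPOCH = date(1970, 1, 1).toordinal()

                                                                                                                              # 86400 * (UTC midnights since 1970-01-01) + seconds since the last
                                                                                                                              # UTC midnight. Note there is no leap-second term anywhere.
                                                                                                                              def posix_time(y, mo, d, h, mi, s):
                                                                                                                                  days = date(y, mo, d).toordinal() - EPOCH
                                                                                                                                  return days * 86400 + h * 3600 + mi * 60 + s

                                                                                                                              # A leap second (23:59:60 UTC) gets the same number as the next midnight:
                                                                                                                              print(posix_time(1998, 12, 31, 23, 59, 59))  # 915148799
                                                                                                                              print(posix_time(1998, 12, 31, 23, 59, 60))  # 915148800
                                                                                                                              print(posix_time(1999, 1, 1, 0, 0, 0))       # 915148800 again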

                                                                                                                          • ec109685 14 hours ago

                                                                                                                            POSIX doesn’t ignore leap seconds so much as absorb them. Occasionally systems repeat a second, so Unix time has never drifted more than a second from UTC since leap seconds were introduced: https://en.wikipedia.org/wiki/Leap_second

                                                                                                                            • silisili 14 hours ago

                                                                                                                              After reading this article no less than 3 times, and the comments in this thread, I'm beyond lost.

                                                                                                                              So maybe the author was right. Because different people are claiming different things.

                                                                                                                              • zokier 9 hours ago

                                                                                                                                The Unix time article has a concrete example with tables which should clarify the matter. https://en.wikipedia.org/wiki/Unix_time#Leap_seconds

                                                                                                                                In that example, Unix time goes from 915148799 -> 915148800 -> 915148800 -> 915148801. Note how the timestamp gets repeated during the leap second.

                                                                                                                      • paradite 12 hours ago

                                                                                                                        Typically you don't need to worry about leap seconds on servers, because AWS or GCP will handle them for you.

                                                                                                                        You just need to read the docs to understand their behavior: some providers smear the leap second out for you, some step the clock. It only becomes a problem if you have third-party integrations and rely on their timestamps.

                                                                                                                        • quotemstr 16 hours ago

                                                                                                                          So what if leap seconds make the epoch 29 seconds longer ago than date +%s would suggest? It matters a lot less than the fact that we all agree on some number N to represent the current time. That we have 29 fictional seconds doesn't affect the real world in any way. What are you going to do, run missile targeting routines on targets 30 years ago? I mean, I'm as much for abolishing leap seconds as anyone, but I don't think it's useful --- even if it's pedantically correct --- to highlight the time discrepancy.

                                                                                                                          • wat10000 4 hours ago

                                                                                                                            One could imagine a scenario where you're measuring the duration of some brief event by comparing its start and end times. If that interval happens to span a leap second, the computed duration could be significantly different depending on how your timestamps handled it.

                                                                                                                            Much more important, though, is how it affects the future. The fact that timestamps in the past might be a few seconds off from the straightforward “now minus N seconds” calculation is mostly a curiosity. The fact that clocks might all have to shift by one more second at some point in the future is more significant. There are plenty of real-world scenarios where that needs substantial effort to account for.

                                                                                                                            • chrchr 15 hours ago

                                                                                                                              It matters for some things. Without those fictional leap seconds, the sun would be 29 seconds out of position at local noon, for instance.

                                                                                                                              • umanwizard 15 hours ago

                                                                                                                                That does not matter at all to anyone.

                                                                                                                                • growse 11 hours ago

                                                                                                                                  Did you ask everyone?

                                                                                                                                  It most certainly matters to a lot of people. It sounds like you've never met those people.

                                                                                                                                  • zokier 9 hours ago

                                                                                                                                    For practically everyone, local civil time is off from local solar time by more than 30 seconds, because very few people live at the exact longitude that corresponds to their time zone. And then you have DST, which throws local time even further off.

                                                                                                                                    This is ignoring the fact that, due to the equation of time, solar noon naturally shifts by tens of minutes over the course of the year.

                                                                                                                                    To drive the point home: local mean solar time at Buckingham Palace, for example, is already more than 30 seconds off from Greenwich time.
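
                                                                                                                                    The arithmetic is simple: mean solar time shifts by 86400/360 = 240 seconds per degree of longitude. (The palace's longitude below, roughly 0.14 degrees west, is my own figure, not from any table.)

                                                                                                                                        # 86400 s / 360 deg = 240 s of mean solar time per degree of longitude
                                                                                                                                        longitude_deg = -0.14            # Buckingham Palace, ~0.14 deg west of Greenwich
                                                                                                                                        offset_s = longitude_deg * 240   # about -34 s: solar noon arrives ~34 s late
                                                                                                                                        print(round(offset_s))           # -34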

                                                                                                                                    • numpad0 7 hours ago

                                                                                                                                      The point is, since astronomical "time" isn't an exact constant multiple of cesium-standard seconds, and it even fluctuates due to astrophysical phenomena, applications concerned with astro-kineti-geometrical reality have to use the tarnished timescale that matches the motion of the planet we're on, rather than follow a monotonic counter pointed at a glass vial.

                                                                                                                                      It's up to you to keep TAI for everything and let your representations of physical coordinates drift away into the galaxy or something, but that's not the majority choice. The overwhelming majority choose UTC.

                                                                                                                                      TAI is still nice for many high-precision applications, weirdly including a lot of precisely those geo-spatial use cases, so we have both.

                                                                                                                                      • growse 8 hours ago

                                                                                                                                        Sure, but that doesn't mean that we invented and practise leap seconds for the sheer fun of it.

                                                                                                                                        There are very good, important reasons why we try to keep UTC near UT1, so saying "it doesn't matter to anyone" without even entertaining that some people might care isn't very constructive.

                                                                                                                                        • zokier 4 hours ago

                                                                                                                                          UTC, and leap seconds, originate from the (military) navies of the world, with the intent of supporting celestial navigation. It is already dubious how useful leap seconds were for that purpose, and their use in a civil timescale is more dubious still.

                                                                                                                                          • growse 2 hours ago

                                                                                                                                            We have leap seconds to save us from having leap minutes, or leap hours.

                                                                                                                                            Generally, it's useful for midnight to be at night, and midday during the day. UT1 is not regular, so you need some form of correction. Then the debate is about how big and how often.

                                                                                                                                          • umanwizard 6 hours ago

                                                                                                                                            Okay, I’ll bite. Who does this matter to, and why?

                                                                                                                                      • porridgeraisin 14 hours ago

                                                                                                                                        Yeah. "Exact time" people are a bit like "entropy" people in cryptography. Constantly arguing about the perfect random number when nobody cares.

                                                                                                                                  • computator 16 hours ago

                                                                                                                                    > POSIX time, also known as Unix time, is the number of seconds since the Unix epoch, which was 1970-01-01 at 00:00:00. … I think there should be a concise explanation of the problem.

                                                                                                                                    I don’t think that the definition that software engineers believe is wrong or misleading at all. It really is the number of seconds that have passed since Unix’s “beginning of time”.

                                                                                                                                    But to address the problem the article brings up, here’s my attempt at a concise definition:

                                                                                                                                    POSIX time, also known as Unix time, is the number of seconds since the Unix epoch, which was 1970-01-01 at 00:00:00, and does not include leap seconds that have been added periodically since the 1970s.

                                                                                                                                    • jodrellblank 16 hours ago

                                                                                                                                      Atomic clocks measure time passing.

                                                                                                                                      Seconds were originally a fraction of a day, i.e. of one rotation of the Earth: count 86400 of them and roll over to the next day. But Earth's rotation speed changes, so the amount of "time passing" in 86400 such seconds varies a little. Clocks based on Earth's rotation drift out of sync with atomic clocks.

                                                                                                                                      Leap seconds go into the day-rotation clocks so that their dates match the atomic-clock measure of how much time has passed; they are time which has actually passed but which ordinary civil time has not accounted for. So it's inconsistent to say both that "Unix time really is the number of seconds that have passed" and that it "does not include leap seconds", because those leap seconds are time that has passed.

                                                                                                                                      • umanwizard 15 hours ago

                                                                                                                                        You’re wrong and have the situation exactly backwards.

                                                                                                                                        If a day has 86,401 or 86,399 seconds due to leap seconds, POSIX time still advances by exactly 86,400.

                                                                                                                                        If you had a perfectly accurate stopwatch running since 1970-01-01 the number it shows now would be different from POSIX time.
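
                                                                                                                                        To put a rough number on it (the 27 positive leap seconds since 1972 are the documented count; the roughly 2 s accumulated in 1970-1971, when UTC ticked at a different rate, is an approximation):

                                                                                                                                            leap_seconds_since_1972 = 27   # all positive, 1972-06-30 through 2016-12-31
                                                                                                                                            pre_1972_adjustments = 2       # ~2 s: UTC's tick rate differed from TAI then
                                                                                                                                            stopwatch_minus_posix = leap_seconds_since_1972 + pre_1972_adjustments
                                                                                                                                            print(stopwatch_minus_posix)   # ~29 s, the figure quoted elsewhere in the thread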

                                                                                                                                        • quasarj 14 hours ago

                                                                                                                                          Wait, why would it be different?

                                                                                                                                          • umanwizard 14 hours ago

                                                                                                                                            Because a day (the time from one midnight UTC to the next) is not always exactly 86400 seconds, due to leap seconds. But Unix time always increases by exactly 86400 per day.

                                                                                                                                            • growse 11 hours ago

                                                                                                                                              Unix time is not monotonic. It sometimes goes backwards.

                                                                                                                                              • zokier 9 hours ago

                                                                                                                                                Strictly speaking, Unix time is monotonic: it counts an integer number of seconds and never goes backwards; it only repeats a value during a leap second.

                                                                                                                                                • growse 8 hours ago

                                                                                                                                                  This feels like semantics. If a counter repeats a value, it has effectively gone backwards and by definition is not monotonic.

                                                                                                                                                  A delta between two monotonic values should always be non-negative. This is not true for Unix time.

                                                                                                                                                  • wat10000 4 hours ago

                                                                                                                                                    “Monotonic” means non-decreasing (or non-increasing if you’re going the other way). Values are allowed to repeat. The term you’re looking for is “strictly increasing.”
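
                                                                                                                                                    A tiny illustration, reusing the leap-second sequence quoted earlier in the thread:

                                                                                                                                                        ts = [915148799, 915148800, 915148800, 915148801]  # Unix time across a leap second
                                                                                                                                                        deltas = [b - a for a, b in zip(ts, ts[1:])]       # deltas: [1, 0, 1], never negative
                                                                                                                                                        print(all(d >= 0 for d in deltas))                 # True: monotonic (non-decreasing)
                                                                                                                                                        print(all(d > 0 for d in deltas))                  # False: not strictly increasing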

                                                                                                                                                    • growse 3 hours ago

                                                                                                                                                      I guess this hinges on whether you think Unix time is an integer or a float. If you think it's just an integer, then yes, you can't get a negative delta.

                                                                                                                                                      If, however, you think it's a float, then you can.

                                                                                                                                          • Calamityjanitor 15 hours ago

                                                                                                                                            I think you're describing the exact confusion that developers have. Unix time doesn't include leap seconds, but they are real seconds that happened. Consider a system that counts days since 1970 but ignores leap years, so it doesn't count Feb 29. Those 29ths were actual days, just recorded strangely in the calendar. A system that ignores them is going to give you an inaccurate number of days since 1970.

                                                                                                                                            • quasarj 15 hours ago

                                                                                                                                              Are you sure they actually happened? As you say, at least one of us is confused. My understanding is that the added leap seconds never happened; they are just inserted to make the dates line up nicely. Perhaps this depends on the definition of a second?

                                                                                                                                              • wat10000 4 hours ago

                                                                                                                                                Leap seconds are exactly analogous to leap days. One additional unit is added to the calendar, shifting everything down. For leap days we add a day 29 when normally we wrap after 28. For leap seconds we add second 60 when normally we wrap after 59.

                                                                                                                                                Imagine a timestamp defined as days since January 1, 1970, except that it ignores leap years and says all years have 365 days. Leap days are handled by giving February 29 the same day number as February 28.

                                                                                                                                                If you do basic arithmetic with these timestamps to answer the question, “how many days has it been since Nixon resigned?” then you will get the wrong number. You’ll calculate N, but the sun has in fact risen N+13 times since that day.

                                                                                                                                                Same thing with leap seconds. If you calculate the number of seconds since Nixon resigned by subtracting POSIX timestamps, you’ll come up short. The actual time since that event is 20-some seconds more than the value you calculate.
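
                                                                                                                                                That thought experiment is easy to code up. A sketch in Python; the 365-day day number is the hypothetical construct from the analogy, and the dates are just examples:

                                                                                                                                                    import calendar
                                                                                                                                                    from datetime import date

                                                                                                                                                    def fake_day_number(d: date) -> int:
                                                                                                                                                        """Days since 1970-01-01 pretending every year has 365 days;
                                                                                                                                                        Feb 29 repeats Feb 28's number, like Unix time repeating a leap second."""
                                                                                                                                                        doy = d.timetuple().tm_yday                       # 1-based day of year
                                                                                                                                                        if calendar.isleap(d.year) and (d.month, d.day) >= (2, 29):
                                                                                                                                                            doy -= 1                                      # collapse Feb 29 onto Feb 28
                                                                                                                                                        return (d.year - 1970) * 365 + doy - 1

                                                                                                                                                    nixon = date(1974, 8, 9)
                                                                                                                                                    today = date(2025, 1, 1)                              # an example "now"
                                                                                                                                                    real = (today - nixon).days                           # true count of sunrises
                                                                                                                                                    fake = fake_day_number(today) - fake_day_number(nixon)
                                                                                                                                                    print(real - fake)                                    # 13 leap days went missing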

                                                                                                                                                • Calamityjanitor 15 hours ago

                                                                                                                                                  I'm honestly just diving into this now after reading the article, and not a total expert. Wikipedia has a table of a leap second happening across TAI (atomic clock that purely counts seconds) UTC, and unix timestamps according to POSIX: https://en.wikipedia.org/wiki/Unix_time#Leap_seconds

                                                                                                                                                  It works out that Unix time spits out the same integer for two seconds.

                                                                                                                                                  • quasarj 14 hours ago

                                                                                                                                                    "spits out" as in, when you try to convert to it - isn't that precisely because that second second never happened, so it MUST output a repeat?

                                                                                                                                                    • jacobgkau 12 hours ago

                                                                                                                                                      I thought you were wrong because if a timestamp is being repeated, that means two real seconds (that actually happened) got the same timestamp.

                                                                                                                                                      However, after looking hard at the tables in that Wikipedia article comparing TAI, UTC, and Unix time, I think you might actually be correct: TAI is the atomic time (that counts "real seconds that actually happened"), and it gets out of sync with "observed solar time." The leap seconds are added into UTC, but ultimately ignored in Unix time.* ~~So Unix time is actually more accurate to "real time" as measured atomically than solar UTC is.~~

                                                                                                                                                      The only point of debate is that most people consider UTC to be "real time," but that's physically not the case in terms of "seconds that actually happened." It's only the case in terms of "the second that high noon hits." (For anyone wondering, we can't simply fix this by redefining a second to be an actual 24/60/60 division of a day because the Earth's rotation is apparently irregular and generally slowing down over time, which is why UTC has to use leap seconds in order to maintain our social construct of "noon == sun at the highest point" while our atomic clocks are able to measure time that's actually passed.)

                                                                                                                                                      *Edit: Or maybe my initial intuition was right. The table does show that one Unix timestamp ends up representing two TAI (real) timestamps. UTC inserts an extra second, while Unix time repeats a second, to handle the same phenomenon. The table is bolded weirdly (and I'm assuming it's correct while it may not be); and beyond that, I'm not sure if this confusion is actually the topic of conversation in the article, or if it's just too late in the night to be pondering this.

                                                                                                                                              • juped 16 hours ago

                                                                                                                                                It really is the number of seconds that have passed since Unix's "beginning of time", minus twenty-nine. Some UTC days have 86401 seconds; Unix assumes they had 86400.

                                                                                                                                                It's wrong and misleading in precisely the way you (and other commenters here) were wrong and misled, so it seems like that's a fair characterization.