Flundstrom2 2 days ago

Time is a mess. Always. The author only scratched the surface of all the issues. Even if we exclude the relativistic time dilation that affects GPS/GNSS satellites - regardless of whether it's due to differences in gravitational pull or their relative speed over ground - it's still a mess.

Timezones; sure. But what about before timezones came into use? Or even halfway through - which timezone, considering Königsberg used CET when it was part of Germany, but switched to EET after it became Russian? There are even countries with timezones differing by 15 minutes.

And don't get me started on daylight saving time. There's been at least one instance where DST was - and was not - in use in Lebanon - at the same time! Good luck booking an appointment...

Not to mention the transition from Julian calendar to Gregorian, which took place over many, many years - different by different countries - as defined by the country borders at that time...

We've even had countries that forgot to insert a leap day in certain years, causing March 1 to occur on different days altogether for a couple of years.

Time is a mess. It is, always has been, and always will be.

  • johnisgood a day ago

    It is. There are a number of timezones offset not by a whole hour but by 30 or even 45 minutes: India is UTC+5:30, Lord Howe Island is UTC+10:30 / +11:00, the Chatham Islands in New Zealand are UTC+12:45 / +13:45, Iran is UTC+3:30 / +4:30, and so on. Where the format is X / Y, X is standard time and Y is daylight time.

    Messy.

    I think the full list can be found here: https://www.timeanddate.com/time/time-zones-interesting.html

    You can use a Bash script that can give you an exhaustive list based on files from /usr/share/zoneinfo/, i.e. find timezones with non-whole hour offsets.
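    The same idea in a short Python sketch, using the stdlib zoneinfo module instead of parsing /usr/share/zoneinfo by hand (assumes system or package tzdata is available):

```python
# Sketch: list timezones whose current UTC offset is not a whole hour.
# Assumes the system tzdata (or the tzdata PyPI package) is installed.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo, available_timezones

now = datetime.now(timezone.utc)
for name in sorted(available_timezones()):
    offset = now.astimezone(ZoneInfo(name)).utcoffset()
    if offset.total_seconds() % 3600 != 0:
        print(name, offset)  # e.g. Asia/Kathmandu 5:45:00
```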

    • volemo a day ago

      I don’t understand this. What practical difference does it make whether the time rounds to the nearest quarter hour instead of the nearest hour? Personally, I don’t care if noon (sun at its zenith) happens half an hour before 12:00 or half an hour after.

      Why do such time zones exist?

      • johnisgood a day ago

        Well, I do not know the answer to that; my guess is that it is for historical, political, geographical, and socio-economic reasons.

        For example, India had two timezones before it adopted a compromise: UTC+5:30.

        Nepal uses UTC+5:45, partly to distinguish itself from Indian Standard Time, reinforcing national identity.

        • volemo a day ago

          > India, they had two timezones before they adopted a compromise: UTC+5:30.

          Truly, a compromise is when nobody is happy. ._\

          • johnisgood 21 hours ago

            Truthfully, I do not know the story behind it. If you do, feel free to share.

      • johnisgood 20 hours ago

        Oh by the way, check this out, this is one of the news in 2025b tzdata:

          # From Roozbeh Pournader (2025-03-18):
          # ... the exact time of Iran's transition from +0400 to +0330 ... was Friday
          # 1357/8/19 AP=1978-11-10. Here's a newspaper clip from the Ettela'at
          # newspaper, dated 1357/8/14 AP=1978-11-05, translated from Persian
          # (at https://w.wiki/DUEY):
          # Following the government's decision about returning the official time
          # to the previous status, the spokesperson for the Ministry of Energy
          # announced today: At the hour 24 of Friday 19th of Aban (=1978-11-10),
          # the country's time will be pulled back half an hour.
        
        From https://github.com/eggert/tz/blob/main/asia#L1503.

        Pretty sure we can find a lot more oddities that are way worse.

  • minkzilla 2 days ago

    The author covers how IANA handles Königsberg; it is logically its own timezone.

      An IANA timezone uniquely refers to the set of regions that not only share the same current rules and projected future rules for civil time, but also share the same history of civil time since 1970-01-01 00:00+0. In other words, this definition is more restrictive about which regions can be grouped under a single IANA timezone, because if a given region changed its civil time rules at any point since 1970 in a way that deviates from the history of civil time for other regions, then that region can't be grouped with the others.
    
    I agree that time is a mess. And the 15-minute offsets are insane; I can't fathom why anyone uses them.

    • mzs 2 days ago

      zoneinfo does in practice hold the historical info before 1970 when it can do so easily in its framework: https://en.wikipedia.org/wiki/UTC%2B01:24

        % zdump -i Europe/Warsaw | head
        
        TZ="Europe/Warsaw"
        - - +0124 LMT
        1880-01-01 00 +0124 WMT
        1915-08-04 23:36 +01 CET
        1916-05-01 00 +02 CEST 1
        1916-10-01 00 +01 CET
        1917-04-16 03 +02 CEST 1
        1917-09-17 02 +01 CET
        1918-04-15 03 +02 CEST 1
        % zdump -i Europe/Kaliningrad | head -20
        
        TZ="Europe/Kaliningrad"
        - - +0122 LMT
        1893-03-31 23:38 +01 CET
        1916-05-01 00 +02 CEST 1
        1916-10-01 00 +01 CET
        1917-04-16 03 +02 CEST 1
        1917-09-17 02 +01 CET
        1918-04-15 03 +02 CEST 1
        1918-09-16 02 +01 CET
        1940-04-01 03 +02 CEST 1
        1942-11-02 02 +01 CET
        1943-03-29 03 +02 CEST 1
        1943-10-04 02 +01 CET
        1944-04-03 03 +02 CEST 1
        1944-10-02 02 +01 CET
        1945-04-02 03 +02 CEST 1
        1945-04-10 00 +02 EET
        1945-04-29 01 +03 EEST 1
        1945-10-31 23 +02 EET
        %
      • inglor_cz a day ago

        Koenigsberg was conquered by the Soviets in April 1945, but the final Soviet-Polish border was only established in August of the same year. I wonder when the official switch to EET was made. For several months, the future of the city was a bit uncertain.

  • drob518 2 days ago

    Yep. Fortunately, a lot of apps can get by with just local civil time and an OS-set timezone. It’s much less common that they need to worry about leap seconds, etc. And many also don’t care about millisecond granularity, etc. If your app does care about all that, however, things become a mess quite quickly.

  • voidUpdate a day ago

    Even worse, there are some areas where the timezone depends on your religion. Lebanon had "Muslim time" and "Christian time" at one point (Unsure if that's still a thing)

    • exe34 a day ago

      There's a joke about Northern Ireland, where if you claim to be an atheist, they would ask you, yes but which side - Catholic or Protestant?

yen223 2 days ago

The way Google implemented leap seconds wasn't by sticking a 23:59:60 second at the end of 31st Dec. The way they did it was more interesting.

What they did instead was to "smear" it across the day, lengthening every second on 31st Dec by 1/86400 of a second. 1/86400 of a second is well within the margin of error for NTP, so computers could carry on doing what they do without throwing errors.

Edit: They smeared it from the noon before the leap second to the noon after, i.e. 31st Dec 12pm to 1st Jan 12pm.
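As a rough sketch (my own arithmetic, not Google's actual code), the noon-to-noon linear smear works out like this:

```python
# Sketch of a 24-hour linear leap smear: over the 86400 s window the
# smeared clock gains one full extra second, so each second is stretched
# by 1/86400 (about 11.6 microseconds).
WINDOW = 86400  # smear window length, in real seconds

def smear_offset(t: float) -> float:
    """Extra seconds the smeared clock has gained t seconds into the window."""
    return min(max(t, 0.0), WINDOW) / WINDOW

print(smear_offset(0))      # 0.0 at the start of the window
print(smear_offset(43200))  # 0.5 halfway through
print(smear_offset(86400))  # 1.0 at the end, absorbing the leap second
```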

alphazard a day ago

There's nothing special about time. These sorts of problems show up whenever there are a few overloaded terms being used to model a larger amount of concepts.

Confusion about special and general relativity accounts for almost none of the problems that programmers encounter in practice. If that's your use case, then fine, time is special and tricky.

The most common issue is a failure to separate model vs. view concepts: e.g. timestamps are a model, but local time, day of the week, and leap seconds are all view concepts. The second most common issue is thinking that UTC is suitable as a model, instead of the much more reasonable TAI64. After that it's probably the difference between scheduling requests vs. logging what happened: "the meeting is scheduled next Wednesday at 9am local time" vs. "the meeting happened in this time span". One is just scheduling criteria, and the other is a fact about the past. It could also be something complicated like "every other Wednesday", or "every Wednesday on an even-numbered day of the month". Or "we can work on this task once these 2 machines are available", "this process will run in 2 time slots from now", etc.

https://cr.yp.to/libtai/tai64.html
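For reference, a TAI64 label (per the linked page) is just 8 big-endian bytes; a quick sketch:

```python
# Sketch: encode a TAI64 label as defined at cr.yp.to/libtai/tai64.html:
# 8 big-endian bytes holding 2**62 + (TAI seconds since 1970-01-01 00:00:00 TAI).
# Labels compare correctly as plain byte strings, so ordering is trivial.
def tai64_label(tai_seconds: int) -> bytes:
    return (2**62 + tai_seconds).to_bytes(8, "big")

print(tai64_label(0).hex())  # 4000000000000000, the TAI64 epoch
```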

  • theamk a day ago

    You had me agreeing until the TAI64 reference... Why would you optimize your software for a super-rare case (super-high-precision time intervals) at the expense of making a very common case (timestamp -> calendar time) harder? Unless you are into computerized astronomy or rare physics experiments, there is never a need to worry about leap seconds or TAI. For long intervals, accepting a 1-second error every few years to reduce conversion-code complexity is totally worth it.

    (note that for short intervals, when a single second actually matters, you should use neither TAI nor UTC; use the monotonic timer provided by your OS)

    • alphazard a day ago

      The TAI64 extensions N and NA, for nanoseconds and attoseconds respectively, are for the super-rare cases you're talking about, but that's not the reason to use TAI64, which by default has second resolution. The main reason is that it preserves the most common intuitions about time (that it goes in a single direction at a constant rate). The distance between two TAI64 timestamps is the number of seconds elapsed. The future will never compare equal to or less than the past.

      This shouldn't affect code complexity at all. None of this logic belongs in the application code. You take your timestamp stored as state and convert it to a presentation format by calling a library in both cases. Converting timezones yourself by adding or subtracting hours is asking for trouble.

      • theamk a day ago

        The "most common intuition" is that if you have a timestamp X, and you add 3600 seconds, you get a same time but in next hour (unless there are timezone shenanigans). But that's not true for TAI - it's entirely possible that 3600 seconds after 23:30:00, the time is 00:29:59. No intuition works like that.

        So that's why I always advocate for UTC, ideally with smeared leap seconds (although this point is kinda moot now). The 0.00000001% of people who actually care about leap seconds can use the library to get TAI.

  • eduction a day ago

    Local time isn’t just a view; it gets down into the model. You can’t effectively model “every Tuesday at 4pm until cancelled” as a series of timestamps, because it comes with an implied “…regardless of changes to daylight saving rules.”

    Heck you can’t even model “Oct 15 2030 at 2pm” as a timestamp for the same reason if it is an appointment involving two humans in the same tz.

    Somewhere you need to store the rule using local time concepts.

    • theamk a day ago

      Agreed, but an amazingly high number of people (ab)use local time notation for things which don't need it, like timestamps of various events (login time, order time, etc.).

      It's worth repeating over and over: "Only use local time if the actual moment the event happens depends on the timezone, as with future calendar entries. Every other time in the system should be a UTC timestamp, converted to a local time string at the very last presentation layer."
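      A minimal sketch of that rule (the zone name here is just an example):

```python
# Sketch: store event timestamps in UTC; convert to local time only
# at the presentation layer.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

login_at = datetime(2025, 7, 1, 1, 30, tzinfo=timezone.utc)  # what gets stored
shown = login_at.astimezone(ZoneInfo("Australia/Sydney"))    # presentation only
print(shown.isoformat())  # 2025-07-01T11:30:00+10:00 (AEST; July is winter in Sydney)
```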

    • alphazard a day ago

      The input to the scheduler would be “every Tuesday at 4pm", and that would produce a schedule out to some time horizon. The schedule would be in terms of timestamps. When the criteria changes, the scheduler is rerun and a new tentative schedule is produced.

      > Somewhere you need to store the rule using local time concepts.

      The scheduling rules can be in terms of whatever you want and maybe that's local time or a timezone or soonest free block, but they are being used to solve for timestamps in the actual schedule. You run into problems if the scheduler output is in terms of any of those concepts.

    • taeric a day ago

      My favorite mistake is when people assume it is only daylight savings time that you need to consider. I could just be on vacation in a different place, but still want to keep the same schedule. And indeed, knowing that most places open around 9ish local time everywhere is very convenient.

      Another fun aspect of this topic is how people will tacitly jump around the definition of "day" being when the sun is up without confronting that that is, itself, not a given. And, worse, is completely thrown out if you try to abolish time zones.

karmakaze 2 days ago

Two things that aren't really covered:

- system clock drift. Google's instances have accurate timekeeping using atomic clocks in the datacenter, and leap seconds smeared over a day. For accurate duration measurements, this may matter.

- consider how the time information is consumed. For a photo sharing site, the best info to keep with each photo is a location and a local datetime. Then even if some of this is missing, a New Year's Eve photo will still be close to midnight without considering its timezone or location. I had this case and opted for string representations that wouldn't automatically be adjusted. Converting it to the viewer's local time isn't useful.

  • bigiain 2 days ago

    The calendar event scheduling problem is hard too.

    If I'm in Sydney and I accept a 4pm meeting in 3 weeks time, say 4pm July 15 2025 in San Francisco, how should my calendar store that event's datetime and how does my calendar react to my phone changing locations/timezones?

    And now try and work that out if the standard/summertime changeover happens between when the event is created in one timezone and the actual time the event (is supposed to) occur. Possibly two daylight savings time changes if Sydney goes from winter to summer time and San Francisco goes from summer to winter time - and those changeovers don't happen at the same time, perhaps not even the same week.

    • valenterry a day ago

      That's easy though. An event of this type is about an absolute point in time, so your calendar stores it like that and then displays it in your current timezone (or whatever one you specify).

      When you change locations and you have your calendar configured to show events in "the" timezone of your location, it does so. And should there be no clear timezone, it should ask you.

      Very simple problem and simple solutions. There are much harder problems imho.

      As you can see, the summertime change doesn't even matter here.

      • bigiain a day ago

        But I don't want my San Francisco meeting to display in my calendar as 1am the day before when I'm in Sydney, then switch to 4pm on Tuesday once I'm in California. And I sure as hell don't want the displayed time in Sydney to switch from 1am to 11pm or 3am just because daylight savings kicked in.

        It's a 4pm Tuesday meeting. I want it to show as 4pm while I'm in Sydney, 4pm while I'm on a stopover in Hawaii, and correctly alert me for my 4pm meeting when I'm in San Francisco. And it probably should alert me at 4pm San Francisco time even if I'm not there, in case I missed my connecting flight in Hawaii and I want to call in at the correct time. And that last requirement conflicts with the "I want it to show as 4pm while I'm on a stopover in Hawaii" requirement, because I'm human and messy and I want the impossible without expending any effort to make it happen.

        I'm pretty sure there is no "simple solution" for getting the UX right so I can add a meeting in San Francisco on my phone while I'm in Sydney, and have it "just work" without it always bugging me by asking for timezones.

        • valenterry a day ago

          Yeah that's fair, but that's purely a UI/display problem and there is and cannot be any solution that works without context.

          Ultimately, if you don't like it, tell your calendar to show it differently.

          > It's a 4pm Tuesday meeting

          In one timezone, yes.

          Apparently you want the times to be shown for when you will be in that timezone. But the calendar doesn't know when you will be in what timezone, and it's such a rare thing that apparently no one has made a calendar where you can say "I'll be (mentally) in this timezone from that day, and then in that timezone a week later".

          So yes, your last sentence is right, because that's impossible. That's different from "hard".

wpollock 2 days ago

Very nice write up! But I think your point that time doesn't need to be a mess is refuted by all the points you made.

I know you had to limit the length of the post, but time is an interest of mine, so here's a couple more points you may find interesting:

UTC is not an acronym. The story I heard is that the English acronym would have been "CUT" (the name is "coordinated universal time") and the French complained; the French acronym would have been "TUC" and the English-speaking committee members complained; so they settled on something that wasn't pronounceable in either. (FYI, "ISO" isn't an acronym either!)

Leap seconds caused such havoc (especially in data centers) that no further leap seconds will be used. (What will happen in the future is anyone's guess.) But for now, you can rest easy and ignore them.

I have a short list of time (and NTP) related links at <https://wpollock.com/Cts2322.htm#NTP>.

pavel_lishin 2 days ago

> other epochs work too (e.g. Apollo_Time in Jai uses the Apollo 11 rocket landing at July 20, 1969 20:17:40 UTC).

I see someone else is a Vernor Vinge fan.

But it's kind of a wild choice for an epoch, when you're very likely to be interfacing with systems whose Epoch starts approximately five months later.

  • r2_pilot 2 days ago

    That's kind of the point of software archeology, isn't it? Sometimes something so evident to people within the first few hundred years becomes opaque in reasoning later on, and what's 5 months anyway? You'd need a Rosetta stone to be sure you were even off in time, otherwise you just might have a few missing months that historians couldn't account for.

mzl a day ago

As many others have said, time and calendars are messy, and there is often no correct solution, just a bunch of trade-offs. Jon Skeet's "Storing UTC is not a Silver Bullet" (https://codeblog.jonskeet.uk/2019/03/27/storing-utc-is-not-a...) was very influential for me in realizing some of the subtleties in what a point in time means for a user, and how that should influence the design of a system.

  • sfn42 a day ago

    I have never worked on a system where simply storing everything as UTC isn't a perfect solution.

    I know they exist, but I would say those are niche. And even for applications where you can't just use UTC for everything and be done with it, the vast majority of timestamps will be UTC. You'll have one or a few special cases where you need to do fancier stuff, and for everything else you just use UTC.

    • RitzyMage a day ago

      > I have never worked on a system where simply storing everything as UTC isn't a perfect solution.

      > I know they exist, but I would say those are niche.

      I firmly disagree with this, but I think that is because I think timestamps are very different than dates. Two examples my team runs into frequently that I think are very common:

      1. Storing the time of an event at a physical location. These are not timestamps, and I would never want to convert them to a different time zone. We had Google Calendar trying to be smart and convert one to the user's local time because it was stored as a timestamp, but it is not a timestamp. I don't care where the user lives; they will need to show up Jan 2nd at 3pm. Period. I hate when tools try to auto-convert time zones.

      2. Storing "pure dates" (e.g. birthdays). The database we use does not allow this and we have to store birthdates in UTC. This is an abomination. I've seen so many bugs where our date libraries "helpfully" convert the time zone of birthdays and put them a day before they actually are.

      Storing UTC may solve almost all timestamp problems, but timestamp problems are a pretty narrow slice of date and time related bugs.

      • sfn42 a day ago

        You're completely right, for these niche cases it matters. I'm not saying you should indiscriminately apply UTC to every problem, I'm just saying usually datetimes should be stored as UTC.

        And the reason I feel the need to say this is that most systems I've worked on don't do that. They use local time for things that have no business being local time.

        UTC should be the default solution, and local datetime usage should be the solution in the few situations where it's needed.

        And yeah, dateonly is nice. If the db doesn't support it you can just store it as a string and serialize/deserialize it to a DateOnly type in your software.
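        A sketch of that string round-trip for pure dates:

```python
# Sketch: persist a pure date as an ISO "YYYY-MM-DD" string, so no
# timezone conversion can ever shift it to the previous day.
from datetime import date

stored = "1990-02-01"                  # what goes in the database column
birthday = date.fromisoformat(stored)  # deserialized, timezone-free
print(birthday.isoformat())            # 1990-02-01, identical in any timezone
```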

        • mzl 10 hours ago

          I'd say, in general, any time a time-value originates as part of an interaction with a human, that time-value carries with it the context of a timezone and the expectations of what that timezone means for that human.

          For internal timestamps such as ordering events in a database, then UTC or something similar is nice. But the point then is that those values are not really meaningful or important in the analog world.

johnisgood a day ago

> One way the website could handle this is by storing the user's exact input 2026-06-19 07:00, and also store the UTC+0 version of that datetime (if we assumed that the timezone rules won't change); this way, we can keep using the UTC+0 datetime for all logic, and we can recompute that UTC+0 datetime once we detect that the time rules for that timezone have changed.

Well, how do we know what timezone is "2026-06-19 07:00" in, to be able to know that the time rules for that timezone have changed, if we do not store the timezone?

Additionally, how do we really "detect that the time rules for that timezone have changed"? We can stay informed, sure, but is there a way to automate this?

  • dqv a day ago

    > Well, how do we know what timezone is "2026-06-19 07:00" in, to be able to know that the time rules for that timezone have changed, if we do not store the timezone?

    The website uses their system timezone America/Los_Angeles. It would be better to store that as well, but it's probably not much of an issue if the website is dedicated to meetings in that specific locale.

    > Additionally, how do we really "detect that the time rules for that timezone have changed"? We can stay informed, sure, but is there a way to automate this?

    My first attempt would be to diff the data between versions. If a diff of 2025b against 2025a has added/removed lines which include America/Los_Angeles, you recompute the timestamps. This, of course, requires that the library support multiple timezone database versions at once.
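    The recompute step might look like this sketch (the zone name is assumed to be stored alongside the user's local input):

```python
# Sketch: rebuild the cached UTC+0 datetime from the stored local input
# whenever the timezone's rules change in a new tzdata release.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

stored_local = "2026-06-19 07:00"
zone = ZoneInfo("America/Los_Angeles")  # assumed stored with the input
local = datetime.strptime(stored_local, "%Y-%m-%d %H:%M").replace(tzinfo=zone)
utc_cached = local.astimezone(timezone.utc)
print(utc_cached.isoformat())  # 2026-06-19T14:00:00+00:00 under current (PDT) rules
```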

glorbx 2 days ago

Glad OP discussed daylight savings nightmare.

But I hate how, when I stack my yearly weather charts, every four years either the graph is off by one day, so it is 1/366th narrower and the month delimiters don't line up perfectly, or I have to duplicate Feb 28th so there is no discontinuity in the lines. Still not sure how to represent that, but it sure bugs me.

zokier 2 days ago

It is a pet peeve of mine, but any statement implying that Unix time is a count of seconds since the epoch is annoyingly misleading and perpetuates that misconception. Imho a better mental model for Unix time is that it has two parts - days since epoch * 86400, and seconds since midnight - which get added together.
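That model can be written down directly, and it reproduces real Unix timestamps without ever counting leap seconds:

```python
# Sketch of the "two-part" model: Unix time = days-since-epoch * 86400
# plus seconds since midnight UTC. Leap seconds never enter the formula,
# which is exactly why Unix time is not a true count of elapsed seconds.
from datetime import date

def unix_time(y: int, mo: int, d: int, h: int, mi: int, s: int) -> int:
    days = (date(y, mo, d) - date(1970, 1, 1)).days
    return days * 86400 + h * 3600 + mi * 60 + s

print(unix_time(1999, 1, 1, 0, 0, 0))  # 915148800, despite the 1998 leap second
```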

  • valenterry a day ago

    But it's correct. It's "a" count. Just not the count that you might always expect. And the "second" in this definition means what people usually understand as a second, as in the duration is always the same. That's all, and it's pretty useful imho.

    • zokier a day ago

      > And the "second" in this definition means what people usually understand as a second, as in the duration is always the same.

      Umm what? In Unix time some values span two seconds, which is the crux of the problem. In UTC every second is a proper nice SI second. In Unix time the value increments every one or two SI seconds.

      • valenterry a day ago

        Really? Now I'm confused

        • zokier a day ago

          The table in Wikipedia is good for clarifying matters:

          https://en.wikipedia.org/wiki/Unix_time#Leap_seconds

          From there you can clearly see that e.g. Unix time 915148800 lasted two seconds

          We can make an analogy to leap days:

          - UTC is like Gregorian calendar, on leap years it goes Feb28-Feb29-Mar1 (23:59:59-23:59:60-00:00:00)

          - TAI would be just always going from Feb28-Mar1 (23:59:59-00:00:00) and ignoring leap years

          - Unix time would be like to go Feb28-Mar1-Mar1 (23:59:59-00:00:00-00:00:00) on leap years, repeating the date

          From this it should be pretty obvious why I consider Unix time so bonkers.

          • valenterry 14 hours ago

            Indeed, it's the opposite of what I thought. Confusing! Thanks for the clarification.

            So in fact, Unix seconds can be longer than intuitively expected. Which also means two UTC timestamps with different seconds can map to the same Unix timestamp.

  • charcircuit 2 days ago

    How is it misleading? The source code of UNIX literally has time as a variable of seconds that increments every second.

    • adgjlsfhk1 2 days ago

      leap seconds

      • LegionMammal978 2 days ago

        Also, UTC had a different clock rate than TAI prior to 1972. And TAI itself had its reference altitude adjusted to sea level in 1977.

codr7 a day ago

Plenty of people these days insist on stamping anything that looks like time with a time zone. Getting that right through the entire stack is like winning the lottery imo. When dealing with different time zones, I've had some success in simplifying the situation by mandating UTC for the server side.

  • bloppe a day ago

    I think the more insidious and fundamental issue is that there are a plethora of subtly different clocks that everyone calls "UTC". A real UTC clock has an occasional 61-second minute; a leap second would read 23:59:60. So that day would have 86401 seconds instead of the usual 86400.

    Software engineers understandably don't like that, so Unix time handles it instead by "going backwards" and repeating the final second. That way every minute is 60 seconds long, every day is 86400, and you're only at risk of a crazy consistency bug about once every year and a half. But most databases do it differently, using smearing. Many databases use different smearing windows from one another (24 hours vs 20 for instance). Some rarer systems instead "stop the clock" during a leap second.

    That's 4 different ways to handle a leap second, but much documentation will use terms like "UTC" or "Unix time" interchangeably to describe all 4 and cause confusion. For example, "mandating UTC for the server side" almost never happens. You're probably mandating Unix time, or smeared UTC.

    • theamk a day ago

      Does this disagreement cause any problems though?

      If you care about sub-second differences, you likely run your own time infra (like Google Spanner), and your systems are so complex already that the time server is just a trivial blip.

      If you are communicating across org boundaries, I've never seen sub-second difference in absolute time matter.

      • bloppe 17 minutes ago

        It matters so much that the BIPM decided to abolish the leap second starting in 2035. See this post for a lot of the reasoning: https://engineering.fb.com/2022/07/25/production-engineering...

        It makes a lot of sense until you realize what we're doing. We're just turning UTC into a shittier version of TAI. After 2035, they will forevermore have a constant offset, but UTC will keep its historical discontinuities. Why not just switch to TAI, which already exists, instead of destroying UTC to make a more-or-less redundant version of TAI?

    • codr7 a day ago

      Yep, there's an astonishing amount of pretending involved in date/time processing.

      I'm surprised anything works at all, just from what I know.

nzach 2 days ago

> What explains the slowdown in IANA timezone database updates?

My guess is that with the increasing dependency on digital systems for our lives the edge-cases where these rules aren't properly updated cause increased amounts of pain "for no good reason".

In Brazil we recently changed our DST rules, it was around 2017/2018. It caused a lot of confusion. I was working with a system where these changes were really important, so I was aware of this change ahead of time. But there are a lot of systems running without too much human intervention, and they are mostly forgotten until someone notices a problem.

BlackFly a day ago

> How does general relativity relate to the idea of time being a universal, linear, forward-moving "entity"?

TAI provides a time coordinate generated by taking the weighted average of the proper times of 450 world lines tracked by atomic clocks. Like any other time coordinate, it provides a temporal orientation but no time coordinate could be described as "universal" or "linear" in general relativity. It would be a good approximation to proper time experienced by most terrestrial observers.

Note that general relativity doesn't add much over special relativity here (the different atomic clocks have different velocities and accelerations due to altitude, and so have relative differences in proper time along their world lines). If you already have a sufficiently general notion of spacetime coordinates, the additional curvature of general relativity over Minkowski space is simply an additional effect changing the relation between coordinate time and proper time.

Raphell 2 days ago

I never really took time seriously until one of my cron jobs skipped execution because of daylight saving. That was the moment I realized how tricky time actually is.

This article explains it really well. The part about leap seconds especially got me. We literally have to smear time to keep servers from crashing. That’s kind of insane.

  • bloppe a day ago

    The issue with leap seconds is that the BIPM recommends that everyone should use UTC for their "source of truth" fundamental representation for time stamps, so pretty much every software system does that. That's the core mistake.

    Everyone should use TAI as their fundamental representation. TAI has no leap seconds. It's way easier to convert from TAI to UTC than vice versa. You can still easily present all your timestamps in UTC when printed as a string.
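    The TAI-to-UTC direction really is just a table lookup; a sketch (the 37 s value is the published TAI-UTC offset, constant since 2017):

```python
# Sketch: converting TAI to UTC only needs the published TAI-UTC offset,
# which has been a constant 37 seconds since 2017-01-01; a future change
# would require a new leap second. (Going UTC -> TAI near a leap second
# is the ambiguous direction.)
TAI_MINUS_UTC = 37  # seconds

def tai_to_utc_seconds(tai_seconds: int) -> int:
    """Shift a TAI clock reading to the corresponding UTC clock reading."""
    return tai_seconds - TAI_MINUS_UTC

print(tai_to_utc_seconds(1_000_000_037))  # 1000000000
```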

    NTP servers are generally synced to GPS signals, which already use a version of TAI for their time signals. So an NTP server will take a perfectly good TAI time signal and do a smearing conversion to something that looks more like UTC (but isn't, because a true UTC clock would occasionally have a 61-second minute instead of smearing). Then someone inevitably freaks out about leap seconds, because we have this oversimplified time abstraction that encourages you to ignore them. And instead of realizing they made a mistake in recommending UTC as the fundamental representation, the BIPM is doubling down and is about to eliminate leap seconds entirely, so UTC will become just a worse version of TAI (because it will still have historical discontinuities with most software systems, but will also drift away from solar time). I'm kinda pissed about it.

  • bigiain 2 days ago

    I avoid running "daily" cron jobs or other scheduled tasks around 2am for that reason; they might not get run, or they might get run twice.

    Where practical I schedule them around 12:00 (but I'm sure one day I'll get stung by some odd country that chooses to implement its daylight savings changeover in the middle of the day).
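The skipped-hour problem is easy to demonstrate with Python's `zoneinfo`: on the US spring-forward date, 02:30 local time simply doesn't exist, so a "02:30 daily" schedule has nothing to fire at. The specific date and zone below are just an example.

```python
# Sketch: why local-time schedules around 2am are risky.
# 2025-03-09 is the US spring-forward date: clocks jump from 02:00 to 03:00,
# so 02:30 local time never happens in America/New_York that day.
from datetime import datetime
from zoneinfo import ZoneInfo

ny = ZoneInfo("America/New_York")

# A wall-clock time that does not exist on this date:
skipped = datetime(2025, 3, 9, 2, 30, tzinfo=ny)
print(skipped.utcoffset())  # resolved using the pre-gap offset (EST, UTC-5)

# Round-tripping through UTC shows the wall time was never real:
roundtrip = skipped.astimezone(ZoneInfo("UTC")).astimezone(ny)
print(roundtrip)  # lands at 03:30 EDT, not 02:30
```

Scheduling in UTC (or TAI) and converting to local time only for display sidesteps the whole class of bugs.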

didgetmaster a day ago

Another big issue is how time is often used to determine the order of events. Did event1 occur before or after event2?

Things like timestamps are used to track things like file creation, transaction processing, and other digital events.

As computers and networks have become increasingly fast, the accuracy of the timestamps becomes more and more critical.

While the average human doesn't care whether a file's creation time is accurate down to the nanosecond, it is often important to know whether it was created before or after the last backup snapshot.
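The before-or-after question above comes down to comparing nanosecond timestamps. A small sketch, with the snapshot time and file mtime set explicitly for illustration:

```python
# Sketch: "was this file modified after the last backup snapshot?"
# The snapshot timestamp is an illustrative constant; we stamp the file's
# mtime explicitly so the comparison is deterministic.
import os
import tempfile

snapshot_ns = 1_700_000_000_000_000_000  # illustrative snapshot time (ns)

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name

# Pretend the file was modified one second after the snapshot ran.
os.utime(path, ns=(snapshot_ns + 10**9, snapshot_ns + 10**9))

newer = os.stat(path).st_mtime_ns > snapshot_ns
print(newer)
os.unlink(path)
```

Note that real filesystems vary in timestamp granularity (some only store whole seconds), which is exactly why "increasingly fast computers" make naive timestamp ordering fragile.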

klabb3 2 days ago

It’s quite different from how I think about time, as a programmer. I treat human time and timezones as approximate. Fortunately I’ve been spared from working on calendar/scheduling for humans, which sounds awful for all the reasons mentioned.

Instead I mostly use time for durations and for happens-before relationships. I still use Unix flavor timestamps, but if I can I ensure monotonicity (in case of backward jumps) and never trust timestamps from untrusted sources (usually: another node on the network). It often makes more sense to record the time a message was received than trusting the sender.
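The "ensure monotonicity in case of backward jumps" idea can be sketched as a thin wrapper over the wall clock that clamps reverse steps (e.g. from an NTP correction). The class name is my own, not from any library:

```python
# Sketch: a wall-clock reader that never goes backwards. Small reverse
# jumps (e.g. NTP step corrections) are clamped to the last reading.
import time

class MonotonicWallClock:
    def __init__(self) -> None:
        self._last = time.time()

    def now(self) -> float:
        t = time.time()
        if t < self._last:
            t = self._last  # clamp: report the previous reading again
        self._last = t
        return t
```

For pure durations, `time.monotonic()` is the better tool, since it is immune to wall-clock adjustments by construction; the wrapper above is for when you need timestamps that are both roughly wall-clock and never decreasing.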

That said, I am fortunate to not have to deal with complicated happens-before relationships in distributed computing. I recall reading the Spanner paper for the first time and being amazed how they handled time windows.

hn111 a day ago

One thing I found out when programming a timeline lately was that year zero doesn’t exist. It just goes from -1 to 1. Which looks very weird if you want to display intervals of e.g. 500 years: -1001, -501, -1, 500, 1000, etc

  • zokier a day ago

    ISO 8601, one of the most common date notations, does have year zero. It really depends on what specific calendar you use. Gregorian calendar itself only starts at 1582, anything before that is some sort of extension.
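The two conventions are easy to bridge in code: astronomical year numbering (used by ISO 8601) has a year 0, while historical BC/AD numbering jumps from 1 BC to AD 1. A small sketch with a helper name of my own choosing:

```python
# Sketch: mapping astronomical year numbering (which has a year 0, as in
# ISO 8601) to historical BC/AD labels (which jump from 1 BC to AD 1).
def historical_label(astro_year: int) -> str:
    if astro_year >= 1:
        return f"AD {astro_year}"
    # Astronomical year 0 is 1 BC, -1 is 2 BC, and so on.
    return f"{1 - astro_year} BC"

for y in (-1001, -501, -1, 0, 500, 1000):
    print(y, "->", historical_label(y))
```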

dijksterhuis 2 days ago

I think this is one of my favourite write ups on HN for a while. I miss seeing more things like this.

foxyv a day ago

I am very happy about the newer 'java.time' package in Java. ZonedDateTime, Duration and Instant are my favorite classes. Everything is based on Unix epoch and UTC which makes converting times very easy.

I'm bookmarking this article to hand out to new developers.

smurpy 2 days ago

We don’t have much trouble yet with relativistic temporal distortions, but Earth’s motion causes us to lose about 0.152 seconds per year relative to the Solar system. Likewise we lose about 8.5 seconds per year relative to the Milky Way. I wonder when we’re going to start to care. Presumably there would be consideration of such issues while dealing with interplanetary spacecraft, timing burns and such.

  • Bjartr 2 days ago

    GPS satellite clocks are deliberately set to tick slightly slow before launch, because in orbit the combined relativistic effects of moving fast and sitting farther out of Earth's gravity well make them run fast relative to ground clocks: they gain about 45 microseconds per day from the weaker gravity and lose about 7 microseconds per day from their orbital speed, for a net drift of roughly +38 microseconds per day. Uncorrected, that would accumulate around 11 km of positioning error per day.

    https://www.gpsworld.com/inside-the-box-gps-and-relativity/
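The 11 km figure follows directly from the ~38 µs/day net drift, since GPS ranging converts time error into distance at the speed of light. A quick back-of-envelope check:

```python
# Back-of-envelope: net relativistic drift of a GPS clock is about
# +38 microseconds per day (~ +45 us/day gravitational, -7 us/day velocity).
# Pseudorange error is that time error multiplied by the speed of light.
C = 299_792_458          # speed of light, m/s
drift_per_day = 38e-6    # seconds of clock drift per day

error_m = drift_per_day * C
print(f"{error_m / 1000:.1f} km of pseudorange error per day")
```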

  • zokier a day ago

    It is why we are introducing LTC, Coordinated Lunar Time. Apparently the relativistic effects on the Moon are already big enough to make using UTC problematic.

  • smurpy 2 days ago

    Earth time <> Sol time <> SagA* time

a_t48 2 days ago

I’m all about monotonic time everywhere after having seen too many badly configured time sync setups. :)

lionelholt 2 days ago

... humans don't generally say

"Wanna grab lunch at 1,748,718,000 seconds from the Unix epoch?"

I'm totally going to start doing that now.

TZubiri a day ago

>Two important concepts for describing time are "durations" and "instants"

The standard name for a duration in physics is a "period", written as uppercase T (lowercase t being a point in time), which curiously enough is the inverse of a frequency (or the frequency is the inverse of it). A period can also be thought of as an interval [t0, t1], with T = t1 - t0.
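The period/frequency relationship described above, as a trivial sketch (the interval endpoints are illustrative values):

```python
# Sketch: a period T is the length of an interval [t0, t1],
# and frequency is its inverse.
t0, t1 = 0.25, 0.75   # seconds (illustrative endpoints)
T = t1 - t0           # period: 0.5 s
f = 1 / T             # frequency: 2.0 Hz
print(T, f)
```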

> The concept of "absolute time" (or "physical/universal time") refers to these instants, which are unique and precisely represent moments in time, irrespective of concepts like calendars and timezones.

Funnily enough, you mean the opposite. An absolute time physically does not exist, like an absolute distance, there is no kilometer 0. Every measurement is relative to another, in the case of time you might use relative to the birth of (our Lord and saviour) Jesus Christ. But you never have time "irrespective" of something else, and if you do, you are probably referring to a period with an implicit origin. For example if I say a length of 3m, I mean an object whose distance from one end to the other is 3m. And if I say 4 minutes of a song, I mean that the end is 4 minutes after the start, in the same way that a direction might be represented by a 2D vector [1,1] only because we are assuming a relationship to [0,0].

That said, it's clear that you have a lot of knowledge about calendars from a practical software experience of implementing time features in global products, I'm just explaining time from the completely different framework of classical physics, which is of course of little use when trying to figure out whether 6PM in Buenos Aires and 1 PM in 6 months in California will be the same time.

jcranmer a day ago

> What's the history of human timekeeping? Particularly before the Gregorian calendar, what historical records do we have for who was tracking/tallying the days elapsed over time? How did people coordinate on the current date globally (if at all)? How did local mean time (LMT) work in the past?

Ooh, this is a really interesting topic!

Okay, so the first thing to keep in mind is that there are three very important cyclical processes that play a fundamental role in human timekeeping and have done so since well before anything we could detect archaeologically: the daily solar cycle, the lunar cycle (whence the month), and the solar year. All of these are measurable with mark 1 human eyeballs and nothing more technologically advanced than a marking stick.

For most of human history, the fundamental unit of time from which all other time units are defined is the day. Even in the SI system, a second wasn't redefined to something more fundamental than the Earth's kinematics until about 60 years ago. For several cultures, the daylight and the nighttime hours are subdivided into a fixed number of periods, which means that the length of the local equivalent of 'hour' varied depending on the day of the year.

Now calendars specifically refer to the systems for counting multiple days, and they break down into three main categories: lunar calendars, which look only at the lunar cycle and don't care about aligning with the solar year; lunisolar calendars, which insert leap months to keep the lunar cycle vaguely aligned with the solar year (since a year is about 12.4 lunations long); and solar calendars, which don't try to align the lunations (although you usually still end up with something akin to the approximate length of a lunation as subdivisions). Most calendars are actually lunisolar calendars, probably because lunations are relatively easy to calibrate (when you can go outside and see the first hint of a new moon, you start the new month) but one of the purposes of the calendar is to also keep track of seasons for planting, so some degree of solar alignment is necessary.

If you're following the history of the Western calendrical tradition, the antecedent of the Gregorian calendar is the Julian calendar, which was promulgated by Julius Caesar as an adaptation of the Egyptian solar calendar for the Romans, after a series of civil wars caused the officials to neglect the addition of requisite leap months. In a hilarious historical example of fencepost errors, the number of years between leap years was confused and his successor Augustus had to actually fix the calendar to have a leap year every 4th year instead of every third year, but small details. I should also point out that, while the Julian calendar found wide purchase in Christendom, that didn't mean that it was handled consistently: the day the year started varied from country to country, with some countries preferring Christmas as New Year's Day and others preferring as late as Easter itself, which isn't a fixed day every year. The standardization of January 1 as New Year's Day isn't really universal until countries start adopting the Gregorian calendar (the transition between the Julian and Gregorian calendars was not smooth at all).

Counting years is even more diverse and, quite frankly, annoying. The most common year-numbering scheme is a regnal numbering: it's the 10th year of King Such-and-Such's reign. Putting together an absolute chronology in such a situation requires accurate lists of kings and such, which are often lacking; there are essentially perennial conflicts in Ancient Near East studies over how to map those dates to ones we'd be more comfortable with. If you think that's too orderly, you could just name years after significant events (this is essentially how Winter Counts work in Native American cultures); the Roman consular system works on that basis. If you're lucky, sometimes people also had an absolute epoch-based year number, like modern people largely agree that it's the year 2025 (or Romans using 'AUC', dating the mythical founding of Rome), but this tends not to be the dominant mode of year numbering for most of recorded human history.