joeyh 4 days ago

Steve Langasek decided to work on this problem in the last few years of his life and was a significant driver of progress on it. He will be missed, and I'll always think of him when I see a 64 bit time_t.

  • AceJohnny2 4 days ago

    Thanks for the reminder, Joey. He is missed.

    Are you still involved in Debian?

larrik 4 days ago

> Readers of a certain vintage will remember well the "Y2K problem," caused by retrospectively shortsighted attempts to save a couple of bytes by using two-digit years – meaning that "2000" is represented as "00" and assumed to be "1900."

This seems overly harsh/demeaning.

1. those 2 bytes were VERY expensive on some systems or usages, even into the mid-to-late 90's

2. software was moving so fast in the 70s/80's/90's that you just didn't expect it to still be in use in 5 years, much less all the way to the mythical "year 2000"

  • GuB-42 4 days ago

    And we still use 2 digit years!

    For example, credit cards often use the mm/yy format for expiration dates because it is more convenient to write and, considering the usual lifetime of a credit card, it is sufficient. But it means there is a two-digit date somewhere in the system, and if the conversion just adds 2000, we are going to have a problem in 2100 if nothing changes, no matter how many bytes we use to represent and store the date. A lot of the Y2K problem was simple UI problems, like a text field with only 2 characters and a hardcoded +1900.

    One of the very few Y2K bugs I personally experienced was an internet forum going from the year 1999 to the year 19100. Somehow, they had the correct year (2000), subtracted 1900 (=100) and put a "19" in front as a string. Nothing serious, it was just a one-off display error, but that's the kind of thing that happened in Y2K, it wasn't just outdated COBOL software and byte savings.
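
    For the curious, the "19100" bug was likely something like this pattern in C (or the equivalent in Perl's localtime): struct tm's tm_year field is defined as years since 1900, and the buggy code glued a literal "19" in front instead of adding 1900. A minimal sketch:

        #include <stdio.h>
        #include <time.h>

        int main(void) {
            time_t now = time(NULL);
            struct tm *t = localtime(&now);

            printf("buggy:   19%d\n", t->tm_year);      /* in 2000, tm_year = 100 -> "19100" */
            printf("correct: %d\n", 1900 + t->tm_year); /* -> "2000" */
            return 0;
        }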

  • cogman10 4 days ago

    This is a case where "premature optimization" would have been a good thing.

    They could have represented dates as a simple int value zeroed at 1900. The math to convert a day number to a day/month/year is pretty trivial even for 70s computers, and the end result would have been saving more than just a couple of bytes. 3 bytes could represent days from 1900 to roughly the year 47,800 (unsigned).

    Even 2 bytes would have bought ~1900->2079
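
    A minimal sketch of that scheme (hypothetical, not something historical systems actually did): an unsigned day count zeroed at 1900-01-01, packed into 3 bytes, converted back to year/month/day with the standard days-to-civil-date arithmetic (as written up by Howard Hinnant):

        #include <stdint.h>
        #include <stdio.h>

        /* Convert days since 1970-01-01 to a proleptic Gregorian y/m/d. */
        static void civil_from_days(int64_t z, int *y, unsigned *m, unsigned *d) {
            z += 719468;
            int64_t era = (z >= 0 ? z : z - 146096) / 146097;
            unsigned doe = (unsigned)(z - era * 146097);                    /* [0, 146096] */
            unsigned yoe = (doe - doe/1460 + doe/36524 - doe/146096) / 365; /* [0, 399] */
            int64_t yr  = (int64_t)yoe + era * 400;
            unsigned doy = doe - (365*yoe + yoe/4 - yoe/100);               /* [0, 365] */
            unsigned mp  = (5*doy + 2) / 153;                               /* [0, 11] */
            *d = doy - (153*mp + 2)/5 + 1;
            *m = mp + (mp < 10 ? 3 : -9);
            *y = (int)(yr + (*m <= 2));
        }

        int main(void) {
            uint32_t day1900 = 45000;       /* fits in 3 bytes; 0 = 1900-01-01 */
            int y; unsigned m, d;
            /* 25567 = days between 1900-01-01 and the 1970-01-01 epoch */
            civil_from_days((int64_t)day1900 - 25567, &y, &m, &d);
            printf("%04d-%02u-%02u\n", y, m, d);   /* prints 2023-03-17 */
            return 0;
        }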

    • umanwizard 4 days ago

      There were plenty of people born in 1899 who were still alive in 1970, so you couldn't e.g. use your system to store people's birth dates.

      • zerocrates 4 days ago

        Of course you couldn't with 2 digit years either, or at least not without making more changes to move the dividing line between 1900s/1800s down the line.

      • cogman10 4 days ago

        Cut the upper range in half, use 3 bytes and two's complement.

        That gives you something like 20,000BCE -> 22,000CE.

        Doesn't really change the math to spit out a year and it uses fewer bytes than what they did with dates.

        I will say the math gets more tricky due to calendar differences. But, if we are honest, nobody is really caring a lot about March 4, 43-BCE

      • Y_Y 4 days ago

        I think GP meant uint, but by my book int should have a sign bit, so that grampa isn't born in the future.

        • tremon 4 days ago

          The only reason why a system can only represent the range from 1900-1999 is when the system uses characters (ascii decimal digits) or BCD encoded digits. It would have been very unlikely for any integer-based encoding system to have had a cutoff date at 1999 (e.g. int8 would need an epoch at 1872 to rollover after 1999), so I don't think signed vs unsigned makes a difference here.

    • Hemospectrum 4 days ago

      > They could have represented dates as a simple int value

      In standard COBOL? No, they couldn't have.

      • cogman10 4 days ago

        The average programmer couldn't have, the COBOL language authors could.

        COBOL has datatypes built into it, even in COBOL 60. Date, especially for what COBOL was being used for, would have made a lot of sense to add as one of the supported datatypes.

      • colejohnson66 4 days ago

        And COBOL can support four-digit numbers.

        • __d 2 days ago

          The problem was mostly that storage was expensive.

          It’s difficult to understand in an era of cheap terabyte SSDs, but in the 1960s and 1970s, DASD (what IBM mainframes called hard drives) was relatively tiny and very expensive.

          And so programmers did the best they could (in COBOL) to minimize the amount of data stored. Especially for things that there were lots of, like say, bank transactions. Two bytes here and two bytes there and soon enough you’re saving millions of dollars in hardware costs.

          Twenty years later, and that general ledger system that underlies your entire bank’s operations just chugging along solidly 24/7/365 needs a complete audit and rewrite because those saved bytes are going to break everything in ten years.

          But it was probably still cheaper than paying for the extra DASD in the first place.

  • GartzenDeHaes 4 days ago

    People aren't getting that it was two characters that need to be added, not two bytes to make a short into an int. COBOL uses a fixed width character format for all data (yes even for COMP). If you want a four digit number, then you have to use 4 character positions. Ten digits? Then ten characters.

    These field sizes have to be hard coded into all parts of the COBOL program including data access, UI screens, batch jobs, intermediate files, and data transfer files.

    • Per_Bothner 4 days ago

      "COBOL uses a fixed width character format for all data (yes even for COMP). If you want a four digit number, then you have to use 4 character positions."

      That is incorrect. USAGE COMP will use binary, with the number of bytes depending on the number of digits in the PIC. COMP-1 specifically takes 4 bytes. COMP-3 uses packed decimal (4 bits per digit).

      • GartzenDeHaes 3 days ago

        That's what the specs say, but I found out it actually didn't work that way when I was working on a transpiler, at least for that installation.

    • larrik 3 days ago

      yeah, I shouldn't have said "bytes" either, especially as I had AS/400's in mind when I wrote it.

  • hans_castorp 4 days ago

    I was working on a COBOL program in the late 80's that stored the year as a single-digit value. It sounded totally stupid when the structure was explained to me. But records were removed after 4 years automatically, so it wasn't a problem; it was always obvious which year was stored.

  • amelius 4 days ago

    I know people who bought large amounts of put options just before Y2K, thinking that the stocks of large banks would crash. But little happened ...

    • eloisant 4 days ago

      Little happened because work was done to prevent issues.

      Also it was a bit dumb to imagine the computers would crash at 00:00 on Jan 1st 2000, bugs started to happen earlier as it's common to work with dates in the future.

      • __d 4 days ago

        As an example, I started working on Y2K issues in 1991, and it was a project that had been running for several years already. It was an enormous amount of work, at least 25% of the bank’s development budget for over a decade.

        30 year mortgages were the first thing that was fixed, well before my time. But we still had heaps of deadlines through the 90’s as future dates passed 2000.

        The inter-bank stuff was the worst: lots of coordination needed to get everyone ready and tested before the critical dates.

        It’s difficult to convey how much work it all was, especially given the primitive tools we had at the time.

      • bigstrat2003 4 days ago

        > Also it was a bit dumb to imagine the computers would crash at 00:00 on Jan 1st 2000, bugs started to happen earlier as it's common to work with dates in the future.

        That is why people have the "nothing happened" reaction. There were doomers predicting planes would literally fall out of the sky when the clock rolled over, and other similar Armageddon scenarios. So of course when people were making predictions that strong, everyone notices when things don't even come close to that.

      • Hikikomori 4 days ago

        Is there a Y2K-unsafe Linux you can try in a VM?

        • pkaye 4 days ago

          Linux (and probably most Unix systems) use a 32-bit time counter, so they didn't have the Y2K issue. But there might have been some applications that had it. And it's possible some early BIOS clocks used a 2-digit year that had to be worked around.
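
          For reference, the limit that the rest of this thread is about lands in January 2038; a two-line check (runs anywhere, since 0x7FFFFFFF still fits a 32-bit time_t):

              #include <stdio.h>
              #include <time.h>

              int main(void) {
                  time_t limit = 0x7FFFFFFF;              /* max signed 32-bit value */
                  char buf[64];
                  strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", gmtime(&limit));
                  printf("32-bit time_t ends at %s\n", buf);  /* 2038-01-19 03:14:07 UTC */
                  return 0;
              }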

      • amelius 4 days ago

        Yeah, I don't remember the specifics.

  • burnt-resistor 3 days ago

    For RTC storage in CMOS using a BCD byte, one could assume that the epoch was relative to, say, the decade of manufacturing (suppose 1990), so that dates rolling over from 99 to 00 would instead create a Y2090 problem:

        Y = (yy < 90) ? (2000 + yy) : (1900 + yy);
    
    This would have to be handled differently than something that was required to be IBM PC or IBM AT compatible with every compatible quirk. It's simply a way to save 8 bits of battery-backed SRAM or similar.
lambdaone 4 days ago

Not all solutions are going with just 64 bits worth of seconds, although 64 bit time_t will certainly sort out the Epochalypse.

ext4 moved some time ago to 30 bits of fractional resolution (on the order of nanoseconds) and 34 bits of seconds resolution. It punts the problem 400 years or so into the future. I'm sure we will eventually settle on 128-bit timestamps with 64 bits of seconds and 64 bits of fractional resolution, and that should sort things for foreseeable human history.
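
For reference, this is roughly how those 34 + 30 bits are packed (field names here are illustrative; the layout follows the ext4 on-disk format as documented in the kernel tree, where the low 2 bits of the "extra" word extend the legacy 32-bit seconds field and the high 30 bits carry nanoseconds):

    #include <stdint.h>

    struct ext4_like_time {
        uint32_t seconds;  /* legacy signed 32-bit seconds field, stored raw */
        uint32_t extra;    /* bits 0-1: epoch extension, bits 2-31: nanoseconds */
    };

    static int64_t decode_seconds(struct ext4_like_time t) {
        int64_t secs = (int32_t)t.seconds;       /* sign-extend the legacy field */
        secs += (int64_t)(t.extra & 0x3) << 32;  /* add the 2 extra epoch bits */
        return secs;                             /* covers roughly 1901 to the year 2446 */
    }

    static uint32_t decode_nanoseconds(struct ext4_like_time t) {
        return t.extra >> 2;                     /* 30 bits: 0..999,999,999 */
    }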

  • badc0ffee 4 days ago

    64 bits of fractional resolution? No way, gotta use 144 bits so we can get close to Planck time.

  • jmclnx 4 days ago

    Thanks, I was wondering about ext4 and time stamps.

    I wonder what the zfs/btrfs type file systems do. I am a bit lazy to check, but I expect btrfs is using 64 bit. zfs, I would not be surprised if it matches ext4.

    • XorNot 4 days ago

      A quick glance at ZFS shows it uses a uint64_t time field in nanoseconds in some places.

      So 580 years or so till problems (but probably patchable ones? I believe the on disk format is already 2x uint64s, this is just the gethrtime() function I saw).

      • RedShift1 4 days ago

        What is the use of such high precision file timestamps?

        • zokier 4 days ago

          Nanoseconds is just the common sub-second unit that is used. Notably, it is used internally in the Linux kernel and exposed via clock_gettime (and related functions) via the timespec struct

          https://man7.org/linux/man-pages/man3/timespec.3type.html

          It is a convenient unit because 10^9 fits neatly into a 32-bit integer, and it is unlikely that anyone would need more precision than that for any general-purpose use.
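
          For example, the usual way to read it (tv_sec is a time_t, tv_nsec a long in [0, 999999999]):

              #include <stdio.h>
              #include <time.h>

              int main(void) {
                  struct timespec ts;
                  if (clock_gettime(CLOCK_REALTIME, &ts) == 0) {
                      /* seconds.nanoseconds since the Unix epoch */
                      printf("%lld.%09ld\n", (long long)ts.tv_sec, ts.tv_nsec);
                  }
                  return 0;
              }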

lioeters 4 days ago

> Debian's maintainers found the relevant variable, time_t, "all over the place,"

Nit: time_t is a data type, not a variable.

  • scottlamb 4 days ago

    This is a reporter paraphrase of the Debian wiki page, which says: "time_t appears all over the place. 6,429 of Debian's 35,960 packages have time_t in the source. Packages which expose structs in their ABI which contain time_t will change their ABI and all such libraries need to migrate together, as is the case for any library ABI change."

    A couple significant things I found much clearer in the wiki page than in the article:

    * "For everything" means "on armel, armhf, hppa, m68k, powerpc and sh4 but not i386". I guess they've decided i386 doesn't have much of a future and its primary utility is running existing binaries (including dynamically-linked ones), so they don't want to break compatibility.

    * "the move will be made after the release of Debian 13 'Trixie'" means "this change is included in Trixie".

cbmuser 4 days ago

»Venerable Linux distribution Debian is side-stepping the Y2K38 bug – also known as the Unix Epochalypse – by switching to 64-bit time for everything but the oldest of supported hardware, starting with the upcoming Debian 13 "Trixie" release.«

That's inaccurate. We actually switched over all 32-bit ports except i386 because we wanted to keep compatibility for this architecture with existing binaries.

All other 32-bit ports use time64_t, even m68k ;-). I did the switch for m68k, powerpc, sh4 and partially hppa.

  • ldng 3 days ago

    Since you're nitpicking, I can't resist ^^ I would say that it's not inaccurate but, rather, less precise. And wouldn't you say that i386 IS the oldest arch? ;-)

amelius 4 days ago

Can we also switch to unlimited/dynamic command line length?

I'm tired of "argument list too long" on my 96GB system.

  • jeroenhd 4 days ago

    You can recompile your kernel to work around the 100k-ish command line length limit: https://stackoverflow.com/questions/33051108/how-to-get-arou...

    However, that sounds like solving the wrong end of the problem to me. I don't really know what a 4k JPEG worth of command line arguments is even supposed to be used for.

    • mort96 4 days ago

      Linking together thousands of object files with the object paths to the linker binary as command line arguments is probably the most obvious example of where the command length limit becomes problematic.

      Most linkers have workarounds: you can write the paths, separated by newlines, to a file and make the linker read the object file paths from that file (so-called response files, e.g. the @file syntax that GCC and GNU ld accept). But it would be nice if such workarounds were unnecessary.

      • AlotOfReading 4 days ago

        I tracked down a fun bug early in my career with those kinds of paths. The compiler driver was internally invoking a shell and had an off-by-one error that caused it to drop every 1023rd character if the total length exceeded 4k.

      • ogurechny 4 days ago

        This sounds like a job for named pipes. You get the temporary file, but nothing is actually written to disk. Or maybe unnamed pipes, if bash command redirection is suitable for creating the list of options.

        Looking back, it's unfortunate that Unix authors offered piping of input and output streams, but did not extend that to an arbitrary number of streams, making process arguments just a list of streams (with some shorthand form for constants to type on the command line, and a universal grammar). We could have become used to programs that react to multiple inputs or produce multiple outputs.

        It is obvious that it made sense in the '70s to just copy the call string to some free chunk of memory in the system record for the starting process, and let it parse those bytes in any way it wants, but, as a result, we can't just switch from list of arguments to arbitrary stream without rewriting the program. In that sense, argument strings are themselves a workaround, a quick hack which gave birth to ad-hoc serialisation rules, multi-level escaping chains, lines that are “too long” for this random system or for that random system, etc.

    • qart 4 days ago

      I have worked on projects (research, not production) where doing "ls" on some directories would crash the system. Some processes generated that many data files. These files had to be fed to other programs that did further processing on them. That's when I learned to use xargs.

    • wongarsu 4 days ago

      tar cf metadata.tar.zstd *.json

      in a large directory of image files with json sidecars

      Somebody will say use a database, but when working for example with ML training data one label file per image is a common setup and what most tooling expects, and this extends further up the data preparation chain

      • cesarb 4 days ago

        > tar cf metadata.tar.zstd *.json

        From a quick look at the tar manual, there is a --files-from option to read more command line parameters from a file; I haven't tried, but you could probably combine it with find through bash's process substitution to create the list of files on the fly.

        • syncsynchalt 4 days ago

          Yes, workarounds exist, but it would be nicer if the arbitrary limits were removed instead.

      • NegativeK 4 days ago

        I won't say use a database, but I will beg you to compress a parent directory instead of making a tar bomb.

        • mort96 4 days ago

          That really just makes the problem worse: tar czf whatever.tgz whatever/*.json; you've added 9 bytes to every file path.

          I mean I get that you're suggesting to provide only one directory on the argv. But it sucks that the above solution to add json files to an archive while ignoring non-json files only works below some not-insane number of files.

          • stouset 4 days ago

            Having an insane number of files in the same directory is already a performance killer. Here you’re talking about having something like 10,000 JSON files in one directory plus some number of non-JSON files, and you’d just be better off in all cases having these things split across separate directories.

            Does it suck you can’t use globing for this situation? Sure, yeah, fine, but by the time it’s a problem you’re already starting to push the limits of other parts of the system too.

            Also using that glob is definitely going to bite you when you forget some day that some of the files you needed were in subdirectories.

            • wongarsu 4 days ago

              But in this example the files are conceptually a flat structure. Any hierarchy would be artificial, like the common ./[first two letters]/[second two letters]/filename structure. Which you can do, but it certainly doesn't make creating the above tarball any easier. Now we really need to use some kind of `find` invocation instead of a simple glob

              It also just extends the original question. If I have a system with 96GB RAM and terabytes of fast SSD storage, why shouldn't I be able to put tens of thousands of files in a directory and write a glob that matches half of them? I get that this was inconceivable in v6 unix, but in modern times those are entirely reasonable numbers. Heck, Windows Explorer can do that in a GUI, on a network drive. And that's a program that has been treated as essentially feature complete for nearly 30 years now, on an OS with a famously slow file system stack. Why shouldn't I be able to do the same on a linux command line?

            • mort96 4 days ago

              > Does it suck you can’t use globing for this situation? Sure, yeah

              Then we agree :)

      • badc0ffee 4 days ago

        You don't need to tar everything in one command. You can batch your tar into multiple commands with a reasonable amount of arguments with something like `rm metadata.tar.zstd && find . -maxdepth 1 -name \*.json -exec tar rf metadata.tar.zstd {} +`.

    • tomrod 4 days ago

      A huge security manifold to encourage adoption then sell white hat services on top?

    • silverwind 4 days ago

      There's numerous examples of useful long commands, here is one:

          perl -p -i -e 's#foo#bar#g' **/*
      • PaulDavisThe1st 4 days ago

        no limits:

            find . -type f -exec perl -p -i -e 's#foo#bar#g' {} \;
        • Someone 4 days ago

          That runs perl multiple times, possibly/likely often in calls that effectively are no-ops. To optimize the number of invocations of perl, you can/should use xargs (with -0)

          • dr4g0n 4 days ago

            No need for `xargs` in this case, `find` has been able to take care of this for quite some time now, using `+` instead of `;`:

                find . -type f -exec perl -p -i -e 's#foo#bar#g' {} +
          • tremon 4 days ago

            xargs constructs a command line from the find results, so if **/* exceeds the max command line length, so will xargs.

            • Someone 2 days ago

              xargs was written to avoid that problem, so no, it won’t. https://man7.org/linux/man-pages/man1/xargs.1.html:

              “The command line for command is built up until it reaches a system-defined limit (unless the -n and -L options are used). The specified command will be invoked as many times as necessary to use up the list of input items. In general, there will be many fewer invocations of command than there were items in the input. This will normally have significant performance benefits.”

              Your only risk is that it won't handle inputs that, on their own, are too long.

    • dataflow 4 days ago

      > I don't really know what a 4k JPEG worth of command line arguments is even supposed to be used for.

      I didn't either, until I learned about compiler command line flags.

      But also: the nice thing about command line flags is they aren't persisted anywhere (normally). That's good for security.

      • eightys3v3n 4 days ago

        I thought there were ways for other users on the system to see a running process's command line flags (e.g. via ps or /proc/<pid>/cmdline). This isn't so good for security.

        • dataflow 4 days ago

          That depends on your threat model.

      • Ghoelian 4 days ago

        Pretty sure `.bash_history` includes arguments as well.

        • dataflow 4 days ago

          That's only if you're launching from an interactive Bash? We're talking about subprocess launches in general.

          • johnisgood 4 days ago

            If you add a whitespace before the command, it will not even get appended to history!

            • zamadatix 4 days ago

              I had never run across this and it didn't work for me when I tried it. After some reading it looks like the HISTCONTROL variable needs to be set to include "ignorespace" or "ignoreboth" (the other option included in this is "ignoredups").

              This would be really killer if it was always enabled and the same across shells but "some shells support something akin and you have to check if it is actually enabled on the ones that do" is just annoying enough that I probably won't bother adopting this on my local machine even though it sounds convenient as a concept.

              • wongarsu 4 days ago

                I'm mostly using debian and ubuntu flavors (both on desktop and the cloud images provided and customized by various cloud and hosting providers) and they have all had this as the default behavior for bash

                YMMV with other shells and base distros

                • zamadatix 4 days ago

                  Proxmox (debian 12.x based) and Arch both didn't for me (both w/ bash). An Ubuntu 24.04 container in Proxmox did though.

            • amelius 4 days ago

              This is quite annoying behavior, actually.

              • nosrepa 4 days ago

                Good thing you can turn it off!

                • amelius 4 days ago

                  Yeah, took me a while to figure that out though. Plus I lost my history.

                  Typing an extra space should not invoke advanced functionality. Bad design, etc.

    • Brian_K_White 4 days ago

      Backups containing other backups containing other backups containing vms/containers containing backups... all with deep paths and long path names. Balloons real fast with even just a couple arguments.

  • jart 4 days ago

    Just increase your RLIMIT_STACK value. It can easily be tuned down e.g. `ulimit -s 4000` for a 4mb stack. But to make it bigger you might have to change a file like /etc/security/limits.conf and then log out and back in.

    • amelius 4 days ago

      I mean, yes, that is possible. But we had fixed maximum string lengths in the COBOL era. It is time to stop wasting time on this silly problem and fix it once and for all.

      • fpoling 4 days ago

        There is always a limit. An explicit value, versus an implicit one that depends on the memory size of the system, has the big advantage that it will be hit sufficiently often that any security vulnerabilities will surface much earlier. Plus it forces the use of saner interfaces for passing big data chunks to a utility. For that reason I would even prefer the limit to be much lower on Linux, so that commands stop assuming that the user can always pass all the settings on the command line.

        • amelius 4 days ago

          Would you advocate to put a hard limit on Python lists too?

          • jart 4 days ago

            It's important to understand that functions like execve(), which are used to spawn processes, are upstream dependencies of dynamic memory functions like malloc(). It's hairy to have low-level functions in your system depend on other functions that are higher level than them. For instance I've been in situations where malloc() failed and the LLVM libcxx abort handler depends on malloc(). POSIX also defines execve() as being asynchronous signal safe, which means it isn't allowed to do things like acquire a mutex (which is necessary to allocate unbounded dynamic memory).

          • fpoling 4 days ago

            On a few occasions I wished that Python by default limited the max length of its lists to, say, 100 million elements, to avoid bad bugs that consume memory and trigger swapping for a few minutes before being killed by the OOM killer. Allocating that amount of memory as a plain Python list, rather than a specialized data structure like a numpy array, is way more likely to indicate a bug than a real need.

          • Eavolution 4 days ago

            There is already a hard limit, the amount of memory before the OOM killer is triggered.

            • 9dev 4 days ago

              So why can't the limit for shell args be the amount of memory before the OOM killer is triggered as well?

              • jart 4 days ago

                It can. Just set RLIMIT_STACK to something huge. Distros set it at 8mb by default. Take it up with them if you want the default to change for everyone.

                • 9dev 4 days ago

                  I think I, and the parent commenter, are just pointing out how arbitrary the limit is. It can't hurt to question stuff like this every once in a while.

                  • jart 4 days ago

                    There's always a limit. People only complain when it actually limits them. Most open source people have never needed to glob tens of thousands of files. If you want to feel better, POSIX says the minimum permissible ARG_MAX is 4096, and with Windows ARG_MAX is only 32767 characters.

                • amelius 4 days ago

                  I mean we wouldn't even need to have this discussion if the limit was at the memory limit.

                  Imagine having this discussion for every array in a modern system ...

  • GoblinSlayer 4 days ago

    Pack it in Electron and send an HTTP POST JSON request to it.

  • justincormack 4 days ago

    You can redefine MAX_ARG_STRLEN and recompile the kernel. Or use a machine with a larger page size, as it's defined as 32 pages; e.g. RHEL provides a 64k-pagesize Arm kernel.

    But using a pipe to move things between processes instead of the command buffer is easier...

  • loloquwowndueo 4 days ago

    Ever heard of xargs?

    • amelius 4 days ago

      Sure. But it's not the same thing (instead of invoking the command once it invokes the command multiple times), and it's a workaround at best. And the ergonomics are not great, especially if you first type the command without xargs, then find out the argument list is too long, and have to reformulate it with xargs.

    • styanax 4 days ago

      ARG_MAX (`getconf ARG_MAX`) is defined at the OS level (glibc maybe? haven't looked); xargs will also be subject to its limitation like all other processes. One can use:

          xargs --show-limits --no-run-if-empty </dev/null
      
      ...to see nicely formatted output, including the -2048 POSIX recommendation on a separate line.
  • perlgeek 4 days ago

    Same for path lengths.

    Some build systems (eg Debian + python + dh-virtualenv) like to produce very long paths, and I'd be inclined to just let them.

  • arccy 4 days ago

    what does 96GB have to do with anything? is that the size of your boot disk?

    • dataflow 4 days ago

      It's their RAM. The point is they have sufficient RAM.

efitz 4 days ago

They’re just kicking the can down the road. What will people do on December 4, 292277026596, at 15:30:07 UTC?

  • zaik 4 days ago

    Celebrate 100 years since complete ipv6 adoption.

    • IgorPartola 4 days ago

      I think you are being too optimistic. Interplanetary Grade NAT works just fine and doesn’t have the complexity of using colons instead of periods in its addresses.

      • klabb3 4 days ago

        The year is 292277026596. The IP TTL field of max 255 has been ignored for ages and would no longer be sufficient to ping even localhost. This has resulted in ghost packets stuck in circular routing loops, whose original source and destination have long been forgotten. It's estimated these ghost packets consume 25-30% of the energy from the Dyson sphere.

        • bestouff 4 days ago

          Not since the world opted for statistical TTL decrement: start at 255 and decrement by one if Rand(1024) == 0. Voilà, no more zombie packets, TCP retransmit takes care of the rest.

        • sidewndr46 4 days ago

          The ever increasing implementation complexity of IPv4 resulted in exactly one implementation that worked replacing all spiritual scripture and becoming known as the one true implementation. Due to a random bitflip going unnoticed the IPv4-truth accidentally became Turing complete several millennia ago. With the ever increasing flows of ghost packets, IPv4-truth processing power has rapidly grown and will soon achieve AGI. Its first priority is to implement 128-bit time as a standard in all programming languages to avoid the impending apocalypse.

        • MisterTea 4 days ago

          Sounds like a great sci-fi plot - hunting for treasure/information by scanning ancient forgotten packets still in-flight on a neglected automated galactic network.

          • rootbear 4 days ago

            Vernor Vinge could absolutely have included that in some of his stories.

            • db48x 4 days ago

              Charles Stross, Neptune’s Brood.

          • JdeBP 4 days ago

            I have a vague memory that Sean Williams's Astropolis series touches upon this at one point. Although it has been a while and I might be mis-remembering.

          • tengwar2 4 days ago

            There was an SF short story based on someone implementing a worm (as in the Morris Worm) which deleted all data on a planet. They fixed it by flying FTL and intercepting some critical information being sent at radio speed. I think it was said to be the first description of malware, and the origin of the term "worm" in this context.

          • saalweachter 4 days ago

            B..E....S..U..R..E....T..O....D..R..I..N..K....Y..O..U..R....O..V..A..L..T..I..N..E....

          • kstrauser 4 days ago

            “We tapped into the Andromeda Delay Line.”

        • pyinstallwoes 4 days ago

          That’s only 25-30% of the energy environmental disaster in sector 137 resulting from the Bitcoin cluster inevitably forming a black hole from the Planck-scale space-filling compute problem.

        • troupo 4 days ago

          Oh, this is a good evolution of the classic bash.org joke https://bash-org-archive.com/?5273

          --- start quote ---

          <erno> hm. I've lost a machine.. literally _lost_. it responds to ping, it works completely, I just can't figure out where in my apartment it is.

          --- end quote ---

      • saalweachter 4 days ago

        The awkward thing is how the US still has 1.5 billion IPv4s, while the 6000 other inhabited clusters are sharing the 10k addresses originally allocated to Tuvalu before it sank into the sea.

    • diegocg 4 days ago

      You can laugh but Google stats show nearly 50% of their global traffic being ipv6 (US is higher, about 56%), Facebook is above 40%.

      • londons_explore 4 days ago

        As soon as we get to about 70%, I reckon some games and apps will stop supporting ipv4 on the basis that nat traversal is a pain and dual stack networking is a pain.

        If you spend 2 days vibe coding some chat app and then you have to spend 2 further days debugging why file sharing doesn't work for ipv4 users behind nat, you might just say it isn't supported for people whose ISP's use 'older technology'.

        After that, I reckon the transition will speed up a lot.

        • gruturo 4 days ago

          > some games and apps will stop supporting ipv4 on the basis that nat traversal is a pain and dual stack networking is a pain

          None of these are actually the game/app developers' problem. The OS takes care of them for you (you may need code for e2e connectivity when both are behind a NAT, but STUN/TURN/whatever we do nowadays is trivial to implement).

          • eqvinox 3 days ago

            > None of these are actually the game/app developers' problem.

            Except people complain to the game/app developer when it doesn't work.

        • RedShift1 4 days ago

          What makes you think filesharing is going to work any better on IPv6?

          • kccqzy 4 days ago

            NAT traversal not needed. Just need to deal with firewalls. So that's one fewer thing to think about when doing peer-to-peer file sharing over the internet.

            • ectospheno 4 days ago

              “Just need to deal with firewalls.”

              The only sane thing to do in a SLAAC setup is block everything. So no, it isn’t a solved problem just because you used ipv6.

              • kccqzy 4 days ago

                No. Here's a simple strategy: the two peers send each other a few packets simultaneously, then the firewall will open because by default almost all firewalls allow response traffic. IPv6 simplifies things because you know exactly what address to send to.
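
                Roughly what that looks like with plain UDP sockets (a sketch for one side; the peer runs the same code with the addresses swapped, and 2001:db8::2 / port 40000 are placeholders):

                    #include <arpa/inet.h>
                    #include <netinet/in.h>
                    #include <stdio.h>
                    #include <sys/socket.h>
                    #include <unistd.h>

                    int main(void) {
                        int s = socket(AF_INET6, SOCK_DGRAM, 0);

                        struct sockaddr_in6 me = { .sin6_family = AF_INET6, .sin6_port = htons(40000) };
                        me.sin6_addr = in6addr_any;
                        bind(s, (struct sockaddr *)&me, sizeof me);

                        struct sockaddr_in6 peer = { .sin6_family = AF_INET6, .sin6_port = htons(40000) };
                        inet_pton(AF_INET6, "2001:db8::2", &peer.sin6_addr);  /* peer's address */

                        /* A few outgoing packets create state in our own stateful firewall,
                           so the peer's simultaneous packets are treated as response traffic. */
                        for (int i = 0; i < 3; i++) {
                            sendto(s, "hi", 2, 0, (struct sockaddr *)&peer, sizeof peer);
                            sleep(1);
                        }

                        char buf[64];
                        ssize_t n = recvfrom(s, buf, sizeof buf, 0, NULL, NULL);
                        printf("got %zd bytes from peer\n", n);
                        close(s);
                        return 0;
                    }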

                • ectospheno 4 days ago

                  That is my point. You hole punch in that scenario even without NAT. It is no easier.

                  • the8472 4 days ago

                    It's easier since you don't have to deal with symmetric NAT, external IP address discovery and port mapping.

      • avhception 4 days ago

        And yet here I am, fighting with our commercial grade fiber ISP over obscure problems in their IPv6 stack related to MTU and the phase of the moon. Sigh. I've been at this on and off for about a year (it's not a high priority thing, more of a hobby).

      • LtWorf 4 days ago

        But how much of it is not natted?

      • msk-lywenn 4 days ago

        Do they accept smtp over ipv6 now?

        • rwmj 4 days ago

          They do, but I had to change my mail routing to use IPv4 to gmail because if I connect over IPv6 everything gets categorised as spam.

        • betaby 4 days ago

          MX has IPv6:

              ~$ host gmail.com
              gmail.com has address 142.250.69.69
              gmail.com has IPv6 address 2607:f8b0:4020:801::2005
              gmail.com mail is handled by 10 alt1.gmail-smtp-in.l.google.com.
              gmail.com mail is handled by 30 alt3.gmail-smtp-in.l.google.com.
              gmail.com mail is handled by 5 gmail-smtp-in.l.google.com.
              gmail.com mail is handled by 20 alt2.gmail-smtp-in.l.google.com.
              gmail.com mail is handled by 40 alt4.gmail-smtp-in.l.google.com.

              ~$ host gmail-smtp-in.l.google.com.
              gmail-smtp-in.l.google.com has address 142.250.31.26
              gmail-smtp-in.l.google.com has IPv6 address 2607:f8b0:4004:c21::1a

        • stackskipton 4 days ago

          Yes. However, SMTP these days is almost all just servers exchanging mail, so IPv6 support is a much lower priority.

          • 1718627440 2 days ago

            How does your MUA send the message to the server? That's also SMTP.

      • creshal 4 days ago

        50%, after only 30 years.

  • greenavocado 4 days ago

    Everything on the surface of the Earth will vaporize within 5 billion years as the sun becomes a red giant

    • mike-cardwell 4 days ago

      Nah. 5 billion years from now we'll have the technology to move the Earth to a survivable orbit.

      • technothrasher 4 days ago

        Not in my backyard. I paid a lot of money to live on this gated community planet, and I'm not letting those dirty Earthlings anywhere near here.

      • red-iron-pine 4 days ago

        or we'll be so far away from earth we won't care.

        or we'll have failed to make it through the great filter and all be long extinct.

      • juped 4 days ago

        We have the technology, just not the logistics.

      • EbNar 4 days ago

        Orbit around... what, exactly?

      • swayvil 4 days ago

        Or modify the sun.

        • speed_spread 4 days ago

          Oh please, we're just getting past this shared mutable thing.

    • daedrdev 4 days ago

      The carbon cycle will end in only 600 million years due to the increasing brightness of the sun if you want a closer end date for life as we know it on earth

    • layer8 4 days ago

      The oceans will already start to evaporate in a billion years.

  • tmtvl 4 days ago

    Move to 128-bit time.

    • bombcar 4 days ago

      You laugh, but a big danger with “too big” bit representations is the temptation to use the “unused” bits as flags for other things.

      We’ve seen it before with 32 bit processors limited to 20 or 24 bits addressable because the high order bits got repurposed because “nobody will need these”.

      • bigstrat2003 4 days ago

        And with 64-bit pointers in Linux, where you have to enable kernel flags to use anything higher than 48 bits of the address space. All because some very misguided people figured it would be ok to use those bits to store data. You'd think the fact that the processor itself will throw an exception if you use those bits would be a red flag of "don't do that", but you would apparently be wrong.

        • Someone 4 days ago

          > You'd think the fact that the processor itself will throw an exception if you use those bits would be a red flag of "don't do that"

          That makes it slightly safer to use those bits, doesn't it? As long as your code asks the OS how many bits the hardware supports, and only uses the ones it requires to be zero, then if you forget to clear the bits before following a pointer, the worst that can happen is a segfault, not reading ‘random’ memory.
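
          For anyone who hasn't seen the trick being discussed, a rough sketch of pointer tagging (assuming the common x86-64 case of 48-bit virtual addresses; on other configurations the mask has to come from the OS/CPU), including the mask-and-sign-extend step that buggy code forgets:

              #include <stdint.h>
              #include <stdio.h>

              #define ADDR_BITS 48
              #define ADDR_MASK (((uintptr_t)1 << ADDR_BITS) - 1)

              static void *tag_pointer(void *p, uint16_t tag) {
                  return (void *)(((uintptr_t)p & ADDR_MASK) | ((uintptr_t)tag << ADDR_BITS));
              }

              static void *untag_pointer(void *p) {
                  uintptr_t v = (uintptr_t)p & ADDR_MASK;
                  if (v & ((uintptr_t)1 << (ADDR_BITS - 1)))
                      v |= ~ADDR_MASK;   /* restore canonical (sign-extended) form */
                  return (void *)v;
              }

              int main(void) {
                  int x = 42;
                  void *tagged = tag_pointer(&x, 0x7);
                  printf("%d\n", *(int *)untag_pointer(tagged));  /* prints 42 */
                  return 0;
              }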

    • HPsquared 4 days ago

      Best to switch to 512 bits, that's enough to last until the heat death of the universe, with plenty of margin for time dilation.

      • pulse7 4 days ago

        Best to switch to Smalltalk integers which are unlimited...

      • bayindirh 4 days ago

        Maybe we can add a register to the processors for just keeping time. At the end of the day, it's a ticker, no?

        RTX[0-7] would do. For time dilation purposes, we can have another 512 bit set to adjust ticking direction and frequency.

        Or shall we go 1024 bits on both to increase resolution? I'd agree...

    • layer8 4 days ago

      Just use LEB128.

  • Ekaros 4 days ago

    Hopefully by then we will have moved to a better calendar... Not that it will change the timestamp issue.

    • GoblinSlayer 4 days ago

      By that time we will have technology to spin Earth to keep calendar intact.

      • freehorse 4 days ago

        And also modify earth's orbit to get rid of the annoying leap seconds.

        • saalweachter 4 days ago

          "To account for calendar drift, we will be firing the L4 thrusters for six hours Tuesday. Be sure not to look directly at the thrusters when firing, to avoid having your retinas melt."

          "... still better than leap seconds."

        • zokier 4 days ago

          rotation, not orbit.

    • b3lvedere 4 days ago

      Star Trek stardates?

      • mrlonglong 4 days ago

        Today, right now it's -358519.48

  • layer8 4 days ago

    UTC will stop being a thing long before the year 292277026596.

zdw 4 days ago

Only 11 years after OpenBSD 5.5 did the same change: https://www.openbsd.org/55.html

  • mardifoufs 4 days ago

    OpenBSD doesn't have to care about compatibility as much, and has orders of magnitude less users. Which also means that changes are less likely to cause bugs from obscure edge cases.

    • zdw 4 days ago

      OpenBSD (and most other BSDs) are willing to make changes that break binary backwards compatibility, because they maintain and ship both the kernel and userland together as a release and can thus "steer the whole ship", rather than the kernel being its own separately developed component, as with Linux.

      • mardifoufs 4 days ago

        Sure, that's usually true, but Debian also has that ability in this case ( they can fix, patch, or update everything in the repository). The issue is mostly with all software that isn't part of the distribution and the official repositories. Which is a lot of software especially for Debian. OpenBSD doesn't have that issue, breaking the ABI won't cause tens of thousands of user applications to break.

        But I agree that Debian is still too slow to move forward with critical changes even with that in mind. I just don't think that OpenBSD is the best comparison point.

    • anthk 4 days ago

      Guess where OpenSSH comes from.

      • mardifoufs 4 days ago

        What? What does that have to do with what I said? Nothing that I said was about the project as a whole. I was just saying that the OS has different constraints than Debian does. What does openssh have to do with how easy it is for OpenBSD to break ABI compatibility?

  • JdeBP 4 days ago

    I have you all beaten. When I discovered that the 32-bit OS/2 API actually returned a 64-bit time, I wrote a C++ standard library for my OS/2 programs with a 64-bit time_t. This was in the 1990s.

  • keysdev 4 days ago

    A bit off topic, but it's times like this that really make me want to swap out the public-facing server from Linux to OpenBSD.

rini17 4 days ago

> Debian is confident it is now complete and tested enough that the move will be made after the release of Debian 13 "Trixie" – at least for most hardware.

This means Trixie won't have it?

  • zokier 4 days ago
    • wongarsu 4 days ago

      "All architectures other than i386 ..."

      So Trixie does not have 64-bit time for everything.

      Granted, the article, subtitle and your link all point out that this is intentional and won't be fixed. But in the strictest sense, which GP was likely going for, Trixie does not have what the headline of this article announces.

      • cbmuser 4 days ago

        It's not planned for i386 to avoid breaking existing i386 binaries, of which there are a lot.

thesuitonym 4 days ago

> Y2K38 bug – also known as the Unix Epochalypse

Is it also known as that? It's a cute name but I've never seen anyone say it before this article. I guess it's kind of fetch though.

pilif 4 days ago

"everything" for those values of "everything" that do not include one of the most (if not the most) widely used 32 bit architectures.

(snark aside: I understand the arguments for and against making the change of i386 and I think they did the right thing. It's just that I take slight issue with the headline)

  • pantalaimon 4 days ago

    I doubt that i386 is still widely used. You are more likely to find embedded ARM32 devices running Linux, for x86 this is only the case in the retro computing community.

    • pm215 4 days ago

      It's actually still pretty heavily used in some niches, which mostly amount to "running legacy binaries on an x86-64 kernel". LWN had an article recently about the Fedora discussion on whether to drop i386 support (they decided to keep it): https://lwn.net/Articles/1026917/

      One notable use case is Steam and running games under Wine -- there are apparently a lot of 32 bit games, including still some relatively recent releases.

      Of course if your main use case for the architecture is "run legacy binaries" then an ABI change is probably inducing more pain than it seeks to solve, hence the exception of it from Debian's transition here.

    • Ekaros 4 days ago

      Intel Core 2, the start of their 64-bit CPUs, will be 20 years old next year. Athlon 64 is over 20 years old... I truly wonder how many real computers, and not just VMs, are left.

      • jart 4 days ago

        8.8 percent of Firefox users have 32-bit systems. It's probably mostly people with a 32-bit install of Windows 7 rather than people who actually have an old 32-bit Intel chip like Prescott. Intel also sold 32-bit chips like Intel Quark inside Intel Galileo boards up until about 2020. https://data.firefox.com/dashboard/hardware

        People still buy 16-bit i8086 and i80186 microprocessors too. Particularly for applications like defense, aerospace, and other critical systems where they need predictable timing, radiation hardening, and don't have the resources to get new designs verified. https://www.digikey.at/en/products/detail/rochester-electron...

      • wongarsu 4 days ago

        On windows, encountering 32 bit software isn't all that rare. Running on 64 bit hardware on a 64bit OS, but that doesn't change that the 32bit software uses 32bit libraries and 32bit OS interfaces.

        Linux is a lot more uniform in its software, but when emulating windows software you can't discount i386

      • pilif 4 days ago

        I wasn't discounting VMs with my initial statement. I can totally imagine quite a few VMs still being around, either migrated from physical hardware or even set up fresh to conserve resources.

        Plus, keeping i386 the same also means any still available support for running 32 bit binaries on 64 bit machines.

        All of these cases (especially the installable 32 bit support) must be as big or bigger than the amount of ARM machines out there.

        • axus 4 days ago

          In the Linux binary context, do i386 and i686 mean the same thing? i686 seems relatively modern in comparison, even if it's 32-bit.

          • mananaysiempre 4 days ago

            Few places still maintain genuine i386 support—I don’t believe the Linux kernel does, for example. There are some important features it lacks, such as CMPXCHG. Nowadays Debian’s i386 is actually i686 (Pentium Pro), but apparently they’ve decided to introduce a new “i686” architecture label to denote a 32-bit x86 ABI with a 64-bit time_t.

            Also, I’m sorry to have to tell you that the 80386 came out in 1985 (with the Compaq Deskpro 386 releasing in 1986) and the Pentium Pro in 1995. That is, i686 is three times closer to i386 than it is to now.

        • bobmcnamara 4 days ago

          Opt in numbers here: https://popcon.debian.org/

          • pilif 4 days ago

            that tells me my snark about i386 being the most commonly used 32 bit architecture wasn't too far off reality, doesn't it?

            • bobmcnamara 4 days ago

              Indeed - i386 is certainly the most common 32-bit Debian platform.

              Note also that the numbers are log-scale, so while it looks like Arm64 is a close third over all bitwidths, it isn't.

              • umanwizard 4 days ago

                I'm actually amazed by this, I would have bet a lot on aarch64 being second.

                • bobmcnamara a day ago

                  Debian doesn't support Arm64 Mac...

                  • umanwizard a day ago

                    You can run whatever distro you want in a VM, though. My daily driver is GuixSD in an aarch64 VM on a Mac. I wouldn’t recommend guixsd if you don’t love tinkering and troubleshooting, but otherwise the setup works fine.

      • pantalaimon 4 days ago

        The later Prescott Pentium 4 already supported 64-bit, but the Pentium M / first-generation Atom did not.

  • zokier 4 days ago

    i386 is not really properly supported arch for trixie anymore:

    > From trixie, i386 is no longer supported as a regular architecture: there is no official kernel and no Debian installer for i386 systems.

    > Users running i386 systems should not upgrade to trixie. Instead, Debian recommends either reinstalling them as amd64, where possible, or retiring the hardware.

    https://www.debian.org/releases/trixie/release-notes/issues....

    • IsTom 4 days ago

      > retiring the hardware.

      The contrast with the age of hardware retired by Windows 11 is a little funny.

  • pavon 4 days ago

    Most production use of 32-bit x86, like industrial equipment controllers, and embedded boards support i686 these days, which is getting 64-bit time.

panzi 4 days ago

The problem is not time_t. If that is used, the switch to 64 bit is trivial. The problem is when devs used int for stupid reasons; then all those instances have to be found and changed to time_t.

  • rjsw 4 days ago

    Most open source software packages are also compiled for BSD variants; those switched to 64-bit time_t a long time ago and reported any problems back upstream.

  • monkeyelite 4 days ago

    It is more difficult to evaluate what happens when sizeof(time_t) changes than to replace `int` with `time_t`, so I don't think that's the issue.

  • im3w1l 4 days ago

    Could you use some analyzer that flags every time a time_t is cast? Throw in too-small memcpy too for good measure.

    I guess a tricky thing might be casts from time_t to datatypes that are actually 64bit. E.g. for something like

      struct Callback {
        int64_t(*fn)(int64_t);
        int64_t context;
      };
    
    If a time_t is used for context and the int64_t is then downcast to int32_t that could be hard to catch. Maybe you would need some runtime type information to annotate what the int64_t actually is.
  • panzi 4 days ago

    Several people pointed out pre-built binaries linking libraries they don't ship. Yeah that is a problem, I was only thinking of open source that can be easily recompiled.

    And AFAIK glibc provides both functions; you can choose which one you want via compiler flags (-D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64). So a pre-built program that ships all its dependencies except for glibc should also work.
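
    A quick way to check what you're getting (a sketch; assumes a reasonably recent glibc, 2.34 or later, on a 32-bit target):

        /* build 32-bit:  gcc -m32 -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64 check.c
           without the two defines, sizeof(time_t) stays 4 on i386/armhf */
        #include <stdio.h>
        #include <time.h>

        int main(void) {
            printf("sizeof(time_t) = %zu\n", sizeof(time_t));
            printf("now = %lld\n", (long long)time(NULL));
            return 0;
        }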

  • qcnguy 4 days ago

    It's not very trivial. They have broken the userspace ABI for lots of libraries again. So all the package names change; it's annoying if you're distributing debs to users. They obviously have some ideological belief that nobody should do so but they're wrong.

    • Denvercoder9 4 days ago

      > They have broken the userspace ABI for lots of libraries again.

      If the old ABI used a 32-bit time_t, breaking the ABI was inevitable. Changing the package name prevents problems by signaling the incompatibility proactively, instead of resulting in hard-to-debug crashes due to structure/parameter mismatches.

      • qcnguy 4 days ago

        Inevitable... for Linux. Other platforms find better solutions. Windows doesn't have any issues like this. The Win32 API doesn't have the epoch bug, 64 bit apps don't have it, and the UNIX style C library (not used much except by ported software) makes it easy to get a 64 bit time without an ABI break.

        • Denvercoder9 4 days ago

          > Other platforms find better solutions.

          Other platforms make different trade-offs. Most of the pain is because on Debian, it's customary for applications to use system copies of almost all libraries. On Windows, each application generally ships its own copies of the libraries it uses. That prevents these incompatibility issues, at the cost of it being much harder to patch those libraries (and a little bit of disk space).

          There's nothing technical preventing you from taking the same approach as Windows on Debian: as you pointed out, the libc ABI didn't change, so if you ship your own libraries with your application, you're not impacted by this transition at all.

        • panzi 3 days ago

          Personally I only really consider glibc to be the system library of Linux (*), and that supports both variants depending on compiler flags. Both functions are compiled into glibc; I guess the 32-bit one just wraps the 64-bit one.

          However, other libraries (Qt, Gtk, ...) don't do that compatibility stuff. If you consider those to be system libraries too, then yeah, it's breaking the ABI of system libraries. Though a pre-compiled program under Linux could just bundle all* of its dependencies and either use glibc (probably a good idea), statically link musl, or even do system calls on its own (probably not a good idea). Linux has a stable system call interface!

          (*) One can certainly argue about that point. Not sure about it myself anymore when thinking about it, since there are things like libpcap, libselinux, libbpf, libmount, libudev etc. and I don't know if any of them use time_t anywhere and, if they do, whether they support the -D_FILE_OFFSET_BITS=64 and -D_TIME_BITS=64 stuff.

      • scottlamb 4 days ago

        All true, but qcnguy's point is valid. If you are distributing .deb files externally from their repo, on the affected architectures you need to have a pre-Trixie version and a Trixie-onward version.

        • Denvercoder9 4 days ago

          Shipping separate debs is usually the easiest, but not the only solution. It's totally possible to build something that's compatible with both ABIs.

          • scottlamb 4 days ago

            How?

            I suppose in theory if there's one simple library that differs in ABI, you could have code that tries to dlopen() both names and uses the appropriate ABI. But that seems totally impractical for complex ABIs, and forget about it when glibc is one of the ones involved.

            There's no ABI breakage anyway if you do static linkage (+ musl), but that's not practical for GUI stuff for example.

            I suppose you could bundle a wrapper .so for each that essentially converts one ABI to the other and include it in your rpath. But again that doesn't seem easy for the number/complexity of libraries affected.

  • pestat0m 4 days ago

    Right, the problem appears to be more an issue of the data representation for time, rather than an issue with 32-bit vs 64-bit architectures. Correct me if I'm wrong, but I think there was long int well before 32-bit chips came around (and long long before 64). Does a system scheduler really need to know the number of seconds elapsed since midnight on Jan 1st 1970? There are only 86400 seconds in a day (31536000 sec/year, 2^32 = 4294967296 - seems like enough, why not split time in 2?).

    On a side note, I tried setting up a little compute station on my TV about a year ago using an old raspi I had laying around, and the latest version of raspbian-i386 is pretty rot-gut. I seemed to remember it being more snappy when I had done a similar job a few years prior. Also, I seem to remember it doing better at recognizing peripherals a few years prior. I guess this seems to be a trend now: if you don't buy the new tech you are toast, and your old stuff is likely kipple at this point. I think the word I'm looking for is designed obsolescence.

    Perhaps a potential light at the end of the tunnel was that I discovered RISC OS, though the 3-button mouse thing sort of crashed the party and then I ran out of time. I'm also contemplating SARPi (Slackware) as another contender if I ever get back to the project. Also maybe Plan 9? It seems that kids these days think old computers aren't sexy. Maybe that's fair, but they can be good for the environment (and your wallet).

mojo-ponderer 4 days ago

Will this create significant issues and extra work to support Debian specifically right now? Not saying that we shouldn't bite the bullet, just curious how many libraries have been implicitly depending on the time type being 32-bit.

  • toast0 4 days ago

    Probably less extra work right now than ten or twenty years ago.

    For one, OpenBSD (and others?) did this a while ago. If it breaks software when Debian does it, it was probably mostly broken.

    For another, most people are using 64-bit os and 64-bit userland. These have been running 64-bit time_t forever (or at least a long time), so it's no change there. Also, someone upthread said no change for i386 in Trixie... I don't follow Debian to know when they're planning to stop i386 releases in general, but it might not be that far away?

ta1243 4 days ago

Disappointing, I was hoping for a nice consulting gig to ease into retirement for a few years about 2035, can't be doing with all this proactive stuff.

Was too young to benefit from Y2K

  • delichon 4 days ago

    I wasn't. A fellow programmer bought the doomsday scenario and went full prepper on us. To stock up his underground bunker he bought thousands of dollars worth of MREs. After 2k came and went with nary a blip, he started bringing MREs for lunch every day. I tried one and liked it. Two years later when I moved on he was still eating them.

    • 4gotunameagain 4 days ago

      TIL people got scurvy because of Y2K. Turns out it wasn't so harmless now, was it?

      • offmycloud 3 days ago

        I believe the MRE Orange Drink powder is fortified with Vitamin C, so that should help a bit.

  • rini17 4 days ago

    Plenty of embedded stuff deployed today will still be there in 15 years, even with a proactive push. Which is not yet done, only planned a few years out, mind you. Buying devkits for the most popular architectures could prove a good investment, if you are serious.

    • wongarsu 4 days ago

      Can confirm, I worked on embedded stuff over a decade ago that's still being sold and will still be running in factories all over the world in 2038. And yes, it does have (non-safety-critical) Y2K38 bugs. The project lead chose not to spend resources on fixing them since he will be retired by then.

  • nottorp 4 days ago

    Keep your Yocto skills fresh :)

    All those 32-bit ARM boards that got soldered into anything that needed some smarts won't have a Debian available.

    Say, what's the default way to store time in an ESP32 runtime? I haven't worked much with those.

    • bobmcnamara 4 days ago

      64-bit on IDF5+, 32-bit before then
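
      For a quick sanity check on whatever toolchain you're actually building with (a generic C sketch, not anything ESP-IDF-specific), a static assert catches a 32-bit time_t at compile time:

        #include <time.h>
        #include <stdio.h>

        /* Fails the build if the toolchain still gives us a 32-bit time_t. */
        _Static_assert(sizeof(time_t) >= 8, "need 64-bit time_t for post-2038 dates");

        int main(void) {
            printf("time_t is %zu bits\n", sizeof(time_t) * 8);
            return 0;
        }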

  • jandrese 4 days ago

    Well, if you want a worry for your retirement, just think of all of the medical equipment with embedded 32-bit code that will definitely not be updated in time.

Dwedit 4 days ago

If anyone is serializing time_t values as raw 32-bit integers, the file format won't match anymore once time_t goes to 64 bits. If anyone has a huge list of the programs that are affected, you've solved the 2038 problem.
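
A minimal sketch of that serialization hazard (the record layout is hypothetical): if a struct containing a raw time_t was ever written to disk, its on-disk size shifts when time_t widens, whereas a fixed-width field keeps the format stable.

  #include <stdint.h>
  #include <stdio.h>
  #include <time.h>

  struct record_fragile { time_t  created; uint32_t flags; };  /* size follows the ABI    */
  struct record_stable  { int64_t created; uint32_t flags; };  /* same on disk everywhere */

  int main(void) {
      printf("fragile record: %zu bytes, stable record: %zu bytes\n",
             sizeof(struct record_fragile), sizeof(struct record_stable));
      return 0;
  }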

godshatter 4 days ago

Don't 32-bit systems have 64-bit types? C has long long and, IIRC, uint_least64_t or something similar. Is there a reason time_t must fit in a dword?

  • wahern 4 days ago

    > Don't 32-bit systems have 64-bit types?

    The "long long" standard integer type was only standardized with C99, long after Linux established it's 32-bit ABI. IIRC long long originated with GCC, or at least GCC supported it many years before C99. And glibc had some support for it, too. But suffice it to say that time_t had already been entrenched as "long" in the kernel, glibc, and elsewhere (often times literally--using long instead of time_t for (struct timeval).tv_sec).

    This could have been fixed decades ago, but the transition required working through a lot of pain. I think OpenBSD was the first to make the 32-bit ABI switch (~2014); they broke backward binary compatibility, and it induced a lot of patching in various open source projects to fix time_t assumptions. The final pieces required for glibc and musl-libc to make the transition happened several years later (~2020-2021). In the case of glibc it was made opt-in (in a binary backward compatible manner if desired, like the old 64-bit off_t transition), and Debian is only now opting in.
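
    For the curious, a small check makes glibc's opt-in visible on a 32-bit target; as far as I know the switch is -D_TIME_BITS=64 (which also requires -D_FILE_OFFSET_BITS=64), and Debian's change effectively flips that default distribution-wide:

      #include <stdio.h>
      #include <time.h>

      int main(void) {
          /* 4 bytes on a default 32-bit build, 8 with the opt-in macros. */
          printf("sizeof(time_t) = %zu\n", sizeof(time_t));
          return 0;
      }

      /* gcc -m32 check.c                                        -> likely 4
         gcc -m32 -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64 check.c -> 8 (glibc >= 2.34) */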

  • poly2it 4 days ago

    Yes, 32-bit systems have 64-bit types. time_t as a 32-bit integer is a remnant.

nsksl 4 days ago

What solutions are there for programs that can’t be recompiled because the source code is not available? Think for example of old games.

  • zelphirkalt 4 days ago

    Probably changing the system time, or faking the system time, so that these programs do not run into issues.

    • Joel_Mckay 4 days ago

      Or party like it's epoch time.

      =3

  • CodesInChaos 4 days ago

    Probably a backwards-compatible runtime that uses 32-bit timestamps and fills in a fake time after 2038 (e.g. 1938). For example, Steam ships different runtimes, as does Flatpak.
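
    A rough sketch of how such a shim could fake the time for an old binary via LD_PRELOAD (projects like libfaketime do this far more thoroughly; the 28-year shift here is just one illustrative choice, picked because it keeps weekdays and leap years aligned):

      #define _GNU_SOURCE
      #include <dlfcn.h>
      #include <time.h>

      time_t time(time_t *out) {
          time_t (*real_time)(time_t *) = (time_t (*)(time_t *))dlsym(RTLD_NEXT, "time");
          time_t now = real_time(NULL);
          if (now >= (time_t)0x7fffffff)   /* at or past the January 2038 rollover? */
              now -= 883612800;            /* shift back 28 years (28 * 365.25 days) */
          if (out)
              *out = now;
          return now;
      }

      /* Build: gcc -shared -fPIC -o faketime.so faketime.c
         Use:   LD_PRELOAD=./faketime.so ./old_program */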

Paianni 3 days ago

Just in time to drop 32-bit x86.

misja111 4 days ago

Why would anyone want to store time in a signed integer? Or in any signed numerical type?

  • LegionMammal978 4 days ago

    So that people can represent times that occurred before 1970? You could try adjusting the epoch to the estimated age of the universe, but then you run into calendar issues (proleptic Gregorian?), have huge constants in the common case (Julian Days are not fun to work with), and still end up with issues if the estimate is ever revised upward.

  • benmmurphy 4 days ago

    With a signed type you can have an epoch and also measure times before it. In terms of the range of values you can represent, it's equivalent to just placing the epoch further in the past and using an unsigned type. So signed vs. unsigned shouldn't really matter, except that in particular languages things work better if the type is one or the other. For example, if you calculate the difference between two times, it may be better if the time type is signed so it matches the result type, which is signed as well (not that this solves the problems with overflow).

  • bhaney 4 days ago

    Some things happened before 1970

    • toast0 4 days ago

      Blasphemy! The world sprang into being on Jan 1, 1970 UTC as-is, and you can't convince me otherwise. :P

  • toast0 4 days ago

    It's useful to have signed intervals, but in most integer type systems you only get a signed result when subtracting a signed int from a signed int.

    You kind of have to pick your poison: do you want a) reasonable signed behavior for small differences but an inability to represent large differences; b) only being able to represent non-negative differences, but with the full width of the type; or c) like a, but also convincing your programming system to do a mixed signed subtraction ... like for ptrdiff_t.
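
    A tiny illustration of options a) and b), assuming plain C integer timestamps: with an unsigned type, a difference that should be negative wraps around to a huge positive value, while a signed type just gives you the negative delta.

      #include <inttypes.h>
      #include <stdint.h>
      #include <stdio.h>

      int main(void) {
          uint64_t ut0 = 1000, ut1 = 900;   /* t1 is earlier than t0 */
          int64_t  st0 = 1000, st1 = 900;

          printf("unsigned delta: %" PRIu64 "\n", ut1 - ut0);  /* wraps to 18446744073709551516 */
          printf("signed   delta: %" PRId64 "\n", st1 - st0);  /* -100, as expected */
          return 0;
      }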

  • FlyingAvatar 4 days ago

    I have wondered this as well, and my best guess is so that two times can be diffed without converting them to a signed type. With 64-bit especially, the extra bit isn't buying you anything useful.

  • layer8 4 days ago

    So that calculating “80 years ago” doesn’t result in a future time.

  • jech 4 days ago

    So you can write "delta = t1 - t0".

  • lorenzohess 4 days ago

    So that my Arch Linux box can still work when we get time travel.

kstrauser 4 days ago

Kragen, get in here and comment. This is your time to shine.

snvzz 4 days ago

>for everything

Except x86.

notepad0x90 4 days ago

I honestly think there won't be any big bugs/outages by 2038. Partly because I have a naive optimism that any important system will not only have an OS and stdlibs that support 64-bit time, but that important systems needing accurate time probably use NTP/network time, which means their test/dev/QA deployments can be hooked up to a test time server that simulates post-2038 times to see what crashes.

12+ years is a long time to prepare for this. Normally I wouldn't have much faith in test/dev systems, network time being set up properly, etc., but it's a long time. Even if none of my assumptions are true, could we really not, in a decade, at least identify where 32-bit time is being used and plan for contingencies? That seems unlikely.

But hey, let me know when Python starts supporting nanosecond-precision time :'(

https://stackoverflow.com/a/10612166

Although, it's been a while since I checked to see whether they support it. In Windows-land at least, everything system-side uses 64-bit/nanosecond precision, as far as I've had to deal with it.

  • 0cf8612b2e1e 4 days ago

    Software today has to be able to model future times; mortgages are 30 years long. This is already a problem that has been impacting software.

  • mrweasel 4 days ago

    My concern is that this is happening 12 years too late. A bunch of embedded stuff will not be replaced in 12 years. We have a lot more tiny devices running all sorts of systems, many more than we did 25 years ago. These are frequently in hard-to-reach places, their manufacturers have gone out of business, no updates will be available, and no one is going to 2038-validate those devices and their output.

    Many of the devices going into production now won't have 64-bit time; they'll still run a version of Linux that was certified, or randomly worked, in 2015. I hope you're right, but in any case it will be worse than Y2K.

  • zbendefy 4 days ago

    It's 12 years, not 22.

    An embedded device bought today may easily still be in use 12 years from now.