The UNIX timestamp is used by many computers to record the time of events. It counts the seconds that have elapsed since January 1st, 1970 (UTC).
But numbers in computers can't be arbitrarily big, because you have to set aside a fixed amount of memory for them. UNIX timestamps were defined as 32-bit signed integers, which lets you store numbers up to 2,147,483,647.
In 2038 more seconds than that will have elapsed, and it could lead to problems not unlike the Y2K bug.
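To put a date on that overflow, here's a quick Python sketch (the names and approach are mine, just illustrating the arithmetic):

```python
# When does a signed 32-bit Unix timestamp run out?
from datetime import datetime, timezone, timedelta

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)
MAX_INT32 = 2**31 - 1  # 2,147,483,647, the largest signed 32-bit value

print(EPOCH + timedelta(seconds=MAX_INT32))
# 2038-01-19 03:14:07+00:00 -- one second later, a signed 32-bit time_t wraps
```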
@fribbledom @izaya ah shit, well, you better figure something out! We're all counting on you!... Literally...
@fribbledom @funnypanja Isn't everything u64 now?
(BigInteger for the win?)
@fribbledom @funnypanja 🤦
@gudenau There's nothing wrong with signed values, as long as you have the bits of precision to support it. 64 bits give you enough precision to measure as far back as the origin of the universe, and a future wrap-around somewhere close to 300 billion years into the future.
@vertigo I guess stuff did happen before 0.
@fribbledom @gudenau @funnypanja Even signed 64-bit integers are big enough for now. 63 bits of counting seconds give us ~3x10^11 years, which is close to 21 times the current age of the universe.
So even counting milliseconds, we've got plenty of numbers there.
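A rough sketch of that arithmetic (Python, using an approximate year length) to check the figures in the two posts above:

```python
# Back-of-the-envelope range of a signed 64-bit counter, as discussed above.
MAX_INT63 = 2**63 - 1                  # largest positive value of a signed 64-bit int
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~3.16e7, close enough for an estimate

print(f"{MAX_INT63 / SECONDS_PER_YEAR:.2e} years counting seconds")              # ~2.9e+11
print(f"{MAX_INT63 / 1000 / SECONDS_PER_YEAR:.2e} years counting milliseconds")  # ~2.9e+08
```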
@gudenau @fribbledom @funnypanja The 9P protocol still uses a 32-bit timestamp, but it is unsigned. So, instead of a 2038 problem, it has a 2106 problem.
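For comparison, the same kind of sketch for an unsigned 32-bit counter (assuming it counts seconds from the same 1970 epoch, as the post suggests):

```python
# Last second representable by an unsigned 32-bit Unix-style timestamp.
from datetime import datetime, timezone, timedelta

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)
MAX_UINT32 = 2**32 - 1  # 4,294,967,295

print(EPOCH + timedelta(seconds=MAX_UINT32))
# 2106-02-07 06:28:15+00:00 -- so the unsigned wrap-around lands in 2106
```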
@fribbledom Oh wow, did not know that.
It was simpler than trying to keep in sync with local times.