About a year ago I was looking at Crash Bandicoot timer systems and I found that Crash 3 has a constantly incrementing int32. It only resets if you die.
Left for 2.26 years, it will overflow.
When it does finally overflow, we get "minus" time and the game breaks in funny ways. I did a video about it: https://youtu.be/f7ZzoyVLu58
There's a weapon in Final Fantasy 9 which can only be obtained by reaching a lategame area in less than 12 hours of play time, or 10 hours on the PAL version due to an oversight. Alternatively you can just leave the game running for two years until the timer wraps around. Slow and steady wins the race.
https://finalfantasy.fandom.com/wiki/Excalibur_II_(Final_Fan...
Am reminded of this quote from Ferdinand Porsche:
"The perfect racing car crosses the finish line first and subsequently falls into its component parts."
Games fit this philosophy, unlike many other pieces of software that are expected to be long-lived, heavily maintained, and continually evolving.
The Porsche quote reflects a wider design philosophy that says: "Ideally, all components of a system last as long as the design life of the entire system, and there should be no component that lives significantly longer. If there is such a component, it has been overengineered and thus the system will be more expensive to the end consumer than it needs to be." It kinda skips over maintenance, but overall most people find it unobjectionable when stated like this.
But plenty of people will complain when they try to drive their car beyond its design specs and more or less everything starts failing at once.
Porsche was talking about racing, where the primary focus is reaching the finish line faster than anyone else, and over-engineering can easily get in the way of that goal. Back in the real world, no race team would agree that their cars should disintegrate after one race.
> Back in the real world, no race team would agree that their cars should disintegrate after one race.
Weren't F1 teams basically doing this by replacing their engines and transmissions until the rules introduced penalties for component swaps in 2014?
If you go back further than that, teams used to destroy entire engines for a single qualifying.
The BMW turbocharged M12/M13 that was used in the mid-eighties put out about 1,400 horsepower at 60 PSI of boost pressure, but it may have been even more than that because there was no dyno at the time capable of testing it.
They would literally weld the wastegate shut for qualifying, and it would last for about 2-3 laps: outlap, possibly warmup lap, qualifying time lap, inlap.
After which the engine was basically unusable, and so they'd put in a new one for the race.
Yup, cigarette money enabled all kinds of shenanigans. Engine swaps for qualifying, new engines every race, spare third cars, and so on. 2004 was the first year the rules specified that engines must last the entire race weekend, and it introduced penalties for swaps.
F1 income is way, way higher than in the 80s.
Even today F1 teams are allowed 4 engine replacements before taking a grid place penalty, and those penalties still show up regularly enough. So nobody is making "reliable" F1 engines.
You can really see this on display with the AMG ONE. It's a "production" car using an F1 engine that requires a rebuild every 31,000 miles.
Don't highly optimized drag racers do this? I mean, a clutch that in normal operation gets heated until it glows can't be very durable.
Anyone can build a bridge, but it takes an engineer to barely build a bridge.
Alan Weisman's lovely book The World Without Us speculates a bit about this, basically saying that more recently built structures would be the first to collapse because they've all been engineered so close to the line. Meanwhile, stuff that has already been standing for 100+ years, like the Brooklyn Bridge, will probably still be there in another 100 years even without any maintenance, just on account of how overbuilt it all had to be in an era before finite element analysis.
There was an aluminum extrusion company that falsified test records for years. They got away with it because what's a few percent when your customer's safety factor is 2? Once they got into weight-sensitive aerospace applications, where sometimes the factor is 1.2, rockets started blowing up on the launch pad.
https://www.justice.gov/archives/opa/pr/aluminum-extrusion-m...
Should have resulted in jail time. A monetary fine is no deterrent.
Consumer protection laws prevent businesses from following this to its extreme. For many businesses the ideal would be to just sell stuff that immediately breaks down as soon as it's sold. It has fulfilled its purpose from their point of view.
I run sous vide cookers 24/7, and they uniformly break within 90 days or less. But the makers don't like to admit to the limited duty cycle, so they don't, and keep sending me warranty replacements instead. I keep buying different brands looking for one with a longer life. I'll bet most people do that when their gadgets die, and purposely making products that die as soon as sold isn't often a successful business model.
That’s not a small cycle count for a normal household. 90 × 24 = 2,160 total hours.
I sous vide now and then, about twice a week for 6 hours each, so around 12 hours a week. That works out to roughly 15 years of usable machine time for the average person.
Not bad at all.
Photography is the same way. Most SLR / DSLR / mirrorless cameras have a mechanical shutter which is expected to last around 200k-1m activations. I've had a camera for a bit over a year. I've used it quite heavily, and my shutter count is at about 13k photos. At this rate, the shutter will probably last for 20+ years - which seems fine. If I'm still using the camera by then, spending a few hundred dollars to replace the shutter mechanism sounds totally reasonable.
2160/12 is 180 weeks, or roughly 3.5 years, not 15 years
Assuming linearity, which I doubt is the case.
You think a measly 360 uses at your 6 hours typical operation is even remotely acceptable for a glorified heating element?
And yes, 15 years is bad. I don't want to replace my entire household every 15 years FFS.
Are there not industrial ones meant to last longer? Maybe you can buy a used but good condition one of those.
There are, and if you really have a workload that needs to cook stuff 24/7 (what in god's name is OP cooking, btw?) then you should definitely get one of those. Maybe not even secondhand but just a new one. The cheap consumer-grade ones are meant for people who use them once or twice a year.
This is a fine example of what I meant about people complaining when they use products beyond their design parameters.
I got one that seems to be kind of in the middle, it's better built than most of the consumer models but not quite as "industrial" feeling as some of the commercial models. I use it a few times a week for a few hours each.
I'm on a mostly carnivore, mostly ruminant meat diet and for costs tend to do a lot of ground beef... I sous vide a bunch of burgers in 1/2lb ring molds, refrigerate and sear off when hungry. This lets me have safer burgers that aren't overcooked. I do 133F for 2.5+ hours.
I also do steaks about once or twice a week. I have to say it's probably the best kitchen investment I could have made in terms of impact on the output quality.
It's easy to end up running a bunch of sous vide cookers 24/7 if you have a small restaurant or food delivery business.
In which case one shouldn't be using consumer-grade kitchen equipment.
Call it vibe cooking.
Definitely -- get something meant for a lab. I worked in one that had a 150F water bath running day and night.
A friend of mine gets new headphones/headsets every six to eighteen months, and hasn’t bought a pair entirely out of pocket in years. For him it’s all down to buying the Microcenter protection plan every time they’re replaced. They fail, he takes them back, he gets store credit for the purchase price, and he buys a new set and a new plan. He doesn’t even care about the manufacturer’s warranty anymore.
Personally, most of my headphones I look for metal mechanical connections instead of plastic and I buy refurbished when I can. I think I pay about as much as he does or less, but we haven’t really hashed out the numbers together. I’m typing this while wearing a HyperX gaming headset I bought refurbished that’s old enough that I’ve replaced the earpads while everything else continues to work.
Computers and computer parts often have, in my experience, a better reliability record competently refurbished than when they first leave the factory too. I wonder if sous vide cookers would.
Well from an evil business perspective their options are either
- the product doesn't break and you don't buy a replacement from them because you still have a working product
- the product breaks and there is a greater than 0% chance that you will buy a replacement product from them
Of course in practice it's more complicated but I wouldn't be so quick to declare that the math doesn't work out.
What do you sous vide 24*7? It sounds like it would be party grounds for bacteria. Also curious if the bags and other components break as well.
Beef, lamb, sometimes pork. I have a daily meal of a cheap, tough cut of meat cooked for 48 hours at 150F.
Sous vide is generally not a bacterial growth risk above 140F. At 150F throughout, you get decent pasteurization in under two minutes. Two days of that is such extreme overkill that I'm concerned about the nutritional effect of overcooking.
The Food Saver style vacuum sealers fail fast for me, so I bought a $400 chamber sealer, and I'm on year 5 with it.
I think I love you? This is great. Do you have them running in arrays of 3? What's your favourite cut? What's the best cost:deliciousness cut? What bags do you use to minimize plastic leaching?
It's just me, so I only need one running at a time. Every day I take one serving out and put another one in. I clean the tank about once per week, or if something breaks. My favorite is short ribs, my daily drivers are chuck roast or shank. The prices have skyrocketed in the last few years. I buy in bulk on sale and portion it into bags with a chamber style vacuum sealer. It goes straight from the freezer into the tank.
Do you take pride in knowing that you eat cooler than anyone else, because you should.
Short rib is shocking where I am. Even chuck is pushing past $15 a pound.
What are you doing for sides/sauce? Generally when I think braise/sous-vide I think some rich, flavourful sauce, but that seems impractical for daily consumption.
Chuck on sale is now $8 a pound, more than double since Covid started. I am eating less of it and more ground beef, pork and eggs.
I crisp it up in an air fryer before serving. Here's the full ingredient list: meat, butter, salt. After five years I still look forward to every repeat.
I just replaced an air fryer that lasted two years of daily use, a personal record. I was ready to replace it anyway, because they accumulate grease where you can't clean, and the smell gets interesting.
When the design spec seems to be a 3 year long lease I can see why people get bothered.
So the invisible 12h timer runs during cutscenes. During Excalibur 2 runs, I used to open and close the PS1 disc tray to skip (normally unskippable) cutscenes. Never knew why that worked.
(I also never managed to get it)
I’m going to wager that the cutscenes are all XA audio/video DMA’d from the disc. Opening the disc kills the DMA and the error recovery is just to end the cutscene and continue. The program is in RAM, so a little interruption on reading doesn’t hurt unless you need to time it to avoid an error reading the file for the next section of gameplay.
This is significantly better handling than in the previous game (Final Fantasy VIII). My disc 1 (it had four discs) got scratched over time (I was a child, after all), and the failure mode was just to crash, so the game was unplayable. The game had a lot of cutscenes.
That’s a solid guess. And if that’s the case, that’s actually pretty good error handling!
I recall that handling disc eject was an explicit part of the Tech Requirements Doc (things the console manufacturer requires you to comply with). They'd typically check while playing, while loading and while streaming.
> Never knew why that worked.
I'm guessing the game probably streams FMV cutscenes off the disc as they play, and the fallback behaviour if it can't find them is to skip rather than crash.
Oh yeah. The sword you pick up in Memoria. The problem there is that the PAL version runs slower; the way PSX games "translated" between the two video systems was just to have longer VSync pauses for PAL. So the game is actually slower, not interpolated
Longer vsync pauses but larger frame time deltas so it’s basically the same speed of play. The only thing that was even noticeable was the UI lag.
Erm, no. Like lots of games of the era, quite a lot of stuff is tied to the frame rate, so the 50Hz-region game just runs slower than the 60Hz one, as next to nobody bothers to adjust for it. The clock for the hidden weapon does run at the same rate for both, unfortunately, hence it being harder to get in 50Hz regions.
Incorrect. I’m looking at the source code. It’s not perfect but it’s not just “slowed down to 50hz” like people claim.
When you say looking at the source code, what do you mean here?
AFAIK the source for FF9 PSX (and all the PSX FF games) has been lost, as Square just used short-term archives.
Also, FF9 does not run at a constant framerate. Like all the PSX FF games it runs at various rates, sometimes multiple at a time (example: model animations are 15fps vs 30 for the UI)
In terms of timers, the BIOS does grant you access to root timers, but these are largely modulated by a hardware oscillator.
(Incidentally, the hardware timing component is the reason a chipped PAL console cannot produce good NTSC video. Only a Yaroze can support full multiregion play)
It’s definitely not lost…
What code are you looking at?
FFIX for PSX would have been written in C (or possibly C++) with PSY-Q. It will not be one program - those games were composed of multiple overlays that are banked in / out over the PlayStation's limited memory.
From what I know the PC release was a port to a new framework, which supports the same script engines, but otherwise is fresh code. This is how it can support mobile, widescreen, Steam achievements etc.
FF VII-IX were reimplemented under a custom engine.
Except I'm looking at the original source (the crappy C/C++ Square engine), not the remake's C# Unity code.
There are a number of timers and things used. But the claim that it runs slower is absolutely false. It’s just perceived that way because it’s “drawn” slower.
Wouldn't a slower tick make it easier, as you get more wall time to do the same challenge?
No? Wall time (that the challenge runs on) is unchanged, game time (Vsync) is running at 83% of full speed (50Hz vs 60Hz), so if something tied to frame rate (animation, walking speed etc.) takes 1 second to do on NTSC, it'll take 1.2 seconds to do on PAL etc.
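As a toy illustration of that point (hypothetical numbers, not the actual FFIX code): anything that advances a fixed amount per vsync simply takes more wall time at 50Hz, while a wall-clock timer is unaffected.

    #include <stdio.h>

    int main(void) {
        const int anim_frames = 60;   /* an animation that lasts 60 vsyncs */

        printf("NTSC (60Hz): %.2f s\n", anim_frames / 60.0);  /* 1.00 s */
        printf("PAL  (50Hz): %.2f s\n", anim_frames / 50.0);  /* 1.20 s */
        /* The 12-hour limit is wall time, so PAL players get fewer
         * frame-tied actions done per real hour. */
        return 0;
    }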
We should rally together to force game companies to use 32 bit timers rather than 64bit ones so we can keep finding these fun little glitches. The time to protect overflows is now! ;)
Lord have mercy fandom really has become unbearable with the ads and pop ups.
Install an ad blocker.
I opened this on an iPhone which has fewer adblock options. Desktop is better locked down.
Regardless I can still complain about how intrusive the ads are.
There are many ad block options on iPhone. I currently use Wipr 2, but in the past I've used both 1Blocker and AdBlock Pro with success.
Don't accept devices that limit your ad blocker options.
Does this discussion strike you as one where I’m deliberating whether or not to chuck my smartphone and buy into a new ecosystem to avoid ads on fandom?
These types of comments are always very unhelpful.
No, that's just a reminder that you had a choice, and chose empty talk about “ecosystems” over ability to control what you can see on “your” screen. You've stepped on a rake once, you got some experience, why repeat it over and over again?
I just opened this on my iPhone with 1Blocker installed. I saw no ads. It's been around since iOS 8.
Never heard of it, appreciate the recc!
Edit: ah only works on safari
You are on iOS. There is only safari. Any other "web browser" is just a skin over safari
Yes I know everything is wrapped around safari. But I like having Firefox syncing across devices.
Edit: ah forgot my vpn was off, usually clears all that up for me. Much better now
So that's why it's called Excalibur 2!
Is it common to default to a signed integer for tracking a timer? I realize being unsigned it would still overflow but at least you'd get twice the time, no?
If you get to right before where you need to be (taking as long as you want), then wait until the overflow, you still have 12h to do the last tiny part if it's unsigned.
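For a rough sense of the numbers, a minimal C sketch (assuming a 30 Hz tick, which is what lines up with the ~2.26-year figure mentioned above; the exact rate varies per game):

    #include <stdio.h>

    int main(void) {
        const double ticks_per_second = 30.0;               /* assumed tick rate */
        const double seconds_per_year = 365.25 * 24 * 3600;

        /* Signed 32-bit: formally undefined behaviour in C past INT_MAX,
         * but on typical hardware it simply wraps to a large negative value. */
        double signed_years   = 2147483648.0 / ticks_per_second / seconds_per_year;
        /* Unsigned 32-bit: well-defined wrap to zero after 2^32 ticks. */
        double unsigned_years = 4294967296.0 / ticks_per_second / seconds_per_year;

        printf("int32  goes negative after ~%.2f years\n", signed_years);    /* ~2.27 */
        printf("uint32 wraps to zero after  ~%.2f years\n", unsigned_years); /* ~4.54 */
        return 0;
    }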
You really managed to make the whole video without making a single "crash" pun? (Those freezes come close enough that you could call them crashes...)
I think many games were that way. SotN definitely has a global timer. On a native 32-bit system it makes sense, especially when the life of a game was a few months to a few years on the retail shelf. No player is going to leave their system running for 2.27 years, so what's the point of even testing it?
Who knew at the time they were creating games that would be disassembled, deconstructed, reverse engineered. Do any of us think about that regarding any program we write?
Can be more than timers too. There's a funny one in Paper Mario where a block technically can be hit so many times it'll reset and award items again. Hit enough times it'll eventually crash. Of course it'd take around 30 years for the first rollover and 400 or so for the crash. https://n64squid.com/paper-mario-reward-block-glitch/
For some games the timer is stored in save files, so it doesn't even have to be continuous play time. 2 years is still longer than anyone is expected to spend on a game.
Let's say you're pedantic with code. I've been trying to be lately - clippy has an overflow lint for Rust I try to use.
Error: game running for two years, rebooting so you can't cheese a timer.
Does this make the bug any better handled? Bugs like this annoy me because they aren't easily answered.
There are always limits to what a program can do. The only fix is to choose large enough integers (and appropriate units) so that you can represent the longest times / largest sizes / etc. that anyone could reasonably encounter. What sizes make sense also depends on how they impact performance, and for a game from the 32-bit era, a crash (controlled abort or not) after over two years is probably a better choice than slowing everything down by using a 64-bit integer.
They're still made like this. Just now I made a frame counter that just increments every frame on an int64. It would eventually wrap around, but I doubt anyone will still be around to see it happen :|
Isn't this common in the computer game scene? Shouldn't you asume your game will be disassembled, deconstructed, reverse engineered?
Although for old games released before internet was widespread in the general population, it might have not been this obvious.
As long as it doesn't lead to online cheats, having such code is fine. If someone wants to reverse the game, find an obscure, almost untriggerable bug, and then trigger it or play with it, go ahead. A 2.6-year game session is crazy if it's not a server, and if it is a server, that's still really crazy even for some open-world, open-ended game... it's a long time to keep a server up without restarts or anything (updates?).
Looking at the various comments, there might even be some kind of weird appeal to leaving such things in your game :D for people to find and chuckle about. It doesn't really disrupt the game normally, does it?
The true Time Twister unlocked
Great video, just subscribed
Literally unplayable, someone should fix that.
Doom is actually such a good game, I always go back to it every few years. The 2016 reboot is also pretty fun, but the later two in the series didn’t do it for me.
Same. Something about the metroidvania design with the home hub of the later ones didn’t give the same feeling. It should be run, kill, find secrets, end, next level.
I just finished RoboCop: Rogue City and it was exactly this: a linear, level-by-level shooter that felt like a pure RoboCop power-fantasy movie. I played New Game Plus, it was so much fun, and I never do that.
It's like the game industry got a fake memo saying no one wanted linear story-based games anymore. I ended up buying two more Teyon games because I was so happy with their formula, and they are playable in a dozen or so hours. Tight, compact, linear, fun story and gameplay... No MTX or always-online BS, and they don't waste my time with busy work.
This is exactly how I want my FPS games to be. Just linear, run & gun. TBH, I can even do without weapon upgrades or any "RPG" style elements.
It's even worse in multiplayer games like COD and BF. As soon as I need to figure out combinations of 5x attachments to guns I lose all my interest in playing the game. That's why I'm still on CS I guess lol.
The latest DOOM: Dark Ages ditched the home hub. I think it's a really great DOOM game.
I was quite excited for it, despite not enjoying Eternal as much. But after about 2 hours of playing it, I lost interest. I'm happy you're enjoying it; sadly it didn't click for me.
Especially the 'mech scale' stuff was just boring. I don't remember what they call it in-universe, but essentially the parts of the game where you're playing from a giant robot and just walking over tanks and fighting supersized demons.
This caters for people who prefer the classic Doom style of gameplay in FPS games:
https://www.reddit.com/r/boomershooters/
Ahh yes, I'm quite happy that a few years ago this has become a trend!
Same. And love those brutality mods.
Fun fact: Doom is now a Microsoft property, along with Quake, StarCraft, WarCraft, Overwatch, all of the adventure games from Infocom and Sierra, and of course Halo. Microsoft pretty much owns most of PC gaming. Which is what they've wanted since 1996 or so.
They own the past of PC gaming, as well as Call of Duty but that is more popular on consoles than PC nowadays. Those listed are small time compared to Counter-Strike 2, Dota 2, League of Legends, Valorant, Roblox, Apex Legends, Marvel Rivals and a number of hard-hitting games every year such as Witcher 3, Elden Ring, Baldur's Gate 3 etc.
So in other words they own the part of PC gaming that's actually good.
They own Minecraft as well.
> Microsoft pretty much owns most of PC gaming.
So valve next?
They missed that window when Sierra was still the publisher for Half-Life. Besides, Valve is not a publicly traded company and Gabe Newell as former manager at Microsoft has no interest in getting back together. Valve is betting everything on Linux right now to be more independent from Microsoft.
All the more reason for Microsoft to make a play now while Valve still at least somewhat depends on them.
And Gabe won't be around forever and the guy is already over sixty. Statistically he's got about two decades left to live and not all of that will be at a level where he can lead Valve.
"Valve is betting everything on Linux right now"
Not everything, but they do invest in it.
> Valve is betting everything on Linux right now...
They've been working on Linux support since at least around the time that Microsoft introduced the Windows Store... so for the last twelve years or so.
And, man, a couple of months ago I figured out how to run Steam as a separate user on my Xorg system. Not-at-all-coincidentally, I haven't booted into Windows in a couple of months. Not every game runs [0], but nearly every game in my library does.
I'm really gladdened by the effort put in to making this work.
[0] Aside from the obvious ones with worryingly-intrusive kernel-level anticheat, sometimes there are weird failures like Highfleet just detonating on startup.
I used to game on Linux back in the late 2000s through Wine. And I always found the mouse support to be jarring, even if I could get support to a decent level, for some reason the mouse input was never quite as fluid as it should have been.
And now I'm reluctant to move back to Linux for gaming, even though they've clearly come so far. I guess I should just go ahead and give it another shot.
It has come lightyears.
ProtonDB has a feature where you can give it access to your Steam account for reading and it'll give you a full report based on your personal library: https://www.protondb.com/profile
And I find, if anything, it tends toward the conservative. I've encountered a few things where it was overoptimistic, but it's outweighed by the stuff that was supported even better than ProtonDB said.
In the late 2000s, I played a few things, but I went in with the assumption it either wouldn't work, or wouldn't work without tweaking. Now I go in with the assumption that it will work unless otherwise indicated. Except multiplayer shooters and VR.
As long as Gabe is alive, no way.
We must find a way to extend his life indefinitely.
*in control of Valve
Old age can make him give that up before death.
2016 remains one of the greatest single-player FPS games I've played (Titanfall 2 is the other).
I'm under the impression that since Doom Eternal (the first after Doom 2016), the gameplay has considerably shifted to an "interconnected arenas" style, and with more sophisticated combat mechanics. Many games have started adopting this design, for example, Shadow Warrior 3.
I also dislike this trend. As a sibling comment noted, boomer shooters are generally closer to the old-school Doom gameplay, although some are adopting the newer design too.
The enemy cap all but forces the arena style gameplay. Doom 2016 tried to hide it more, but it still felt very stifling.
I had read an article about how DOOM's engine works and noticed how a variable for tracking the demo kept being incremented even after the next demo started. This variable was compared with a second one storing its previous value.
Does that hardware trap overflows or something?
Doesn't sound like something that would crash. I wonder what the actual crash was.
Signed overflow is undefined behavior in C, so pretty much anything could happen. Though this crash seems to be deterministic between platforms and compilers, so probably not about that. TFA says the variable is being compared to its previous value, and that comparison presumably assumes new < old cannot happen. And when it does, it could easily lead to e.g. stack corruption. C, after all, happily goes to UB land if, for example, some execution path doesn't return a value in a function that's supposed to return a value.
Just because the language standard allows for anything to happen doesn't mean that actually anything can happen with real compilers. It's still a good question to think about how it could actually lead to a crash.
That doesn't make sense. If new < old can't happen, there is no need to make a comparison. Stack corruption? Nah, it's a counter, not an index or pointer, or it would fail sooner. But then what is the failure? IDK.
Assuming new > old doesn't mean you actually make the comparison, but rather that the code is written with the belief that new > old. This code behaves correctly under that assumption, but might be doing something very bad that leads to a crash if new < old.
An actual analysis would be needed to understand the actual cause of the crash.
Um, there are the cases new == old and new > old. And all the more specific cases new == old + n. I haven’t seen the code so this is just speculation, but there are plenty of ways how an unexpected, "can never happen" comparison result causes immediate UB because there’s no execution path to handle it, causing garbage to be returned from a function (and if that garbage was supposed to be a pointer, well…) or even execution never hitting a `ret` and just proceeding to execute whatever is next in memory.
Another super easy way to enter UB land by assuming an integer is nonnegative is array indexing.
    int foo[5] = { … }
    foo[i % 5] = bar;
Everything is fine as long as i isn't negative. But if it is… (note that negative % positive == negative in C).
Dividing by a difference that is suddenly zero is another possibility.
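To make the "garbage value" scenario from a few comments up concrete, a contrived C sketch (the function name and the missing return are hypothetical, not the actual Doom code):

    /* A tic-delta helper written under the assumption that the counter
     * only ever moves forward. */
    static int old_tic;

    int elapsed_tics(int new_tic) {
        if (new_tic >= old_tic) {
            int delta = new_tic - old_tic;
            old_tic = new_tic;
            return delta;
        }
        /* "Can never happen" -- there is no return here, so once the
         * counter wraps negative the caller reads garbage (undefined
         * behaviour), and that garbage may end up used as a size,
         * an index, or a loop bound. */
    }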
The error states that the window can't be created. It might be a problem with the parameters to the window creation function (which should not depend on game state), or maybe the system is out of memory. Resources allocated in memory are never cleaned up because the cleanup time overflows?
Doom4CE (this port) was based on WinDoom, which only creates the program window once at startup, then switches the graphical mode, and proceeds to draw on screen independently, processing the keyboard and mouse input messages. I'm not sure, but maybe Windows CE memory management forced the programmer to drop everything and start from scratch at the load of each level? Then why do we see the old window?
There are various 32-bit integer counters in the Doom code. I find it quite strange that the author neither names the specific one, nor says what it does, nor tries to debug what happens by simply initialising it with some big value.
Moreover, 2^32 divided by 60 frames per second, then by 60 seconds, 60 minutes, 24 hours, 30 days, and 12 months gives us a little less than 2.5 years. However, Doom gameplay tick (or “tic”), on which everything else is based, famously happens only 35 times a second, and is detached from frame rendering rate on both systems that are too slow (many computers at the time of release), or too fast (most systems that appeared afterwards). 2^32 divided by 35, 60 seconds, etc. gives us about 4 years until overflow.
Would be hilarious if it really is such an easy mistake.
Just be glad you knew what the bug was before you started. After 2.5 years... "Shit, I forgot to enable debug logging"
Notably, DOOM crashed before Windows CE.
Yes, great achievement!
Once upon a time, Windows NT 4 had a similar bug. Their counter was high precision, though, and was for uptime of the system. Back before Service Pack 3 (or was it SP2?) we had a scheduled task reboot the system on the first of the month. Otherwise it would crash after about 42 days of uptime, because apparently nobody at Microsoft tested their own server OS to run for that long.
Since we've hugged the site to death, have an archive.org link: https://web.archive.org/web/20250916234009/https://lenowo.or...
Sadly it appears that archive.org didn't capture all of the site formatting, but at least the text is there.
I love the post, but your blurry text is hurting my eyes. Looks like it's intentionally blurry but I can't figure out why. This can't be a holdover from older systems, they had razor-sharp text rendering on CRTs.
Looks crisp on my setup, but I block fonts and scripts. Reader mode is your friend :-)
2038 is going to be a fun year.
You have 13 years to upgrade to 64-bit ints or switch to a long long for time_t. Lots of embedded stuff or unsupported closed-source stuff is going to need special attention or to be replaced.
I know the OpenFirmware in my old SunServer 600MP had the issue. Unfortunately I don’t have to worry about that.
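For anyone who does have to worry about it, a quick sanity-check sketch for a given toolchain (just a sketch; the real work is in everything that serialises or truncates timestamps):

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        /* A signed 32-bit time_t overflows on 2038-01-19 03:14:07 UTC. */
        printf("sizeof(time_t) = %zu bytes\n", sizeof(time_t));
        if (sizeof(time_t) < 8)
            printf("32-bit time_t: affected by the 2038 rollover\n");
        else
            printf("64-bit time_t: good for roughly 292 billion years\n");
        return 0;
    }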
Most 32-bit games won't be updated, we'll have to resort to faking the time to play many of them.
That seems much closer than it did in y2k.
Everybody is sleeping on 2036 for NTP. That's when the fun begins.
Assuming correct implementation of the NTP spec and adherence to the "eras" functions, NTP should be resistant to this failure in 2036.
The problem is that so many microcontrollers and non-interfaceable or cheaply designed computers/devices/machines might not follow the standards, and will therefore be susceptible, although your iPhone, laptop and fridge should all be fine.
Fixing that is my retirement plan.
This is a level of testing that exceeds what the testers I know commit to. I myself was annoyed the five or so times yesterday we had to sit and wait to check the error handling after a 30 second timeout in the system I work on.
In games I've worked on, I use time to pan textures for animated FX.
After a few hours, precision errors accumulate and the texture becomes stretched and noisy, but since explosions are generally short-lived it's never a problem.
Yet this keeps bothering me…
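One common mitigation, as a sketch (not necessarily what the parent does): wrap the accumulated time by the pan period before handing it to the shader, so the float value stays small.

    #include <math.h>

    /* Returns a UV pan offset in [0, period) instead of letting
     * elapsed * speed grow without bound and lose float precision. */
    float pan_offset(float elapsed_seconds, float speed, float period) {
        return fmodf(elapsed_seconds * speed, period);
    }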
I haven't opened my DOOM software box, it's still in the shrinkwrap. I guess I can take it back and ask for a refund now?
Props again to the id team. No doubt something like that engineered by most folks today would have died long before the 2 year mark due to memory fragmentation if not outright leaks.
Was this specific to the PDA port or the core doom code?
@ID_AA_Carmack Are you going to write a patch to fix this?
I am going to need to see this replicated before I can believe it.
Quick! John Carmack needs to be brought into this immediately.
The easy way to e-Nostradamus predictions:
"See this crash?
I predicted it years ago.
Don't ask me how, I couldn't tell you."
p.s. I had an old iPaq that I wouldn't have trusted to run for longer than a day and stay stable, kudos for that at the very minimum.
I had an iPaq for a while and I don't remember seeing OS/hardware crashes.
Love the look of that board :-)
This headline gave me a heart attack... I misread the site's name as Lenovo, and as I'm responsible for a whole lot of their servers running for years in a critical role... heart attack.
Maybe I need my morning coffee. :)
I mean I wouldn't mind getting a subdomain there but I do like lenowo more :3
Seems to be a PocketPC port of Doom, with no source given or even a snippet of the relevant code/variable name/etc. shown at all.
Yes. I think it seems like it was the OS that overflowed, and not Doom in this case.
It's also running on very old hardware, potentially with some electrolytic capacitors that have dried up. And, there's always the possibility that it's a gamma ray [1]!
[1] https://www.bbc.com/future/article/20221011-how-space-weathe...
They explained it was in the game code though?
To me, that error message was caused by some panic, and then the OS began gracefully shutting down the application, in this case DooM - which would not have been done by the program itself. Therefore I conclude it was the OS.
I am not an OS developer, so I take my own conclusion with a grain of salt.
CNR. Please attach video.
Literally unplayable
It's good it didn't take a billion years to overflow. That would have been quite a long wait.
Literally unplayable.
Has this ever come up in a TAS of custom levels?
glitchless?
Not a comment on the post, but I sure wish Jira would load even half as quickly as this site.
It takes serious hardware investment [0] to pull that off.
[0] https://lenowo.org/viewtopic.php?t=28
Meta-Meta-Meta:
Update:
After the recent hacker news "invasion", I have now determined that the page can handle up to 1536 users before running out of RAM, meaning that the IP camera surprisingly is fully sufficient for its purpose. In other words, I will not be moving the forum in the near future as 32 MB of RAM seem to be enough to run it
Source: https://lenowo.org/viewtopic.php?t=28
> Host it on the Fritzbox 7950 instead?
It's a router.. oh my god that made me laugh
Perhaps it's hosted on a disposable vape?
Commenting on my Epic from an LG Fridge.
It's not loading for me at all.
We recently moved to Linear and couldn’t be happier, can recommend!
Is this a joke because the site isn't loading at all?
At the time of writing the comment it was practically instantaneous for me and the comment was genuine. Now it seems to be having trouble and I'm choosing to retroactively make the comment a joke about Jira ;)
Came back to check this since the tab never loaded. I'm guessing traffic caused some issues?
You folks overflowed the 32 MB of RAM that my forum is running on and caused it to restart a few times due to the high number of simultaneous connections. It has recovered now, though.
I’m guessing HN hug of death. Probably smarter than just auto scaling to handle any surge traffic and then get swamped by crawlers & higher bills.
It just supports 1536 concurrent users [0].
Which is fine unless you get to HN frontpage.
[0] https://lenowo.org/viewtopic.php?t=28
"I hope someone got fired for that blunder." /s