MomsAVoxell 3 hours ago

I had the privilege to work as a junior operator in the '80s and got exposed to some strange systems .. Tandem and Wang and so on .. and I always wondered if those weird Wang Imaging System machines were out there, in an emulator somewhere, to play with, as it seemed like a very functional system for archive digitization.

As a retro-computing enthusiast/zealot, I often find it quite rewarding to revisit the ‘high concept execution environments’ of different computing eras. I have a nice, moderately sized retro-computing collection, 40 machines or so, and I recently got my SGI systems re-installed and set up for playing. Revisiting Irix after decades away from it is a real blast.

  • Keyframe 38 minutes ago

    As a fellow dinosaur and a hobbyist, I concur. Especially SGIs. For those who didn't know, MAME (of all things) can run IRIX to an extent: https://sgi.neocities.org/

maxlin 4 hours ago

This list should include SerenityOS IMHO.

It might not be super unique, but it is a truly from-scratch "common" operating system built in public, which for me at least makes it a useful reference: an OS whose codebase a single person can fully understand, if they want to understand a whole complete-looking OS.

  • Rochus 2 hours ago

    > This list should include...

    And a few dozen others as well.

Lerc 2 hours ago

Are there any operating systems designed from the ground up to support and fully utilize many-processor systems?

I'm thinking of systems designed around the assumption that there are tens, hundreds, or even thousands of processors, with design decisions made at every level to leverage that availability.

  • fiberhood 33 minutes ago

    The RoarVM [1] is a research project that showed how to run Squeak Smalltalk on thousands of cores (at one point it ran on 10,000 cores).

    I'm re-implementing it as a metacircular adaptive compiler and VM for a production operating system. We are rewriting the STEPS research software and the Frank code [2] for a million-core environment [3]. On the M4 processor we try to use all the available types of cores: CPU, GPU, neural engine, video hardware, etc.

    We just applied for YC funding.

    [1] https://github.com/smarr/RoarVM

    [2] https://www.youtube.com/watch?v=f1605Zmwek8

    [3] https://www.youtube.com/watch?v=wDhnjEQyuDk

  • 0x0203 an hour ago

    Yes, to a degree, but probably not quite like you're thinking. Supercomputers and HPC clusters are highly tuned for the hardware they use, which can have thousands of CPUs. But the "OS" that controls them takes on a bit of a different meaning in those contexts.

    Ultimately, the OS has to be designed for the hardware/architecture it's actually going to run on, and not just for a concept like "lots of CPUs". How the hardware does interprocess communication, cache and memory coherency, interrupt routing, etc. is going to be the limiting factor, not the theoretical design of the OS. Most of the major OSs already do a really good job of utilizing the available hardware for most typical workloads, and can be tuned pretty well for custom workloads.

    I added support for up to 254 CPUs in the kernel I work on, but we haven't taken advantage of NUMA yet; we don't really need to, because the performance hit for our workloads is negligible. But Linux and the BSDs do, and they can already get as much performance out of the system as the hardware will allow.

    Modern OSs are already designed with parallelism and concurrency in mind, and with the move towards making as many of the subsystems as possible lockless, I'm not sure there's much to be gained by redesigning everything from the ground up. It would probably look a lot like it does now.
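
    To make the "can be tuned for custom workloads" point concrete, here is a minimal, Linux-specific sketch of the kind of affinity tuning a workload might use: pinning the calling thread to a single CPU with sched_setaffinity. This is my own illustration and assumes Linux/glibc; it is not code from the kernel mentioned above.

        /* Minimal sketch (Linux-specific, illustrative only): pin the calling
           thread to CPU 0, the kind of affinity tuning mentioned above. */
        #define _GNU_SOURCE
        #include <sched.h>
        #include <stdio.h>

        int main(void)
        {
            cpu_set_t mask;
            CPU_ZERO(&mask);
            CPU_SET(0, &mask);              /* allow this thread on CPU 0 only */

            /* pid 0 means "the calling thread" */
            if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
                perror("sched_setaffinity");
                return 1;
            }
            printf("pinned to CPU 0\n");
            return 0;
        }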

  • Findecanor 31 minutes ago

    There have certainly been research operating systems for large cache-coherent multiprocessors, for example IBM's K42 and ETH Zürich's Barrelfish. Both were designed to keep each core's kernel state separate from the others', using message passing between cores instead of shared data structures.
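
    As a rough sketch of that design idea (my own toy illustration, not actual K42 or Barrelfish code): each core owns its state privately, and other cores never touch it directly; they only post messages to a per-core mailbox, which the owning core drains and applies itself.

        /* Toy, single-process sketch of per-core state plus message passing.
           Illustrative only; assumes C11 atomics. */
        #include <stdatomic.h>
        #include <stdint.h>
        #include <stdio.h>

        #define NCORES 4

        struct core {
            uint64_t private_counter;   /* state owned by exactly one core */
            _Atomic uint64_t inbox;     /* 0 = empty, otherwise a pending message */
        };

        static struct core cores[NCORES];

        /* A sender never touches the target's private state; it only posts a message. */
        static void send(int target, uint64_t msg)
        {
            uint64_t expected = 0;
            while (!atomic_compare_exchange_weak(&cores[target].inbox, &expected, msg))
                expected = 0;           /* spin until the single-slot inbox is free */
        }

        /* The owning core drains its inbox and updates its own state. */
        static void poll_inbox(int self)
        {
            uint64_t msg = atomic_exchange(&cores[self].inbox, 0);
            if (msg != 0)
                cores[self].private_counter += msg;
        }

        int main(void)
        {
            send(1, 42);                /* "core 0" asks "core 1" to add 42 */
            poll_inbox(1);              /* "core 1" applies the request itself */
            printf("core 1 counter = %llu\n",
                   (unsigned long long)cores[1].private_counter);
            return 0;
        }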

xattt 3 hours ago

I can’t help but notice that each of these stubs represents a not-insignificant amount of effort put in by one or more humans.

serhack_ 5 hours ago

I would love to see some examples outside of WIMP-based UIs.

  • WillAdams 2 hours ago

    Well, there were Momenta and PenPoint --- the latter in particular was focused on its Notebook metaphor, which felt quite different, and Apple's Newton was even more so.

    Oberon looks/feels strikingly different (and is _tiny_), and can be easily tried out via quite low-level emulation (it just wants some drivers to be fully native on, say, a Raspberry Pi).

  • wazzaps 2 hours ago

    MercuryOS towards the bottom is pretty cool

    • MonkeyClub an hour ago

      MercuryOS [1, 2] appears to be simply a "speculative vision" with no proof of concept implementation, a manifesto rather than an actual system.

      I read through its goals, and it seems that it is against current ideas and metaphors, but without actually suggesting any alternatives.

      Perhaps an OS for the AI era, where the user expresses an intent and the AI figures out its meaning and carries it out?

      [1] https://www.mercuryos.com/

      [2] https://news.ycombinator.com/item?id=35777804 (May 1, 2023, 161 comments)

  • amelius 4 hours ago

    Maybe a catalog of kernels?

rubitxxx3 3 hours ago

This list could be longer! I expected much more, given that CS students and hobbyists do this sort of thing often. Maybe the format is too verbose?

m2f2 3 hours ago

Too much time on your hands, folks. Get out of your cave, enjoy time with family, friends.... life is too short to lose on designing something just to make a post on HN .... or, God forbid, X.com...

  • padjo 3 hours ago

    Don’t try to force your values on other people. In the end your time spent with friends is just as meaningless as their time spent developing an obscure OS.

  • junon 2 hours ago

    No thanks :)