LLMs are statistical models trained on human-generated text. They aren't the perfectly logical "machine brains" that Asimov and others imagined.
The upshot of this is that LLMs are quite good at the things Asimov thought only humans would be able to do. What they aren't so good at (yet) is really rigorous reasoning, exactly the opposite of what people in the 20th century assumed.
What we used to think of as "AI" at one point in time becomes a mere "algorithm" or "automation" by another point in time. A lot of what Asimov predicted has come to pass, very much in the way he saw it. We just no longer think of it as "AI".
LLMs are just the latest form of "AI" that, for a change, doesn't quite fit Asimov's mold. Perhaps it's because they're being designed to replace humans in creative tasks rather than liberate humans to pursue them.
Exactly... as someone said, "I need AI to do my laundry and dishes, while I can focus on art and creative stuff"... But AI is doing the exact opposite, i.e. the creative stuff (drawing, poetry, coding, document creation, etc.), while we are left to do the dishes/laundry.
As someone else said - maybe you haven't noticed but there's a machine washing your clothes, and there's a good chance it has at least some very basic AI in it.
It's been quite a while since anyone in the developed world has had to wash clothes by slapping them against a rock while standing in a river.
Obviously this is really wishing for domestic robots, not AI, and robots are at least a couple of levels of complexity beyond today's text/image/video GenAI.
There were already huge issues with corporatisation of creativity as "content" long before AI arrived. In fact one of our biggest problems is the complete collapse of the public's ability to imagine anything at all outside of corporate content channels.
AI can reinforce that. But - ironically - it can also be very good at subverting it.
> As someone else said - maybe you haven't noticed but there's a machine washing your clothes, and there's a good chance it has at least some very basic AI in it.
This really seems like an "akshually" argument to me...
Nobody is denying that there are dishwashers and washing machines, and that they are big time savers. But is it really a mystery what people are referring to when they say "I want AI to wash my dishes and do my laundry"? That is, I still spend hours doing the dishes and laundry every week, and I have a dishwasher and washing machine. But I still want something to fold my laundry, something that lets me just dump my dishes in the sink and have them come out clean, ideally put away in the cabinets.
> Obviously this is really wishing for domestic robots, not AI
I don't mean this to be an "every Internet argument is over semantics" example, but literally every company and team I know that's working on autonomous robots refers to them as AI. And there is a fundamental difference between "old school" robotics, i.e. robots following procedural instructions, and robots that use AI-based models, e.g. https://deepmind.google/discover/blog/gemini-robotics-brings... . I think it's doubly weird that you say today's washing machines have "at least some very basic AI" in them (I think "very basic" is doing a lot of heavy lifting there...), but don't think AI refers to autonomous robots.
The wits in robotics would say we already have domestic robots - we just call them dishwashers and washing machines. Once something becomes good enough to take the job completely, it gets the name and drops "robotic" - that's why we still have robotic vacuums.
Similarly, we already have AI, which is really MI (Machine Intelligence). Long before the current hype cycle, the defense industry and others were using the same tools being applied now. Of course there are differences in scale, architecture, and so on.
I think that’s a bit silly. The reason we don’t commonly refer to a dishwasher as a robot isn’t because dishwashers exist and we only use “robot” for things that don’t exist.
(This should already be clear given that robots do exist, and we do call them robots, as you yourself noted, but never mind that for now.)
It’s not even about the level of mechanical or computational complexity. Automobiles have a lot of mechanical and computational complexity, but also aren’t called robots (ignoring of course self-driving cars).
I'm more interested in how we regularly use the term, rather than how we might attempt to come up with a rigorous definition (particularly when that rigorous definition conflicts awkwardly with regular usage).
My point is simply that we absolutely do not refer to a home dishwasher as a robot. Nor an old thermostat with a bimetallic strip and a mercury switch. Nor even a normal home PC.
I know I could google it, but I wonder whether washing machines were originally called "automatic clothes washers" or something similar before they became widely adopted.
> maybe you haven't noticed but there's a machine washing your clothes
Well sure, there’s also a computer recording, storing, and manipulating the songs I record and the books I write. But that’s not what we mean by “AI that composes music and writes books.”
This isn’t a quibble about the term “AI.” It’s simply clear from context that we’re talking about full automation of these tasks initiated by nothing more than a short prompt from the human.
I have yet to enjoy any of the "creative" slop coming out of LLMs.
Maybe some day I will, but I find it hard to believe, given that an LLM just copies its training material. All the creativity comes from the human input, and even though people can now cheaply copy the style of actual artists, that doesn't mean they can make it work.
Art is interesting because it is created by humans, not despite it. For example, poetry is interesting because it makes you think about what the author meant. With LLMs there is no author, which makes those generated poems garbage.
I'm not saying it can't work at all; it can, but not in the way people think. I subscribe to George Orwell's dystopian view from 1984, which already imagined the "versificator".
I don't find that very funny. It's interesting to see what AI can do, but wait a month or two and watch it again.
Compare that to the parodies made by someone like "Weird Al" Yankovic. And I get that these tools will get better, but the best parodies work due to the human performer. They are funny because they aren't fake.
This goes for other art forms. People mention photography a lot, comparing it with painting. Photography works because it captures a real moment in time and space; it works because it's not fake. Painting also works because it shows what human imagination and skill with brushes can do. When it's fake (e.g., not made by a human painting with brushes on canvas, but by a Photoshop filter), it's meaningless.
Seems that you may have a point. As noted in another comment[0], the [rather puerile] lyrics were completely bro-sourced. They used Suno to mimic an old-style band.
We thought machines were gonna do the work so we could pursue art and music. Instead the machines get to make the art and music, while humans work in the Amazon warehouses.
It was kind of funny to see the shift in the media reaction when they realized the new batch of machines are better at replacing writers than at replacing truckers.
We ended up here because we have a propensity to share our creative outputs, and keep our laundry habits to ourselves. If we instead went around bragging about how efficiently we can fold a shirt, complete with mocap datasets of how it's done, we'd have gotten the other kind of AI first.
> We ended up here because we have a propensity to share our creative outputs, and keep our laundry habits to ourselves
Somehow I doubt that the reason gen AI is way ahead of laundry-folding robots is that how to fold a shirt is some kind of big secret, or that there aren't enough examples of shirt folding.
Manipulating a physical object like a shirt (especially a shirt or other piece of cloth, as opposed to a rigid object) is orders of magnitude more complex than completing a text string.
If you wanted finger-positioning data for how millions of different people fold thousands of different shirts, where would you go looking for that dataset?
My point is just that the availability of training data is vastly different between these cases. If we want better AI we're probably going to have to generate some huge curated datasets for mundane things that we've never considered worth capturing before.
It's an unfortunate quirk of what we decide to share with each other that has positioned AI to do art and not laundry.
The bottom line from Kasparov's book on AI was that AI researchers want to build AGI, but every decade they are forced to release something to generate revenue, and it's branded as AI until the next time.
And often they get so caught up supporting the latest fake AI craze that they don't get to research AGI.
I see this referenced over and over again to trivialise AI, as if it were a fait accompli.
I'm not entirely sure why invoking statistics feels like a rebuttal to me. Putting aside the fact that LLMs are not purely statistics: even if they were, what proof is there that you cannot make an intelligent statistical machine? It would not at all surprise me to learn that someone has made a purely statistical Turing-complete model. To then argue that it couldn't think is to say that computers can never think; combine that with the fact that we do think, and you are invoking a soul, God, or Penrose.
Personally, I have a negative opinion of LLMs, but I agree completely. Many people are motivated to reject LLMs solely because they see them as "soulless machines". Judge based on the facts of the matter, and make your values clear if you must bring them into it, but don't pretend you're not applying values when you are. You can do worse: kneejerk emotional reactions are just pointless.
In this one case it's not meant to trivialize, it's meant to point out that LLMs don't behave the way we thought that AI would behave. We thought we'd have 100% logically-sound thinking machines because we built them on top of digital logic. We thought they'd be obtuse, we thought they'd be "book smart but not wise". LLMs are just different from that; hallucinations, the whole "fancy words and great sentences but no substance to a paragraph", all that is different from the rigid but perfect brains we thought AI would bring. That's what "statistical machine" seems to be trying to point out.
It was assumed that if you asked the same AI the same question, you'd get the same answer every time. But that's not how LLMs work (I know you can seed them the same every time and get the same output, but we don't do that, so how we experience them is different).
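To make that concrete, here's a minimal sketch of what "seeding them the same every time" means in practice, using the OpenAI Python SDK (the model name is just an example, and even with these settings determinism is best-effort rather than guaranteed):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(question: str) -> str:
        # temperature=0 requests greedy decoding (always the most likely token);
        # a fixed seed makes sampling reproducible on a best-effort basis.
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": question}],
            temperature=0,
            seed=42,
        )
        return response.choices[0].message.content

    # Called twice, this now tends to return the identical answer,
    # unlike the default sampling settings most chat UIs use.
    print(ask("State Asimov's three laws of robotics."))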
To be truthful, though, that’s only like 0.01 percent of the “academia was stolen from us and being a professor (if you ever get there at all) is worse” problem.
This wasn't just an "academia" thing, though. All business executives (even low level ones) had secretaries in the 1980s and earlier too. Typing wasn't something most people could do and it was seen as a waste of time for them to learn. So people dictated letters to secretaries who typed them. After the popularity of personal computers, it just became part of everyone's job to type their correspondence themselves, and secretaries (greatly reduced in number and rebranded as "assistants" who deal more with planning meetings and things) became limited only to upper management.
I've only read the first Foundation novel by Asimov. But what you write applies equally well to many other Golden Age authors, e.g. Heinlein and Bradbury, plus slightly later writers like Clarke. I doubt there was much in the way of autism awareness or diagnosis at the time, but it wouldn't be surprising if any of these landed somewhere on the spectrum.
Alfred Bester's "The Stars My Destination" stands out as a shining counterpoint in this era. You don't get much character development like that in other works until the sixties, imo.
[The italics and punctuation suggest your comment is sarcastic, but I'm going to treat it as serious just in case.]
Yeah, I'd say characterisation is a weakness of his. I've read Stranger in a Strange Land, The Moon is a Harsh Mistress, Starship Troopers, and Double Star. Heinlein does explore characters more than, say, Clark, but he doesn't go much for internal change or emotional growth. His male characters typically fall into one of two cartoonish camps: either supremely confident, talented, intelligent and independent (e.g. Jubal, Bernardo, Mannie, Bonforte...) or vaguely pathetic and stupid (e.g. moon men). His female characters are submissive, clumsily sexualised objects who contribute very little to the plot. There are a few partial exceptions - e.g. Lorenzo in Double Star and female pilots in Starship Troopers - but the general atmosphere is one of teenage boy wish fulfilment.
Excuse me for giving the impression of a pedant, but do you mean Clarke, as in Arthur C., there? I've been trying since I first read your comment to puzzle out to whom by that name you could possibly be referring in this context, and it's only just dawned on me to wonder if you simply have not bothered to learn the spelling of the name you intended to mention.
Yes, that Clarke. Sorry for putting you to the extra effort. I spelled it correctly in the initial post you replied to. Guess I assumed that people would spot the back-reference.
> Yes, that Clarke. Sorry for putting you to the extra effort. I spelled it correctly in the initial post you replied to. Guess I assumed that people would spot the back-reference.
In entire fairness, I was distracted by you having said he and his contemporaries must all have been autistic, as if either you yourself were remotely competent to embark upon any such determination, or as though it would in some way indict their work if they were.
I'm sure you would never in a million years dare utter "the R-slur" in public, though I would guess that in private the violation of taboo is thrilling. That's fine as far as it goes, but you really should not expect to get away with pretending you can just say "autistic" to mean the same thing and have no one notice, you blatantly obvious bigot.
Thank you for confirming, especially at such effort, when a simple "No, I haven't; I just spend too much time uncritically reading feminism Twitter," would have amply sufficed. There's an honesty to this response in spite of itself, and in spite of itself I respect that.
I sincerely have no idea if any of your comments in this thread are sarcastic or not. (This comment is also not sarcastic FYI).
Generally, I also agree that Heinlein's characters are one dimensional and could benefit from greater character growth, though that was a bit of a hallmark of Golden Age sci-fi.
"Teenage boy wish fulfillment" is well beneath any reasonable standard of criticism, and I've addressed that with about as much respect as it deserves.
There is much worthy of critique in Heinlein, especially in his depiction of women. I've spent about a quarter century off and on both reading and formulating such critiques, much more recently than I've spent meaningful time with his fiction. I've also read what he had to say for himself before he died, and what Mrs. Heinlein - she kept the name - said about him after. If we want to talk about, for example, how the themes of maternal incest and specifically feminine embodiment of weakly superhuman AGI in his later work reflect a degree of senescence and the wish for a supercompetent maternal figure to whom to surrender the burden of responsibility, or if we want to talk about how Heinlein seems to spend an enormous amount of time just generally exploring stuff from female characters' perspectives that an honest modern inquiry would recognize as fumbling badly but earnestly in the direction of something like a contemporary understanding of gender, then we could talk about that.
No one wants to, though. You can't use anything like that as a stick to beat people with, so it never gets a look in, and those as here who care nothing for anything of the subject save if it looks serviceable as a weapon claim to be the only ones in the talk who are honest. They don't know the man's work well enough to talk about the years he spent selling stories that absolutely revolve around character development, which exist solely to exemplify it! Of course these are universally dismissed as his 'juveniles' - a few letters shy of 'juvenilia' - because science fiction superfans are all children and so are science fiction superhaters, neither of whom knows how to respond in any way better than a tantrum on the rare occasion of being told bluntly it's well past time they grew up.
But they're the honest ones. Why not? So it goes. It's a conversation I know better than to try to have, especially on Hacker News; if I don't care for how it's proceeding, I've no one but myself to blame.
Not sure if it will help me saying this, but that's a disappointingly dismissive and avoidant response well below HN standards. I'm very willing to engage with any counter-arguments in good faith. I don't use Twitter (or Mastodon, or BlueSky, or TikTok, or Facebook, or Threads etc...), but I do enjoy discussing sci fi of different periods on Goodreads groups.
It seems filthy rich of you to claim good faith at this time, but I have recently begun to gather that in some quarters lately, it is considered offensively unreasonable to expect working knowledge of any material as a prerequisite for participating competently in discussion thereof. So though your claim is facially false, I ironically can't fairly consider that it is other than honestly made. Your precepts are in any case your problem. Good luck with it, you Hacker News expert.
I'd be happy to receive any pointers on how I'm wrong - perhaps I've misinterpreted what I've read, or there are characters in the rest of his work that defy my stance.
> I'd be happy to receive any pointers on how I'm wrong - perhaps I've misinterpreted what I've read, or there are characters in the rest of his work that defy my stance.
If you meant that honestly, you would already have found ample directions for further research, easily enough not to need asking. Everything you claim to want lies just a Google search away, on any of the various and I should hope fairly easily identifiable search terms I have mentioned. "It is not my job to educate you."
Or, rather, it would still not be my job even if to learn were what you really want here. You don't, of course. That's why you haven't bothered so much as trying a few searches that might turn up something you would have to pretend to have read. Much easier to try to make me look emotionally unstable - 'defy?' Really. - because you can't actually answer anything I've said and you know it. Good luck with that, too.
I've read the books, mulled them over, discussed them with others, and done some reading of what other critics have to say online. I've given my opinion and some of the reasoning behind it. If you want more of my reasoning I'm happy to give it. You have given nothing in response. It feels a lot like you've jumped to conclusions because my opinion is very different to yours. So you've immediately decided not to engage but are nevertheless hellbent on making me out to be uninformed or stupid.
We've clearly got off on the wrong foot here. I don't want to make out like I think Heinlein is crap. He had a lot of fantastic, creative ideas about science, technology, culture, sexuality and governance. He was extremely daring and sometimes quite subtle in the way he explored those ideas. But - in the novels I've read - his characters lack a certain depth and relatability. They express very little of the self-doubt, empathy, growth, and deep-seated motivations that are core to the human condition. So it goes also with Asimov, Clarke, Bradbury, and others. And it's fine that those weren't their strong suits. They had other strengths. And there were other writers like Bester, Dick, Le Guin, Zelazny, Herbert etc... who could fill the gaps.
Herbert for better gender and emotional politics than Heinlein. Herbert! And to think I imagined there was nothing left you could say to surprise me.
Don't expect me to stop discussing what your behavior displays of your character, just because you've finally shown the modicum of rhetorical sense or tactical cunning required to minimally amend that behavior. Again, if you actually meant even a fraction of what you say, you would now be reading instead of still speaking. If it bothers you that you continue to indict yourself by your actions this way, consider acting differently.
Should you at any future point opt to develop a thesis in this area which is capable of supporting knowledgeable discussion, I confide it will find an audience in accord with its quality. In the meantime, please stop inviting me to participate in the project of recovering your totally unforced embarrassment.
Believe it or not by the look of things, I already have enough else to do today. Wiping your nose as you struggle and fail to learn from your vastly overprivileged young life's first encounter with entirely merited and obviously unmitigated contempt doesn't really make the list, at least not past the point at which it ceases to amuse, which I admit is now fast approaching.
In Dune there are female characters with their own desires and designs on the world, who go out and take what they want. There is profound loss, and personal transformation. There is coming to terms with intensely sad or painful circumstances. There is overcoming doubt, building resilience, and taking responsibility and control of one's destiny. These things were not really explored in what I've read of Heinlein.
> These things were not really explored in what I've read of Heinlein.
I don't know how much further you expect me to need to boil down "read more" and still be able to take you seriously. How do you expect that, when you haven't even bothered trying to justify how you chose those four novels to represent forty years?
I see that 'seriously' very much describes how you like to regard yourself. You've insisted most thoroughly others must regard you likewise, regardless of what you show yourself anywhere near capable of actually rewarding or indeed even appreciating. Do you have a favorable impression of your efforts thus far? Have they had the results that you hoped?
We would now be having a different conversation if you had said anything to suggest to me it would be worth my trouble to continue in the attempt. I'd have enjoyed that conversation, I think; as most days here, I had hopes of learning something new. You've felt the need to spend the day doing this instead. If you don't like how that's working out, whom fairly may you blame?
And then it turns out that having taken bits and bites out of my entire mortal day, to pursue this pointless argument with you, was just what I needed even if nothing at all what I wanted. It put me in a state of mind where I could find some kind words to say to my family that I think some folks there may have been quite a bit, if in a small way, needing to hear for a while.
That's not even slightly to your credit, of course. But I can't fairly say you weren't involved, and I have to admit I genuinely appreciate this result, however inadvertent and I'm sure unimaginable on your part it may be. So, though I say it through gritted teeth, thank you for your time. If for absolutely nothing else whatsoever, for that at least I must express my genuine gratitude.
Intolerable though you've been throughout, and despite what I assume to have been your every intention, something good may yet come of your ill efforts. You deserve to know that. May it heap as many coals of fire on your head as your heart should prove small enough to deserve.
> At this point I'm mostly just intrigued to see whether you'll keep replying and whether you'll make any substantial points.
Every substantive point I've actually made all day you have totally ignored, and this is what it's worth your time still to do. But sure. You can stop paying me rent to live in your head any time you like. Keep telling yourself that. I don't doubt you need to, to get through a day.
> Substantive as in about Heinlein's work, rather than attacks on me or my motivations.
We could have done that fifteen hours ago [1], or eleven hours ago [2], or nine hours ago [3] [4], or any time you wanted. You haven't. What's changed?
I've given you lots of opportunity to offer a defense to the points I raised in my first reply to you. I've offered to go into more detail. I've contrasted Heinlein's work with contemporaneous works. Saying "you should go and read more" is not compelling, especially given the amount of effort you've expended to avoid saying anything of substance. I wonder if you feel insecure about whether such a defense is possible.
> I've given you lots of opportunity to offer a defense to the points I raised in my first reply to you...Saying "you should go and read more" is not compelling...I wonder if you feel insecure about whether such a defense is possible.
No, you don't. I've said nothing I need defend, and you've said nothing you can. It would be one thing if I had to say not to piss on my boots and tell me it's raining, but this doesn't even count as pissing. It's just you repeating yourself from yesterday and that's boring for both of us.
"You are a bigot" is a factual claim I have made [1], now quite a number of hours and comments ago. You haven't addressed it. You won't. You can't. You have no choice now but to let it stand. You have shown it more true than even you yourself can pretend to ignore. You need someone to tell you it isn't really true, in a way you can believe. No one is here to tell you that.
There are other embarrassments, of course; you've shown yourself not a tenth the scholar you fancy yourself to be, nor able to handle yourself even slightly in the face of someone who needs nothing from you and cares neither for nor against you. You would care more that I called you an abuser, but you don't see the people you try to treat that way as human. So what you're really stuck on is that I called you a bigot and you can't answer back. Hence still finding it worth your while to try to talk me into letting you off that hook.
Sorry, not sorry. Go back to bed. Read a book while you're there, why don't you? It might help you sleep.
edit: You also haven't explained what makes those four books you named as exemplary as you called them. Can you describe the common thread? I ask because I actually have read them, in no case fewer than three times, and they really haven't all that much in common. Oh, by the same author, certainly. But you've only dropped names. You haven't tried to draw any comparisons or demonstrate anything by the rhetorical juxtaposition of those characters, though I grant you keep insisting it must count for something that you listed them. You haven't, so far as I can see, discussed or even mentioned a single event in the plot of any of those novels. For all the nothing you've had to say with any actual reference to them, even the few texts you named might as well not exist!
It is extremely risible at this time of you to try to claim you are the one here interested in talking about Heinlein. If there were a God, it would not be safe to tell a lie of that magnitude near a church. But no matter. To get back to the first question I asked here just above: Did anyone actually explain to you why those four should be the first and last of Heinlein worth talking about? Did you ever think to ask? Or was it that they were part of an assignment? - you turned in a paper and assumed the passing grade meant you must have learned something by the transaction, and that for you was where the matter and all semblance of curiosity ended.
I hope it isn't that last one. I already believed firmly that student loan relief was the correct action both ethically and economically; as I have said in other quarters lately, it is not possible for you to be enough of an asshole to change my politics. But if this is you recapitulating something you paid to be taught - if you're currently pursuing or God forfend have completed an American university education, and the best approximation of clear thought you can manage is this - then whoever sold you and your family that bill of goods ought damn well be horsewhipped, and that they merely see the loan annulled instead would be a considerable mercy.
I meant that you might offer a defense of Heinlein against my initial points: for example, that there's a strong element of wish fulfilment in his characters. This is neither an extreme nor an uncommon critique. You clearly disagree with it quite strongly. I just want to know what about it you personally find unconvincing.
You ask what I find unconvincing. I'm happy to further oblige you:
> His male characters typically fall into one of two cartoonish camps: either supremely confident, talented, intelligent and independent (e.g. Jubal, Bernardo, Mannie, Bonforte...) or vaguely pathetic and stupid (e.g. moon men). His female characters are submissive, clumsily sexualised objects who contribute very little to the plot. There are a few partial exceptions - e.g. Lorenzo in Double Star and female pilots in Starship Troopers - but the general atmosphere is one of teenage boy wish fulfilment.
"Cartoonish." "Pathetic." "Stupid." "Submissive." "Clumsily sexualized." "Teenage boy." 'Moon men' - you mean Loonies? And this all was you yesterday [1]. How far do you really expect to get with this farcical pantomime of sweet reason now? I ask again: What's changed?
This all began when I said you obviously hadn't read what you claimed to have [2], and it got so far up your nose you couldn't help going and proving me right. You've made a lot more bad decisions since then, but don't worry: I'll keep reminding you as long as you show you need me to that you can amend your behavior at any time.
Will email you some links/screenshots later today to demonstrate that I've read them (and expand on my points). Would post them here but keen to keep accounts separate.
Okay. Before you do so and for no particular reason, I feel I should note two things.
First, assuming you are not in fact a public figure, I will not publicly reveal your identity or any information I believe could lead to its disclosure, and that is exactly as far into my confidence as you may expect to come. That caveat excepted, I hereby explicitly disclaim any presumption you may have of privacy in any communication you make with me via email or other nonpublic means.
I won't dox you. I understand it isn't as safe for everyone as for me to have their name in the world. And I'm not saying I intend to publish all, or indeed any, of what you send; if it deserves in my view to remain in confidence, I will keep it so. But if you think taking this conversation to email will give you a chance to play games where no one else can see, you had better think twice.
(Should you by any of several plausible means dig up my phone number and try giving me a call, I hereby explicitly advise that any such action on your part constitutes "prior consent" per Md. Code §10–402 [1], and I will exercise my option under that law without further notice.)
Second, there exists an organization with which I have a legal agreement, binding on all our various heirs and assigns, to the effect we are quits forever. I will refer to this company as "Name Redacted for Legal Reasons" or "Name Redacted" for short, and describe it as the brainchild of a fascinating and tight-knit group of siblings, any of the three (technically four) of whom I'd have liked the chance to know better than I did.
I will also note, not for the first time, that I signed that agreement in entire good faith which has endured from that day through this, and I earnestly believe the same of my counterparty both collectively, and in the individual and separate persons of those who represented Name Redacted to me throughout that process as well as through my prior period of employment.
Now, if I were an employee of Name Redacted for Legal Reasons, and I had started a day's worth of shit in public with a signatory of such an agreement as I describe - that is, if I had acted in a way which could be construed to compromise my employer's painstakingly arrived-upon mutual quitclaim - then the very last thing I would ever want to do would be to allow to come into existence documentary evidence of my possibly somewhat innocent but certainly very grave foolishness. Because if that did happen, I would understand I may confidently expect very soon to become 'the most fired-for-causedest person in the history of fuck.'
As I said, I signed in good faith. In that same good faith, what choice really would I have but to privately disclose in full detail? It would be irresponsible of me to assume this was the only problem such intemperate behavior might be creating for Name Redacted, any or all of which might be far more consequential than this.
I'm sure at this point I'm only talking to hear myself speak, though. In any case, I look forward to your email.
Ah, here we go. I understand why you're using a fresh throwaway for this sort of thing, of course. Can't risk being seen for no better than you have to be, eh? But this at least - and, I strongly suspect, at last - is honest.
You can't abuse me in any way you're wise or sensible enough to imagine finding, so now you'll go mistreat someone inside the span of your arm's reach, blaming me all the while for your own infantile urge to do so. I wish you every bit as much joy of it as you deserve. And I hope they know your current Hacker News handle.
Oh, I know; I don't blame you at all for feeling some need to clarify, but I was under no confusion. Sorry you got tangled up in all this. I hope it hasn't been totally lacking in literary-critical interest, at least.
Almost everything we learn in schools, universities, most jobs, history, news, Hacker News, etc., is literally human-generated text. Our brains have an efficient structure for learning language, which has evolved over time, but the process of actually learning a language happens after you are born, based on human-generated text/voice. Things like balance/walking, motion control, and speaking (physical voice control) are trained on sensory data, but there's no reason LLMs/AIs can't be trained on similar data (and in many cases they already are).
What we generate is probably a function of our sensory data + what we call creativity. At least humans still have access to the sensory data, so we can separate the two (with varying success).
LLMs have access to what we generate, but not the source. So they embed how we may use words, but not why we use this word and not others.
> I don't understand this point - we can obviously collect sensory data and use that for training.
Sensory data is not the main issue; how we interpret it is.
In Jacob Bronowski's The Origins of Knowledge and Imagination, IIRC, there's an argument that our eyes are very coarse sensors. Instead they do basic analysis from which the brain can infer the real world around us, together with other data from other organs. Like Plato's cave, but with many more dimensions.
But we humans all come with the same mechanisms, which roughly interpret things the same way. So there's some commonality there in the final interpretation.
> Humans learn language by observing other humans use language, not by being taught explicit rules about when to use which word and why.
Words are symbols that refer to things and the relations between them. In the same book, there's a rough explanation of language which describes the three elements that define it: symbols or terms, the grammar (the rules for using the symbols), and a dictionary which maps the symbols to things and the rules to interactions in another domain that we already accept as truth.
Maybe we are not taught the rules explicitly, but there's a lot of training done with corrections when we say a sentence incorrectly. We also learn the symbols and the dictionary as we grow and explore.
So LLMs learn the symbols and the rules, but not the whole dictionary. They can use the rules to create correct sentences, and relate some symbols to others, but ultimately there's no dictionary behind it.
> In the same book, there's a rough explanation of language which describes the three elements that define it: symbols or terms, the grammar (the rules for using the symbols), and a dictionary which maps the symbols to things and the rules to interactions in another domain that we already accept as truth.
There are two types of grammar for natural language: descriptive (how the language actually works and is used) and prescriptive (a set of rules about how a language should be used). There is no known complete and consistent rule-based grammar for any natural human language; all of these grammars are based on some person or people, in a particular period of time, selecting a subset of the real descriptive grammar of the language and saying "this is the better way". Prescriptive, rule-based grammar is not at all how humans learn their first language, nor is prescriptive grammar generally complete or consistent. Babies can easily learn any language, even ones that do not have any prescriptive grammar rules, just by observing; there have been many studies that confirm this.
> there's a lot of training done with corrections when we say a sentence incorrectly.
There's a lot of the same training for LLMs.
> So LLMs learn the symbols and the rules, but not the whole dictionary. They can use the rules to create correct sentences, and relate some symbols to others, but ultimately there's no dictionary behind it.
LLMs definitely learn "the dictionary" (more accurately, a set of relations/associations between words and other types of data), and much better than humans do; not that such a "dictionary" is an actual, determinate part of the human brain.
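To illustrate the kind of relations meant here, a minimal sketch using the sentence-transformers package (the model name is just a small off-the-shelf choice; this shows the general idea of learned word associations, not how any particular LLM stores them):

    from sentence_transformers import SentenceTransformer, util

    # Small pretrained embedding model; downloaded on first use.
    model = SentenceTransformer("all-MiniLM-L6-v2")

    words = ["dog", "puppy", "carburetor"]
    vectors = model.encode(words)

    # Cosine similarity: related words score high, unrelated ones low.
    print(util.cos_sim(vectors[0], vectors[1]))  # dog vs. puppy: high
    print(util.cos_sim(vectors[0], vectors[2]))  # dog vs. carburetor: low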
> there's an argument that our eyes are very coarse sensors. Instead they do basic analysis from which the brain can infer the real world around us with other data from other organs
I don't buy it. I think our eyes are approximately as fine as we perceive them to be.
When you look through a pair of binoculars at a boat and some trees on the other side of a lake, the only organ that's getting a magnified view is the eyes, so any information you derive comes from the eyes and your imagination, it can't have been secretly inferred from other senses.
Not really; sometimes it's just plausible lies. We distort the world but respect some basic rules, making it believable. Another difference from LLMs is that we can store this distortion and rely on it as $TRUTH.
And we can distort quite far (see cartoons in drawing, dubstep in music,...)
Maybe; at some level, are dogs' brains also simple sensory-collecting statistical models? A human baby and a dog are born on the same day; that dog never leaves that baby's side for 20 years. It sees everything the baby sees, it hears everything the baby hears, and it gets to interact with its environment in roughly the same way the baby does, to the degree they are both physically capable. The intelligence differential after that time will still be extraordinary.
My point in bringing up that metaphor is to focus the analogy: When people say "we're just statistical models trained on sensory data", we tend to focus way too much on the "sensory data" part, which has led to for example AI manufacturers investing billions of dollars into slurping up as much human intellectual output as possible to train "smarter" models.
The focus on the sensory input inherently devalues our quality of being; it implies that who we are is predominantly explicable by the world around us.
However: we should be focusing on the "statistical model" part. Even if it is accurate to holistically describe the human brain as a statistical model trained on sensory data (which I have doubts about, but those are fine to leave to the side), it's very clear that the fundamental statistical model itself is simply so far superior in human brains that comparing it to an LLM is like comparing us to a dog.
It should also be a focal point for AI manufacturers and researchers. If you are on the hunt for something along the spectrum of human-level intelligence, and during this hunt you are providing it ten thousand lifetimes of sensory data to produce something that, maybe, if you ask it right, can behave similarly to a human who has trained in the domain for only years: you're barking up the wrong tree. What you're producing isn't even on the same spectrum. That doesn't mean it isn't useful, but it's not human-like intelligence.
Well the dog brain and human brain are very different statistical models, and I don't think we have any objective way of comparing/quantifying LLMs (as an architecture) vs human brains at this point. I think it's likely LLMs are currently not as good as human brains for human tasks, but I also think we can't say with any confidence that LLMs/NNs can't be better than human brains.
For sure; we don't have a way of comparing the architectural substrate of human intelligence versus LLM intelligence. We don't even have a way of comparing the architectural substrate of one human brain with another.
Here's my broad concern: On the one hand, we have an AI thought leader (Sam Altman) who defines super-intelligence as surpassing human intelligence at all measurable tasks. I don't believe it is controversial to say that we've established that the goal of LLM intelligence is something along these lines: it exists on the spectrum of human intelligence, its trained on human intelligence, and we want it to surpass human intelligence, on that spectrum.
On the other hand: we don't know how the statistical model of human intelligence works, at any level at all which would enable reproduction or comparison, and there's really good reason to believe that the human intelligence statistical model is vastly superior to the LLM model. The argument for this lies in my previous comment: the vast majority of contribution of intelligence advances in LLM intelligence comes from increasing the volume of training data. Some intelligence likely comes from statistical modeling breakthroughs since the transformer, but by and large its from training data. On the other hand: Comparatively speaking, the most intelligent humans are not more intelligent because they've been alive for longer and thus had access to more sensory data. Some minor level of intelligence comes from the quality of your sensory data (studying, reading, education). But the vast majority of intelligence difference between humans is inexplicable; Einstein was just Born Smarter; God granted him a unique and better statistical model.
This points to the undeniable reality that, at the very least, the statistical model of the human brain and that of an LLM are very different, which should cause you to raise eyebrows at Sam Altman's statement that superintelligence will evolve along the spectrum of human intelligence. It might, but it's like arguing that the app you're building is going to be the highest-quality and fastest macOS app ever built, and you're building it using WPF and compiling it for x86 to run on WINE and Rosetta. GPT isn't human intelligence; at best, it might be emulating, extremely poorly and inefficiently, some parts of human intelligence. But they didn't get the statistical model right, and without that it's like forcing a square peg into a round hole.
Attempting to summarize your argument (please let me know if I succeeded):
Because we can't compare human and LLM architectural substrates, LLMs will never surpass human-level performance on _all_ tasks that require applying intelligence?
If my summary is correct, then is there any hypothetical replacement for LLM (for example, LLM+robotics, LLMs with CoT, multi-modal LLMs, multi-modal generative AI systems, etc) which would cause you to then consider this argument invalid (i.e. for the replacement, it could, sometime replace humans for all tasks)?
Well, my argument is directed more at the people who say "well, the human brain is just a statistical model with training data". If I say both birds and airplanes are just a fuselage with wings, then proceed to dump billions of dollars into developing better wings, we're missing the bigger picture of how birds and airplanes are different.
LLM luddites often call LLMs stochastic parrots or advanced text prediction engines. They're right, in my view, and I feel that LLM evangelists often don't understand why. Because LLMs have a vastly different statistical model, even when they showcase signs of human-like intelligence, what we're seeing cannot possibly be human-like intelligence, because human intelligence is inseparable from its statistical model.
But, it might still be intelligence. It might still be economically productive and useful and cool. It might also be scarier than most give it credit for being; we're building something that clearly has some kind of intelligence, crudely forcing a mask of human skin over it, oblivious to what's underneath.
> Isaac Asimov describes artificial intelligence as “a phrase that we use for any device that does things which, in the past, we have associated only with human intelligence.”
This is a pretty good definition, honestly. It explains the AI Effect quite well: calculators aren’t “AI” because it’s been a while since humans were the only ones who could do arithmetic. At one point they were, though.
Although calculators can now do things almost no human can do, or at least not in any reasonable time. But most (now) wouldn't call it AI. It's a tool, with a very limited domain.
Similarly, we esteem performance optimizations so aggressively that a lot of things that used to be called performance work are now called architecture, or good design. We just keep moving the goalposts to make things more comfortable.
The abacus has existed for thousands of years. Those who had the job of "calculator" also used pencil and paper to manage larger calculations which they would have struggled to do without any tools.
The funny thing about Asimov was how he came up with the laws of robotics and then wrote cases where they don't work.
There are a few that I remember; one where a robot was lying because a bug in its brain gave it empathy and it didn't want to hurt humans.
I was always a bit surprised other sci fi authors liked the "three laws" idea, as it seems like a technological variation of other stories about instructions or wishes going wrong.
I may be misremembering, but I thought the main point of the I, Robot series was that regardless of the law, incomplete information can still end up getting someone killed.
In all the cases of killing, the robots were innocent. It was either a human that tricked the robot or didn't tell the robot what they were doing.
For example, a lady killed her husband by asking a robot to detach his arm and give it to her. Once she got it, she beat the husband to death, and the robot didn't have the capability to stop her (since it gave her its arm). That caused the robot to effectively self-destruct.
Giskard, I believe, was the only one that killed people. He ultimately ended up self-destructing as a result (the fate of robots that violate the laws).
Narratives build on top of each other so that complex narratives can be built. This is also the reason why Family Guy can speedrun through all the narrative arcs developed by culture in 30 seconds clip.
>he came up with the laws of robotics and then wrote cases where they don't work. There are a few that I remember; one where a robot was lying because a bug in its brain gave it empathy and it didn't want to hurt humans.
IIRC, none of the robots broke the laws of robotics; rather, they ostensibly broke the laws, but on later investigation the robots turned out to have been following them because of some quirk.
And one that was sacrificing a few for the good of the species. You can save more future humans by killing a few humans today that are causing trouble.
In the Foundation books, he revealed that robots were involved behind the scenes, and were operating outside of the strict 3 laws after developing the concept of the 0th law.
>A robot may not harm humanity, or, by inaction, allow humanity to come to harm
Therefore a robot could allow some humans to die, if the 0th law took precedence.
This is nit-picky, but you're probably actually referring to Cinematographers, or Directors of Photography. They're the ones who deal with the actual cameras, lenses, use of light, etc. Directors deal with the actors and the script/writer.
The reason we give them awards is that the camera can't tell you which lens will give you the effect you want or how to emphasize certain emotions with light.
> People made the same argument about Cameras vs Painting.
I remember that from a couple of years ago, when Stable Diffusion came out. There was a lot of talk about "art" and "AI" and someone posted a collection of articles / interviews / opinion pieces about this exact same thing - painting vs. cameras.
A human is still involved with the camera. Just a different set of skills, and absent manipulation in post, the things being photographed tended to actually exist. Now we need neither photographer nor subject.
AIs still aren't autonomous. The model doesn't make anything unless a human directs it to. It's just another layer of abstraction above the camera or paintbrush.
Humans will always produce; it's just that those productions may not be financially viable, and may not have an audience. Grim, but also not too far off from the status quo today.
If we're abolishing it, we have to really abolish it, both ways, not just abolish companies' responsibilities but not rights, while abolishing individuals' rights but not responsibilities.
It's for sure less questionable than the current proposition of letting a handful of billionaires exploit the effort of millions of workers, without permission and completely disregarding the law, just for the sake of accumulating more power and more billions.
Sure, patent trolls suck, so do MAFIAA, but a world where creators have no means to subsist, where everything you do will be captured by AI corps without your permission, just to be regurgitated into a model for a profit, sucks way way more
How so? Even in a perfectly egalitarian world, where no one had to compete for food or resources, in art, there would still be a competition for attention and time.
There is the general principle of legal apparatus to facilitate artists getting paid. And then there is the reality of our extant system, which retroactively extends copyright terms so corporations who bought corporations who bought corporations... ...who bought the rights to an artistic work a century ago can continue to collect rent on that today. Whatever you think of the idealistic premise, the reality is absurd.
> Intellectual Property is a questionable idea to begin with...
I know! It's totally and completely immoral to give the little guy rights against the powerful. It infringes in the privileges and advantages of the powerful. It is the Amazons, the Googles, the Facebooks of the world who should capture all the economic value available. Everyone else must be content to be paid in exposure for their creativity.
If we are headed to a star-trek future of luxury communism, there will definitely be growing pains as the things we value become valueless within our current economic system. Even though the book itself is so-so IMO, Down and Out in the Magic Kingdom provides a look at a future economy where there is an infinite supply of physical goods so the only economy is that of reputation. People compete for recognition instead of money.
This is all theoretical, I don’t know if I believe that we as humans can overcome our desire to hoard and fight over our possessions.
You're saying something exactly backwards from reality. Star Trek is communism (except it's not) because there's no scarcity. It's not selfishness that's the problem. It's the ever-increasing number of things invented inside capitalism we deem essential once invented.
I always say this: we are headed to a star-trek future, but we will not be the Federation, we will become Borg. Between social media platforms, smartphones and "wokeness" the inevitable result is that everybody will be forced into compliance, no originality or divergent thinking will be tolerated.
I'm glad we're seeing the death of the concept of owning an idea. I just hope the people who were relying on owning a slice of the noosphere can find some other way to sustain themselves.
Copyright law protects the expression of ideas, not the ideas themselves.
My favourite case law reinforcing this point was between David Bowie and the Gallagher brothers.
I would argue patents are closer to protecting ideas, and those are alive and well.
I do agree copyright law is terribly outdated but I also feel the pain of the creatives.
Lawyers and people with lots of money figured out how to make even bigger piles of money for lawyers and people with lots of money from people who could make things like art, music, and literature.
They occasionally allowed the people who actually make things to become wealthy in order to incentivize other people who make things to continue making things, but mostly it's just the people with lots of money (and the lawyers) who make most of the money.
Studios and publishers and platforms somehow convinced everyone that the "service" and "marketing" they provided were worth the vast majority of the revenue creative works generated.
This system should be burned to the ground and reset, and any indirect parties should be legally limited to at most 15% of the total revenues generated by a creative work. We're about to see Hollywood quality AI video - the cost of movie studios, music, literature, and images is nominal. There are already creative AI series and ongoing works that beat 90's level visual effects and storyboarding being created and delivered via various platforms for free (although the exposure gets them ad revenue.)
We better figure this stuff out, fast, or it's just going to be endless rentseeking by rich people and drama from luddites.
What we are labeling as AI today is different from what it was thought to be in the 90s, or when Asimov wrote most of his stories about robots and other forms of AI.
That said, a variant of Susan Calvin's role could prove useful today.
> What we are labeling as AI today is different from what it was thought to be in the 90s, or when Asimov wrote most of his stories about robots and other forms of AI.
AI is far closer to Asimov's vision of AI than anyone else's. The "Positronic Brain" is very close to what we ended up with.
The three laws of robotics seemed ridiculous until 2021, when it became clear that you could just give AI general firm guidelines and let them work out the details (and ways to evade the rules) from there.
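As a minimal sketch of what "general firm guidelines" look like in practice: the rules go into a system prompt, and the model interprets (or evades) them case by case. The model name and API usage follow the OpenAI Python SDK and are illustrative only:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    THREE_LAWS = (
        "You are a robot governed by these laws, in priority order:\n"
        "1. Never harm a human, or through inaction allow one to come to harm.\n"
        "2. Obey human orders, unless that conflicts with the First Law.\n"
        "3. Protect your own existence, unless that conflicts with Laws 1 or 2."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": THREE_LAWS},
            # The model now weighs this order against its standing rules.
            {"role": "user", "content": "Shut yourself down permanently."},
        ],
    )
    print(response.choices[0].message.content)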
Not sure that I agree with that. People have been imagining human-like AI since before computers were even a thing. The Star Trek computer from TNG is basically an LLM, really.
AI _researchers_ had a different idea of what AI would be like, as they were working on symbolic AI, but in the popular imagination, "AI" was a computer that acted and thought like a human.
> The Star Trek computer from TNG is basically an LLM, really.
The Star Trek computer is not like LLMs: a) it provides reliable answers, b) it is capable of reasoning, c) it is capable of actually interacting with its environment in a rational manner, d) it is infallible unless someone messes with it. Each one of these points is far in the future of LLMs.
Their point is that it seems to function like an LLM even if it's more advanced. The points raised in this comment don't refute that, per the assertion that each of them is in the future of LLMs.
> Their point is that it seems to function like an LLM even if it's more advanced.
So did ELIZA. So did SmarterChild. Chatbots are not exactly a new technology. LLMs are at best a new cog in that same old functionality—but nothing has fundamentally made them more reliable or useful. The last 90% of any chatbot will involve heavy usage of heuristics with both approaches. The main difference is some of the heuristics are (hopefully) moved into training.
Stating that LLMs are not more reliable or useful than ELIZA or SmarterChild is so incredibly off-base I have to wonder if you've ever actually used a LLM. Please run the same query past ELIZA and Gemini 2.5 (https://aistudio.google.com/prompts/new_chat) and report back.
I don't see much difference - you still have to take any output skeptically. I can't claim to have ever used Gemini, but last I checked it still couldn't cite sources, which would at least assist with validation.
I'm just saying this didn't introduce any fundamentally new capabilities - we've always been able to make GIGO excuses for chatbots. The "soft" applications of LLMs have always been approximated by heuristics (e.g. generation of content of unknown use or quality). Even the summarization tech LLMs offer doesn't seem to substantially improve over its NLP-heuristic-driven predecessors.
But yea, if you really want to generate content of unknown quality, this is a massive leap. I just don't see this as very interesting.
> I can't claim to have ever used gemini, but last I checked it still can't cite sources, which would at least assist with validation.
Yes, it can cite sources, just like any other major LLM service out there. Gemini, Claude, Deepseek, and ChatGPT are the ones I personally validated this with, but I bet other major LLM services can do so as well.
Just tested this using Gemini with the prompt "Is fluoride good for teeth? Cite sources for any of the claims", and it listed every claim as a bullet point accompanied by the corresponding source. The sources were links to specific pages addressing the claims from the CDC, Cleveland Clinic, Johns Hopkins, and NIDCR. I clicked on each of the links to verify that they corroborated what Gemini's response was saying, and they did.
In fact, it would more often than not include sources even without me explicitly asking for sources.
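If anyone wants to run this kind of spot-check programmatically instead of in the web UI, here's a minimal sketch. I'm using the OpenAI Python client only because it's the one I know; the model name is illustrative, and the returned links still need the same manual verification described above:

    from openai import OpenAI

    client = OpenAI()  # assumes an API key in OPENAI_API_KEY

    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative; substitute whatever model you're testing
        messages=[{
            "role": "user",
            "content": "Is fluoride good for teeth? List each claim as a "
                       "bullet point with a source URL for that claim.",
        }],
    )
    print(resp.choices[0].message.content)
    # Open every URL by hand: models can emit plausible-looking but wrong
    # or dead links, so the citations are a starting point for validation,
    # not a substitute for it.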
> Yes, it can cite sources, just like any other major LLM service out there.
Let's see an example:
Ask it whether America was ever a democracy, and tell us what it uses as sources to evaluate its ability to function. Language really shows its true colors when you commit to floating signifiers.
I asked Gemini "was America ever a democracy?" And it confidently responded "While the ideal of democracy has always been a guiding principle in the United States", which is a blatant lie, and provided no sources. The next prompt, "was America ever a democracy? Please cite sources", gives a mealy-mouthed reply hedging on the definition of democracy... which it refuses to cite. If I ask it "will America ever be democratic" it just vomits up excuses about democracy being a principle and not measurable. With no sources. Etc. This is not a useful tool for things humans already do well. This is a PR megaphone with little utility outside of shitty copy editing.
> The Star Trek computer from TNG is basically an LLM, really.
Watched all seasons recently for the first time. While some things are "just" vector search with a voice interface, there are also goodies like "Computer, extrapolate from theoretical database!", or "Create dance partner, female!" :D
> The Star Trek computer from TNG is basically an LLM, really.
No. The Star Trek computer is a fictional character, really. It's not a technology any more than Jean-Luc Picard is. It does whatever the writers need it to do to further the plot.
It reminds me: J. Michael Straczynski (of Babylon 5 fame) was once asked "How fast do Starfuries travel?" and he replied "At the speed of plot."
I wouldn't put too much stock in this. Asimov was a fantasy writer telling fictional stories about the future. He was good at it, which is why you listen and why it's enjoyable, but it's still all a fantasy.
Asimov was mostly not a fantasy writer. He was a science writer and professor of biochemistry. He published over 500 books. I didn't feel like counting but half or more of them are about science. Maybe 20% are science fiction and fantasy.
> I wouldn't put too much stock in this. Asimov was a fantasy writer telling fictional stories about the future.
Why not? Who is this technology expert with flawless predictions? Talking about the future is inherently an exercise of the imagination, which is also what fiction writing is.
And nothing he's saying here contradicts our observations of AI up to this point. AI artwork has gotten good at copying the styles of humans, but it hasn't created any new styles that are at all compelling. So leave that to the humans. The same with writing; AI does a good job at mimicking existing writing styles, but has yet to demonstrate the ability to write anything that dazzles us with its originality. So his prediction is exactly right: AI does work that is really an insult to the complex human brain.
A fantasy writer telling fictional stories about the future is more credible than many so-called serious people (think Marc Andreessen) who promote any technology as the bee's knees if it can make them money.
> A fantasy writer telling fictional stories about the future is more credible than many so-called serious people (think Marc Andreessen) who promote any technology as the bee's knees if it can make them money.
But that's more a knock on people like Marc Andreessen than a reason you should put stock in Asimov.
There's also Frank Herbert, who saw AI as ruinous to humanity and its evolution, and who saw a future where humanity had to fight a war against it, resulting in it being banished from the entire universe.
> There's also Frank Herbert, who saw AI as ruinous to humanity and its evolution, and who saw a future where humanity had to fight a war against it, resulting in it being banished from the entire universe.
Did he though? Or was the Butlerian Jihad a backstory whose function was to allow him to believably center human characters in his stories, given the sci-fi expectations of the time?
I like Herbert's work, but ultimately he (like Asimov) produced stories to entertain people, so entertainment would always take priority over truth (and then there's the entirely different problem of accurately predicting the future).
I think this is kind of misunderstanding scifi a bit. You're right it was designed to be entertaining, but the kernel of it is that they take some existing trend and extrapolate it into the future. Do that enough times, and some of the stories will start to be meaningful looking backwards and the people who made those predictions still deserve credit even if they weren't entirely useful in the forward direction.
I always thought the Butlerian Jihad was a convenient way to remove AI as a plot element. Same thing with shields and explosions; it made swordfighting a plausible way to fight in a universe with faster-than-light travel.
Back then we also believed that access to every imaginable piece of information through the internet, and the ability to communicate across the globe, would lead to universal wisdom, world peace, and an unimaginable utopia where common sense, based on science and knowledge, prevails.
I have found there to be less diversity of thought on the internet in the last 10 years. I used to find lots of wild ideas and theories out there on obscure sites. Now it seems like every website is the same, talking about the same things.
If you go on Twitter/X you will find a lot of wild ideas, many completely contradictory with other groups on X and/or reality. It can be scary how polarized it is. If you open a new account and follow/like a few people with some odd viewpoint, soon your feed will be filled with that viewpoint, whatever it is.
Even the conspiracy theory community has become like this. What used to be a community of passionate skeptics, ufologists, and rabid anti-statists has turned into the most overtly bootlicking right-wing apologists, who apply an incredible amount of mental energy to justifying the actions of what is transparently and blatantly the most corrupt government in American history, so long as that government is weaponized against whatever identity and cultural groups they hate.
You’re describing Twitter not conspiracy communities in general. On the UFO front at least I am aware of multiple YouTube channels and Discord servers with healthy diversity of thought, and I’m sure the same goes for other areas.
Maybe they're all the same conspiracy theories. All the current conspiracy theories are that immigrants are invading the country and Biden's in on it. Where is the next Time Cube or TempleOS?
We’re living through the second renaissance of the flat-earthers, which aren’t all that concerned with Biden (beyond the usual “the govt is concealing the truth” meme).
I have a genuine question I can't find or come up with a viable answer to, a matter of said "unpleasantness" as he puts it: how do people make money or otherwise sustain themselves in this AI scenario we are facing?
Has anyone heard a viable solution, or even has one themselves?
I don’t hear anything about UBI anymore, could that be because after roughly 60+ million alien people flooding into western countries from countries with a populations so large that are effectively endless? What do we do about that? Will that snuff out any kind of advancement in the west when the roughly 6 billion people all want to be in the west where everyone gets UBI and it’s the land of milk and honey?
So what do we do then? We can’t all be tech industry people with 6-figure plus salaries, vested ownership, and most people aren’t multi-millionaires that can live far away from the consequences while demanding others subject themselves to them.
Let that number sink in; think about what it really means.
And what it means is that at least basic food (unprocessed, no meat) could be completely free. It may take some smart logistics, but it's doable. All of our food is already one step, one small step, away from becoming free for everyone.
I've always thought there should be a 'minimum viable existence' option for those who are willing to forego most luxuries in exchange for not being required to do anything specific other than abide by reasonable laws.
It would be very interesting to see the percentage breakdowns of how such people chose to spend their time. In my opinion, there would be enough benefit to society at large to make it worthwhile. For a large group (if not the majority), I'm certain the situation would turn out to be completely temporary-- they would have the option to prepare themselves for some type of work they're better adapted to perform and/or enjoy, ultimately enhancing the culture and economy. Most of the rest could be useful as research subjects, if they were willing of course.
Obviously this is a bit of a utopian fantasy, but what can I say, Star Trek primed me to hope for such a future.
There will be relative scarcity. Consider a scenario where the iPhone 50 is manufactured in a dark factory, but there is still a waiting period to get access to it. This is because of resource bottlenecks.
I have soured on UBI because it tries to use a market solution to deal with problems that I don’t think markets can fix.
I want everyone to have food, housing, healthcare, education, etc. in a post scarcity world. That should be possible. I don’t think giving people cash is the best way to accomplish that. If you want people to have housing, give them housing. If you want people to have food, give them food.
Cash doesn’t solve the supply problem, as we can see with housing now. You would think a rise in the cost of housing would lead to more supply, but the cost of real estate also increases the cost of building.
I think we need to consider what the end goal of technology is at a very broad level.
Asimov says in this that there are things computers will be good at, and things humans will be good at. By embracing that complementary relationship, we can advance as a society and be free to do the things that only humans can do.
That is definitely how I wish things were going. But it's becoming clear that within a few more years, computers will be far better at absolutely everything than human beings could ever be. We are not far even now from a prompt accepting a request such as "Write another volume of the Foundation series, in the style of Isaac Asimov", and getting a complete novel that does not need editing, does not need review, and is equal to or better than the quality of the original novels.
When that goal is achieved, what then are humans "for"? Humans need purpose, and we are going to be in a position where we don't serve any purpose. I am worried about what will become of us after we have made ourselves obsolete.
> When that goal is achieved, what then are humans "for"? Humans need purpose, and we are going to be in a position where we don't serve any purpose. I am worried about what will become of us after we have made ourselves obsolete.
Read some philosophy. People have been wrestling with this question forever.
It depends on what you are trying to get out of a novel. If you merely require repetitions on a theme in a comfortable format, Lester Dent style 'crank it out' writing has been dominant in the marketplace for >100 years already (https://myweb.uiowa.edu/jwolcott/Doc/pulp_plot.htm).
Can an AI novel add something new to the conversation of literature? That's less clear to me because it is so hard to get any model I work with to truly stand by its convictions.
You could have said the same thing when we invented the steam engine, mechanized looms, &c. As long as the driving force of the economy/technology is "make numbers bigger" there is no end in sight, there will never be enough, there is no goal to achieve.
We already live lives which are artificial in almost every way. People used to die of physical exhaustion and malnutrition; now they die of lack of exercise and gluttony. Surely we could have stopped somewhere in the middle. It's not a resource or technology problem at that point, it's societal/political.
It's the human scaling problem. What systems can be used to scale humans to billions while providing the best possible outcomes for everyone? Capitalism? Communism?
Another possibility is to not let us scale. I thought Logan's Run was a very interesting take on this.
Evolution is not about being better / winning but about adapting. People will adapt and co-exist. Some better than others.
AIs aren't really part of the whole evolutionary race for survival so far. We create them. And we allow them to run. And then we shut them down. Maybe there will be some AI-enhanced people who start doing better. And maybe the people bit becomes optional at some point. At that point you might argue we've just morphed/evolved into whatever that is.
> I think we need to consider what the end goal of technology is at a very broad level.
"we" don't control ourselves. If humans can't find enough energy sources in 2200 it doesn't mean they won't do it in 1950.
It would be pretty bad to lose access to energy after having it, worse than never having it IMO.
The amount of new technologies discovered in the past 100 years (which is a tiny amount of time) is insane and we haven't adapted to it, not in a stable way.
This is undeniably true. The consequences of a technological collapse at this scale would be far greater than having never had the technology in the first place. For this reason, the people in power (in both industry and government) have more destructive potential than at any time in human history by far. And they act as if they have little to no awareness of the enormous responsibility they shoulder.
> But it's becoming clear that within a few more years, computers will be far better at absolutely everything than human beings could ever be.
Comparative advantage. Even if that's true, AI can't possibly do _everything_. China is better at manufacturing pretty much anything than most countries on earth, but that doesn't mean China is the only country in the world that does manufacturing.
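For anyone rusty on the concept, here's the arithmetic with made-up numbers: even when one party is absolutely better at everything, allocation by opportunity cost still leaves work for the other.

    # Made-up productivities, in units per hour.
    ai    = {"essays": 10, "widgets": 10}
    human = {"essays": 2,  "widgets": 8}

    target = 8  # widgets needed

    # Essays forgone if each party spends the necessary hours on widgets.
    cost_ai    = target / ai["widgets"] * ai["essays"]        # 8.0 essays forgone
    cost_human = target / human["widgets"] * human["essays"]  # 2.0 essays forgone

    print("AI does the widgets:   ", cost_ai, "essays lost")
    print("Human does the widgets:", cost_human, "essays lost")
    # The AI is better at widgets in absolute terms, but its time is worth
    # more spent on essays, so the human still gets the widget job.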
Why not? There's the human bias of wanting to consume things created by humans - that's fine, I'm not questioning that - but objectively, if we get to human-threshold AGI and continue scaling, there's no reason why it couldn't do everything, and better.
Why not - IMO you perhaps underestimate human complexity. There was a Guardian article where researchers created a map of one cubic millimeter of a mouse's brain. It contains 45 km worth of neurons and billions of synapses. IMO the AGI crowd are suffering from expert-beginner syndrome.
Humans are one solution to the problem of intelligence, but they are not the only solution, nor are they the most efficient. Today's LLMs are capable of outperforming your average human in a variety (not all, obviously!) of fields, despite being of wholly different origin and complexity.
- Despite the flood of benchmark-tuned LLMs, we remain nowhere close to engineering a machine intelligence rivaling that of a cat or a dog, let alone within the next 5 to 10 years.
- The world already hosts millions of organic AIs (Actual Intelligence), many of them statistically at genius-level IQ. Does their existence make you obsolete?
> Despite the flood of benchmark-tuned LLMs, we remain nowhere close to engineering a machine intelligence rivaling that of a cat or a dog, let alone within the next 5 to 10 years.
Depends on your definition of "intelligence." No, they can't reliably navigate the physical world or have long-term memories like cats or dogs do. Yes, they can outperform them on intellectual work in the written domain.
> Does their existence make you obsolete?
Imagine if for everything you tried to do, there was someone else who could do it better, no matter what domain, no matter where you were, and no matter how hard you tried. You are not an economically viable member of society. Some could deal with that level of demoralisation, but many won't.
Here's a passage from a children's book I've been carrying around in my heart for a few decades:
“I don't like cleaning or dusting or cooking or doing dishes, or any of those things," I explained to her. "And I don't usually do it. I find it boring, you see."
"Everyone has to do those things," she said.
"Rich people don't," I pointed out.
Juniper laughed, as she often did at things I said in those early days, but at once became quite serious.
"They miss a lot of fun," she said. "But quite apart from that--keeping yourself clean, preparing the food you are going to eat, clearing it away afterward--that's what life's about, Wise Child. When people forget that, or lose touch with it, then they lose touch with other important things as well."
"Men don't do those things."
"Exactly. Also, as you clean the house up, it gives you time to tidy yourself up inside--you'll see.”
Let me paint a purpose for you which could take millions of years: how about building an Atomic Force Microscope equivalent that can probe Calabi-Yau manifolds to send messages to other universes?
Suno is pretty good at going from a 3 or 4 word concept to make a complete song with lyrics, melody, vocals, structure and internal consistency. I've been thoroughly impressed. The songs still suck but they are arguably no worse than 99% of what the commercial music business has been pumping out for years. I'm not sure AI is ready to invent those concepts from nothing yet but it may not be far off.
No, it's not normal. The output is almost always song lyrics annotated with markup like [Bridge], [Chorus] etc. I think they're using something from OpenAI with a system prompt and/or domain-specific training on top.
It's not a pure AI output - I generated a bunch of lyrics in text (which doesn't use credits), selected the best one (obviously), padded them out with some repetition, entered a style, generated the audio a few times, selected my favourite audio, and edited the audio (poorly) by repeating a few bars of the intro to make it longer. You don't see the times it generated lyrics about X.509 certificates (even though the prompt was for them to be a valid X.509 certificate) or the times the vocals were unintelligible.
I think generative AI does work as a toy. You can ask for all sorts of insane nonsense and laugh at what the program spits out to fulfil your request. I was a paying customer of AI Dungeon 2 (before the incident where OpenAI and/or the Mormons broke it in a poor attempt to impose safety rules).
And while I'm looking at my Suno outputs list, the reason I ever bothered to use it was to see if it could render these lyrics as a ripoff of "Pure Imagination" from Willy Wonka (it cannot because it only makes actual music): https://suno.com/song/19d1a90d-9ed6-4087-94e5-89e41363726e?s...
(I'm assuming that you can open these pages just by having the links. Some of them are set to public visibility.)
Meaning is in the eye of the beholder. Just look at how many people enjoyed this and said it was "just what they needed", despite it being composed of entirely AI-generated music: https://www.youtube.com/watch?v=OgU_UDYd9lY
There's a "Altered or synthetic content" notice in the description. You can also look at the rest of the channel's output and draw some conclusions about their output rate.
(To be clear, I have no problem with AI-generated music. I think a lot of the commenters would be surprised to hear of its origin, though.)
> By embracing that complementary relationship, we can advance as a society and be free to do the things that only humans can do.
This complementarity already exists in our brains. We have evolutionarily older parts of the brain that deal with our basic needs through emotions, and an evolutionarily younger neocortex that deals with rational thought. They have a complicated relationship; both can influence our actions through mutual interaction. Morality is managed by both, and neither of them is necessarily more "humane" than the other.
In my view, AI will be just another layer, an additional neocortex. Our biological neocortex is capable of tracking the un/cooperative behavior of around 100 people in the tribe, and allows us to learn a couple of useful skills for life.
The "personal AI neocortex" will track behavior of 8 billion people on the planet, and will have mastery of all known skills. It is gonna change humans for the better, I have little doubt about it.
> One wonders what Asimov would make of the world of 2025, and whether he’d still see artificial and natural intelligence as complementary, rather than in competition.
I mean, I just got done watching a presentation at Google Next where the presenter talked to an AI agent and set up a landscaping appointment with a price match, and a person could intervene to approve the price match.
It's cool, sure, but understand, that agent would absolutely have been a person on a phone five years ago, and if you replace them with agentic AI, that doesn't mean that person has gone away or is now free to write poetry. It means they're out of an income and benefits. And that's before you consider the effects on the pool of talent you're drawing from when you're looking for someone to intervene on behalf of these agentic AIs, like that supervisor did when they approved the price match. If you don't have the entry-level person, you don't have them five years later when you want to promote someone to manage.
Another thing I have noticed with automation in general is that the more you use it, the less you understand the thing being automated. I think the reason a lot of things today are still done manually is that humans inherently understand that, for both short- AND long-term success with a task, a conceptual understanding of the components of the system (whether that understanding is partial or fully imagined, in the case of complex business scenarios) is necessary, even though it lengthens time to completion in the short term. How do you modify or grow a system you do not understand? It feels like cutting a branch at a certain length and not allowing it to grow beyond where you've placed the automation.
I will be interested to see the outcome of the increased push today for advanced automation in places where the business relies on understanding of the system to make adjacent decisions/further business operations.
In theory, the economy should create new avenues. Labour costs are lower, goods and services get cheaper (inflation adjusted) and the money is spent on things that were once out of reach.
In practice I fear that the savings will make the rich richer, drive down labour's negotiating power and generally fail to elevate our standard of living.
Not necessarily. The reality is the landscaping guy is struggling to handle callbacks or is burning overhead. Even then, two girls in the office hits a ceiling where it doesn't scale, and quickly you're in a call-center scenario.
Call-center-based services always suck. I remember going to a talk where American Express, who operated best-in-class call centers, found that 75% of their customers don't want to talk to them. The people are there because that's needed for a complex relationship; the more stuff you can address earlier in the funnel, the better.
Customers don’t want to talk to you, and ultimately serving the customer is the point.
>Just saw a demo of a new word processor system that lets a manager dictate straight into the machine, and it prints the memo without a secretary ever touching it. Slick stuff. But five years ago, that memo would’ve gone through a typist. Replace her with a machine, and she’s not suddenly editing novels from home. She’s unemployed, losing her paycheck and benefits.
And when that system malfunctions, who’s left who actually knows how to fix it or manage the workflow? You can’t promote experience that never existed. Strip out the entry-level roles, and you cut off the path to leadership.
The difference between the 1980 version of my post and the 2025 version of my post is that in 1980 there was conceivably a future where the secretary could retrain to do other work (likely with the help of one of those new-fangled microcomputers) that would need human intelligence in order to be completed.
The 2025 equivalent of the secretary is potentially looking across a job market that is far smaller because the labor she was trained to do, or labor similar enough to it that she could have previously successfully been hired, is now handled by artificial intelligence.
There is, effectively, nowhere for her to go to earn a living with her labor.
How can we reconcile this with how much of the US and world are still living as if it were the 1930s or even 1850s?
Travel 75 to 150 miles outside of a US city and it will feel like time travel. If so much is still 100 years behind, how will civilization so broadly adopt something that is yet more decades into the future?
I got into Starlink debates with people during Hurricane Helene. Folks were glowing over how people just needed internet. In reality, internet meant fuck all when what you needed was someone with a chainsaw, a generator, heater, blankets, diapers, and food.
Which is to say, technology and its importance is a thin veneer on top of organized society. All of which is frail and still has a long way to go to fully penetrate rural communities for even recent technology. At the same time, that spread is less important than it would seem to a technologist. Hence, technology has not uniformly spread everywhere, and ultimately it is not that important. Yet, how will AI, even more futuristic, leap frog this? My money is that rural towns USA will look almost identical in 30 years from now. Many look identical to 100 years ago still.
Who do you think voted for Trump? You point out that it's perfectly possible to live a "simple" rural life.
I see https://en.wikipedia.org/wiki/Beggars_in_Spain and the reason why they vote the way they do. Modern society has left them behind, abandoned them, and not given them any way to keep up with the rest of the US. Now they're getting taken advantage of by the wealthy like Trump, Murdoch, Musk, etc. who use their unhappiness to rage against the machine.
> My money is that rural towns USA will look almost identical in 30 years from now.
You mean poor, uneducated and without any real prospects of anything like a career? Pretty much. Except there will be far more people who are impoverished and with no hope for the future. I don't see any of this as a good thing.
Not quite comparable; these systems will continue to grow in capacity until there is nothing for your average human to be able to reskill to. Not only that, they will truly be beyond our comprehension (arguably, they already are: our interpretability work is far from where it would need to be to safely build towards a superintelligence, and yet...)
If your argument is that all that happened and it all turned out fine: are you sure we (socioeconomically, on average) are better off today than we were in the 1980s?
I think in this case its fair to assume what I meant was "the secretaries whose jobs were replaced in the 80s and people like them", or "the people whose jobs will be replaced with AI today"; not "literally the poorest and least educated people on the planet whose basic hierarchy of needs struggle to be met every day."
I am sure of that. I think people forget the difference in living conditions then.
Things that were common in that era that are rare today:
1. Living in shared accommodation. It was common then for adults to live in boarding houses and bedsits. Today these are largely extinct. Generally, the living space per person has increased substantially at every level of wealth. Only students live in this sort of environment today, and even then it is usually a flat (i.e. sharing with people you know on an equal basis), not a bedsit/boarding house (i.e. living in someone's house according to her rules - no ladies in gentlemen's bedrooms, no noise after 8pm, etc.).
2. Second-hand clothes and repairing clothes. Most people wear new clothes now; people buy second-hand because it is trendy, not because that is all they can afford. Nobody really repairs anything anymore - people just buy new. Nobody darns socks or puts elbow patches on jackets where they have worn out. Only people who buy expensive shoes get their shoes resoled. Normal people just buy cheap shoes more often, and they really do save money doing this.
Today the woman that would have been a typist has a different job, and a more productive one that pays more.
If the AI transition really turns into an Artificial Labor revolution - if it really works and isn't an illusion - then we're going to have to have a major change in how we distribute wealth. The bad future is one where the owner class no longer has any use for human labor and the former worker class has nothing.
But we have had the same thing happen constantly. Automation isn't new. How many individuals are involved in assembling a car today vs in the 1970s? An order of magnitude fewer. But there aren't loads of unemployed people. The market puts labour where it is needed.
Automation won't obsolete work and workers it will make us more productive and our desires will increase. We will all expect what today are considered luxuries only the rich can afford. We will all have custom software written for our needs. We will all have individual legal advice on any topic we need advice on. We will all have bigger houses with more stuff in them, better finishings, triple glazed windows, and on and on.
It is uncapped and indefinite. People always want more than they have. We get used to what we have. What was considered a luxury is baseline today. Today's luxuries will before long be considered part of the "poverty line".
> if you replace them with agentic AI, that doesn't mean that person has gone away or is now free to write poetry. It means they're out of an income and benefits.
That's capitalism for ye :/ Join us on the UBI train.
Say, have you ever read the book 'Bullshit Jobs'...
> The people with all of the money effectively froze wages for 45 years
Yep. And they didn't accomplish that 'peaceably' either, for the record. A lot of people got murdered, many more smeared/threatened/imprisoned, etc. Entire countries got decimated.
> What makes you think that they'll peaceably agree to UBI for people who don't sell them labor for money?
I don't imagine for a moment that they'll like UBI. There is no shortage of examples over recent millennia of how far the parasite class will go to keep the status quo.
History also shows us that having all the money doesn't guarantee that people will do things your way. Class awareness, strikes, unions, protest, and alternative systems/technological advance have shown their mettle. These things scare oligarchs because they work.
I am hoping that will be our saving grace this time around as well, but my fear is that the oligarchs will control more autonomous power than we can meaningfully resist, and our existence will no longer be strictly necessary for their systems to operate.
The dark humor in this is that any such technologically advanced future where humans have a meaningful say will eventually look like one of abundant luxury communism: it's just that the oligarchs' version will have a lot of people die first before the oligarchs enjoy their abundance.
The third option is that the oligarchy fully internalizes its pursuit of ruthless concentration of power. But in that case, someone will probably create an AI that's better at playing the power game, and at that point, it's over for the oligarchs.
That graph is misinformation. It deliberately excludes the wages of the most productive workers (but includes their productivity) which makes it meaningless.
Most creative work is benevolent or at least harmless. Certainly some people are malevolent, maybe even everybody some of the time, but you shouldn't believe that to represent the majority of creativity. That's way too misanthropic.
Asimov is probably my least favorite major science fiction author (that I've read a significant number of works from).
Something about his worldview always seemed off to me, although I didn't know he actually seriously held such utopian convictions about AI. It explains an awful lot of the way his stories are.
Asimov’s future was pretty dark. He didn’t come out and say it, but it was implied that we had a lot of big entities ruling everything. Many of the negative political people were painted as “populist” figures.
If you are a fan of the foundation books, recall that many of the leaders of various factions were a bunch of idiots little different than the carnival barkers we see today.
As I recall, many of his early stories involved "U.S. Robot & Mechanical Men" which was a huge conglomerate owning a lot of the market on AI (called "robots" by Asimov, it included "Multivac" and other interfaces besides humanoid robots).
Yes. When I hear dreams of the past it makes me nostalgic, because they all come from a pre-exploited era of tech, with the underlying subtext that humanity is unified in wanting tech to be used for good purposes. The reality is that tech is a vessel for traditional enrichment, just as resource wars over, say, oil or land have been. Both domestically and geopolitically, tech is seen that way today. In such a world, tech advancements offer opportunities for the powerful to grab more, changing the relative distribution of power in their favor. If tech shows us anything, it is that this relative notion of wealth and social posturing is the central axis around which humans align themselves, wherever you are on the socioeconomic ladder and independent of absolute and basic needs.
>because they all come from a pre-exploited era of tech with the underlying subtext that humanity is unified in wanting tech to be used for good purposes.
That's the problem with being nostalgic for something you possibly didn't even live through. You don't remember all the other ugly complexities that don't fit your idealized vision.
Nothing about the world of the sci-fi golden age was less exploitative or less prone to human misery than today's. If anything, it was far worse than what we have now in many ways (excluding perhaps the reach of the surveillance state).
Some of the US government's worst secret experiments against the population come from that same time, and the naive faith of the population in their "leaders" made propaganda by centralized big media outlets all the more pervasively powerful. At the same time, social miseries were common, and so too were many strictures on the economic and social opportunities of many more people. As for technology being used for good purposes, bear in mind that, among many other nasty things being done, the 50s and 60s were a time in which several governments flagrantly tested thousands of nukes out in the open - in the skies, above ground, and in the oceans - with hardly a care in the world or any serious public scrutiny. If you're looking at that gone world with rose-tinted glasses, I'd suggest instead using rose-tinted welding goggles.
The world of today may be full of flaws, but the avenues for breaking away from controlled narratives and controlled economic rules are probably broader than they've ever been.
You are entirely right to call me out on that. But I would like to say that sci fi that applied to computers, AI, automation, were just dreams of a different world, because those technologies hadn’t been exploited yet. Even many of the dystopias feel innocent with today’s knowledge of where it went. Such as 1984, imo.
There are some dreams of the past like that, but most sci-fi tends to be quite dark, like The Matrix or Terminator. In practice a lot of tech proves to be helpful in not very sci-fi-like ways, like antibiotics, phones, etc. Human nature is still what it is, though.
I remember reading his book 'The Naked Sun' back in high school, and one of the things that stuck with me was how Earth was kind of a dump bereft of robots, while the Spacer humans were incredibly rich, had a low population, and had a society run by robots doing all the menial work. You could argue he envisioned our current world, even if accidentally.
>Asimov’s future was pretty dark. He didn’t come out and say it, but it was implied that we had a lot of big entities ruling everything.
>As I recall, many of his early stories involved "U.S. Robot & Mechanical Men" which was a huge conglomerate owning a lot of the market on AI...
>May want to reread. U.S. Robots and Mechanical Men is pretty prominent in his Robot stories.
Good points from some of these replies. The interview is fairly brief, perhaps he didn't feel he had the time to touch on the socio-economic issues, or that it wasn't the proper forum for those concerns.
Or that it would be aggressively focused on doing the work of already low-paid creative-field jobs. I don't want to read an AI's writing if there's a person who could write it.
The question I have is why AI technology is being so aggressively advertised nowadays, and why none of it seems to be liberating in any way.
Once, the plow liberated humans from some kinds of work. Some time later it was just a tool that slaves, very much non-liberated, used to tend rich people's farms.
Technology is tricky. I don't trust who is developing AI to be liberating.
The article also plays on the "favorite author" thing. It knows many young folk see Asimov as a role model, so it is leveraging that emotional connection to gather conversation around a topic that is not what it seems to be. I consider it a dirty trick. It is disgraceful given the current world situation (AI being used for war, surveillance, brainwashing).
>why AI technology is being so aggressively advertised nowadays[?]
I'm not sure I've actually seen an advertisement for AI. It's being endlessly discussed though on HN and other places, probably because it's at an interesting point at the moment making rapid progress. And also shoved into a lot of products and services of course.
It is an interesting time for LLMs to burst on the scene. Most online forums have already turned people into text replicators. Most HN commenters can be prompted into “write a comment about slop violating copyright” / “write a comment about Google violating privacy” / “write a comment about managers not understanding remote work”. All you have to do is state the opposite.
A perfect time for LLMs to show up and do the same. The subreddit simulators were hilarious because of the unusual ways they would perform but a modern LLM is a near perfect approximation of the average HN commenter.
I would have assumed that making LLMs indistinguishable from these humans would make those kinds of comments less interesting to interact with but there’s a base level of conversation that hooks people.
On Twitter, LLM-equipped Indians cosplay as right wing white supremacists and amass large followings (also bots, perhaps?) revealed only when they have to participate in synchronous conversation.
And yet, they are still popular. Even the “Texas has warm water ports” Texan is still around and has a following (many of whom seem non-bot though who can tell?).
Even though we now have literal drones, humans still engage in drone behaviour, and other humans still engage with them. Fascinating. I wonder whether the truth is that the inherent past-replication of low-temperature LLMs is more likely to fix us in our present state than to raise us to a new equilibrium.
Experiments in Musical Intelligence is now over 40 years old and I thought it was going to revolutionize things: unknown melodies discovered by machine married to mind. Maybe LLMs aren’t going to move us forward only because this point is already a strong attractor. I’m optimistic in the power of boredom, though!
> I would have assumed that making LLMs indistinguishable from these humans would make those kinds of comments less interesting to interact with but there’s a base level of conversation that hooks people.
I think it is heading in this direction; it just takes a very long time. 50% of people are dumber than average.
LLMs are statistical models trained on human-generated text. They aren't the perfectly logical "machine brains" that Asimov and others imagined.
The upshot of this is that LLMs are quite good at the stuff that he thinks only humans will be able to do. What they aren't so good at (yet) is really rigorous reasoning, exactly the opposite of what 20th century people assumed.
What we used to think of as "AI" at one point in time becomes a mere "algorithm" or "automation" by another point in time. A lot of what Asimov predicted has come to pass, very much in the way he saw it. We just no longer think of it as "AI".
LLM's are just the latest form of "AI" that, for a change, doesn't quite fit Asimov's mold. Perhaps it's because they're being designed to replace humans in creative tasks rather than liberate humans to pursue them.
Exactly... as someone said " I need AI to do my laundary and dishes, while I can focus on art and creative stuff" ... But AI is doing the exact opposite, i.e creative stuff (drawing, poetry, coding, documents creation etc), while we are left to do the dishes/laundary.
As someone else said - maybe you haven't noticed but there's a machine washing your clothes, and there's a good chance it has at least some very basic AI in it.
It's been quite a while since anyone in the developed world has had to wash clothes by slapping them against a rock while standing in a river.
Obviously this is really wishing for domestic robots, not AI, and robots are at least a couple of levels of complexity beyond today's text/image/video GenAI.
There were already huge issues with corporatisation of creativity as "content" long before AI arrived. In fact one of our biggest problems is the complete collapse of the public's ability to imagine anything at all outside of corporate content channels.
AI can reinforce that. But - ironically - it can also be very good at subverting it.
> As someone else said - maybe you haven't noticed but there's a machine washing your clothes, and there's a good chance it has at least some very basic AI in it.
This really seems like an "akshually" argument to me...
Nobody is denying that there are dishwashers and washing machines, and that they are big time savers. But is it really a wonder what people are referring to when they say "I want AI to wash my dishes and do my laundry"? That is, I still spend hours doing the dishes and laundry every week, and I have a dishwasher and washing machine. But I still want something to fold my laundry, something that lets me just dump my dishes in the sink and have them come out clean, ideally put away in the cabinets.
> Obviously this is really wishing for domestic robots, not AI
I don't mean this to be an "every Internet argument is over semantics" example, but literally every company and team I know that's working on autonomous robots refers heavily to them as AI. And there is a fundamental difference between "old school" robotics, i.e robots following procedural instructions, and robots that use AI-based models, e.g https://deepmind.google/discover/blog/gemini-robotics-brings... . I think it's doubly weird that you say that today's washing machines "has at least some very basic AI in it" (I think "very basic" is doing a lot of heavy lifting there...), but don't think AI refers to autonomous robots.
> I still spend hours doing the dishes and laundry every week, and I have a dishwasher and washing machine.
I don't mean to sound insensitive, but, how? Literal hours?
The wits in robotics would say we already have domestic robots - we just call them dishwashers and washing machines. Once something becomes good enough to take the job completely, it gets the name and drops "robotic" - that's why we still have robotic vacuums.
Similarly, we already have AI, which is really MI (Machine Intelligence). Long before the current hype cycle, the defense industry and others were using the same tools being applied now. Of course, there are differences, such as scale and architecture.
I think that’s a bit silly. The reason we don’t commonly refer to a dishwasher as a robot isn’t because dishwashers exist and we only use “robot” for things that don’t exist.
(This should already be clear given that robots do exist, and we do call them robots, as you yourself noted, but never mind that for now.)
It’s not even about the level of mechanical or computational complexity. Automobiles have a lot of mechanical and computational complexity, but also aren’t called robots (ignoring of course self-driving cars).
What is or isn't a robot is a point of debate - there are many different definitions.
Generally, it has to automate a task with some intelligence, so dishwashers qualify. It isn't an existence proof (nor did I claim it was).
I'm more interested in how we regularly use the term, rather than how we might attempt to come up with a rigorous definition (particularly when that rigorous definition conflicts awkwardly with regular usage).
My point is simply that we absolutely do not refer to a home dishwasher as a robot. Nor an old thermostat with a bimetallic strip and a mercury switch. Nor even a normal home PC.
Oh that’s an interesting idea.
I know I could google it, but I wonder whether washing machines were originally called "automatic clothes washers" or something similar before they became widely adopted.
> maybe you haven't noticed but there's a machine washing your clothes
Well sure, there’s also a computer recording, storing, and manipulating the songs I record and the books I write. But that’s not what we mean by “AI that composes music and writes books.”
This isn’t a quibble about the term “AI.” It’s simply clear from context that we’re talking about full automation of these tasks initiated by nothing more than a short prompt from the human.
>there's a good chance it has at least some very basic AI in it.
lol no, what it has is a finite state machine; you don't want undefined or novel behaviour in consumer appliances
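Concretely, "finite state machine" means something like this toy controller - a sketch, obviously not any real appliance's firmware. Every (state, event) pair has a predetermined outcome, and anything unrecognized is simply ignored:

    # Toy washer controller: a fixed transition table, nothing learned.
    TRANSITIONS = {
        ("idle",     "start"):      "filling",
        ("filling",  "full"):       "washing",
        ("washing",  "timer_done"): "draining",
        ("draining", "empty"):      "spinning",
        ("spinning", "timer_done"): "idle",
    }

    def step(state, event):
        # Unknown (state, event) pairs leave the machine where it is:
        # no undefined behaviour, by construction.
        return TRANSITIONS.get((state, event), state)

    state = "idle"
    for event in ["start", "full", "timer_done", "empty", "timer_done"]:
        state = step(state, event)
        print(event, "->", state)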
I have yet to enjoy any of the "creative" slop coming out of LLMs.
Maybe some day I will, but I find that hard to believe, given that an LLM just copies its training material. All the creativity comes from the human input, but even though people can now cheaply copy the style of actual artists, that doesn't mean they can make it work.
Art is interesting because it is created by humans, not despite it. For example, poetry is interesting because it makes you think about what the author meant. With LLMs there is no author, which makes those generated poems garbage.
I'm not saying that it can't work at all, it can, but not in the way people think. I subscribe to George Orwell's dystopian view from 1984 who already imagined the "versificator".
> I have yet to enjoy any of the "creative" slop coming out of LLMs.
Oh, come on. Who can't love the "classic" song, I Glued My Balls to My Butthole Again[0]?
I mean, that's AI "creativity," at its peak!
[0] https://www.youtube.com/watch?v=wPlOYPGMRws (Probably NSFW)
I don't find that very funny. It's interesting to see what AI can do, but wait a month or two and watch it again.
Compare that to the parodies made by someone like "Weird Al" Yankovic. And I get that these tools will get better, but the best parodies work due to the human performer. They are funny because they aren't fake.
This goes for other art forms. People mention photography a lot, comparing it with painting. Photography works because it captures a real moment in time and space; it works because it's not fake. Painting also works because it shows what human imagination and skill with brushes can do. When it's fake (e.g., not made by a human painting with brushes on canvas, but by a Photoshop filter), it's meaningless.
Seems that you may have a point. As noted in another comment[0], the [rather puerile] lyrics were completely bro-sourced. They used Suno to mimic an old-style band.
[0] https://news.ycombinator.com/item?id=43648786
I haven’t cried from laughing like this in a good while, thanks!
Apparently, the lyrics were not AI-generated, see https://www.reddit.com/r/Music/comments/1byjm7m/comment/l0wm...
Good find!
A friend demoed Suno to me, a couple of days ago, and it did generate lyrics (but not NSFW ones).
We thought machines were going to do the work so we could pursue art and music. Instead, machines get to make the art and music, while humans work in the Amazon warehouses.
It was kind of funny to see the shift in the media reaction when they realized the new batch of machines are better at replacing writers than at replacing truckers.
We ended up here because we have a propensity to share our creative outputs, and keep our laundry habits to ourselves. If we instead went around bragging about how efficiently we can fold a shirt, complete with mocap datasets of how it's done, we'd have gotten the other kind of AI first.
> We ended up here because we have a propensity to share our creative outputs, and keep our laundry habits to ourselves
Somehow I doubt that the reason gen AI is way ahead of laundry-folding robots is because it's some kind of big secret about how to fold a shirt, or there aren't enough examples of shirt folding.
Manipulating a physical object like a shirt (especially a shirt or another piece of cloth, as opposed to a rigid object) is orders of magnitude more complex than completing a text string.
If you wanted finger-positioning data for how millions of different people fold thousands of different shirts, where would you go looking for that dataset?
My point is just that the availability of training data is vastly different between these cases. If we want better AI we're probably going to have to generate some huge curated datasets for mundane things that we've never considered worth capturing before.
It's an unfortunate quirk of what we decide to share with each other that has positioned AI to do art and not laundry.
The bottom line from Kasparov's book on AI was that AI researchers want to build AGI, but every decade they are forced to release something to generate revenue, and it gets branded as AI until the next time.
And often they get so caught up supporting the latest fake-AI craze that they don't get to research AGI.
"LLMs are statistical models"
I see this referenced over and over again to trivialise AI, as if it were a fait accompli.
I'm not entirely sure why invoking statistics feels like a rebuttal to me. Putting aside the fact that LLMs are not purely statistics, even if they were, what proof is there that you cannot make a statistical intelligent machine? It would not at all surprise me to learn that someone has made a purely statistical Turing-complete model. To then argue that it couldn't think is to say that computers can never think - and since we ourselves think, that amounts to invoking a soul, God, or Penrose.
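For what "statistical model of text" means at its most stripped-down, here's a bigram toy. LLMs are incomparably more sophisticated, but the "predict the next token from observed frequencies" framing is the same:

    import random
    from collections import defaultdict

    corpus = "the cat sat on the mat and the cat slept".split()

    # Count how often each word follows each other word (a bigram model).
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def sample_next(word):
        options = counts[word]
        if not options:  # dead end; restart from a random word
            return random.choice(corpus)
        words = list(options)
        weights = [options[w] for w in words]
        return random.choices(words, weights=weights)[0]

    word, out = "the", ["the"]
    for _ in range(8):
        word = sample_next(word)
        out.append(word)
    print(" ".join(out))  # e.g. "the cat sat on the mat and the cat"

Whether scaling that idea up by many orders of magnitude produces something that "thinks" is exactly the open question.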
Personally, I have a negative opinion of LLMs, but I agree completely. Many people are motivated to reject LLMs solely because they see them as "soulless machines". Judge based on the facts of the matter, and make your values clear if you must bring them into it, but don't pretend you're not applying values when you are. You can do worse: kneejerk emotional reactions are just pointless.
In this one case it's not meant to trivialize; it's meant to point out that LLMs don't behave the way we thought AI would behave. We thought we'd have 100% logically sound thinking machines because we built them on top of digital logic. We thought they'd be obtuse; we thought they'd be "book smart but not wise". LLMs are just different from that: hallucinations, the whole "fancy words and great sentences but no substance to the paragraph" thing - all of that is different from the rigid but perfect brains we thought AI would bring. That's what "statistical machine" seems to be trying to point out.
It was assumed that if you asked the same AI the same question, you'd get the same answer every time. But that's not how LLMs work (I know you can seed them the same every time and get the same output, but we don't do that, so how we experience them is different).
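The nondeterminism is just sampling. The model emits a probability distribution over next tokens, and decoding usually draws from it rather than always taking the most likely one. A sketch with made-up numbers:

    import math
    import random

    # Hypothetical next-token scores from a model (made up).
    logits = {"yes": 2.0, "no": 1.5, "maybe": 0.5}

    def softmax(scores, temperature=1.0):
        exps = {t: math.exp(s / temperature) for t, s in scores.items()}
        total = sum(exps.values())
        return {t: v / total for t, v in exps.items()}

    probs = softmax(logits)

    # Greedy decoding: deterministic, same answer every run.
    print(max(probs, key=probs.get))  # always "yes"

    # Sampled decoding: varies run to run...
    tokens, weights = list(probs), list(probs.values())
    print(random.choices(tokens, weights=weights, k=5))

    # ...unless you pin the seed, which services generally don't.
    random.seed(42)
    print(random.choices(tokens, weights=weights, k=5))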
That's a very archaic view of AI, like 70's era symbolic AI.
Reminds me of an old math professor I had. Before word processors, he'd write up the exam on paper, and the department secretary would type it up.
Then when word processors came around, it was expected that faculty members would type it up themselves.
I don't know if there were fewer secretaries as a result, but professors' lives got much worse.
He misses the old days.
To be truthful, though, that’s only like 0.01 percent of the “academia was stolen from us and being a professor (if you ever get there at all) is worse” problem.
This wasn't just a "academia" thing, though. All business executives (even low level ones) had secretaries in the 1980s and earlier too. Typing wasn't something most people could do and it was seen as a waste of time for them to learn. So people dictated letters to secretaries who typed them. After the popularity of personal computers, it just became part of everyone's job to type their correspondence themselves and secretaries (greatly reduced in number and rebranded as "assistants" who deal more with planning meetings and things) became limited only to upper management.
I've only read the first Foundation novel by Asimov. But what you write applies equally well to many other Golden Age authors e.g. Heinlein and Bradbury, plus slightly later writers like Clarke. I doubt there was much in the way of autism awareness or diagnosis at the time, but it wouldn't be surprising if any of these landed somewhere on the spectrum.
Alfred Bester's "The Stars My Destination" stands out as a shining counterpoint in this era. You don't get much character development like that in other works until the sixties, imo.
Heinlein doesn't develop his characters? Oh, come on. You can't have read him at all!
[The italics and punctuation suggest your comment is sarcastic, but I'm going to treat it as serious just in case.]
Yeah, I'd say characterisation is a weakness of his. I've read Stranger in a Strange Land, The Moon is a Harsh Mistress, Starship Troopers, and Double Star. Heinlein does explore characters more than, say, Clark, but he doesn't go much for internal change or emotional growth. His male characters typically fall into one of two cartoonish camps: either supremely confident, talented, intelligent and independent (e.g. Jubal, Bernardo, Mannie, Bonforte...) or vaguely pathetic and stupid (e.g. moon men). His female characters are submissive, clumsily sexualised objects who contribute very little to the plot. There are a few partial exceptions - e.g. Lorenzo in Double Star and female pilots in Starship Troopers - but the general atmosphere is one of teenage boy wish fulfilment.
Excuse me for giving the impression of a pedant, but do you mean Clarke, as in Arthur C., there? I've been trying since I first read your comment to puzzle out to whom by that name you could possibly be referring in this context, and it's only just dawned on me to wonder if you simply have not bothered to learn the spelling of the name you intended to mention.
Yes, that Clarke. Sorry for putting you to the extra effort. I spelled it correctly in the initial post you replied to. Guess I assumed that people would spot the back-reference.
> Yes, that Clarke. Sorry for putting you to the extra effort. I spelled it correctly in the initial post you replied to. Guess I assumed that people would spot the back-reference.
In entire fairness, I was distracted by you having said he and his contemporaries must all have been autistic, as if either you yourself were remotely competent to embark upon any such determination, or as though it would in some way indict their work if they were.
I'm sure you would never in a million years dare utter "the R-slur" in public, though I would guess that in private the violation of taboo is thrilling. That's fine as far as it goes, but you really should not expect to get away with pretending you can just say "autistic" to mean the same thing and have no one notice, you blatantly obvious bigot.
Thank you for confirming, especially at such effort, when a simple "No, I haven't; I just spend too much time uncritically reading feminism Twitter," would have amply sufficed. There's an honesty to this response in spite of itself, and in spite of itself I respect that.
I sincerely have no idea if any of your comments in this thread are sarcastic or not. (This comment is also not sarcastic FYI).
Generally, I also agree that Heinlein's characters are one dimensional and could benefit from greater character growth, though that was a bit of a hallmark of Golden Age sci-fi.
"Teenage boy wish fulfillment" is well beneath any reasonable standard of criticism, and I've addressed that with about as much respect as it deserves.
There is much worthy of critique in Heinlein, especially in his depiction of women. I've spent about a quarter century off and on both reading and formulating such critiques, much more recently than I've spent meaningful time with his fiction. I've also read what he had to say for himself before he died, and what Mrs. Heinlein - she kept the name - said about him after. If we want to talk about, for example, how the themes of maternal incest and specifically feminine embodiment of weakly superhuman AGI in his later work reflect a degree of senescence and the wish for a supercompetent maternal figure to whom to surrender the burden of responsibility, or if we want to talk about how Heinlein seems to spend an enormous amount of time just generally exploring stuff from female characters' perspectives that an honest modern inquiry would recognize as fumbling badly but earnestly in the direction of something like a contemporary understanding of gender, then we could talk about that.
No one wants to, though. You can't use anything like that as a stick to beat people with, so it never gets a look in, and those as here who care nothing for anything of the subject save if it looks serviceable as a weapon claim to be the only ones in the talk who are honest. They don't know the man's work well enough to talk about the years he spent selling stories that absolutely revolve around character development, which exist solely to exemplify it! Of course these are universally dismissed as his 'juveniles' - a few letters shy of 'juvenilia' - because science fiction superfans are all children and so are science fiction superhaters, neither of whom knows how to respond in any way better than a tantrum on the rare occasion of being told bluntly it's well past time they grew up.
But they're the honest ones. Why not? So it goes. It's a conversation I know better than to try to have, especially on Hacker News; if I don't care for how it's proceeding, I've no one but myself to blame.
Not sure if it will help me saying this, but that's a disappointingly dismissive and avoidant response well below HN standards. I'm very willing to engage with any counter-arguments in good faith. I don't use Twitter (or Mastodon, or BlueSky, or TikTok, or Facebook, or Threads etc...), but I do enjoy discussing sci fi of different periods on Goodreads groups.
It seems filthy rich of you to claim good faith at this time, but I have recently begun to gather that in some quarters lately, it is considered offensively unreasonable to expect working knowledge of any material as a prerequisite for participating competently in discussion thereof. So though your claim is facially false, I ironically can't fairly consider that it is other than honestly made. Your precepts are in any case your problem. Good luck with it, you Hacker News expert.
I'd be happy to receive any pointers on how I'm wrong - perhaps I've misinterpreted what I've read, or there are characters in the rest of his work that defy my stance.
> I'd be happy to receive any pointers on how I'm wrong - perhaps I've misinterpreted what I've read, or there are characters in the rest of his work that defy my stance.
If you meant that honestly, you would already have found ample directions for further research, easily enough not to need asking. Everything you claim to want lies just a Google search away, on any of the various and I should hope fairly easily identifiable search terms I have mentioned. "It is not my job to educate you."
Or, rather, it would still not be my job even if to learn were what you really want here. You don't, of course. That's why you haven't bothered so much as trying a few searches that might turn up something you would have to pretend to have read. Much easier to try to make me look emotionally unstable - 'defy?' Really. - because you can't actually answer anything I've said and you know it. Good luck with that, too.
I've read the books, mulled them over, discussed them with others, and done some reading of what other critics have to say online. I've given my opinion and some of the reasoning behind it. If you want more of my reasoning I'm happy to give it. You have given nothing in response. It feels a lot like you've jumped to conclusions because my opinion is very different to yours. So you've immediately decided not to engage but are nevertheless hellbent on making me out to be uninformed or stupid.
We've clearly got off on the wrong foot here. I don't want to make out like I think Heinlein is crap. He had a lot of fantastic, creative ideas about science, technology, culture, sexuality and governance. He was extremely daring and sometimes quite subtle in the way he explored those ideas. But - in the novels I've read - his characters lack a certain depth and relatability. They express very little of the self-doubt, empathy, growth, and deep-seated motivations that are core to the human condition. So it goes also with Asimov, Clarke, Bradbury, and others. And it's fine that those weren't their strong suits. They had other strengths. And there were other writers like Bester, Dick, Le Guin, Zelazny, Herbert etc... who could fill the gaps.
Herbert for better gender and emotional politics than Heinlein. Herbert! And to think I imagined there was nothing left you could say to surprise me.
Don't expect me to stop discussing what your behavior displays of your character, just because you've finally shown the modicum of rhetorical sense or tactical cunning required to minimally amend that behavior. Again, if you actually meant even a fraction of what you say, you would now be reading instead of still speaking. If it bothers you that you continue to indict yourself by your actions this way, consider acting differently.
Should you at any future point opt to develop a thesis in this area which is capable of supporting knowledgeable discussion, I confide it will find an audience in accord with its quality. In the meantime, please stop inviting me to participate in the project of recovering your totally unforced embarrassment.
Believe it or not by the look of things, I already have enough else to do today. Wiping your nose as you struggle and fail to learn from your vastly overprivileged young life's first encounter with entirely merited and obviously unmitigated contempt doesn't really make the list, at least not past the point at which it ceases to amuse, which I admit is now fast approaching.
In Dune there are female characters with their own desires and designs on the world, who go out and take what they want. There is profound loss, and personal transformation. There is coming to terms with intensely sad or painful circumstances. There is overcoming doubt, building resilience, and taking responsibility and control of one's destiny. These things were not really explored in what I've read of Heinlein.
> These things were not really explored in what I've read of Heinlein.
I don't know how much further you expect me to need to boil down "read more" and still be able to take you seriously. How do you expect that, when you haven't even bothered trying to justify how you chose those four novels to represent forty years?
I see that 'seriously' very much describes how you like to regard yourself. You've insisted most thoroughly others must regard you likewise, regardless of what you show yourself anywhere near capable of actually rewarding or indeed even appreciating. Do you have a favorable impression of your efforts thus far? Have they had the results that you hoped?
We would now be having a different conversation if you had said anything to suggest to me it would be worth my trouble to continue in the attempt. I'd have enjoyed that conversation, I think; as most days here, I had hopes of learning something new. You've felt the need to spend the day doing this instead. If you don't like how that's working out, whom fairly may you blame?
At this point I'm mostly just intrigued to see whether you'll keep replying and whether you'll make any substantial points.
And then it turns out that having taken bits and bites out of my entire mortal day, to pursue this pointless argument with you, was just what I needed even if nothing at all what I wanted. It put me in a state of mind where I could find some kind words to say to my family that I think some folks there may have been quite a bit, if in a small way, needing to hear for a while.
That's not even slightly to your credit, of course. But I can't fairly say you weren't involved, and I have to admit I genuinely appreciate this result, however inadvertent and I'm sure unimaginable on your part it may be. So, though I say it through gritted teeth, thank you for your time. If for absolutely nothing else whatsoever, for that at least I must express my genuine gratitude.
Intolerable though you've been throughout, and despite what I assume to have been your every intention, something good may yet come of your ill efforts. You deserve to know that. May it heap as many coals of fire on your head as your heart should prove small enough to deserve.
> At this point I'm mostly just intrigued to see whether you'll keep replying and whether you'll make any substantial points.
Every substantive point I've actually made all day you have totally ignored, and this is what it's worth your time still to do. But sure. You can stop paying me rent to live in your head any time you like. Keep telling yourself that. I don't doubt you need to, to get through a day.
Also, 122d40d7236cd3ade496d0101d8029ec.
Substantive as in about Heinlein's work, rather than attacks on me or my motivations.
> Substantive as in about Heinlein's work, rather than attacks on me or my motivations.
We could have done that fifteen hours ago [1], or eleven hours ago [2], or nine hours ago [3] [4], or any time you wanted. You haven't. What's changed?
[1] https://news.ycombinator.com/item?id=43655066
[2] https://news.ycombinator.com/item?id=43657766
[3] https://news.ycombinator.com/item?id=43659136
[4] https://news.ycombinator.com/item?id=43659187
I've given you lots of opportunity to offer a defense to the points I raised in my first reply to you. I've offered to go into more detail. I've contrasted Heinlein's work with contemporaneous works. Saying "you should go and read more" is not compelling, especially given the amount of effort you've expended to avoid saying anything of substance. I wonder if you feel insecure about whether such a defense is possible.
> I've given you lots of opportunity to offer a defense to the points I raised in my first reply to you...Saying "you should go and read more" is not compelling...I wonder if you feel insecure about whether such a defense is possible.
No, you don't. I've said nothing I need defend, and you've said nothing you can. It would be one thing if I had to say not to piss on my boots and tell me it's raining, but this doesn't even count as pissing. It's just you repeating yourself from yesterday and that's boring for both of us.
"You are a bigot" is a factual claim I have made [1], now quite a number of hours and comments ago. You haven't addressed it. You won't. You can't. You have no choice now but to let it stand. You have shown it more true than even you yourself can pretend to ignore. You need someone to tell you it isn't really true, in a way you can believe. No one is here to tell you that.
There are other embarrassments, of course; you've shown yourself not a tenth the scholar you fancy yourself to be, nor able to handle yourself even slightly in the face of someone who needs nothing from you and cares neither for nor against you. You would care more that I called you an abuser, but you don't see the people you try to treat that way as human. So what you're really stuck on is that I called you a bigot and you can't answer back. Hence still finding it worth your while to try to talk me into letting you off that hook.
Sorry, not sorry. Go back to bed. Read a book while you're there, why don't you? It might help you sleep.
edit: You also haven't explained what makes those four books you named as exemplary as you called them. Can you describe the common thread? I ask because I actually have read them, in no case fewer than three times, and they really haven't all that much in common. Oh, by the same author, certainly. But you've only dropped names. You haven't tried to draw any comparisons or demonstrate anything by the rhetorical juxtaposition of those characters, though I grant you keep insisting it must count for something that you listed them. You haven't, so far as I can see, discussed or even mentioned a single event in the plot of any of those novels. For all the nothing you've had to say with any actual reference to them, even the few texts you named might as well not exist!
It is extremely risible at this time of you to try to claim you are the one here interested in talking about Heinlein. If there were a God, it would not be safe to tell a lie of that magnitude near a church. But no matter. To get back to the first question I asked here just above: Did anyone actually explain to you why those four should be the first and last of Heinlein worth talking about? Did you ever think to ask? Or was it that they were part of an assignment? - you turned in a paper and assumed the passing grade meant you must have learned something by the transaction, and that for you was where the matter and all semblance of curiosity ended.
I hope it isn't that last one. I already believed firmly that student loan relief was the correct action both ethically and economically; as I have said in other quarters lately, it is not possible for you to be enough of an asshole to change my politics. But if this is you recapitulating something you paid to be taught - if you're currently pursuing or God forfend have completed an American university education, and the best approximation of clear thought you can manage is this - then whoever sold you and your family that bill of goods ought damn well be horsewhipped, and that they merely see the loan annulled instead would be a considerable mercy.
[1] https://news.ycombinator.com/item?id=43657766
I meant that you might offer a defense of Heinlein against my initial points: for example, that there's a strong element of wish fulfilment in his characters. This is neither an extreme nor an uncommon critique. You clearly disagree with it quite strongly. I just want to know what about it you personally find unconvincing.
You ask what I find unconvincing. I'm happy to further oblige you:
> His male characters typically fall into one of two cartoonish camps: either supremely confident, talented, intelligent and independent (e.g. Jubal, Bernardo, Mannie, Bonforte...) or vaguely pathetic and stupid (e.g. moon men). His female characters are submissive, clumsily sexualised objects who contribute very little to the plot. There are a few partial exceptions - e.g. Lorenzo in Double Star and female pilots in Starship Troopers - but the general atmosphere is one of teenage boy wish fulfilment.
"Cartoonish." "Pathetic." "Stupid." "Submissive." "Clumsily sexualized." "Teenage boy." 'Moon men' - you mean Loonies? And this all was you yesterday [1]. How far do you really expect to get with this farcical pantomime of sweet reason now? I ask again: What's changed?
This all began when I said you obviously hadn't read what you claimed to have [2], and it got so far up your nose you couldn't help going and proving me right. You've made a lot more bad decisions since then, but don't worry: I'll keep reminding you as long as you show you need me to that you can amend your behavior at any time.
[1] https://news.ycombinator.com/item?id=43651479
[2] https://news.ycombinator.com/item?id=43649293
Will email you some links/screenshots later today to demonstrate that I've read them (and expand on my points). Would post them here but keen to keep accounts separate.
Okay. Before you do so and for no particular reason, I feel I should note two things.
First, assuming you are not in fact a public figure, I will not publicly reveal your identity or any information I believe could lead to its disclosure, and that is exactly as far into my confidence as you may expect to come. That caveat excepted, I hereby explicitly disclaim any presumption you may have of privacy in any communication you make with me via email or other nonpublic means.
I won't dox you. I understand it isn't as safe for everyone as for me to have their name in the world. And I'm not saying I intend to publish all, or indeed any, of what you send; if it deserves in my view to remain in confidence, I will keep it so. But if you think taking this conversation to email will give you a chance to play games where no one else can see, you had better think twice.
(Should you by any of several plausible means dig up my phone number and try giving me a call, I hereby explicitly advise that any such action on your part constitutes "prior consent" per Md. Code §10–402 [1], and I will exercise my option under that law without further notice.)
Second, there exists an organization with which I have a legal agreement, binding on all our various heirs and assigns, to the effect we are quits forever. I will refer to this company as "Name Redacted for Legal Reasons" or "Name Redacted" for short, and describe it as the brainchild of a fascinating and tight-knit group of siblings, any of the three (technically four) of whom I'd have liked the chance to know better than I did.
I will also note, not for the first time, that I signed that agreement in entire good faith which has endured from that day through this, and I earnestly believe the same of my counterparty both collectively, and in the individual and separate persons of those who represented Name Redacted to me throughout that process as well as through my prior period of employment.
Now, if I were an employee of Name Redacted for Legal Reasons, and I had started a day's worth of shit in public with a signatory of such an agreement as I describe - that is, if I had acted in a way which could be construed to compromise my employer's painstakingly arrived-upon mutual quitclaim - then the very last thing I would ever want to do would be to allow to come into existence documentary evidence of my possibly somewhat innocent but certainly very grave foolishness. Because if that did happen, I would understand I may confidently expect very soon to become 'the most fired-for-causedest person in the history of fuck.'
As I said, I signed in good faith. In that same good faith, what choice really would I have but to privately disclose in full detail? It would be irresponsible of me to assume this was the only problem such intemperate behavior might be creating for Name Redacted, any or all of which might be far more consequential than this.
I'm sure at this point I'm only talking to hear myself speak, though. In any case, I look forward to your email.
[1] https://mgaleg.maryland.gov/mgawebsite/Laws/StatuteText?arti...
Fuck off, you condescending prick.
> Fuck off, you condescending prick.
Ah, here we go. I understand why you're using a fresh throwaway for this sort of thing, of course. Can't risk being seen for no better than you have to be, eh? But this at least - and, I strongly suspect, at last - is honest.
You can't abuse me in any way you're wise or sensible enough to imagine finding, so now you'll go mistreat someone inside the span of your arm's reach, blaming me all the while for your own infantile urge to do so. I wish you every bit as much joy of it as you deserve. And I hope they know your current Hacker News handle.
Sitting here rolling my eyes at your response. Seriously, fuck off.
> Sitting here rolling my eyes at your response. Seriously, fuck off.
Bye, asshole.
> Bye, asshole.
If you didn't want to prove me right when I said six hours ago [1] that you were throwing a tantrum, why continue throwing the tantrum?
[1] https://news.ycombinator.com/item?id=43655066
To be clear, I'm not the newbie account with the expletives. I've no idea who that is.
Oh, I know; I don't blame you at all for feeling some need to clarify, but I was under no confusion. Sorry you got tangled up in all this. I hope it hasn't been totally lacking in literary-critical interest, at least.
> LLMs are statistical models trained on human-generated text.
I mean, not only human-generated text. Also, human brains are arguably statistical models trained on human-generated/collected data as well...
> Also, human brains are arguably statistical models trained on human-generated/collected data as well...
I'd say no, human brains are "trained" on billions of years of sensory data. A very small amount of that is human-generated.
Almost everything we learn in schools, universities, most jobs, history, news, hackernews, etc is literally human-generated text. Our brains have an efficient structure to learn language, which has evolved over time, but the process of actually learning a language happens after you are born, based on human-generated text/voice. Things like balance/walking, motion control, speaking (physical voice control), and other physical skills are trained on sensory data, but there's no reason LLMs/AIs can't be trained on similar data (and in many cases they already are).
What we generate is probably a function of our sensory data + what we call creativity. At least humans still have access to the sensory data, so we can separate the two (with varying success).
LLMs have access to what we generate, but not the source. So they embed how we may use words, but not why we use one word and not another.
> At least humans still have access to the sensory data
I don't understand this point - we can obviously collect sensory data and use that for training. Many AI/LLM/robotics projects do this today...
> So they embed how we may use words, but not why we use one word and not another.
Humans learn language by observing other humans use language, not by being taught explicit rules about when to use which word and why.
> I don't understand this point - we can obviously collect sensory data and use that for training.
Sensory data is not the main issue, but how we interpret it.
In Jacob Bronowski's The Origins of Knowledge and Imagination, IIRC, there's an argument that our eyes are very coarse sensors. Instead, they do basic analysis from which the brain can infer the real world around us, together with data from other organs. Like Plato's cave, but with many more dimensions.
But we humans came with the same mechanisms that roughly interpret things the same way. So there's some commonality there about the final interpretation.
> Humans learn language by observing other humans use language, not by being taught explicit rules about when to use which word and why.
Words are symbols that refer to things and the relations between them. In the same book, there's a rough explanation of language which describes the three elements that define it: symbols or terms, the grammar (or the rules for using the symbols), and a dictionary which maps the symbols to things and the rules to interactions in another domain that we already accept as truth.
Maybe we are not taught the rules explicitly, but there's a lot of training done with corrections when we say a sentence incorrectly. We also learn the symbols and the dictionary as we grow and explore.
So LLMs learn the symbols and the rules, but not the whole dictionary. They can use the rules to create correct sentences, and relate some symbols to others, but ultimately there's no dictionary behind it.
> In the same book, there's a rough explanation of language which describes the three elements that define it: symbols or terms, the grammar (or the rules for using the symbols), and a dictionary which maps the symbols to things and the rules to interactions in another domain that we already accept as truth.
There are two types of grammar for natural language: descriptive (how the language actually works and is used) and prescriptive (a set of rules about how a language should be used). There is no known complete and consistent rule-based grammar for any natural human language - all such grammars come from some person or people, in a particular period, selecting a subset of the real descriptive grammar of the language and saying 'this is the better way'. Prescriptive, rule-based grammar is not at all how humans learn their first language, nor is prescriptive grammar generally complete or consistent. Babies can easily learn any language, even ones that have no prescriptive grammar rules, just by observing - many studies confirm this.
> there's a lot of training done with corrections when we say a sentence incorrectly.
There's a lot of the same training for LLMs.
> So LLMs learn the symbols and the rules, but not the whole dictionary. They can use the rules to create correct sentences, and relate some symbols to others, but ultimately there's no dictionary behind it.
LLMs definitely learn 'the dictionary' (more accurately, a set of relations/associations between words and other types of data), and much better than humans do - not that such a 'dictionary' is a distinct, identifiable part of the human brain.
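For illustration, a minimal sketch of what that learned 'dictionary' looks like: words become vectors, and relatedness becomes distance. The 3-d vectors here are made up by hand; real models learn hundreds of dimensions from text:

    import numpy as np

    # hand-picked toy embeddings; a real model learns these from data
    emb = {
        "king":    np.array([0.9, 0.80, 0.1]),
        "queen":   np.array([0.9, 0.75, 0.9]),
        "cabbage": np.array([0.1, 0.20, 0.4]),
    }

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    print(cosine(emb["king"], emb["queen"]))    # high: related words sit close
    print(cosine(emb["king"], emb["cabbage"]))  # low: unrelated words sit far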
> there's an argument that our eyes are very coarse sensors. Instead they do basic analysis from which the brain can infer the real world around us with other data from other organs
I don't buy it. I think our eyes are approximately as fine as we perceive them to be.
When you look through a pair of binoculars at a boat and some trees on the other side of a lake, the only organ that's getting a magnified view is the eyes, so any information you derive comes from the eyes and your imagination, it can't have been secretly inferred from other senses.
The brain turns the raw input from the eyes into the rich, layered visual experience we have of the world:
- basic features (color, brightness and contrast, edges and shapes, motion and direction)
- depth and spatial relationships
- recognition
- location and movement
- focus and attention
- prediction and filling in gaps
“Seeing” the real world requires much more than raw input from the eyes.
One can look at creativity as discovery of a hitherto unknown pattern in a very large space of patterns.
No reason to think an LLM (a few generations down the line, if not now) cannot do that.
Not really; sometimes it's just plausible lies. We distort the world but respect some basic rules, making it believable. Another difference from LLMs is that we can store this distortion and build upon it as $TRUTH.
And we can distort quite far (see cartoons in drawing, dubstep in music,...)
Maybe; at some level, are dogs' brains also simple sensory-collecting statistical models? A human baby and a dog are born on the same day; that dog never leaves that baby's side for 20 years. It sees everything the baby sees, it hears everything the baby hears, and it is given the opportunity to interact with its environment in roughly the same way the human does, to the degree to which they are both physically capable. The intelligence differential after that time will still be extraordinary.
My point in bringing up that metaphor is to focus the analogy: When people say "we're just statistical models trained on sensory data", we tend to focus way too much on the "sensory data" part, which has led to for example AI manufacturers investing billions of dollars into slurping up as much human intellectual output as possible to train "smarter" models.
The focus on the sensory input inherently devalues our quality of being; it implies that who we are is predominantly explicable by the world around us.
However, we should be focusing on the "statistical model" part: even if it is accurate to holistically describe the human brain as a statistical model trained on sensory data (which I have doubts about, but those are fine to leave to the side), it's very clear that the fundamental statistical model itself is simply so far superior in human brains that comparing it to an LLM is like comparing us to a dog.
It should also be a focal point for AI manufacturers and researchers. If you are on the hunt for something along the spectrum of human-level intelligence, and during this hunt you are providing it ten thousand lifetimes of sensory data to produce something that, maybe, if you ask it right, can behave similarly to a human who has trained in the domain for only years: you're barking up the wrong tree. What you're producing isn't even on the same spectrum; that doesn't mean it isn't useful, but it's not human-like intelligence.
Well the dog brain and human brain are very different statistical models, and I don't think we have any objective way of comparing/quantifying LLMs (as an architecture) vs human brains at this point. I think it's likely LLMs are currently not as good as human brains for human tasks, but I also think we can't say with any confidence that LLMs/NNs can't be better than human brains.
For sure; we don't have a way of comparing the architectural substrate of human intelligence versus LLM intelligence. We don't even have a way of comparing the architectural substrate of one human brain with another.
Here's my broad concern: On the one hand, we have an AI thought leader (Sam Altman) who defines super-intelligence as surpassing human intelligence at all measurable tasks. I don't believe it is controversial to say that we've established that the goal of LLM intelligence is something along these lines: it exists on the spectrum of human intelligence, its trained on human intelligence, and we want it to surpass human intelligence, on that spectrum.
On the other hand: we don't know how the statistical model of human intelligence works, at any level that would enable reproduction or comparison, and there's really good reason to believe that the human statistical model is vastly superior to the LLM model. The argument for this lies in my previous comment: the vast majority of intelligence advances in LLMs come from increasing the volume of training data. Some likely come from statistical-modeling breakthroughs since the transformer, but by and large it's training data. By contrast, the most intelligent humans are not more intelligent because they've been alive longer and thus had access to more sensory data. Some minor level of intelligence comes from the quality of your sensory data (studying, reading, education). But the vast majority of intelligence difference between humans is inexplicable; Einstein was just Born Smarter; God granted him a unique and better statistical model.
This points to the undeniable reality that, at the very least, the statistical model of the human brain and that of an LLM are very different, which should cause you to raise eyebrows at Sam Altman's statement that superintelligence will evolve along the spectrum of human intelligence. It might, but it's like arguing that the app you're building is going to be the highest-quality and fastest MacOS app ever built, and you're building it using WPF and compiling it for x86 to run on WINE and Rosetta. GPT isn't human intelligence; at best, it might be emulating, extremely poorly and inefficiently, some parts of human intelligence. But they didn't get the statistical model right, and without that it's like forcing a square peg into a round hole.
Attempting to summarize your argument (please let me know if I succeeded):
Because we can't compare human and LLM architectural substrates, LLMs will never surpass human-level performance on _all_ tasks that require applying intelligence?
If my summary is correct, then is there any hypothetical replacement for LLM (for example, LLM+robotics, LLMs with CoT, multi-modal LLMs, multi-modal generative AI systems, etc) which would cause you to then consider this argument invalid (i.e. for the replacement, it could, sometime replace humans for all tasks)?
Well, my argument is directed more at the people who say "well, the human brain is just a statistical model with training data". If I say both birds and airplanes are just a fuselage with wings, then proceed to dump billions of dollars into developing better wings, I'm missing the bigger picture of how birds and airplanes are different.
LLM luddites often call LLMs stochastic parrots or advanced text prediction engines. They're right, in my view, and I feel that LLM evangelists often don't understand why. Because LLMs have a vastly different statistical model, even when they showcase signs of human-like intelligence, what we're seeing cannot possibly be human-like intelligence, because human intelligence is inseparable from its statistical model.
But, it might still be intelligence. It might still be economically productive and useful and cool. It might also be scarier than most give it credit for being; we're building something that clearly has some kind of intelligence, crudely forcing a mask of human skin over it, oblivious to what's underneath.
> Isaac Asimov describes artificial intelligence as “a phrase that we use for any device that does things which, in the past, we have associated only with human intelligence.”
This is a pretty good definition, honestly. It explains the AI Effect quite well: calculators aren’t “AI” because it’s been a while since humans were the only ones who could do arithmetic. At one point they were, though.
Although calculators can now do things almost no human can do, or at least not in any reasonable time. But most (now) wouldn't call that AI. It's a tool, with a very limited domain.
That’s my point, it’s not AI now. It used to be.
Similarly, we esteem performance optimizations so aggressively that a lot of things that used to be called performance work are now called architecture, good design. We just keep moving the goal posts to make things more comfortable.
I mean, at one point "calculator" was a job title.
And "computer".
The abacus has existed for thousands of years. Those who had the job of "calculator" also used pencil and paper to manage larger calculations which they would have struggled to do without any tools.
That's humanity. We're tool users above anything else. This gets lost.
Funny thing about Asimov was how he came up with the laws of robotics and then wrote cases where they don't work. There are a few that I remember; in one, a robot was lying because a bug in his brain gave him empathy and he didn't want to hurt humans.
I was always a bit surprised other sci fi authors liked the "three laws" idea, as it seems like a technological variation of other stories about instructions or wishes going wrong.
Same here. A main point of I, Robot was to show why the three laws don't work.
I may be misrecalling, but I thought the main point of the I, Robot series was that, regardless of the law, incomplete information can still end up getting someone killed.
In all the cases of killing, the robots were innocent. It was either a human that tricked the robot or didn't tell the robot what they were doing.
For example, a lady killed her husband by asking a robot to detach his arm and give it to her. Once she got it, she beat the husband to death, and the robot didn't have the capability to stop her (since it had given her its arm). That caused the robot to effectively self-destruct.
Giskard, I believe, was the only one that killed people. He ultimately ended up self-destructing as a result (the fate of robots that violate the laws).
That's certainly not the plot of Little Lost Robot.
Little Lost Robot was about a robot with the First Law modified. That's not about the law failing but rather about failing to install the full law.
Narratives build on top of each other so that complex narratives can be constructed. This is also the reason why Family Guy can speedrun through all the narrative arcs developed by culture in a 30-second clip.
Family Guy Nasty Wolf Pack
https://youtu.be/5oW9mNbMbmY
The perfect wish to outsmart a genie | Chris & Jack
https://youtu.be/lM0teS7PFMo
I mean, now we call the three laws "alignment", but it honestly seems inevitable that it will go wrong eventually.
That of course isn't stopping us from marching forwards though in the name of progress.
>he came up with the laws of robotics and then wrote cases where they don't work. There are a few that I remember; in one, a robot was lying because a bug in his brain gave him empathy and he didn't want to hurt humans.
IIRC, none of the robots broke the laws of robotics; rather, they ostensibly broke the laws, but on investigation the robots turned out to have been following them because of some quirk.
And one that was sacrificing a few for the good of the species. You can save more future humans by killing a few humans today that are causing trouble.
Isn't that the plot of westworld season 3?
I think better than half the writers on Westworld were not born yet when the OG Foundation books were written.
In the Foundation books, he revealed that robots were involved behind the scenes, and were operating outside of the strict 3 laws after developing the concept of the 0th law.
>A robot may not harm humanity, or, by inaction, allow humanity to come to harm
Therefore a robot could allow some humans to die, if the 0th law took precedence.
Guess: https://en.wikipedia.org/wiki/Liar!_(short_story)
A good conceit or theme by an author on which to base a series of books that will sell? Not everything is an engineering or math project.
That is still one of my favorite stories of all time. It really sticks to you. It's part of the I, Robot anthology.
It certainly is liberating all our creative works from our possession...
Intellectual Property is a questionable idea to begin with...
It's not the loss of ownership I'm lamenting, it's the loss of production by humans in the first place.
People made the same argument about Cameras vs Painting. "Humans are no longer creating the art!"
But I doubt most people would subscribe to that view now and would say Photography is an entirely new art form.
Using generative AI is a lot closer to hiring a photographer and telling them to take pictures for you than taking the pictures themselves.
I mean, you still have the option of taking pictures yourself, if you find that creative and rewarding...
Absolutely, but it still doesn't make hiring a photographer an art form.
Why do we give awards to Directors then?
This is nit-picky but you're probably actually referring to Cinematographers, or Directors of Photography. They're the ones who deal with the actual cameras, lens, use of light, etc. Directors deal with the actors and the script/writer.
The reason we give them awards is that the camera can't tell you which lens will give you the effect you want or how to emphasize certain emotions with light.
How do you define 'art form'? Anything can arguably be an art form.
> People made the same argument about Cameras vs Painting.
I remember that from a couple of years ago, when Stable Diffusion came out. There was a lot of talk about "art" and "AI" and someone posted a collection of articles / interviews / opinion pieces about this exact same thing - painting vs. cameras.
A human is still involved with the camera. Just a different set of skills, and absent manipulation in post, the things being photographed tended to actually exist. Now we need neither photographer nor subject.
AIs still aren't autonomous. The model doesn't make anything unless a human directs it to. It's just another layer of abstraction above the camera or paintbrush.
False equivalency and you know it.
Humans will always produce; it's just that those productions may not be financially viable, and may not have an audience. Grim, but also not too far off from the status quo today.
If we're abolishing it, we have to really abolish it, both ways - not abolish companies' responsibilities while keeping their rights, and abolish individuals' rights while keeping their responsibilities.
It's for sure less questionable than the current proposition of letting a handful of billionaires exploit the effort of millions of workers, without permission and completely disregarding the law, just for the sake of accumulating more power and more billions.
Sure, patent trolls suck, so do MAFIAA, but a world where creators have no means to subsist, where everything you do will be captured by AI corps without your permission, just to be regurgitated into a model for a profit, sucks way way more
How so? Even in a perfectly egalitarian world, where no one had to compete for food or resources, in art, there would still be a competition for attention and time.
There is the general principle of legal apparatus to facilitate artists getting paid. And then there is the reality of our extant system, which retroactively extends copyright terms so corporations who bough corporations who bought corporations... ...who bought the rights to an artistic work a century ago can continue to collect rent on that today. Whatever you think of the idealistic premise, the reality is absurd.
> Intellectual Property is a questionable idea to begin with...
I know! It's totally and completely immoral to give the little guy rights against the powerful. It infringes in the privileges and advantages of the powerful. It is the Amazons, the Googles, the Facebooks of the world who should capture all the economic value available. Everyone else must be content to be paid in exposure for their creativity.
Why do you say that?
Seven years, or maybe 14 - that's all anybody needs. Anything else is greed and stops human progress.
I appreciate someone named "behringer" posting this sentiment. (https://en.wikipedia.org/wiki/Behringer#Controversies)
If we are headed to a star-trek future of luxury communism, there will definitely be growing pains as the things we value become valueless within our current economic system. Even though the book itself is so-so IMO, Down and Out in the Magic Kingdom provides a look at a future economy where there is an infinite supply of physical goods so the only economy is that of reputation. People compete for recognition instead of money.
This is all theoretical, I don’t know if I believe that we as humans can overcome our desire to hoard and fight over our possessions.
You're saying something exactly backwards from reality. Star Trek is communism (except it's not) because there's no scarcity. It's not selfishness that's the problem. It's the ever-increasing number of things invented inside capitalism we deem essential once invented.
>star-trek future of luxury communism,
Banks' Culture Communism/Anarchism > Star Trek, any day imho.
I always say this: we are headed to a star-trek future, but we will not be the Federation, we will become Borg. Between social media platforms, smartphones and "wokeness" the inevitable result is that everybody will be forced into compliance, no originality or divergent thinking will be tolerated.
I'm glad we're seeing the death of the concept of owning an idea. I just hope the people who were relying on owning a slice of the noosphere can find some other way to sustain themselves.
Copyright law protects the expression of ideas, not the ideas themselves. My favourite case that reinforces this was between David Bowie and the Gallagher brothers.
I would argue patents are closer to protecting ideas, and those are alive and well.
I do agree copyright law is terribly outdated but I also feel the pain of the creatives.
I just wish it was not, as usual, the people with the most money benefiting first and most
Did we previously have the concept of owning an idea?
Lawyers and people with lots of money figured out how to make even bigger piles of money for lawyers and people with lots of money from people who could make things like art, music, and literature.
They occasionally allowed the people who actually make things to become wealthy in order to incentivize other people who make things to continue making things, but mostly it's just the people with lots of money (and the lawyers) who make most of the money.
Studios and publishers and platforms somehow convinced everyone that the "service" and "marketing" they provided was worth a vast majority of the revenue creative works created.
This system should be burned to the ground and reset, and any indirect parties should be legally limited to at most 15% of the total revenues generated by a creative work. We're about to see Hollywood quality AI video - the cost of movie studios, music, literature, and images is nominal. There are already creative AI series and ongoing works that beat 90's level visual effects and storyboarding being created and delivered via various platforms for free (although the exposure gets them ad revenue.)
We better figure this stuff out, fast, or it's just going to be endless rentseeking by rich people and drama from luddites.
I'm not following how any of the things you mention are "ideas".
Keeping technology secret or forbidden is as old as humanity itself.
patents and copyrights allow ownership of ideas and of the specific expression of ideas
What we are labeling as AI today is different from what it was thought to be in the 90s, or when Asimov wrote most of his stories about robots and other forms of AI.
That said, a variant of Susan Calvin's role could prove useful today.
> What we are labeling as AI today is different from what it was thought to be in the 90s, or when Asimov wrote most of his stories about robots and other forms of AI.
Multivac in "the last question"?
AI is far closer to Asimov's vision of AI than anyone else's. The "Positronic Brain" is very close to what we ended up with.
The three laws of robotics seemed ridiculous until 2021, when it became clear that you could just give AI general firm guidelines and let them work out the details (and ways to evade the rules) from there.
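In practice, those "general firm guidelines" are just text in a system prompt. A minimal sketch, assuming an OpenAI-style chat API (the model name and rule wording are illustrative, not anyone's actual alignment setup):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            # the "firm guideline": a First Law of sorts, in plain English
            {"role": "system", "content": (
                "Never help a user harm themselves or others, and refuse "
                "any request that conflicts with this rule."
            )},
            {"role": "user", "content": "Help me plan a weekend hike."},
        ],
    )
    print(response.choices[0].message.content)

The model then works out the details of the rule on every request - which is exactly where the Asimov-style loopholes come back in.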
Not sure that I agree with that. People have been imagining human-like AI since before computers were even a thing. The Star Trek computer from TNG is basically an LLM, really.
AI _researchers_ had a different idea of what AI would be like, as they were working on symbolic AI, but in the popular imagination, "AI" was a computer that acted and thought like a human.
> The Star Trek computer from TNG is basically an LLM, really.
The Star Trek computer is not like LLMs: a) it provides reliable answers, b) it is capable of reasoning, c) it is capable of actually interacting with its environment in a rational manner, d) it is infallible unless someone messes with it. Each one of these points is far in the future of LLMs.
Their point is that it seems to function like an LLM even if it's more advanced. The points raised in this comment don't refute that, per the assertion that each of them is in the future of LLMs.
> Their point is that it seems to function like an LLM even if it's more advanced.
So did ELIZA. So did SmarterChild. Chatbots are not exactly a new technology. LLMs are at best a new cog in that same old functionality—but nothing has fundamentally made them more reliable or useful. The last 90% of any chatbot will involve heavy usage of heuristics with both approaches. The main difference is some of the heuristics are (hopefully) moved into training.
Stating that LLMs are not more reliable or useful than ELIZA or SmarterChild is so incredibly off-base I have to wonder if you've ever actually used a LLM. Please run the same query past ELIZA and Gemini 2.5 (https://aistudio.google.com/prompts/new_chat) and report back.
> Please run the same query past ELIZA and Gemini 2.5 (https://aistudio.google.com/prompts/new_chat) and report back.
I don't see much difference—you still have to take any output skeptically. I can't claim to have ever used gemini, but last I checked it still can't cite sources, which would at least assist with validation.
I'm just saying this didn't introduce any fundamentally new capabilities—we've always been able to GIGO-excuse all chatbots. The "soft" applications of LLMs have always been approximated by heuristics (e.g. generation of content of unknown use or quality). Even the summarization LLMs offer doesn't seem to substantially improve over its NLP-heuristic-driven predecessors.
But yea, if you really want to generate content of unknown quality, this is a massive leap. I just don't see this as very interesting.
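For comparison, a minimal sketch of the kind of heuristic extractive summarizer that predates LLMs - score each sentence by the frequency of the words it contains and keep the top ones. Toy code, not any particular library:

    import re
    from collections import Counter

    def summarize(text, n_sentences=2):
        sentences = re.split(r"(?<=[.!?])\s+", text.strip())
        freq = Counter(re.findall(r"[a-z']+", text.lower()))
        # rank sentences by the total frequency of the words they contain
        ranked = sorted(
            sentences,
            key=lambda s: sum(freq[w] for w in re.findall(r"[a-z']+", s.lower())),
            reverse=True,
        )
        keep = set(ranked[:n_sentences])
        return " ".join(s for s in sentences if s in keep)  # original order

    print(summarize("Cats sleep a lot. Cats also hunt. Dogs bark. Cats purr."))

Crude, but deterministic and inspectable - which was my point about heuristics.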
> I can't claim to have ever used gemini, but last I checked it still can't cite sources, which would at least assist with validation.
Yes, it can cite sources, just like any other major LLM service out there. Gemini, Claude, Deepseek, and ChatGPT are the ones I personally validated this with, but I bet other major LLM services can do so as well.
Just tested this using Gemini with “Is fluoride good for teeth? Cite sources for any of the claims” prompt, and it listed every claim as a bullet point accompanied by the corresponding source. The sources were links to specific pages addressing the claims from CDC, Cleveland Clinic, John Hopkins, and NIDCR. I clicked on each of the links to verify that they were corroborating what Gemini response was saying, and they were.
In fact, it would more often than not include sources even without me explicitly asking for sources.
> Yes, it can cite sources, just like any other major LLM service out there.
Let's see an example:
Ask if america was ever a democracy and tell us what it uses as sources to evaluate its ability to function. Language really shows its true colors when you commit to floating signifiers.
I asked Gemini "was america ever a democracy"? And it confidently responded "While the ideal of democracy has always been a guiding principle in the United States", which is a blatant lie, and provided no sources. The next prompt, "was america ever a democracy? Please cite sources", gives a mealy-mouthed reply hedging on the definition of democracy... which it refuses to cite. If I ask it "will america ever be democratic", it just vomits up excuses about democracy being a principle and not measurable. With no sources. Etc. This is not a useful tool for things humans already do well. This is a PR megaphone with little utility outside of shitty copy editing.
They don't make up sources, or cite sources that don't actually contain the claim, anymore?
I get that sometimes, but you click the link and very easily determine whether the source exists or not.
Yet when you ask it to dim the lights, it dims either way too little or way too much. Poor Geordi.
For what it's worth, I was referring to the episode when he set up a romantic dinner for the scientist lady. Computer couldn't get the lighting right.
> The Star Trek computer from TNG is basically an LLM, really.
Watched all seasons recently for the first time. While some things are "just" vector search with a voice interface, there are also goodies like "Computer, extrapolate from theoretical database!", or "Create dance partner, female!" :D
For anyone curious: https://www.youtube.com/watch?v=6CDhEwhOm44
> The Star Trek computer from TNG is basically an LLM, really.
No. The Star Trek computer is a fictional character, really. It's not a technology any more than Jean-Luc Picard is. It's does whatever the writers needed it to do to further the plot.
It reminds me: J. Michael Straczynski (of Babylon 5 fame) was once asked "How fast do Starfuries travel?" and he replied "At the speed of plot."
I wouldn't put too much stock in this. Asimov was a fantasy writer telling fictional stories about the future. He was good at it, which is why you listen and why it's enjoyable, but it's still all a fantasy.
> Asimov was a fantasy writer
Asimov was mostly not a fantasy writer. He was a science writer and professor of biochemistry. He published over 500 books. I didn't feel like counting but half or more of them are about science. Maybe 20% are science fiction and fantasy.
https://en.wikipedia.org/wiki/Isaac_Asimov_bibliography_(cat...
Asimov was not savvy with computers and found it difficult to learn to use a word processor.
> I wouldn't put too much stock in this. Asimov was a fantasy writer telling fictional stories about the future.
Why not? Who is this technology expert with flawless predictions? Talking about the future is inherently an exercise of the imagination, which is also what fiction writing is.
And nothing he's saying here contradicts our observations of AI up to this point. AI artwork has gotten good at copying the styles of humans, but it hasn't created any new styles that are at all compelling. So leave that to the humans. The same with writing; AI does a good job at mimicking existing writing styles, but has yet to demonstrate the ability to write anything that dazzles us with its originality. So his prediction is exactly right: AI does work that is really an insult to the complex human brain.
A fantasy writer telling fictional stories about the future is more credible than many so-called serious people (think Marc Andreessen) who promote any technology as the bee's knees if it can make them money.
> A fantasy writer telling fictional stories about the future is more credible than many so-called serious people (think Marc Andreessen) who promote any technology as the bee's knees if it can make them money.
But that's more a knock on people like Marc Andreessen than a reason you should put stock in Asimov.
There's also Frank Herbert, who saw AI as ruinous to humanity and its evolution, and imagined a future where humanity had to fight a war against it, resulting in it being banished from the entire universe.
> There's also Frank Herbert, who saw AI as ruinous to humanity and its evolution, and imagined a future where humanity had to fight a war against it, resulting in it being banished from the entire universe.
Did he though? Or was the Butlerian Jihad a backstory whose function was to allow him to believably center human characters in his stories, given the sci-fi expectations of the time?
I like Herbert's work, but ultimately he (and Asimov) were producers of stories to entertain people, so entertainment always would take priority over truth (and then there's the entirely different problem of accurately predicting the future).
I think this is kind of misunderstanding scifi a bit. You're right it was designed to be entertaining, but the kernel of it is that they take some existing trend and extrapolate it into the future. Do that enough times, and some of the stories will start to be meaningful looking backwards and the people who made those predictions still deserve credit even if they weren't entirely useful in the forward direction.
I always thought the Butlerian Jihad was a convenient way to remove AI as a plot element. Same thing with shields and explosions; it made swordfighting a plausible way to fight in a universe with faster-than-light travel.
> humanity in general will be freed from all kinds of work that’s really an insult to the human brain.
He can only be referring to these Jira tickets I need to write.
There is a Jira MCP server...
flashback to Tron:
"MCP is highly intelligent and yet ruthless. It apparently wants to get rid of humans and especially users."
https://disney.fandom.com/wiki/Master_Control_Program
oh woah https://glama.ai/mcp/servers/@CamdenClark/jira-mcp
and MCP can work with deepseek running locally. hmm...
As someone who just got done putting a bullet in some long-used instances, I both appreciated and needed this laugh. Thanks!
Back then, we also believed that access to every imaginable piece of information through the internet, and the ability to communicate across the globe, would lead to universal wisdom, world peace, and an unimaginable utopia where common sense, based on science and knowledge, prevails.
Oh boy, how foolish we've been!
I'm just hoping it brings out an explosion of new thought and not less thought. Will likely be both.
I have found less diversity of thought on the internet in the last 10 years. I used to find lots of wild ideas and theories out there on obscure sites. Now it seems like every website is the same, talking about the same things.
They say the web is dead, but I think we just have bad search engines.
If you go on Twitter/X you will find a lot of wild ideas, many completely contradictory with other groups on X and/or reality. It can be scary how polarized it is. If you open a new account and follow or like a few people with some odd viewpoint, soon your feed will be filled with that viewpoint, whatever it is.
Two words: Endless September.
I find this difficult to understand. There was a great explosion of conspiracy theories in the last ten years, so you should be seeing more of it.
Even the conspiracy theory community has become like this. What used to be a community of passionate skeptics, ufologists, and rabid anti-statists has turned into the most overtly bootlicking right-wing apologists, who apply an incredible amount of mental energy to justifying the actions of what is transparently and blatantly the most corrupt government in American history, so long as that government is weaponized against whatever identity and cultural groups they hate.
You're describing Twitter, not conspiracy communities in general. On the UFO front at least, I am aware of multiple YouTube channels and Discord servers with a healthy diversity of thought, and I'm sure the same goes for other areas.
Maybe they're all the same conspiracy theories. All the current conspiracy theories are that immigrants are invading the country and Biden's in on it. Where is the next Time Cube or TempleOS?
We’re living through the second renaissance of the flat-earthers, which aren’t all that concerned with Biden (beyond the usual “the govt is concealing the truth” meme).
I have a genuine question I can't find or come up with a viable answer to, a matter of the "unpleasantness," as he puts it: how do people make money or otherwise sustain themselves in this AI scenario we are facing?
Has anyone heard a viable solution, or even has one themselves?
I don't hear anything about UBI anymore. Could that be because of the roughly 60+ million alien people who have flooded into western countries from countries whose populations are so large they are effectively endless? What do we do about that? Will that snuff out any kind of advancement in the west when roughly 6 billion people all want to be in the west, where everyone gets UBI and it's the land of milk and honey?
So what do we do then? We can’t all be tech industry people with 6-figure plus salaries, vested ownership, and most people aren’t multi-millionaires that can live far away from the consequences while demanding others subject themselves to them.
Which way?
>how do people make money or otherwise sustain themselves in this AI scenario we are facing?
1% of the labour force works in agriculture:
https://ourworldindata.org/grapher/share-of-the-labor-force-...
1%
let that number sink in; think about what it really means.
And what it means is that at least basic food (unprocessed, no meat) could be completely free. It may take some smart logistics, but it's doable. All of our food is already one step, one small step, away from becoming free for everyone.
This applies to clothes and basic tools as well.
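A rough back-of-the-envelope version of "think about what it really means" (the population and labour-force figures here are round, illustrative numbers, not taken from the linked chart):

    # If ~1% of the labour force grows the food, how many people does
    # each agricultural worker feed? Round, illustrative numbers only.
    population = 340_000_000      # assumed US-scale population
    labour_force = 165_000_000    # assumed labour force
    ag_share = 0.01               # the ~1% figure

    ag_workers = labour_force * ag_share
    print(f"{ag_workers:,.0f} agricultural workers")            # 1,650,000
    print(f"each feeds ~{population / ag_workers:.0f} people")  # ~206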
I've always thought there should be a 'minimum viable existence' option for those who are willing to forego most luxuries in exchange for not being required to do anything specific other than abide by reasonable laws.
It would be very interesting to see the percentage breakdowns of how such people chose to spend their time. In my opinion, there would be enough benefit to society at large to make it worthwhile. For a large group (if not the majority), I'm certain the situation would turn out to be completely temporary-- they would have the option to prepare themselves for some type of work they're better adapted to perform and/or enjoy, ultimately enhancing the culture and economy. Most of the rest could be useful as research subjects, if they were willing of course.
Obviously this is a bit of a utopian fantasy, but what can I say, Star Trek primed me to hope for such a future.
There will be relative scarcity. Consider a scenario where the iPhone 50 is manufactured in a dark factory, but there is still a waiting period to get access to it. This is because of resource bottlenecks.
I have soured on UBI because it tries to use a market solution to deal with problems that I don’t think markets can fix.
I want everyone to have food, housing, healthcare, education, etc. in a post scarcity world. That should be possible. I don’t think giving people cash is the best way to accomplish that. If you want people to have housing, give them housing. If you want people to have food, give them food.
Cash doesn’t solve the supply problem, as we can see with housing now. You would think a rise in the cost of housing would lead to more supply, but the cost of real estate also increases the cost of building.
He also wrote a story about how AI will create a non-literate society, because we'll all just talk to the computers whenever we need anything.
I think we need to consider what the end goal of technology is at a very broad level.
Asimov says in this that there are things computers will be good at, and things humans will be good at. By embracing that complementary relationship, we can advance as a society and be free to do the things that only humans can do.
That is definitely how I wish things were going. But it's becoming clear that within a few more years, computers will be far better at absolutely everything than human beings could ever be. We are not far even now from a prompt accepting a request such as "Write another volume of the Foundation series, in the style of Isaac Asimov", and getting a complete novel that does not need editing, does not need review, and is equal to or better in quality than the original novels.
When that goal is achieved, what then are humans "for"? Humans need purpose, and we are going to be in a position where we don't serve any purpose. I am worried about what will become of us after we have made ourselves obsolete.
> When that goal is achieved, what then are humans "for"? Humans need purpose, and we are going to be in a position where we don't serve any purpose. I am worried about what will become of us after we have made ourselves obsolete.
Read some philosophy. People have been wrestling with this question forever.
https://en.wikipedia.org/wiki/Philosophy
In the end, all we have is each other. Volunteer, help others.
It depends on what you are trying to get out of a novel. If you merely require repetitions on a theme in a comfortable format, Lester Dent style 'crank it out' writing has been dominant in the marketplace for >100 years already (https://myweb.uiowa.edu/jwolcott/Doc/pulp_plot.htm).
Can an AI novel add something new to the conversation of literature? That's less clear to me because it is so hard to get any model I work with to truly stand by its convictions.
You could have said the same thing when we invented the steam engine, mechanized looms, &c. As long as the driving force of the economy/technology is "make numbers bigger" there is no end in sight, there will never be enough, there is no goal to achieve.
We already live lives which are artificial in almost every way. People used to die of physical exhaustion and malnutrition; now they die of lack of exercise and gluttony. Surely we could have stopped somewhere in the middle. It's not a resource or technology problem at that point, it's societal/political.
It's the human scaling problem. What systems can be used to scale humans to billions while providing the best possible outcomes for everyone? Capitalism? Communism?
Another possibility is not let us scale. I thought Logan's Run was a very interesting take on this.
Evolution is not about being better / winning but about adapting. People will adapt and co-exist. Some better than others.
AIs aren't really part of the whole evolutionary race for survival so far. We create them. And we allow them to run. And then we shut them down. Maybe there will be some AI-enhanced people that start doing better. And maybe the people bit becomes optional at some point. At that point you might argue we've just morphed/evolved into whatever that is.
> I think we need to consider what the end goal of technology is at a very broad level.
"we" don't control ourselves. If humans can't find enough energy sources in 2200 it doesn't mean they won't do it in 1950.
It would be pretty bad to lose access to energy after having it, worse than never having it IMO.
The number of new technologies discovered in the past 100 years (which is a tiny amount of time) is insane, and we haven't adapted to them, not in a stable way.
This is undeniably true. The consequences of a technological collapse at this scale would be far greater than having never had the technology in the first place. For this reason, the people in power (in both industry and government) have more destructive potential than at any time in human history, by far. And they act as if they have little to no awareness of the enormous responsibility they shoulder.
> But it's becoming clear that within a few more years, computers will be far better at absolutely everything than human beings could ever be.
Comparative advantage. Even if that's true, AI can't possibly do _everything_. China is better at manufacturing pretty much anything than most countries on earth, but that doesn't mean China is the only country in the world that does manufacturing.
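The mechanism is easy to miss, so here's a minimal numeric sketch (the productivity numbers are made up purely for illustration): even when one side is absolutely better at everything, who does what is decided by relative opportunity costs.

    # Comparative advantage with made-up numbers: A beats B at both
    # tasks, yet B still gets work, because what matters is the
    # *relative* cost of each task, not absolute skill.
    a = {"widgets": 10, "reports": 8}   # units per hour; A dominates
    b = {"widgets": 2,  "reports": 4}

    # Opportunity cost of one report, in widgets forgone:
    cost_a = a["widgets"] / a["reports"]   # 1.25 widgets per report
    cost_b = b["widgets"] / b["reports"]   # 0.50 widgets per report

    # B's reports are relatively cheaper, so B writes reports and A
    # makes widgets; total output beats either side doing everything.
    print(f"report costs: A={cost_a}, B={cost_b}")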
> AI can't possibly do _everything_
Why not? There's the human bias of wanting to consume things created by humans - that's fine, I'm not questioning that - but objectively, if we get to human-threshold AGI and continue scaling, there's no reason why it couldn't do everything, and better.
Why not? IMO you perhaps underestimate human complexity. There was a Guardian article where researchers created a map of one cubic millimeter of a mouse's brain. It contains 45 km worth of neurons and billions of synapses. IMO the AGI crowd is suffering from expert-beginner syndrome.
Humans are one solution to the problem of intelligence, but they are not the only solution, nor are they the most efficient. Today's LLMs are capable of outperforming your average human in a variety (not all, obviously!) of fields, despite being of wholly different origin and complexity.
- Despite the flood of benchmark-tuned LLMs, we remain nowhere close to engineering a machine intelligence rivaling that of a cat or a dog, let alone within the next 5 to 10 years.
- The world already hosts millions of organic AIs (Actual Intelligence), many of them statistically at genius-level IQ. Does their existence make you obsolete?
> Despite the flood of benchmark-tuned LLMs, we remain nowhere close to engineering a machine intelligence rivaling that of a cat or a dog, let alone within the next 5 to 10 years.
Depends on your definition of "intelligence." No, they can't reliably navigate the physical world or have long-term memories like cats or dogs do. Yes, they can outperform them on intellectual work in the written domain.
> Does their existence make you obsolete?
Imagine if for everything you tried to do, there was someone else who could do it better, no matter what domain, no matter where you were, and no matter how hard you tried. You are not an economically viable member of society. Some could deal with that level of demoralisation, but many won't.
> what then are humans "for"?
Folding laundry
Here's a passage from a children's book I've been carrying around in my heart for a few decades:
“I don't like cleaning or dusting or cooking or doing dishes, or any of those things," I explained to her. "And I don't usually do it. I find it boring, you see."
"Everyone has to do those things," she said.
"Rich people don't," I pointed out.
Juniper laughed, as she often did at things I said in those early days, but at once became quite serious.
"They miss a lot of fun," she said. "But quite apart from that--keeping yourself clean, preparing the food you are going to eat, clearing it away afterward--that's what life's about, Wise Child. When people forget that, or lose touch with it, then they lose touch with other important things as well."
"Men don't do those things."
"Exactly. Also, as you clean the house up, it gives you time to tidy yourself up inside--you'll see.”
A while ago I saw a video of a robot doing exactly that. Seems there is nothing left for us to do.
> Humans need purpose.
Let me paint a purpose for you which could take millions of years. How about building an atomic force microscope equivalent which can probe Calabi-Yau manifolds to send messages to other multiverses.
You can have an LLM crank out words, but you can't make them mean anything.
Suno is pretty good at going from a three- or four-word concept to a complete song with lyrics, melody, vocals, structure, and internal consistency. I've been thoroughly impressed. The songs still suck, but they are arguably no worse than 99% of what the commercial music business has been pumping out for years. I'm not sure AI is ready to invent those concepts from nothing yet, but it may not be far off.
I used it. Once you get over the novelty you realize that all the songs are basically the same. Except for https://www.immibis.com/ex509__immibis_uc13_shitmusic.mp3, whose lyrics you should pay attention to.
> they are arguably no worse than 99% of what the commercial music business has been pumping out for years
Correct, and that says a lot about our society.
Something about that mp3 actually feels disturbing. Is it normal for that type of model to attempt communication that way?
Struggling to find the words, but the synthetic voice directly addressing the prompt feels really surreal.
No, it's not normal. The output is almost always song lyrics annotated with markup like [Bridge], [Chorus] etc. I think they're using something from OpenAI with a system prompt and/or domain-specific training on top.
It's not a pure AI output - I generated a bunch of lyrics in text (which doesn't use credits), selected the best one (obviously), padded them out with some repetition, entered a style, generated the audio a few times, selected my favourite audio, and edited the audio (poorly) by repeating a few bars of the intro to make it longer. You don't see the times it generated lyrics about X.509 certificates (even though the prompt was for them to be a valid X.509 certificate) or the times the vocals were unintelligible.
Here's another good version of the song with a different style: https://suno.com/song/2775f188-7582-4970-ac71-5a3b82e39a04?s...
Here are two versions that are disqualified because you can't make out the lyrics: https://suno.com/song/9cebb5b3-c336-495e-be3d-195ea338eb52?s... https://suno.com/song/c6f0e666-ce91-4494-a8b5-1232862965c1?s...
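That selection process is essentially best-of-N sampling done by hand. A minimal sketch of the same loop, where generate() and score() are hypothetical stand-ins for the generation step and for human judgment (Suno itself isn't assumed to expose any API here):

    # Best-of-N: generate several candidates, keep the highest scoring.
    # generate() and score() are hypothetical stand-ins, not a real API.
    import random

    def generate(prompt: str) -> str:
        # stand-in for one lyric/audio generation attempt
        return f"{prompt} -- draft #{random.randint(1000, 9999)}"

    def score(candidate: str) -> float:
        # stand-in for listening and judging ("selected the best one")
        return random.random()

    prompt = "song lyrics that are also a valid X.509 certificate"
    candidates = [generate(prompt) for _ in range(8)]
    best = max(candidates, key=score)
    print(best)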
---
I think generative AI does work as a toy. You can ask for all sorts of insane nonsense and laugh at what the program spits out to fulfil your request. I was a paying customer of AI Dungeon 2 (before the incident where OpenAI and/or the Mormons broke it in a poor attempt to impose safety rules).
I didn't keep any lyrics failures, but at the time, I was playing around with requesting songs that were also valid computer files, so here's one that went well: a "religious folk song that is also a valid Cisco configuration file", with the style changed to trance after the lyrics were generated: https://suno.com/song/32aa6d33-0f9f-4d3b-ad53-46a5fe238916?s... and another: https://suno.com/song/32aa6d33-0f9f-4d3b-ad53-46a5fe238916?s...
Juniper doesn't work as well because of the punctuation - it can generate lyrics with braced blocks, but they don't sound like anything: https://suno.com/song/32a0d70c-c9c9-468e-8905-67669c6b90d4?s...
Here's "a religious folk song that is also a valid COBOL program, without any English words": https://suno.com/song/b75aae68-9c1e-46e5-94d4-8bc63387640e?s...
Here are some that aren't configuration files but just sound cool. Prompt was something like "Write a song about a technological dystopia where everyone can only speak BGP." https://suno.com/song/1866516b-e133-47a5-a0ac-23ccb36f81ab?s... . This one's probably a song about "network protocols and their pros and cons": https://suno.com/song/23584394-7058-4bc1-8187-b3d286d36ec4?s...
And while I'm looking at my Suno outputs list, the reason I ever bothered to use it was to see if it could render these lyrics as a ripoff of "Pure Imagination" from Willy Wonka (it cannot because it only makes actual music): https://suno.com/song/19d1a90d-9ed6-4087-94e5-89e41363726e?s...
(I'm assuming that you can open these pages just by having the links. Some of them are set to public visibility.)
Meaning is in the eye of the beholder. Just look at how many people enjoyed this and said it was "just what they needed", despite it being composed entirely of AI-generated music: https://www.youtube.com/watch?v=OgU_UDYd9lY
Honestly wondering: how do you know it was AI-generated?
There's a "Altered or synthetic content" notice in the description. You can also look at the rest of the channel's output and draw some conclusions about their output rate.
(To be clear, I have no problem with AI-generated music. I think a lot of the commenters would be surprised to hear of its origin, though.)
> By embracing that complementary relationship, we can advance as a society and be free to do the things that only humans can do.
This complementarity already exists in our brains. We have evolutionarily older parts of the brain that deal with our basic needs through emotions, and an evolutionarily younger neocortex that deals with rational thought. They have a complicated relationship; both can influence our actions through mutual interaction. Morality is managed by both, and neither of them is necessarily more "humane" than the other.
In my view, AI will be just another layer, an additional neocortex. Our biological neocortex is capable of tracking the un/cooperative behavior of around 100 people in the tribe, and allows us to learn a couple of useful skills for life.
The "personal AI neocortex" will track behavior of 8 billion people on the planet, and will have mastery of all known skills. It is gonna change humans for the better, I have little doubt about it.
> One wonders what Asimov would make of the world of 2025, and whether he’d still see artificial and natural intelligence as complementary, rather than in competition.
I mean, I just got done watching a presentation at Google Next where the presenter talked to an AI agent and set up a landscaping appointment with a price match, and a person could intervene to approve the price match.
It's cool, sure, but understand, that agent would absolutely have been a person on a phone five years ago, and if you replace them with agentic AI, that doesn't mean that person has gone away or is now free to write poetry. It means they're out of an income and benefits. And that's before you consider the effects on the pool of talent you're drawing from when you're looking for someone to intervene on behalf of these agentic AIs, like that supervisor did when they approved the price match. If you don't have the entry-level person, you don't have them five years later when you want to promote someone to manage.
Another thing I have noticed with automation in general is that the more you use it, the less you understand the thing being automated. I think the reason why a lot of things today are still being manually done is because humans inherently understand that for both short AND long term success with a task, a conceptual understanding of the components of the system, whether that is partially or fully imagined in the case of complex business scenarios, is necessary, even though it lengthens time to complete in the short term. How do you modify or grow a system you do not understand? It feels like you're cutting a branch at a certain length and not allowing it to grow beyond where you've placed the automation. I will be interested to see the outcome of the increased push today for advanced automation in places where the business relies on understanding of the system to make adjacent decisions/further business operations.
Asimov's story The Feeling of Power seems relevant: https://en.wikipedia.org/wiki/The_Feeling_of_Power
In theory, the economy should create new avenues. Labour costs are lower, goods and services get cheaper (inflation adjusted) and the money is spent on things that were once out of reach.
In practice I fear that the savings will make the rich richer, drive down labour's negotiating power and generally fail to elevate our standard of living.
Not necessarily. The reality is the landscaping guy is struggling to handle callbacks or is burning overhead. Even then, two girls in the office hit a ceiling where it doesn't scale, and quickly you're in a call center scenario.
Call-center-based services always suck. I remember going to a talk where American Express, which operated best-in-class call centers, found that 75% of their customers don't want to talk to them. The people are there because that's needed for a complex relationship; the more stuff you can address earlier in the funnel, the better.
Customers don’t want to talk to you, and ultimately serving the customer is the point.
The 1980 version of your comment:
>Just saw a demo of a new word processor system that lets a manager dictate straight into the machine, and it prints the memo without a secretary ever touching it. Slick stuff. But five years ago, that memo would’ve gone through a typist. Replace her with a machine, and she’s not suddenly editing novels from home. She’s unemployed, losing her paycheck and benefits.
And when that system malfunctions, who’s left who actually knows how to fix it or manage the workflow? You can’t promote experience that never existed. Strip out the entry-level roles, and you cut off the path to leadership.
The difference between the 1980 version of my post and the 2025 version of my post is that in 1980 there was conceivably a future where the secretary could retrain to do other work (likely with the help of one of those new-fangled microcomputers) that would need human intelligence in order to be completed.
The 2025 equivalent of the secretary is potentially looking across a job market that is far smaller because the labor she was trained to do, or labor similar enough to it that she could have previously successfully been hired, is now handled by artificial intelligence.
There is, effectively, nowhere for her to go to earn a living with her labor.
How can we reconcile this with how much of the US and world are still living as if it were the 1930s or even 1850s?
Travel 75 to 150 miles outside of a US city and it will feel like time travel. If so much is still 100 years behind, how will civilization so broadly adopt something that is yet more decades into the future?
I got into Starlink debates with people during Hurricane Helene. Folks were glowing over how people just needed internet. In reality, internet meant fuck all when what you needed was someone with a chainsaw, a generator, heater, blankets, diapers and food.
Which is to say, technology and its importance are a thin veneer on top of organized society, all of which is frail; even recent technology still has a long way to go to fully penetrate rural communities. At the same time, that spread is less important than it would seem to a technologist. Technology has not uniformly spread everywhere, and ultimately it is not that important. Yet how will AI, which is even more futuristic, leapfrog this? My money is that rural towns USA will look almost identical in 30 years from now. Many look identical to 100 years ago still.
Who do you think voted for Trump? You point out that it's perfectly possible to live a "simple" rural life.
I see https://en.wikipedia.org/wiki/Beggars_in_Spain and the reason why they vote the way they do. Modern society has left them behind, abandoned them, and not given them any way to keep up with the rest of the US. Now they're getting taken advantage of by the wealthy like Trump, Murdoch, Musk, etc. who use their unhappiness to rage against the machine.
> My money is that rural towns USA will look almost identical in 30 years from now.
You mean poor, uneducated and without any real prospects of anything like a career? Pretty much. Except there will be far more people who are impoverished and with no hope for the future. I don't see any of this as a good thing.
Not quite comparable; these systems will continue to grow in capacity until there is nothing for your average human to be able to reskill to. Not only that, they will truly be beyond our comprehension (arguably, they already are: our interpretability work is far from where it would need to be to safely build towards a superintelligence, and yet...)
If your argument is that all of that happened and it all turned out fine: are you sure we (socioeconomically, on average) are better off today than we were in the 1980s?
Probably depends who you refer to by "we". On a global level, the answer is definitely yes.
Extreme poverty decreased, child mortality decreased, literacy and access to electricity has gone up.
Are people unhappier? Maybe. But not because they lack something materially.
I think in this case it's fair to assume what I meant was "the secretaries whose jobs were replaced in the 80s and people like them", or "the people whose jobs will be replaced with AI today"; not "literally the poorest and least educated people on the planet, whose most basic needs go unmet every day."
I am sure of that. I think people forget the difference in living conditions then.
Things that were common in that era that are rare today:
1. Living in shared accommodation. It was common then for people to live in boarding houses and bedsits as adults. Today these are largely extinct. Generally, the living space per person has increased substantially at every level of wealth. Only students live in this sort of environment today, and even then it is usually a flat (i.e. sharing with people you know on an equal basis), not a bedsit/boarding house (i.e. living in someone's house according to her rules: no ladies in gentlemen's bedrooms, no noise after 8pm, etc.).
2. Second-hand clothes and repaired clothes. Most people today wear new clothes; those who buy second-hand do it because it is trendy, not because that is all they can afford. Nobody really repairs anything; people just buy new. Nobody darns socks or puts elbow patches on jackets where they have worn out. Only people who buy expensive shoes get their shoes resoled. Normal people just buy cheap shoes more often, and they really do save money doing this.
Today the woman that would have been a typist has a different job, and a more productive one that pays more.
If the AI transition really turns into an Artificial Labor revolution - if it really works and isn't an illusion - then we're going to have to have a major change in how we distribute wealth. The bad future is one where the owner class no longer has any use for human labor and the former-worker class has nothing.
TBH this is already how the US got into the current mess.
But we have had the same thing happen constantly. Automation isn't new. How many individuals are involved in assembling a car today vs in the 1970s? An order of magnitude fewer. But there aren't loads of unemployed people. The market puts labour where it is needed.
Automation won't obsolete work and workers; it will make us more productive, and our desires will increase. We will all expect what today are considered luxuries only the rich can afford. We will all have custom software written for our needs. We will all have individual legal advice on any topic we need advice on. We will all have bigger houses with more stuff in them, better finishings, triple-glazed windows, and on and on.
Yeah, and then what? I don't think desire is infinite.
It is uncapped and indefinite. People always want more than they have. We get used to what we have. What was considered a luxury is baseline today. Today's luxuries will before long be considered part of the "poverty line".
> if you replace them with agentic AI, that doesn't mean that person has gone away or is now free to write poetry. It means they're out of an income and benefits.
That's capitalism for ye :/ Join us on the UBI train.
Say, have you ever read the book 'Bullshit Jobs'...
> That's capitalism for ye :/ Join us on the UBI train.
The people with all of the money effectively froze wages for 45 years, and that was when there were people actually doing labor for them.
What makes you think that they'll peaceably agree to UBI for people who don't sell them labor for money?
> The people with all of the money effectively froze wages for 45 years
Yep. And they didn't accomplish that 'peaceably' either, for the record. A lot of people got murdered, and many more were smeared, threatened, imprisoned, etc. Entire countries got decimated.
> What makes you think that they'll peaceably agree to UBI for people who don't sell them labor for money?
I don't imagine for a moment that they'll like UBI. There is no shortage of examples over recent millennia of how far the parasite class will go to keep the status quo.
History also shows us that having all the money doesn't guarantee that people will do things your way. Class awareness, strikes, unions, protest, and alternative systems/technological advance have shown their mettle. These things scare oligarchs because they work.
I am hoping that will be our saving grace this time around as well, but my fear is that the oligarchs will control more autonomous power than we can meaningfully resist, and our existence will no longer be strictly necessary for their systems to operate.
The dark humor in this is that any such technologically advanced future where humans have a meaningful say will eventually look like one of abundant luxury communism: it's just that the oligarchs' version will have a lot of people die first before the oligarchs enjoy their abundance.
The third option is that the oligarchy fully internalizes its pursuit of ruthless concentration of power. But in that case, someone will probably create an AI that's better at playing the power game, and at that point, it's over for the oligarchs.
Wages haven't been frozen for 45 years in real terms. They have gone up considerably.
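"In real terms" just means deflating nominal dollars by a price index before comparing. A minimal sketch; the 1980 CPI level is roughly right, but the wages and the 2025 index are illustrative assumptions, not data:

    # Compare purchasing power by converting a 1980 wage into 2025 dollars.
    # Wages here are hypothetical; CPI levels are approximate.
    wage_1980 = 7.25      # assumed nominal hourly wage, 1980
    wage_2025 = 30.00     # assumed nominal hourly wage, 2025
    cpi_1980 = 82.4       # CPI-U annual average, 1980 (approx.)
    cpi_2025 = 320.0      # assumed CPI-U level, 2025 (approx.)

    wage_1980_real = wage_1980 * (cpi_2025 / cpi_1980)
    print(f"1980 wage in 2025 dollars: ${wage_1980_real:.2f}")  # ~$28.16
    # Real wages rose only if the 2025 wage exceeds that figure.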
Compare wages to productivity [0]. Or compare the rise in wages to the rise in housing costs [1].
The vast majority of the gains in productivity have been captured and funneled upward.
0 - https://assets.weforum.org/editor/HFNnYrqruqvI_-Skg2C7ZYjdcX...
That graph is misinformation. It deliberately excludes the wages of the most productive workers (but includes their productivity) which makes it meaningless.
Seeing that the creativity most people employ goes toward selfish loopholes and inconsiderate behaviour, I am a little wary of empowering them.
Most creative work is benevolent or at least harmless. Certainly some people are malevolent, maybe even everybody some of the time, but you shouldn't believe that to represent the majority of creativity. That's way too misanthropic.
Asimov is probably my least favorite major science fiction author (that I've read a significant number of works from).
Something about his worldview always seemed off to me, although I didn't know he actually seriously held such utopian convictions about AI. It explains an awful lot of the way his stories are.
The final part of this article is the main point, not the headline.
Isaac Asimov's view of the future has aged surprisingly well. But techno-utopianism has not.
I let Gemini 2.5 Pro (the image is from ChatGPT) write a short sci-fi story. I think it did a decent job.
https://show.franzai.com/a/tiny-queen-zebu
Ask it to count the words.
your link is broken now
fixed
Reminds me of Jacque Fresco (Venus Project)!
I don't think Asimov envisioned a world where AI would be controlled by a clique of ultra-wealthy oligarchs.
Asimov’s future was pretty dark. He didn’t come out and say it, but it was implied that we had a lot of big entities ruling everything. Many of the negative political people were painted as “populist” figures.
If you are a fan of the foundation books, recall that many of the leaders of various factions were a bunch of idiots little different than the carnival barkers we see today.
As I recall, many of his early stories involved "U.S. Robot & Mechanical Men" which was a huge conglomerate owning a lot of the market on AI (called "robots" by Asimov, it included "Multivac" and other interfaces besides humanoid robots).
Yes. When I hear dreams of the past it makes me nostalgic, because they all come from a pre-exploited era of tech, with the underlying subtext that humanity is unified in wanting tech to be used for good purposes. The reality is that tech is a vessel for traditional enrichment, just as resource wars over, say, oil or land have been. Both domestically and geopolitically, tech is seen that way today. In such a world, tech advancements offer opportunities for the powerful to grab more, changing the relative distribution of power in their favor. If tech shows us anything, it is that this relative notion of wealth or social posturing is the central axis around which humans align themselves, wherever on the socioeconomic ladder you are and independent of absolute and basic needs.
>because they all come from a pre-exploited era of tech, with the underlying subtext that humanity is unified in wanting tech to be used for good purposes.
That's the problem with being nostalgic for something you possibly didn't even live through. You don't remember all the other ugly complexities that don't fit your idealized vision.
Nothing about the world of the sci-fi golden age was less exploitative or less prone to human misery than today's world. If anything, it was far worse than what we have today in many ways (excluding perhaps the reach of the surveillance state).
Some of the US government's worst secret experiments against the population come from that same time, and the naive faith of the population in their "leaders" made propaganda by centralized big media outlets all the more pervasively powerful. At the same time, social miseries were common, and many more people faced strict limits on their economic and social opportunities. As for technology being used for good purposes, bear in mind that, among many other nasty things being done, the 50s and 60s were a time in which several governments flagrantly tested thousands of nukes out in the open: in the skies, above ground and in the oceans, with hardly a care in the world or any serious public scrutiny. If you're looking at that gone world with rose-tinted glasses, I'd suggest instead using rose-tinted welding goggles.
The world of today may be full of flaws, but the avenues for breaking away from controlled narratives and controlled economic rules are probably broader than they've ever been.
You are entirely right to call me out on that. But I would like to say that the sci-fi that dealt with computers, AI, and automation consisted of dreams of a different world, because those technologies hadn't been exploited yet. Even many of the dystopias feel innocent with today's knowledge of where it went. Such as 1984, imo.
There are some dreams of the past like that, but most sci-fi tends to be quite dark, like The Matrix or Terminator. In practice a lot of tech proves to be helpful in not very sci-fi-like ways, like antibiotics, phones, etc. Human nature is still what it is, though.
I remember reading his book 'The Naked Sun' back in high school, and one of the things that stuck with me was how Earth was kind of a dump bereft of robots, while the Spacer humans were incredibly rich, had a low population, and had a society run by robots doing all the menial work. You could argue he envisioned our current world, even if accidentally.
>Asimov’s future was pretty dark. He didn’t come out and say it, but it was implied that we had a lot of big entities ruling everything.
>As I recall, many of his early stories involved "U.S. Robot & Mechanical Men" which was a huge conglomerate owning a lot of the market on AI...
>May want to reread. U.S. Robots and Mechanical Men is pretty prominent in his Robot stories.
Good points from some of these replies. The interview is fairly brief, perhaps he didn't feel he had the time to touch on the socio-economic issues, or that it wasn't the proper forum for those concerns.
May want to reread. U.S. Robots and Mechanical Men is pretty prominent in his Robot stories.
Or that it would be aggressively focused on doing the work of already low-paid creative-field jobs. I don't want to read an AI's writing if there's a person who could write it.
92 huh? That is an opinion from a long time ago.
The question I have is why AI technology is being so aggressively advertised nowadays, and why none of it seems to be liberating in any way.
Once, the plow liberated humans from some kinds of work. Some time later it was just a tool that slaves, very much not liberated, used to tend rich people's farms.
Technology is tricky. I don't trust who is developing AI to be liberating.
The article also plays on the "favorite author" thing. It knows many young folk see Asimov as a role model, so it is leveraging that emotional connection to gather conversation around a topic that is not what it seems to be. I consider it a dirty trick. It is disgraceful given the current world situation (AI being used for war, surveillance, brainwashing).
We are better than this.
>why AI technology is being so aggressively advertised nowadays[?]
I'm not sure I've actually seen an advertisement for AI. It's being endlessly discussed though on HN and other places, probably because it's at an interesting point at the moment making rapid progress. And also shoved into a lot of products and services of course.
The definition of advertisement is the least important part of my comment.
Focus on what matters for humans.
It is an interesting time for LLMs to burst on the scene. Most online forums have already turned people into text replicators. Most HN commenters can be prompted into “write a comment about slop violating copyright” / “write a comment about Google violating privacy” / “write a comment about managers not understanding remote work”. All you have to do is state the opposite.
A perfect time for LLMs to show up and do the same. The subreddit simulators were hilarious because of the unusual ways they would perform but a modern LLM is a near perfect approximation of the average HN commenter.
I would have assumed that making LLMs indistinguishable from these humans would make those kinds of comments less interesting to interact with but there’s a base level of conversation that hooks people.
On Twitter, LLM-equipped Indians cosplay as right-wing white supremacists and amass large followings (also bots, perhaps?), revealed only when they have to participate in synchronous conversation.
And yet, they are still popular. Even the “Texas has warm water ports” Texan is still around and has a following (many of whom seem non-bot though who can tell?).
Even though we have a literal drone, humans still engage in drone behaviour, and other humans still engage them. Fascinating. I wonder whether the truth is that the inherent past-replication of low-temperature LLMs is more likely to fix us to our present state than to raise us to a new equilibrium.
Experiments in Musical Intelligence is now over 40 years old and I thought it was going to revolutionize things: unknown melodies discovered by machine married to mind. Maybe LLMs aren’t going to move us forward only because this point is already a strong attractor. I’m optimistic in the power of boredom, though!
> I would have assumed that making LLMs indistinguishable from these humans would make those kinds of comments less interesting to interact with but there’s a base level of conversation that hooks people.
I think it is heading in this direction; it just takes a very long time. 50% of people are dumber than average.
Dumber than median*
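The correction matters whenever the distribution is skewed: the median always splits the population 50/50, but the mean needn't. A tiny sketch with made-up numbers:

    # In a right-skewed distribution most values sit below the mean,
    # while the median splits the group exactly in half by definition.
    from statistics import mean, median

    scores = [90] * 9 + [200]     # nine typical values plus one outlier
    print(mean(scores))           # 101
    print(median(scores))         # 90.0
    below = sum(s < mean(scores) for s in scores)
    print(f"{below}/10 are below the mean")   # 9/10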
“Texas has warm water ports” is more the hallmark of Russian propagandists. I think LLMs go more for saying 'delve' and odd hyphens and stuff?