I don't get his argument, and if it weren't Martin Fowler I would just dismiss it. He admits himself that it's not an abstraction over the previous activity, as it was with HLLs, but rather a new activity altogether - that is, prompting LLMs for non-deterministic outputs.
Even if we assume there is value in it, why should it replace (even if in part) the previous activity of reliably making computers do exactly what we want?
Funny, I dismiss the opinion based on the author in question.
Serious question - why? I know of the author but don’t see a reason to value his opinion on this topic more or less because of this.
(Attaching too much value to the person instead of the argument is more of an ‘argument from authority’)
Let's just say I think a lot of damage was caused by their OOP evangelism back in the day.
You don't think the damage was done by the people who religiously follow whatever loudmouths say? Those are the people I'd stop listening to, rather than ignoring what an educator says when sharing their perspective.
Don't get me wrong, I feel like Fowler is wrong about some things too, and wouldn't follow what he says as dogma, but I wouldn't blame him for companies chasing after the latest fad.
Perhaps. Then again, advocating things like Singleton as anything beyond a glorified global variable is pretty high on my BS list.
An example: https://martinfowler.com/bliki/StaticSubstitution.html
> glorified global variable is pretty high on my BS list
Say you have a test that asserts the output of some code, and that code uses a global variable of some kind. How do you ensure you can have tests that use different values for that global variable and still have everything work? You'd need to be able to change it during tests somehow - something like the sketch at the end of this comment.
Personally, I think a lot of the annoying parts of programming, this one included, go away when you use a more expressive language (like Clojure). But for other languages, you might need to work around the limitations of the language, and then approaches like using Singletons might make more sense.
At the same time, Fowler's perspective is pretty much always in the context of "I have this piece of already written code I need to make slightly better". Obviously the easy way is to not have global variables in the first place, but when working with legacy code you do stumble upon one or three non-optimal conditions.
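To make that concrete, here is a minimal sketch (in Python, with made-up names) of the substitution idea the StaticSubstitution page describes - a test-only seam that swaps the global and lets you restore it afterwards:

    # Hypothetical sketch: a module-level "global" plus a test-only seam to swap it.
    _config = {"currency": "USD"}            # the glorified global variable

    def get_config():
        return _config

    def substitute_config(new_config):
        """Swap the global for tests; return the old value so it can be restored."""
        global _config
        old, _config = _config, new_config
        return old

    def format_price(amount):                # production code that reads the global
        return f"{amount} {get_config()['currency']}"

    def test_format_price_in_euros():        # e.g. run with pytest
        old = substitute_config({"currency": "EUR"})
        try:
            assert format_price(10) == "10 EUR"
        finally:
            substitute_config(old)           # restore so other tests see the default

In Clojure you would reach for a dynamic var or with-redefs instead, which is part of why the problem feels smaller there.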
You need to understand that Mr. Fowler works for a consultancy.
LLMs sound great for consultants. A messy hyped technology that you can charge to pretend to fix? Jackpot.
Everything these consultancies eventually promote comes from learnings they had with their own clients.
The OOP patterns he described in the past likely came from observing real developers while being in this consultant role, and _trying_ to document how they overcame typical problems of the time.
I have a feeling that the real people with skin in the game (not consultants) who came up with that stuff would describe it in much simpler terms.
Similarly, it is likely that some of these posts are based on real experience but "consultancified" (made vague and more complex than it needs to be).
Because unreliably solving a harder problem with LLMs is much more valuable than reliably solving an easier problem without.
I'm pretty deep into these things and have never had them solve a harder problem than I can solve. They just solve problems I can solve much, much faster.
Maybe that does add up to solving harder, higher-level real-world problems (business problems) from a practical standpoint - perhaps that's what you mean, rather than technical problems.
Or maybe you're referring to producing software which utilizes LLMs, rather than using LLMs to program software (which is what I think the blog post is about, but we should certainly discuss both.)
> solve a harder problem than I can solve
If you've never done web-dev and want to create a web-app, where does that fall? In principle you could learn web-dev in 1 week/month, so technically you could do it.
> maybe you're referring to producing software which utilizes LLMs
But yes, this is what I meant: outsourcing "business logic" to an LLM instead of trying to express it in code.
OK, so we have two classes of problems here - ones worth solving unreliably, and ones that are better solved without LLMs. Doesn't sound like a next level of abstraction to me.
The story of programming is largely not one of humans striving to be more reliable when programming, but of putting up better defenses against our own inherent unreliabilities.
When I watch juniors struggle, they seem to think it's because they don't think hard enough, whereas it's usually because they didn't build enough infrastructure to keep themselves from needing to think too hard.
As it happens, when it comes to programming, LLM unreliabilities seem to align quite closely with ours, so the same guardrails that protect against human programmers' tendencies to fuck up (mostly tests and types) work pretty well for LLMs too.
I was thinking more along this line: you can solve unreliably 100% of the problem with LLMs, or solve reliably only 80% of the problem.
So you trade reliability to get to that extra 20% of hard cases.
Which harder problems are LLMs going to (unreliably) solve in your opinion?
Anything which requires "common sense".
A contrived example: there are only 100 MB of disk space left, but 1 GB of logs to write. LLM discards 900 MB of logs and keeps only the most important lines.
Sure, you can nitpick this example, but it's the kind of edge-case handling where LLMs can "do something reasonable" that previously required hard coding and special casing.
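Purely as an illustration of that "do something reasonable" idea - the model name, prompt and line budget below are all made up - the glue code could look something like this:

    # Hypothetical sketch: ask a model to triage a chunk of logs under a line budget.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def keep_important_lines(log_chunk: str, max_lines: int) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",   # placeholder model
            temperature=0,
            messages=[
                {"role": "system",
                 "content": "You triage application logs. Return only the most "
                            "important lines, verbatim, one per line."},
                {"role": "user",
                 "content": f"Keep at most {max_lines} lines from this chunk:\n{log_chunk}"},
            ],
        )
        return resp.choices[0].message.content

Which is also where the reliability trade-off in this subthread bites: nothing guarantees the surviving lines are the ones you will need during the incident.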
Abstractions for high-level programming languages have always gone in multiple directions (or dimensions, if you will). Operations in higher-level languages abstract over multiple simpler operations in lower-level languages, but they also allow for abstraction over human concepts, by introducing variable names for example. Variable names are irrelevant to a computer, but highly relevant to humans.
Languages are created to support computers as well as humans. And to most humans, abstractions such as those presented by, say, Hibernate annotations, are as non-deterministic as can be. To the computer it is all the same, but that is becoming less and less relevant, given that software is growing and has to be maintained by humans.
So, yes, LLMs are interesting, but not necessarily that much of a game-changer when compared to the mess we are already in.
I'm programming software development workflows for Claude in plain English (custom commands). The non-determinism is indeed a (tiny!) bit of a problem, but just tell Claude to improve the command so next time it won't make the same mistake. One time it added an implementation section to the command. Pretty cool.
This is the big game changer: we have a programming environment where the program can improve itself. That is something Fortran couldn’t do.
(Bit off topic, but I wish someone had told me this a few months ago: for regular code generation, use TDD and a type-safe language like Go - fast compile times and excellent testing support. Don't aim for the magical huge waterfall prompt.)
How many bandwagons has this guy jumped on? Now he says that LLMs will be the new high level programming languages but also that he listens to colleagues and hasn't really tried them yet.
I suppose he is aiming for a new book and speaker fees from the LLM industrial complex.
It is a new nature of abstraction, not a new level.
UP: It lets us state intent in plain language, specs, or examples. We can ask the model to invent code, tests, docs, diagrams—tasks that previously needed human translation from intention to syntax.
BUT SIDEWAYS: Generation is a probability distribution over tokens. Outputs vary with sampling temperature, seed, context length, and even with identical prompts.
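To make the "sideways" part concrete: sampling is roughly a temperature-scaled softmax over the model's scores for the next token. A toy sketch (not any particular model's implementation):

    # Toy sketch: how temperature and the RNG seed shape next-token sampling.
    import math, random

    def sample_next_token(logits: dict, temperature: float, seed: int) -> str:
        rng = random.Random(seed)
        if temperature == 0:                    # greedy: always take the top-scoring token
            return max(logits, key=logits.get)
        weights = [math.exp(score / temperature) for score in logits.values()]
        return rng.choices(list(logits), weights=weights, k=1)[0]

    scores = {"cat": 2.0, "dog": 1.9, "pelican": 0.5}          # made-up scores
    print(sample_next_token(scores, temperature=0, seed=1))    # always "cat"
    print(sample_next_token(scores, temperature=1.0, seed=1))  # depends on seed and temperature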
Surely given an identical prompt with a clean context and the same seed the outputs will not vary?
You can make these things deterministic for sure, and so you could also store prompts plus model details instead of code if you really wanted to. Lots of reasons this would be a very very poor choice but you could do it.
I don't think that's how you should think about these things being non-deterministic though.
Let's call that technical determinism, and then introduce a separate concept, practical determinism.
What I'm calling practical determinism is your ability as the author to predict (determine) the results. Two different prompts that mean the same thing to me will give different results, and my ability to reason about the results from changes to my prompt is fuzzy. I can have a rough idea, I can gain skill in this area, but I can't gain anything like the same precision as I have reasoning about the results of code I author.
+ temperature=0.0 would be needed for reproducible outputs. And even with that, whether it's actually reproducible depends on the model/weights themselves - not all of them are, even when all those things are static. And finally it depends on the implementation of the model architecture as well.
I think the tricky part is that we tend to think that prompts with similar semantic meaning will give the same outputs (like a human), while LLMs can give vastly different outputs if you have a single spelling mistake, for example, or use "!" instead of "?"; the effect varies greatly per model.
Hmm, I'm barely even a dabbler, but I'd assumed that the seed in question drove the (pseudo)randomness inherent in "temperature" - if not, what seed(s) do they use and why could one not set that/those too?
To your second part I wouldn't make that assumption - I can see how a non-technical person might, but surely programmers wouldn't? I've certainly produced very different output from that which I intended in boring old C with a mis-placed semi-colon after all!
> Hmm, I'm barely even a dabbler, but I'd assumed that the seed in question drove the (pseudo)randomness inherent in "temperature" - if not, what seed(s) do they use and why could one not set that/those too?
Implementations and architectures are different enough that it's hard to say "It's like X" in all cases. Last time I tried to achieve 100% reproducible outputs, which obviously includes hard-coding various seeds, I remember not getting reproducible outputs unless I also set the temperature to 0. I think this was with Qwen2 or QwQ used via Hugging Face's Transformers library, but I cannot find the exact details now.
Then in other cases, like the hosted OpenAI models, they straight up say "temperature to 0 makes them mostly deterministic", but I'm not exactly sure why they are unable to offer endpoints with full determinism.
> I can see how a non-technical person might, but surely programmers wouldn't?
When talking even with developers about prompting and LLMs, there are still quite a few people who are surprised that "You are a helpful assistant." would lead to different outputs than "You are a helpful assistant!". I think whether you're a programmer matters less; it's more about understanding how LLMs actually work.
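For reference, this is roughly what pinning those knobs looks like against an OpenAI-style chat endpoint. A sketch only: the model name is a placeholder, and "seed" is documented as best effort rather than a guarantee.

    # Sketch: request completions that are as deterministic as the backend allows.
    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",   # placeholder
            temperature=0,         # remove sampling randomness
            seed=42,               # pin the RNG where the backend honours it
            messages=[{"role": "user", "content": prompt}],
        )
        # system_fingerprint changes when the serving stack changes, which can
        # still change outputs even with a fixed seed and temperature.
        print(resp.system_fingerprint)
        return resp.choices[0].message.content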
> I think the tricky part is that we tend to think that prompts with similar semantic meaning will give the same outputs (like a human)
Trust me, this response would have been totally different if I were in a different mood.
This is too abstract; a concrete example of what this looks like in the output is needed.
I'm in the process of actually building LLM based apps at the moment, and Martin Fowler's comments are on the money. The fact is seemingly insignificant changes to prompts can yield dramatically different outcomes, and the odd new outcomes have all these unpredictable downstream impacts. After working with deterministic systems most of my career it requires a different mindset.
It's also a huge barrier to adoption by mainstream businesses, which are used to working to unambiguous business rules. If it's tricky for us developers it's even more frustrating to end users. Very often they end up just saying, f* it, this is too hard.
I also use LLMs to write code and for that they are a huge productivity boon. Just remember to test! But I'm noticing that use of LLMs in mainstream business applications lags the hype quite a bit. They are touted as panaceas, but like any IT technology they are tricky to implement. People always underestimate the effort necessary to get a real return, even with deterministic apps. With non-deterministic apps it's an even bigger problem.
Some failure modes can be annoying to test for. For example, if you exceed the model's context window, nothing will happen in terms of errors or exceptions, but the observable performance on the task will tank.
Counting tokens is the only reliable defence I've found against this.
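Something along these lines, say (a sketch: the budget is a placeholder and the right encoding depends on the model):

    # Sketch: estimate prompt size before sending, so truncation never happens silently.
    import tiktoken

    CONTEXT_BUDGET = 8_000  # placeholder: stay well under the model's real window

    def fits_in_context(messages: list) -> bool:
        enc = tiktoken.get_encoding("cl100k_base")  # approximation; pick per model
        used = sum(len(enc.encode(m["content"])) for m in messages)
        return used <= CONTEXT_BUDGET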
If you exceed the context window the remote LLM endpoint will throw you an error which you probably want to catch - or rather, you want to catch that before it happens and deal with it. Either way, it's usually not a silent error that goes unnoticed. What makes you think that?
> If you exceed the context window the remote LLM endpoint will throw you an error which you probably want to catch
Not every endpoint works the same way. I'm pretty sure LM Studio's OpenAI-compatible endpoints will silently (from the client's perspective) truncate the context rather than throw an error. It's up to the client to make sure the context fits in those cases.
OpenAI's own endpoints do return an error and refuse if you exceed the context length, though. I think I've seen others use the "finish_reason" attribute too, to signal that the context length was exceeded, rather than setting an error status code on the response.
Overall, even "OpenAI-compatible" endpoints often aren't 100% faithful reproductions of the OpenAI endpoints, sadly.
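A cheap defensive check against the OpenAI-style response shape, for servers that report truncation via "finish_reason" instead of an error (model name and prompt are placeholders):

    # Sketch: treat a "length" finish_reason as an incomplete answer rather than trusting it.
    from openai import OpenAI

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder
        messages=[{"role": "user", "content": "Summarise this log..."}],
    )
    choice = resp.choices[0]
    if choice.finish_reason == "length":
        raise RuntimeError("completion hit a token limit; output is truncated")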
That seems like terrible API design to just truncate without telling the caller. Anthropic, Google and OpenAI all will fail very loudly if you exceed the context window, and that's how it should be. But fair enough, this shouldn't happen anyway and the context should be actively handled before it blows up either way.
> That seems like terrible API design to just truncate without telling the caller
Agree, confused me a lot the first time I encountered it.
It would be great if implementations/endpoints could converge, but with OpenAI moving to the Responses API rather than ChatCompletion, yet the rest of the ecosystem seemingly still implementing ChatCompletion with various small differences (like how to do structured outputs), it feels like it's getting further away, not closer...
It's complicated; for example, some models (o3) will throw an error if you set temperature.
What do you do if you want to support multiple models in your LLM gateway? Do you throw an error if a user sets temperature for o3, thus dumping the problem on them? Or just ignore it, but potentially creating confusion because temperature will seem to not work for some models?
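For illustration, the two options boil down to something like this (the model/parameter mapping is made up; real gateways track per-model capabilities differently):

    # Sketch: a gateway either rejects unsupported sampling params or silently drops them.
    UNSUPPORTED_PARAMS = {"o3": {"temperature"}}  # illustrative mapping, not exhaustive

    def prepare_request(model: str, params: dict, strict: bool = True) -> dict:
        blocked = UNSUPPORTED_PARAMS.get(model, set()) & params.keys()
        if blocked and strict:
            # option 1: fail loudly and dump the problem on the caller
            raise ValueError(f"{model} does not accept: {sorted(blocked)}")
        # option 2: ignore the params, at the risk of confusing the caller
        return {k: v for k, v in params.items() if k not in blocked}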
I'm a big fan of fail early and fail loudly.
Interesting, the completion return object is documented but there's no error or exception field. In practice the only errors I've seen so far have been on the HTTP transport layer.
It would make sense to me for the chat context to raise an exception. Maybe I should read the docs further…
I respect Martin Fowler greatly but those who, by their own admission, have not used current AI coding tools really don't have much to add regarding how they affect our work as developers.
I do hope he takes the time to get good with them!
> have not used current AI coding tools really don't have much to add regarding how they affect our work as developers
I dunno, sometimes it's helpful to learn about the perspectives of people who've watched something from afar, especially if they already have broad knowledge and context adjacent to the topic itself, and lots of people around them who are deep in the trenches and whom they've discussed it with.
A bit like historians still can provide valuable commentary on wars, even though they (probably) haven't participated in the wars themselves.
I agree. I don't use coding tools, "except to ask ChatGPT for a script every once in a while". But I experience it by reviewing and detecting LLM-generated code from consultants and juniors. It's easy to ask them for the prompts, for example, but when they use autocompletion-based LLMs, it's really hard to distinguish source from target code.
> As we learn to use LLMs in our work, we have to figure out how to live with this non-determinism. This change is dramatic, and rather excites me. I'm sure I'll be sad at some things we'll lose, but there will also things we'll gain that few of us understand yet. This evolution in non-determinism is unprecedented in the history of our profession.
The whole point of computers is that they are deterministic, such that any effective method can be automated - leaving humans to do the non-deterministic (and hopefully more fun) stuff.
Why do we want to break this up-to-now hugely successful symbiosis?
> This evolution in non-determinism is unprecedented in the history of our profession.
Not actually true. Fuzzing and mutation testing have been here for a while.
I think the whole context of the article is "program with non-deterministic tools", while non-deterministic fuzzing and mutation testing is kind of isolated to "coming up with test cases" - not something you constantly program side by side with, or even integrate into the (business side of the) software project itself. That's how I've used fuzzing and mutation testing in the past, at least; maybe others use them differently.
Otherwise yeah, there are a bunch of non-deterministic technologies, processes and workflows missing, like what Machine Learning folks have been doing for decades, which is also software and non-deterministic, but also off-topic from the context of the article, as I read it.
I just have a problem with his use of the word "unprecedented".
This is not the first rodeo of our profession with non-determinism.
Right, in testing, but not in the compiler chain
I don't understand what you mean. Can you elaborate on your perception of what a "compiler chain" is and the supposed LLM role in it?