> many verticals are simply uninvestable in the US because of labor costs and the gap of cost to manufacture is so large it's not even worth considering.
I think this is covered in a number of papers from think tanks related to the current administration.
The overall plan, as I understood it, is to devalue the dollar while keeping its reserve-currency status. A weaker dollar would make it competitive for foreign companies to manufacture in the US. The problem is that if the dollar weakens, investors will flee. But the AI boom offsets that.
For now it seems to work: the dollar lost more than 10% year to date, but the AI boom kept investors in the US stock market. The trade agreements will protect the US for a couple years as well. But ultimately it's a time bomb for the population, which will wake up in 10 years with half their present purchasing power in non-dollar terms.
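To put rough numbers on that claim (the 10% figure is from the comment above; the rest is back-of-the-envelope, not a forecast), here is a minimal sketch of the compounding involved:

```python
# Back-of-the-envelope: how fast must a currency depreciate (in non-dollar terms)
# for purchasing power to halve over a given horizon? Figures are illustrative only.
def annual_rate_to_halve(years: float) -> float:
    """Constant annual depreciation rate that halves purchasing power in `years` years."""
    return 1 - 0.5 ** (1 / years)

def remaining_power(annual_rate: float, years: int) -> float:
    """Fraction of purchasing power left after compounding depreciation."""
    return (1 - annual_rate) ** years

print(f"~{annual_rate_to_halve(10):.1%} per year halves purchasing power in 10 years")
print(f"10% per year leaves {remaining_power(0.10, 10):.0%} of it after a decade")
```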
I think an interesting way to measure the value is to argue "what would we do without it?"
If we removed "modern search" (Google) and had to go back to say 1995-era AltaVista search performance, we'd probably see major productivity drops across huge parts of the economy, and significant business failures.
If we removed the LLMs, developers would go back to Less Spicy Autocomplete and it might take a few hours longer to deliver some projects. Trolls might have to hand-photoshop Joe Biden's face onto an opossum's body like their forefathers did. But the world would keep spinning.
It's not just that we've had 20 years more to grow accustomed to Google than LLMs; it's that having a low-confidence answer or an excessively florid summary of a document is not really that useful.
Chatting with Claude about a topic is in another universe to google search.
I default to Claude for almost everything where I want to know something. I don’t trust Google’s results because of how weighted they are to SEO. Being good at SEO is a separate skill set.
The answers are not low confidence, cite sources, and can do things that Google cannot. For example: I used Claude to design a syllabus to learn about a technical domain along with quizzes and test suites for verification. It linked to video series, books, and articles grouped by an increasingly complex knowledge set.
Is this really true re: "modern search"? Genuine question because this is probably outside of my domain. I'm just trying to think of industries that would be critically affected if we went from modern search to e.g. AltaVista/Yahoo/DogPile, and I'm kind of coming up empty, except that it might be more difficult for companies that have perfected modern SEO/advertising to maintain the same level of reach, but I don't think that's what you're alluding to?
I think there's a bubble around AI, but I don't think I agree with this argument. Google search launched in 1998, and ChatGPT launched in 2022.
In 2001, if Google had gone under like a lot of .com bubble companies, I think the economic impact visible to people of the time would have been marginal. There was no Google News, Gmail, Android, and the alternatives (AltaVista, Ask Jeeves, MSN Search) would have been enough. Google was a forcing function for the others to compete with the new paradigm or die trying. It wasn't itself an economic behemoth the way it is today.
I think if OpenAI folded today, you'd still have several companies in the generative AI space. To me, OpenAI's reminiscent of Google in the late 90s in its impact, although culturally it's very different. It's a general purpose website anyone with an internet connection can visit, deep industry competitors are having to adapt to its model to stay alive, and we're seeing signs of a frothy tech bubble a few years after its founding. People across industry verticals, government, law, and NGOs are using it, and students are learning with it.
One counterpoint to this would be that companies like Google reacted to the rise of social media with stuff like Google+, but to me the level to which "AI" is baked into every product at Google exceeds that play by a great margin. At most I remember a "post to plus" link at the top of GMail and a few hooks within the contact/email management views. In contrast, they are injecting AI results into almost every search I make and across almost every product of theirs I use today.
If you fast forward 20 years, I would be surprised if companies specializing in LLMs were not major players the way today's tech giants are. Some of the companies might have the same names, but they'll have changed.
> At most I remember a "post to plus" link at the top of GMail and a few hooks within the contact/email management views.
Google probably could have been WhatsApp, but to push Google+ it scrapped a successful Gmail chat in favor of Hangouts, which at first you could only open by visiting the Google+ feed.
Another thing to note about China: while people love pointing to their public transit as an example of a country that's done so much right, their (over)investment in this domain has led to a concerning explosion of local government debt obligations, which isn't usually well represented in the overall debt-to-GDP ratios many people quote. I only mention that to point out that things in China are not all the propaganda suggests they might be. The big question everyone is asking is what happens after Xi. Even the most educated experts on the matter do not have an answer.
I, too, don't understand the OP's point about quickly pivoting to value extraction. Every technology we've ever invented was immediately followed by capitalists asking "how can I use this to make more money". LLMs are an extremely valuable technology. I'm not going to sit here and pretend that anyone can correctly guess exactly how much we should be investing into this right now in order to properly price how much value they'll be generating in five years.

Except it's so critical to point out that the "data center capex" numbers everyone keeps quoting are, in a very real (and, sure, potentially scary) sense, quadruple-counting the same hundred-billion dollars. We're not actually spending $400B on new data centers; Oracle is spending $nnB on Nvidia, who is spending $nnB to invest in OpenAI, who is spending $nnB to invest in AMD, who Coreweave will also be spending $nnB with, who Nvidia has an $nnB investment in... and so forth. There's a ton of duplicate accounting going on when people report these numbers (see the toy sketch below).
It doesn't grab the same headlines, but I'm very strongly of the opinion that there will be more market corrections in the next 24 months, overall stock market growth will be pretty flat, and by the end of 2027 people will still be opining on whether OpenAI's $400B annual revenue justifies a trillion dollars in capex on new graphics cards. There's no catastrophic bubble burst. AGI is still only a few years away. But AI eats the world nonetheless.
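A toy sketch of that counting problem, with entirely made-up amounts and a deliberately simplified chain (none of these figures are actual deal terms):

```python
# Toy illustration of circular capex accounting: headline deal totals vs. net new cash.
# The chain and the $100B amounts below are hypothetical placeholders, not real deals.
deals = [
    ("Cloud vendor buys GPUs from a chipmaker",                 100),
    ("Chipmaker invests in a model lab",                        100),
    ("Model lab commits compute spend to a second chipmaker",   100),
    ("Second chipmaker's GPUs are rented back via a neocloud",  100),
]

headline_total = sum(amount for _, amount in deals)  # what the press releases add up to
net_new_cash = 100                                   # if the same $100B circulates through the chain

print(f"Headline 'capex' announced: ${headline_total}B")
print(f"Net new cash in the fully circular case: ${net_new_cash}B")
```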
My point is not that value extraction wouldn't happen, my point is simply that in addition to the value extraction we also made other huge shifts in economic policy that taken together really seem to put us on a path towards an "AGI or bust" situation in the future.
Is that a bit hyperbolic? Isn't this just the same as the dotcom and housing bubbles before, where we pivoted a bit too hard into a specific industry? Maybe... but I'm also not sure it would be wise to assume past results will indicate future returns with this one.
AI is appealing to the investors not because it solves human problems, but because it solves some of the problems of previous bubbles.
When we wired the world for the Internet in the 1990s, or built railways across the continent in the 1800s, we eventually reached a point where even the starriest-eyed investors could see they've covered effectively the entire addressable market. Eventually AOL ran out of new customers no matter how many CDs they mailed out, or we had connected every city of more than 50 people with steel rail, and you could hear the music was slowing down.
By dangling the AGI brass ring out there, they can keep justifying the expenditure past many points of diminishing returns, because the first thing we'll ask the Omnipotent AGI is how to earn the quadrillions spent back, with interest.
It also has the benefit of being a high-churn business. The rails laid in 1880, or the fiber pulled in 2000, were usable for decades, but in the AI bubble, the models are obsolete in months and the GPUs in years. It generates huge looking commercial numbers just to tread water.
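To illustrate that churn point with toy numbers (the asset base and useful lives below are rough assumptions, not measured depreciation schedules):

```python
# Rough steady-state replacement spending needed just to keep an asset base constant.
# Asset base and useful lives are illustrative assumptions, not industry figures.
def annual_replacement(asset_base_billions: float, useful_life_years: float) -> float:
    """Straight-line approximation: spend base/life each year just to tread water."""
    return asset_base_billions / useful_life_years

rail_like = annual_replacement(100, 40)  # long-lived infrastructure (rail, fiber)
gpu_like = annual_replacement(100, 4)    # short-lived accelerators

print(f"$100B of 40-year assets: ~${rail_like:.1f}B/yr to maintain")
print(f"$100B of 4-year assets:  ~${gpu_like:.1f}B/yr to maintain")
```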
> In those intervening years, a bunch of AI companies might be unable to pay back their debts.
Dumb question: isn't a lot of the current investment in the form of equity deals and/or funded by existing tech company profit lines? What do we actually know about the debt levels and schedules of the various actors?
Google, Meta, and Microsoft are funding AI out of their existing profits so they will probably be fine. The others may be getting GPUs in exchange for equity but they still have to pay real money for the datacenters, generators, etc. That real money is borrowed and they would default in case of a crash. Potentially hundreds of billions of defaults.
The latest xAI GPU SPV is 2/3 debt; maybe that's a trend or maybe not. The debt will be defaulted and the equity will be recapitalized to almost nothing. Same outcome.
I think if there's a rational reasoning behind Trump unleashing ICE and the national guard on the domestic population, this must be it: "the economy is doing really bad, and we need a smokescreen so people won't talk about it."
Hmmm kinda ties into the whole problem of well-off/happy people not being particularly eager to chant "foreigners out", but when they're desperate they take any explanation for their misery they can get their hands on that sounds workable (because, no, you can't go up to a billionaire and just "take all their stuff", but you CAN beat up a foreigner or other disadvantaged person that is worse off than you)
I think another reason for the recent global rise of anti-immigration parties is also that the relative economic value of immigrants (as unskilled labor) has gone down, and the "costs" (cultural/language friction) have become more visible.
Even in the unlikely event AI somehow delivers on its valuations and thereby doesn't disappoint, the implied negative externalities on the population (mass worker redundancy, inequality that makes even our current scenario look rosy, skyrocketing electricity costs) mean that America's and the world's future looks like a rocky road.
I personally hope AI doesn't quite deliver on its valuations, so we don't lose tons of jobs, but instead of a market crash, the money will rotate into quantum and CRISPR technologies (both soon to be trillion-dollar+ industries). People who bet big on AI might lose out some but not be wiped out. That's best casing it though.
I like this, because I hate the idea that we should either be rooting for AI to implode and cause a crash, or for it to succeed and cause a crash (or take us into some neo-feudal society).
This is how I'm starting to view many of these things. It's just that the metrics we use to evaluate the economy are getting out of sync. For instance, if "consumer sentiment is at Great Recession levels", why do we need some other indicator to accept that there's a problem? Isn't that a bad thing on its own?
"Bad" is a judgment call. Trump approval ratings haven't dipped that far, so Congressional Republicans won't dare abandon him and there's not much political will for change.
It might change if we get into millions of foreclosures like the great recession and the pain really hits home. From what I can tell right now they're in wartime mode where they just need to buckle down until Trump wins and makes other countries pay for tariffs or something.
We're definitely not in a crash yet, but it does feel like we're on the roller coaster just tipping over the peak: unemployment is rising for the first time in a couple years, there's basically no GDP growth apart from AI investment, and the yield curves look scary. The crash could be any second now, especially because tech earnings week is coming up and that could indicate how much revenue, or lack thereof, the AI investment is bringing in.
So the crash is only official once Wall Street's exuberance matches the economy as perceived by its workforce? Is that a crash or just a latent arrival of the signal itself?
The US is uniquely suited to maximally benefit startups emerging in a new space, but to maximally prevent startups from entering a mature space. No smart young person in the US matriculates into industries paved over by incumbents, as they wisely anticipate that they will be in an industry deliberately hamstrung by regulatory capture.
All growth is in AI now because that's where all the smartest people are going. If AI were regulated significantly, they'd go to other industries and those would be growing faster (though likely not as much).
However, there is the broader point that AI is poised to offer extreme leverage to those who win the AGI race, justifying capex spending at such absurd margins.
Reminder: If you're going to feel doomer about how tech capex represents like nn% of US GDP growth, you should do some research into what percentage of US GDP growth, especially since 2020, has been the result of government printing. Arguably, our GDP growth right now is more "real" than the historical GDP growth numbers between 2020-2023, but all of it is so warped by policy that its hard to tell what's going on.
We're in extremely unprecedented times. Sometimes maybe good, sometimes maybe shit. The old rules don't apply. Etc.
At the end of the day, if you look at almost any government, roughly 2/3 of expenses go towards healthcare and education, things for which AI workflows are very likely to keep offsetting a larger and larger percentage of the costs.
Can we still have a financial crisis from all this investment going bust because it might take too long for it to make a difference in manufacturing enough automation hardware for everyone? Yes.
But, the fundamentals are still there: parents will still send their kids to some type of school, and people will trade goods in exchange for health services. That's not going to change. Neither will the need to use robots in nursing homes; I think that assumption is safe to make.
What's difficult to predict is the change in adoption in manufacturing and repairs (be that repairing bridges or repairing your espresso machine), because that is more of a "3D" issue and hard to automate reliably (think about how many GPUs it would actually take today to get a robot to reason out and repair a hole in your drywall), given that your RL environments and training data needs grow exponentially. Technically, your phone should have enough GPU performance to do your taxes with a 3B model and a bunch of tools; eventually it'll even be better than you at it. But to run an actual robot with multiple cameras and stuff doing troubleshooting and decision making... you're gonna need a whole 8x rack of GPUs for that.
And that's what now makes it difficult to predict what's going to happen. The areas under the curve can vary widely. We could get a 1B AGI model in 6 months, or it could take 5 years for agentic workflows to fully automate everyone's taxes and actually replace 2/3 of radiology work...
Either way, while there's a significant chance of this transition to the automation age being rough, I am overall quite optimistic given the fundamentals of what governments actually spend the majority of their money on.
I wouldn't even call it political. It's financial, and should be criminal. The people who are elected to represent us are just taking bribes and being paid off to allow corporations to screw us over.
I wouldn't even say "corporations" because honestly, it's just the one corporation that's keeping the US tax system mired in pointless, manual complexity: Intuit.
There is also a whole political line of thinking that making taxes easier makes them more palatable, so if you want to “starve the beast” at all costs you actually want tax filing to be as painful as possible.
An easy position for people wealthy enough to painlessly have their accountant do their taxes for them. If they really wanted people to struggle with their taxes, they should be discouraging or outlawing companies like TurboTax that make taxes easier for the peasant class, forcing most people to fill everything out by hand on paper forms.
Talk to an educator.
Education is being actively harmed by AI. Kids don’t want to do any difficult thinking work so they aren’t learning. (Literally any teacher you talk to will confirm this)
AI in medicine is challenging because AI is bad at systems thinking, citation of fact and data privacy. Three things that are absolutely essential for medicine. Also everything for healthcare needs regulatory approval so costs go up and flexibility goes down. We’re ten years away from any AI for medicine being cost effective.
Having an AI do your taxes is absurd. They regularly hallucinate. I 100% guarantee that if you do your taxes with AI you won't pass an audit. AI literally can't count. You'd be better off asking it to vibecode a replacement for TurboTax. But again, the product won't be AI; it will be traditional code.
Trying for AGI down the road of an LLM is insanity sauce. It's a simulated language center that can't count, can't do systems thinking, and can't cite known facts. We're not six months away; we're a decade away, or a "cost-effective fusion" distance (defined as perpetually 20 years in the future from any point in time).
There are at least six Silicon Valley startups working on AGI. Not a single one of them has published an architecture strategy that might work. None of the “almost AGI” products that have ever come out have a path to AGI.
Meh is the most likely outcome. I say this as someone who uses it a lot for things it is good at.
separate from this article, I don't have a very high opinion of the author. he has an astonishing record of being uninformed and/or just plain wrong in everything I've ever heard him write about.
but as far as this article, the "tech capex as a percentage of GDP growth" is an incredible cherrypicking of statistics to create a narrative... when tech became a bloodbath starting in 2022, the rest of the economy continued on strong. all the way until 2025, the rest of the economy was booming while tech layoffs and budget cuts after covid were exploding. so starting that chart in early 2023, when tech had bottomed out (compared to the rest of the economy), is misleading. tech capex as a percentage of the overall GDP has been consistently rising since 2010 - https://gqg.com/highchartal-paper-large-tech-capex-spend-as-...
this is obviously related to the advent of public cloud computing more than anything. the reason this chart appears to clash with the author's chart is that the author's chart specifically calls out just the percentage of GDP growth, not overall GDP. so the natural conclusion is that while tech has been in borderline recessionary conditions since 2022, it is now becoming stable (if not recovering), while the rest of the economy, which didn't have the post-covid pullback (nor the same boom during covid, of course), is now having issues largely due to geopolitics and global trade.
is there an AI bubble? who cares. it's not as meaningful to the broader economy as these cherrypicking stats imply. if it's a bubble, it represents maybe .3% of the GDP. no one would be screaming from the mountain tops about a shaky economy and a bubble if that same .3% was represented by a bubble in the restaurant industry or manufacturing. in fact, in recent years, those industries DID have inflationary bubbles and it was treated like a positive thing for the most part.
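A quick sketch of why "share of GDP growth" and "share of GDP" can tell such different stories (the numbers below are illustrative placeholders, not the article's figures):

```python
# Why a small slice of GDP can look like a huge slice of GDP *growth*.
# All figures are illustrative placeholders, not the article's actual numbers.
gdp = 100.0              # total GDP, arbitrary units
gdp_growth = 2.0         # total growth this period
tech_capex_growth = 0.6  # portion of that growth attributed to tech capex

share_of_growth = tech_capex_growth / gdp_growth
share_of_gdp = tech_capex_growth / gdp

print(f"Tech capex = {share_of_growth:.0%} of GDP growth")  # sounds enormous
print(f"...but only {share_of_gdp:.1%} of GDP itself")      # sounds small
```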
I think a lot of this overanalysis and prodding for flaws in tech is generally an attempt at schadenfreude hoping that tech becomes just another industry like carpentry or plumbing. in particular, hoping for a scenario where tech is not as culturally impactful as it is today. because people are worried and frustrated about the economy, don't understand the value of tech, and hope it stops sucking up so much funding and focus by society in general.
they're not 100% wrong in being untrusting or skeptical of tech. the tech industry hasn't always been the best steward of the wealth and power it possesses. but they are generally wrong about the valuations or impact of tech on the economy, as if the people spending all this money are clueless. the stock market fell 900 points on friday, wiping out over $1 trillion in value over the course of a couple hours. yet the hundreds of billions invested in datacenters is the sign of impending economic doom?
is the economy good? I don't think it's doing great. but it has little to do with AI one way or another. "AI" is just another trend of making technology more accessible to the masses. no scarier, more complicated, or more impactful than microcomputers, DSL, cellular phones, or youtube. and while the economy crashed in 2008, youtube and facebook did well. yet there was none of this dooming about tech specifically back then, simply because the tech industry wasn't as controversial at the time.
There's a lot of people who can only process their own failures by assuming that everyone and everything must also, eventually fail; that anything successful is temporary and "not real". And there's a lot of down people in the tech industry right now; we're in a recession, after all.
There's also a significant number of people (e.g. Doctorow) who have made their entire brand on doomerism; and whether they actually believe what they say or have just become addicted to the views is an irrelevant implementation detail.
The anti-AI slop that dominates HackerNews doesn't serve anything productive or interesting. Its just an excuse for people to not read the article, go straight to the comments, and parrot the same thing they've parroted twenty times.
You are way too nice to the author; if I were you I'd omit the fake empathy, which dilutes your substantial points. The author is hallucinating worse than AI.
So what if other people downvote you for being too critical.
Ah, so I see we've entered the "normalizing the end of presidential term limits" part of the downward spiral. Maybe I need to accelerate my plans to get the fuck out of here.
We're already past the point where there is no meaningful notion of "normal" that actually impacts what happens in government. Normalizing things doesn't matter that much if people care so little that they elect someone who's done what Trump did his first time.
I mean, he's selling the hats, and I've seen some talking heads on the news say they'll look at ways for him to do it. The two-term limit is a kinda recent precedent all things considered, so...
Are there any bookies for that? Seems like an easy way to get rich betting against that happening. If not, then I would instead wager the "market sentiment" is that Trump isn't actually serious about a 2028 bid or that he won't actually be able to overcome 22A.
boomers have already agreed multiple times this century that businesses are not allowed to go bankrupt, for fear that their retirement portfolios may not be juiced to the gills. So instead we bail everyone out on the taxpayers' dime and leave the debt for some poor schmuck in the future to figure out.
It (was) also settled precedent that he can't stop spending money required to be spent by Congress (settled during Nixon's term), but the supremes decided it's different now. Same for firing heads of supposed independent federal departments, which was supposed to prevent presidential manipulation.
And the s.c. created presidential immunity out of nothing. For now the president has unchecked power, the conservative dream of a unitary executive.
This will all end when a Democrat is in power again. This is not a sarcastic exaggeration; one way they teed this up was shadow-docket decisions like the Kavanaugh rule (ICE can arrest/kidnap you based on appearance). It's not precedent, being shadow docket, so they can reverse it any time.
In the normative sense of "another atrocity like this cannot occur", then yes.
However your comment instead sounds like you are dismissing it as a non-concern... in which case I suggest you wake the heck up. We've had months now of seeing the President and his cabinet actively and willfully breaking federal and Constitutional law, with the entire Republican legislature complicit.
It wouldn't even be the first time states tried to remove him from their ballots either. [0]
I can’t help but think a lot of these comments are actually written by AI — and that, in itself, showcases the value of AI. The fact that all of these comments could realistically have been written by AI with what’s available today is mind-blowing.
I use AI on a day-to-day basis, and by my best estimates, I’m doing the work of three to four people as a result of AI — not because I necessarily write code faster, but because I cover more breadth (front end, back end, DevOps, security) and make better engineering decisions with a smaller team. I think the true value of AI, at least in the immediate future, lies in helping us solve common problems faster. Though it’s not yet independently doing much, the most relevant expression I can think of is: “Those who cannot do, teach.” And AI is definitely good at relaying existing knowledge.
What exactly is the utility of AI writing comments that seem indistinguishable from people? What is the economic value of a comment or an article?
At the present rate, there is a good argument to be made that the economic value is teetering towards negative
A comment on a post or an article on the internet has value ONLY if there are real people at the other end of the screen reading it and getting influenced by it
But if you flood the internet with AI slop comments and articles, can you be 100% sure that all the current users of your app will stick around?
If there are no people to read your articles, your article has zero economic value
Perhaps economic value can come from a more educated and skilled workforce if they're using AI for private tuition (if it can write as well as us, it can provide a bespoke syllabus, feedback etc.)
Automation over teaching sounds terrible in the long run, but I could see why learning languages and skills could improve productivity. The "issue" here might be that there's more to gain in developing nations with poor education standards, and so while capital concentrates more in the US because they own the tech, geographical differences in labour productivity are reduced.
What is the economic value of a wheel? If we flood the market with wheels, we’re going to need far fewer sleds and horses. Pretty soon, no one might need horses at all — can you imagine that?
Anecdotally, our company's next couple quarters are projected to be a bloodbath. Spending is down everywhere, nearly all of our customers are pushing for huge cuts to their contracts, and in turn literally any cost we can jettison to keep jobs is being pushed through. We're hearing the same from our customers.
AI has been the only new investment our company has made (half-hearted at that). I definitely get the sense that everyone is pretending things are fine to investors, meanwhile they are playing musical chairs.
Back in my economics classes at college, a professor pointed out that a stock market can go up for two reasons: On one hand, the economy is legitimately growing and shares are becoming more valuable. But on the other hand, people and corporations could be cutting spending en masse so there's extra cash to flood the stock markets and drive up prices regardless of future earnings.
I work for one of the largest packaging companies in the world. Customers across the board in the US are cutting back on how much packaging they need due to presumably lower sales volume. Make of that information what you will.
tariffs could be an explanation.
sometimes volume and total $ are not the same.
car manufacturers, right at the beginning of covid, started cutting orders of components from their suppliers, thinking that demand was going to drop due to a covid-induced recession.
Guess what happened next?
Covid was a black swan event. Unless we see something like the MBS collapse, the underlying economic weakness isn't due to such an acute root cause.
Not sure how comparable they are.
I know it’s not popular to bring politics into things on HN, but… From the outside at least, White House policy sounds like at least as much of a black swan event as COVID.
A black swan event should be unexpected. Trump's victory was within expected possibilities. It was also his second victory.
And his moves after winning were not unexpected either - he is doing what his opponents predicted he would do.
True. It must be added that there are currently two wrenches in the machinery that transforms information into action.
Firstly - The average market behavior is average.
From experience, most people could not imagine that anything of what was predicted would come true. There is a large … debt of intellectual work that is being underwritten, allowing people to sell narratives which do not correspond to reality.
This is a direct result of a captured, unfair information environment.
As a result, the average behavior of the market is not pricing in these things, even if the plans were made clear.
A coronavirus causing a global pandemic at some point was even more expected though.
And even the erratic government reactions to the pandemic were not entirely unpredictable either, to be fair.
> Trump's victory was within expected possibilities.
While people felt that polls indicated a Harris victory and the margin of Trump's win was a surprise and a failure of forecasters, in reality it was always a toss-up in forecasts: https://en.wikipedia.org/wiki/Nationwide_opinion_polling_for...
> Covid was a black swan event
I beg to differ. Epidemiologists and public health planners always knew such a pandemic would happen eventually. In fact, it wasn't even surprising that it came from a coronavirus, as this virus group was the most likely contender along with the influenza family.
The only open question was when. We dodged the bullet several times over the past two decades with SARS, H5N1, MERS and H1N1 (notice, two influenza and two coronaviruses), but one virus slipping through was definitely the most likely outcome.
And I can confidently tell you: it will happen again.
> Guess what happened next?
Stimulus and zero interest rate followed by 10% inflation.
A gigantic contracyclic fiscal policy was adopted to sustain demand.
Do you think Trump and the GOP will do that anytime soon?
At least until his polling stumbles, the GOP will do absolutely anything he says. And Trump will do whatever it takes to keep the grease coming in; I really think him turning on the printing presses is far from the least likely scenario.
Would the GOP have to eat large quantities of excrement? Yes. Have they become used to doing that (cf. Epstein)? Yes.
Short the stock market then if you feel a recession is coming
> Back in my economics classes at college, a professor pointed out that a stock market can go up for two reasons
Reason #1 is lower interest rates, which increase the present value of future cash flows in DCF models. A professor who does not mention that does not know what they are talking about.
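A minimal discounted-cash-flow sketch of that mechanism (the cash flows and rates are arbitrary illustrations, not market data):

```python
# Present value of the same stream of future cash flows at two discount rates.
# Cash flows and rates are arbitrary illustrations of the DCF mechanism.
def present_value(cash_flows, rate):
    """Sum of CF_t / (1 + r)^t for t = 1..n."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

flows = [100] * 10  # the same $100/year for a decade

print(f"PV at 5%: {present_value(flows, 0.05):.0f}")
print(f"PV at 2%: {present_value(flows, 0.02):.0f}")
# Lower rates -> higher present value -> higher prices, with no change in earnings.
```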
The fact that this is even plausibly true means that the non-AI (and maybe even non-tech) American economy has been stagnating for years by now.
Almost all my money goes to mortgage, shit from China, food, and the occasional service. It does make me wonder sometimes how it all works. But it's been working like this for a long time now.
Real estate. The US economy floats on the perpetually increasing cost of land. That's where your mortgage money goes: to a series of financial instruments that allow others to benefit from the eternally rising value of "your" property.
It's much worse in Canada & Australia, unfortunately.
Why are you buying shit from China?
Apparently it's even hard to make molds in the US. China seems to be the top dog in the production chain. From design, to mold, to production, to packaging.
Fstopper has one or two nice videos about it. He can't even buy US-made glass bottles that fit his needs in the US: https://www.youtube.com/watch?v=xewpuM1eJRg
It cuts out the middleman; supermarkets and online stores are 90% Chinese dropship these days. Why pay the 50-500% markup?
Why, don't you? How would you have posted this message if you hadn't?
I am delighted to hear from the actual FedEx’s own Chuck Noland! Getting this post to appear on this website using only vellum and iron gall ink is an incredible feat. Could you share some about your process (in many months time obviously)?
Where are you getting your shit from?
Prices going up 20-25% due to excessive money printing, and hence high inflation, during the last administration don't help.
The stimulus spending started under the administration prior to the last one.
In the United States, elections were held in November 2020. A new administration would've started in January 2021.
Federal taxes are the #1 expense for most people. People forget to think about it because the money is deducted directly from their paychecks.
The tariff wars certainly didn't help.
Depends on which side of the tariffs an economy happens to be, and where, geopolitically.
AI, or whatever a mountain of processors churning all of the world's data will be called later, still has no use case, other than total domination, for which it has brought a kind of lame service to all of the totally dependent go-along-to-get-along types, but nothing approaching an actual guaranteed answer for anything useful and profitable, lame, lame, infinite fucking lame tedious shit that has prompted most people to stop even trying, and so a huge vast amount of genuine human inspiration and effort is gone
The thing about tariffs is you’re guaranteed to be on both sides because the other side retaliates.
Farmers get screwed twice because our tariffs increase the costs of their inputs and the retaliation reduces the value of outputs.
If I was a farmer I’d be tearing my hair out about now.
Not just both sides, but infinite sides: every country border for anything that crosses it. Making a pencil might require dozens to thousands of mines/factories in the pencil supply chain, and there is taxation at every level!
Milton Friedman - I, Pencil: https://www.youtube.com/watch?v=67tHtpac5ws and https://thenewinquiry.com/milton-friedmans-pencil/
"Look at this lead pencil. There’s not a single person in the world who could make this pencil. Remarkable statement? Not at all. The wood from which it is made, for all I know, comes from a tree that was cut down in the state of Washington. To cut down that tree, it took a saw. To make the saw, it took steel. To make steel, it took iron ore. This black center—we call it lead but it’s really graphite, compressed graphite—I’m not sure where it comes from, but I think it comes from some mines in South America. This red top up here, this eraser, a bit of rubber, probably comes from Malaya, where the rubber tree isn’t even native! It was imported from South America by some businessmen with the help of the British government. This brass ferrule? [Self-effacing laughter.] I haven’t the slightest idea where it came from. Or the yellow paint! Or the paint that made the black lines. Or the glue that holds it together. Literally thousands of people co-operated to make this pencil. People who don’t speak the same language, who practice different religions, who might hate one another if they ever met! When you go down to the store and buy this pencil, you are in effect trading a few minutes of your time for a few seconds of the time of all those thousands of people. What brought them together and induced them to cooperate to make this pencil? There was no commissar sending … out orders from some central office. It was the magic of the price system: the impersonal operation of prices that brought them together and got them to cooperate, to make this pencil, so you could have it for a trifling sum.
That is why the operation of the free market is so essential. Not only to promote productive efficiency, but even more to foster harmony and peace among the peoples of the world."
Farmers stand to benefit from the current administration's trade and immigration policy; bailouts are part of the program. Bailouts were given out during the trade wars in 2017-2020. Bailouts are expected to pay out in early 2026 as part of the annual farm aid bill due in November.
You do have to make it till then; a lot of smaller farmers may not, and it will increase the consolidation of farming even more.
Not only bailouts, but GOP-aligned farmers voted for Trump to remove 2024 H-2A visa reforms that addressed abuse of the system (seizing passports, etc).
They didn't want to pay for the H-2A paperwork, but didn't like that undocumented laborers would move from farm to farm depending on conditions.
https://www.youtube.com/watch?v=zdWrHb8b-c0
Farmers voted for tariffs so they should be happy. What, they didn't think tariffs were going to be bad when just slammed on the table? Maybe they should also think about higher education, to learn how things work beyond high school education.
They've had the education system they depend on attacked and weakened for generations, they've had fear and distrust of science, experts, and scholars drilled into them, and they've been told countless lies including very comforting lies about what tariffs would mean for them by the very people they were told were the only ones who could be trusted.
I can see the appeal of blaming farmers for getting exactly what they voted for, but honestly they were suckers who were victimized. My hope is that many of them will feel betrayed enough to break free from their indoctrination and start looking for truths and answers outside of the circles which have played them for fools, but that's not going to be an easy process since it'll mean challenging their closest held prejudices and the tearing down and rebuilding of core parts of their identity. That sort of thing is hard enough to do when your world isn't falling apart around you and the last thing the ones who are willing to try need is everyone telling them they deserved what was done to them and that they'll get no sympathy from anyone.
So I thought the same thing but this take persuaded me somewhat. https://youtu.be/badGHJLDpP8?si=5GgFcZky38V0wyCh
I feel as bad for the farmers who voted for this as I feel for the farmers who were hurt by the Civil War and the ending of slavery. They knew what they were supporting, and thought they would be unaffected and would actually benefit from the result.
They are not some dumb poor podunks
That's a very empathetic take. But it's also essentially "society made them do it," when they also clearly voted to root out all their farm labor. And they support their own education system.
They've had enough joy owning the libs and scorning education. It's just FAFO.
In the last trade war (2017), the farmers got 18 billion dollars in bailouts. It's the same guy, so they're waiting again for their handouts.
I'd bet farmers would much rather have repeat customers or the promise of future valuable repeat customers over bailouts. But with these tariffs and retaliations, their former buyers are finding new sources. Even after the trade war ends, inertia will be another hurdle for our farmers.
US farming already has extremely high levels of government intervention aimed at price stability. This leads to all sorts of things like the government paying some farmers not to grow crops, some farmers being prevented from selling their crops, farmers getting paid after a bad season, government minimum price guarantees, and so on. And the overwhelming majority (like 95%+) of all produce in the US is sold by farmers to commodity markets, at more or less fixed rates, who then process/distribute it.
Maybe they should not have voted for the guy who did exactly this once already and said repeatedly that he would do it again.
Frankly, they voted for Trump because they thought only liberals, trans people and others they hate would be harmed. It is really not the case that they are victims of an unexpected event. They wanted this to happen, just kind of to everyone else.
Pretty sure the stagnation has a cause beginning in 2025, and that has to do with things like: Canada refusing to buy ANY American liquor in retaliation. China refusing to buy ANY soybeans in retaliation. In retaliation for what, you might ask? I leave that as an exercise for the reader. If you are unable to answer that question honestly to yourself, you need to seriously consider that your cognitive bias might be preventing you from thinking clearly.
Also EU moving away from US weapons. We're destroying all our exports.
The fundamentals behind the 2008 financial crisis didn't come from nowhere and the "solution" to 2008 did little more than kick the can down the road.
One of the most frustrating things for me regarding the potential of an AI bubble was a very smart and intelligent researcher being incredibly bullish on AI on Twitter, because if you extrapolate graphs measuring AI's ability to complete long-duration tasks (https://metr.org/blog/2025-03-19-measuring-ai-ability-to-com...) or other benchmarks, then by 2026 or 2027 you've basically invented AGI.
I'm going to take his statements at face value and assume that he really does have faith in his own predictions and isn't trying to fleece us.
My gripe with this statement is that this prediction is based on proxies for capability that aren't particularly reliable. To elaborate, the latest frontier models score something like 65% on SWE-bench, but I don't think they're as capable as a human that also scored 65%. That isn't to say that they're incapable, but just that they aren't as capable as an equivalent human. I think there's a very real chance that a model absolutely crushes the SWE-bench benchmark but still isn't quite ready to function as an independent software engineering agent.
So a lot of this bullishness basically hinges on the idea that if you extrapolate some line on a graph into the future, then by next year or the year after all white-collar work can be automated. Terrifying as that is, this all hinges on the idea that these graphs, these benchmarks, are good proxies.
And if they aren't, oh wow.
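For what it's worth, the extrapolation argument is simple enough to sketch. This is a toy version with my own placeholder numbers: I'm assuming a roughly 7-month doubling time (about what METR reported) and an arbitrary "month of human work" threshold; neither is the researcher's actual model.

    # Toy sketch of the "task horizon doubles every N months" extrapolation.
    # All numbers are placeholders, not METR's actual figures or anyone's forecast.
    current_horizon_hours = 1.0    # assumed: task length current models finish ~50% of the time
    doubling_time_months = 7.0     # assumed doubling time, roughly what METR reported
    threshold_hours = 167.0        # arbitrary stand-in for "a month of full-time human work"

    months, horizon = 0.0, current_horizon_hours
    while horizon < threshold_hours:
        months += doubling_time_months
        horizon *= 2.0

    print(f"Naive extrapolation hits the threshold in ~{months:.0f} months (~{months / 12:.1f} years)")
    # The whole argument lives or dies on the benchmark being a faithful proxy
    # and the trend staying exponential, which is exactly what's in question above.

The date you get out of this is extremely sensitive to both assumptions; nudge either one and the "AGI year" moves by years.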
> very smart and intelligent researcher being incredibly bullish on AI on Twitter
A bit off-topic, but as time goes by I'm more and more convinced that we can be very intelligent in some respects and very, very naive and/or wrong in others.
>> by next year or the year after all white-collar work can be automated
Work generates work. If you remove the need for 50% of the work then a significant amount of the remaining work never needs to be done. It just doesn't appear.
The software that is used by people in their jobs will no longer be needed if those people aren't hired to do their jobs. There goes Slack, Teams, GitHub, Zoom, Powerpoint, Excel, whatever... And if the software isn't needed then it doesn't need to be written, by either a person or an AI. So any need for AI Coders shrinks considerably.
You mean Julian Schrittwieser (collaborator on AlphaGo and first author on MuZero)?
https://www.julian.ac/blog/2025/09/27/failing-to-understand-...
I will repeat my comment from 70 days ago:
> I was discussing with a friend that my biggest concern with AI right now is not that it isn't capable of doing things... but that we switched from research/academic mode to full value extraction so fast that we are way out over our skis in terms of what is being promised, which, in the realm of exciting new field of academic research is pretty low-stakes all things considered... to being terrifying when we bet policy and economics on it.
That isn't overly prescient or anything... it feels like the alarm bells started a while ago... but wow, the absolute "all in" nature of the bet is really starting to feel like there is no backup. With the cessation of EV tax credits, the slowdown in infrastructure spending, healthcare subsidies, etc., the portfolio of investment feels much less diverse...
Especially compared to China, which has bets in so many verticals: battery tech, EVs, solar, and then of course all the AI/chips/fabs. That isn't to say I don't think there are huge risks for China... but geez, does it feel like the setup for a big shift in economic power, especially with the change in US foreign policy.
I'll offer two counter-points, weak but worth mentioning. With respect to China, there's no value to extract by on-shoring manufacturing -- many verticals are simply uninvestable in the US because of labor costs, and the gap in cost to manufacture is so large it's not even worth considering. I think there's a level of introspection the US needs to contend with, but that ship has sailed. We should be forward-looking in what we can do outside of manufacturing.
For AI, the pivot to profitability was indeed quick, but I don't think it's as bad as you may think. We're building the software infrastructure to accommodate LLMs into our workstreams, which makes everyone more efficient and productive. As foundational models progress, the infrastructure will reap the benefits a la Moore's law.
I acknowledge that this is a bullish thesis, but I'll tell you why I'm bullish: I'm basically a high-tech Luddite -- the last piece of technology I adopted was Google in 1996. I converted from vim to VS Code + Copilot (and now Cursor) because of LLMs -- that's how transformative this technology is.
> many verticals are simply uninvestable in the US because of labor costs and the gap of cost to manufacture is so large it's not even worth considering.
I think this is covered in a number of papers from think tanks related to the current administration.
The overall plan, as I understood it, is to devalue the dollar while keeping the monetary reserve status. A weaker dollar will make it competitive for foreign countries to manufacture in the US. The problem is that if the dollar weakens, investors will fly away. But the AI boom offsets that.
For now it seems to work: the dollar has lost more than 10% year to date, but the AI boom has kept investors in the US stock market. The trade agreements will protect the US for a couple of years as well. But ultimately it's a time bomb for the population, who will wake up in 10 years with half their present purchasing power in non-dollar terms.
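Rough back-of-the-envelope on that (my own arithmetic, not from any of those papers): halving purchasing power over 10 years works out to roughly a 6-7% decline per year, which is the same ballpark as this year's ~10% drop if it kept repeating.

    import math

    # What constant annual decline halves purchasing power in 10 years?
    years = 10
    retained_per_year = 0.5 ** (1 / years)          # fraction of value kept each year
    print(f"~{(1 - retained_per_year) * 100:.1f}% lost per year")    # ~6.7%

    # And if the dollar kept losing 10% a year, how long until it halves?
    print(f"~{math.log(0.5) / math.log(0.9):.1f} years")             # ~6.6 years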
Which think tanks?
I think an interesting way to measure the value is to argue "what would we do without it?"
If we removed "modern search" (Google) and had to go back to say 1995-era AltaVista search performance, we'd probably see major productivity drops across huge parts of the economy, and significant business failures.
If we removed the LLMs, developers would go back to Less Spicy Autocomplete and it might take a few hours longer to deliver some projects. Trolls might have to hand-photoshop Joe Biden's face onto an opossum's body like their forefathers did. But the world would keep spinning.
It's not just that we've had 20 years longer to grow accustomed to Google than to LLMs; it's that a low-confidence answer or an excessively florid summary of a document is not really that useful.
Chatting with Claude about a topic is in another universe from Google search.
I default to Claude for almost everything where I want to know something. I don’t trust Google’s results because of how weighted they are to SEO. Being good at SEO is a separate skill set.
The answers are not low confidence, cite sources, and can do things that Google cannot. For example: I used Claude to design a syllabus to learn about a technical domain along with quizzes and test suites for verification. It linked to video series, books, and articles grouped by an increasingly complex knowledge set.
You are putting too much hope on a glorified parrot.
Parrot? Sure, but a parrot operating in a high dimensional manifold. This breaks naive human assumptions.
Is this really true re: "modern search"? Genuine question, because this is probably outside my domain. I'm trying to think of industries that would be critically affected if we went from modern search back to e.g. AltaVista/Yahoo/DogPile, and I'm kind of coming up empty, except that it might be more difficult for companies that have perfected modern SEO/advertising to maintain the same level of reach. But I don't think that's what you're alluding to?
You could have said something similar about Google search about five years after its release, too.
I think there's a bubble around AI, but I don't think I agree with this argument. Google search launched in 1998, and ChatGPT launched in 2022.
In 2001, if Google had gone under like a lot of .com bubble companies, I think the economic impact visible to people of the time would have been marginal. There was no Google News, Gmail, Android, and the alternatives (AltaVista, Ask Jeeves, MSN Search) would have been enough. Google was a forcing function for the others to compete with the new paradigm or die trying. It wasn't itself an economic behemoth the way it is today.
I think if OpenAI folded today, you'd still have several companies in the generative AI space. To me, OpenAI's reminiscent of Google in the late 90s in its impact, although culturally it's very different. It's a general purpose website anyone with an internet connection can visit, deep industry competitors are having to adapt to its model to stay alive, and we're seeing signs of a frothy tech bubble a few years after its founding. People across industry verticals, government, law, and NGOs are using it, and students are learning with it.
One counterpoint to this would be that companies like Google reacted to the rise of social media with stuff like Google+, but to me the level to which "AI" is baked into every product at Google exceeds that play by a great margin. At most I remember a "post to plus" link at the top of GMail and a few hooks within the contact/email management views. In contrast, they are injecting AI results into almost every search I make and across almost every product of theirs I use today.
If you fast forward 20 years, I would be surprised if companies specializing in LLMs were not major players the way today's tech giants are. Some of the companies might have the same names, but they'll have changed.
> At most I remember a "post to plus" link at the top of GMail and a few hooks within the contact/email management views.
Google probably could have been WhatsApp, but to push Google+ it scrapped a successful Gmail chat for Hangouts, which at first you had to open by visiting the Google+ feed.
> We should be forward looking in what we can do outside of manufacturing.
For example?
Another thing to note about China: while people love pointing to its public transit as an example of a country that's done so much right, the (over)investment in this domain has led to a concerning explosion of local government debt obligations that isn't usually well represented in the debt-to-GDP ratios many people quote. I only mention that to point out that things in China are not all the propaganda suggests. The big question everyone is asking is: what happens after Xi? Even the most educated experts on the matter do not have an answer.
I, too, don't understand the OP's point about the quick pivot to value extraction. Every technology we've ever invented was immediately followed by capitalists asking "how can I use this to make more money?" LLMs are an extremely valuable technology. I'm not going to sit here and pretend that anyone can correctly guess exactly how much we should be investing right now in order to properly price how much value they'll be generating in five years. But it's critical to point out that the "data center capex" numbers everyone keeps quoting are, in a very real (and, sure, potentially scary) sense, quadruple-counting the same hundred billion dollars. We're not actually spending $400B on new data centers; Oracle is spending $nnB on Nvidia, who is spending $nnB to invest in OpenAI, who is spending $nnB to invest in AMD, who Coreweave will also be spending $nnB with, who Nvidia has an $nnB investment in... and so forth. There's a ton of duplicate accounting going on when people report these numbers.
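To make the duplicate-accounting point concrete, here's a toy version of that loop. Every name and dollar figure below is made up purely for illustration; the point is only that summing announced deal values is not the same as counting net new capital.

    # Hypothetical circular deal chain (illustrative names and numbers only).
    deals = [
        ("CloudCo buys GPUs from ChipCo", 100),          # $B
        ("ChipCo invests in LabCo", 100),
        ("LabCo commits compute spend to CloudCo", 100),
        ("CloudCo invests back into LabCo", 100),
    ]

    headline_total = sum(amount for _, amount in deals)
    net_new_capital = 100   # assumed: one original injection funds the whole loop

    print(f"Sum of announced deal values: ${headline_total}B")      # $400B headline
    print(f"Net new capital in this toy loop: ${net_new_capital}B")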
It doesn't grab the same headlines, but I'm very strongly of the opinion that there will be more market corrections in the next 24 months, overall stock market growth will be pretty flat, and by the end of 2027 people will still be opining on whether OpenAI's $400B annual revenue justifies a trillion dollars in capex on new graphics cards. There's no catastrophic bubble burst. AGI is still only a few years away. But AI eats the world nonetheless.
My point is not that value extraction wouldn't happen, my point is simply that in addition to the value extraction we also made other huge shifts in economic policy that taken together really seem to put us on a path towards an "AGI or bust" situation in the future.
Is that a bit hyperbolic? Isn't this just the same as the dotcom and housing bubbles before, where we pivoted a bit too hard into a specific industry? Maybe... but I'm also not sure it would be wise to assume past results will indicate future returns with this one.
AI is appealing to the investors not because it solves human problems, but because it solves some of the problems of previous bubbles.
When we wired the world for the Internet in the 1990s, or built railways across the continent in the 1800s, we eventually reached a point where even the starriest-eyed investors could see they've covered effectively the entire addressable market. Eventually AOL ran out of new customers no matter how many CDs they mailed out, or we had connected every city of more than 50 people with steel rail, and you could hear the music was slowing down.
By dangling the AGI brass ring out there, they can keep justifying the expenditure past many points of diminishing returns, because the first thing we'll ask the Omnipotent AGI is how to earn the quadrillions spent back, with interest.
It also has the benefit of being a high-churn business. The rails laid in 1880, or the fiber pulled in 2000, were usable for decades, but in the AI bubble, the models are obsolete in months and the GPUs in years. It generates huge looking commercial numbers just to tread water.
I often wonder, what if the AGI responds with "idk man, your situation seems pretty messed up". It will be comical
> but geez does it feel like the setup for a big shift in economic power
It happened ten years ago, it's just that perceptions haven't changed yet.
> In those intervening years, a bunch of AI companies might be unable to pay back their debts.
Dumb question: isn't a lot of the current investment in the form of equity deals and/or funded by existing tech company profit lines? What do we actually know about the debt levels and schedules of the various actors?
Google, Meta, and Microsoft are funding AI out of their existing profits so they will probably be fine. The others may be getting GPUs in exchange for equity but they still have to pay real money for the datacenters, generators, etc. That real money is borrowed and they would default in case of a crash. Potentially hundreds of billions of defaults.
Huh? Surely most of the investment in AI labs is equity investment, not debt (so it can't be defaulted on).
The latest xAI GPU SPV is 2/3 debt; maybe that's a trend, maybe not. The debt will be defaulted on and the equity will be recapitalized to almost nothing. Same outcome.
We'll find out
I think if there's a rational reasoning behind Trump unleashing ICE and the national guard on the domestic population, this must be it: "the economy is doing really bad, and we need a smokescreen so people won't talk about it."
Hmmm, kinda ties into the whole problem of well-off/happy people not being particularly eager to chant "foreigners out"; when they're desperate, though, they take any explanation for their misery they can get their hands on that sounds workable (because, no, you can't go up to a billionaire and just "take all their stuff", but you CAN beat up a foreigner or other disadvantaged person who is worse off than you).
I think another reason for the recent global rise of anti-immigration parties is also that the relative economic value of immigrants (as unskilled labor) has gone down, and the "costs" (cultural/language friction) have become more visible.
Even in the unlikely event AI somehow delivers on its valuations and thereby doesn't disappoint, the implied negative externalities on the population (mass worker redundancy, inequality that makes even our current scenario look rosy, skyrocketing electricity costs) mean that America's, and the world's, future looks like a rocky road.
I think part of the problem is that the variance (economically) of AI delivering is so wide that even that's hard to predict. E.g., is end-stage AI:
- Where we have intelligent computers and robots that can take over most jobs
- A smarter LLM that can help with creative work but limited interaction with the physical world
- Something else we haven't imagined yet
Depending on where we end up, the current investment could provide a great ROI or a negative one.
Yes, if AI proves to be a 10x productivity booster, it probably means most people will be unemployed
Electricity was a 10x productivity boost, just over a way longer timespan. We're just speedrunning this.
The plow was a 10x productivity booster. Guess what happened next?
10X population?
Also, what happens to those still employed when they each have 10 people trying to take their job? It's a downward spiral for employment as we know it.
I personally hope AI doesn't quite deliver on its valuations, so we don't lose tons of jobs, but that instead of a market crash, the money rotates into quantum and CRISPR technologies (both soon to be trillion-dollar-plus industries). People who bet big on AI might lose out some, but not be wiped out. That's best-casing it, though.
I like this, because I hate the idea that we should either be rooting for AI to implode and cause a crash, or for it to succeed and cause a crash (or take us into some neo-feudal society).
"skyrocketing electricity costs"
You said it right here. No one is going to give up energy at such a cheap rate anymore. Those days are over. Darkness for the US is coming.
> And yet despite those warning signs, there has been nothing even remotely resembling an economic crash yet.
Well... define "economic crash."
The outputs no longer correlate with the inputs. Is it possible it's "crashed" already? And is now running in a faulty state?
This is how I'm starting to view many of these things. It's just that the metrics we use to evaluate the economy are getting out of sync. For instance, if "consumer sentiment is at Great Recession levels", why do we need some other indicator to accept that there's a problem? Isn't that a bad thing on its own?
"Bad" is a judgment call. Trump approval ratings haven't dipped that far, so Congressional Republicans won't dare abandon him and there's not much political will for change.
It might change if we get into millions of foreclosures like the great recession and the pain really hits home. From what I can tell right now they're in wartime mode where they just need to buckle down until Trump wins and makes other countries pay for tariffs or something.
We're definitely not in a crash yet, but it does feel like the roller coaster is just tipping over the peak: unemployment is rising for the first time in a couple of years, there's basically no GDP growth apart from AI investment, and the yield curves look scary. The crash could come any second now, especially because tech earnings week is coming up, and that could indicate how much revenue, or lack thereof, the AI investment is bringing in.
So the crash is only official once Wall Street's exuberance matches the economy as perceived by its workforce? Is that a crash, or just a latent arrival of the signal itself?
The US is uniquely suited to maximally benefit startups emerging in a new space, but to maximally prevent startups from entering a mature space. No smart young person in the US matriculates into industries paved over by incumbents, because they wisely anticipate ending up in an industry deliberately hamstrung by regulatory capture.
All growth is in AI now because that's where all the smartest people are going. If AI were regulated significantly, they'd go to other industries, and those would be growing faster (though likely not as much).
However, there is the broader point that AI is poised to offer extreme leverage to those who win the AGI race, justifying capex spending at such absurd margins.
Reminder: if you're going to feel doomer about tech capex representing like nn% of US GDP growth, you should do some research into what percentage of US GDP growth, especially since 2020, has been the result of government money printing. Arguably, our GDP growth right now is more "real" than the historical GDP growth numbers from 2020-2023, but all of it is so warped by policy that it's hard to tell what's going on.
We're in extremely unprecedented times. Sometimes maybe good, sometimes maybe shit. The old rules don't apply. Etc.
At the end of the day, if you look at almost any government, roughly 2/3 of expenses go towards healthcare and education, things whose costs AI workflows are very likely to keep offsetting to a larger and larger degree.
Can we still have a financial crisis from all this investment going bust because it might take too long for it to make a difference in manufacturing enough automation hardware for everyone? Yes.
But the fundamentals are still there: parents will still send their kids to some type of school, and people will trade goods in exchange for health services. That's not going to change. Neither will the need to use robots in nursing homes; I think that assumption is safe to make.
What's difficult to predict is the change in adoption in manufacturing and repairs (be that repairing bridges or repairing your espresso machine), because that is more of a "3D" issue and hard to automate reliably (think about how many GPUs it would actually take today for a robot to reason out and repair a hole in your drywall), given that your RL environments and training data needs grow exponentially. Technically, your phone should have enough GPU performance to do your taxes with a 3B model and a bunch of tools, and eventually it'll even be better than you at it. But to run an actual robot with multiple cameras doing troubleshooting and decision making... you're gonna need a whole 8x rack of GPUs for that.
And that's what makes it so difficult to predict what's going to happen now. The areas under the curve can vary widely. We could get a 1B AGI model in 6 months, or it could take 5 years for agentic workflows to fully automate everyone's taxes and actually replace 2/3 of radiology work...
Either way, while there's a significant chance of this transition to the automation age being rough, I am overall quite optimistic given the fundamentals of what governments actually spend the majority of their money on.
For the vast majority of US taxpayers, automating their taxes is feasible right now and the obstacles are political not technical.
I wouldn't even call it political. It's financial, and should be criminal. The people who are elected to represent us are just taking bribes and being paid off to allow corporations to screw us over.
I wouldn't even say "corporations" because honestly, it's just the one corporation that's keeping the US tax system mired in pointless, manual complexity: Intuit.
There is also a whole political line of thinking that making taxes easier makes them more palatable, so if you want to “starve the beast” at all costs you actually want tax filing to be as painful as possible.
An easy position for people wealthy enough to painlessly have their accountant do their taxes for them. If they really wanted people to struggle with their taxes, they should be discouraging or outlawing companies like TurboTax that make taxes easier for the peasant class, forcing most people to fill everything out by hand on paper forms.
H&R Block also.
This is incorrect, actually. The largest spending is usually welfare and health; education is pretty small.
The fundamentals are not there.
Talk to an educator. Education is being actively harmed by AI. Kids don’t want to do any difficult thinking work so they aren’t learning. (Literally any teacher you talk to will confirm this)
AI in medicine is challenging because AI is bad at systems thinking, citation of fact and data privacy. Three things that are absolutely essential for medicine. Also everything for healthcare needs regulatory approval so costs go up and flexibility goes down. We’re ten years away from any AI for medicine being cost effective.
Having an AI do your taxes is absurd. They regularly hallucinate. I 100% guarantee that if you do your taxes with AI you won't pass an audit. AI literally can't count. You'd be better off asking it to vibecode a replacement for TurboTax. But again, the product won't be AI; it will be traditional code.
Trying to reach AGI down the road of an LLM is insanity sauce. It's a simulated language center that can't count, can't do systems thinking, and can't cite known facts. We're not six months away; we're a decade away, or a "cost-effective fusion" distance (defined as perpetually 20 years in the future from any point in time).
There are at least six Silicon Valley startups working on AGI. Not a single one of them has published an architecture strategy that might work. None of the “almost AGI” products that have ever come out have a path to AGI.
Meh is the most likely outcome. I say this as someone who uses it a lot for things it is good at.
This is hard-paywalled.
Huh, got in fine on my phone, which has the weaker paywall workaround.
Read to the point where it says subscribe to see the rest.
Heh, it was a fairly long preview. I stand corrected.
The thing that's become synonymous with "hallucination" and "slop"? Cool, good outlook for us.
Separate from this article, I don't have a very high opinion of the author. He has an astonishing record of being uninformed and/or just plain wrong in everything I've ever seen him write about.
But as far as this article goes, the "tech capex as a percentage of GDP growth" framing is an incredible cherry-picking of statistics to create a narrative. When tech became a bloodbath starting in 2022, the rest of the economy continued on strong; all the way until 2025, the rest of the economy was booming while post-covid tech layoffs and budget cuts were exploding. So starting that chart in early 2023, when tech had bottomed out (relative to the rest of the economy), is misleading. Tech capex as a percentage of overall GDP has been rising consistently since 2010 (https://gqg.com/highchartal-paper-large-tech-capex-spend-as-...), which is obviously related to the advent of public cloud computing more than anything. The reason this chart appears to clash with the author's chart is that the author's chart specifically calls out the percentage of GDP growth, not of overall GDP. So the natural conclusion is that while tech has been in borderline recessionary conditions since 2022, it is now becoming stable (if not recovering), while the rest of the economy, which didn't have the post-covid pullback (nor the same boom during covid, of course), is now having issues largely due to geopolitics and global trade.
Is there an AI bubble? Who cares. It's not as meaningful to the broader economy as these cherry-picked stats imply. If it's a bubble, it represents maybe 0.3% of GDP. No one would be screaming from the mountaintops about a shaky economy and a bubble if that same 0.3% were a bubble in the restaurant industry or manufacturing; in fact, in recent years those industries DID have inflationary bubbles, and it was treated like a positive thing for the most part.
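To spell out the growth-vs-level distinction with round, made-up numbers (nothing below is the actual BEA data):

    # Illustrative only: a small slice of GDP can dominate GDP *growth* when growth is weak.
    gdp = 28_000                 # $B, made-up round number for total GDP
    gdp_growth_rate = 0.02       # 2% growth -> $560B of new output
    tech_capex_increase = 100    # $B, hypothetical year-over-year increase in tech capex

    growth_dollars = gdp * gdp_growth_rate
    print(f"Share of GDP growth: {tech_capex_increase / growth_dollars:.0%}")   # ~18%
    print(f"Share of total GDP:  {tech_capex_increase / gdp:.1%}")              # ~0.4%

Same dollars, very different-sounding headline depending on which denominator you pick.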
I think a lot of this overanalysis and prodding for flaws in tech is generally an attempt at schadenfreude, hoping that tech becomes just another industry like carpentry or plumbing; in particular, hoping for a scenario where tech is not as culturally impactful as it is today. Because people are worried and frustrated about the economy, don't understand the value of tech, and hope it stops sucking up so much funding and focus from society in general.
They're not 100% wrong to be untrusting or skeptical of tech; the tech industry hasn't always been the best steward of the wealth and power it possesses. But they are generally wrong about the valuations or the impact of tech on the economy, as if the people spending all this money are clueless. The stock market fell 900 points on Friday, wiping out over $1 trillion in value over the course of a couple of hours, yet the hundreds of billions invested in data centers are the sign of impending economic doom.
Is the economy good? I don't think it's doing great, but that has little to do with AI one way or another. "AI" is just another trend of making technology more accessible to the masses: no scarier, more complicated, or more impactful than microcomputers, DSL, cellular phones, or YouTube. And while the economy crashed in 2008, YouTube and Facebook did well, yet there was none of this dooming about tech specifically, simply because the tech industry wasn't as controversial at the time.
There are a lot of people who can only process their own failures by assuming that everyone and everything must also eventually fail; that anything successful is temporary and "not real". And there are a lot of down people in the tech industry right now; we're in a recession, after all.
There's also a significant number of people (e.g. Doctorow) who have made their entire brand on doomerism; and whether they actually believe what they say or have just become addicted to the views is an irrelevant implementation detail.
The anti-AI slop that dominates HackerNews doesn't serve anything productive or interesting. It's just an excuse for people to not read the article, go straight to the comments, and parrot the same thing they've parroted twenty times.
> The anti-AI slop that dominates HackerNews doesn't serve anything productive or interesting.
To you. I find the debate quite valuable, as there is a wide open future and we're in the midst of figuring out where "here" is.
You are way too nice to the author. If I were you, I'd omit the fake empathy, which dilutes your substantial points. The author is hallucinating worse than an AI.
So what if other people downvote you for being too critical.
> which cuts down on the risk of a trump 2028 run
Ah, so I see we've entered the "normalizing the end of presidential term limits" part of the downward spiral. Maybe I need to accelerate my plans to get the fuck out of here.
We're already past the point where there is no meaningful notion of "normal" that actually impacts what happens in government. Normalizing things doesn't matter that much if people care so little that they elect someone who's done what Trump did his first time.
I mean, he's selling the hats, and I've seen some talking heads on the news say they'll look at ways for him to do it. The two-term limit is a kinda recent precedent, all things considered, so...
I'm not sure I'd consider the 22nd amendment "kinda recent precedent."
I'm not sure how recent 1947 is? Kids would say it's 100 years ago, although the math obviously doesn't quite check out, we're getting there.
Although I'd say precedent goes back to George Washington refusing a 3rd term.
Are there any bookies for that? Seems like an easy way to get rich betting against that happening. If not, then I would instead wager the "market sentiment" is that Trump isn't actually serious about a 2028 bid or that he won't actually be able to overcome 22A.
Looks like there's about 1M in volume on Polymarket, so you could definitely dump a good bit of money there if you feel strongly:
https://polymarket.com/event/presidential-election-winner-20...
That's the overall winner market. Is there one for "will he be on at least one state ballot?"
Trump doesn't have a very good ROI on that, though... if I put $10k into a Trump "no", I'd only make $373 (~3.7%) ROI, which is worse than CDs currently.
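That ROI is just the implied probability showing through. Rough math on the numbers above (ignoring fees and the opportunity cost of locking the money up until 2028):

    # Implied probability from the quoted ROI on a "No" position (fees ignored).
    stake = 10_000
    profit = 373

    no_price = stake / (stake + profit)     # price paid per $1 of "No" payout
    print(f"'No' priced at ~{no_price:.3f}")                    # ~0.964
    print(f"Implied 'Yes' probability: ~{1 - no_price:.1%}")    # ~3.6%

So the market is already pricing a 2028 run as roughly a 3-4% tail event, which is exactly why betting against it pays so little.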
In any other decade, I'd scoff at the idea that widespread economic problems would be a net-benefit in averting something worse for the country.
... I miss those years.
Boomers have already agreed multiple times this century that businesses are not allowed to go bankrupt, out of fear that their retirement portfolios may not stay juiced to the gills. So instead we bail everyone out on the taxpayers' dime and leave the debt for some poor schmuck in the future to figure out.
Trump cannot run again.
It (was) also settled precedent that he can't stop spending money required to be spent by Congress (settled during Nixon's term), but the Supremes decided it's different now. Same for firing the heads of supposedly independent federal agencies, which was supposed to prevent presidential manipulation.
And the Supreme Court created presidential immunity out of nothing. For now the president has unchecked power: the conservative dream of a unitary executive.
This will all end when a Democrat is in power again. That is not a sarcastic exaggeration; one way they teed this up was with shadow-docket decisions like the Kavanaugh rule (ICE can arrest/kidnap you based on appearance), which isn't precedent because it's shadow docket, so they can reverse it any time.
Yet.
Trump can do whatever nobody will stop him from doing. Who's going to stop him from running again?
He can if SCOTUS says he can
In the normative sense of "another atrocity like this cannot occur", then yes.
However, your comment instead sounds like you are dismissing it as a non-concern... in which case I suggest you wake the heck up. We've now had months of seeing the President and his cabinet actively and willfully breaking federal and constitutional law, with the entire Republican legislature complicit.
It wouldn't even be the first time states tried to remove him from their ballots. [0]
[0] https://www.scotusblog.com/2024/03/supreme-court-rules-state...
Just repeating all the same links that are already being discussed around here for weeks.
How the AI Bubble Will Pop
https://news.ycombinator.com/item?id=45448199
America is now one big bet on AI
https://news.ycombinator.com/item?id=45502706
Jeff Bezos says AI is in a bubble but society will get 'gigantic' benefits
https://news.ycombinator.com/item?id=45464429
etc
etc
Slightly?
Boy, you're in for a MAJOR disappointment.
I can’t help but think a lot of these comments are actually written by AI — and that, in itself, showcases the value of AI. The fact that all of these comments could realistically have been written by AI with what’s available today is mind-blowing.
I use AI on a day-to-day basis, and by my best estimates, I’m doing the work of three to four people as a result of AI — not because I necessarily write code faster, but because I cover more breadth (front end, back end, DevOps, security) and make better engineering decisions with a smaller team. I think the true value of AI, at least in the immediate future, lies in helping us solve common problems faster. Though it’s not yet independently doing much, the most relevant expression I can think of is: “Those who cannot do, teach.” And AI is definitely good at relaying existing knowledge.
What exactly is the utility of AI writing comments that seem indistinguishable from people? What is the economic value of a comment or an article?
At the present rate, there is a good argument to be made that the economic value is teetering towards negative.
A comment on a post or an article on the internet has value ONLY if there are real people at the other end of the screen reading it and getting influenced by it
But if you flood the internet with AI slop comments and articles, can you be 100% sure that all the current users of your app will stick around?
If there are no people to read your articles, your article has zero economic value
Perhaps economic value can come from a more educated and skilled workforce if they're using AI for private tuition (if it can write as well as us, it can provide a bespoke syllabus, feedback etc.)
Automation over teaching sounds terrible in the long run, but I could see how learning languages and skills could improve productivity. The "issue" here might be that there's more to gain in developing nations with poor education standards, so while capital concentrates more in the US because they own the tech, geographical differences in labour productivity shrink.
What is the economic value of a wheel? If we flood the market with wheels, we’re going to need far fewer sleds and horses. Pretty soon, no one might need horses at all — can you imagine that?