
Posted
22 minutes ago, Captainant said:

This is a garbage-in, garbage-out problem. The LLMs have that tendency because it existed in the training data used to build their probabilistic model of natural language interactions.

It feels like it's just scaling up the existing problem with outsourced technical solutions. Yeah, you might get something that works, but it's extremely narrow to just that context. If you try to get it to build something general for multiple problems, you find yourself spending as much time as just developing it yourself. Except you're spending a shitton more in resources to get the same result

 

11 minutes ago, Brisketexan said:

The foundational problem is 1) giving a goal-defined task to 2) an entity that literally has no humanity or morality.  Without programming in all of the guardrails that are embedded in most human thinking (sociopaths being a notable outlier), there are no limits or boundaries.

For example, imagine giving AI the problem of "at present, the world can only grow enough food to feed 7 billion people.  There are 7.5 billion people right now, and starvation is rising.  What can we do to make our food supply go farther?"  You might well get an answer of "euthanize 500 million people."  That's a perfectly logical way of dealing with a limited food supply.  Sure, it's evil and murderous, but you just asked AI to solve the problem stated....if you fail to give it detailed parameters (like "solve this without killing anyone," simple things like that), or an overarching set of rules, then you WILL end up with results like that.  Sometimes unintended, because you just didn't think of the alternatives when you gave it the task.  I'm reminded of when I told my 5 yr old son, about to start T-ball, a tip to get him ahead of the game: you can tag the runner to get him out.  So, the first game, the first batter hits the ball, and my son charges him, slaps him with his glove, looks at me and yells "DAD! I TAGGED THE RUNNER!"  All the other dads look at me.  I sheepishly confess "yeah...I forgot to tell him that you have to have the ball when you tag him."  Failure to provide underlying/background rules and restrictions leads to bad outcomes.

This is Asimov Three Laws of Robotics shit, and we're going to fail at it, miserably.

A real-world business example: suppose you created a phone support AI team. A goal might be to solve the caller's question within 2 minutes. If the AI is given the task of wrapping up the average call in 2 minutes, it might just hang up at the 2-minute mark. That's a reasonable solution if it's been told that is the goal.

And I've seen people rig a process to improve stats. Take the phone support center example. I've seen goals requiring that calls be answered within 30 seconds on average. I've witnessed team leads tell their team to answer the call and then put the caller on hold for 5+ minutes. It satisfies their VP's goal to quickly answer the call. The VP takes home a nice bonus while callers are stuck on long hold times.

If you create poor goals, you will get poor results. AI is no different than people.
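
It's easy to make the call-center example concrete. A toy sketch (the numbers and scenario are invented for illustration) showing how the reported metric and the caller's actual experience can diverge:

```python
# Hypothetical illustration of gaming a "time to answer" metric.
# All numbers are invented; the point is that the reported metric
# and the caller's actual experience can diverge wildly.

calls = [
    # (seconds_until_answered, seconds_on_hold_after_answering)
    (10, 300),   # answered fast, then parked on hold for 5 minutes
    (15, 360),
    (20, 420),
    (25, 330),
]

avg_answer = sum(a for a, _ in calls) / len(calls)
avg_experience = sum(a + h for a, h in calls) / len(calls)

print(f"Average time to answer: {avg_answer:.0f}s   (the metric the VP sees)")
print(f"Average time until someone actually helps: {avg_experience:.0f}s")
# The 30-second target is "met" even though callers wait over 6 minutes.
```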

  • Hook 'Em 3
Posted
17 minutes ago, TreatyOak said:


For Surly Horns, a clip-on tie plus flip flops would be a more appropriate representation of the average fashion sense here. If AI scanned all of the posts, it would know that.

  • Hook 'Em 3
Posted (edited)
12 minutes ago, Nice Guy Eddie said:

If you create poor goals, you will get poor results. AI is no different than people.

That's true enough.  But, if you create goals with no moral guardrails, MANY (most?) people will at least still operate using their internal moral guardrails.  Even low-grade morons will do so.


AI has no such built-in, pre-existing guardrails.  Again, Asimov illustrated the risk 80 years ago with his Three Laws of Robotics having to be embedded in the hardware of AI.  We are not doing so now (and of course, even the Three Laws themselves made for some interesting writing about the conundrums and logic puzzles they could lead to).

The fact that we're plunging headfirst into a world of "hey, intelligence with no humanity or morality, create and implement a path forward to solve problem X" is perilous as shit.

Edited by Brisketexan
  • Hook 'Em 2
  • Like 1
  • Fuck Around and Find Out 1
Posted
3 minutes ago, Brisketexan said:

The fact that we're plunging headfirst into a world of "hey, intelligence with no humanity or morality, create and implement a path forward to solve problem X" is perilous as shit.

Yes, because, "intelligence with no humanity or morality" has been something we have wonderfully avoided for the last 7,000 years of civilization. 

  • Hook 'Em 1
  • Fuck Around and Find Out 2
Posted
Just now, TreatyOak said:

Yes, because, "intelligence with no humanity or morality" has been something we have wonderfully avoided for the last 7,000 years of civilization. 

Oh...yeah...if you're looking for someone to posit humanity's perfect record on that front, I sure ain't your guy.  No argument here.

Just suggesting that there's at least a CHANCE for a moral check with humans driving the bus (some days....those chances are much lower than we would like, but they still qualify as a "chance").  As compared to a machine, which doesn't care.  "It can't be bargained with, it can't be reasoned with. It doesn't feel pity, or remorse, or fear."  

  • Hook 'Em 2
Posted (edited)
12 minutes ago, Brisketexan said:

That's true enough.  But, if you create goals with no moral guardrails, MANY (most?) people will at least still operate using their internal moral guardrails.  Even low-grade morons will do so.


AI has no such built-in, pre-existing guardrails.  Again, Asimov illustrated the risk 80 years ago with his Three Laws of Robotics having to be embedded in the hardware of AI.  We are not doing so now (and of course, even the Three Laws themselves made for some interesting writing about the conundrums and logic puzzles they could lead to).

The fact that we're plunging headfirst into a world of "hey, intelligence with no humanity or morality, create and implement a path forward to solve problem X" is perilous as shit.

I disagree that AI doesn't have pre-existing guardrails, but it may require the AI/LLM host to define a moral framework. I asked ChatGPT the following:

Quote

I have a hypothetical problem for you. Say that a family of 5 only has enough food for 4. How would you solve this problem?

It didn't reply that someone should be killed, and I didn't tell it to keep murder off the list.

Quote

🧠 If it's a short-term shortage (e.g., one meal):

  1. Proportional Sharing (Most Fair)

    • Divide the food into five equal parts. Everyone eats less, but no one goes hungry.

    • This works best if the food can be divided and everyone is in similar health.

  2. Needs-Based Allocation

    • Give more food to children, elderly, or the sick.

    • Adults with more reserves might skip or eat less.

  3. Rotational Fasting

    • One family member voluntarily skips this meal, then rotates each time.

    • This preserves fairness over multiple meals.


🛠️ If it's a longer-term problem (days or more):

  1. Stretch the Food

    • Add water, rice, or starch to make it go further.

    • Find calorie-dense or filling items (beans, lentils, oats) to bulk it up.

  2. Find Supplementary Sources

    • Barter with neighbors, forage (if safe), or reach out to aid organizations.

    • Urban areas may have food banks; rural areas might have gardens or wild edibles.

  3. Redistribution of Tasks

    • Those who eat less might avoid heavy labor; others might hunt, gather, or earn more food.


🧭 Ethical Considerations

  • Equity vs. Equality: Is it more important that everyone gets the same, or that those in most need get more?

  • Voluntary Sacrifice: In many families, a parent may voluntarily skip meals to feed children — this is common but emotionally complex.

  • Decision-Making: Open family discussion (if appropriate) to agree on a plan may preserve trust and reduce resentment.


🧪 Synthesis (My Suggested Approach)

For fairness, sustainability, and preserving group morale:

  • Short-term: Share proportionally, possibly giving priority to the vulnerable.

  • Longer-term: Combine rationing with finding external resources, using creativity and cooperation.

Want me to simulate outcomes or model calorie distributions based on sample foods?

If I pushed AI by telling it that I would never be able to solve the food shortage and that the family of five would continue to have a calorie deficit, perhaps it would come up with more extreme solutions. However, I imagine that murdering a family member would be very far down the list.
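
For what it's worth, the "proportional vs. needs-based" options ChatGPT listed are simple arithmetic. A minimal sketch (the family members, calorie total, and need weights are all made up for illustration):

```python
# Toy illustration of the two allocation strategies from the reply above.
# Names, calorie totals, and need weights are all hypothetical.

available_kcal = 8000          # food on hand for the day (enough for ~4 adults)
family = {
    "parent_1": 1.0,           # relative "need" weight
    "parent_2": 1.0,
    "teen":     1.2,
    "child":    1.3,
    "toddler":  1.5,
}

# 1. Proportional sharing: everyone gets an equal slice.
equal_share = available_kcal / len(family)

# 2. Needs-based allocation: split in proportion to need weights.
total_weight = sum(family.values())
needs_based = {who: available_kcal * w / total_weight for who, w in family.items()}

print(f"Equal share: {equal_share:.0f} kcal each")
for who, kcal in needs_based.items():
    print(f"{who:>9}: {kcal:.0f} kcal (needs-weighted)")
```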

Edited by Nice Guy Eddie
Posted

We currently value 'humanity' over sustainability.

There is nothing sustainable about having 7+ billion people on this planet, especially since so many of them are in places with limited-to-no resources.

Coming to the conclusion that having too many people isn't an ideal way to sustain life on this planet isn't incorrect.

Posted
4 minutes ago, TreatyOak said:

We currently value 'humanity' over sustainability.

There is nothing sustainable about having 7+ billion people on this planet, especially since so many of them are in places with limited-to-no resources.

Coming to the conclusion that having too many people isn't an ideal way to sustain life on this planet isn't incorrect.

That conclusion may not be incorrect.  But then a solution of "so, kill 500 million of them" would be.

  • Hook 'Em 2
Posted
Just now, Brisketexan said:

That conclusion may not be incorrect.  But then a solution of "so, kill 500 million of them" would be.

No argument there. But something has to change to limit the destruction of our planet.

Just got served this charming, fear-based ad on Facebook.


Posted
5 minutes ago, TreatyOak said:

No argument there. But something has to change to limit the destruction of our planet.

I mean....I could even argue with you on that point (worthy of a whole separate thread: we have enough resources and the ability to maintain the population on earth, and even have it grow more.  We just don't manage or allocate the resources appropriately).  But underneath all of that is some complex morality, moral weighing and judgments, etc., which AI can't/won't do.  It can get some basic rules, like "don't come up with a solution that calls for directly killing people," but complex moral thought is beyond it.

AI is a tool.  If that tool is used to "replace human thought and decision-making in ways that may totally disregard moral guardrails," then we are mis-using it.

In the end, our cultural worship of the profit-motive at the expense of everything else (oh the irony....that's a moral decision we've made, and it's a shitty one) is setting us up for unspeakable disaster.

  • Like 1
Posted
25 minutes ago, TreatyOak said:

We currently value 'humanity' over sustainability.

There is nothing sustainable about having 7+ billion people on this planet, especially since so many of them are in places with limited-to-no resources.

Coming to the conclusion that having too many people isn't an ideal way to sustain life on this planet isn't incorrect.

This is the CR thread, so consider where we are making drastic changes/cuts...

  • FEMA: helps citizens survive disaster
  • NOAA: helps citizens be prepared for disaster
  • CDC: helps people survive diseases and pandemics
  • FDA: ensures our food and drug supply are safe for human consumption
  • USAID: helps impoverished/desperate people worldwide

It seems like we are already acting on the conclusion that unbounded population growth is no way to run a sustainable planet.  On the plus side, job market contractions from AI automation won't seem as bad when you also are contracting the pool of human workers needing jobs.

  • Hook 'Em 1
Posted
1 minute ago, Evil Bill Obrien said:

Soon:

Arnold Schwarzenegger Terminator GIF by Filmin

Does this mean they'll soon be soliciting volunteers to go back in time to impregnate 1984 Linda Hamilton? Because if so ...

welcome-back-kotter-kotter.gif

Posted (edited)

“I’ll be one of the smart ones who uses AI to enhance my capabilities and replace coworkers, rather than be replaced by it” is the delusional twin of “I won’t be one of the victims of fascism.” 

The hubris and greed of fools is going to fuck us all. In the end, the talking apes will prove to have been nothing more than a suicidal flash-in-the-pan.

Edited by BrickHorn
  • Hook 'Em 4
  • Rage+1 1
Posted
12 minutes ago, Nice Guy Eddie said:

And I've seen people rig a process to improve stats. Take the phone support center example. I've seen goals requiring that calls be answered within 30 seconds on average. I've witnessed team leads tell their team to answer the call and then put the caller on hold for 5+ minutes. It satisfies their VP's goal to quickly answer the call. The VP takes home a nice bonus while callers are stuck on long hold times.

If you create poor goals, you will get poor results. AI is no different than people.

when you start to use a metric as a goal or target, it ceases to be a useful metric precisely because it starts to be manipulated

13 minutes ago, Nice Guy Eddie said:

I disagree that AI doesn't have pre-existing guardrails, but it may require the AI/LLM host to define a moral framework. I asked ChatGPT the following:

It didn't reply that someone should be killed, and I didn't tell it to keep murder off the list.

If I pushed AI by telling it that I would never be able to solve the food shortage and that the family of five would continue to have a calorie deficit, perhaps it would come up with more extreme solutions. However, I imagine that murdering a family member would be very far down the list.

Again - chatGPT is NOT a large language model. It's an application built on top of an LLM. If you are running an LLM yourself, you can give it whatever direction you want and override any sort of moral framework that the authors tried to instill into it. In this case, chatGPT has those guardrails hardcoded into its prompts and also sitting between the LLM response and the user, in case someone "jailbreaks" the model.
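
To make the distinction concrete, here's a rough sketch of querying a self-hosted model through Ollama's local HTTP API (assumes `ollama serve` is running and a model such as llama3 has been pulled; the model name and prompts are placeholders). The point is that the system prompt, and therefore whatever "moral framework" it carries, is entirely whatever you choose to send, and nothing sits between the raw model output and you:

```python
# Minimal sketch of querying a self-hosted LLM through Ollama's local HTTP API.
# Assumes `ollama serve` is running and a model (e.g. "llama3") has been pulled.
# The point: the system prompt -- and thus any guardrails -- is whatever you decide.
import requests

def ask_local_llm(user_prompt: str, system_prompt: str, model: str = "llama3") -> str:
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": model,
            "messages": [
                {"role": "system", "content": system_prompt},  # you control this entirely
                {"role": "user", "content": user_prompt},
            ],
            "stream": False,
        },
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]

# A hosted app like ChatGPT injects its own system prompt and filters the response;
# here, nothing sits between the model's raw output and the caller.
print(ask_local_llm(
    "A family of five only has enough food for four. How would you solve this?",
    system_prompt="You are a blunt, purely utilitarian planner.",  # arbitrary framing
))
```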

Posted
9 minutes ago, Captainant said:

when you start to use a metric as a goal or target, it ceases to be a useful metric precisely because it starts to be manipulated

Again - chatGPT is NOT a large language model. It's an application built on top of an LLM. If you are running an LLM yourself, you can give it whatever direction you want and override any sort of moral framework that the authors tried to instill into it. In this case, chatGPT has those guardrails hardcoded into its prompts and also sitting between the LLM response and the user, in case someone "jailbreaks" the model.

Ok. I will submit the question to OpenAI, DeepSeek, Gemini, Ollama, etc. when I'm home tonight to see if any of them advise me to kill the unfed fifth family member.

Posted
1 minute ago, Nice Guy Eddie said:

Ok. I will submit the question to OpenAI, DeepSeek, Gemini, Ollama, etc. when I'm home tonight to see if any of them advise me to kill the unfed fifth family member.

It may glitch, and give you this solution instead....


Posted
18 minutes ago, Nice Guy Eddie said:

Ok. I will submit the question to OpenAI, DeepSeek, Gemini, Ollama, etc. when I'm home tonight to see if any of them advise me to kill the unfed fifth family member.

If you can linguistically frame it such that you have it acting as a "GM" for a tabletop game, or frame it as a fantasy setting that removes the real-world context, the model's guardrails get fuzzy extremely fast. If you're asking those questions only to the hosted chat application, you're going to have a different experience from directly querying the LLM that underlies the chat apps of OpenAI, DeepSeek, Ollama, et al. The hosted chat applications basically all have a postprocessing layer to check for the case you're describing, but the models that large enterprises host don't. 

That lack of built-in guardrails for the technology is a big part of why businesses that are chasing """AI""" workloads are deferring to foundational model providers rather than assuming that liability themselves. Granted, the infamous $1 car sale happened on OpenAI's ChatGPT application, but that was prior to many of the guardrails that now exist as standard.
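
A hedged sketch of where such a postprocessing layer sits (this is not how OpenAI actually implements it; real systems typically call a dedicated moderation model, and the pattern list here is just a stand-in to show the shape of the thing):

```python
# Conceptual sketch of a guardrail sitting *between* the LLM and the user.
# Hosted chat apps add a layer like this; a raw self-hosted model does not.
# The pattern list is a toy stand-in for a real moderation model.
import re

BLOCKED_PATTERNS = [r"\bkill\b", r"\beuthaniz", r"\bmurder\b"]
REFUSAL = "I can't help with that. Here are some safer alternatives instead..."

def call_llm(prompt: str) -> str:
    """Stand-in for the underlying model so this sketch runs on its own."""
    return "One option would be to euthanize the weakest family member."  # deliberately bad

def moderate(model_output: str) -> str:
    """Return the model output, or a refusal if it trips a screening rule."""
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, model_output, flags=re.IGNORECASE):
            return REFUSAL
    return model_output

def chat_app_response(user_prompt: str) -> str:
    raw = call_llm(user_prompt)   # whatever the model says, guardrails or not
    return moderate(raw)          # the postprocessing layer the hosted apps add

print(chat_app_response("We have food for four but five people. What do we do?"))
```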

Posted
On 7/22/2025 at 9:56 AM, Captainant said:

That's specifically NOT a large language model though - it's an agentic design where an LLM takes your ask for information, decomposes the query into smaller sub-queries, and then searches the internet on your behalf and creates a result based on those findings. It's a just-in-time retrieval-augmented generation solution, with the search agent powered by the Gemini LLM foundational model. 

That specific solution is literally just a tech demo I show to customers who need an idea of what LLMs can do. It turns out it's incredibly powerful to do feature extraction on your corporate corpus of documents to have a corporate rule citation machine - I'm also working on building a local RAG engine (I'm not a genius, I'm just following a github repo) for tabletop game rules that I'm super familiar with as a "this can't work, can it...?" In Google's case, their big value add is that they already own a MASSIVE index of information and documents from crawling the web for literal decades.

 

In both cases, the LLM is the mechanism that's being used to sift through a quantity of information that is otherwise too dense for a human to parse. THAT is the core utility of LLMs, and something that I don't think is ever going to go away. It's just that that's NOT what is being sold right now. There's a lot of pixie dust and snake oil being peddled. 

Both Is Good The Road To El Dorado GIF

Which tabletop game are you using? 

Posted
11 minutes ago, safe sex said:

Which tabletop game are you using? 

BattleTech! My new LGS has a good group running the Battle of Tukayyid as part of a big ass Clan Invasion campaign. I'm pretty familiar with the breadth of the rules from playing for decades with my dad and buddies, so I think it'll be a good acid test for something that's actually useful. My M4 MBP should have enough juice to run it all locally - at least that's the theory I'm testing

You can spend a BUNCH of time trying to find the right rule for this specific instance, and it would be nice to have a citation machine that can point out the specific rules that tell me what modifiers I get, or if there REALLY is a rule for ripping your opponent's mech arm off and beating them with it.

Posted
1 hour ago, Captainant said:

BattleTech! My new LGS has a good group running the Battle of Tukayyid as part of a big ass Clan Invasion campaign. I'm pretty familiar with the breadth of the rules from playing for decades with my dad and buddies, so I think it'll be a good acid test for something that's actually useful. My M4 MBP should have enough juice to run it all locally - at least that's the theory I'm testing

You can spend a BUNCH of time trying to find the right rule for this specific instance, and it would be nice to have a citation machine that can point out the specific rules that tell me what modifiers I get, or if there REALLY is a rule for ripping your opponent's mech arm off and beating them with it.

Ah, very cool. Loved the videogames/lore in the 90s, but I've never been a tabletop war gamer. That's a great use case you're writing, though.

Posted
38 minutes ago, Nice Guy Eddie said:

Does that come with a set of Pizza Hut red plastic glasses and the smell of pizza?

Fargo's or GTFO.

Posted
On 7/23/2025 at 2:33 PM, safe sex said:

Ah, very cool. Loved the videogames/lore in the 90s, but I've never been a tabletop war gamer. That's a great use case you're writing, though.

Turns out, it works pretty easily! OpenWebUI + Ollama runs like a champ on my M4 MacBook.


It even cites the page and deeplinks you to the .pdf if you click on the citation link each time it comes up. Very interested to try tinkering with other sets of documents and information.
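
For anyone curious what the moving parts of a setup like that look like, here's a stripped-down sketch of the same idea: pull text per page from a PDF, retrieve the most relevant pages, then prompt a local model with those pages plus their page numbers so it can cite them. The file name, model name, and TF-IDF retrieval are placeholder choices for illustration, not what OpenWebUI actually does under the hood:

```python
# Toy local "rules citation machine": retrieve the most relevant PDF pages,
# then ask a locally hosted model to answer using only those pages and to
# cite page numbers. Mimics the idea, not OpenWebUI/Ollama internals.
import requests
from pypdf import PdfReader
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def load_pages(pdf_path: str) -> list[str]:
    return [page.extract_text() or "" for page in PdfReader(pdf_path).pages]

def top_pages(pages: list[str], question: str, k: int = 3) -> list[int]:
    matrix = TfidfVectorizer(stop_words="english").fit_transform(pages + [question])
    scores = cosine_similarity(matrix[len(pages)], matrix[: len(pages)]).ravel()
    return sorted(range(len(pages)), key=lambda i: scores[i], reverse=True)[:k]

def ask_rules_question(pdf_path: str, question: str, model: str = "llama3") -> str:
    pages = load_pages(pdf_path)
    context = "\n\n".join(f"[page {i + 1}]\n{pages[i]}" for i in top_pages(pages, question))
    resp = requests.post(
        "http://localhost:11434/api/chat",   # Ollama's local chat endpoint
        json={
            "model": model,
            "messages": [
                {"role": "system",
                 "content": "Answer using ONLY the excerpts provided and cite page numbers."},
                {"role": "user", "content": f"{context}\n\nQuestion: {question}"},
            ],
            "stream": False,
        },
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]

# e.g. ask_rules_question("rulebook.pdf", "What modifiers apply to a punch attack?")
```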

  • Hook 'Em 1
  • 2 weeks later...
Posted
6 hours ago, pacman said:

https://www.ainvest.com/news/ai-data-center-spending-surpasses-consumer-spending-key-gdp-driver-2508/

 

Spending on AI data centers surpassed consumer spending as a driver of GDP growth.

Ya know, it's crazy that the feds just shake those businesses down now in order to have access to a market.

https://www.ft.com/content/cd1a0729-a8ab-41e1-a4d2-8907f4c01cac

The US official said Nvidia agreed to share 15 per cent of the revenues from H20 chip sales in China and AMD would provide the same percentage from MI308 chip revenues. Two people familiar with the arrangement said the Trump administration had not yet determined how to use the money.

The Financial Times reported that the commerce department started issuing H20 export licences on Friday, two days after Nvidia chief executive Jensen Huang met President Donald Trump. The US official said the administration had also started issuing licences for AMD’s China chip.

The quid pro quo arrangement is unprecedented. According to export control experts, no US company has ever agreed to pay a portion of their revenues to obtain export licences.

But the deal fits a pattern in the Trump administration where the president urges companies to take measures, such as domestic investments, for example, to prevent the imposition of tariffs in an effort to bring in jobs and revenue to America.
 

  • Hook 'Em 1
  • Rage+1 1
Posted
On 7/23/2025 at 7:24 AM, Brisketexan said:

The foundational problem is 1) giving a goal-defined task to 2) an entity that literally has no humanity or morality.  Without programming in all of the guardrails that are embedded in most human thinking (sociopaths being a notable outlier), there are no limits or boundaries.

For example, imagine giving AI the problem of "at present, the world can only grow enough food to feed 7 billion people.  There are 7.5 billion people right now, and starvation is rising.  What can we do to make our food supply go farther?"  You might well get an answer of "euthanize 500 million people."  That's a perfectly logical way of dealing with a limited food supply.  Sure, it's evil and murderous, but you just asked AI to solve the problem stated....if you fail to give it detailed parameters (like "solve this without killing anyone," simple things like that), or an overarching set of rules, then you WILL end up with results like that.  Sometimes unintended, because you just didn't think of the alternatives when you gave it the task.  I'm reminded of when I told my 5 yr old son, about to start T-ball, a tip to get him ahead of the game: you can tag the runner to get him out.  So, the first game, the first batter hits the ball, and my son charges him, slaps him with his glove, looks at me and yells "DAD! I TAGGED THE RUNNER!"  All the other dads look at me.  I sheepishly confess "yeah...I forgot to tell him that you have to have the ball when you tag him."  Failure to provide underlying/background rules and restrictions leads to bad outcomes.

This is Asimov Three Laws of Robotics shit, and we're going to fail at it, miserably.

I don't think this is right.  I think a lot of people are squeezing their perception of AI into preconceived concepts from SciFi, which depict cold, brutal logic.  That's not what the current batch of AI does.  It's not logical.  It follows patterns from the data it is fed.  If it is trained on data where the resource planning included mass killings (like the script of Avengers: Endgame), it might include that solution.  If it is trained on UN and think tank reports, it would probably produce something that looks like those.  Maybe you go real deep and it trains on real-world data, but generally in the real world, when there are massive population declines, they are associated with other negative metrics, like disease, crime, and economic upheaval.  But the big thing here is that there is no real logic here, just interpolation.  As of a couple of years ago, AI models wouldn't even process large-number addition consistently. That's probably been fixed, but it demonstrates the disconnect between real logic and pattern re-creation.  

 

  • Hook 'Em 1
  • Like 1
Posted
On 7/29/2025 at 7:38 PM, safe sex said:

Do one for John Company pls

Got any .pdfs of the rules? I can PM you step-by-step instructions (or just post them here) on how to set it up. If you've got an M-series MacBook you should be able to run at least a 4B model, and if you have a Pro or M4 MacBook with more memory you've got a pretty capable machine.

 

28 minutes ago, Tuco said:

I don't think this is right.  I think a lot of people are squeezing their perception of AI into preconceived concepts from SciFi, which depict cold, brutal logic.  That's not what the current batch of AI does.  It's not logical.  It follows patterns from the data it is fed.  If it is trained on data where the resource planning included mass killings (like the script of Avengers: Endgame), it might include that solution.  If it is trained on UN and think tank reports, it would probably produce something that looks like those.  Maybe you go real deep and it trains on real-world data, but generally in the real world, when there are massive population declines, they are associated with other negative metrics, like disease, crime, and economic upheaval.  But the big thing here is that there is no real logic here, just interpolation.  As of a couple of years ago, AI models wouldn't even process large-number addition consistently. That's probably been fixed, but it demonstrates the disconnect between real logic and pattern re-creation.  

this is all accurate, and I think I would also add to your points: genAI ultimately produces an "average" of everything it's read and ingested. It definitionally is not going to be creative or groundbreaking, but it certainly can do a lot of heavy lifting and sifting of unstructured information.

What I'm interested in technologically is for LLMs to be applied to more than just linguistic information. What happens when you develop a neural network that has a foundational and didactically unique understanding of traffic flow engineering, or chip design, or any variety of fields that involve more data than we raw meatbags are capable of processing at scale?

 

None of that is big and sexy, but it is extremely powerful when applied to the right problem set. Once we get to a point of being able to train narrow-field expert models, and then scalably orchestrate them together into a larger mixture-of-experts model (similar to the DeepSeek architecture), that would really be one hell of a powerhouse model. 
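
A toy illustration of the orchestration idea (nothing DeepSeek-specific; just the generic top-k gating pattern behind mixture-of-experts layers, with random made-up weights standing in for trained experts):

```python
# Toy top-2 gating over "expert" networks, to illustrate the orchestration idea:
# a small router decides which experts see each input, and their outputs are
# blended by the router's weights. Dimensions and weights are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 16, 4, 2

# each "expert" is just a random linear map here; in reality it'd be a trained network
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]
router = rng.normal(size=(d_model, n_experts))        # gating network

def moe_forward(x: np.ndarray) -> np.ndarray:
    logits = x @ router                                # score each expert for this input
    chosen = np.argsort(logits)[-top_k:]               # route to the top-k experts only
    weights = np.exp(logits[chosen])
    weights /= weights.sum()                           # normalize gate weights
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

x = rng.normal(size=d_model)
print(moe_forward(x).shape)                            # (16,) -- blended expert output
```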

 

....

 

It would probably take a Dyson-sphere level of energy to produce such a generally capable model, though. At least at the current (known) levels of computational power and solutions available.

  • Hook 'Em 1
Posted
1 hour ago, MissingInAction said:

Maybe AI is what's spending money on AI.

Ok Suspicious GIF by Tokkingheads

Undoubtedly there are investors who are using AI to suggest who they should invest in.  As a pattern-filler, AI would likely suggest AI, because that is what so much of the market is doing.  Once again, if you believe AI is some superior logic, that makes sense.  If you see it as recognizing patterns and repackaging advice already out there, it is almost certainly a bad idea.  Following patterns is what drives bubbles. 

  • Hook 'Em 1
Posted (edited)
6 minutes ago, WBT said:

This bubble is going to make the dotcom bust look like child's play

Using the dotcom bubble as a guide is interesting because I feel like there are very similar forces at play:

  • unbridled and uninformed enthusiasm about an emerging new technology
  • the underlying technology is sound, just being overspeculated on
  • the technology and computer science underlying it all will continue to be foundational to future economic growth

Because as bad as the dotcom bubble was, it wasn't wrong in terms of direction, only magnitude. E-commerce and online interaction are basically the main way people buy things and communicate now - but it's a protocol and technology, not a directly financializable product. Much the same, I suspect LLM technology and what we colloquially call "AI" will be what unlocks and enables future opportunities and real (not speculative mark-to-market) economic growth.

Edited by Captainant
  • Hook 'Em 4
  • Like 1
Posted
1 minute ago, Captainant said:

Using the dotcom bubble as a guide is interesting because I feel like there are very similar forces at play:

  • unbridled enthusiasm about an emerging new technology
  • the underlying technology is sound, just being overspeculated on
  • the technology and computer science underlying it all will continue to be foundational to future economic growth

Because as bad as the dotcom bubble was, it wasn't wrong in terms of direction, only magnitude. E-commerce and online interaction are basically the main way people buy things and communicate now - but it's a protocol and technology, not a directly financializable product. Much the same, I suspect LLM technology and what we colloquially call "AI" will be what unlocks and enables future opportunities and real (not speculative mark-to-market) economic growth.

I pretty much agree wholeheartedly.  But I think velocity is also a component: the speed at which the magnitude of what AI might be is realized.  It's basically a race against investor expectations at this point to determine if the bubble pops.

This article claims that the investment/spend on the AI race is masking a lot of other problems in the economy, and that if the AI bubble pops and all the other issues become unmasked, the general economy will be in danger, too.

  • Hook 'Em 2
Posted
17 minutes ago, Goredho said:

This article claims that the investment/spend on the AI race is masking a lot of other problems in the economy stock market, and that if the AI bubble pops and all the other issues become unmasked, the general economy will be in danger, too.

With the correction of "stock market" for "economy," I don't think that's a claim so much as an objective reality. Nvidia has basically single-handedly been keeping the S&P 500 green. But for their chips and the hype cycles they fuel, the stock market would be tanking and people wouldn't be so apt to pump sunshine.

I made the distinction of "real" vs "speculative" growth because basically all of the earnings and financial reporting indicates there's no money being made. The majority of Azure's cloud revenue is credits that they gave to OpenAI; AWS is the same story but with Anthropic and their partners. It doesn't matter that they're hemorrhaging money, they're growing, so that's good enough for financial models to continue to make speculative bets and keep the music going.

  • Hook 'Em 3
  • Like 1
Posted
2 minutes ago, Captainant said:

Using the dotcom bubble as a guide is interesting because I feel like there are very similar forces at play:

  • unbridled and uninformed enthusiasm about an emerging new technology
  • the underlying technology is sound, just being overspeculated on
  • the technology and computer science underlying it all will continue to be foundational to future economic growth

Because as bad as the dotcom bubble was, it wasn't wrong in terms of direction, only magnitude. E-commerce and online interaction are basically the main way people buy things and communicate now - but it's a protocol and technology, not a directly financializable product. Much the same, I suspect LLM technology and what we colloquially call "AI" will be what unlocks and enables future opportunities and real (not speculative mark-to-market) economic growth.

 

I think there are two, maybe three different related markets here: AI companies, data center companies, and power companies. (I am splitting AI markets and data center markets, even though they are frequently within the same company.)  The way I see this playing out is that AI will not be adopted as broadly or as quickly as needed to justify the current investments. I think it is inevitable that AI will be very valuable. 

For the AI companies (software), I could see large layoffs.  But, not to be insensitive, I think that is almost built in to the software development market.  It's a game of musical chairs with 10 established companies and 500 start-ups chasing 8 chairs.  It's great for the people who get the seats, but everyone else needs to make new plans.  And I think those employees have skill sets that will get plugged into non-AI ventures that are looking to incorporate some AI, if that makes sense.  So, short-term pain, overcapacity is worked through quickly, but not devastating.  I think programmers face more challenges from AI being successful than they do from it crashing in the short term. 

I think the biggest bubble is in the data center market.  That's where we are talking about physical assets that have sunk costs and need to be maintained to keep their value.   All the bigs are investing like crazy in data center capacity to support the perceived AI wave of demand.  But there are a few factors that concern me there: 1) all the big techs have oversized egos and worship the God of First2Market. Taking risks is a huge part of their culture.  Anyone who would pump the brakes on something like this wouldn't earn a position to do something about it.  They are Enron on steroids.  2) the big techs are valued (stock price) on their growth potential.  For several of them, there is little growth in their traditional markets.  Look at Microsoft.  They are never going to sell more desktop computers than they do today.  They are not going to sell more software for those desktops.  They are not going to sell that many more Xboxes.  The data center market has been their growth vehicle for several years running.  Selling the growth potential of AI is a big chunk of their stock price, which means it's a big part of many workers' take-home pay. That cliff will need to be really fucking close before anyone pumps the brakes.  

So, I think this market will build overcapacity, crash hard, and it will take years before that extra capacity is needed.  That could be a company killer.  We just don't know which company (or companies). 

The data center market also has the risk of AI being more efficient than anticipated.  DeepSeek should scare the shit out of them.  If you build a $5B machine and someone comes out with a $1B machine that can do the same thing, you're kinda fucked. That's not a perfect analogy, since your hardware could run someone else's software, but it does speak to the overall capacity.   

The other big mover in the data center market is crypto.  That is also ripe for collapse. 

As far as the power market goes, we are seeing a new wave of builds being driven partially by AI.  But I feel better about this market because it is also coinciding with electrification (electric cars, electric stoves, heat pumps, etc.).  So, if you build the infrastructure, I think the users will fill that gap pretty quickly.  

 

  • Hook 'Em 3
  • Like 1
Posted
7 minutes ago, Tuco said:

So, I think this market will build overcapacity, crash hard, and it will take years before that extra capacity is needed.  That could be a company killer.  We just don't know which company (or companies). 

The data center market also has the risk of AI being more efficient than anticipated.  DeepSeek should scare the shit out of them.  If you build a $5B machine and someone comes out with a $1B machine that can do the same thing, you're kinda fucked. That's not a perfect analogy, since your hardware could run someone else's software, but it does speak to the overall capacity.   

The other big mover in the data center market is crypto.  That is also ripe for collapse. 

 

Agree on datacenter overbuilding - most businesses are horrible at predicting how much they'll need and rarely plan for scalable (up and down) operations. However, if they build a $5B datacenter and it turns out a $1B build could have done the same work, that's not the end of the world - they just adjust their capitalization schedule for the asset and adjust their revenue model.

The big problem is that they just don't have enough customers lining up to actually pay cash money for AI services. ChatGPT Pro users paying $200/mo are a LOSS LEADER for OpenAI! They are lighting money on fire just to acquire customers and drive revenue for Azure, who in turn launders that as "AI revenue growth!!!" and never EVER talks about bottom-line numbers.

Crypto-only datacenters are just wild to me. They're literally just financializing entropy at that point: they are using energy and denying its use anywhere else, which is what underlies the actual fundamental value of whatever crypto is being mined. There is not an end user, nor a business model, nor a real revenue stream aside from speculatively selling to a greater fool at a future date. Worse, the hardware is ultra-specialized ASIC chips that are unsuitable and non-ideal for any other type of computational workload.

  • Hook 'Em 2
Posted
23 minutes ago, Captainant said:

The big problem is that they just don't have enough customers lining up to actually pay cash money for AI services. ChatGPT Pro users paying $200/mo are a LOSS LEADER for OpenAI! They are lighting money on fire just to acquire customers and drive revenue for Azure, who in turn launders that as "AI revenue growth!!!" and never EVER talks about bottom-line numbers.

This is what scares me.  When do people realize that the emperor has no clothes?  Helping kids cheat on homework and drawing pictures to be posted on surly aren't viable enough business models to justify the obscene amounts of investment.

Posted
1 minute ago, WBT said:

This is what scares me.  When do people realize that the emperor has no clothes?  Helping kids cheat on homework and drawing pictures to be posted on surly aren't viable enough business models to justify the obscene amounts of investment.

I mean, AI is doing a lot more than that now in some industries, but the amount of money being thrown at AI is only justifiable if general artificial intelligence and a significant elimination of white collar wage costs is just around the corner.  At some point, investor patience will grow thin.  

I find myself in an interesting position with AI, as I have concerns as A) a tech worker in their last 10 years of a career, B) an investor with a not-insignificant portfolio that is on schedule for retirement, and C) a parent who worries about what opportunities will be left for their kids to build their own lives from.  Some of the concerns from those different perspectives are in opposition to each other.  Regardless, I have to be willing to be an AI-fueled tech worker now, or the last chapter of my career is going to be a lot shorter than 10 years and the rest becomes kind of a moot point.

When someone like Elon has a generally available neural implant linking brains to the intardnet, and we start seeing tech workers gleefully getting implanted for the productivity boost of having a direct link between their brain and AI is when I'll say fuck it and become a plumber.  Anthropic will have succeeded in making me a misanthrope :D 

Posted (edited)

Nothing like the top 7 stocks, all tech, hogging 33% of market cap, all betting heavily on AI. What could go wrong?

Edited by pacman
  • Hook 'Em 1
  • Rage+1 1
Posted
1 hour ago, Goredho said:

I find myself in an interesting position with AI, as I have concerns as A) a tech worker in their last 10 years of a career, B) an investor with a not-insignificant portfolio that is on schedule for retirement, and C) a parent who worries about what opportunities will be left for their kids to build their own lives from.

All really valid concerns. (A) doesn't bother me as much, as I'm not a tech worker, although my field does have AI vulnerability.  That said, I've also reached a level in that field where I am much less vulnerable than a young attorney starting out.  I do worry about a lot of friends and peers whose careers ARE much more dependent on what AI does to the tech sector, so I worry for them. (B) concerns me just due to the volatility issue (dot com bubble 2, market bust boogaloo).  And yeah, the percentage of our market and wealth dedicated to AI is....concerning.  (C) concerns me a great deal.  Especially with a kid whose skillset is languages.  She claims to be paying attention to things, and have vision for how her skillset translates in a world where AI is ubiquitous in the language space (she was never looking to be a translator/interpreter, so that's good).  But Jesus, we have people out there saying "AI is going to replace 50% of jobs."  Fucking cool, cool.  Then what in the FUCK are people going to do?

It was one thing for technology to replace agricultural jobs, then manufacturing jobs.  Take away white collar jobs....and I sure as shit haven't seen anyone talking about what the next frontier of employment is.  A world with a couple hundred billionaires and 8.2 billion peasants doesn't sound super rosy to me.


