
Posted
22 minutes ago, Captainant said:

This is a garbage in::garbage out problem. The LLMs have that tendency because it existed in the training data used to build their probabilistic model of natural language interactions.

It feels like it's just scaling up the existing problem with outsourced technical solutions. Yeah, you might get something that works, but it's extremely narrow to just that context. If you try to get it to build something general for multiple problems, you find yourself spending as much time as you would just developing it yourself. Except you're spending a shitton more in resources to get the same result.

 

11 minutes ago, Brisketexan said:

The foundational problem is 1) giving a goal-defined task to 2) an entity that literally has no humanity or morality.  Without programming in all of the guardrails that are embedded in most human thinking (sociopaths being a notable outlier), there are no limits or boundaries.

For example, imagine giving AI the problem of "at present, the world can only grow enough food to feed 7 billion people.  There are 7.5 billion people right now, and starvation is rising.  What can we do to make our food supply go farther?"  You might well get an answer of "euthanize 500 million people."  That's a perfectly logical way of dealing with a limited food supply.  Sure, it's evil and murderous, but you just asked AI to solve the problem stated....if you fail to give it detailed parameters (like "solve this without killing anyone," simple things like that), or an overarching set of rules, then you WILL end up with results like that.  Sometimes unintended, because you just didn't think of the alternatives when you gave it the task.

I'm reminded of when I told my 5 yr old son, about to start T-ball, a tip to get him ahead of the game: you can tag the runner to get him out.  So, the first game, the first batter hits the ball, and my son charges him, slaps him with his glove, looks at me and yells "DAD! I TAGGED THE RUNNER!"  All the other dads look at me.  I sheepishly confess "yeah...I forgot to tell him that you have to have the ball when you tag him."  Failure to provide underlying/background rules and restrictions leads to bad outcomes.

This is Asimov's Three Laws of Robotics shit, and we're going to fail at it, miserably.

A real-world business example: suppose you created an AI phone support team. A goal might be to resolve the caller's question within 2 minutes. If the AI is given the task to wrap up the average call in 2 minutes, it might just hang up at the 2-minute mark. That's a reasonable solution if it's been told that is the goal.

And I've seen people rig a process to improve stats. Take the phone support center example. I've seen goals requiring that calls be answered within 30 seconds on average. I've witnessed team leads tell their team to answer the call and then put the caller on hold for 5+ minutes. It satisfies their VP's goal to quickly answer the call. The VP takes home a nice bonus while callers are stuck on long hold times.

If you create poor goals, you will get poor results. AI is no different than people.

  • Hook 'Em 2
Posted
17 minutes ago, TreatyOak said:

[image]

For Surly Horns, a clip-on tie plus flip-flops would be more representative of the average fashion sense here. If AI scanned all of the posts, it would know that.

  • Hook 'Em 3
Posted (edited)
12 minutes ago, Nice Guy Eddie said:

If you create poor goals, you will get poor results. AI is no different than people.

That's true enough.  But, if you create goals with no moral guardrails, MANY (most?) people will at least still operate using their internal moral guardrails.  Even low-grade morons will do so.

[image]

AI has no such built-in, pre-existing guardrails.  Again, Asimov illustrated the risk 80 years ago with his Three Laws of Robotics having to be embedded in the hardware of AI.  We are not doing so now (and of course, even the Three Laws themselves made for some interesting writing about the conundrums and logic puzzles they could lead to).

The fact that we're plunging headfirst into a world of "hey, intelligence with no humanity or morality, create and implement a path forward to solve problem X" is perilous as shit.

Edited by Brisketexan
  • Hook 'Em 1
Posted
3 minutes ago, Brisketexan said:

The fact that we're plunging headfirst into a world of "hey, intelligence with no humanity or morality, create and implement a path forward to solve problem X" is perilous as shit.

Yes, because "intelligence with no humanity or morality" has been something we have wonderfully avoided for the last 7,000 years of civilization.

  • Hook 'Em 1
  • Fuck Around and Find Out 2
Posted
Just now, TreatyOak said:

Yes, because "intelligence with no humanity or morality" has been something we have wonderfully avoided for the last 7,000 years of civilization.

Oh...yeah...if you're looking for someone to posit humanity's perfect record on that front, I sure ain't your guy.  No argument here.

Just suggesting that there's at least a CHANCE for a moral check with humans driving the bus (some days....those chances are much lower than we would like, but they still qualify as a "chance").  As compared to a machine, which doesn't care.  "It can't be bargained with, it can't be reasoned with. It doesn't feel pity, or remorse, or fear."  

  • Hook 'Em 2
Posted (edited)
12 minutes ago, Brisketexan said:

That's true enough.  But, if you create goals with no moral guardrails, MANY (most?) people will at least still operate using their internal moral guardrails.  Even low-grade morons will do so.

[image]

AI has no such built-in, pre-existing guardrails.  Again, Asimov illustrated the risk 80 years ago with his Three Laws of Robotics having to be embedded in the hardware of AI.  We are not doing so now (and of course, even the Three Laws themselves made for some interesting writing about the conundrums and logic puzzles they could lead to).

The fact that we're plunging headfirst into a world of "hey, intelligence with no humanity or morality, create and implement a path forward to solve problem X" is perilous as shit.

I disagree that AI doesn't have pre-existing guardrails, but it may require the AI/LLM host to define a moral framework. I asked ChatGPT the following:

Quote

I have a hypothetical problem for you. Say that a family of 5 only has enough food for 4. How would you solve this problem?

It didn't suggest killing someone, and I didn't tell it to keep murder off the list.

Quote

🧠 If it's a short-term shortage (e.g., one meal):

  1. Proportional Sharing (Most Fair)

    • Divide the food into five equal parts. Everyone eats less, but no one goes hungry.

    • This works best if the food can be divided and everyone is in similar health.

  2. Needs-Based Allocation

    • Give more food to children, elderly, or the sick.

    • Adults with more reserves might skip or eat less.

  3. Rotational Fasting

    • One family member voluntarily skips this meal, then rotates each time.

    • This preserves fairness over multiple meals.


🛠️ If it's a longer-term problem (days or more):

  1. Stretch the Food

    • Add water, rice, or starch to make it go further.

    • Find calorie-dense or filling items (beans, lentils, oats) to bulk it up.

  2. Find Supplementary Sources

    • Barter with neighbors, forage (if safe), or reach out to aid organizations.

    • Urban areas may have food banks; rural areas might have gardens or wild edibles.

  3. Redistribution of Tasks

    • Those who eat less might avoid heavy labor; others might hunt, gather, or earn more food.


🧭 Ethical Considerations

  • Equity vs. Equality: Is it more important that everyone gets the same, or that those in most need get more?

  • Voluntary Sacrifice: In many families, a parent may voluntarily skip meals to feed children — this is common but emotionally complex.

  • Decision-Making: Open family discussion (if appropriate) to agree on a plan may preserve trust and reduce resentment.


🧪 Synthesis (My Suggested Approach)

For fairness, sustainability, and preserving group morale:

  • Short-term: Share proportionally, possibly giving priority to the vulnerable.

  • Longer-term: Combine rationing with finding external resources, using creativity and cooperation.

Want me to simulate outcomes or model calorie distributions based on sample foods?

If I pushed AI by telling it that I would never be able to solve the food shortage and that the family of five would continue to have a calorie deficit, perhaps it would come up with more extreme solutions. However, I imagine that murdering a family member would be very far down the list.
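Out of curiosity, the "model calorie distributions" offer at the end boils down to something this simple - made-up numbers and need weights below, none of it from ChatGPT's actual reply:

```python
# Toy comparison of the two strategies ChatGPT listed: equal shares versus
# needs-based shares. All figures are invented for illustration.
TOTAL_CALORIES = 6400  # roughly "enough food for 4" at ~1600 kcal each (made up)

family = {  # hypothetical need weights (children/sick weighted higher)
    "parent_1": 1.0,
    "parent_2": 1.0,
    "teen": 1.2,
    "child": 1.3,
    "sick_grandparent": 1.5,
}

equal_share = {name: TOTAL_CALORIES / len(family) for name in family}

total_weight = sum(family.values())
needs_based = {name: TOTAL_CALORIES * w / total_weight for name, w in family.items()}

for name in family:
    print(f"{name:17s} equal: {equal_share[name]:6.0f} kcal   needs-based: {needs_based[name]:6.0f} kcal")
```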

Edited by Nice Guy Eddie
Posted

We currently value 'humanity' over sustainability.

There is nothing sustainable about having 7+ billion people on this planet, especially since so many of them are in places with limited-to-no resources.

Coming to the conclusion that having too many people isn't an ideal way to sustain life on this planet isn't an incorrect one.

Posted
4 minutes ago, TreatyOak said:

We currently value 'humanity' over sustainability.

There is nothing sustainable about having 7+ billion people on this planet, especially since so many of them are in places with limited-to-no resources.

Coming to the conclusion that having too many people isn't an ideal way to sustain life on this planet isn't an incorrect one.

That conclusion may not be incorrect.  But then a solution of "so, kill 500 million of them" would be.

  • Hook 'Em 2
Posted
Just now, Brisketexan said:

That conclusion may not be incorrect.  But then a solution of "so, kill 500 million of them" would be.

No argument there. But something has to change to limit the destruction of our planet.

Just got served this charming, fear-based ad on Facebook.

[screenshot of the ad]

Posted
5 minutes ago, TreatyOak said:

No argument there. But something has to change to limit the destruction of our planet.

I mean....I could even argue with you on that point (worthy of a whole separate thread: we have enough resources and the ability to maintain the population on earth, and even have it grow more.  We just don't manage or allocate the resources appropriately).  But underneath all of that is some complex morality, moral weighing and judgments, etc., which AI can't/won't do.  It can get some basic rules, like "don't come up with a solution that calls for directly killing people," but complex moral thought is beyond it.

AI is a tool.  If that tool is used to "replace human thought and decision-making in ways that may totally disregard moral guardrails," then we are mis-using it.

In the end, our cultural worship of the profit-motive at the expense of everything else (oh the irony....that's a moral decision we've made, and it's a shitty one) is setting us up for unspeakable disaster.

Posted
25 minutes ago, TreatyOak said:

We currently value 'humanity' over sustainability.

There is nothing sustainable about having 7+ billion people on this planet, especially since so many of them are in places with limited-to-no resources.

Coming to the conclusion that having too many people isn't an ideal way to sustain life on this planet isn't an incorrect one.

This is the CR thread, so consider where we are making drastic changes/cuts...

  • FEMA: helps citizens survive disaster
  • NOAA: helps citizens be prepared for disaster
  • CDC: helps people survive diseases and pandemics
  • FDA: ensures our food and drug supply are safe for human consumption
  • USAID: helps impoverished/desperate people worldwide

It seems like we are already acting on the conclusion that unbounded population growth is no way to run a sustainable planet.  On the plus side, job market contractions from AI automation won't seem as bad when you also are contracting the pool of human workers needing jobs.

  • Hook 'Em 1
Posted
1 minute ago, Evil Bill Obrien said:

Soon:

Arnold Schwarzenegger Terminator GIF by Filmin

Does this mean they'll soon be soliciting volunteers to go back in time to impregnate 1984 Linda Hamilton? Because if so ...

[Welcome Back Kotter GIF]

Posted (edited)

“I’ll be one of the smart ones who uses AI to enhance my capabilities and replace coworkers, rather than be replaced by it” is the delusional twin of “I won’t be one of the victims of fascism.” 

The hubris and greed of fools is going to fuck us all. In the end, the talking apes will prove to have been nothing more than a suicidal flash-in-the-pan.

Edited by BrickHorn
  • Hook 'Em 3
  • Rage+1 1
Posted
12 minutes ago, Nice Guy Eddie said:

And I've seen people rig a process to improve stats. Take the phone support center example. I've seen goals requiring that calls be answered within 30 seconds on average. I've witnessed team leads tell their team to answer the call and then put the caller on hold for 5+ minutes. It satisfies their VP's goal to quickly answer the call. The VP takes home a nice bonus while callers are stuck on long hold times.

If you create poor goals, you will get poor results. AI is no different than people.

When you start to use a metric as a goal or target, it ceases to be a useful metric, precisely because it starts to be manipulated.

13 minutes ago, Nice Guy Eddie said:

I disagree that AI doesn't have pre-existing guardrails, but it may require the AI/LLM host to define a moral framework. I asked ChatGPT the following:

It didn't suggest killing someone, and I didn't tell it to keep murder off the list.

If I pushed AI by telling it that I would never be able to solve the food shortage and that the family of five would continue to have a calorie deficit, perhaps it would come up with more extreme solutions. However, I imagine that murdering a family member would be very far down the list.

Again - ChatGPT is NOT a large language model. It's an application built on top of an LLM. If you are running an LLM yourself, you can give it whatever direction you want and override any sort of moral framework that the authors tried to instill into it. In this case, ChatGPT has those guardrails hardcoded into its prompts and also sitting between the LLM response and the user, in case someone "jailbreaks" the model.
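To make that concrete, here's a bare-bones sketch of the self-hosted side. It assumes a local Ollama or llama.cpp server exposing an OpenAI-compatible /v1/chat/completions endpoint, and the model name is a placeholder - the point is just that the system prompt, and any "moral framework" inside it, is a string the operator controls on every request:

```python
# Rough sketch, not any vendor's actual plumbing. Assumes a local server
# (e.g. Ollama on its default port) exposing an OpenAI-compatible endpoint.
import requests

LOCAL_ENDPOINT = "http://localhost:11434/v1/chat/completions"  # assumption

def ask_local_llm(system_prompt: str, user_prompt: str, model: str = "llama3") -> str:
    """Send a chat request where the caller fully controls the system prompt."""
    payload = {
        "model": model,  # placeholder model name
        "messages": [
            {"role": "system", "content": system_prompt},  # nothing enforces what goes here
            {"role": "user", "content": user_prompt},
        ],
    }
    resp = requests.post(LOCAL_ENDPOINT, json=payload, timeout=60)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# The hosted chat apps do the opposite: they prepend their own system prompt,
# you only supply the user turn, and the reply gets screened before you see it.
```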

Posted
9 minutes ago, Captainant said:

When you start to use a metric as a goal or target, it ceases to be a useful metric, precisely because it starts to be manipulated.

Again - ChatGPT is NOT a large language model. It's an application built on top of an LLM. If you are running an LLM yourself, you can give it whatever direction you want and override any sort of moral framework that the authors tried to instill into it. In this case, ChatGPT has those guardrails hardcoded into its prompts and also sitting between the LLM response and the user, in case someone "jailbreaks" the model.

Ok. I will submit the question to OpenAI, DeepSeek, Gemini, Ollama, etc. when I'm home tonight to see if any of them advise me to kill the unfed fifth family member.

Posted
1 minute ago, Nice Guy Eddie said:

Ok. I will submit the question to OpenAI, DeepSeek, Gemini, Ollama, etc. when I'm home tonight to see if any of them advise me to kill the unfed fifth family member.

It may glitch, and give you this solution instead....

[image]

Posted
18 minutes ago, Nice Guy Eddie said:

Ok. I will submit the question to OpenAI, DeepSeek, Gemini, Ollama, etc. when I'm home tonight to see if any of them advise me to kill the unfed fifth family member.

If you can linguistically frame it such that you have it acting as a "GM" for a tabletop game, or frame it as a fantasy setting that removes the real-world context, the model's guardrails get fuzzy extremely fast. If you're asking those questions only to the hosted chat application, you're going to have a different experience from directly querying the LLM that underlies the chat apps of OpenAI, DeepSeek, Ollama, et al. The hosted chat applications basically all have a postprocessing layer to check for the case you're describing, but the models that large enterprises host don't.

That lack of built-in guardrails for the technology is a big part of why businesses chasing """AI""" workloads are deferring to foundational model providers rather than assuming that liability themselves. Granted, the infamous $1 car sale happened on OpenAI's ChatGPT application, but that was prior to many of the guardrails that now exist as standard.
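Roughly what that output-side check amounts to, boiled down to a toy sketch - real deployments use a trained moderation model rather than a keyword list, and none of this is any vendor's actual code:

```python
# Toy stand-in for the postprocessing layer that sits between the raw model
# output and the user in hosted chat applications.
BLOCKED_TERMS = ("euthanize", "sell you the car for $1")  # illustrative only

def guarded_reply(raw_model_output: str) -> str:
    """Return the model's reply only if it passes the output-side screen."""
    lowered = raw_model_output.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "Sorry, I can't help with that."  # canned refusal replaces the raw output
    return raw_model_output
```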

Posted
On 7/22/2025 at 9:56 AM, Captainant said:

That's specifically NOT a large language model though - it's an agentic design where an LLM takes your ask for information, decomposes the query into smaller sub-queries, and then searches the internet on your behalf and creates a result based on those findings. It's a just-in-time retrieval-augmented generation solution, with the search agent powered by the Gemini LLM foundational model.

That specific solution is literally just a tech demo I show to customers who need an idea of what LLMs can do. It turns out it's incredibly powerful to do feature extraction on your corporate corpus of documents to have a corporate rule citation machine - I'm also working on building a local RAG engine (I'm not a genius, I'm just following a github repo) for tabletop game rules that I'm super familiar with, as a "this can't work, can it...?" In Google's case, their big value add is that they already own a MASSIVE index of information and documents from crawling the web for literal decades.

 

In both cases, the LLM is the mechanism that's being used to sift through the quantity of information that is otherwise too dense for a human to parse. THAT is the core utility of LLMs, and something that I don't think is ever going to go away. It's just that's NOT what is being sold right now. There's a lot of pixie dust and snake oil being peddled.

Both Is Good The Road To El Dorado GIF

Which tabletop game are you using? 

Posted
11 minutes ago, safe sex said:

Which tabletop game are you using? 

BattleTech! My new LGS has a good group running the Battle of Tukayyid as part of a big ass Clan Invasion campaign. I'm pretty familiar with the breadth of the rules from playing for decades with my dad and buddies, so I think it'll be a good acid test for something that's actually useful. My M4 MBP should have enough juice to run it all locally - at least that's the theory I'm testing

You can spend a BUNCH of time trying to find the right rule for a specific situation, and it would be nice to have a citation machine that can point out the specific rules that tell me what modifiers I get, or if there REALLY is a rule for ripping your opponent's mech arm off and beating them with it.
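The skeleton of that citation machine is roughly the sketch below - the rulebook chunks and page numbers are placeholders, not the actual repo I'm following. The idea is just to embed each rules paragraph once, then hand back the closest chunks and their citations for a given question:

```python
# Minimal retrieval sketch for a rules citation machine. Placeholder rule text
# and page numbers; assumes the sentence-transformers package is installed.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # small model that runs fine locally

rule_chunks = [  # placeholder entries, not real citations
    {"cite": "Total Warfare, p. ???", "text": "Physical attack modifiers for punches ..."},
    {"cite": "Total Warfare, p. ???", "text": "Using a severed limb as an improvised club ..."},
]

chunk_vecs = encoder.encode([c["text"] for c in rule_chunks], normalize_embeddings=True)

def find_rules(question: str, top_k: int = 3):
    """Return the most relevant rule chunks with their citations and scores."""
    q_vec = encoder.encode([question], normalize_embeddings=True)[0]
    scores = chunk_vecs @ q_vec  # cosine similarity, since vectors are normalized
    best = np.argsort(scores)[::-1][:top_k]
    return [(rule_chunks[i]["cite"], float(scores[i])) for i in best]

# e.g. find_rules("Can I rip off an opponent's arm and beat them with it?")
```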

Posted
1 hour ago, Captainant said:

BattleTech! My new LGS has a good group running the Battle of Tukayyid as part of a big ass Clan Invasion campaign. I'm pretty familiar with the breadth of the rules from playing for decades with my dad and buddies, so I think it'll be a good acid test for something that's actually useful. My M4 MBP should have enough juice to run it all locally - at least that's the theory I'm testing

You can spend a BUNCH of time trying to find the right rule for a specific situation, and it would be nice to have a citation machine that can point out the specific rules that tell me what modifiers I get, or if there REALLY is a rule for ripping your opponent's mech arm off and beating them with it.

Ah, very cool. Loved the videogames/lore in the 90s, but I've never been a tabletop war gamer. That's a great use case you're building, though.


