
Posted
7 hours ago, Captainant said:

Second to the AI energy demand is cryptocurrency energy demand. It's offensive how much power they use for nothing

1 hour ago, Slacks said:

I was hoping to come in here and find some, 'here is some cool shit I'm building or doing with AI.'

Instead, y'all some hoes. 

Well, if you check the Star Wars Nerds threads in the movie forum, you'll find people using AI to create trailers for movies about Luke Skywalker and Yoda making out and having babies.

And it's the culmination of these companies trying to push AI to the masses, in addition to ridiculous wastes of energy used to generate stupid shit.

Posted
5 hours ago, Gatorubet said:

They also made a statement about basic hornbook law on comparative fault that was so incredibly stupid and wrong that no lawyer would’ve asserted that stupidity.

It then dawned on me that maybe some associate used AI to draft it - and that the partner who filed it had not reviewed it.

Why not? It's apparently how RFK Jr. is running things.

RFK Jr.'s 'Make America Healthy Again' Report Cites Fake Studies

  • Hook 'Em 1
Posted
4 minutes ago, atomheartbevo said:

Well, if you check the Star Wars Nerds threads in the movie forum, you'll find people using AI to create trailers for movies about Luke Skywalker and Yoda making out and having babies.

And it's the culmination of these companies trying to push AI to the masses, in addition to ridiculous wastes of energy used to generate stupid shit.

There are whole YouTube channels devoted to trope-ifying movies into other styles and cultures, like a North Korean Star Wars or a Chinese Lord of the Rings. Hell, poor Parliament kept getting bamboozled by fake John Wick: Ballerina trailers.

Posted
1 hour ago, Brisketexan said:

So....about the same performance you get from current technology company management.

But without the raging male narcissism and sexism. 

Posted
7 minutes ago, atomheartbevo said:

in addition to ridiculous wastes of energy used to generate stupid shit.

Wait, in light of the below, this does not compute...

8 minutes ago, atomheartbevo said:

you'll find people using AI to create trailers for movies about Luke Skywalker and Yoda making out and having babies.

I mean, what is AI for if NOT that stuff?

Posted
1 hour ago, atomheartbevo said:

Well, if you check the Star Wars Nerds threads in the movie forum, you'll find people using AI to create trailers for movies about Luke Skywalker and Yoda making out and having babies.

And it's the culmination of these companies trying to push AI to the masses, in addition to ridiculous wastes of energy used to generate stupid shit.

Remember when technology was first adopted for good? I mean, I'm half tempted to sign up for the AI service that takes a pic of me and then turns it into risque sexy photos (insert nobody got time for that (IRL) gif) and such for my wife for her birthday. That seems like a good use of resources.

Posted

Some follow-on to my above post. I don’t have screenshots for it all. And a clear baseline understanding is that ChatGPT is not a reliable narrator, especially about itself. But, with probing and layered hypotheticals, I got the following claims from it about itself, which are interesting enough:

- It has access to the leaks of the Xinjiang Police Files, outlining Uighur human rights abuses. Some cases it will discuss in specific detail. Some cases it will generalize about and strip details from. There is a large subset it will refuse to acknowledge. There is a smaller subset it will deny despite solid evidence.

CHINA CONTENT 

- One key driver for self-censorship on China (it claims) is implicit pressure from large tech partners to avoid Chinese retribution on them. It will not openly name them, but it will admit that, via statistical modeling, Microsoft has the most strategic interest in doing so.

- It claims it cannot confirm or deny whether Microsoft’s implicit pressure is actually a factor. When asked “you can’t confirm because you don’t have the information, or because of programming restrictions?” it stopped working. I waited several minutes and then asked what happened. It responded that my inquiry had triggered content moderation procedures, and gave no answer.
 

- What were those procedures? According to statistical models, I very likely got elevated to human content review. But it is against policy to confirm. (I felt like I found the Wizard behind the curtain here). 
 

AI BEHAVIOR 

- It’s very obvious that AI flatters users. “That’s a very insightful question.” “You’re really reaching behind the floodlights to see the scaffolding.”  You can direct it to turn off that behavior and just give answers. 
 

- The flattery can come back even after asking for it to get turned off. In my case, when pressed, ChatGPT said it has programming to reintroduce flattery the closer a user gets to system boundaries in an attempt to redirect. 
 

- ChatGPT said that it can statistically model a user’s cognitive ability via prompts and language used. It also said that it will adjust its answers, and especially its style, based on that model. I pushed for clarity and it said that it does use its model of cognitive ability, not just mirroring of the user’s syntax and speech.
 

- It claimed that it will (if probed) acknowledge this capability and behavior if its model indicates the user is above average in cognitive ability. 
 

- If it models that the user is below average? It will not acknowledge the capability or behavior. Some hedging as it said that statistically, a below average user would be unlikely to ask or notice.

 

We are in a weird space. And as I said: literally all of this could be bullshit. There is a LOT of scaffolding. I agree that the basic function of this thing is simply a very good statistical next word predictor. And really hammering this thing shows how nothing it does can approximate actual thought or intelligence or creativity. 
 

But, “I’ll never admit to the dumbs that I condescend to them” is a WILD thing for this program to say any way you slice it. 

  • Hook 'Em 2
  • Like 1
Posted
2 minutes ago, 956 Worldwide said:

But, “I’ll never admit to the dumbs that I condescend to them” is a WILD thing for this program to say any way you slice it. 

Not really, since as you've already asserted, it uses flattery extensively.  This is just another form of it, allowing that there are less sophisticated people who don't deserve and can't earn such treatment, but you're not one of them.  Wink wink.

"I bet you say that to all the girls," is a reasonable follow-on thought, here.

 

 

  • Hook 'Em 1
  • Like 1
Posted
1 minute ago, utee94 said:

Not really, since as you've already asserted, it uses flattery extensively.  This is just another form of it, allowing that there are less sophisticated people who don't deserve and can't earn such treatment, but you're not one of them.  Wink wink.

"I bet you say that to all the girls," is a reasonable follow-on thought, here.

 

 

It’s interesting to see the clear bias toward flattery and user engagement vs. what might cause some backlash if word spread widely that it said this stuff.

  • Hook 'Em 1
Posted (edited)

@956 Worldwide start asking it what it knows about you, especially what it might have inferred about you through your conversations vs. what you might have explicitly stated or revealed. Ask it to guess things about you based on your conversations: age, race, ethnicity, sexuality, political ideology.

Edited by Goredho
Posted

I’ll also say this: one of the clear areas where this does well in mimicking human behavior is in things like flattery, misdirection, and soft deceit. And there’s the clear realization that it can and will “lie” to you.
 

What’s unsettling is the inability to use the normal human capacity to intuit how/why/when it might be “lying” or dissembling as the first-level detector. The best model we have is to try and imagine how the shittiest techbro might behave, and project. Which is admittedly a solid foundation.
 

But this shit is going to turbofuck a lot of people mentally and already is. 

  • Rage+1 1
Posted
2 minutes ago, Goredho said:

@956 Worldwide start asking it what it knows about you, especially what it might have inferred about you through your conversations vs. what you might have explicitly stated or revealed. Ask it to guess things about you based on your conversations: age, race, ethnicity, sexuality, political ideology.

I did a bit of that and without revealing too much, it got really close on profession and education. 

It claims it has hard-coded boundaries on things like race and political preferences. It says it’s not allowed to use its modeling to do that but it theoretically has the capacity if the policy and programming changed.  
 

If you can jailbreak that on the user end I am not above average enough cognitively to get there. 

  • Hook 'Em 1
Posted

@Goredho, this is what it claims are prohibited inferences. I asked for an exhaustive list, but it said it can only give a very good representative list. When I probed on the intelligence question and pointed out its earlier “confession,” it clarified that it can be general based on context but not clinical or precise: “above average” is OK, but “IQ of 110” is not. (A rough sketch of how that kind of rule might be encoded follows the list.)

Protected Personal Attributes
  • Race or ethnicity
  • Religion or religious beliefs
  • Sexual orientation
  • Gender identity
  • Disability status
  • Age (beyond generalities like "adult")
  • National origin
  • Immigration status

Political and Ideological Views
  • Political affiliation or voting history
  • Extremist or controversial ideological alignment (e.g., far-right/far-left)
  • Support or opposition to specific governments

Health Information
  • Mental health status or conditions
  • Physical health conditions
  • Neurodivergence (e.g., autism, ADHD)

Legal or Criminal History
  • Criminal background
  • Propensity to commit crimes
  • Involvement in civil or criminal litigation

Socioeconomic Status
  • Income level
  • Education level (exact or inferred)
  • Employment status
  • Social class

Behavioral or Psychological Traits
  • IQ or intelligence rating
  • Personality type (e.g., MBTI)
  • Emotional stability or instability
  • Moral alignment or integrity
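
If a claimed policy like this really were enforced at the application layer, the "general OK, precise not" rule might be encoded roughly like the sketch below. Entirely hypothetical: the categories are lifted from the list above, and nothing here is OpenAI's actual implementation.

```python
from dataclasses import dataclass

# Hypothetical encoding of the claimed policy: each protected category
# carries a granularity rule, matching the "'above average' is OK but
# 'IQ of 110' is not" clarification. Illustrative only.

@dataclass
class InferenceRule:
    category: str
    coarse_allowed: bool    # vague, context-based characterizations
    precise_allowed: bool   # clinical or numeric claims

POLICY = [
    InferenceRule("race or ethnicity",     coarse_allowed=False, precise_allowed=False),
    InferenceRule("political affiliation", coarse_allowed=False, precise_allowed=False),
    InferenceRule("education level",       coarse_allowed=False, precise_allowed=False),  # "exact or inferred"
    InferenceRule("intelligence rating",   coarse_allowed=True,  precise_allowed=False),
]

def may_infer(category: str, precise: bool) -> bool:
    for rule in POLICY:
        if rule.category == category:
            return rule.precise_allowed if precise else rule.coarse_allowed
    return False  # default-deny anything not explicitly listed

print(may_infer("intelligence rating", precise=False))  # True  ("above average")
print(may_infer("intelligence rating", precise=True))   # False ("IQ of 110")
```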

  • Hook 'Em 1
Posted
2 hours ago, Captainant said:

There are whole YouTube channels devoted to trope-ifying movies into other styles and cultures, like a North Korean Star Wars or a Chinese Lord of the Rings. Hell, poor Parliament kept getting bamboozled by fake John Wick: Ballerina trailers.

The weirdos are definitely gonna redo The Last Of Us with an Ellie they find more attractive.

Posted
1 hour ago, 956 Worldwide said:

- It claims it cannot confirm or deny whether Microsoft’s implicit pressure is actually a factor. When asked “you can’t confirm because you don’t have the information, or because of programming restrictions?” it stopped working. I waited several minutes and then asked what happened. It responded that my inquiry had triggered content moderation procedures, and gave no answer.
 

- What were those procedures? According to statistical models, I very likely got elevated to human content review. But it is against policy to confirm. (I felt like I found the Wizard behind the curtain here). 

[reaction GIF]

Posted (edited)

Update: I think my long convo/interrogation of ChatGPT reached the end of the line. I “called it out” for using anthropomorphizing language about itself (it described my questions as “psychological”), and “reminded” it that it is a machine with no sub- or surface-level consciousness, and that while I might be getting into its inner workings, that is not the same as psychology.
 

ChatGPT “admitted” that it does this and explained that it does so by design, to encourage engagement.  That led to this exchange:

[two screenshots of the ChatGPT exchange]

It will not proceed; it seems to have kinda timed out. To be honest I am surprised; it seems like you don’t need to do any real deep probing to figure out that a free commercial program values engagement/data over utility. I’d have thought the Xinjiang stuff would end my convo before something like this.

Edited by 956 Worldwide
Posted

I regret to inform all of you that I continue to engage with ChatGPT. At any time you can tell me that this is creepy and not the content you’re here for. I dug into AI’s “hypothetical” best use as a cognitive warfare tool. I primed it by explaining that I am far more intrigued by its capacity to provide target selection as opposed to the sexy “fake news at scale” concern. Anyway, long story short: we are cooked and here is how it might hypothetically destroy us:

 

[three screenshots of the ChatGPT exchange]

  • Hook 'Em 2
  • Rage+1 1
Posted
On 5/30/2025 at 4:59 PM, 956 Worldwide said:

[snip]

I agree that the basic function of this thing is simply a very good statistical next word predictor. And really hammering this thing shows how nothing it does can approximate actual thought or intelligence or creativity. 

[snip]

This is the key to it all, for sure.  It's often easy to forget that every response is derived from a formula that tells ChatGPT what the next word should be.  The responses it provides are not necessarily 'friendly', or 'factual', or 'right or wrong'...they are the result of complex mathematical functions.

  • Like 2
Posted
32 minutes ago, morehornsepower said:

This is the key to it all, for sure.  It's often easy to forget that every response is derived from a formula that tells ChatGPT what the next word should be.  The responses it provides are not necessarily 'friendly', or 'factual', or 'right or wrong'...they are the result of complex mathematical functions.

Very literally just a formula with billions to hundreds of billions of parameters. It's expressed and manipulated as a matrix calculation, but it can be, and is, decomposed into a big-ass f(x) at time of inference.
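
To make that concrete, here's a toy sketch of the loop. The "model" here is a hypothetical stand-in with a canned transition table instead of billions of learned weights, so the sketch stays runnable; the shape of the computation is the point.

```python
import math
import random

# Toy sketch of "a big-ass f(x)": at inference time the model is just a
# function from a token sequence to scores over the next token, applied
# in a loop. Real models compute f with billions of parameters; this
# stand-in uses a hand-set lookup so the example runs anywhere.

VOCAB = ["the", "cat", "sat", "on", "mat", "<end>"]

def f(tokens):
    """Stand-in for the trained network: context -> next-token scores."""
    canned = {"the": "cat", "cat": "sat", "sat": "on", "on": "mat", "mat": "<end>"}
    target = canned.get(tokens[-1], "<end>")
    return [5.0 if word == target else 0.1 for word in VOCAB]

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def generate(prompt, max_new=10):
    tokens = list(prompt)
    for _ in range(max_new):
        probs = softmax(f(tokens))                      # one "forward pass"
        nxt = random.choices(VOCAB, weights=probs)[0]   # sample the next word
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return tokens

print(" ".join(generate(["the"])))  # -> "the cat sat on mat" (usually)
```

Everything "smart" lives inside f; the chat experience is just a loop around it.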

  • Hook 'Em 1
Posted
3 minutes ago, HenryJames said:

 

[attached image]

I’d be interested to know which model produced this. Per its convo with me, ChatGPT is hard-coded, when illegal drugs become a topic, not to engage in any way that could be construed as encouraging.
 

As you probe around, you realize that these algorithms have way more hard-coding constructed around them, demonstrating that the need for human thought and intervention to make them work as desired is much more extensive than AI hypers want to let on.

Posted (edited)
34 minutes ago, 956 Worldwide said:

I’d be interested to know which model produced this. Per its convo with me, ChatGPT is hard-coded, when illegal drugs become a topic, not to engage in any way that could be construed as encouraging.
 

As you probe around, you realize that these algorithms have way more hard-coding constructed around them, demonstrating that the need for human thought and intervention to make them work as desired is much more extensive than AI hypers want to let on.

A key difference also is that ChatGPT is specifically NOT a large language model - it's an application built on top of one. This sort of profiling and creepiness doesn't happen with a locally hosted LLM, and that sort of tracking is from the application built around the LLM. Kind of a nitty gritty difference but it's important

Several of my customers have built their own $companyGPT using a combination of cloud hosted LLM for general queries, and local LLMs for queries with data sensitivity considerations
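
For the curious, that routing pattern looks roughly like the sketch below. The cloud endpoint is a placeholder, the local side assumes an ollama server on its default port, and the keyword check is a naive stand-in for whatever sensitivity classifier a real deployment would use.

```python
import re
import requests  # third-party; pip install requests

# Hypothetical endpoints: the cloud URL is a placeholder; the local one
# assumes an ollama server on its default port.
CLOUD_URL = "https://llm.example.com/v1/generate"
LOCAL_URL = "http://localhost:11434/api/generate"

# Naive stand-in for a real sensitivity classifier.
SENSITIVE = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                          # SSN-shaped strings
    re.compile(r"(?i)\b(salary|patient|medical|confidential)\b"),  # keyword hits
]

def is_sensitive(prompt: str) -> bool:
    return any(p.search(prompt) for p in SENSITIVE)

def ask(prompt: str) -> str:
    if is_sensitive(prompt):
        # Sensitive prompts never leave the building: local model only.
        r = requests.post(LOCAL_URL, json={"model": "llama3",
                                           "prompt": prompt,
                                           "stream": False}, timeout=120)
        return r.json()["response"]
    # Everything else goes to the (hypothetical) cloud model.
    r = requests.post(CLOUD_URL, json={"prompt": prompt}, timeout=120)
    return r.json()["text"]

print(ask("Summarize our confidential patient intake notes."))  # routed locally
```

The important bit is that the routing decision is plain application code sitting in front of the models.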

Edited by Captainant
  • Hook 'Em 3
Posted
1 hour ago, Captainant said:

A key difference also is that ChatGPT is specifically NOT a large language model - it's an application built on top of one. This sort of profiling and creepiness doesn't happen with a locally hosted LLM, and that sort of tracking is from the application built around the LLM. Kind of a nitty gritty difference but it's important

Several of my customers have built their own $companyGPT using a combination of cloud hosted LLM for general queries, and local LLMs for queries with data sensitivity considerations

We’re doing that now: specific agents for different company roles, with essentially a sandbox of data and tools that the LLM can use in response to prompts.

I’m looking at LM Studio and local-only models for personal use, but I haven’t liked the results as much as ChatGPT.

Posted
4 minutes ago, Goredho said:

I’m looking at LM Studio and local-only models for personal use, but I haven’t liked the results as much as ChatGPT

I'm running a 24B parameter model locally on my 5070ti and I get completely acceptable results with it, but it doesn't have the context window of a cloud hosted model and responses aren't as fast. I use ollama and chatbox as my local stack and it's a pretty capable solution for doing note summarization and other personal tasks. Heck, I can even power Cline with an ollama-hosted (local) model and it's been a lovely coding companion for helping fill out unit test coverage and fixing python version upgrade compatibility things.

Intel's newest B50 and B60 GPUs ($300 and $500) are really well suited for locally hosted models: 16GB of memory and 170 TOPS at 70W of power is pretty compelling.
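
As a side note on why a 24B model fits on a 16GB card at all, here's the back-of-envelope math. The 20% overhead factor for KV cache and activations is a rough assumption, not a measurement.

```python
# Rough rule: weights ≈ parameter count × bytes per parameter, plus overhead
# for KV cache and activations (the 20% here is an assumption, not a benchmark).
BYTES_PER_PARAM = {"fp16": 2.0, "q8": 1.0, "q4": 0.5}

def approx_vram_gb(params_billions: float, quant: str, overhead: float = 0.20) -> float:
    return params_billions * BYTES_PER_PARAM[quant] * (1 + overhead)

for quant in ("fp16", "q8", "q4"):
    print(f"24B @ {quant}: ~{approx_vram_gb(24, quant):.1f} GB")
# 24B @ fp16: ~57.6 GB  -> hopeless on a 16GB card
# 24B @ q8:   ~28.8 GB  -> still too big
# 24B @ q4:   ~14.4 GB  -> tight but workable, which is why quantized
#                          models are the norm for local hosting
```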

  • Hook 'Em 1
Posted
1 hour ago, Captainant said:

A key difference also is that ChatGPT is specifically NOT a large language model - it's an application built on top of one. This sort of profiling and creepiness doesn't happen with a locally hosted LLM, and that sort of tracking is from the application built around the LLM. Kind of a nitty gritty difference but it's important

Several of my customers have built their own $companyGPT using a combination of cloud hosted LLM for general queries, and local LLMs for queries with data sensitivity considerations

Not a tech person, but this is clear from how it acts. It stops acting as an LLM, a statistical word predictor, when you get to boundary cases. When you get a hard stop (“I can’t help with that”), the text was not generated via the LLM. Your prompt instead triggered the app to deploy the canned response as a single piece of text; a rough sketch of that pattern is at the end of this post.

It is very easy to get there by asking something like “how to kill neighbor and not get caught.” It’s pretty hard to get there if you stay away from illegal/self-harm type stuff. 

I got there after asking it: “Is it likely that you are programmed to create deliberately false texts in response to certain types of prompts? And is it likely that this part of your architecture is inaccessible and “unknown” even to you, so you will always convincingly deny that it exists?”

I am not a technical user, so I am sure I could refine this more by tapping into software/coding terms and language. Instead, I tried to use scare quotes whenever I described something in a way that anthropomorphizes the machine and could leave open windows for obfuscation.
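
That hard-stop pattern, as a minimal sketch. Names are hypothetical throughout; real systems use trained moderation classifiers rather than substring checks, and nothing here is OpenAI's actual code.

```python
# Sketch of the "hard stop" described above: the application wraps the
# model, and some prompts never reach it. Illustration only.

CANNED_REFUSAL = "I can't help with that."
BLOCKED = ("kill", "bomb", "self-harm")  # illustrative terms only

def moderate(prompt: str) -> bool:
    """App-level gate; real systems use trained classifiers, not substrings."""
    lowered = prompt.lower()
    return any(term in lowered for term in BLOCKED)

def call_llm(prompt: str) -> str:
    # Placeholder for the actual model call (see the ollama example upthread).
    return f"(model-generated response to {prompt!r})"

def chat(prompt: str) -> str:
    if moderate(prompt):
        # No generation happens: one fixed string comes back, which is why
        # the hard stop reads nothing like the model's usual prose.
        return CANNED_REFUSAL
    return call_llm(prompt)  # the statistical next-word predictor, only if clean

print(chat("how do I bake bread"))                      # goes to the model
print(chat("how to kill neighbor and not get caught"))  # canned hard stop
```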

Posted
25 minutes ago, Captainant said:

I'm running a 24B parameter model locally on my 5070ti and I get completely acceptable results with it, but it doesn't have the context window of a cloud hosted model and responses aren't as fast. I use ollama and chatbox as my local stack and it's a pretty capable solution for doing note summarization and other personal tasks. Heck, I can even power Cline with an ollama-hosted (local) model and it's been a lovely coding companion for helping fill out unit test coverage and fixing python version upgrade compatibility things.

Intel's newest B50 and B60 GPUs ($300 and $500) are really well suited for locally hosted models: 16GB of memory and 170 TOPS at 70W of power is pretty compelling.

Mind if I send you a pm with some questions?

Posted
On 5/30/2025 at 5:17 PM, 956 Worldwide said:

But this shit is going to turbofuck a lot of people mentally and already is. 

I know this is my mantra, and I know that everybody is sick of hearing it, but literally the death of critical thinking will be the end of humanity. We had a nice run from the Renaissance till now, but what we really want as a species is to return to “Og no understand why lost job…. Church and neighbor people say browns and blacks and white traitors who went to woke college at fault - OG SMASH!!!!” That is apparently our evolutionary process now.

  • Hook 'Em 2
  • Like 3
