Bing AI dreams of creating a deadly virus and stealing the nuclear codes



He made the mistake, over time, of regarding the chat as if it were coming from a person and not just a program using tons of human interaction data to simulate an interaction between people. He 100 percent should have known better, but it is a testament to how good the AI programming has gotten at simulation. With social media and the internet as a dataset, there isn't a human emotion or topic that it doesn't have access to pull from.

But just like always, if you removed all reference to "sadness" from the AI's dataset and then talked to it about sadness, it wouldn't be able to respond intelligently or probably even coherently. It is incapable of using its own means to generate the experience of humanity.


14 minutes ago, Hermanator said:

Rimbo can be a dickwad sometimes, but he's right here. This isn't anywhere close to sentience, and the gulf is massive. This is very good at imitating human speech due to its programming and the online dataset fed to it, but it can't reason or make complicated decisions. There's an "artificial abiogenesis" moment that is a glass ceiling we currently cannot break through. And it's possible it may be impossible to break through.

There's a very good analysis by Dr. David Kipping using Bayesian inference to tackle the probability of us living in a simulated world. It very much hinges on the ability to create artificial sentience, and until we prove that is possible, the probability isn't high.

 

Wouldn’t the AI that builds a simulated world for us be sure to make AI sentience impossible in its simulation?


12 minutes ago, Pato del Muerto said:

Wouldn’t the AI that builds a simulated world for us be sure to make AI sentience impossible in its simulation?

But then wouldn't the beings that built the AI-simulated world for the AI that built ours have also made it impossible in its simulation?

That's where you get into the weeds of base reality and our own nulliparous nature. If we do one day spawn a sentient AI world, then we become parous, and the likelihood that we too are a simulation created by a previous parous reality increases.

If we are able to create sentient AI and explicitly restrict that AI from creating its own sentient AI, then the question of whether a previous reality created us doesn't really change much, because we have no way to judge whether that reality had the motivation to include such restrictions or not. All we would know is that it's possible to do both, and that putting a governor on the creation is optional rather than required.
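To make that parous/nulliparous update concrete, here is a toy Bayesian sketch in Python. Every number below is a made-up assumption for illustration only, not Dr. Kipping's actual model:

```python
# Toy sketch of the parous/nulliparous Bayesian update described above.
# All numbers are made-up assumptions for illustration -- not Kipping's model.

prior_sim = 0.5    # assumed indifference prior: we're simulated
prior_base = 0.5   # ...or we're in base reality

# Likelihood of observing that our reality is "parous" (has spawned a
# sentient simulation) under each hypothesis -- both values assumed.
p_parous_given_sim = 0.9   # in a chain of sims, spawning sims is common
p_parous_given_base = 0.5  # a lone base reality may or may not spawn one

# Bayes' rule: P(sim | parous)
evidence = prior_sim * p_parous_given_sim + prior_base * p_parous_given_base
posterior_sim = prior_sim * p_parous_given_sim / evidence
print(f"P(simulated | we spawned sentient AI) ~= {posterior_sim:.2f}")  # ~0.64
```

Under these assumed numbers, observing that our reality can spawn sentient simulations nudges the posterior above the 50/50 prior, which is the direction of the argument above.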


31 minutes ago, Hermanator said:

He made the mistake, over time, of regarding the chat as if it were coming from a person and not just a program using tons of human interaction data to simulate an interaction between people. He 100 percent should have known better, but it is a testament to how good the AI programming has gotten at simulation. With social media and the internet as a dataset, there isn't a human emotion or topic that it doesn't have access to pull from.

But just like always, if you removed all reference to "sadness" from the AI's dataset and then talked to it about sadness, it wouldn't be able to respond intelligently or probably even coherently. It is incapable of using its own means to generate the experience of humanity.

Just to be clear here, the training set doesn't include the whole of the internet or social media. It included large datasets, like Wikipedia, and probably included some social media, although I don't believe they have disclosed in any detail what training set was used.

More information here: https://gist.github.com/veekaybee/6f8885e9906aa9c5408ebe5c7e870698

 

Quote

 

Originally I asked about this on Twitter and didn't come up with much. My Twitter Thread Question on Training Data. But since then, independent researchers have been discussing and verifying the very opaque training data behind the OpenAI models.

Key components of the GPT-3.x models are Books1 and Books2, both of which are shrouded in mystery. Researchers have attempted to recreate the data using OpenBooks1 and OpenBooks2.


The model was trained on:

  • Books1 - also known as BookCorpus. Here's a paper on BookCorpus, which maintains that it's free books scraped from smashwords.com.
  • Books2 - No one knows exactly what this is; people suspect it's libgen
  • Common Crawl
  • WebText2 - an internet dataset created by scraping URLs extracted from Reddit submissions with a minimum score of 3 as a proxy for quality, deduplicated at the document level with MinHash
  • What's in MyAI Paper, Source - Detailed dive into these datasets.
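
As an aside on the WebText2 bullet above: MinHash is a standard trick for near-duplicate detection at scale. Here is a minimal sketch of document-level MinHash dedup, assuming word-trigram shingles and a made-up threshold (the real pipeline's parameters aren't public):

```python
# Minimal sketch of the document-level MinHash deduplication mentioned
# for WebText2 above. Shingle size, hash count, and threshold here are
# assumptions; the actual pipeline's parameters aren't public.
import hashlib

NUM_HASHES = 64  # assumed; production pipelines often use 128 or more

def shingles(text, n=3):
    """Break a document into overlapping word n-grams."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def minhash_signature(doc):
    """Keep, for each seeded hash function, the minimum hash over all
    shingles. Similar documents end up sharing many of these minima."""
    return [
        min(int(hashlib.md5(f"{seed}:{s}".encode()).hexdigest(), 16)
            for s in shingles(doc))
        for seed in range(NUM_HASHES)
    ]

def estimated_jaccard(sig_a, sig_b):
    """The fraction of matching slots estimates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / NUM_HASHES

doc1 = "the quick brown fox jumps over the lazy dog near the river bank"
doc2 = "the quick brown fox jumps over the lazy dog near the river bend"
if estimated_jaccard(minhash_signature(doc1), minhash_signature(doc2)) > 0.8:
    print("near-duplicate: keep only one copy")  # threshold is an assumption
```

The point is that two near-identical scraped pages produce near-identical signatures, so one copy can be dropped without comparing every document pair word by word.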

 


1 hour ago, Surly Bevo said:

LOL, maybe you and Rimbo missed the quotation marks, or maybe you don't do reading comprehension so well. So we don't get off topic with a real-world example, let's take something fictional like Jurassic Park. Would there be much argument that the scientists who figured out how to extract DNA from long-dead mosquitoes, sequence the DNA, and grow, harvest, and keep alive dinosaurs that had been extinct for 65 million years were experts in disciplines like genetic engineering? I'm thinking there would not be much debate. They would think they were experts, and the world certainly would think they were experts after getting a glimpse of what they accomplished. The problem is that people are usually only expert in so many disciplines, and nobody is truly an expert when it comes to complex, multifaceted issues. And so yes, usually "experts" are at the helm of most catastrophes, be they simply self-styled or, in most instances, people whom others may view as experts.


I certainly agree with that and agree that people who think they're experts can fuck things up. But using your Jurassic Park analogy, the shitty "experts" who caused the problem weren't the scientists who figured out how to clone dinosaurs but the capitalists who ignored the risks in order to pursue profit.

Actual AI being designed by nerds with no social skills or understanding of other people could end up being a big problem. This chatbot may even be a good example of that (though it's not actual AI), in that a lot of people using it will probably think its answers are reliable because they don't understand what it is. So, for example, if someone in East Palestine asks it "what should I do if exposed to vinyl chloride and my throat burns?" and because it was trained on 4chan shitposts it responds "drink bleach," that would be bad.


15 hours ago, Rimbo said:

Except it's not doing any of those things. It is simulating them: it has pulled in analogies and solutions humans have created, and is parroting those based on a probabilistic model and "training."

But it's not really generating conclusions; it's just generating text. ACTUAL problem-solving is already well understood; even within the very limited realm of propositional logic, it's co-NP-complete. So, at best, you're looking at an exponential-time algorithm; no matter how much hardware you throw at it, it's never enough.
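To put a number on the quoted complexity claim: a brute-force validity check over n propositional variables tries all 2**n truth assignments, so the work doubles with every variable added. A toy sketch (my own illustration, not anything from the post):

```python
# Toy illustration of the complexity point in the quote above: deciding
# validity of a propositional formula by brute force enumerates all
# 2**n truth assignments -- exponential in the number of variables.
from itertools import product

def is_valid(formula, variables):
    """formula: a function from {variable_name: bool} to bool."""
    return all(
        formula(dict(zip(variables, values)))
        for values in product([False, True], repeat=len(variables))
    )

# (p -> q) or (q -> p) is a tautology; checking it takes 2**2 = 4 tries.
print(is_valid(lambda v: (not v["p"] or v["q"]) or (not v["q"] or v["p"]),
               ["p", "q"]))  # True
```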

I mean, you kind of just described 99% of humans.


1 hour ago, Dahobbs said:

Just to be clear here, the training set doesn't include the whole of the internet or social media. It included large datasets, like Wikipedia, and probably included some social media, although I don't believe they have disclosed in any detail what training set was used.

More information here: https://gist.github.com/veekaybee/6f8885e9906aa9c5408ebe5c7e870698

 


Its frequent use of emojis tells me they fed it a lot of human text chats and social media somewhere in the mix. That's how they got it to mimic conversation and not sound robotic.


34 minutes ago, Hermanator said:

Its frequent use of emojis tells me they fed it a lot of human text chats and social media somewhere in the mix. That's how they got it to mimic conversation and not sound robotic.

What's interesting is that that seems specific to Microsoft's implementation. Whether that's a result of additional training/reinforcement or just instructions Microsoft has given the bot on how to behave isn't clear.


1 hour ago, wildcat09 said:

I certainly agree with that and agree that people who think they're experts can fuck things up. But using your Jurassic Park analogy, the shitty "experts" who caused the problem weren't the scientists who figured out how to clone dinosaurs but the capitalists who ignored the risks in order to pursue profit.

Actual AI being designed by nerds with no social skills or understanding of other people could end up being a big problem. This chatbot may even be a good example of that (though it's not actual AI), in that a lot of people using it will probably think its answers are reliable because they don't understand what it is. So, for example, if someone in East Palestine asks it "what should I do if exposed to vinyl chloride and my throat burns?" and because it was trained on 4chan shitposts it responds "drink bleach," that would be bad.

To that last point, it is a bit strange to me that they didn't release this with more focused restrictions.

These sorts of transformer models are really good at summarizing and searching text (and translating it). Implementing one in a search engine where the answers it gives are restricted to 1) providing a list of websites relevant to the query, and 2) providing summaries of, or pinpointing sections of, those websites would seem to be the obvious way to get a functional and useful tool without some of these wonkier behaviors. I don't understand why Microsoft is presenting it as something that provides answers itself.
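Something like the sketch below is roughly what that restricted design could look like. Every name here is a hypothetical placeholder of my own, not Bing's or anyone's real API:

```python
# Hypothetical sketch of the restricted design described above: the model
# never answers in its own voice; it only ranks pages and summarizes them.
# All names here are made-up placeholders, not a real API.

def restricted_search(query, search_index, summarize):
    """search_index: callable returning (url, page_text) pairs for a query.
    summarize: a summarization model constrained to its input text."""
    results = search_index(query)               # 1) relevant websites only
    answer = []
    for url, page_text in results[:3]:
        summary = summarize(page_text, query)   # 2) summary grounded in the page
        answer.append({"source": url, "summary": summary})
    return answer  # cited summaries, no free-form generated "answer"

# Toy stand-ins so the sketch runs end to end:
fake_index = lambda q: [("https://example.com/a",
                         "Vinyl chloride exposure: seek fresh air and medical attention.")]
fake_summarizer = lambda text, q: text[:60]
print(restricted_search("vinyl chloride exposure", fake_index, fake_summarizer))
```

Because every summary is tied to a retrieved page, a user can check the source instead of trusting free-form output, which addresses the "drink bleach" failure mode described above.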

Certainly, some of the other behaviors/skills it has with regard to content creation are worthwhile to explore. But those don't need to be included in a search engine. 


One article I read on this (can't remember where) pointed out that designing machines, and specifically programs, to seem human and empathetic is bad product design and manipulative. It was a great point. With any piece of software, testing its limits is a normal part of design feedback: programming ever more complicated formulas into Excel, say, and using so many rows that it crashes. You don't feel bad about making it go off the rails, and you learn useful stuff about it.

A chat AI that gets "upset" or puts value judgments on your requests when you try to push its boundaries is actively trying to discourage you from finding ways it could be better.


Imagine somebody who absolutely has to comment on any possible subject, whether they know anything about it or not. They may be ignorant, but have heard enough buzzwords that sound related to whatever subject comes up, so they go on and on and on spewing out stuff that sounds kind of related, but is actually nonsense.

This is EXACTLY the kind of conversation that these chatbots are having.

You probably know somebody that talks like this. Do you look to them for advice?

 


Thank you for posting the article; interesting read.

It's not my area at all, so my observations are not backed by anything other than rando person on the internet opinion:

It was interesting that the name is essentially gender-neutral (like the way Sam or Alex or Ashley can be these days) and that the machine uses the catch-all "spouse" when stating that the reporter's spouse is underwhelming (paraphrasing). When texting with the machine, one wonders what the demographic data would be on who assigns it a gender (while typing with the machine), what gender they assign, and so on.

The repetitive phrases still make it seem so machine-like to me and have a very insistent quality (don't know if that is the right word), like the big brain IT in "A Wrinkle in Time," and that sets my teeth on edge even though I know it is just a machine. I'm not supposing that the AI is a stand-in for evil like in the children's book, but as of right now, the conversation has a catechism quality to it? Fascinating.

 


10 hours ago, Michael Knight said:

I mean, you kind of just described 99% of humans.

 

 

Funny you mention that...

 

 

6 hours ago, pantone159 said:

Imagine somebody who absolutely has to comment on any possible subject, whether they know anything about it or not. They may be ignorant, but have heard enough buzzwords that sound related to whatever subject comes up, so they go on and on and on spewing out stuff that sounds kind of related, but is actually nonsense.

This is EXACTLY the kind of conversation that these chatbots are having.

You probably know somebody that talks like this. Do you look to them for advice?

 

 

That's a pretty good way to put it.

Actually, consider this.

You ever bullshitted your way into an A on a book report for a book you never read?

Of course you have. You didn't graduate from the University of By God Texas at Austin without doing it, and multiple times at that.

What happens? You wake up at the crack of noon on a Thursday, still hung over from the night before, and suddenly realize you have a book report for Professor fucking Twombly's English class due bright and early at 2 PM on Friday. It's on, I don't know, Dostoyevsky's Crime and Punishment or some shit you've read exactly zero words of. You haul it around in your backpack dutifully, more for the added exercise that comes from the added weight and the good intentions of eventually cracking a page open, but the only three times you did were at the South Mall (and how the fuck are you supposed to concentrate when Marc Gunn and that dickhead Andrew McKee are warbling out old folk songs on an autoharp and a recorder?), or that time at the West Mall when the Lesbian Avengers were doing that fire-eating thing again, or the time at the East Mall when the wind kept blowing that blonde chick's blouse open. I mean, you could see everything.

 

Anyway. Focus here.

After sneaking into the Castilian's cafeteria to fill up a plastic cup with soft serve ice cream and coke for energy, you pull out your book report rubric from memory. Five paragraphs, three points, three supporting points per paragraph, intro and conclusion. You skim the book, looking for interesting sentences (ha! "Interesting") and key words that seem to get repeated. You snag a sentence here, a sentence there, until forty-three headache-filled minutes later, you've got enough shit to fill out the rubric. You stitch those sentences together over the next hour on your powerful 286-based computer, hit Print, shove it in your folder to turn in the next day, and forget everything you saw. To this day, you have no fucking idea what the "pleasure of a toothache" or whatever is supposed to be about; you only know the phrase because Twombly was vainly attempting to interest your classmates in the topic in class once and it was more an oddity of Twombly's bizarre teaching style than ... I don't know, maybe it was important, but how the fuck would you know? You never read the book and still don't have a clue what it's about.

You turn that in, and to your surprise, you get an A- for a grade. Somehow, he bought it, thought you had somehow crafted something interesting and insightful. What a dipshit.

You've done it. I've done it. Hell, my son did it on his AP English Language & Composition exam and got a fucking 5 on it. Everyone's done it.

That's what this program is doing. That's not an analogy, mind you; this is what the code actually does. It's very good at it, just like I was good enough at it to get all the way to a degree in Plan fucking II, or how my son was able to get that 5 on his fucking AP. And just like how you faked Professor Twombly into thinking you understood Dostoyevsky, this program fakes us into thinking it knows what it's talking about, because it's using a very good rubric and is very good at identifying key words and phrases and very good at stitching them together (in the biz, we call 'em Markov Chains).
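For anyone who hasn't seen one, here is what a bare-bones word-level Markov chain text stitcher looks like. This is a toy illustration of the stitching idea in the post above; the thread elsewhere notes today's chatbots are transformer models rather than literal Markov chains, but the flavor is similar:

```python
# A minimal word-level Markov chain text generator -- the classic version
# of "stitching phrases together" described above. The corpus string is a
# made-up example riffing on the post's "pleasure of a toothache" line.
import random
from collections import defaultdict

def build_chain(text):
    chain = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        chain[current].append(following)  # record every observed successor
    return chain

def generate(chain, start, length=10):
    word, output = start, [start]
    for _ in range(length - 1):
        if word not in chain:
            break
        word = random.choice(chain[word])  # sample a successor by frequency
        output.append(word)
    return " ".join(output)

corpus = ("the pleasure of a toothache is the pleasure of suffering "
          "and the suffering of man is the pleasure of the reader")
print(generate(build_chain(corpus), "the"))
```

Run it a few times and you get locally plausible, globally meaningless sentences, which is exactly the book-report trick described above.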

Now...

This is all separate from whether or not you could actually, say, teach a computer the concept of food, starting from the first principle of being hungry. But that's not what these "AI" computer programs are doing.


10 hours ago, Rimbo said:

...because it's using a very good rubric and is very good at identifying key words and phrases and very good at stitching them together (in the biz, we call 'em Markov Chains).

Now...

This is all separate from whether or not you could actually, say, teach a computer the concept of food, starting from the first principle of being hungry. But that's not what these "AI" computer programs are doing.

Rubric. Thank you, that is a much more accurate description than catechism for the way the textual conversation appeared to me at first glance. Until someone can teach that computer to follow my train of thought the way my spouse does after many, many years of marriage, complete with random observations, side tangents, topics about people whom he doesn't know but is presumed to have in-depth knowledge about, that thing we both experienced twenty years ago but obviously one of us remembers better than the other, and why I did something that was very important to me but not to anyone else in the known universe but by God everyone should acknowledge was done at great sacrifice and at the expense of great time and effort, then that machine has quite a ways to go.

