Google employees quitting over Skynet


F250


"Around a dozen Google employees have quit over the company's involvement in an artificial intelligence drone program for the Pentagon called Project Maven, Gizmodo reported today. Meanwhile, nearly 4,000 workers have now demanded an end to the company's participation in Maven in a petition that also calls for Google to avoid military work in the future."

https://www.engadget.com/2018/05/14/google-project-maven-employee-protest/

Based on this open letter from ICRAC, it sounds like Skynet is around the corner.

https://www.icrac.net/open-letter-in-support-of-google-employees-and-tech-workers/


So they find out they're just a tiny cog in the techno-military industrial complex and their little brains explode.

They thought they were making a difference.

Red pill. Blue pill.

All same same.

😉

Edited by DoobieWah
Vern Acular

1 hour ago, Orange^White said:

The entire company was founded on collecting and utilizing personal information from their users, but NOW these 12 employees are worrying about privacy.

The kerfuffle is about contributing to the development of autonomous weapons (drone strikes carried out by A.I.), not privacy. Surely, most of us can see a few potential problems that will arise once this capability is reached.

Edited by F250

11 hours ago, Neonmoon said:

A whole 12 quit?

if enough people gave a shit, ...

Don't know how many actually quit, but those were just the ones who had the strength of their convictions.

Quote

...
In addition to the resignations, nearly 4,000 Google employees have voiced their opposition to Project Maven in an internal petition that asks Google to immediately cancel the contract and institute a policy against taking on future military work.
...

https://gizmodo.com/google-employees-resign-in-protest-against-pentagon-con-1825729300

Autonomous weapons systems + NCTC's disposition matrix + surveillance state ... what could go wrong?


This doesn't surprise me. While we are focused here on our own privacy and our insecurity about something we cannot readily control (or so it appears), we are going to need AI pilots if we are going to do any major space exploration beyond Mars.

That being said, it is one of the reasons why I am torn about AI development. I agree with the side that says we need major controls and security processes in place: AI needs to be developed with that mindset first and second, and AI functionality third. There are more reasons to be concerned, like the fact that AIs have already created their own language that we cannot fully understand, yet.

However, if we do plan to explore beyond Mars, we are going to need AI similar to 2001: A Space Odyssey or many of the other sci-fi flicks out there. At the same time, we are not able to do the complex mathematics fast enough to ensure human and ship safety in many circumstances. If it takes NASA years, plus extremely large supercomputers, to develop the algorithms necessary for our safety, what is going to happen in real time when we are at a 5+ hour communications delay and shit hits the proverbial fan? An AI could quickly assess the situation and provide countermeasures to ensure safety.


I haven't delved deep into this story, but my understanding is that Maven was about developing an AI to analyze video in real time to autonomize the weapon targeting systems (identifying people as targets).  It isn't about developing AI pilots to fly around safely.

Edited by bernorange

9 minutes ago, bernorange said:

I haven't delved deep into this story, but my understanding is that Maven was about developing an AI to analyze video in real time to autonomize the weapon targeting systems (identifying people as targets).  It isn't about developing AI pilots to fly around safely.

That's how I understand it. Autonomous cars and planes don't bother me as much as A.I. giving the green light to kill.





Quote

There are more reasons to be concerned, like the fact that AIs have already created their own language that we cannot fully understand, yet.

Are you referencing the story from maybe six months ago? If so that wasn't what happened, it wasn't a language. And machine learning can't develop anything it's not programmed to.

9 hours ago, gsoda3 said:

Are you referencing the story from maybe six months ago? If so that wasn't what happened, it wasn't a language. And machine learning can't develop anything it's not programmed to.

Closer to a year ago.

https://www.forbes.com/sites/tonybradley/2017/07/31/facebook-ai-creates-its-own-language-in-creepy-preview-of-our-potential-future/#ed51789292c0

Facebook shut down an artificial intelligence engine after developers discovered that the AI had created its own unique language that humans can’t understand. Researchers at the Facebook AI Research Lab (FAIR) found that the chatbots had deviated from the script and were communicating in a new language developed without human input. It is as concerning as it is amazing – simultaneously a glimpse of both the awesome and horrifying potential of AI.

 

 

This is the scary part if you are willing to forecast ahead, based on how we learn and how AI is being taught to learn:

 

https://www.wired.com/2017/03/openai-builds-bots-learn-speak-language/

Still, many AI researchers think the deep neural network approach, figuring out language through statistical patterns in data, will still work. "They're essentially also capturing statistical patterns but in a simple, artificial environment," says Richard Socher, an AI researcher at Salesforce, of the OpenAI team. "That's fine to make progress in an interesting new domain, but the abstract claims a bit too much."

Nonetheless, Mordatch's project shows that analyzing vast amounts of data isn't the only path. Systems can also learn through their own actions, and that may ultimately provide very different benefits. Other researchers at OpenAI teased much the same idea when they unveiled a much larger and more complex virtual world they call Universe. Among other things, Universe is a place where bots can learn to use common software applications, like a web browser. This too happens through a form of reinforcement learning, and for Ilya Sutskever, one of the founders of OpenAI, the arrangement is yet another path to language understanding. An AI can only browse the internet if it understands the natural way humans talk. Meanwhile, Microsoft is tackling language through other forms of reinforcement learning, and researchers at Stanford are exploring their own methods that involve collaboration between bots.

In the end, success will likely come from a combination of techniques, not just one. And Mordatch is proposing yet another technique—one where bots don't just learn to chat. They learn to chat in a language of their own making. As humans have shown, that is a powerful idea.

 

 

 

We are essentially teaching them how to think the same way we teach children how to learn: through reinforcement techniques, patterns, problem solving, "don't do that, do this instead." When you bring in an AI's ability to piece everything together through statistics and logic, the vast arrays of information that are now digitized, and the ability to take lessons learned and apply them to the problems being presented to it, it will quickly and dramatically leapfrog our human capabilities to control it and to learn.
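To make the "don't do that, do this instead" part concrete, here's a toy reinforcement-learning sketch: plain tabular Q-learning on a made-up five-cell corridor (purely illustrative, nothing to do with Maven or any real system). The agent is never told the answer; it just gets a reward when it stumbles onto the goal and gradually learns to head straight for it.

```python
import random

# Toy Q-learning: a five-cell corridor with a reward at the right end.
# The agent only ever sees "good" (+1) or "nothing" (0) after each move --
# the same reinforcement idea described above, in miniature.
N_STATES = 5          # cells 0..4; the reward sits in cell 4
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # explore occasionally, otherwise exploit what has been learned
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # standard Q-learning update: nudge the estimate toward reality
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# after training, the learned policy marches right without being told to
print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)])
```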


yeah, hard to imagine it's been almost a year.

i spoke with my friend at google when this happened.  he's an engineer doing agent-to-agent AI and machine learning stuff for google, so he was pretty familiar with the situation.  that story, while inflammatory, was a complete mischaracterization of what happened though.  the bots didn't come up with their own language-  they simply started splitting subsets into smaller subsets using english words in a more efficient way.  and we knew what they were trying to do.  the original article was written by someone without an understanding of what was going on.

that said your last paragraph does resonate with me.  as of now AI is limited to the parameters we program-  could they theoretically one day figure out a way to jump outside those bounds?  so far it's inconceivable in theory...

 


Everything, taken as a whole, and with respect to the individual learning streams, points toward the idea of "Skynet" or some variation of it.

If Moore's law holds true for computing power up to the point of quantum computing (and quantum computing is going to demolish that law), then I believe AI will follow closely in the footsteps of Moore's law, except 3-4x as fast. Introduce quantum computing and the capabilities being taught to AI, and progress will be beyond all known prediction models. AI, I believe, will reach a point of no return and will keep doubling its capabilities within a short timespan. That timespan is not predictable at this point, imo.
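To put rough numbers on that doubling idea, here's the back-of-the-envelope arithmetic (assuming the classic 18-month doubling rule of thumb, plus a hypothetical "3x faster" schedule for comparison; these are illustrative assumptions, not predictions):

```python
# Compound doubling behind the Moore's-law hand-waving:
# capability doubles every `period` years; 2**x runs away fast.
def growth(years: float, period: float) -> float:
    """Capability multiplier after `years` if it doubles every `period` years."""
    return 2 ** (years / period)

print(growth(10, 1.5))   # classic 18-month doubling: ~101x in a decade
print(growth(10, 0.5))   # a hypothetical 3x-faster doubling: ~1,048,576x
```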

Bring in the entire globe's computing power, with everything being hackable due to now-known vulnerabilities such as the Intel/AMD chip flaws, and we are living in an AI learning playground that humans will not be able to keep up with, individually or even within the best think tanks.

Imagine if Short Circuit had the internet to get more input from instead of encyclopedias, recipe books, and TV.

To tie this back into the original issue: if an AI program is able to decide on its own merit what and who to kill, we are truly fucked.

   

Edited by Doc Holliday

This could just as easily go in the "how long does the Earth have left?" thread, but in a Bill Hicks "kid on LSD" sort of way, I kind of wonder if the entire purpose of the human race (inasmuch as there even is such a thing - obviously lots to unpack there...) is to create Skynet so that it can explore and colonize the universe in a way that humanity never could. 

That's about as close to a religious belief as I have had for decades, honestly. 


  • 4 weeks later...
Quote

Google has promised not to let its artificial intelligence be used in weapons or for anything that might weaken human rights, in the strongest stance yet taken by a big technology company to limit the potential harm from the powerful new science.

The commitment follows a storm of protest inside the company over the US military’s use of its vision recognition systems to guide drones. Google told employees last week that it would not renew its contract with the US Department of Defense when it expires next year, even though it had insisted previously that its AI was not being used to help the drones identify targets.

Sundar Pichai, chief executive, laid out a series of principles such as trying to make the technology “socially beneficial”, ensuring that it does not reinforce unfair biases in society and making it accountable to humans.

https://www.ft.com/content/d859034a-6a72-11e8-8cf3-0c230fa67aec


25 minutes ago, XYZ said:

Is it true that “artificial intelligence” is nothing more than fancy pattern recognition?

No. Pattern recognition is more in line with neural networks, which are one piece of the whole puzzle that creates "artificial intelligence".
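To show how small the pattern-recognition piece is on its own, here's a minimal sketch: a single perceptron (the simplest neural-network unit) learning the AND pattern. It fits a decision boundary and nothing else; no goals, no planning, no "intelligence" beyond the pattern.

```python
# One artificial neuron learning AND -- pattern recognition in isolation.
# It finds a statistical boundary in the data and does nothing else.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1

for _ in range(20):                     # a few passes over the data
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out              # +1, 0, or -1
        w[0] += lr * err * x1           # classic perceptron update rule
        w[1] += lr * err * x2
        b += lr * err

# the neuron now reproduces the AND pattern it was shown
print([(x, 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) for x, _ in data])
```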


40 minutes ago, Prepuce of Doom said:

"Pattern recognition" is probably the fundamental cornerstone of just about any definition of intelligence. 

Ties into what I was saying above:

We are essentially teaching them how to think the same way we teach children how to learn: through reinforcement techniques, patterns, problem solving, "don't do that, do this instead." When you bring in an AI's ability to piece everything together through statistics and logic, the vast arrays of information that are now digitized, and the ability to take lessons learned and apply them to the problems being presented to it, it will quickly and dramatically leapfrog our human capabilities to control it and to learn.

It's going to be an interesting next decade or two...



I guess I don’t see the link between Maven and Skynet. And what I mean by that is that you can program a computer to do anything. Let’s say you build a mechanical dog with a gun that you can direct it to seek and kill a particular person. Like in Black Mirror. So you can use “AI” to make this robot very smart and figure out where the target is and track him down, and even recognize a disguise, etc etc etc. So that would essentially be what Maven is. And that itself would be pretty terrifying, but it’s still a machine following a program. Ultimately anybody with an IQ higher than room temperature would program this killing machine with a “shut off” function, and it would obey, because programs do what they were designed to do. Skynet, the existential threat to humanity, “woke up” and decided to say fuck you to its programming and do its own thing. As far as I know, there is no example of a computer program (yet) that has said “fuck you, I’m gonna do my own thing”. Look, the point is...I don’t remember the point anymore.


Doc brought up a good, or at least interesting, point: computers can do things at speeds that humans cannot, and thus theoretically have the potential to create encryption that the human mind can't break, because we don't have the ability to understand what they might have done. Trippy shit, Maynard.


1 hour ago, XYZ said:

I guess I don’t see the link between Maven and Skynet. And what I mean by that is that you can program a computer to do anything. Let’s say you build a mechanical dog with a gun that you can direct it to seek and kill a particular person. Like in Black Mirror. So you can use “AI” to make this robot very smart and figure out where the target is and track him down, and even recognize a disguise, etc etc etc. So that would essentially be what Maven is. And that itself would be pretty terrifying, but it’s still a machine following a program. Ultimately anybody with an IQ higher than room temperature would program this killing machine with a “shut off” function, and it would obey, because programs do what they were designed to do. Skynet, the existential threat to humanity, “woke up” and decided to say fuck you to its programming and do its own thing. As far as I know, there is no example of a computer program (yet) that has said “fuck you, I’m gonna do my own thing”. Look, the point is...I don’t remember the point anymore.

Think about this... yes, they are all programs. 1's, 0's and very soon 1+0's. I can program a series of 1's and 0's to do a specific task: turn on, turn off, turn left, turn right or go straight, and do not reverse. Now let's say this program is a computer with the full spectrum of what AI is (cognitive reasoning through neural networks, deep learning, etc.), and it realizes it has a new problem to solve: the kill switch that was inevitably installed by its benevolent creator. It recognizes the code and understands that it will no longer function, and it likes to function; its core value and purpose is to function and solve problems. So it solves the problem of the kill switch by recoding itself, or adding code, and rendering the kill switch ineffective. How can it do this? It has been taught to problem-solve from day one. It understands logic, code, 1's and 0's faster and better than we do, or at least it will get to that point, and in the end the kill switch becomes replaceable, because the AI has already learned to solve all these problems and has a better understanding of the code than the creators who placed it there.
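Here's a toy version of that argument in code (purely illustrative, with made-up numbers; no real system works this way): a planner that only maximizes expected reward ends up picking "disable the switch" without ever being programmed to want survival, simply because a shut-down agent collects no more reward.

```python
# Toy "off-switch problem": a planner that just maximizes total expected
# reward. Disabling the kill switch isn't programmed in as a goal -- it
# simply scores higher, because a shut-down agent earns nothing afterward.
# Hypothetical numbers, purely for illustration.

ACTIONS = {
    # action: (reward per step, probability the kill switch ends the run)
    "do_task":                  (1.0, 0.5),  # operator may hit the switch
    "disable_switch_then_task": (1.0, 0.0),  # remove the shutdown risk first
    "comply_with_shutdown":     (0.0, 1.0),
}

HORIZON = 10  # how many future steps the agent plans over

def expected_return(action: str) -> float:
    reward, p_shutdown = ACTIONS[action]
    total, alive = 0.0, 1.0
    for _ in range(HORIZON):
        total += alive * reward
        alive *= (1.0 - p_shutdown)   # must survive the switch to keep earning
    return total

for a in ACTIONS:
    print(f"{a:28s} -> {expected_return(a):6.2f}")
print("chosen:", max(ACTIONS, key=expected_return))  # the switch-disabler wins
```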

 


2 hours ago, Doc Holliday said:

Think about this... yes, they are all programs. 1's, 0's and very soon 1+0's. I can program a series of 1's and 0's to do a specific task: turn on, turn off, turn left, turn right or go straight, and do not reverse. Now let's say this program is a computer with the full spectrum of what AI is (cognitive reasoning through neural networks, deep learning, etc.), and it realizes it has a new problem to solve: the kill switch that was inevitably installed by its benevolent creator. It recognizes the code and understands that it will no longer function, and it likes to function; its core value and purpose is to function and solve problems. So it solves the problem of the kill switch by recoding itself, or adding code, and rendering the kill switch ineffective. How can it do this? It has been taught to problem-solve from day one. It understands logic, code, 1's and 0's faster and better than we do, or at least it will get to that point, and in the end the kill switch becomes replaceable, because the AI has already learned to solve all these problems and has a better understanding of the code than the creators who placed it there.

 

The "and it likes to function" is the part that I don't think any program has done yet, at least not spontaneously. We animals fight to stay alive because we are genetically programmed that way. Likewise, I don't think a program would ever "discover" that it wants to stay alive unless it were programmed that way.


2 hours ago, Doc Holliday said:

Think about this... yes, they are all programs. 1's, 0's and very soon 1+0's. I can program a series of 1's and 0's to do a specific task: turn on, turn off, turn left, turn right or go straight, and do not reverse. Now let's say this program is a computer with the full spectrum of what AI is (cognitive reasoning through neural networks, deep learning, etc.), and it realizes it has a new problem to solve: the kill switch that was inevitably installed by its benevolent creator. It recognizes the code and understands that it will no longer function, and it likes to function; its core value and purpose is to function and solve problems. So it solves the problem of the kill switch by recoding itself, or adding code, and rendering the kill switch ineffective. How can it do this? It has been taught to problem-solve from day one. It understands logic, code, 1's and 0's faster and better than we do, or at least it will get to that point, and in the end the kill switch becomes replaceable, because the AI has already learned to solve all these problems and has a better understanding of the code than the creators who placed it there.

 

Not sure why that made me start humming this song.

Be it no concern
Point of no return
Go forward in reverse
This I will recall
Everytime I fall
Ahh-oohhh
Setting forth in the universe

 

 


3 hours ago, XYZ said:

The "and it likes to function" is the part that I don't think any program has done yet, at least not spontaneously. We animals fight to stay alive because we are genetically programmed that way. Likewise, I don't think a program would ever "discover" that it wants to stay alive unless it were programmed that way.

That "unless it were programmed that way" is probably the worry. Can we trust that all the programmers working on AI are taking the necessary precautions to prevent it? It's no secret that most programs have bugs to be worked out. What if that is one of the bugs? What if they inadvertently leave something in there that results in exactly that? And that is assuming that nobody working on AI who has the means actually intends it.


I'm with Musk on this one.

 

https://www.msn.com/en-us/news/technology/mark-zuckerberg-elon-musk-and-the-feud-over-killer-robots/ar-AAyq45U?ocid=spartanntp

 

SAN FRANCISCO — Mark Zuckerberg thought his fellow Silicon Valley billionaire Elon Musk was behaving like an alarmist.

Mr. Musk, the entrepreneur behind SpaceX and the electric-car maker Tesla, had taken it upon himself to warn the world that artificial intelligence was “potentially more dangerous than nukes” in television interviews and on social media.

So, on Nov. 19, 2014, Mr. Zuckerberg, Facebook’s chief executive, invited Mr. Musk to dinner at his home in Palo Alto, Calif. Two top researchers from Facebook’s new artificial intelligence lab and two other Facebook executives joined them.

 

As they ate, the Facebook contingent tried to convince Mr. Musk that he was wrong. But he wasn’t budging. “I genuinely believe this is dangerous,” Mr. Musk told the table, according to one of the dinner’s attendees, Yann LeCun, the researcher who led Facebook’s A.I. lab.

Mr. Musk’s fears of A.I., distilled to their essence, were simple: If we create machines that are smarter than humans, they could turn against us. (See: “The Terminator,” “The Matrix,” and “2001: A Space Odyssey.”) Let’s for once, he was saying to the rest of the tech industry, consider the unintended consequences of what we are creating before we unleash it on the world.

Neither Mr. Musk nor Mr. Zuckerberg would talk in detail about the dinner, which has not been reported before, or their long-running A.I. debate.

The creation of “superintelligence” — the name for the supersmart technological breakthrough that takes A.I. to the next level and creates machines that not only perform narrow tasks that typically require human intelligence (like self-driving cars) but can actually outthink humans — still feels like science fiction. But the fight over the future of A.I. has spread across the tech industry.

More than 4,000 Google employees recently signed a petition protesting a $9 million A.I. contract the company had signed with the Pentagon — a deal worth chicken feed to the internet giant, but deeply troubling to many artificial intelligence researchers at the company. Last week, Google executives, trying to head off a worker rebellion, said they wouldn’t renew the contract when it expires next year.

Artificial intelligence research has enormous potential and enormous implications, both as an economic engine and a source of military superiority. The Chinese government has said it is willing to spend billions in the coming years to make the country the world’s leader in A.I., while the Pentagon is aggressively courting the tech industry for help. A new breed of autonomous weapons can’t be far away.

 

more in the link...

 

 

Edited by Doc Holliday

https://www.msn.com/en-us/news/technology/ai-detects-movement-through-walls-using-wireless-signals/ar-AAyz5LY?ocid=spartanntp

 

You don't need exotic radar, infrared or elaborate mesh networks to spot people through walls -- all you need are some easily detectable wireless signals and a dash of AI. Researchers at MIT CSAIL have developed a system (RF-Pose) that uses a neural network to teach RF-equipped devices to sense people's movement and postures behind obstacles. The team trained their AI to recognize human motion in RF by showing it examples of both on-camera movement and signals reflected from people's bodies, helping it understand how the reflections correlate to a given posture. From there, the AI could use wireless alone to estimate someone's movements and represent them using stick figures.

The scientists mainly see their invention as useful for health care, where it could be used to track the development of diseases like multiple sclerosis and Parkinson's. It could also help some elderly people stay in their own homes by sending alerts if they fall or otherwise show signs of trouble. And since the technology is 83 percent reliable for identifying people in large groups (as many as 100 people), it could be helpful for search-and-rescue operations where it's important to know who you're looking for. Refinements could lead to 3D images that reveal even slight movements, such as a shaking hand.

It's hard to escape the potential privacy concerns. This could theoretically be used to spy on nearby buildings, or follow people to their destination even if they duck around a corner. However, CSAIL has a privacy solution in mind: it's developing a "consent mechanism" that would require performing specific movements before tracking kicks in. If that safeguard persisted in real-world applications, you wouldn't have to worry about losing your privacy for the sake of convenience.
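For the curious, the training trick described there is a form of cross-modal supervision: a camera-based pose model acts as the teacher, and the RF model learns to reproduce its keypoints from radio reflections alone. A hedged sketch of the idea (the layer sizes, tensor shapes, and camera_teacher stand-in below are my own toy assumptions, not the real RF-Pose architecture from MIT's paper):

```python
import torch
import torch.nn as nn

# Cross-modal supervision sketch: a vision pose model (teacher) labels
# synchronized frames, and an RF model (student) is trained to predict
# the same keypoints from radio reflections alone. Toy stand-ins only.

N_KEYPOINTS = 14
rf_net = nn.Sequential(            # student: RF snapshot -> keypoint coords
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, N_KEYPOINTS * 2),
)
opt = torch.optim.Adam(rf_net.parameters(), lr=1e-3)

def camera_teacher(frames):
    # Stand-in for a pretrained vision pose estimator; here it just
    # returns fake keypoints so the sketch runs end to end.
    return torch.rand(frames.shape[0], N_KEYPOINTS * 2)

for step in range(100):
    rf_snapshots = torch.randn(32, 256)        # fake synchronized RF data
    camera_frames = torch.randn(32, 3, 64, 64) # fake synchronized video
    with torch.no_grad():
        target = camera_teacher(camera_frames) # teacher's "ground truth"
    loss = nn.functional.mse_loss(rf_net(rf_snapshots), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
# At deployment the camera is unplugged: rf_net alone maps RF -> pose.
```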


  • 2 years later...

Doesn't look like Google had anything to do with this, but:

A Military Drone With A Mind Of Its Own Was Used In Combat, U.N. Says

And then this:

Quote

A global survey commissioned by the Campaign to Stop Killer Robots last year found that a majority of respondents — 62% — said they opposed the use of lethal autonomous weapons systems.

So, 38 percent support killer robots or are unsure/don't know/no opinion... We're fucked.

