ChatGPT AI Tool — We all work for robots now


On 5/24/2023 at 10:36 AM, Buzzrock said:

Agreed. Given that we are in the infancy of the technology, and how fast it will iterate, I expect that those tweaks won’t be needed before long. 

The chat models released for public use are very basic and primitive compared to what is currently at the cutting edge of development. I cringe a bit seeing people say stuff like "oh it does this but it can't do that, it's way off from being able to do what I do," because it's a whole lot closer than you think. White-collar workers are going to be the first to get railroaded by AI, with coders and developers probably first on the block. They're very close to having AI not only create its own code but optimize it and make configuration changes on its own. If you're in that field, I hope you're only a few years out from retirement. I haven't been told this, but I'm pretty sure the whole public release of these chat models is to warm the public up for what's coming down the pipe soon.

The robotics side is further away, as navigating the physical world is far more challenging, and artificial sight alone eats up massive amounts of computational power. It's ironic that "self-driving cars" have been the most talked-about AI in recent years, when they're possibly the farthest away, given the insane accuracy required to function at a national or global scale. We're talking accuracy to the tiniest fraction of a percent, which as far as I know we are nowhere near achieving.


1 minute ago, Hermanator said:

The chat models released for public use are very basic and primitive compared to what is currently at the cutting edge of development. I cringe a bit seeing people say stuff like "oh it does this but it can't do that, it's way off from being able to do what I do," because it's a whole lot closer than you think. White-collar workers are going to be the first to get railroaded by AI, with coders and developers probably first on the block. They're very close to having AI not only create its own code but optimize it and make configuration changes on its own. If you're in that field, I hope you're only a few years out from retirement. I haven't been told this, but I'm pretty sure the whole public release of these chat models is to warm the public up for what's coming down the pipe soon.

The robotics side is further away, as navigating the physical world is far more challenging, and artificial sight alone eats up massive amounts of computational power. It's ironic that "self-driving cars" have been the most talked-about AI in recent years, when they're possibly the farthest away, given the insane accuracy required to function at a national or global scale. We're talking accuracy to the tiniest fraction of a percent, which as far as I know we are nowhere near achieving.

There are some fascinating areas of practice emerging around using LLMs to crunch through academic literature and try ideas that mathematicians haven't gotten to yet.

In this case, an ML model found a new algorithm for matrix multiplication that removed a calculation. Once the problem could be described mathematically, an ML model was ideal for exploring the possibility space and finding new, unexplored techniques.

The kicker is that the meatsack mathematicians were then able to find further improvements on top of the computer's solution. I foresee ML/neural nets being the mechanism that lets us use the huge amounts of data we now collect more effectively and efficiently.
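For anyone curious what "removing a calculation" actually looks like, here's a toy sketch of the classic version of this kind of result (Strassen's trick, not the model's new discovery): multiplying two 2x2 matrices with 7 multiplications instead of the naive 8.

```python
import numpy as np

def strassen_2x2(A, B):
    """Multiply two 2x2 matrices using 7 multiplications instead of the
    naive 8 -- the same flavor of shortcut the ML search found."""
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    e, f, g, h = B[0, 0], B[0, 1], B[1, 0], B[1, 1]

    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)

    # reassemble the product from the 7 intermediate terms
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4, m1 - m2 + m3 + m6]])

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
assert np.array_equal(strassen_2x2(A, B), A @ B)  # matches the naive product
```

Applied recursively to large matrices, dropping that one multiplication compounds into a real speedup, which is why shaving even a single step out of these schemes is a publishable result.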


4 minutes ago, Your Mom said:

I read yesterday that in late 2022 GPT-3.5 took the bar exam and failed spectacularly. A few months later GPT-4 took it and got a top 10% score. It’s gathering knowledge at an alarming rate.

all it is doing is highlighting how antiquated our means of human assessment really are. rote fact memorization is stupid and has been since the invention of the encyclopedia, much less the internet.

it's important not to think of chatgpt as a learning entity. all it is really doing is dressing up factual information in easy-to-digest language. it's far superior to a search engine at this point, but as far as interpreting information and/or "learning" goes, it's not doing any of that. everything it does is derivative.


1 hour ago, Your Mom said:

Kind of, but my understanding is that GPT-4 is different. It’s able not only to regurgitate knowledge, but also to learn new tasks it has never been taught, often picking up those tasks faster than a human could.

The word "learn" may get people hung up on semantics, but the word in AI development circles is that the cutting edge at the moment is the ability to make adjustments and alter configurations on its own, without having been prompted. Once an AI is able to analyze and improve upon itself, more than likely the floodgates open.


45 minutes ago, Hermanator said:

The word "learn" may get people hung up on semantics, but the word in AI development circles is that the cutting edge at the moment is the ability to make adjustments and alter configurations on its own, without having been prompted. Once an AI is able to analyze and improve upon itself, more than likely the floodgates open.

Do you mean reinforcement learning?
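For anyone who hasn't run into the term, here's a toy sketch of the reinforcement learning idea (a made-up 5-cell corridor world, nothing to do with any production system): the agent tunes its own value table from reward feedback, with nobody labeling individual moves.

```python
import random

# Toy corridor world: states 0..4, reward only for reaching the right end.
# The agent adjusts its own value table from reward feedback -- nobody
# labels individual moves, which is the "adjusting itself" part.
N_STATES = 5
ACTIONS = (-1, +1)                      # step left / step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.3   # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit current estimates, sometimes explore
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted future
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

# The learned policy: +1 ("go right") from every non-terminal state.
print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)])
```

That feedback-driven self-adjustment is the closest existing, well-understood thing to "making changes on its own."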


Didn't see this yet:

https://www.legaldive.com/news/chatgpt-fake-legal-cases-generative-ai-hallucinations/651557/

----

Excerpt:

However, Judge P. Kevin Castel wrote in an early May order regarding the plaintiff’s filing that “six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.” He called it “an unprecedented circumstance.”

In his affidavit filed later in May, Schwartz said that ChatGPT not only provided the legal sources but assured him of the reliability of the opinions and citations that the court has called into question.

For example, a document attached to his affidavit indicates he asked the generative AI-powered ChatGPT if one of the six cases the judge has called bogus was real and the chatbot responded that it was.

Additionally, he asked ChatGPT if the other cases provided were fake. The chatbot responded that they were also real and “can be found in reputable legal databases such as LexisNexis and Westlaw.”

Schwartz acknowledged in the affidavit that his source for the legal opinions “has revealed itself to be unreliable.”

Schwartz’s mistakes resulted in a story published on the front page of The New York Times, and a judge has scheduled a hearing in the coming weeks to determine possible sanctions.

Edited by JBJ

IBM Watson had to be brutally torn to pieces so that ChatGPT could make us unemployed

It’s ok, all we have to do is stockpile enough ARs, ammo and provisions to hold us over until the economy turns itself inside out, right?

I would have liked to have seen Montana, with my AI-generated wife.

2 hours ago, DefinitelyNotHollywoodColt said:

We’re barreling towards universal basic income and very few people having jobs.

It might be awesome. It might destroy us. I guess we’ll find out.

Lol. Why give basic income? It will hurt profits if they increase costs by subsidizing those takers.

In a world where we valued humanity and dignity over shareholder profits, it would be awesome. But my bet's on dystopian, because there sure are a bunch of greedy "fuck you, I got mine" types out there.


2 minutes ago, Captainant said:

Lol. Why give basic income? It will hurt profits if they increase costs by subsidizing those takers.

In a world where we valued humanity and dignity over shareholder profits, it would be awesome. But my bet's on dystopian, because there sure are a bunch of greedy "fuck you, I got mine" types out there.

Tough to profit off a failed state.


On 6/4/2023 at 10:02 AM, DefinitelyNotHollywoodColt said:

We’re barreling towards universal basic income and very few people having jobs.

It might be awesome. It might destroy us. I guess we’ll find out.

I still remember when "the internet" was going to displace somewhere between 30 and 40 million people. The opposite happened. Hundreds of millions of new roles opened up in a myriad of ways nobody anticipated. Not saying that will be the exact scenario, but adaptations will happen. Initially I think we'll see a surge in blue-collar work needing physical interaction. Trades will explode.

"Learn to code" will be replaced with "learn to fix."


23 hours ago, Captainant said:

Lol. Why give basic income? It will hurt profits if they increase costs by subsidizing those takers.

In a world where we valued humanity and dignity over shareholder profits, it would be awesome. But my bet's on dystopian, because there sure are a bunch of greedy "fuck you, I got mine" types out there.

I've thought UBI was the endgame conclusion for some time now. At some point, enough people are displaced that a critical mass will be empowered to revolt and damage the system. Billionaires and corporations will cede to UBI, as it will be politically mandated at some point.

Andrew Yang was just a decade too early.


14 minutes ago, HamsterHookah said:

I've thought UBI was the endgame conclusion for some time now. At some point, enough people are displaced that a critical mass will be empowered to revolt and damage the system. Billionaires and corporations will cede to UBI, as it will be politically mandated at some point.

Andrew Yang was just a decade too early.

It's funny how on the fuckin' nose Star Trek is sometimes: in 2024 there's supposed to be a massive homeless riot in LA, and then a massive global nuclear WWIII in the late '40s/early '50s.

But hey, at least we'll get warp travel this century too!


14 hours ago, hayden_horn said:

i just tried to use this to plan my trip to Greece and London. even with flight information, it couldn't keep the days straight for sample itineraries. i was correcting the thing a lot, like it was fucking with me. i dunno.

Use Google Bard!


On 6/11/2023 at 9:45 AM, Mdhorn said:

Read that Bard has real-time internet access, while ChatGPT's knowledge only goes up to 2021. So planning a vacation with Bard should be more accurate.

Exactly, but even Bard still needs to be fact-checked and stress-tested, as I found out the hard way.


12 minutes ago, Blotto said:

I found this to be an interesting read: a Financial Times interview with sci-fi author Ted Chiang.

https://archive.is/xCmxi

The proximal dangers from AI are twofold.

1. Humans entrusting too much to it too quickly. Human executives are already pushing their orgs to use AI to maximize output with existing resources (mine are). That's a context that forces trust, perhaps too quickly and without an AI "earning it." There will be some disasters in the near future resulting from scenarios like this: a worker in the trenches, pressured to deliver at a heightened rate through the use of AI, accepts the solution an AI provides without proper vetting. They roll out the AI-generated solution, and shit breaks. Maybe that's only a minor outage on a website like Twitter, but if it's trusted too quickly for something like C&C software, maybe it's the failure of Hoover Dam. The worker in the trenches will be blamed, or perhaps the AI (the perfect scapegoat, since it can't defend itself), but really these will be disasters of leadership.

2. Humans using it to generate orders of magnitude more disinformation, making reality increasingly inscrutable.


2 minutes ago, Goredho said:

2. Humans using it to generate orders of magnitude more disinformation, making reality increasingly inscrutable.

The fracturing of society into multiple "parallel realities" is all but a certainty in my mind: people rejecting realities that contradict their personal beliefs in favor of realities that reinforce them. I think that tendency is human nature to a degree, but AI will demolish the traditional guardrails that kept us more or less chugging along the same path.


8 minutes ago, Blotto said:

The fracturing of society into multiple "parallel realities" is all but a certainty in my mind: people rejecting realities that contradict their personal beliefs in favor of realities that reinforce them. I think that tendency is human nature to a degree, ...

"Cognitive Dissonance really isn't a problem." - Dissociative Identity Disorder patient


6 minutes ago, Blotto said:

The fracturing of society into multiple "parallel realities" is all but a certainty in my mind: people rejecting realities that contradict their personal beliefs in favor of realities that reinforce them. I think that tendency is human nature to a degree, but AI will demolish the traditional guardrails that kept us more or less chugging along the same path.

Yep, accuracy takes time. You have to research, vet, cross-reference, and so on. In the time it takes a human to produce factual information, an AI can produce thousands of inaccurate stories, recordings, and videos that make what is real unknowable.

What will be interesting is when that AI-generated disinformation starts to saturate the pool of available information used to train and reinforce AI.


47 minutes ago, Goredho said:

Yep, accuracy takes time. You have to research, vet, cross-reference, and so on. In the time it takes a human to produce factual information, an AI can produce thousands of inaccurate stories, recordings, and videos that make what is real unknowable.

A large language model on its own is likely to be incorrect, because it hasn't been designed from the start to BE correct. However, there are emerging patterns like Retrieval-Augmented Generation (RAG), where you use an interface, typically something like LangChain, to let your LLM query data/knowledge sources and ground the tokens it generates its response from. Its accuracy against documentation it has indexed is incredibly solid, and it'll even let you know when it doesn't know and is guessing!
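The moving parts are simpler than they sound. Here's a minimal, library-agnostic sketch of the RAG pattern; the toy docs and the bag-of-words "embedding" are stand-ins for a real vector store and embedding model, and the assembled prompt would go to whatever LLM endpoint you use:

```python
import numpy as np
from collections import Counter

# Toy corpus standing in for your indexed knowledge source
# (product docs, a wiki, SharePoint, etc.)
DOCS = [
    "The API rate limit is 100 requests per minute per key.",
    "Refunds are processed within 5 business days of approval.",
    "On-call rotations switch every Monday at 09:00 UTC.",
]
VOCAB = sorted({w for doc in DOCS for w in doc.lower().split()})

def embed(text):
    """Toy bag-of-words vector; a real system uses an embedding model."""
    counts = Counter(text.lower().split())
    return np.array([counts[w] for w in VOCAB], dtype=float)

def retrieve(query, k=1):
    """Rank docs by cosine similarity to the query and return the top k."""
    q = embed(query)
    def score(doc):
        d = embed(doc)
        denom = (np.linalg.norm(q) * np.linalg.norm(d)) or 1.0
        return float(q @ d) / denom
    return sorted(DOCS, key=score, reverse=True)[:k]

def build_prompt(query):
    """The RAG move: stuff retrieved source text into the prompt so the
    model answers from it (and can admit when the answer isn't there)."""
    context = "\n".join(retrieve(query))
    return ("Answer using ONLY the context below. If the answer is not "
            "in the context, say you don't know.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}\nAnswer:")

print(build_prompt("what is the API rate limit?"))
# The assembled prompt then goes to whatever LLM endpoint you use.
```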

A peer of mine just finished building a crazy demo that combines a foundation LLM with a variety of more specialized ML services: you feed it a .csv of time-series data, and it creates a data-analyst report that directly answers questions about the underlying tabular data.
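I haven't seen his code, but the shape of that kind of pipeline is roughly this (invented data and names): deterministic tooling computes the statistics, and the LLM only narrates numbers it was handed.

```python
import io
import pandas as pd

# Invented time-series data standing in for the demo's .csv input.
csv = io.StringIO("""date,units_sold
2023-01-01,120
2023-02-01,135
2023-03-01,98
2023-04-01,160
""")
df = pd.read_csv(csv, parse_dates=["date"])

# Deterministic tooling does the math the LLM shouldn't be trusted with:
stats = {
    "rows": len(df),
    "mean": round(float(df["units_sold"].mean()), 1),
    "min": int(df["units_sold"].min()),
    "max": int(df["units_sold"].max()),
    "trend": "up" if df["units_sold"].iloc[-1] > df["units_sold"].iloc[0] else "down",
}

# ...so the LLM only has to narrate numbers it was handed.
prompt = (f"Write a short analyst summary of monthly unit sales. "
          f"Use only these computed figures: {stats}")
print(prompt)  # send to your LLM endpoint of choice
```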

The data and tools necessary to build and maintain these models are becoming more and more accessible. We're rapidly going to find ourselves in a position where having an idea, and even an implementation plan, is the easy part.


2 hours ago, HonkeyVape said:

Exactly, but even Bard still needs to be fact-checked and stress-tested, as I found out the hard way.

I ran trials on it with my vacation to see if we arrived at similar conclusions, but there's no way I'm entrusting AI with my vacation.


2 hours ago, Goredho said:

Yep, accuracy takes time. You have to research, vet, cross-reference, and so on. In the time it takes a human to produce factual information, an AI can produce thousands of inaccurate stories, recordings, and videos that make what is real unknowable.

What will be interesting is when that AI-generated disinformation starts to saturate the pool of available information used to train and reinforce AI.

Just plug it into one of our political think tanks to spit out garbage that politicians can leverage/reference to the populace.  


2 hours ago, Mdhorn said:

Just plug it into one of our political think tanks to spit out garbage that politicians can leverage/reference to the populace.  

You do know that's basically what Cambridge Analytica was doing with their microtargeted ads, right? There was quite a bit of news about it back in 2018, along with corroborating reporting and documentary evidence of the people you'd expect to be abusing social media for personal and political gain.


3 hours ago, Mdhorn said:

I ran trials on it with my vacation to see if we arrived at similar conclusions, but there's no way I'm entrusting AI with my vacation.

I asked ChatOn to calculate the pressure drop of some fluid through x amount of sch 40 pipe. I asked it the same question three times, and it solved the problem a different way each time: once it was turbulent flow and twice it was laminar. Got three different answers, orders of magnitude apart.
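For anyone wondering how the answers can differ by orders of magnitude: the laminar-vs-turbulent call changes the friction-factor formula entirely. A quick sketch with assumed numbers (water through 2-inch sch 40 pipe, not the original question's figures):

```python
# Assumed example: water at ~20 C through 2" sch 40 pipe (ID ~52.5 mm).
# These are NOT the original question's numbers, just plausible ones.
rho = 998.0   # density, kg/m^3
mu = 1.0e-3   # dynamic viscosity, Pa*s
D = 0.0525    # inner diameter, m
L = 30.0      # pipe length, m
v = 1.5       # mean flow velocity, m/s

Re = rho * v * D / mu
print(f"Reynolds number: {Re:,.0f}")  # ~78,600 -> clearly turbulent

f_laminar = 64 / Re               # only valid for Re < ~2300
f_turbulent = 0.316 / Re**0.25    # Blasius correlation, smooth pipe, Re < 1e5

# Darcy-Weisbach pressure drop under each assumption:
for label, f in [("laminar (wrong regime here)", f_laminar),
                 ("turbulent (correct here)", f_turbulent)]:
    dp = f * (L / D) * rho * v**2 / 2
    print(f"{label:28s} f = {f:.4f}  dP = {dp / 1000:.1f} kPa")
```

Same pipe, same flow: the laminar formula applied outside its valid range gives a pressure drop more than 20x too low, so a model that flips regimes between runs will look exactly like that.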

Edited by UT_OB1

2 hours ago, Captainant said:

You do know that's basically what Cambridge Analytica was doing with their microtargeted ads, right? There was quite a bit of news about it back in 2018, along with corroborating reporting and documentary evidence of the people you'd expect to be abusing social media for personal and political gain.

I knew they had created their own think tanks to self-reference, but I didn't know that AI was involved. I guess an algorithm is pretty much the same thing.


2 hours ago, UT_OB1 said:

I asked ChatOn to calculate the pressure drop of some fluid through x amount of sch 40 pipe. I asked it the same question three times, and it solved the problem a different way each time: once it was turbulent flow and twice it was laminar. Got three different answers, orders of magnitude apart.

That was supposed to be the selling point of Bard: that you could get different solutions for the same problem.


25 minutes ago, Mdhorn said:

I knew they had created their own think tanks to self-reference, but I didn't know that AI was involved. I guess an algorithm is pretty much the same thing.

The actual real thing that "AI" refers to is tuning the weights and biases of an algorithm's parameters until it produces a desired output for a set of validation cases. Once it's trained and tuned, a machine learning model (read: AI) is functionally just "an algorithm," in that it is not going to change, iterate, or behave any differently from one execution to the next given the exact same set of input parameters.

Achieving a change in behavior from the current state to a future, more optimal state means more training: more modifying and tuning of those aforementioned weights and biases to more closely achieve your desired output state.
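A toy sketch of exactly that loop, with a one-neuron "model" and made-up data: training is literally nudging the weight and bias until the outputs match the validation cases, and after that the model is a fixed function.

```python
# One-neuron "model": y = w * x + b. Training is nothing more than
# nudging w and b until the outputs match the known-good cases.
data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]   # secretly y = 2x + 1
w, b, lr = 0.0, 0.0, 0.05                     # initial parameters, learning rate

for step in range(2000):
    # gradient of mean squared error with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w   # the "tuning of weights and biases"
    b -= lr * grad_b

print(f"w = {w:.3f}, b = {b:.3f}")   # converges to roughly 2 and 1

# Once trained, it's a fixed algorithm: same input, same output, every run.
predict = lambda x: w * x + b
print(predict(10.0))   # ~21.0
```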

It's a bit of a nerd alert, but if you're looking to burn an hour on some nerdy YouTube, this is genuinely an excellent introduction to the underlying mathematical and computational principles behind neural networks and "AI and machine learning" (in a spoiler so the YouTube embed doesn't make the page big):

[embedded YouTube video]

On 6/12/2023 at 1:39 PM, Goredho said:

The proximal dangers from AI are twofold.

1. Humans entrusting too much to it too quickly. Human executives are already pushing their orgs to use AI to maximize output with existing resources (mine are). That's a context that forces trust, perhaps too quickly and without an AI "earning it." There will be some disasters in the near future resulting from scenarios like this: a worker in the trenches, pressured to deliver at a heightened rate through the use of AI, accepts the solution an AI provides without proper vetting. They roll out the AI-generated solution, and shit breaks. Maybe that's only a minor outage on a website like Twitter, but if it's trusted too quickly for something like C&C software, maybe it's the failure of Hoover Dam. The worker in the trenches will be blamed, or perhaps the AI (the perfect scapegoat, since it can't defend itself), but really these will be disasters of leadership.

2. Humans using it to generate orders of magnitude more disinformation, making reality increasingly inscrutable.

I think people overestimate how much we will ever be able to "trust" AI. Engineers have been using computer models to do most of their math for years, and some of this is still true of those programs. There are known (albeit uncommon) issues that have persisted for years, some of which have led to disastrous results.


  • 1 month later...
2 minutes ago, Lord Melbourne said:

Google is not ChatGPT, but this seems like it would be pretty achievable.

It's not that hard, to be honest. Large language models can do the yeoman's work of pounding rough notes into a formatted document. There have been major improvements in accuracy and correctness from techniques like Retrieval-Augmented Generation, which dynamically "fill in the blanks" in the generated content with sourced information. It's becoming a big, big use case in enterprise search to have a "smart" search over a business's internal SharePoint. Every big cloud player has its own flavor of a solution for it, and it's really not that hard to set up.
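The "rough notes to formatted document" part really can be as small as a prompt template. A hypothetical sketch (the notes are invented, and the commented call at the end is a stand-in for whatever provider you use):

```python
# Hypothetical "rough notes -> formatted doc" workflow. The notes and
# section headings are invented; the commented call at the end is a
# stand-in for whatever chat-completion endpoint you actually use.
notes = """
mtg w/ facilities 8/14
- HVAC quote came in 20% over budget
- roof repair pushed to Q4
- need signoff from Dana by fri
"""

prompt = f"""Turn the meeting notes below into a short formal memo with
sections: Summary, Decisions, Action Items (with owners and due dates).
Do not invent facts that are not in the notes.

Notes:
{notes}"""

# memo = chat_client.complete(prompt)   # provider-specific call
print(prompt)
```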


  • 4 weeks later...

I spent this week with a team doing prompt engineering and building transformers on an LLM foundation model for a travel-industry use case.

Long story short, my wife had to pick up another on-call week, and so I'm sitting here wondering how one would correlate a mom's name to predict the probability that she would become involved in a CPS case.

So while my wife is on a conference call with LEOs and field workers over this hour's respective train wreck, I quietly explain my thinking to my daughter and end it with my strip-club announcer voice: "Let's give it up for Cinnamon!!!!!! She is headed to the VIP room."

So then my daughter starts wondering what male names are the most common for dads who end up with a case called in against them.

And no, FBI, I'm not about to see if I can find an API to run against the state's data warehouse.


4 hours ago, BearSchlong said:

I spent this week with a team doing prompt engineering and building transformers on an LLM foundation model for a travel-industry use case.

Long story short, my wife had to pick up another on-call week, and so I'm sitting here wondering how one would correlate a mom's name to predict the probability that she would become involved in a CPS case.

So while my wife is on a conference call with LEOs and field workers over this hour's respective train wreck, I quietly explain my thinking to my daughter and end it with my strip-club announcer voice: "Let's give it up for Cinnamon!!!!!! She is headed to the VIP room."

So then my daughter starts wondering what male names are the most common for dads who end up with a case called in against them.

And no, FBI, I'm not about to see if I can find an API to run against the state's data warehouse.

It's gotta be Wayne.


19 hours ago, BearSchlong said:

I spent this week with a team doing prompt engineering and building transformers on an LLM foundation model for a travel-industry use case.

Long story short, my wife had to pick up another on-call week, and so I'm sitting here wondering how one would correlate a mom's name to predict the probability that she would become involved in a CPS case.

So while my wife is on a conference call with LEOs and field workers over this hour's respective train wreck, I quietly explain my thinking to my daughter and end it with my strip-club announcer voice: "Let's give it up for Cinnamon!!!!!! She is headed to the VIP room."

So then my daughter starts wondering what male names are the most common for dads who end up with a case called in against them.

And no, FBI, I'm not about to see if I can find an API to run against the state's data warehouse.

An LLM is not the right tool for that kind of multi-factor prediction problem, and name alone is a horrible feature anyway: it's a great way to bake biases into your dataset and predictor.

What FMs are you using? Mostly Anthropic Claude, I'm betting?


On 6/4/2023 at 1:38 PM, DefinitelyNotHollywoodColt said:

Tough to profit off a failed state.

Not really; in some ways it's even easier. Not talking "South Sudan" level failure, but Pakistan- or Russia-type failure? If anything, it's even more fun to be incredibly rich in places like that.

We are kidding ourselves if we don't accept that this end state is the goal for our plutocrat class: a seething, impoverished mass racing to the bottom for crumbs. That's the dream.

