Captainant

Certifiably Surly
  • Posts: 18753
  • Joined
  • Days Won: 6

Captainant last won the day on July 9, 2024

Captainant had the most liked content!

Reputation: 37876 (Surly 1%)

About Captainant


  1. pretty gross self-report, IMO
  2. Heh, funny typo on his name plate at a recent cabinet meeting earlier today; interestingly, the "S" was ordered to be struck twice, I guess
  3. Well when I think of our current administration, I think "surgical operation" lmfao
  4. it wasn't really fudging so much as a consistent and predictable inaccuracy that held across multiple presidencies, but acting like it was a conspiracy is fun too, I guess. I take it to mean you don't really care about the number anymore now that it's just not being published?
  5. I disagree pretty strongly with this chart, mainly because it's making the same fallacious leaps in logic that idiot middle managers do about how much """AI""" can do. Assuming that every judge, patient, and consumer will just happily lap up AI slop is a little outrageous. This is still extremely bullish on AI in general, despite your preferred posturing as "bearish"
  6. Nope, I haven't. This is an actual use case for LLMs. All the cloud providers have pretty handy built-in agents now that do a similar function. Can't think of the right CloudTrail query? No problem, just describe it and they'll generate the query and drop it into the editor box for you. I use Cline with a locally hosted LLM to accelerate local development, and it's been very handy for covering unit test gaps and building new projects from source that have complex dependencies. It's great for reading through dense logs and crap that I don't want to look through but need a morsel of information out of. (A rough sketch of that kind of local setup is after this list.)
  7. I'm not sure, since in that scenario we would be 10-2 and almost certainly in with a major marquee win like that. Definitely one of the stronger 10-2 teams in the field
  8. What's up with cancelling the jobs report, inflation report, and GDP report? I wonder when ignoring the scary data will finally help rates
  9. I think playing the best helps make a team better. We could have lost to OSU and still made it in if we hadn't lost to a Billy Napier Florida team.
  10. Ultimately, if we don't drop a game to a dogshit Florida team, we're in the playoff. Missing the playoff is a good and meaningful consequence for the team and Sark to have to come to grips with. I mean, I hope we still make the big dance, but I wouldn't bet on it unless I got really, really good odds
  11. IDK, but this is all an extremely effective means to get everyone to stop thinking about President Donald J Trump molesting children with Jeffrey Epstein
  12. Wulaw Horn and the real estate gang are very concerned about (obviously!!!) FAKE numbers that would harm the economy
  13. quite an interesting play to make while pardoning convicted drug traffickers who brought literal tons of cocaine into the country
  14. I was doing this BEFORE it was genAI nonsense, I swear! Most of my time is spent convincing customers that they don't need an LLM and that traditional ML modeling is more than sufficient to solve their business problem - AND that they can still tell their business they're """using AI""". It's frustrating because there are genuinely useful genAI use cases out there, but there's a much higher volume of snake oil being bought and sold. Not to mention the army of idiot middle managers who don't understand the jobs they manage, so they think they can "just replace everyone with AI!"
  15. [context: I am literally a certified AI/ML specialist and builder and have taken enough high-order math at UT to at least accurately conceptualize what's happening in the black box] The end state of "unable to walk and falling down" is effectively the end state of a "Hapsburg" model. When you train a model on synthetic data produced by a similar model, you get the same concentration-of-error effect that is observed in genetics whenever inbreeding happens in a population (a toy illustration of the effect is sketched below). In the current, active development state of AI, LLM companies like Anthropic, Google (Gemini), and OpenAI are scraping the internet for ANY data they can get, and they don't really care about its provenance. As a result, the "for-profit" LLMs are getting trained on a heavily poisoned, heavily synthetic dataset that embeds defects into the model and trains it to produce worse errors. Not to mention the general problem with model poisoning that Anthropic published a paper on a month or two ago.
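
Re: item 6 above - a minimal sketch of what that kind of locally hosted LLM setup can look like, assuming an OpenAI-compatible local server (e.g., Ollama) and the official openai Python client. The endpoint URL, model name, and prompt are illustrative assumptions, not details from the post.

```python
# Hypothetical sketch: ask a locally hosted LLM to draft a log query.
# Assumes an OpenAI-compatible server (such as Ollama) is listening locally;
# the URL, model name, and prompt are placeholders, not from the post.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # assumed local endpoint
    api_key="unused-for-local-servers",    # local servers typically ignore this
)

response = client.chat.completions.create(
    model="qwen2.5-coder",  # placeholder model name
    messages=[{
        "role": "user",
        "content": "Write a CloudTrail Lake SQL query listing all "
                   "ConsoleLogin events from the last 24 hours.",
    }],
)

# The generated query text, ready to paste into a query editor.
print(response.choices[0].message.content)
```

This is roughly the same pattern editor agents like Cline follow: send a natural-language request to whatever endpoint you point them at and surface the generated text for review.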
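
Re: item 15 - a toy, assumption-heavy illustration of the "Hapsburg model" / concentration-of-error idea: each generation fits a simple Gaussian "model" only to synthetic samples drawn from the previous generation's fit, with a small bias term standing in for accumulated defects. It is a cartoon of the mechanism, not a measurement of any real LLM.

```python
# Toy model-collapse demo: every generation "trains" only on synthetic data
# sampled from the previous generation's model. All numbers are illustrative.
import random
import statistics

random.seed(0)

# Generation 0: "real" data from a wide distribution (mean 0, stdev 10).
data = [random.gauss(0, 10) for _ in range(1000)]

for generation in range(10):
    # "Train" a model: fit mean and stdev to the current dataset.
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    print(f"gen {generation}: mean={mu:+.2f}, stdev={sigma:.2f}")

    # The next generation sees only samples from this model, plus a small
    # systematic bias standing in for embedded defects.
    data = [random.gauss(mu + 0.5, sigma * 0.9) for _ in range(1000)]
```

Run it and the fitted mean drifts steadily away from the original data while the stdev collapses, which is the inbreeding-style error concentration the post describes when models train on each other's output.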