
Captainant

Certifiably Surly
  • Posts: 16964
  • Joined
  • Days Won: 6

Everything posted by Captainant

  1. You tellin' me the Democratic People's Republic of Korea is neither democratic NOR a republic???
  2. From my perspective, it's mainly been the consultancy and business class clamoring for LLMs. The tech leaders I talk to ask me for help with their genAI strategy "because our board is asking us what we're doing to catch up", which invariably results in some crappy contract to bring in an overpriced reference architecture sold as the whizbang3000™. There's still a ton of utility yet to be realized, but we'll never get there as long as the conversation is being driven by hype and uninformed investors
  3. If you're a practitioner in the space, check out this GitHub repo. A PhD student trained (fine-tuned, to be technical) their own 3B parameter model using about $30 in compute. https://github.com/Jiayi-Pan/TinyZero It adds a reinforcement learning layer that trains the model to reason better and self-reject responses that don't make sense (rough sketch of that reward idea below, after the post list). I'm working on building that repo on my local system this week, IF my 12GB of VRAM is enough, but I'm very interested to see what sort of domain expertise I can cram into my own distilled models trained on my own hardware. The incredible innovation here appears to be a significantly optimized training architecture that results in decreased compute costs for training and inference. Really excited to tear into this stuff - if I can attain mastery I'll have my goals basically complete for the year
  4. The lesson we should be taking from this is seeing just how inflated our stock market is. None of nvidia's business changed, they aren't selling any fewer cards and no deals changed, and yet they took a $600,000,000,000 haircut simply because they weren't leading the hype train for a moment
  5. Aw that's not fair, guadaloopy just gets a lot of shit for defending FSD as much as he did. It's not like the site admins have made a rule against his being on the site or anything
  6. On my machine I'm getting 40-50% more token throughput and full responses with reasoning in about the same time as Llama3.2, comparing similar parameter count distilled models. Are y'all doing any fine-tuning or RAG on your corpus of data? If you're looking for an easy button and have CUDA cores, Nvidia's Chat with RTX app is a pretty solid and dummy-proof ingestion engine to show proof of concept at a local scale. It creates the embeddings and stores them as a static file in your filesystem, so it's not nearly as scalable as a proper vector store, but it's wayyyyy fewer moving pieces for proving the concept (minimal sketch of that static-file approach below, after the post list)
  7. Welcome back to the limelight, measles! https://www.houstonhealth.org/houston-measles-advisory#4257225834-2515353578
  8. Oh and, kind of neato, a PhD student has already replicated R1-Zero's results with a $30 trained distilled model https://github.com/Jiayi-Pan/TinyZero I may try running that locally, it would be pretty wild to be able to self-author a 3B parameter model on a consumer-grade card (12GB 3080) - some back-of-envelope VRAM math below, after the post list
  9. Yeah I just grabbed a few open source copies of the DeepSeek R1 model and the 14B model runs shockingly well and fast, and only takes 10GB of memory while handling a query - I'd bet my wife's M2 MacBook Air could run it. It outputs its reasoning and thinking as it works through a problem. It is a distilled model, but I've been impressed with its performance so far relative to the bar set by other LLMs. If you're interested in setting it up, it's pretty easy to do locally with ollama and chatbox, this reddit post is a good howto with pictures (a bare-bones query example is below, after the post list)
  10. From a computational linguistics perspective I'm also curious to quantify the token throughput difference between Latin-script languages and Chinese. A single character can mean a phrase or several words, and a subtle difference can totally change the meaning. The information density in the language itself seems like it would lend itself favorably to building a more efficient LLM architecture, by necessity (a quick tokenizer comparison is sketched below, after the post list). I'm really surprised by how strongly markets are reacting, they must have had a TON of growth priced in
  11. What's especially weird is that the new hot Chinese AI is just software - it runs and fine-tunes great on Nvidia hardware. It's just a much more efficient architecture than western LLMs are using, so it doesn't need the computational heft NV wants to sell you. Surprising how much a software release is negatively impacting a hardware company. Computationally this is super interesting - Chinese is a much more information-dense language than English or most Latin-script languages. A single character can mean several words, and tiny differences can drastically change the meaning. It's linguistically not that surprising that they might develop an LLM architecture with greater throughput than a Latin-script-speaking nation might. Fascinating times
  12. There's an astonishing level of hype and price inflation in that space in the markets, with outrageous speculative valuations already baked in. Shit like an electric car company being worth more than its next 5 competitors combined because somethingsomething AI. Nevermind that the AI doesn't contribute revenue to the business at all, and they're selling fewer cars year over year - TO THE MOON MEME STOCK BAYBEEE
  13. Our system does not build innovations as a primary output. It builds profits and profitable services. Making a model for $10M and giving it away means no bonuses for the sales and biz dev teams who are frantically trying to dig a moat. It would be pretty wild if this single instance of a non VC backed competitor was all it took to break the fever and realize how much snake oil is being bought and sold
  14. God I hope not, it's pretty fucking stupid to keep subsidizing Russia's and North Korea's forex funds
  15. Still can't believe they didn't go with a bead blasted look, or something that's easier to clean up. Hell of a lot easier to match that than a brushed finish
  16. No, like he showed up on screen at an AfD rally and delivered remarks about how Germany should stop feeling so bad about what happened in the past
  17. Bozo don't you have some NFT scam to sell people on?
  18. My guy, elon is speaking at the neonazi rallies in Germany by teleconference. It's not just his sieg heil, but that's certainly been the straw that broke the camel's back. It's pretty fucked up to imply that all autistic people are neonazis and love hitler, not gonna lie. Also, since elmo took over, Twitter has had steadily worse performance and fewer features without logging in. The tweet embedding feature was meaningfully slowing the site and page loads down in threads with a lot of tweets. That's not the case with other embeds, twitter was pretty exquisitely enshittified
  19. Working now btw, I dig the orange border too. Good shit, man
  20. Not loading for me on mobile web
  21. https://www.nbcnews.com/news/world/trump-says-wants-clean-gaza-move-palestinians-jordan-egypt-rcna189317 “I’d rather get involved with some of the Arab nations and build housing at a different location where they can maybe live in peace for a change,” Trump said to reporters aboard Air Force One on Saturday. “You’re talking about probably a million and a half people, and we just clean out that whole thing and say, ‘You know, it’s over,’” he added.
  22. Gotta love unpolled tax increases! No income tax though amirite hurrrrdeeeeehurree
  23. Wanted to circle back and put some context on your libs of tiktok shitposts, you goddamn Nazi apologist x3w7cy1svafe1DASH_480.mp4
  24. https://www.reuters.com/world/europe/elon-musk-appears-video-german-far-right-campaign-event-2025-01-25/
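
Sketch for post 3: a toy, rule-based reward of the kind TinyZero-style RL setups use to self-reject bad outputs. This is not code from the repo; the task framing (Countdown-style arithmetic), function names, and scoring values are my own illustrative assumptions.

```python
# Toy sketch of a rule-based reward for a Countdown-style task.
# NOT the TinyZero code -- just an illustration of how an RL reward can
# "self-reject" answers that don't check out. Names, scoring values, and
# the parsing approach are my own assumptions.
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(node):
    """Evaluate a parsed arithmetic expression without calling eval()."""
    if isinstance(node, ast.Expression):
        return safe_eval(node.body)
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in OPS:
        return OPS[type(node.op)](safe_eval(node.left), safe_eval(node.right))
    raise ValueError("disallowed expression")

def countdown_reward(completion: str, numbers: list[int], target: int) -> float:
    """Return 1.0 for a correct expression, 0.1 for well-formed but wrong, else 0."""
    try:
        tree = ast.parse(completion.strip(), mode="eval")
        used = [n.value for n in ast.walk(tree) if isinstance(n, ast.Constant)]
    except SyntaxError:
        return 0.0              # unparseable output gets rejected outright
    if sorted(used) != sorted(numbers):
        return 0.0              # must use exactly the given numbers
    try:
        value = safe_eval(tree)
    except (ValueError, ZeroDivisionError):
        return 0.0
    return 1.0 if abs(value - target) < 1e-6 else 0.1

# e.g. countdown_reward("(25 - 5) * 4", [25, 5, 4], 80) -> 1.0
```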
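
Sketch for post 6: the "embeddings as a static file" idea, done with off-the-shelf pieces rather than Chat with RTX itself. The model name, file path, and toy corpus are arbitrary choices for illustration, assuming sentence-transformers and numpy are installed.

```python
# Minimal local sketch: embed a tiny corpus once, dump it to a static file,
# then do brute-force cosine retrieval at query time. Same concept, far
# fewer moving pieces than a proper vector store.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "Quarterly revenue grew 12% on datacenter demand.",
    "The fine-tuning run needs roughly 10GB of VRAM.",
    "Distilled 14B models run comfortably on consumer cards.",
]

# One-time ingestion: embed the corpus and save it to disk.
embeddings = model.encode(docs, normalize_embeddings=True)
np.save("corpus_embeddings.npy", embeddings)

# Query time: embed the question and take the best cosine match.
stored = np.load("corpus_embeddings.npy")
query = model.encode(["How much GPU memory does training need?"],
                     normalize_embeddings=True)
scores = stored @ query[0]
print(docs[int(np.argmax(scores))])
```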
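
Sketch for post 8: rough, rule-of-thumb VRAM math for a 3B parameter model on a 12GB card. The bytes-per-parameter figures are standard approximations, not measurements from TinyZero.

```python
# Back-of-envelope VRAM math for a 3B-parameter model on a 12GB card.
# Rough rules of thumb only; activations and KV cache are ignored.
params = 3e9

bf16_weights_gb = params * 2 / 1e9                 # ~6 GB just to hold weights
# Naive Adam fine-tuning: bf16 weights + grads, plus two fp32 optimizer states.
full_finetune_gb = params * (2 + 2 + 4 + 4) / 1e9  # ~36 GB, not happening on 12GB
# Which is why LoRA/PEFT, gradient checkpointing, or 8-bit optimizers are
# usually needed to squeeze training onto a consumer card.
q4_inference_gb = params * 0.5 / 1e9               # ~1.5 GB for 4-bit inference

print(f"bf16 weights:        {bf16_weights_gb:.1f} GB")
print(f"naive Adam finetune: {full_finetune_gb:.1f} GB")
print(f"4-bit inference:     {q4_inference_gb:.1f} GB")
```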
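
Sketch for post 9: a bare-bones query against a local ollama daemon over its HTTP API, assuming the deepseek-r1:14b distill has already been pulled and ollama is running on its default port.

```python
# Quick sanity check against a local ollama daemon via its HTTP API.
# Assumes `ollama` is running on localhost:11434 and the model is pulled.
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "deepseek-r1:14b",
        "messages": [{"role": "user",
                      "content": "Why is the sky blue? Think it through."}],
        "stream": False,
    },
    timeout=300,
)
resp.raise_for_status()
# The R1 distills print their chain of thought in <think> tags before the answer.
print(resp.json()["message"]["content"])
```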
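
Sketch for post 10: a quick way to eyeball the character/token density gap with an off-the-shelf tokenizer. Tokenizer choice matters a lot here (cl100k_base is an English-heavy BPE vocabulary), so treat this as a curiosity rather than a measurement of any model's efficiency; the sentence pairs are my own.

```python
# Compare character and token counts for English sentences and their
# Chinese translations using an off-the-shelf BPE tokenizer.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

pairs = [
    ("The weather is really nice today.", "今天天气真好。"),
    ("Artificial intelligence models are developing quickly.", "人工智能模型发展很快。"),
]

for english, chinese in pairs:
    print(f"EN: {len(english):3d} chars -> {len(enc.encode(english)):2d} tokens | "
          f"ZH: {len(chinese):3d} chars -> {len(enc.encode(chinese)):2d} tokens")
```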