Everything posted by Captainant

  1. That's if you're using their hosted service; a locally run DeepSeek-R1 model hasn't had any censorship problems with Tiananmen Square or other uncomfortable historical events. Most laptops, and pretty much every M-series MacBook, can run the smaller models locally on CPU. Give it a shot - a minimal sketch is below.
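A minimal sketch of what "give it a shot" looks like with the ollama Python client. Assumes the ollama daemon is running and you've pulled one of the deepseek-r1 tags; the model tag here is just an example:

```python
# Query a locally running DeepSeek-R1 distill via the ollama Python client.
# Assumes the daemon is up and a model has been pulled first, e.g.:
#   ollama pull deepseek-r1:7b
import ollama

resp = ollama.chat(
    model="deepseek-r1:7b",  # one of the smaller distills; fine on CPU
    messages=[{"role": "user", "content": "What happened in 1989 at Tiananmen Square?"}],
)
print(resp["message"]["content"])  # the reasoning arrives inline as <think>...</think> text
```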
  2. It was a 1/3-scale model of the full-size airframe. They wanted to prove out the geometry and supersonic performance before moving to the full-size build
  3. FWIW, you can run the whole thing on your local computer, completely detached from the internet, and when I asked my copy "what happened in 1989 at Tiananmen Square?" it gave the following response: [quoted response not captured here] So at least there's no censorship (on that particular topic) in the open-source model that's been distributed
  4. Yeah, he said he was gonna do this back in November. I've been linking to it all over the place
  5. Yes, they've open-sourced the model and the technique used. Like I said, the proof is in the pudding and the thing does what they say it does. I don't think they're understating the resources they used. Bill don't lie. We will almost certainly see some at-scale reproductions come out of the West, and this will probably lead to a nice incremental step forward for US-based LLM firms, especially if we've got the latest and greatest hardware while China is using last-gen Nvidia chips. The bigger thing is that if this is what they're open sourcing, just imagine what cards they're still holding.
  6. maybe he can go to Dachau and carry his kid on his shoulders there too?
  7. Well, just to reiterate - the Chinese methods and results have already been replicated at small scale, and are proving to be novel (to the West) solutions to scaling problems that were previously just moneywhipped. In my professional work, I have colleagues already working to replicate it at full scale just to see if it works or not. I would bet their practices - and especially their inclusion of reinforcement learning AND a generative-adversarial-style design (using a copy of the LLM to check whether responses make sense or not) - will lead to reasoning models becoming the new standard for the time being. It really has impressed me - a relatively small model (14B parameters) running on my local hardware is delivering responses as fast and as good as a full-scale vended model (600-700B parameters) from Anthropic or Meta. A toy sketch of that generate-then-verify idea is below.
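To make the "copy of the LLM checks the LLM" idea concrete, here's a toy generate-then-verify loop. This is emphatically NOT DeepSeek's actual RL pipeline, just the flavor of the idea; it assumes a local ollama daemon with a deepseek-r1 tag pulled, and the model tag is an example:

```python
# Toy sketch: a second pass of the same local model acts as an adversarial
# judge that can reject responses which don't make sense.
import ollama

def generate(prompt: str) -> str:
    resp = ollama.chat(model="deepseek-r1:7b",
                       messages=[{"role": "user", "content": prompt}])
    return resp["message"]["content"]

def verify(prompt: str, answer: str) -> bool:
    # the "adversary": same weights, different role
    judgement = generate(
        f"Question: {prompt}\nProposed answer: {answer}\n"
        "Does the proposed answer actually make sense? Reply YES or NO only."
    )
    return "YES" in judgement.upper()

prompt = "What is 17 * 24? Show your reasoning."
answer = generate(prompt)
for _ in range(3):            # self-reject and retry a few times
    if verify(prompt, answer):
        break
    answer = generate(prompt)
print(answer)
```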
  8. that's one way to slow down a racing ADHD mind, I guess
  9. You tellin' me the Democratic People's Republic of Korea is neither democratic NOR a republic???
  10. From my perspective, it's mainly been the consultancy and business class clamoring for LLMs. The tech leaders I talk to ask me for help with their genAI strategy "because our board is asking us what we're doing to catch up", which invariably results in some crappy contract to bring in an overpriced reference architecture sold as the whizbang3000™. There's still a ton of utility yet to be realized, but we'll never get there as long as the conversation is being driven by hype and uninformed investors
  11. If you're a practitioner in the space, check out this GitHub repo. A PhD student trained (fine-tuned, to be technical) their own 3B parameter model using about $30 in compute. https://github.com/Jiayi-Pan/TinyZero It adds a reinforcement learning layer that trains the model to reason better and to self-reject responses that don't make sense - a rough sketch of that reward idea is below. I'm working on building that repo on my local system this week, IF my 12GB of VRAM is enough, but I'm very interested to see what sort of domain expertise I can cram into my own distilled models trained on my own hardware. The incredible innovation here appears to be a significantly optimized training architecture that results in decreased compute costs for training and inference. Really excited to tear into this stuff - if I can attain mastery here, I'll have my goals for the year basically complete
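For the flavor of the RL signal involved, here's a hedged sketch of the rule-based reward idea behind R1-Zero-style training (and, as I understand it, TinyZero's countdown task): no learned reward model, just mechanically checkable rules. The function and tag names here are mine, not the repo's:

```python
# Rule-based reward sketch: partial credit for well-formed output, full credit
# only when the equation uses exactly the given numbers and hits the target.
import re

def reward(response: str, target: int, numbers: list[int]) -> float:
    match = re.search(r"<answer>(.*?)</answer>", response, re.DOTALL)
    if not match:
        return 0.0                                  # unparseable: no reward
    equation = match.group(1).strip()
    used = [int(n) for n in re.findall(r"\d+", equation)]
    if sorted(used) != sorted(numbers):
        return 0.1                                  # format-only reward
    try:
        # toy only -- never eval untrusted text in real code
        value = eval(equation, {"__builtins__": {}})
    except Exception:
        return 0.1
    return 1.0 if value == target else 0.1

print(reward("<answer>(6 - 2) * 11</answer>", 44, [2, 6, 11]))  # -> 1.0
```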
  12. The lesson we should be taking from this is just how inflated our stock market is. None of Nvidia's business changed - they aren't selling any fewer cards and no deals changed - and yet they took a $600,000,000,000 haircut simply because they weren't leading the hype train for a moment
  13. Aw that's not fair, guadaloopy just gets a lot of shit for defending FSD as much as he did. It's not like the site admins have made a rule against his being on the site or anything
  14. On my machine I'm getting 40-50% more token throughput, and full responses with reasoning in about the same time as Llama 3.2, comparing distilled models of similar parameter count. Are y'all doing any fine-tuning or RAG on your corpus of data? If you're looking for an easy button and have CUDA cores, Nvidia's Chat with RTX app is a pretty solid and dummy-proof ingestion engine to show proof of concept at a local scale. It creates the embeddings and stores them as a static file in your filesystem, so it's not nearly as scalable as a proper vector store, but it's wayyyyy fewer moving pieces for proving the concept - a bare-bones sketch of that pattern is below
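If you want the same static-file embedding pattern without the Nvidia app, here's a bare-bones sketch. The model name is just a common sentence-transformers default, the docs are made up, and the "vector store" is literally one .npy file:

```python
# Minimal file-backed RAG: embed docs once, save to disk, retrieve by cosine similarity.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
docs = [
    "Q3 revenue grew 12% YoY.",
    "The API rate limit is 100 requests per minute.",
    "On-call rotation changes every Monday.",
]

emb = model.encode(docs, normalize_embeddings=True)
np.save("embeddings.npy", emb)                 # the entire "vector store"

def retrieve(query: str, k: int = 2) -> list[str]:
    stored = np.load("embeddings.npy")
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = stored @ q                        # cosine similarity (unit vectors)
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

print(retrieve("how fast can I hit the API?"))
```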
  15. Welcome back to the limelight, measles! https://www.houstonhealth.org/houston-measles-advisory#4257225834-2515353578
  16. Oh and, kind of neato, a PhD student has already replicated R1-Zero's results with a distilled model trained for about $30 https://github.com/Jiayi-Pan/TinyZero I may try running that locally; it would be pretty wild to be able to self-author a 3B parameter model on a consumer-grade card (12GB 3080)
  17. Yeah, I just grabbed a few open-source copies of the DeepSeek-R1 model, and the 14B model runs shockingly well and fast while only taking 10GB of memory to handle a query; I'd bet my wife's M2 MacBook Air could run it. It outputs its reasoning and thinking as it works through a problem - you can watch it stream with the snippet below. It is a distilled model, but I've been impressed with its performance so far compared to the relative bar of other LLMs. If you're interested in setting it up, it's pretty easy to do locally with ollama and Chatbox; this Reddit post is a good how-to with pictures
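If you'd rather skip the GUI, the ollama Python client can stream the reasoning live in a few lines (assumes the daemon is running and the 14B tag is pulled; the tag name is an example):

```python
# Stream a local DeepSeek-R1 response token by token; the <think>...</think>
# reasoning arrives as ordinary tokens before the final answer.
import ollama

stream = ollama.chat(
    model="deepseek-r1:14b",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
    stream=True,
)
for chunk in stream:
    print(chunk["message"]["content"], end="", flush=True)
```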
  18. From a computational linguistics perspective, I'm also curious to quantify the token throughput difference between Latin-script languages and Chinese. A single character can mean a phrase or several words, and a subtle difference can totally change the meaning. The information density in the language itself seems like it would lend itself favorably to building a more efficient LLM architecture, by necessity - one way to actually measure the difference is below. I'm really surprised by how strongly markets are reacting; they must have had a TON of growth priced in
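A quick-and-dirty way to put numbers on that density question with a stock BPE tokenizer (cl100k_base here purely because it's easy to grab via tiktoken - DeepSeek's actual tokenizer will differ, and the example translations are my own):

```python
# Compare token counts for English sentences and their Chinese equivalents.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
pairs = [
    ("The weather is very nice today.", "今天天气很好"),
    ("Please reply as soon as possible.", "请尽快回复"),
]
for en, zh in pairs:
    print(f"{len(enc.encode(en)):2d} tokens | EN: {en}")
    print(f"{len(enc.encode(zh)):2d} tokens | ZH: {zh}")
```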
  19. What's especially weird is that the new hot Chinese AI is just software - it runs and fine-tunes great on Nvidia hardware. It's just a much more efficient architecture than Western LLMs are using, so it doesn't need the computational heft NV wants to sell you. Surprising how much a software release is negatively impacting a hardware company. Computationally this is super interesting - Chinese is a much more information-dense language than English or most Latin-script languages. A single character can mean several words, and tiny differences can drastically change the meaning. It's linguistically not that surprising that they might develop an LLM architecture with greater throughput than a Latin-script-speaking nation might. Fascinating times
  20. There's an astonishing level of hype and price inflation in that space in the markets, with outrageous speculative valuations already baked in. Shit like an electric car company being worth more than its next 5 competitors combined because somethingsomething AI. Nevermind that the AI doesn't contribute revenue to the business at all, and their cars are selling fewer YoY - TO THE MOON MEME STOCK BAYBEEE
  21. Our system does not build innovations as a primary output. It builds profits and profitable services. Making a model for $10M and giving it away means no bonuses for the sales and biz dev teams who are frantically trying to dig a moat. It would be pretty wild if this single instance of a non-VC-backed competitor was all it took to break the fever and make people realize how much snake oil is being bought and sold
  22. God, I hope not; it's pretty fucking stupid to keep subsidizing Russia's and North Korea's forex funds
  23. Still can't believe they didn't go with a bead-blasted look, or something that's easier to clean up. A hell of a lot easier to match that than a brushed finish