suoko@feddit.it to AI@lemmy.ml · 10 days ago
Cutting-edge Chinese “reasoning” model rivals OpenAI o1—and it’s free to download (arstechnica.com)
cross-posted to: technology@lemmy.world, technology@beehaw.org, technology@lemmy.ml
Scipitie@lemmy.dbzer0.com · 9 days ago
How much VRAM does your Ti pack? Is that the standard 8 GB GDDR6? I ask because I’m surprised and impressed that a 14B model runs smoothly. Thanks for the insights!
birdcat@lemmy.ml · 2 days ago
I don’t even have a GPU and the 14B model runs at an acceptable speed. But yes, faster and bigger would be nice… or knowing how to distill the biggest one, because I only use it for something very specific.
johant@lemmy.ml · 7 days ago
Sorry, it should have said 3080 Ti, which has 12 GB of VRAM. Also, I guess the model is Q4.
No worries, thank you!
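For anyone wondering why a Q4 quant of a 14B model fits comfortably in 12 GB of VRAM, here is a minimal back-of-the-envelope sketch in Python. The effective bits-per-weight figure and the overhead allowance are illustrative assumptions, not numbers from this thread:

```python
# Rough memory estimate for a quantized LLM's weights plus runtime overhead.
# Assumptions (not from the thread): ~4.5 effective bits/weight for a
# Q4_K_M-style quant, and a flat ~1.5 GiB allowance for KV cache/activations.

def quantized_vram_gib(params_billion: float,
                       bits_per_weight: float = 4.5,
                       overhead_gib: float = 1.5) -> float:
    """Return an approximate memory footprint in GiB."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes / 2**30 + overhead_gib

print(f"{quantized_vram_gib(14):.1f} GiB")  # ~8.8 GiB -> fits on a 12 GB 3080 Ti
```

By the same arithmetic, an 8-bit quant of a 14B model would need roughly 13 GiB for the weights alone, which is why Q4 is the natural fit for a 12 GB card.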