Alternate account for @simple@lemmy.world

  • 1.89K Posts
  • 2.43K Comments
Joined 2 years ago
Cake day: July 3rd, 2023


  • simple@lemm.ee to Open Source@lemmy.ml · Proton's biased article on Deepseek
    English · 16 upvotes, 1 downvote · edited · 13 hours ago

    I understand it well. It’s still relevant to mention that you can run the distilled models on consumer hardware if you really care about privacy. 8GB+ of VRAM isn’t crazy, especially if you have a ton of unified memory on a MacBook or one of the Windows laptops releasing this year with 64+ GB of unified memory. There are also websites re-hosting various versions of DeepSeek, like Hugging Face hosting the 32B model, which is good enough for most people (see the sketch below).

    Instead, the article is written as if there were literally no way to use DeepSeek privately, which is simply wrong.
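    For illustration, here is a minimal sketch of what running one of the distilled checkpoints locally can look like, assuming the transformers, torch, and accelerate packages are installed and that the deepseek-ai distill repositories on Hugging Face are used; the exact model name and prompt are placeholders, not details from the comment above.

    ```python
    # Minimal sketch: run a distilled DeepSeek-R1 checkpoint entirely on local hardware.
    # Assumes: pip install transformers torch accelerate, plus enough VRAM or unified
    # memory for the chosen checkpoint (smaller distills suit consumer GPUs; the 32B
    # distill wants a large GPU or plenty of unified memory).
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # swap for the 32B distill if you have the memory

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

    messages = [{"role": "user", "content": "Why does local inference keep prompts private?"}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    # Nothing leaves the machine: the prompt and the generated tokens stay in local memory.
    outputs = model.generate(inputs, max_new_tokens=256)
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
    ```

    The point of the sketch is simply that inference happens on your own hardware, so no prompt ever touches DeepSeek’s servers.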


  • simple@lemm.ee to Open Source@lemmy.ml · Proton's biased article on Deepseek
    English · 93 upvotes, 3 downvotes · 13 hours ago

    “DeepSeek is open source, meaning you can modify code on your own app to create an independent — and more secure — version. This has led some to hope that a more privacy-friendly version of DeepSeek could be developed. However, using DeepSeek in its current form — as it exists today, hosted in China — comes with serious risks for anyone concerned about their most sensitive, private information.

    “Any model trained or operated on DeepSeek’s servers is still subject to Chinese data laws, meaning that the Chinese government can demand access at any time.”

    What??? Whoever wrote this sounds like they have zero understanding of how it works. There is no “more privacy-friendly version” that could be developed; the models are already out, and you can run the entire model 100% locally. That’s as privacy-friendly as it gets.

    “Any model trained or operated on DeepSeek’s servers is still subject to Chinese data laws”

    Operated, yes. Trained, no. The model is MIT licensed; China has nothing on you when you run it yourself. I expect better from a company whose whole business is built on privacy.



  • simple@lemm.ee to dailygames@lemmy.zip · WhenTaken
    English · 1 upvote · edited · 21 hours ago

    #WhenTaken #339 (31.01.2025)
    
    I scored 626/1000🎗️
    
    1️⃣📍326 km - 🗓️6 yrs - 🥇182/200
    2️⃣📍10.7K km - 🗓️1 yrs - 🥉99/200
    3️⃣📍2.8 km - 🗓️6 yrs - 🥇193/200
    4️⃣📍3.7K km - 🗓️38 yrs - 🥉33/200
    5️⃣📍3.0K km - 🗓️12 yrs - 🥉119/200
    

    Fourth image ruined me.


  • This isn’t true. R1 trades blows with o1, which is the best model OpenAI has released so far; all the hype around o3 is just vaporware until we have an actual product. Most of the buzz isn’t because China made a comparable model with less; it’s because they released the weights for free.

    Why would you pay $200/month to OpenAI when you can use R1 for free? Better yet, companies can now self-host it for better security and far lower costs. The hype is warranted.
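    For illustration, a minimal sketch of what self-hosting can look like in practice, assuming the weights are already served by an OpenAI-compatible inference server (for example vLLM) on your own infrastructure; the base URL, API key, and model name are placeholders, not details from the comment above.

    ```python
    # Minimal sketch: talk to a self-hosted, OpenAI-compatible server instead of api.openai.com.
    # Assumes: pip install openai, and an inference server (e.g. vLLM) already serving the R1 weights.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:8000/v1",  # your own server, on your own network
        api_key="not-needed-locally",         # local servers typically ignore this value
    )

    response = client.chat.completions.create(
        model="deepseek-r1",  # whatever name your server registered the weights under
        messages=[{"role": "user", "content": "Summarize this internal report."}],
    )

    # Requests and responses stay inside your own infrastructure.
    print(response.choices[0].message.content)
    ```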


  • simple@lemm.ee to Microblog Memes@lemmy.world · OpenAI hard work got stolen...
    English · 75 upvotes, 2 downvotes · edited · 3 days ago

    I am not crazy! I know they copied our data! I knew it was OpenAI material. One after Magna Carta. As if I could ever make such a mistake. Never. Never! I just – I just couldn’t prove it. They – they covered their tracks, they got that idiot at the copy shop to lie for them. You think this is something? You think this is bad? This? This chicanery? They’ve done worse. Are you telling me that a model just happens to form like that? No! They orchestrated it! Deepseek!