• 0 Posts
  • 273 Comments
Joined 9 months ago
Cake day: April 7th, 2025

  • I am running Wayland on an IVB GT1. Your hardware can’t possibly be shittier than this and still capable of handling modern tasks. Also, Wayland just needs the infrastructure for accelerated draws; if your GPU doesn’t support that, it won’t work with X anyway, unless you’re running truly exotic 2D accelerators from the 90s.


  • vivendi@programming.dev to Linux@programming.dev · *Permanently Deleted*
    20 days ago

    That’s complete horseshit. There are like 3 major implementations of Wayland, and 2 of them exist because the other one wasn’t ready at the time. There are other hobby implementations, but they all work together. Just like how different network stacks can all talk TCP to each other and be fine. Nobody calls TCP fragmented because there are different network stacks…

    There are also smaller projects.

    Also, the model of a protocol allows Wayland to be deployed on truly exotic operating systems. As long as the top level is compliant, shit just works.

  • One of the absolute best uses for LLMs is generating quick summaries of massive amounts of data. It is pretty much the only use case where, if the model doesn’t overflow its context and become incoherent [1], it is extremely useful.

    But nooooo, this is luddite.ml; saying anything good about AI gets you burnt at the stake.

    Some of y’all would’ve lit the fire under Jan Hus if you lived in the 15th century

    [1] This is mostly a concern for local models with smaller parameter counts running quantized; for premier models it isn’t really an issue.
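
    For what it’s worth, the usual way around the overflow problem is map-reduce style summarization: split the text into chunks that fit comfortably in the model’s context, summarize each chunk, then summarize the summaries. A minimal sketch below; `call_llm` is a hypothetical stand-in for whatever backend you actually use (llama.cpp, an OpenAI-compatible endpoint, etc.), and the character-based chunk size is an arbitrary assumption.

    ```python
    # Map-reduce summarization sketch: keep each request well under the
    # model's context window, then combine the partial summaries.

    def call_llm(prompt: str) -> str:
        # Hypothetical stand-in: plug in your local or hosted model here.
        raise NotImplementedError

    def chunk(text: str, max_chars: int = 8000) -> list[str]:
        # Crude character-based chunking; token-aware splitting would be better.
        return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

    def summarize(text: str) -> str:
        parts = [call_llm(f"Summarize this:\n\n{c}") for c in chunk(text)]
        if len(parts) == 1:
            return parts[0]
        # Reduce step: summarize the concatenation of the partial summaries.
        return call_llm("Combine these partial summaries into one:\n\n" + "\n\n".join(parts))
    ```

    Keeping chunks small matters most for small quantized local models, which degrade as the prompt grows; premier models with huge context windows can often just take the whole thing in one go.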