I’ve tried using deepseek (first time I’ve ever used an LLM, so maybe I’m being dumb) to help me a little with designing a circuit, because my reference book was leaving out a LOT of crucial information.

The results have been … subpar. The model seems to be making quite elementary mistakes, like leaving components floating with missing connections.

I’m honestly kinda disappointed. Maybe this is a weak area for it. I’ve probably had to tell deepseek more about designing the circuit in question than it has told me.

Edit: I realised I was just being dumb, since LLMs aren’t designed for this task.

  • davel [he/him]@lemmygrad.ml · 12 points · 5 days ago

    I wouldn’t have thought an LLM to be of use for circuit design in the first place, so I wouldn’t have been disappointed.

      • lorty@lemmygrad.ml · 8 points · 5 days ago

        Maybe I’m wrong, but the amount of code available online that you can use to feed a model is probably a few orders of magnitude larger than the amount of circuit designs.

      • KrasnaiaZvezda@lemmygrad.ml · 7 points · 5 days ago

        LLMs are often trained on a large share of code, sometimes up to around 80% depending on the intended use (though it’s usually probably lower), as that has been shown to improve their logical/reasoning skills.

        Basically, if a task can be done with only words and there is a lot of data for it, present-day LLMs can probably get really good at it when properly trained. But for things like circuits, where a lot of the data is likely graphical or there just might not be much of it, LLMs aren’t as good yet.