Gaywallet (they/it)@beehaw.org to Technology@beehaw.org · 2 years ago
A jargon-free explanation of how AI large language models work (arstechnica.com)
cross-posted to: auai@programming.dev, aicompanions@lemmy.world, technology@lemmy.world
PenguinTD@lemmy.ca · 2 years ago
Because in the end it’s all statistics and math. Humans are full of mistakes (intentional or not), and living languages evolve over time (even their grammar), so whatever we are building “now” is a contemporary “good enough” representation.
kosmoz@beehaw.org · 2 years ago
Also, humans tend to be notoriously bad at both statistics and math :^)