hodlbod on Nostr: This article says a lot of what I wanted to say about LLMs, but couldn't find the ...
This article says a lot of what I wanted to say about LLMs, but couldn't find the words:
https://acko.net/blog/the-l-in-llm-stands-for-lying/

I don't agree with his conclusion that leaning into intellectual property rights and source citation is the solution, though that's an interesting thought. But there are some great sections, particularly in the first half. Here are some highlights:
> LLMs do something very specific: they allow individuals to make forgeries of their own potential output, or that of someone else, faster than they could make it themselves.
> Experienced veterans who turn to AI are said to supposedly fare better, producing 10x or even 100x the lines of code from before. When I hear this, I wonder what sort of senior software engineer still doesn't understand that every line of code they run and depend on is a liability.
>
> One of the most remarkable things I've heard someone say was that AI coding is a great application of the technology because everything an agent needs to know is explained in the codebase. This is catastrophically wrong and absurd, because if it were true, there would be no actual coding work to do.
>
> It's also a huge tell. The salient difference here is whether an engineer has mostly spent their career solving problems created by other software, or solving problems people already had before there was any software at all. Only the latter will teach you to think about the constraints a problem actually has, and the needs of the users who solve it, which are always far messier than a novice would think.
Published at
2026-03-05 19:36:45 CET

Event JSON
{
"id": "a414f61f5629069f6536ca53081e96889c9d55c9c7d441f8fb19afdd6523535c",
"pubkey": "97c70a44366a6535c145b333f973ea86dfdc2d7a99da618c40c64705ad98e322",
"created_at": 1772735805,
"kind": 1,
"tags": [],
"content": "This article says a lot of what I wanted to say about LLMs, but couldn't find the words: https://acko.net/blog/the-l-in-llm-stands-for-lying/\n\nI don't agree with his conclusion that leaning in to intellectual property rights and source citation is the solution, though that's an interesting though. But there are some great sections, particularly in the first half. Here are some highlights:\n\n\u003e LLMs do something very specific: they allow individuals to make forgeries of their own potential output, or that of someone else, faster than they could make it themselves.\n\n\u003e Experienced veterans who turn to AI are said to supposedly fare better, producing 10x or even 100x the lines of code from before. When I hear this, I wonder what sort of senior software engineer still doesn't understand that every line of code they run and depend on is a liability.\n\u003e\n\u003e One of the most remarkable things I've heard someone say was that AI coding is a great application of the technology because everything an agent needs to know is explained in the codebase. This is catastrophically wrong and absurd, because if it were true, there would be no actual coding work to do.\n\u003e\n\u003e It's also a huge tell. The salient difference here is whether an engineer has mostly spent their career solving problems created by other software, or solving problems people already had before there was any software at all. Only the latter will teach you to think about the constraints a problem actually has, and the needs of the users who solve it, which are always far messier than a novice would think.",
"sig": "62ebcb617b6a2e76c7eaeba50ac147a5caafd9cf6987188137361337661e66072c3c02a21c78a0ef08fa291c13a40c60926c05fe9ee502785e4e62188f14e266"
}