That’s not how LLMs work; it’s part of the reinforcement learning or SFT dataset. Data labelers would have written or generated tons of examples using this and other patterns (all the emoji READMEs, for example) that the models then emulate. The early ones had very formulaic essay-style outputs that always ended with “in conclusion”, lots of the same kind of bullet lists, and a love of adjectives and delving, all of which were intentionally trained in. It’s more subtle now, but it’s still there.
Maybe I was being imprecise, but I’m not sure what you mean by “not how LLMs work” - discovering patterns in how humans write is exactly the signal they are trained against. It’s either explicitly curated, as in SFT, or coaxed out during RLHF, no?
It could even have been picked up in pretraining and then rewarded during RLHF when the output domain was being refined; I haven’t used enough LLMs from before post-training to know at which step it usually becomes noticeable.