There is one other very useful form of "expansion" that LLMs do.
If you aren't aware: (high-parameter-count) LLMs can be used pretty reliably to teach yourself things.
LLM base models "know things" to about the same degree that the Internet itself "knows" those things. For well-understood topics — i.e. subjects where the Internet contains all sorts of open-source textbooks and treatments of the subject — LLMs really do "know their shit": they won't hallucinate, they will correct you when you're misunderstanding the subject, they will calibrate to your own degree of expertise on the subject, they will make valid analogies between domains, etc.
Because of this, you can use an LLM as an infinitely-patient tutor, to learn-through-conversation any (again, well-understood) topic you want — and especially, to shore up any holes in your understanding.
(I wouldn't recommend relying solely on the LLM — but I've found "ChatGPT in one tab, Wikipedia open in another, switching back and forth" to be a very useful learning mode.)
See this much-longer rambling https://news.ycombinator.com/item?id=43797121 for details on why exactly this can be better (sometimes) than just reading one of those open-source textbooks.
I use LLMs a huge amount in my work as a senior software engineer to flesh out the background information required to make my actual contributions understandable to those without the same background as me. E.g., if I want to write a proposal on using SLOs and error budgets to make data-driven decisions about which errors need addressing and which don't, inside a hybrid Kubernetes and serverless environment (a sketch of the underlying error-budget arithmetic follows the list below), I could do a few things:
* Not provide background information and let people figure it out for themselves. This will not help me achieve my goals.
* Link them to Google's SRE book and hope they read it. Still not achieving my goals, because they won't.
* Spend 3 hours writing the relevant background information out for them to read as part of my proposal. This will achieve my goals, but take an extra 3 hours.
* Tell the LLM what I'm looking for and why, then let it write it for me in 2 minutes, instead of 3 hours. I can check it over, make sure it's got everything, refine it a little, and I've still saved 2.5 hours.
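(Aside, in case the error-budget part is unfamiliar: the arithmetic a proposal like that rests on is small enough to sketch. This is a hypothetical illustration — the function names, SLO target, and request counts are all invented, not taken from any real system:)

```python
# Hypothetical sketch of error-budget arithmetic; all names and numbers invented.

def error_budget(slo_target: float, total_requests: int) -> float:
    """Number of failed requests the SLO allows over the period."""
    return (1.0 - slo_target) * total_requests

def budget_remaining(slo_target: float, total_requests: int, failed: int) -> float:
    """Fraction of the error budget still unspent (negative means the SLO is breached)."""
    budget = error_budget(slo_target, total_requests)
    return (budget - failed) / budget

# Example: a 99.9% availability SLO over 10M requests in a month.
budget = error_budget(0.999, 10_000_000)               # ~10,000 failed requests allowed
remaining = budget_remaining(0.999, 10_000_000, 4_200)
print(f"budget: {budget:.0f} requests, remaining: {remaining:.0%}")

# The "data-driven decision" is then: an error class that barely dents the
# remaining budget can wait; one that burns it quickly needs addressing.
```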
So for me, I think the author has missed a primary reason people use LLMs. It saves a bunch of time.
A teacher, on the other hand, wants to see you spend those 3 hours: to see what you come up with, whether there's something they should direct your attention to, or something they should change in their instruction.
Ultimately, getting a concise summary of a complex topic (like SLIs and SLOs) is brilliant, but it would be even better if it were full of back-links to deeper dives around the Internet and the SRE book.
It feels like the information is all there, strewn across the Internet in forums, Reddit posts, Stack Overflow, specs, and books — but trawling through it all was so time-consuming. With an LLM you can quickly distill it down to just the information you need.
That said, I do feel like reading the full spec for something is a valuable exercise. There may be unknown unknowns that you can't even ask the LLM about. I was able to become a subject expert in different fields just by sitting down and reading through the specs / RFCs, while other colleagues continued to struggle and guess.
This is how I'm currently relearning upper high school math. It's tremendously helpful, as I am a "why" guy.
Why is the angle called m? Why is a combination nPr * (1/r!)? What is 1/r! doing there?
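(For what it's worth, the combinations one has a short answer: each unordered selection of r items corresponds to r! different orderings, so dividing the permutation count by r! collapses those duplicates. Sketched out:)

```latex
\[
  \binom{n}{r} \;=\; {}^{n}P_{r} \cdot \frac{1}{r!} \;=\; \frac{n!}{(n-r)!\,r!}
\]
% Example: n = 5, r = 2. P(5,2) = 20 ordered pairs; 20 / 2! = 10 unordered pairs.
```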
I use mathacademy.com as my source of practice. Usually that's enough, but I tend to fall over if small details aren't explained and I can't figure out why those details are there.
In high school this was punished. With state of the art LLMs, I have a good tutor.
Also, it's satisfying to just upload a page in my own handwriting and have it understand what I did and correct me right there.
The fact that we see lower grades as "punishment" is the root of the problem: grades are an assessment of our level of understanding and competence in a topic.
Now, I know psychologically it's not as simple, and both society and ourselves equate academic (and professional, later on) success with personal worth, but that's a deeper, harder topic.