AI really speeds up the “I am so new to this topic I don’t even know what questions to ask” phase of learning. When approaching a new topic, it gives me a general overview that I can pick apart into deeper queries, which I then research the old-fashioned way.
I believe this is the least known and most important use case for LLMs today -- understanding and inferring the meaning of language. We've all seen cases where Google occasionally surfaces the right content for an indirect description of what you're looking for. The most famous example I can think of is a search for "eyebrows kid" returning images of Will Poulter. Google's knowledge engine is getting pretty good at this kind of thing, but LLMs are way better.
Language models are exceedingly good at understanding the meaning of your language without the use of specific keywords. Here's an example from a recent search I did.
> metals can be flexed or bend, and it will regain some of its prior shape
in Google returns either "ductile" or "shape-memory alloy," which are both incorrect.
> What is the property of a material where it prefers to stay in its current form? This is often found in metals, where it can be flexed or bent, and it will regain some of its prior shape?
in ChatGPT correctly explains that
> The property you are referring to is called elasticity. Elasticity is the ability of a material to return to its original shape after being deformed (stretched, compressed, bent, etc.) when the external forces are removed.
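For reference, what the answer describes is the linear elastic regime, usually summarized by Hooke's law. This is standard textbook material, not part of ChatGPT's quoted output:

    % Hooke's law: below the elastic limit, stress is proportional to strain,
    % so the material springs back to its original shape once the load is removed.
    \sigma = E \, \varepsilon
    % \sigma = applied stress, E = Young's modulus (material stiffness),
    % \varepsilon = resulting strain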
We all know that LLMs can hallucinate, and they are therefore not a reliable source of truth or knowledge. I'm not necessarily trying to say that LLMs are more accurate than something like Google's knowledge engine. The value in LLMs is that they can infer your meaning to some degree of accuracy (just like asking a human) so that you can productively continue your own research in more depth.
This, so much. For things I don't know the name of or the technical jargon for, I can now give an example, ask "what is this called?", and go from there.
I've found the same for setting up my environments. Little things that were minor quality-of-life issues, where I just didn't know what to search for or couldn't be bothered to go hunting through the docs, I can now easily demonstrate with an example and ask questions about. Little annoyances that I put up with for years are all resolved now, and I'm realizing how much nicer it is to have them gone. I no longer have to live with them just because I can't afford to context switch for a day to figure them out; I can spend 10 minutes with AI to resolve them without much brain power or context switching on my part.
In the IDEs/tools/frameworks I use, so many little "it would be nice if it did this, but I don't have the time to figure it out" things are now resolved.
I can’t speak for “traditional” learning methods, since I’ve long been out of school. But I find ChatGPT is about as good at first-pass research as Google was circa 2003. Google is no longer good for research. It used to be that I didn’t know much about a topic, I’d start googling, and by page 3 or so the results would contain the keywords necessary to blow the topic open. Google no longer even populates page 3.
So I start with basic questions in ChatGPT. I know it’s lying to me. But it uses words and phrases that seem interesting. I can iterate on those, and in a short time I know a phrase I can actually search for in Google to find authoritative content.
I wouldn’t quite call it a game changer yet, as all it’s done is give me back what I once had. But it is a bonus that it can do an OK job of synthesizing new examples from topics that already had lots of disparate examples on the web. It can also give some clues, when you find conflicting information, as to why the information conflicts. It makes stuff up a lot, but there are always good clues in the output.
Since the advent of AI, when I am doing research I have to scroll down a little further after making a search before I can see the results, because now there is some AI stuff in the way at the top of the page.
Not much. It's too inaccurate for my research and is a bad writer.
Day to day I write Lean, and I use moogle.ai to find theorems. It's... fine as a first pass. The website constantly gets confused by similar-looking theorems, it can't pattern match, and it can't really introspect typeclasses (which can be hiding the theorems I want). However, it can usually help me go from a vague description of what I want to some relevant doc pages, so credit where it's due for that.
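For what it's worth, when I can state the goal exactly, Mathlib's built-in `exact?` tactic does the pattern matching that moogle can't. A toy sketch, not a real search from my work:

    import Mathlib

    -- Ask Lean to search the library for a lemma that closes this goal;
    -- `exact?` suggests `exact Nat.add_comm a b` here.
    example (a b : ℕ) : a + b = b + a := by
      exact?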
This is something I've found useful, though I'm not sure whether it applies to learning in general. Here goes:
- I regularly read technical texts, some related to math, some related to programming/software engineering
- I ask ChatGPT-4 (now Claude 3.5 Sonnet) to quiz me on the nuances of topics X, Y, Z
My prompt template looks like so:
"I'd like to test my understanding of {topic}. I'd like you to quiz me one question at a time. Ask nuanced questions. Answers need to be multiple choice that I can pick from. Avoid congratulatory tone in responses. If I pick the right choice for a question, move on to next one without asking. Provide detailed explanations in case I answer something incorrectly."
I've found this surprisingly effective at pointing out gaps in my understanding. YMMV.
I learned linear algebra and would discuss it with GPT4.
Sometimes I made mistakes, sometimes GPT4 made mistakes.
Once GPT4 wouldn't agree with something the textbook said. I said "it's in the textbook", it said "then the textbook must be wrong", "no, you are wrong", "I'm sorry, but that statement is not generally true", "here's the proof"--only after I gave GPT4 the proof did it finally accept that the textbook was correct. It was also able to detect subtle mistakes in the proof, and could not be persuaded by a faulty proof. [0]
I think the biggest help was just participating in conversations about math, anytime I needed. It made me more engaged, more focused on what the textbook was saying and whether or not the textbook was matching what GPT4 and I had discussed.
You know the saying, "the easiest way to get an answer online is to post the wrong answer and start an argument", something like that. Well, that's similar to what GPT4 was doing for me: it would have an engaged discussion, maybe an argument, maybe leave me wondering about something, and that was very motivating when reading the textbook.
The textbook still played a central role in my learning. (GPT4 did catch a mistake in the textbook once though.)
I use it a lot to ask dumb simple questions. I've been delving into a new tech stack at work, and I'm already familiar with the concepts, but I just don't know how to do those things in this specific tech stack. AI saves me a lot of time digging through documentation and SEO spam. It often gets me to the answer faster.
However, I usually only use it to ask dumb simple questions. When it comes to anything more complex or obscure, it often falls flat on its face and either hallucinates or misunderstands the question. Then I'll do an old-fashioned web search and find a clear-cut answer on Stack Overflow.
My experience has been AI is very unreliable right now and you simply can't trust what it tells you. So I only use it in very limited ways.
The ChatGPT app lets you just take photos of everything. I take photos of labels and ask it to explain all the chemicals in my food, shampoo, toothpaste, etc. Which ones are the preservatives? How long do those preservatives last? What's the chemical that makes my teeth hurt when I eat ice cream? What happens if a toothpaste doesn't have that chemical?
Also lately I've been taking photos of stuff like coffee grinders and making it guess what it is. It's surprisingly accurate, and you can use it to explore the thought processes of why someone might pick a particular set.
>> you can use it to explore the thought processes of why someone might pick a particular set
The danger I see here is that if you ask an LLM to explain the thought processes, it will never say “I don’t know”. It will instead describe some thought processes associated with coffee grinders. It may say something like “this grinder has fine grain controls that allow customizing the size of the grind”…which that particular grinder doesn’t have at all…but that’s a thing people write about when choosing grinders. The frustration is that 90% of the answer will be accurate, but somewhere in all the sentences is a hallucination, treated with the exact same authority as the rest of the answer.
To be really specific, it identified a La Marzocco Linea Mini and, incorrectly, a La Marzocco Swift grinder. I cross-checked with Google and the Swift was wrong. ChatGPT then suggested it could be a Mahlkönig EK43 or a Nuova Simonelli Mythos.
Mahlkönig EK43 was the correct answer, but the combo is unusual, because people will usually get a couple of La Marzoccos from a supplier who carries both, and Mahlkönig is an unseen brand here. Why go to the trouble? La Marzocco makes good-enough grinders.
With further interrogation, ChatGPT was absolutely insistent that it was an EK43, a limited-edition model called The Icon, notable for its white color and gold trim. This kind of precision is easy to verify, but it's not a detail that comes up in a Google search for the EK43.
The correct answer was that the coffee shop owner's mentor was from Vietnam, where the EK43 is more common. It is particularly good for making complex latte art like unicorns, as it grinds with a precision that allows controlling the acidity of the crema.
But this whole thought process was just wild. I want it to give me all kinds of crazy answers. I want it to have a high miss rate. It's perfect for brainstorming. But you need sufficient expertise to guide ChatGPT to the next answers.
> the chemical that makes my teeth hurt when I eat ice cream
Does ChatGPT give you an authoritative-sounding answer about chemicals, or does it point out the possibility that you have sensitive teeth and that it's the temperature of the ice cream, not any chemical, that is making your teeth hurt?
This is why I don't ask people; humans are more likely to give an authoritative answer hinting that "you're the problem".
I switch between two toothpastes. One is better for teeth whitening; the other lets me eat ice cream. The sensitive toothpaste is just too safe for me, it doesn't have that minty taste. And I can A/B test between two different ones, even of the same brand, to see which chemical is the culprit.
Anyway, to shorten its answer, it's triclosan, sodium lauryl sulfate, sodium hydroxide, or possibly some of the flavorings.
To be honest, I've never felt like I needed additional tooling to make learning faster or easier. You're inherently rate-limited by the bandwidth of your own brain, so it mostly comes down to finding information in the first place. It sounds like others haven't known where to start in the past, but I guess I'm lucky never to have had that problem. I'm drowning in more information than I can take in already. My bookshelves have always been and will always be full of books I intend to get to eventually. All the other things taking up mental time and energy (spending time with my wife, trying to have a family, making and eating food, sleep, exercise) are things software can't help me with. The only time-consuming activity I can potentially offload is paid work, if I can find some other way to pay the bills.
It lowered the bar for being curious. I used to Google all sorts of questions that popped into my mind. For a while I even had a list called "things I don't understand".
Then Google got worse and I started to resent having to refine my query multiple times and sorting through junk results.
Now I ask ChatGPT and get a straightforward answer. I am aware that it's sometimes wrong, but as an average of many shallow introductions, it's excellent.
I've tried using ChatGPT to answer programming questions where I need a library reference and/or an example. I find that ChatGPT isn't better than looking up the library reference, but sometimes it's faster to have it generate the example that I'd otherwise have to look through several pages to find.
I also spend too long clarifying what I mean.
For example, I wanted a Rust program to detach itself into the background, and ChatGPT (with my stupid prompting) kept suggesting I just run `std::process::Command::new("program")`, but I wanted a single executable to detach! Eventually, once I struck the right chord, it suggested the `daemonize` crate. But that was only after I'd already found it by conventional search.
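For anyone landing here from a search: the `daemonize` crate implements the classic Unix double fork. A minimal sketch of that same pattern, written in Python here for brevity; this is the standard recipe, not the crate's actual code:

    import os
    import sys

    def daemonize():
        # Classic Unix double fork: detach from the terminal for good.
        if os.fork() > 0:
            sys.exit(0)        # parent returns to the shell
        os.setsid()            # new session, no controlling TTY
        if os.fork() > 0:
            sys.exit(0)        # session leader exits; child can't reacquire a TTY
        os.chdir("/")          # don't pin whatever directory we started in
        devnull = os.open(os.devnull, os.O_RDWR)
        for fd in (0, 1, 2):   # silence stdin/stdout/stderr
            os.dup2(devnull, fd)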
I sometimes use the Kagi !fgpt pattern if I know that what I'm searching for has a good average answer. It'll give that answer, skipping the blinking ads, cookie pop-ups, and newsletter popups, and do the scrolling on my behalf.
I'm looking forward to having an offline AI assistant that'll search and accumulate, rather than hallucinate answers from a bunch of stolen code snippets, which is akin to "copy-pasting from StackOverflow, but with hallucinations."
Thanks to ChatGPT, I'm much less hesitant to delve (haha) into unfamiliar topics. "Hi! I'm a beginner programmer. I'm interested in learning Idris but I know next to nothing about dependent types. Could you explain them in a couple of sentences?"
Then, after the answer, I ask follow-up questions. I also try to check the answers against other sources, e.g. docs or Wikipedia in order to spot hallucinations.
Yes... AI helps transform learning content to make it easier to consume in ways that are effective for learning. But it can't help you with things higher up Bloom's hierarchy (like synthesis), and indeed using ChatGPT might harm you there: it prevents you from feeling the severe pain of bad/inconsistent writing.
Having learned both before and after ChatGPT, I'd say the workload of students has remained the same, but some of it can obviously be done with AI, and most students do this, or have at least tried it.
I believe that, as a result, the efficacy boost has been offset by the lower amount of time people spend studying, just like with most studying technologies through history. For every student who uses AI to learn, there are 2-3 who prefer just to use it to cheat. But it works brilliantly for every type of student for basic explanations, rundowns of historical authors or positions, etc. This is pretty much just Wikipedia content rearranged to your learning level, though. It's helpful, but not augmenting.
For university, it hasn't changed much. Ideally I would want a tool where I could input all the slides, worksheets, and scripts for a module and then get a nice summary, mock exams, ... Nothing can do this, which is why I wrote my own tool for it. It works all right, but it isn't 100% accurate and misses bits of information. So I am mostly back to compiling my own summaries and learning from old exams.
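A rough sketch of what the core of such a tool can look like, assuming the OpenAI Python client; the file name, model name, and prompt are placeholders, not my actual tool:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Placeholder input: text extracted from a module's slides and worksheets.
    notes = open("module_notes.txt").read()

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": "You are a study assistant."},
            {"role": "user", "content": "Summarize these course notes, then write "
             "five mock exam questions with answers:\n\n" + notes},
        ],
    )
    print(response.choices[0].message.content)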
As a side project I am currently building a drone myself on a really tight budget. While I am pretty good on the coding side, my understanding of electronics is basically non-existent. So when I ask basic questions it's quite helpful; as soon as I give it specifications ("Will this brushless motor and this ESC work with a 4S LiPo?"), it breaks down completely. So it's helpful, but far from perfect.
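The kind of sanity check it keeps fumbling is mostly arithmetic. A sketch with made-up numbers; check real datasheets, this is not sizing advice:

    # All numbers are hypothetical examples, not a real parts list.
    cells = 4                   # 4S LiPo
    v_nominal = 3.7 * cells     # ~14.8 V nominal pack voltage
    motor_max_cells = 4         # from the motor datasheet
    motor_max_current = 25.0    # amps at full throttle, from the motor datasheet
    esc_rating = 30.0           # amps continuous, from the ESC datasheet

    # The ESC should comfortably exceed the motor's peak draw,
    # and the pack voltage must be within the motor's rating.
    ok = cells <= motor_max_cells and esc_rating >= motor_max_current * 1.2
    print("looks compatible" if ok else "undersized ESC or too many cells")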
Llama helped me become more comfortable with Lagrangian mechanics. I had to keep correcting its math errors along the way, but it was nonetheless very good at answering my questions -- like having a patient and empathetic but very absent-minded professor at my fingertips. And because I was always on my toes and double-checking its work for mistakes, I had to learn actively with my brain switched on, instead of just reading passively.
So ironically, its flaws made it a pretty good teacher in this case.
I do have to be especially careful not to ask it leading questions, because it's so biased towards positive affirmation that it would rather lie and tell me I'm right than explain why I'm mistaken.
It's the first thing I go to when I have a "stupid" question. I also use it to learn languages and math, and to get clarification when something in my area of expertise is confusing. It's dramatically reduced my search engine use. It also enabled me to write an Anki clone that I just wouldn't have bothered with otherwise.
That said, it is actively harmful when discussing the components of Chinese characters - it hallucinates so much it's essentially unusable. I stick to traditional resources for that. I'm also reading as many scientific papers as before; there's really no substitute for that yet, and I haven't found it very good at literature searches.
Yes, it's an Anki replacement. I found adding/modifying cards programmatically was a nightmare. I also wanted to add new features, like ephemeral cards that disappear after a few reviews (the use case was translation of ChatGPT-generated sentences: knowing myself, I'd just unconsciously memorize the answer instead of actually going through the task of translation, so I wanted to automatically get rid of those cards after they'd outlived their useful lifespan). And in general I didn't want to be constrained by someone else's software.
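The retirement rule itself is trivial. A sketch with hypothetical field names, not my actual code:

    # Ephemeral cards retire themselves after a few reviews.
    MAX_EPHEMERAL_REVIEWS = 3

    def after_review(card: dict) -> None:
        card["reviews"] += 1
        if card["ephemeral"] and card["reviews"] >= MAX_EPHEMERAL_REVIEWS:
            card["suspended"] = True  # it has outlived its useful lifespan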
Yeah, my current approach for programmatic Anki notes is genanki plus using a stable ID as the first field of cards to allow for updates, as sketched below. Great point about ephemeral sentences -- something for me to think about creating an auto-suspension add-on for (I've seen the same issue with memorizing structure).
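A minimal sketch of that approach; the deck/model IDs and field values are just examples:

    import genanki

    class StableNote(genanki.Note):
        @property
        def guid(self):
            # Note identity comes from the first field only, so editing the
            # other fields updates the existing note on re-import.
            return genanki.guid_for(self.fields[0])

    MODEL = genanki.Model(
        1607392319, 'Simple',  # arbitrary but fixed model ID
        fields=[{'name': 'ID'}, {'name': 'Front'}, {'name': 'Back'}],
        templates=[{'name': 'Card 1', 'qfmt': '{{Front}}',
                    'afmt': '{{FrontSide}}<hr id="answer">{{Back}}'}],
    )

    deck = genanki.Deck(2059400110, 'Example')  # arbitrary but fixed deck ID
    deck.add_note(StableNote(model=MODEL, fields=['word-0001', 'kissa', 'cat']))
    genanki.Package(deck).write_to_file('example.apkg')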
What did you use for scheduling? I've seen ebisu [0] used before but found it difficult to grok. I can say that as I've used FSRS [1] within Anki, I've started to like its decisions a lot.
I think the main thing that keeps me from rolling my own Anki is mobile support. I had an Android app once, and having to keep up to date with all the changing app store requirements was annoying as hell. Eventually they took down the app over some compliance thing.
For scheduling, I just guessed at the weights: the interval to the next review increases geometrically, at a different rate for each difficulty rating. I'll probably run into ease-hell issues, but it hasn't happened yet. I tuned the weights by intuition over the course of a few days until I arrived at something similar to what I had in Anki.
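In code it amounts to a multiplier per rating. A tiny sketch with made-up rates; the real weights were hand-tuned:

    # Hypothetical per-rating growth factors, tuned by intuition.
    RATE = {"again": 0.5, "hard": 1.2, "good": 2.0, "easy": 3.0}

    def next_interval(days: float, rating: str) -> float:
        # The interval grows (or shrinks) geometrically with each review.
        return max(1.0, days * RATE[rating])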
Mobile support was a concern of mine as well, but I just took the easy way out and made a simple SPA in JavaScript, then futzed with the CSS until it looked good on my phone (also mostly ChatGPT). One explicit design principle was that I will never support anyone but myself, which simplified things greatly. The SPA talks to a little CRUD app that uses a SQLite database to store my cards and activity.
I really like to learn by example so learning SwiftUI via ChatGPT has been brilliant.
I find tutorials often have some kind of weird additional thing in them I don’t care about. Like they’re making a list app but can’t help overcomplicating it with other stuff, like adding in images or videos or parsing XML, when I just want to learn something specific.
ChatGPT has been awesome for this. You can get simple examples and look through them. Ask questions about the code. Try it out. Change things. If it doesn’t work how you expect, paste it back in with a question.
It’s made learning a new language so much easier and probably 5x faster. I’ve started doing the same kind of thing with learning Spanish too.
Between the hallucinations and the over-the-top agreeableness, I have a hard time finding LLMs useful as a reliable learning aid. First you have to disentangle what output is factual from what it's just making up, and then you have to be careful not to feed it loaded questions, lest you unwittingly lead it on such that it's just agreeing with whatever you say. The amount of work required to get satisfactory results might be better spent doing research in more traditional ways.
All that said, I'm very excited for the future and look forward to these problems being solved, as I believe they eventually will be.
I use it a ton to generate example sentences I can then import into Anki to practice vocabulary in a foreign language, specifically Finnish.
I've even been working on a tiny open source wrapper around the OpenAI API specifically to speed this process up, based on what I've learned works from experience: https://github.com/hiAndrewQuinn/wordmeat
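The core loop for this kind of tool can be tiny. A sketch of the general idea using the OpenAI Python client; the prompt, model name, and output format here are illustrative, see the repo for the real thing:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def example_sentence(word: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{
                "role": "user",
                "content": f"Write one simple Finnish sentence using '{word}', "
                           "then its English translation, separated by ' / '.",
            }],
        )
        return resp.choices[0].message.content.strip()

    # Emit tab-separated lines that Anki's text importer understands.
    for word in ["kissa", "talo", "oppia"]:
        print(f"{word}\t{example_sentence(word)}")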
I use OpenAI and Anthropic tools for brainstorming/rubber ducking/sanity checks.
I know that the LLM-provided specifics are almost never good enough for a final answer. However, LLMs can get me thinking outside of my own personal box.
Basically, this has replaced what I once got out of being surrounded by human peers. However, I was reticent to bother humans, and I have no such reservations about asking a chatbot a dumb question.
My son used the "custom GPTs" in ChatGPT a lot, e.g., for Organic Chemistry and Advanced Biology. He said they saved him a ton of time, as they explained basic concepts when he had trouble understanding the textbook (which is written in the most Orwellian manner possible). Before, he googled LibreTexts and Khan Academy etc., but the custom GPTs summarize that information nicely.
As someone focused on grad school, I find myself much less frequently getting stumped by problems and rage-quitting. I also use some prompts that help speed up my learning in general while making sure the LLM doesn't just give me the answer.
One of my research interests is how humans use expert systems (akin to how Go players' Elo ratings ramped up significantly after the release of AlphaGo).
I think it helps expedite learning/research by quite a bit. Before this, I had to append "site:reddit.com" to my Google searches and spend hours reading multiple posts/articles about the topic.
Now, I do my learning/research with something like Phind or Perplexity. I have a shortcut, "!pl" or "!p", set up in my address bar. I just have to type "!pl food recommendation for keto diet", for example, into my address bar and it will summarize everything for me. After that, all I have to do is read and click a few more links deeper into Reddit or whatever to verify that what the LLM tells me matches its citations; then I can gauge whether the answers are credible. I just ask some follow-ups if I have any. The results have been quite satisfactory so far.
I used to ask questions on various /r/Ask subreddits. For example, AskHistorians and AskPhilosophy.
Just as ChatGPT became more available, these subreddits decided to make their posting policies unnecessarily strict. The "answering culture" has also become more hostile, with people downvoting questions that don't fit into the monoculture of Reddit's hivemind.
And so I have found ChatGPT to be very useful for asking about philosophical and historical questions, specifically asking for resources on a particular topic/problem. E.g., "Has any philosopher written about XYZ topic?" It will sometimes give me imaginary resources, but usually it'll recommend actual books written on the subject.
I feel like all the low-tier questions I'd have are solved with ChatGPT. These questions would also be solvable by simply re-reading documentation, but it's nice to be able to ask another "person" to explain it to you. Maybe my attention span is just non-existent.
Massively changed my learning in development. Before, I sat in front of my PC and did not know where to start. Understanding the inner workings is also much easier now. I learned Next.js in just a few weeks.
Last year, when I was studying for an AWS certification, I used ChatGPT a lot to explain services and concepts to me with useful analogies and practical use cases.