If Little Blue Truck isn’t on your regular list, you should add it. We also loved Pond by Jim LaMarche (older audience, but beautifully illustrated and told), and the Boynton books are fun to read aloud, especially Barnyard Dance and The Going to Bed Book. There are more, but these are the highlights from that age range.
I think more than you’d think? I like to believe that pretty much any career can have moments of “I’m proud to be part of this organization” and moments of “I can’t be part of this anymore.”
We’re not special in that regard. Our challenge lies in the sheer breadth of options available to us; but even that’s not unique: nonprofit managers, janitors, HR professionals, and lawyers can also work with a breathtaking array of companies.
Really, the only folks who don’t have that issue to the same extent are tradespeople: carpenters, electricians, plumbers; but even they can say no to a job for a person or company they don’t want to support.
Someone figured out how to create a glider that starts and ends as a long string of cells on a single line. Gliders are patterns in the Game of Life that move themselves across the grid by cycling through a repeating sequence of shapes. For more Game of Life/glider context you can read the pretty decent Wikipedia articles:
As noted by others, the title is mistaken; this is a spaceship, not a glider. (As explained in the Wikipedia article, "glider" refers to a specific 5-cell pattern discovered very early on.)
> So finally 2/133076755768 ship of starting bounding box 3707300605x1 is here
My understanding is that 2/133076755768 is the speed, in (number of cells translated) / (number of generations to repeat).
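To make that concrete, here's a minimal Python sketch applying the same definition to the classic glider (nothing here is from the article, just the textbook B3/S23 rules and the well-known c/4 glider):

    from collections import Counter

    def step(cells):
        # One Game of Life generation (B3/S23); cells is a set of live (x, y) pairs.
        counts = Counter((x + dx, y + dy) for (x, y) in cells
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}  # the classic 5-cell glider
    cells = set(glider)
    for _ in range(4):  # the glider repeats every 4 generations
        cells = step(cells)

    # The same shape reappears shifted one cell diagonally: 1 cell translated
    # per 4 generations, i.e. speed c/4. By the same measure, the ship in the
    # article moves 2 cells every 133,076,755,768 generations.
    assert cells == {(x + 1, y + 1) for (x, y) in glider}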
What is significant about making such a peculiar shape?
I find it difficult to believe that making a recurrent structure that translates across the grid (my lay description of what a glider does) requires a preposterously long structure like this,
so my guess is: is the excitement that someone made something extremely long, and there is some kind of race to build bigger and bigger structures with this behavior, crudely akin to the race to compute digits of Pi?
Or is it rather that no one has described a structure which "glides" with this preposterous number of cycles... which I would guess is coupled to the size?
Or is it rather that no one has described a 1D structure which "glides" at all...?
I would think that if what's desired is to find novel larger-scale structures, the best approach today would be to just fuzz noise of all kinds in large windows, let them iterate, and put the energy into an ML model that evaluates the evolution of the world to categorize the results...
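Roughly the kind of loop I mean (a crude sketch: the ML evaluator is swapped out for a naive repeated-state check, and real soup searches such as apgsearch are far more sophisticated than this):

    import random
    from collections import Counter

    def step(cells):
        # One B3/S23 generation over a set of live (x, y) cells.
        counts = Counter((x + dx, y + dy) for (x, y) in cells
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

    def normalized(cells):
        # Translate the pattern so its bounding box starts at (0, 0), so a
        # pattern that has merely moved (a spaceship) hashes like its old self.
        if not cells:
            return frozenset()
        mx = min(x for x, _ in cells)
        my = min(y for _, y in cells)
        return frozenset((x - mx, y - my) for x, y in cells)

    def classify_soup(size=16, density=0.35, max_gens=1000):
        # "Fuzz noise in a window": seed a random soup, then iterate until
        # the whole field repeats (up to translation) or we give up.
        cells = {(x, y) for x in range(size) for y in range(size)
                 if random.random() < density}
        seen = {}
        for gen in range(max_gens):
            key = normalized(cells)
            if key in seen:
                return f"settled: repeats every {gen - seen[key]} generation(s)"
            seen[key] = gen
            cells = step(cells)
        return "still changing after max_gens (e.g. gliders escaping)"

    for _ in range(5):
        print(classify_soup())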
Class actions like this are opt-in; by accepting the settlement, you accepted the terms and lost your right to sue for a different (more appropriate to you) value.
Planet Money did a great segment on how these work and why America is set up this way. I learned a lot about it. You should definitely take a listen[1]. If you aren’t on Apple, then search “What to do when you’re in a class action?” and find the podcast (not the summary article).
Alternate between using the left and right Alt keys. The ergonomist's rule of thumb (no pun intended) is to use both halves of the keyboard. So if pressing Alt-x, use the right Alt button, etc.
I had RSI issues early in my career and this advice alone really helped. Never got the Emacs pinky/thumb. I recently switched to macOS and that is giving me thumb issues from overuse of the Meta key. I now consciously have to force myself to use a different finger when pressing Meta.
Always remember: you have five fingers - no need to keep using the same one or two fingers to press Ctrl or Alt. It will take time to get your brain used to using other fingers for this purpose.
Oh, and yes: Definitely got lots of ergonomic pains due to mouse use. In fact, I changed my career from "regular" engineering to SW engineering partially to avoid having to use a mouse (e.g. CAD SW). And every ergonomist you'll meet will tell you "Memorize keyboard shortcuts and avoid the mouse as much as possible."
As the sibling comment put it, that’s when I look into ergonomic accessories.
My primary mouse is a trackball, because I have pain in my arm (elbow and shoulder) when I use a regular one on a desk.
Maybe I will get a split keyboard in the future. But I did get a mechanical one because of the key travel. And I touch type, so I spend less time on the keyboard itself.
Tricky… my son had a really rare congenital issue that no one could solve for a long time. After it was diagnosed, I walked an older version of ChatGPT through our experience and it suggested my son’s issue as a possibility, along with the correct diagnostic tool, in just one back and forth.
I’m not saying we should be getting AI advice without a professional, but in my case it could have saved my kid a LOT of physical pain.
> After it was diagnosed, I walked an older version of ChatGPT through our experience and it suggested my son’s issue as a possibility, along with the correct diagnostic tool, in just one back and forth.
Something I've noticed is that it's much easier to lead the LLM to the answer when you know where you want to go (even when the answer is factually wrong!). It doesn't have to be obvious leading; just framing the question in terms of mentioning all the symptoms you now know to be relevant, in the order that's diagnosable, etc.
Not saying that's the case here, and you might have gotten the correct answer first try - but checking my now diagnosed gastritis, I got everything from GERD to CRC depending on which symptoms I decided to stress and which events I emphasized in the history.
In January my daughter had a pretty scary stomach issue that had us in the ER twice in 24 hours and that ended in surgery (just fine now).
The first ER doc thought it was just a stomach ache, the second thought a stomach ache or maybe appendicitis. Did some ultrasounds, meds, etc. Got sent home with a pat on the head, came back a few hours later, still no answers.
I gave her medical history and all of the data from the ER visits to whatever the current version of ChatGPT was at the time to make sure I wasn’t failing to ask any important questions. I’m not an AI True Believer (tm), but it was clear that the doctors were missing something and I had hit the limit of my Googling abilities.
ChatGPT suggested, among a few other diagnoses, a rare intestinal birth defect that affects about 2% of the population; 2% of affected people become symptomatic during their lifetimes. I kind of filed it away and looked more into the other stuff.
They decided it might be appendicitis and went to operate. When the surgeon called to tell me that it was in fact this very rare condition, she was pretty surprised when I said I’d heard of it.
So, not a one-shot, and not a novel discovery or anything, but an anecdote where I couldn’t have subconsciously guided it to the answer as I didn’t know the answer myself.
Indeed, it is very easy to lead the LLM to the answer, often without realizing you are doing so.
I had a long ongoing discussion about possible alternate career paths with ChatGPT in several threads. At that point it was well aware of my education and skills, had helped clean up resumes, knew my goals, experience and all that.
So I said maybe I'll look at doing X. "Now you are thinking clearly! This is a really good fit for your skill set! If you want I can provide a checklist." I'm just tossing around ideas, but look, GPT says I can do this and it's a good fit!
After 3 idea pivots I started getting a little suspicious. So I tried to think of the thing I am least qualified to do in the world and came up with "Design Women's Dresses". I wrote up all the reasons that might be a good pivot (e.g. past experience with landscape design, and it's the same idea: you reveal certain elements seductively but not all at once, matching color palettes, textures, etc.). Of course GPT says "Now you are really thinking clearly! You could 100% do this! If you want I can start making a list of what you will need to produce your first custom dresses". It was funny but also a bit alarming.
These tools are great. Don't take them too seriously; you can make them say a lot of things with great conviction. It's mostly just you talking to yourself, in my opinion.
> checking my now diagnosed gastritis, I got everything from GERD to CRC depending on which symptoms I decided to stress and which events I emphasized in the history
Exactly the same behavior as a conversation with an idealized human doctor, perhaps. One who isn't tired, bored, stressed, biased, or just overworked.
The value that folks get from chatgpt for medical advice is due in large part to the unhurried pace of the interaction. Didn't get it quite right? No doctor huffing and tapping their keyboard impatiently. Just refine the prompt and try as many times as you like.
For the 80s HNers out there: when I hear people talk about talking with ChatGPT, Kate Bush's song Deeper Understanding comes immediately to mind.
ChatGPT and similar tools hallucinate and can mislead you.
Human doctors, on the other hand ... can be tired, hungover, thinking about a complicated case ahead of them, nauseous from a bad lunch, undergoing a divorce, alcoholics, depressed...
Yeah, my son's issue was rare and congenital. I wish I still had the conversation, but I can't remember which LLM it was and it's not in either my Claude or GPT history. It got it in two shots.
1. I described the symptoms the same way we described them to the ER the first time we brought him in. It suggested all the same things that the ER tested for.
2. I gave it the lab results for each of the suggestions it made (since the ER had in fact done all the tests it suggested).
After that back and forth it gave back a list of 3-4 more possibilities and the 2nd item was the exact issue that was revealed by radiology (and corrected with surgery).
> Something I've noticed is that it's much easier to lead the LLM to the answer when you know where you want to go
This goes both ways, too. It’s becoming common to see cases where people become convinced they have a condition but doctors and/or tests disagree. They can become progressively better and better at getting ChatGPT to return the diagnosis by refining their prompts and learning what to tell it as well as what to leave out.
Previously we joked about WebMD convincing people they had conditions they did not, but ChatGPT is far more powerful for these people.
Why are you saying we shouldn't get AI advice without a "professional", then? Why is everybody here saying "in my experience it's just as good or better, but we need rules to make people use the worse option"? I have narcolepsy and it took a dozen doctors before they got it right. AI nails the diagnosis. Everybody should be using it.
Experience working with doctors a few times, and then we'll see all the bias, if one is still surviving lol. Doctors are among the most corrupt professions, more focused on selling drugs they get paid a commission to promote, or obsessing over tons and tons of expensive medical tests that they themselves often know are not needed, yet they order them anyway, simply out of fear of courts suing them for negligence in the future or because, again, THEY GET A COMMISSION from the testing agencies for sending them clients.
And even with all of that info, they often come to the wrong conclusions at times. Doctors play a critically important role in our society, and during covid they risked their lives for us more than anyone else. I do not want to insult or bring down the amount of hard work doctors do for society.
But worshipping them as holier-than-thou gods is bullshit, a conclusion that almost anyone who has spent years going back and forth with various doctors will come to.
Having an AI assistant doesn't hurt in terms of medical hints. We need to make Personal Responsibility popular again; in society's obsession with making everything "idiot proof" or "baby proof", we keep losing all sorts of useful and interesting solutions, because our politicians have a strong itch to regulate anything and everything they can get their hands on, to leave a mark on society.
And you’d be right, so society should let people use AI while warning them about all the risks related to it, without banning it or hiding it behind 10,000 lawsuits and making it disappear by coercion.
I wonder if the reason AI is better at these diagnostics is that the amount of time it spends with the patient is unbounded, whereas a doctor is always restricted by the amount of time they have with the patient.
I don't think we can say it's "better" based on a bunch of anecdotes, especially when they're coming exclusively from people who are more intelligent, educated, and AI-literate than most of the population. But it is true that doctors are far more rushed than they used to be, disallowed from providing the attentiveness they'd like or ought to give to each patient. And knowledge and skill vary across doctors.
It's an imperfect situation for sure, but I'd like to see more data.
Aside from AI skepticism, I think a lot of it likely comes from low expectations of what the broader population would get out of it. Writing, reading comprehension, critical thinking, and LLM-fu may be skills that come naturally to many of us, but at the same time many others who "do their own research" also fall into rabbit holes and arrive at wacky conclusions like flat-Eartherism.
I don't agree with the idea that "we need rules to make people use the worse option" — contrary to prevailing political opinion, I believe people should be free to make their own mistakes — but I wouldn't necessarily rush to advocate that everyone start using current-gen AI for important research either. It's easy to imagine that an average user might lead the AI toward a preconceived false conclusion or latch onto one particular low-probability possibility presented by the AI, badger it into affirming a specific answer while grinding down its context window, and then accept that answer uncritically while unknowingly neglecting or exacerbating a serious medical or legal issue.
I’m saying that it is a great tool for people who can see through the idiotic nonsense these models so often make up. A professional _has_ the context to see through it.
It should empower and enable informed decisions, not make them.
That's the experience of a lot of people I know, or whose stories I've read online. But it isn't about AI giving bad diagnoses; it's that they know in 5 years doctors and lawyers will be burger flippers, and as a result people won't be motivated to go into any of these fields. In Canada, the process to become a doctor is extremely complicated and hard, only to keep it as some sort of private community that only the very few can join, all to keep the wages artificially high; as a result, you end up waiting a long time for appointments, and the doctors themselves are overwhelmed too. A messed-up system that you'd better pray you never become a victim of.
In my opinion, AI should do both legal and medical work, with some humans kept for decision making, and the rest of the doctors becoming surgeons instead.
This is fresh news, right? A friend just used ChatGPT for medical advice last week (stuffed his wound with antibiotics after a motorbike crash). Are you saying you completely treated the congenital issue in this timeframe?
He’s simply saying that ChatGPT was able to point them in the right direction after 1 chat exchange, compared to doctors who couldn’t for a long time.
Edit: Not saying this is the case for the person above, but one thing that might bias these observations is ChatGPT’s memory features.
If you have a chat about the condition after it’s diagnosed, you can’t use the same ChatGPT account to test whether it could have diagnosed the same thing (since the ChatGPT account now knows the son has a specific condition).
The memory features are awesome but also suck at the same time. I feel myself getting stuck in a personalized bubble, even more so than with Google.
You can just use the wipe-memory feature, or if you don't trust that, start a new account (new login creds); if you don't trust that, then get a new device, cell provider/wifi, credit card, IP, login creds, etc.
> He’s simply saying that ChatGPT was able to point them in the right direction after 1 chat exchange, compared to doctors who couldn’t for a long time
He literally wrote that. I asked how he knows it's the right direction.
It must be that the treatment worked. Otherwise it is more or less just a hunch.
people go "oh yep that's definitely it" too easily. it is the problem with self diagnosing. And you didn't even notice it happened...
We had the diagnosis before I started with the LLM. The radiology showed the problem and the surgeries worked. Down to an ultrasound a year now!
We took him to our local ER; they ran tests. I gave the LLM just the symptom list that I gave to the ER initially. It replied with all the things they tested for. I gave it the test results, and it suggested a short list with diagnostics that included his actual Dx and the correct way to test for it.
That's great that it worked so quickly too (and that you could arrange surgery on such short notice). I'm sure more people with the same issue might benefit from more details?
Apparently this news is just that it's "not allowed" by the ToS, but ChatGPT will still give advice, so it doesn't really matter. I thought the new model already denied advice, since you said you used an older model, but I guess that was unrelated to this news.
By the way, unless you used an anonymous mode, I wonder how much the model knew from side channels that could have contributed to suggesting the correct diagnosis...
6 years to master a syllabic alphabet seems like a stretch... They seem to be conflating learning the language and learning the writing system.
I studied Greek and Hebrew in college, and Latin in high school. In each, the very first night's homework was to memorize the characters and their pronunciation.
Multiple ANE cultures used cuneiform (Ugaritic, Akkadian, Sumerian, Hittite, and so on). The time to master each depends on your native language, the target language, and exposure to similar languages. The writing system is not the hard part.
It's true that learning an alphabet shouldn't take as long as learning the entire language. However, there's still a difference with cuneiform:
All of the examples you mentioned are derivatives of the Phoenician alphabet, which have around 20 to 30 characters each. Even with case distinctions and diacritics, I think they still add up to under a hundred characters.
Cuneiform character sets are on the order of several hundred or even thousands of characters, depending on the language[1], so I imagine that the experience is closer to learning to read Chinese or Japanese and less like Hebrew or Greek.
That being said, I've never tried to learn either cuneiform or hanzi, so I'm just guessing based on the number of characters.
Another quirk is that much of it was highly contextual. E.g., depending on context, the same numerals might indicate N, N x 60, N x 60^2, etc.
Cuneiform was also used over such a vast period of time that significant evolution took place. E.g., as numbers and mathematics evolved, there were sometimes different symbols for the same numbers depending on what was being counted. Scribes often had to learn several sets of numerals and when it was appropriate to use each of them.
The modern reader needs to learn not only the languages but also the contexts, and be aware of how the script evolved over time.
Cuneiform generally evolved to become simpler and less contextual as time went by, but there remained a lot of characters to learn by the common era. The Phoenician alphabet was a huge step forward precisely because it was much simpler and easier to learn. Shaving years off of the learning process turns literacy into a common skill that many can obtain, rather than a select few whose families can afford to send them to a school for many years.
> Another quirk is that much of it was highly contextual. E.g., depending on context, the same numerals might indicate N, N x 60, N x 60^2, etc.
You mean, like "1" can mean one, ten, one hundred, one thousand, ... or one-tenth, one-hundredth... depending on the context of other digits and an optional decimal?
Yeah, I guess it could take months to learn the ten digits of our current system.
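To put the two systems side by side, here's a toy sketch (plain base-60 arithmetic; the actual scribal conventions are simplified away). The difference is that our digits come with a zero and a decimal point that fix the magnitude, while early Babylonian sexagesimal had neither, so the same written digits could legitimately be read at different scales:

    def sexagesimal_value(digits, lead_power):
        # Interpret base-60 digits given an assumed place value of
        # 60 ** lead_power for the leading digit; in early cuneiform,
        # nothing in the written form pinned lead_power down.
        return sum(d * 60 ** (lead_power - i) for i, d in enumerate(digits))

    digits = [1, 30]                      # the sequence "1, 30" in sexagesimal
    print(sexagesimal_value(digits, 1))   # 90    = 1*60 + 30
    print(sexagesimal_value(digits, 2))   # 5400  = 1*3600 + 30*60
    print(sexagesimal_value(digits, 0))   # 1.5   = 1 + 30/60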
In addition, Akkadian used cuneiform not only for phonemic writing but also borrowed many signs as-is from Sumerian as logograms (Sumerograms), e.g. for the words for sheep and king. So in that sense it is indeed very similar to Japanese.
Egyptian did that too (and that writing system developed independently as far as we know) -- it's interesting that multiple ancient cultures did that (and, as you mention, modern Japanese isn't that different).
The writing system is not syllabic. It uses ideograms, logograms, and, yes, syllabic signs all together. It's also not one singular "writing system," as the set of symbols varied and changed with the context it was used in (language or region) and over the two thousand years that people wrote in this style.