Huh. I'd never thought of this. If that is actually meaningfully beneficial, I wonder if they'd design self-driving cars with the seats facing backwards, given there's no longer a necessity to look at the road.
(edit: I guess it's more of no-brainer on a train/bus where you don't have a seat belt)
Not the author, but I think there was some research showing that, if you have head support, it's indeed better for you to be facing backwards rather than the front. It prevents a whole range of injuries, from your neck to becoming a projectile yourself.
But it's really theoretical, and does not account for the passenger in front of you flying head-first into your throat.
PS: I laughed hard that xlbuttplug2 is replying to deadbabe. The internet lives!
Consider the "booth seats" in trains and buses, where people can chat while facing each other. If you've got a Waymo with your friends, why wouldn't you want the seats facing each other so you can be social, this safety factor aside?
Sitting backwards is beneficial if you only look at accidents.
But sitting backwards is very very uncomfortable if there is any kind of uneven acceleration, bumps, swaying, rolling, curvy tracks or whatever. Humans need to look forward at the horizon to get their visual stimuli aligned with their motion/balance sense in the inner ear. If that alignment isn't there, you will get seasick. Facing backwards makes this even worse.
Babies don't suffer from this, because closing your eyes helps, and infants don't have as strong a reaction to motion anyway, since they're usually carried by their parents until walking age. So rear-facing baby seats only work for babies.
That's a serious overgeneralization. It's true for some people, but trains mostly don't bump and swerve enough for that to be a significant problem. Finnish trains have lots of seats facing backwards, and while they're nowhere near as fast as something like a TGV, they're still often going 200+ km/h. People seem to be just fine. I just spent 1 hour 40 minutes yesterday sitting backwards, mostly reading a book, with no ill effects.
Infant car seats face backwards, and the recommendation is to keep them rear-facing for as long as possible (until the kid is too big to fit comfortably in a backwards-facing position).
It's incredibly beneficial. However, many people dislike it and want to face the direction they're moving in, so the best case is probably a train-style 4-seater, with 2 seats facing forward and 2 backwards.
I would have no problem with that. I wouldn't call the AI an artist, though; it wouldn't have the sentient knowledge to be an artist. It would be art made by a machine. In fact, we have several examples of that already, and lots of them are really fun and appreciated out there. This new one just happens to be quite a bit more complex and eerie to digest at first.
> They can and most likely will release something that vaporises the thin moat you have built around their product.
As they should if they're doing most of the heavy lifting.
And it's not just LLM-adjacent startups at risk. LLMs have enabled any random person with a Claude Code subscription to pole-vault over your drying-up moat over the course of a weekend.
LLMs by their very nature subsume software products (and services). LLM vendors are actually quite restrained - the models are close to being able to destroy the entire software industry (and I believe they will, eventually). However, at the moment, it's much more convenient to let the status quo continue, and just milk the entire industry via paid APIs and subscriptions, rather than compete with it across the board. Not to mention, there are laws that would kick in at this point.
I think the function of a company is to address the limitations of a single human by distributing a task across different people, stabilized with some bureaucracy. However, if we can train models past human scale, to corporation scale, there might be large efficiency gains when the entire corporation can function literally as a single organism instead of coordinating separate entities. I think the impact of this phase of AI will be really big.
> the models are close to being able to destroy the entire software industry
Are you saying this based on some insider knowledge of models being dramatically more capable internally, yet deliberately nerfed in their commercialized versions? Because I use the publicly available paid SOTA models every day and I certainly do not get the sense that their impact on the software industry is being restrained by deliberate choice but rather as a consequence of the limitations of the technology...
I don't mean the companies are hoarding more powerful models (competition prevents that) - just that the existing models already make it too easy for individuals and companies to build and maintain ad-hoc, problem-specific versions of many commercial software services they now pay for. This is why people ask why the AI companies themselves haven't done this to a good chunk of the software world. One hypothesis is that they're all gathering data from everyone using LLMs to power their business, in order to do just that. My alternative hypothesis is that they could already start burning through the industry, competing with whole classes of existing products and services, but they purposefully don't, because charging rent from the existing players is more profitable than outcompeting them.
It’s something normal people understand - everyone who uses a desktop/laptop computer will have rearranged an icon. If they read this it will likely trigger some thoughts about what it could do for them.
It is still not clear to me. The periodicity of their orbit around the tree is the same. I think this is an instance of us meaning different things by “go around”.
Say instead of just walking, the man was laying down a net/barricade around the tree. As soon as the man completes the circumference, the squirrel must admit that it has been gone around.
Now let us suppose the squirrel is at the same distance as the man.
Has the man gone around the squirrel, and the squirrel around the man?
If it only counts when one radius is smaller than the other, where is the limit?
To get it I think I have to re-frame it like this:
If you hold out an object toward the centre, you clearly go around it when completing an orbit.
If you keep extending your arm toward the origin and then beyond it, so your arm is longer than the radius, then you still go around the object, until your arm reaches twice the radius.
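For what it's worth, one way to make "go around" precise is the winding number: how many full turns one position sweeps as seen from the other over a lap. Here's a minimal sketch of that idea (my own framing, not from the thread): it assumes the squirrel stays diametrically opposite the man behind the tree, picks arbitrary radii, and winding_turns is just an illustrative helper name.

    import cmath
    import math

    def winding_turns(r_man, r_squirrel, steps=1000):
        """Full signed turns the man's position sweeps as seen from the squirrel over one lap."""
        total = 0.0
        prev = None
        for k in range(steps + 1):
            theta = 2 * math.pi * k / steps
            man = r_man * cmath.exp(1j * theta)                        # man orbiting the tree at the origin
            squirrel = r_squirrel * cmath.exp(1j * (theta + math.pi))  # squirrel kept on the opposite side
            rel = man - squirrel                                       # man's position as seen from the squirrel
            if prev is not None:
                total += cmath.phase(rel / prev)                       # small signed angle change this step
            prev = rel
        return total / (2 * math.pi)

    print(winding_turns(3.0, 3.0))  # ~1.0: equal radii, each still winds once around the other
    print(winding_turns(3.0, 0.5))  # ~1.0: the radii don't change the answer

By that definition there is no limiting radius - the relative vector sweeps exactly one full turn no matter which of the two is closer to the tree - so the disagreement really does come down to which meaning of "go around" you pick.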
I have very fond memories of playing the Pokemon games, but I always saw the battling aspect as a hurdle to unlocking more of the story/world, which was the true appeal to me. I was content turning my brain off and overleveling my mons. Different strokes, I guess.
I have the exact opposite prediction. LLMs may end up writing most code, but humans will still want to review what's being written. We should instead be making life easier for humans with more verbose syntax since it's all the same to an LLM. Information dense code is fun to write but not so much to read.
Even lay-er person here, but maybe the specificity is not that impressive in mice? Perhaps when you scale to more complex animals, it's inevitable that you'll see false positives (detrimental effects on healthy cells)?
Yes, you will be vulnerable should you lose access to AI at some point, but the same goes for a limb. You will adapt.