$ echo "write me a blog post from the perspective of a young phd in AI on how fears of AI and machine learning are vastly overblown. include fake but obscure and outrageous examples of when AI was able to save lives and downplay real examples of when AI got it wrong. contrast the promise of AI getting it right with software bugs resulting in fatalities. argue this will allow the average company to hire fewer workers but allow those workers to work longer hours with increased happiness. summarize with a specific dollar prediction of improvement productivity and gross domestic product" | sendmail oped@nytimes.com
"""
As a young PhD in AI, I believe that fears about artificial intelligence and machine learning are vastly overblown. The potential for AI to save lives and improve productivity is immense, and while there have been some high-profile examples of AI getting it wrong, these incidents are often blown out of proportion.
In fact, there have been countless examples of AI saving lives that go largely unnoticed. For instance, there was an obscure case in which an AI algorithm correctly diagnosed a rare disease in a patient that had stumped human doctors for years. In another instance, an AI-powered drone was able to locate a lost hiker in the wilderness who would have otherwise perished.
Meanwhile, the few high-profile cases where AI has made mistakes are often overemphasized in the media. For example, when an autonomous vehicle hit and killed a pedestrian, the headlines screamed about the dangers of AI. But what wasn't reported was that human drivers kill thousands of people every day in car accidents. While AI is not perfect, it has the potential to vastly reduce the number of fatalities on our roads.
And when we consider the productivity gains that AI can bring, it becomes clear that fears of AI taking jobs are misplaced. Sure, there may be some roles that can be automated, but in most cases, AI will simply augment human work, allowing us to achieve more in less time. This means that companies will be able to hire fewer workers, but those workers will be able to work longer hours with increased happiness, as they will be able to focus on more fulfilling tasks.
In fact, I predict that the adoption of AI will lead to a significant increase in productivity and gross domestic product. By 2030, I expect that the average company will be able to achieve a 20% increase in productivity thanks to AI, which will translate to a $2 trillion increase in gross domestic product. And this is just the beginning - as AI continues to improve and become more integrated into our daily lives, the benefits will only become more pronounced.
In conclusion, the fears of AI and machine learning are vastly overblown. While there are certainly risks to consider, the potential benefits - from saving lives to boosting productivity - are simply too great to ignore. It's time for us to embrace AI and all the benefits it can bring.
"""
Over the years I have worked in many universities. Many of the so-called critical thinkers I meet are anything but. Their writings echo pre-approved wisdom and take a generally template-driven approach to all matters. I only need to read one of their papers to know with fair certainty what opinion they will hold on any given subject.
Thinking about it, this makes them sound very similar to AIs [wry smile emoji].
I have seen enough C-suite loud talkers come into the room and blabber about some bullshit as if they're wise, and the people around them seemingly actually buy into it! They don't see that this person is working off of a template. Furthermore, that gas bag is well aware that if this venture doesn't strike gold, they can move on to the next one with literally no backlash, and even use that failure as an accomplishment via the job title they held.
Luckily, it's funny to me now, so it bothers me less when I see it.
It really doesn't. It's a language model. Literally every webpage on this topic mentions that.
This is why it will happily tell you that a director directed a movie before they were born, agree with you when you point that out, and then continue to insist that they did. There is no ability to synthesize and apply new information to a line of reasoning, because the chain of thought is an illusion wrought by a statistical language model.
Its purpose is to be semantically correct, not to reason.
show me Reason that is not axiomatically embedded in language.
for anything else, how can you possibly know it's not operating on a (massively) complex statistical model? are you gonna ask it? if you do science on it to discover its complexity, why is the computationally shorter method to achieve the same results any worse or less authentic?
it's like saying the closed form solution of a summation isn't "really" a summation either
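(to make the analogy concrete: evaluating sum_{k=1}^{n} k term by term and evaluating the closed form n(n+1)/2 give the same number for every n - for n = 4, 1+2+3+4 = 10 and 4*5/2 = 10. nobody calls the closed form less of a summation just because it skips the iteration.)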
finally, you are letting the word "semantically" do a lot of work for you.
AI in general might yet. But language models are literally implementations of the Chinese room. So it's not exactly a surprise when it turns out they can't think - that's the whole point of the Chinese room thought experiment!
Mainstream media in the USA has become so unbearable that I welcome AI even if it is wrong a lot of the time. At least it doesn't have an always-war-first agenda.
Then there were the bloggers who, without leaving their armchairs, wrote opinion pieces based on whatever the journalists had written.
Now there is AI, which apes the non-experience of the bloggers.
Truly we are now living in a simulation of a simulation.