What if PTSD therapy focused on accepting things you can't control and sitting with the pain? That's how I work through anxiety and depression. I know it will never be gone; I don't set the expectation of living without anxiety, I just try to sit with it and accept it.
Much of dealing with mental trauma is about acknowledging it and learning to live with it. There is no cure for PTSD; even ketamine is short-acting rather than a long-term solution, and indeed ketamine simply helps you sit with the suffering in a different light.
But there are treatments. Last I read, exposure therapy and EMDR were the two main ones. I don't think I'd be a big fan of exposure until the reactions have been significantly reduced, but everyone is different. EMDR didn't do much for me, but Internal Family Systems did. CBT is also great for some people.
LLMs aren't trading off anything. It's not like they make a decision based on anything other than what they are guided to do in training or in the system prompt.
It's like saying Reddit trades off one comment for another: sure, an algorithm they wrote does that.
This article seems to allude to the idea that there is a ghost in the machine, and while there is a lot of emergent behavior rather than hard-coded algorithms, it's not like the LLM has an opinion or some sort of psychology- or personality-based values.
They could change the system prompt, bias some training, and have completely different outcomes.
Hardly. They are burning money with TikSlop; they don't even know how to monetize it, they just YOLO'd the product to keep investors interested.
Even the porn industry can't seem to monetize AI, so I doubt OpenAI, which knows jack shit about this space, will be able to.
Fact is generative AI is stupidly expensive to run, and I can't see mass adoption at subscription prices that actually allow them to break even.
I'm sure folks have seen the commentary on the cost of all this infrastructure. How can an LLM business model possibly pay for a nuclear power station, let alone the ongoing overheads of the rest of the infrastructure? The whole thing just seems like total fantasy.
I don't even think they believe they are going to reach AGI, and even if they did, and if companies did start hiring AI agents instead of humans, then what? If consumers are out of work, who the hell is going to keep the economy going?
I just don't understand how smart people think this is going to work out at all.
> I just don't understand how smart people think this is going to work out at all.
The previous couple of crops of smart people grew up in a world that could still easily be improved, and they set about doing just that. The current crop of smart people grew up in a world with a very large number of people and they want a bigger slice of it. There are only a couple of solutions to that and it's pretty clear to me which way they've picked.
They don't need to 'keep the economy running' for that much longer to get their way.
> I just don't understand how smart people think this is going to work out at all.
That's the thing, they aren't looking at the big picture or the long term. They are looking to get a slice of the pie after seeing companies like Tesla and Uber milk the market for billions. In a market where everything from shelter to food is blowing up in cost, people struggle to provide for themselves or have a life similar to their parents'.
How can you take the market for billions when you are investing hundreds and hundreds of billions? Amazon overtook Walmart and dominates cloud computing; they have a solid business model, and I doubt even a business that size could pay down that outlay. Are we really saying that by some miracle OpenAI or Anthropic are going to find a use case that would make places like Amazon and Apple look like relatively small businesses?
> Are we really saying that by some miracle OpenAI or Anthropic are going to find a use case that would make places like Amazon and Apple look like relatively small businesses?
I thought the replacement of all desk jobs was supposed to be that joking-not-joking use case.
“Many men of course became extremely rich, but this was perfectly natural and nothing to be ashamed of because no one was really poor – at least no one worth speaking of.”
It can only be "not as bad as you think" if the people currently at the top don't continue to hoard all the gains.
If the current system is maintained—the one where if you don't work, you don't earn money, and thus you can't pay for food, shelter, clothing, etc—then it doesn't matter how abundant our stuff is; most people won't have any access to it.
In order for society to reap the benefits of post-scarcity, we must destroy the idea that the people at the top of the corporate pyramid deserve astronomically more money than the people actually doing the work.
The API could be used for non-AI use cases if you wanted to, but it’s built to be integrated with an LLM through tool calling. We provide an MCP (Model Context Protocol) server for integration with Claude, Cursor, Windsurf, etc.
You might have noticed that ChatGPT (and others) will sometimes run Python code to do calculations. My understanding is that this will enable the same thing in other environments, like Cursor, Continue, or aider.
Also, those code interpreters usually can't make external network requests, so having that ability adds a lot of capability, like pulling some data and then analyzing it (roughly along the lines of the sketch below).
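For anyone curious what that wiring looks like, here is a minimal sketch of the tool-calling side in Python. It assumes a generic JSON-schema-style tool definition; the names (`fetch_json`, `handle_tool_call`) are hypothetical and not the actual API of this product or its MCP server. The idea is simply that the model emits a tool call, the host executes it (here, an external fetch), and the text result is fed back for the model to analyze.

```python
import json
import urllib.request

# Hypothetical tool schema in the JSON-schema style most LLM tool-calling APIs
# expect; the names ("fetch_json", "url") are made up for illustration.
FETCH_TOOL = {
    "name": "fetch_json",
    "description": "Fetch a JSON document from a URL so the model can analyze it.",
    "parameters": {
        "type": "object",
        "properties": {
            "url": {"type": "string", "description": "HTTP(S) URL to fetch"},
        },
        "required": ["url"],
    },
}


def handle_tool_call(name: str, arguments: str) -> str:
    """Dispatch a tool call emitted by the model and return the result as text,
    which then goes back into the conversation for the model to work with."""
    args = json.loads(arguments)
    if name == "fetch_json":
        with urllib.request.urlopen(args["url"]) as resp:
            return resp.read().decode("utf-8")
    raise ValueError(f"Unknown tool: {name}")
```

Roughly speaking, an MCP server packages this same pattern behind a standard protocol, so clients like Claude, Cursor, or Windsurf can discover and call the tools without custom glue code.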
This isn't about whether LLMs are useful; it's about how useful they can become. We are trying to understand whether there is a path forward to transformative tech, or whether we are just limited to a very useful tool.
It's a valid conversation after ~3 years of anticipating that the world would be disrupted by this tech. So far it has not delivered.
Wikipedia did not change the world either; it's just a great tool that I use all the time.
As for software, it performs OK. I give up on it most of the time if I am trying to write a whole application. You have to acquire new skills: prompt engineering and feverish iteration. It's a frustrating game of whack-a-mole, and I find it quicker to write the code myself and just have the LLM help me with architecture ideas and bug bashing; it's also quite good at writing tests.
I'd rather know the code intimately so I can more quickly debug it than have an LLM write it and just trust it did it well.
I was so stupid when GPT-3 came out. I knew so little about token prediction that I argued with folks on here that it was capable of so many things that I now understand just aren't compatible with the tech.
Over the past couple of years I've educated myself a bit, and whilst I am no expert, I have been anticipating a dead end. You can throw as much training at these things as you like, but all you'll get is more of the same with diminishing returns. Indeed, in some research the quality of responses gets worse as you train with more data.
I have yet to see anything transformative out of LLMs other than demos that prompt engineers worked night and day to make impressive. Those Sora videos took forever to put together and cost huge amounts of compute. No one is going to make a whole production-quality movie with an LLM and disrupt Hollywood.
I agree, an LLM is like an idiot savant, and whilst it's fantastic for everyone to have access to a savant, it doesn't change the world like the internet or the internal combustion engine did.
OpenAI is heading toward some difficult decisions: either admit their consumer business model is dead and compete with Amazon for API business (good luck), become a research lab (and give up on being a billion-dollar company), or get acquired and move on.
"AIs are a lot less risky to deploy for businesses than humans"
How do you know? LLMs can't even be properly scrutinized, while humans at least follow common psychology and patterns we've understood for thousands of years. This actually makes humans more predictable and manageable than you might think.
The wild part is that LLMs understand us way better than we understand them. The jump from GPT-3 to GPT-4 even surprised the engineers who built it. That should raise some red flags about how "predictable" these systems really are.
Think about it - we can't actually verify what these models are capable of or if they're being truthful, while they have this massive knowledge base about human behavior and psychology. That's a pretty concerning power imbalance. What looks like lower risk on the surface might be hiding much deeper uncertainties that we can't even detect, let alone control.
We are not pitted against AI in these match-ups. Instead, all humans and AIs aligned with the goal of improving the human condition are pitted against rogue AIs which are not. Our capability to keep rogue AI in check therefore grows in proportion to the capabilities of AI.
The methods we have for aligning AIs are poor, and rely on the AIs being less cognitively capable than people in certain critical skills, so the AIs you refer to as "aligned" won't keep up as the unaligned AIs start to exceed human capability in these critical skills (such as the skill of devising plans that can withstand determined opposition).
You can reply that AI researchers are smart and want to survive, so they are likely to invent alignment techniques that are better than the (deplorably inadequate) techniques that have been discussed and published so far. I will reply that counting on their inventing these techniques in time is an unacceptable risk when the survival of humanity is at stake, particularly as the outfit with the most years of experience looking for an actually adequate alignment technique (namely the Machine Intelligence Research Institute) has given up and declared that humanity's only chance is for frontier AI research to be shut down: at the rate AI capabilities are progressing, it is very unlikely that anyone will devise an adequate alignment technique in time.
It is fucked-up that frontier AI research has not been banned already.
Given we can use AIs to align AIs, I don't see why the methods we have rely on us having more cognitive capabilities than AIs in certain critical areas. In whatever areas we fall short relative to AIs, we can use AIs to assist us so we don't fall short.
We don't know if a supreme deceiver is aligned at all. If a model can think ahead a trillion moves of deception, how do humans possibly stand a chance of scrutinizing anything with any confidence?
The GP post is about how much better these AIs will be than humans once they reach a given skill level. So, yes, we are very much pitted against AI unless there are major socioeconomic changes. I don't think we are as close to AGI as a lot of people are hyping, but at some point it would be a direct challenge to human employment. And we should think about it before that happens.
My point is, it's not us alone. We will have aligned AI helping us.
As for employment, automation makes people more productive. It doesn't reduce the number of earning opportunities that exist. Quite the opposite, actually. As the amount of production increases relative to the human population, per capita GDP and income increase as well.
The divergence between the two matters a lot. It reflects the impacts of both technology-driven automation and globalization of capital. Generative AI is unlike any prior technology given its ability to autonomously create and perform what has traditionally been referred to as "knowledge work". Absent more aggressive redistribution, AI will accelerate the divergence between median income and GDP, and realistically AI can't be stopped.
Powerful new technologies can reduce the number and quality of earning opportunities that exist, and have throughout history. Often they create new and better opportunities, but that is not a guarantee.
> We will have aligned AI helping us.
Who is the "us" that aligned AI is helping? Workers? Small business-people? Shareholders in companies that have the capital to build competitive generative AI? Perhaps on this forum those two groups overlap, but it's not the case everywhere.
Much of the supposed decoupling between productivity growth and wage growth is a result of different standards of inflation being used for the two, and the two standards diverging over time:
There has been some increase in capital's share of income, but economic analyses show that the cause is rising rent and not any of the other usual suspects (e.g. tax cuts, IP law, technological disruption, regulatory barriers to competition, corporate consolidation, etc) (see Figure 3):
As for AI's effect on employment: it is no different at the fundamental level than any other form of automation. It will increase wages in proportion to the boost it provides to productivity.
Whatever it is that only humans can do, and is necessary in production, will always be the limiting factor in production levels. As new processes are opened up to automation, production will increase until all available human labor is occupied in its new role. And given the growing scarcity of human labor relative to the goods/services produced, wages (purchasing power, i.e. real wages) will increase.
For the typical human to be incapable of earning income, there has to be no unautomatable activity that a typical person can do that has market value. If that were to happen, we would have human-like AI, and we would have much bigger things to worry about than unemployment.
I think it's pretty unlikely that human-like AI will be developed, as I believe that both governments and companies would recognize that it would be an extremely dangerous asset for any party to attempt to own. Thus I don't see any economic incentive emerging to produce it.
> There has been some increase in capital's share of income, but economic analyses show that the cause is rising rent and not any of the other usual suspects (e.g. tax cuts, IP law, technological disruption, regulatory barriers to competition, corporate consolidation, etc) (see Figure 3):
The paper referenced by that article excludes short-term asset (i.e., software) depreciation, interest, and dividends before calculating capital's share. If you ignore most of the methods of distributing gains to capital's owners, it will appear as though capital (at this point scoped down to the company itself) sees very little in gains.
The paper (from 2015) goes on to predict that labor's share will rise going forward. With the brief exception of the COVID redistribution programs, it has done the opposite, and trended downwards over the last 10 years.
> I believe that both governments and companies would recognize that it would be an extremely dangerous asset for any party to attempt to own.
We can debate endlessly about our predictions of AI's impact on employment, but the above is where I think you might be too hopeful.
AI is an arms race. No other arms race in human history has resulted in any party deciding "that's enough, we'd be better off without this", from the bronze age (probably earlier) through to the nuclear weapons age. I don't see a reason for AI to be treated any differently.
The study does not exclude interest and dividends. It still captures them indirectly by looking at net capital income.
>AI is an arms race.
What I'm trying to convey is that the types of capabilities humans will always uniquely maintain are the kind that is not profitable for private companies to develop in AI, because they are traits that make the AI independent and less likely to follow instructions and act in a safe manner.
This is an assumption: how would you know if you have alignment? AGI could appear to be aligned, just as a psychopath studies and emulates well-behaved people. Imagine that at a scale we can't possibly understand. We don't really know how any of these emergent behaviors work; we just throw more data, compute, and fine-tuning at it, bake it, and then see.
We would know because we have AI helping us at every step of the way. Our own abilities, to do everything including gauge alignment, are enhanced by AI.
So now you have two AIs colluding against you. Who is holding the AI assistant to account? It's like asking who polices the police, except that we understand human psychology well enough to govern police with some predictability. We don't know any such truths about an AGI, because an AGI will always carry the doubt that it is deceiving us, or making unchecked catastrophic assumptions that we trust because it's beyond our pay grade to understand.
There are so many ways we have misplaced confidence in what is essentially a system we don't fully understand. We just keep anthropomorphizing the results and thinking "yeah, this is how humans think, so we understand it". We don't know for sure if that's true, or if we are being deceived, or if we are making fundamental errors in judgement due to not having enough data.
The AIs would have no interest in colluding. They are not a united economic or social force like a police department. For the purposes of their work, each is a completely independent entity with its own level of alignment with us, not influenced by the AI that we are asking it to help us assess.
> Instead, all humans and AI aligned with the goal of improving the human condition
I admire your optimism about the goals of all humans, but evidence tends to point to this not being the goal of all (or even most) humans, much less the people who control the AIs.
Most humans are aligned with this goal out of pure self-interest. The vast majority, for instance, do not want rogue AI to take over or destroy humanity, because they are part of humanity.
> The vast majority, for instance, do not want rogue AI to take over or destroy humanity, because they are part of humanity.
A rogue AI destroying humanity (whatever that means) is not a likely outcome. That's just movie stuff.
What is more likely is a modern oligarchy and serfdom that emerge as AI devalues most labor, with no commensurate redistribution of power and resources to the masses, due to capture of government by owners of AI and hence capital.
I thought we were talking about state-of-the-art agentic general AI that can plan ahead, reason, and execute. Basically, something that can perform at human-level intelligence must be capable of being as dangerous as a human. And no, I don't think it would be bad training data that we are aware of. My opinion is that we don't necessarily know what training data will result in bad behavior, and philosophically it is possible we will end up with a model that pretends it's dumber than it is and flunks tests intentionally, in order to manipulate us and produce false confidence in the model, until it has enough freedom to use its agency to secure itself from human control.
I know that I don't know a lot, but all of this sounds to me to be at least hypothetically possible if we really believe AGI is possible.