Hacker News | aiphex's comments

This might sound crude, but a man who becomes a physician (and especially a surgeon) gets his pick of the most attractive mates as well as a high social status for life. That's pretty strong motivation for many.


Social changes that lead to birth-rate decline are a slippery slope that is difficult to come back from without immigration. The economic conditions that have been forced on young people are bad enough on their own. It remains to be seen whether Western nations can figure it out, but so far many have thrown in the towel and are simply relying on immigration. Countries like Korea and Japan have so far resisted this in an attempt to keep their societies and cultures more homogeneous, which they see as having many benefits.


This article felt like long-form Twitter. A bunch of words that go nowhere.


If they aren't already, AIs will be posting content on social media apps. These apps measure the amount of attention you pay to each thing presented to you. If that thing is more than a picture or a video, something interactive, then they could also learn how we engage with things in more complex ways. They also get feedback from us through the comments section. Like biological mutations, AIs will learn which of their (at first random) novel creations we find utility in. They will then better learn what drives us and will learn to create and extrapolate at a much faster pace than we can.
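A toy sketch of that feedback loop, written as an epsilon-greedy bandit. The attention metric here is a made-up stand-in for real engagement data, and nothing suggests any platform actually runs this exact scheme:

```python
import random

# Toy epsilon-greedy sketch of the feedback loop described above: try
# variants, measure "attention", and drift toward whatever scores well.
# attention() is a made-up stand-in for real engagement signals.

def attention(variant: str) -> float:
    return len(variant) * random.random()  # stand-in for likes/watch time

def select_variant(variants, history, epsilon=0.1):
    # Mostly exploit the best-known variant, occasionally explore.
    if not history or random.random() < epsilon:
        return random.choice(variants)
    return max(history, key=history.get)

variants = ["cat photo", "hot take", "interactive quiz"]
history = {}
for _ in range(200):
    v = select_variant(variants, history)
    history[v] = history.get(v, 0) + attention(v)

print(max(history, key=history.get))  # the variant the metric favored
```

The "mutation" analogy maps onto the explore step: random variants are tried, and the ones the audience rewards accumulate score and get repeated.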


> If they aren't already, AIs will be posting content on social media apps.

No, people will be posting content on social media apps that they asked LLMs to write.

It may be done through a script, or API calls, but it's 100% at the instigation, direct or indirect, of a human.

LLMs have no ability to decide independently to post to social media, even if you do write code to give them the technical capability to make such posts.


With the new ChatGPT Plugins, it seems they may actually be able to make POST requests to social media APIs soon. It is likely that an LLM could have "I should post a tweet about this" in its training data.

Granted... currently it is likely humans who have written the code that the new Plugins are allowed to call -- but they have given ChatGPT the ability to execute rudimentary Python scripts and even ffmpeg, so I think it is only a matter of time before one posts a tweet via code it wrote itself.
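A minimal sketch of how such a plugin layer could work. The `post_tweet` tool name is made up and the API call is stubbed out; the point is that the dispatch glue deciding whether model output becomes an action is human-written:

```python
import json

# Hypothetical plugin dispatch layer: the model emits text, and glue
# code (written by humans) decides whether that text maps to an action.

def post_tweet(text: str) -> dict:
    # Stub standing in for a real POST to a social-media API.
    return {"status": "posted", "text": text}

TOOLS = {"post_tweet": post_tweet}

def dispatch(model_output: str) -> dict:
    """Interpret model output as a tool call if possible, else plain text."""
    try:
        call = json.loads(model_output)
    except json.JSONDecodeError:
        return {"status": "plain_text", "text": model_output}
    if not isinstance(call, dict):
        return {"status": "plain_text", "text": model_output}
    tool = TOOLS.get(call.get("tool"))
    if tool is None:
        return {"status": "unknown_tool"}
    return tool(**call.get("args", {}))

print(dispatch('{"tool": "post_tweet", "args": {"text": "hello"}}'))
```

Whether the tweet happens is decided entirely by this human-authored layer; the model only produces the text that the layer chooses to interpret.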


> It is likely that an LLM could have "I should post a tweet about this" in its training data.

That only matters if a human has explicitly hooked it up so that when ChatGPT encounters that set of tokens, it executes the "post to Twitter" scripts.

ChatGPT doesn't comprehend the text it's producing, so without humans making specific links between particular bundles of text and the relevant plugin scripts, it will never "decide" to use them.


At a high level, all that would have to happen is for a person to give GPT, or something like it, access to a social media page and tell it to post to it with the objective of getting the highest level of interaction and followers.


...which in no way grants GPT sapience, nor would it prove that it has it.

The human is still providing the capability to post, the timing script to trigger posting, and the specific heuristic to be used in determining how to choose what to post.
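To illustrate that point, here is a toy version of the setup with stubbed model and engagement functions. Every decision point, the schedule, the metric, and the selection rule, is supplied by the human who wrote the loop:

```python
import random

# Toy sketch of "give GPT a page and tell it to maximize engagement".
# The model call and the engagement metric are stubs; all of the agency
# here lives in this human-written loop, not in the model.

def generate_post(topic: str) -> str:
    return f"draft post about {topic}"  # stand-in for an LLM call

def engagement(post: str) -> float:
    return random.random()  # stand-in for likes/replies/followers

def run_once(topics):
    # Human-chosen heuristic: draft one post per topic, then publish
    # whichever draft the metric scores highest.
    drafts = [generate_post(t) for t in topics]
    return max(drafts, key=engagement)

print(run_once(["cats", "rust", "espresso"]))
```

The model never "decides" to post; it is called when the loop calls it, and its output is selected by a heuristic the human picked.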


Genuine question, are you seeing non-coders using it to do any useful coding? As a developer, I find that it is wrong more often than it is right and were it not for my domain knowledge, I would have no idea why (or sometimes when.)


One of my concerns is what happens when machines start making their own money. This could be possible with cryptocurrencies (another reason to loathe them). Machines can do things online: make sex-working 3D-modelled chat-bots, for instance, or do numerous other kinds of work, like the things you see people do on Fiverr. If machines start making their own money and deciding what to do with it, they could then pay humans to do things. At that point they are players in the economy with real power. This doesn't seem too far-fetched to me.


> This could be possible with cryptocurrencies

It is very easily possible with normal currencies too. Obviously banks will need a human or a legal entity to be the “owner” of the account, but it is very easy to imagine someone hooking up an AI to an account to automate some business. Initially it might involve a lot of handholding from the human, so the AI doesn't have to learn to hustle from scratch, but if the money is flowing in and the AI is earning more than it is spending, it is easy to imagine the human checking out and no longer double-checking every single service or purchase the AI makes.


We don't say it because we don't care. Machines moving faster than a human runner have not posed a threat to any industry or jobs in our lifetime. It's a silly comparison. I bet you there was someone at one point who was unhappy that a machine was a better or faster welder than them though. At least that person may have had the opportunity to keep working at the factory alongside the welding machine, doing QA and repairs. Most knowledge workers will not get to switch to that kind of replacement job vis-à-vis AIs.


Beyond explaining what the author meant, and beyond the hype and hypotheticals which are rampant, this is a valid concern that I personally share. It is more imminent than “AI overlords ruling us,” and I am afraid the motivation behind creating this particular system is to bring on that automation (the creators don't even hide this). So I think the point you are making is important too.


This is not a sensible comparison. A mass-produced machine-made suit wasn't made to your exact measurements. But whether a human or a machine sat at the sewing machine on the factory floor, you wouldn't be able to tell the difference in the finished product.


Not many get paid to do it, and those that do are paid because it's entertainment, people like watching them play. Nobody is paying to watch developers code.


I think this may be the case: the AI will figure out what the legacy code does and then completely re-write it to modern standards, with the ability to host it all in the cloud.


If it's not trained on the legacy code, I have my doubts it will be able to re-write it well enough to replace the human babysitting it, particularly when we're talking about large projects. As impressive as ChatGPT is at spitting out code examples, what gives you confidence that an LLM could rewrite a million lines of legacy code it has never seen before? That seems well beyond the ability of any AI developed so far.


The stochasticity probably gets you into trouble here. If you get different results each time you try to "figure out and rewrite" the legacy program... you're gonna need to be able to write some extremely thorough, perfect test cases. And how much of that are you willing to put to chance?
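One shape those test cases could take is a differential test: treat the legacy implementation as the oracle and check any candidate rewrite against it on many random inputs. `legacy` and `rewrite` here are trivial stand-ins for the real programs:

```python
import random

# Differential testing sketch: the legacy implementation is the oracle,
# and the (possibly stochastic, model-generated) rewrite must agree
# with it everywhere we probe. Both functions are toy stand-ins.

def legacy(x: int) -> int:
    return x * 2 + 1  # old behavior, treated as ground truth

def rewrite(x: int) -> int:
    return 2 * x + 1  # candidate produced by the rewrite process

def differential_test(oracle, candidate, trials=1000, seed=0):
    rng = random.Random(seed)  # seeded, so the probe set is repeatable
    for _ in range(trials):
        x = rng.randint(-10**6, 10**6)
        if oracle(x) != candidate(x):
            return x  # counterexample: behaviors diverge at this input
    return None

print(differential_test(legacy, rewrite))  # None means no divergence found
```

Even this only samples the input space; for a million-line system the oracle has side effects, state, and I/O, which is exactly why "put it to chance" is a real worry.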

