That's really interesting, but I'm wondering if this is as rational as it looks.
> we are going to be kinda of obsolete in what defined us, as a profession: the ability to write code
Is it a fact, really? I don't think "writing code" is the defining factor; maybe it's a prerequisite, just as being able to write words hardly makes someone "a novelist".
Anyway, prompt writing skills might become obsolete quite soon. So the main question might be to know which trend of technological evolution to pick and when, in order not to be considered obsolete. A crystal ball might still be more relevant than LLMs for that.
I call it "the ability to communicate intent [using a programming language]", and suddenly building with AI looks a lot more like a natural extension of what we used to do writing code by ourselves.
I don't think our profession was writing code to begin with (and this may be, uh, a bit of rewriting history?); what we do is take an idea, requirements, an end goal, and make it a reality. Often that involves writing code, but that's only one aspect of the software developer's job.
Analogy time because comment sections love analogies. A carpenter can hammer nails, screw screws, make holes, saw wood to size. If they then use machines to make that work easier, do they stop being carpenters?
It's good, if not essential, to be able to write code. It's more important to know what to write and when. The best thing to do at this point is to stop attaching one's self-worth to the ability to write code. That's like a novelist (more analogies) who prides themselves on typing at 100wpm. The Fifty Shades books proved you don't need to touch type (the first book was apparently written mostly on a BlackBerry) or be good at writing to be successful, lol.
Agreed - as I see it, it's akin to the transitions from machine code -> assembly language -> C -> Javascript. As time went by, knowing the deep internals of the machine became less and less necessary, even though having that knowledge still gives an engineer a useful insight into their work and often makes them better at their job. The goal remains the same - make the computer do the thing; only the mechanism changes as the tools evolve.
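To make the abstraction ladder concrete, here's a toy sketch (mine, not the commenter's) expressing the same goal at three levels of abstraction within one language. Each level hides more of the mechanism, but the result is identical:

```python
# Goal: sum the numbers 1..10, at three abstraction levels.

# Level 1: explicit index arithmetic, close to how assembly would do it
total_low = 0
i = 1
while i <= 10:
    total_low += i
    i += 1

# Level 2: a structured loop, the "C" of this analogy
total_mid = 0
for n in range(1, 11):
    total_mid += n

# Level 3: a single declarative call -- stating intent, not mechanism
total_high = sum(range(1, 11))

print(total_low, total_mid, total_high)  # all three are 55
```

The point isn't that one level is "real programming" and the others aren't; the goal stays fixed while the tooling absorbs more of the mechanism.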
"-> AI" is just the next step along that journey. Maybe it will end at "-> AGI" and then humans will engage in programming mostly for the craft and the pleasure of it, like other crafts that were automated away over the ages.
As a specific example of this, the U.S. 18F team helped the Forest Service a decade ago implement a requirement to help people get a permit to cut down a Christmas tree.
Although there was a software component for the backend, the thing that the actual user ended up with was a printed-out form rather than a mobile app or QR code. This was a deliberate design decision (https://greacen.com/media/guides/2019/02/12/open-forest-laun...), not due to a limitation of software.
The second video shows the head of the CNIL (~ the "regulator") mostly repeating platitudes about various topics, but nothing about age restrictions for social networks. Did I miss anything?
> [...] the Digital Markets Act (‘DMA’) obliges gatekeepers like Google to effectively allow the distribution of apps on their operating system through third party app stores or the web. At the same time, the DMA also permits Google to introduce strictly necessary and proportionate measures to ensure that third-party software apps or app stores do not endanger the integrity of the hardware or operating system or to enable end users to effectively protect security. [...]
They seem to be on it, but no surprise: it's all about Google's claims about "security" and an "ongoing dialogue" with gatekeepers.
Filling in forms is an essentially artificial activity. Forms are also very culturally biased, but that fits well with the material the NNs have been trained on.
So those IQ-style tests might be acceptable rating tools for machines, and machines might well score higher than anyone at some point.
Anyway, is the objective of this kind of research to actually measure the progress of buzzwords, or amplify them?
"AI" is too much of a broad umbrella term of competing ideas, from symbolic logic (FOL, expert systems) to statistical operations (NNs). It's clear today that the latter has won the race, but ignoring this history doesn't seem to be a very smart move.
I'm in no way an expert, but I feel that today's LLMs lack some concepts well known in the research on logical reasoning. Something like: semantics.
AI is a broad field because intelligence is a broad field.
And what's remarkable about LLMs is exactly that: they don't reason like machines. They don't use the kind of hard machine logic you see in an if-else chain. They reason using the same type of associative abstract thinking as humans do.
Surely "intelligence" is a broad field... i might not be so that great at it, but i hope that's ok.
"[LLMs] reason using the same type of associative abstract thinking as humans do": do you have a reference for this bold statement?
I entered "associative abstract thinking llm" in a good old search engine. The results point to papers rather hinting that they're not so good at it (yet?), for example: https://articles.emp0.com/abstract-reasoning-in-llms/.
I don't have a single reference that says outright "LLMs are doing the same kind of abstract thinking as humans do". Rather, this is something that's scattered across a thousand articles and evaluations - in which LLMs prove over and over again that they excel at the cognitive skills that were once exclusive to humans - or fail at them in ways that are amusingly humanlike.
But the closest thing is probably Anthropic's well-known interpretability papers, in which Anthropic finds circuits in an LLM that correspond to high-level abstractions the model can recognize and use, and traces how those circuits connect. That forms the foundation of associative abstract thinking.