From the "Discussion" section:

> This suggests that as companies transition to more AI code writing with human supervision, humans may not possess the necessary skills to validate and debug AI-written code if their skill formation was inhibited by using AI in the first place.

I'm reminded of "Kernighan's lever":

> Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?

AI is writing code in the cleverest way possible, which then introduces cognitive load for anyone who hasn't encountered these patterns previously. One might say that AI would also assist in the debugging, but then you run the risk of adding further complexity in the process of 'fixing' the bugs and before you know it you have a big stinking ball of mud.


> AI is writing code in the cleverest way possible …

On the contrary, without mastery guiding it, AI writes code in the most boilerplate way possible, even if that means compromising logic or functionality.

> … which then introduces cognitive load for anyone who hasn't encountered these patterns previously

And for those who have. This is the enterprise Java effect. The old trope is that Java was designed to make all devs median, all producing the same median code, so enterprises don't have to worry about individual devs; it's all the same bowl of unflavored oatmeal.

When you read code from a vibe-coding novice, it's difficult to grok the intended logic because it's buried within chunks of enterprise-pattern boilerplate, as if the solution was somehow regex'd at random from StackOverflow until some random combination happened to pass a similarly randomized bag of tests.

The cognitive load to reverse this mess into a clean, clear expression of logic is very high, whether a human or a machine "coded" this way.

In both cases, the antidote is caring for craft and mastery first, with an almost pseudocode clarity in expressing the desired outcome.

OK, but -- even this doesn't guarantee the result one wants.

Because even if the master writes the code themselves, they may find their intent was flawed. They expressed the intent clearly, but their intention wasn't helpful for the outcome needed.

This is where rapid iteration comes in.

A master of software engineering may be able to iterate on intent faster with the LLM typing the code for them than they could by typing and iterating on their own. With parallel work sessions, they may be able to explore intention space faster to reach the outcome.

Each seasonal improvement in LLM models' ability to avoid implementation errors while iterating this way makes the software developer who has mastery, but lacks perfect pre-visualization of intent, more productive: less time cleaning up novice coding errors, more cycles per hour iterating on the design in their head.

This type of productivity gain has been meaningful for this type of developer.

At the same time, the "chain of thought" or "reasoning" loops being built into the models are reaching into this intention space, covering more of the prompt-engineering space for devs with less mastery, who are unable to express, much less iterate on, intent. This lets vibe "coders" imagine their productivity is improving as well.

If the output of the vibe coder (usually a product manager, if you look closely) is considered to be something like a living mockup and not a product, then actual software engineers can take that and add the *-ilities (supportability, maintainability, etc.) that the vibe coder has never specified, whether vibing or product managing.

Using a vibed prototype can accelerate the transfer of product conception from the PM to the dev team more effectively than the PM just yelling at a dev tech lead that the devs haven't understood what the PM says the product should be. Devs can actually help this process by ensuring the product "idea" person is armed with a claude.md to orient the pattern-medianizer machine with the below-the-waterline stuff engineering teams know is 80% of the cost-through-time.
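
To make that concrete, here is a minimal sketch of what such a claude.md might contain; the headings and rules below are hypothetical, my own illustration rather than anything from this thread:

    # CLAUDE.md: engineering guardrails for prototype vibing (hypothetical example)

    ## Non-functional requirements (the *-ilities)
    - Log every external call with a request id; no silent catch blocks.
    - Keep secrets in environment variables, never in source.
    - Prefer boring, already-approved libraries over new dependencies.

    ## Boundaries
    - This is a living mockup: no real customer data, no production credentials.
    - Flag anything touching auth, payments or PII for engineering review.

The exact rules matter less than stating the below-the-waterline constraints once, up front, instead of rediscovering them at handover.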

There's not a lot of discussion of prototype vibing as a new way for product owners and engineering teams to gain clarity above the waterline, or of whether it's productive. Here's a dirty secret: it's more productive in that it's more protective of the rarer skillset's time. The vibe time wasted is paid by the product owner (hallelujah), and the eng team can start with a prototype the product owner iterated on while getting their intent sorted out, so engineering's iterations shift from intent (PM headspace) to implementation (eng headspace).

Both loops were tightened.

> you run the risk of adding further complexity in the process of 'fixing' the bugs and before you know it you have a big stinking ball of mud.

Iterating where the problem lies, by uncoupling these separate intent and implementation loops, addresses this paradox.


Otoh, tariffs as a foreign-policy / coercion method disconnected from trade and local economic impact definitely is a new thing.

Sure, the tactic might have been used as a delicate lever previously, but in its current brazen form it's just bad diplomacy.



Very confident.

The people who do the "*nix" cargo cult thing have never seen a SunOS machine and don't even know what an HPUX is.


The linked trends suggest a revival of the term though.

> The people who do the "*nix" cargo cult thing have never seen a SunOS machine and don't even know what an HPUX is.

The meaning of words evolves over time though. Text is still broken into lines by "carriage return/line feeds" and is written to "hard disks" split up into "sectors". Over time, people using these terms will not have seen a typewriter or even know what a platter is, but may still use them to communicate effectively.
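
A tiny sketch (my own illustration, not from the comment above) of how the typewriter-era term lives on in everyday string handling:

    # Python: the "carriage return/line feed" pair survives as the \r\n sequence,
    # even though nobody typing this has ever moved a typewriter carriage.
    text = "first line\r\nsecond line\r\n"
    print(text.splitlines())  # ['first line', 'second line']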


How is it that so many people who supposedly lean towards analytical thought are so bad at understanding scale?


Can't speak for Rob Pile, but my guess would be: yeah, it might seem hypocritical, but what drove this rant is a combination of seeing the slow decay of the open culture they once imagined culminate in this absolute shirking of responsibility, while simultaneously exploiting labour, by those claiming to represent that culture, along with a retrospective tinge of guilt for having enabled it.

Furthermore, w.r.t. the points you raised, it's a matter of scale and utility. Compared to everything that has come before, GenAI is spectacularly inefficient in terms of utility per unit of compute (however you might want to define those). There hasn't been a tangible net good for society that has come from it, and I doubt there will be. The eagerness and will to throw money and resources at this surpasses the crypto mania, which was just as worthless.

Even if you consider Rob a hypocrite, he isn't alone in his frustration and anger at the degradation of the promise of Open Culture.


"There hasn't been a tangible nett good for society that has come from it and I doubt there would be"

People being more productive at writing code, making music, or writing documents for whatever is not an improvement for them, and therefore for society?

Or do you claim that is all imaginary?

Or negated by the energy cost?


I claim that the new code, music or documents have not added anything significant/noteworthy/impactful to society except for the self-perpetuating lie that it would, all the while regurgitating, at high speeds, what was stolen.

And all at a significant opportunity cost (in terms of computing and investment).

If it was as life-altering as they claim, where's the novel work of art (in your examples: code, music or literature) that truly could not have been produced without GenAI and that fundamentally changed the art form?

Surely, with all that ^increased productivity^ we'd have seen the impact equivalent of linux, apache, nginx, git, redis, sqlite, ... etc. being released every couple of weeks, instead of yet another VSCode clone. /s


Buddhism in India grew in opposition to the Hindu caste system rather than out of a spiritual change of thought. The current Indian government is loudly Hindu nationalist and prefers to minimise or even dismiss the diversity of Indian religious practices, as well as pretend that the caste system is no longer present.

They and their supporters downplay Buddhist followers by pretending that the lived experiences of these Buddhists (or, in general, of non-Hindus) don't exist.


Do you have a source on how BJP downplays Buddhist followers?


> Why not just self-host competitive-enough LLM models, and do their experiments/attacks themselves, without leaking actions and methods so much?

Why assume this hasn't already happened?


Why in this instance leak your actions and methods?


Not the person you're asking, but I had similar CS issues. I still use Wise sparingly, but have also started using Revolut. Though I wouldn't trust either with more money than I can afford to budget for ^life lessons^.


In most (all?) countries, banks are regulated and customers have recourse to some sort of banking ombudsman for this sort of thing.


Not if it's AML stuff. In those cases neither the banks nor the regulators are your friends.


Small change to emphasize the intent:

> because companies and governments around the world were going to prioritise that in their purchases.

Governments are the largest revenue stream of pretty much every large software company, from IBM/Xerox to OpenAI. MS is well known to indulge in all sorts of legally grey practices to win such contracts.

