Cyber-warfare capabilities on this level seem pretty horrific. What if you could simply turn off the power grid of Kyiv or Moscow in anticipation of a strike? That seems extremely disorienting. What if you could simply turn off the power grid indefinitely?
Russia attacks the Ukrainian power grid on a weekly basis, not only with cyber-attacks but with actual bombs. Over Christmas, 750k homes in Kyiv were without power or heating. This is not a hypothetical; it's daily reality for millions of people in Ukraine.
Something like this more or less happened during the initial Israeli strike on Iran?
From what I remember reading, they were able to gain air dominance not because Iranian air defense was bad, but because it was put almost completely out of service for a brief period by people on the ground, whether through sabotage, cyber-warfare, or drone attacks launched from inside the country, allowing the Israeli jets to annihilate them.
> not because Iranian air defense was bad, but because it was put almost completely out of service for a brief period by people on the ground, whether through sabotage, cyber-warfare, or drone attacks launched from inside the country
Wouldn't that constitute the air defense being "bad"? There is no "well, technically it should have worked" in war. Failing to properly secure the air defense sites is bad air defense.
Not really. A Ferrari is a great car, but with punctured tires or a bad driver, it won't win any races.
Although I do agree that in war only the final outcome matters. It's just that in this case it failed not necessarily because of the technology, but because of the humans operating it.
A Ferrari with punctured tires isn't a great car; it can't drive. It's an immobile, useless hunk of metal with a great engine and transmission, similar to disabled air defense systems: really expensive, useless hunks of metal.
> The US Navy used sea-launched Tomahawk missiles with Kit-2 warheads, involving reels of carbon fibers, in Iraq as part of Operation Desert Storm during the Gulf War in 1991, where it disabled about 85% of the electricity supply. The US Air Force used the CBU-94, dropped by F-117 Nighthawks, during the NATO bombing of Yugoslavia on 2 May 1999, where it disabled more than 70% of the national grid electricity supply.
I would not, however, take "Trump said something" as indicative of much. "It was dark, the lights of Caracas were largely turned off due to a certain expertise that we have, it was dark, and it was deadly" is both visibly untrue from the available video evidence and precisely the sort of off-the-cuff, low-fact statement he's prone to.
General Caine specifically said they utilized CYBERCOM (U.S. Cyber Command, the military's joint cyber-operations command) to pave the way for the special-ops helicopters. I personally have no doubt that at least some of the lights being out (whether or not all of them were) was due to a US hack. Some of the stuff that got blown up may well have been destroyed to prevent forensic recovery of US tools and techniques.
> “The F-35, we’re doing an upgrade, a simple upgrade,” Trump said. “But we’re also doing an F-55, I’m going to call it an F-55. And that’s going to be a substantial upgrade. But it’s going to be also with two engines.”
> Frank Kendall, the secretary of the Air Force during former President Joe Biden’s administration, said in an interview with Defense News that it is unclear what Trump was referring to when he discussed an “F-22 Super,” but it may have been a reference to the F-47 sixth-generation fighter jet… Kendall said it is also unclear what Trump was referring to when he discussed the alleged F-55.
It's hard for me to imagine how machine learning Nobel Prize laureate Geoffrey Hinton, someone who is openly warning about extinction risk from AI, is some insane crank on the topic of... machine learning.
Same goes for Turing Award winner Yoshua Bengio, and AI CEOs Dario Amodei, Sam Altman, Elon Musk, etc., all of whom have said this technology could literally murder everyone.
Either they don't believe their own bullshit or they for some reason think that this superintelligence will be loyal to them and kill all the competing gods.
People need to start having conversations about existential risk here. Hinton, a Nobel laureate for his work in AI, thinks there's a decent chance AI wipes out the entire human species. This isn't some crank idea.
Sure, like the dot-com bubble, it will burst. But like the internet, AI is here to stay. The real bottleneck is electricity supply. The CEO of Microsoft has even said they have the chips but not the electricity to power them; the chips are sitting idle.
> but it's increasingly looking like LeCun is right.
This is an absolutely crazy statement vis-à-vis reality, and the fact that it's so upvoted is an indictment of the kind of wishful thinking that has grown deep roots here.
If you pay attention to actual research and guarded benchmarks, and understand how benchmarks are being gamed, I would say there is plenty of evidence that we are approaching a clear plateau and that Karpathy's march-of-nines thesis is basically correct long-term. Short-term, it remains to be seen how much more we can do with the current tech.
Your best bet is to look closely at performance on the ARC-AGI fully-private test set (e.g. https://arcprize.org/blog/arc-prize-2025-results-analysis) and think carefully about the discrepancies there, or just to broadly read academic research on classic benchmarks and note the plateaus on classic datasets.
When you look at academic papers actually targeting problems specific to reasoning / intelligence (e.g. rotation invariance in images, adversarial robustness), it is very clear that all the big companies are doing is fitting more data and spending more resources on human raters and the like to boost performance on (open) metrics, and that whatever actual gains in genuine intelligence are being made come only from milking what we know very well to be a limited approach. That is, there are trivially basic problems that curve-fitting models cannot solve, which makes it clear that most current advances are indeed coming from curve (manifold) fitting; a toy sketch at the end of this comment illustrates the rotation-invariance point. It just isn't clear how far we can exploit the current approaches, and in which domains that kind of exploitation is more than good enough.
EDIT: Are people unaware Google Scholar is a thing? It is trivial to find modern AI papers that can be read without access to a research institution. HuggingFace, for example, collects trending papers (https://huggingface.co/papers/trending).
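To make the curve-fitting point concrete, here is a quick toy sketch (my own illustration, not from any of the papers above; it assumes you have scikit-learn and scipy installed and uses the tiny 8x8 sklearn digits set as a stand-in for a real benchmark):

    # Toy sketch: a plain curve-fitting classifier that does well on
    # upright digits degrades sharply on rotated ones, because nothing
    # in the fit encodes rotation invariance.
    import numpy as np
    from scipy.ndimage import rotate
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)

    clf = LogisticRegression(max_iter=5000).fit(X_train, y_train)
    print("upright test accuracy:", clf.score(X_test, y_test))

    # Rotate each 8x8 test image by 45 degrees and re-evaluate.
    X_rot = np.array([
        rotate(img.reshape(8, 8), angle=45, reshape=False).ravel()
        for img in X_test
    ])
    print("rotated test accuracy:", clf.score(X_rot, y_test))

Expect the rotated score to land far below the upright one (exact numbers depend on the split). A model that actually encoded the invariance, rather than just fitting the training manifold, wouldn't care about the rotation.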
At present it's only SWEs who are benefiting from a productivity standpoint. I know a lot of people in finance (from accounting to portfolio management), and they scoff at the outputs of LLMs in their day-to-day jobs.
But the bizarre thing is, even though the productivity of SWEs is increasing, I don't believe there will be many layoffs, because there isn't complete trust in LLMs; I don't see this changing either. In which case the LLM producers will need to figure out a way to increase the value of LLMs and get users to pay more.
Are SWEs really experiencing a productivity uplift? When studies attempt to measure the productivity impact of AI in software, the results I have seen are underwhelming compared to the frontier labs' marketing.
And, again, this ignores all the technical debt of produced code that is poorly understood, weakly reviewed, and of questionable quality overall.
I still think this all has serious potential for net benefit, and delivers one now in certain cases. But we need to be clearer about spelling out where that is (webshit, boilerplate, language-to-language translation, etc.) and where it maybe isn't (research code, legacy code, large codebases, niche/expert domains).
This Stanford study on developer productivity found zero correlation between developers' assessments of their own productivity and independent measures of it. Any anecdotal evidence from developers about how AI has made them more or less productive is worthless.
Yup, most progress is also confined to SWEs doing webshit / writing boilerplate code. For anything specialized, LLMs are rarely useful, and this is all ignoring the future technical debt of debugging LLM code.
I am hopeful about LLMs for SWE, but the progress is currently contextual.
Even if LLMs could write great code with no human oversight, the world would not change overnight. Human creativity is still needed to figure out what to build that yields incremental benefit over what already exists.
The humans who possess such capability stand to win long-term; said humans tend to be those from the humanities and liberal arts.
That's a very broad statement, explicitly covering all types of code and all kinds of coders. Are you really confident enough to make such an assertion?
My question was more rhetorical; the point is that it's very bold (read: foolish) to make a claim that extends well beyond any conceivable amount of experience you may have accrued, or any ability to know how others operate.
> Andrej Karpathy is one of the best engineers in the country, George Hotz is one of the best engineers in the country, etc.
You have citations of them explicitly making this claim on behalf of all SWEs in all domains/langs? I'd find that surprising, if so.