Hacker News | new | past | comments | ask | show | jobs | submit | icapybara's comments | login

FYI it would be "Icing on the cake" or "cherry on top"

It was the year of Claude Code


Interesting that for all the hype, all the benchmarks - none of these 3 demos are anything close to Counter Strike.


Anyone know how Gemini CLI with this model compares to Codex and Claude Code?


Doesn't mean it's safe.


Yeah, I would probably delete this updater if I were to try this: https://github.com/ramensoftware/windhawk/blob/main/src/wind...


as opposed to any other updater on your system...?

> Tech Enthusiasts: Everything in my house is wired to the Internet of Things! I control it all from my smartphone! My smart-house is bluetooth enabled and I can give it voice commands via alexa! I love the future!

> Programmers / Engineers: The most recent piece of technology I own is a printer from 2004 and I keep a loaded gun ready to shoot it if it ever makes an unexpected noise.

https://imgur.com/6wbgy2L (actually a tweet from someone else, but apparently it's private now)


It's actually not completely outside of my threat profile.

Honestly, with the prevalence of ransomware attacks, unless you're a literal hermit, it shouldn't be out of anyone's threat profile.


Absolutely. Sufficiently capable LLMs can mass-produce exploits against whole ecosystems; a recent Anthropic post moves the risk needle from 'theoretical' to 'realized'. Any auto-updating software runs the risk of its CDN and/or build forge being compromised. Scary times.


This is not an updater. Due to the sensitive nature of Windhawk, it has no auto-updating mechanism, only update notifications (this file is part of that).


I didn't say it was. But having the source means you (and others) can vet the code if that's a concern.


If they can’t be bothered to write it, why should I be bothered to read it?


I'm sure lots of "readers" of such articles fed it to another AI model to summarize it, thereby completely bypassing the usual human experience of writing and then careful (and critical) reading and parsing of the article text. I weep for the future.

Also, reminds me of this cartoon from March 2023. [0]

[0] https://marketoonist.com/2023/03/ai-written-ai-read.html


Are people doing this or is this just what, like, Apple or someone is telling us people are doing?

Because I've never seen anyone actually use a summarizing AI willingly. And especially not for blogs and other discretionary activities.

That's like getting the remote from the hit blockbuster "Click" starring Adam Sandler (2006) and then using it to skip sex. Just doesn't make any sense.


I'm curious if the people who are using AI to summarize articles are the same people who would have actually read more than the headline to begin with. It feels to me like the sort of person who would have read the article and applied critical thinking to it is not going to use an AI summary to bypass that since they won't be satisfied with it.


> If they can’t be bothered to write it, why should I be bothered to read it?

Isn't that the same with AI-generated source code? If lazy programmers didn't bother writing it, why should I bother reading it? I'll ask the AI to understand it and to make the necessary changes. Now, let's repeat this process over and over. I wonder what the state of such code would be over time. We are clearly walking this path.


Why would source code be considered the same as a blog post?


I didn't say the source code is the same as a blog post. I pointed out that we are going to apply the "I don't bother" approach to the source code as well.

Programming languages were originally invented for humans to write and read. Computers don't need them. They are fine with machine code. If we eliminate humans from the coding process, the code could become something that is not targeted for humans. And machines will be fine with that too.


Why would I bother to run it? Why wouldn't I just have AI read it and then provide output for my input?


Many of those who can't be bothered to write what they publish probably can't be bothered to read it themselves, either. It wasn't written by humans, and apparently it isn't for humans either.


They used to say judge the message, not the messenger.

But you are saying that is wrong, you should judge the messenger, not the message.


Now that I think about it, it's rather ironic that's a quote because you didn't write it.


Because the author has something to say and needs help saying it?

Pre-AI, scientists would publish papers and then journalists would write summaries, which were usually misleading and often wrong.

An AI operating on its own would likely be no better than the journalist, but an AI supervised by the original scientist might well do a better job.


I agree, I think there is such a thing as AI overuse, but I would rather someone uses AI to form their points more succinctly than for them to write something that I can't understand.


Tired meme. If you can't be bothered to think up an original idea, why bother to post?


2+2 doesn’t suddenly become 5 just because you’re bored of 4.


If you assume that a LLM's expansion of someone's thoughts is less their thoughts than someone copy and pasting a tired meme, that exposes a pretty fundamental world view divide. I'm ok with you just hating AI stuff because it's AI, but have the guts to own your prejudice and state it openly -- you're always going to hate AI no matter how good it gets, just be clear about that. I can't stand people who try to make up pretty sounding reasons to justify their primal hatred.


I don’t hate AI, I hate liars. It’s just that so far, the former has proven itself to be of little use to anyone but the latter.


AI is newsworthy.


That's subjective. I don't consider Chromium retrofitted with ChatGPT newsworthy. Some people might. I also don't fault the commenter for being tired of the majority of content on this site being LLM-adjacent. I'm certainly over it.


I like the design of your site. Loads very fast, looks clean.


Thanks! Excellent site-wide performance is something we aim for -- hence our stack of Go + htmx!


Am I missing something? The site only shows a stream, but nothing that describes or shows the project.


That's the global timeline. If you'd like to look at a demo repository, https://tangled.sh/@tangled.sh/core is a good one (the monorepo for Tangled).


Ah I see, thanks. I only clicked on the first link assuming this was the project page.


Hard to be excited about ChatGPT Agent - Claude Code feels like the right form factor for an agent.


Why wouldn't you want an LLM for a language learning tool? Language is one of the things I would trust an LLM on completely. Have you ever seen ChatGPT make an English mistake?


Grammarly is all in on AI and recently started recommending splitting "wasn't" and attaching the contraction to the word it modifies. Example: "truly wasn't" becomes "was trulyn't"

https://imgur.com/a/RQZ2wXA


Hm ... I wonder, is Grammarly also responsible for the flood of contractions of lexical "have" over the last few years? It's standard in British English, but outside of poetry it is proscribed in almost all other dialects (which only permit contraction of auxiliary "have").

Even in British I'm not sure how widely they actually use it - do they say "I've a car" and "I haven't a car"?


"they" say "I haven't got a car".

Contractions are common in Australian English too, though becoming less so due to the influence of US English.


In my experience "I've a car" is much more common than "I haven't a car" (I've never heard the latter construct used, but regularly hear the former in casual speech). "I haven't got a car" or "I've no car" would be relatively common though.


This is what peak innovation looks like


I don't think an LLM would recommend an edit like that.

Has to be a bug in their rule-based system?
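A toy sketch of how a rule-based system could produce this exact bug (pure speculation on my part, not Grammarly's actual code): a naive regex rule that reattaches "n't" to the word next to the verb would turn "truly wasn't" into "was trulyn't".

```python
import re

def naive_contraction_shift(text: str) -> str:
    """Speculative toy rule: swap an adverb with the following negated verb,
    dragging the n't onto the adverb. Illustrates the bug, nothing more."""
    # (\w+)   - the adverb, e.g. "truly"
    # (\w+)n't - the negated verb stem, e.g. "was" from "wasn't"
    return re.sub(r"\b(\w+) (\w+)n't\b", r"\2 \1n't", text)

print(naive_contraction_shift("It truly wasn't helpful"))
# prints "It was trulyn't helpful"
```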


Gemini: "Was trulyn't" is a contraction that follows the rules of forming contractions, but it is not a widely used or accepted form in standard English. It is considered grammatically correct in a technical sense, but it's not common usage and can sound awkward or incorrect to native speakers.


I wonder how much memes like whomst'd might skew the training set.


Yeah, I agree. An open-source LLM-based grammar checker with a user interface similar to Grammarly is probably what I'm looking for. It doesn't need to be perfect (none of the options are); it just needs to help me become a better writer by pointing out issues in my text. I can ignore the false positives, and as long as it helps improve my text, I don't mind if it doesn't catch every single issue.

Using an LLM would also help make it multilingual. Both Grammarly and Harper only support English and will likely never support more than a few dozen very popular languages. LLMs could help cover a much wider range of languages.
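A minimal sketch of what the core of such a tool could look like, assuming any OpenAI-compatible chat endpoint (the actual model call is out of scope here; the prompt wording and the JSON schema with `span`/`suggestion` keys are my own invention):

```python
import json

# Prompt asking the model to report issues in a machine-readable form.
PROMPT_TEMPLATE = (
    "You are a grammar checker. List grammar and style issues in the text "
    "below as a JSON array of objects with 'span' and 'suggestion' keys. "
    "Return [] if the text is fine.\n\nText:\n{text}"
)

def build_prompt(text: str) -> str:
    return PROMPT_TEMPLATE.format(text=text)

def parse_issues(reply: str) -> list[dict]:
    """Parse the model's reply defensively: tolerate surrounding prose,
    and ignore malformed entries (LLMs don't always follow the schema)."""
    start, end = reply.find("["), reply.rfind("]")
    if start == -1 or end == -1:
        return []
    try:
        issues = json.loads(reply[start : end + 1])
    except json.JSONDecodeError:
        return []
    return [i for i in issues
            if isinstance(i, dict) and "span" in i and "suggestion" in i]
```

The defensive parsing matters more than the prompt: false positives and malformed replies can just be dropped, which matches the "I can ignore the false positives" workflow above.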


I tried to use one LLM-based tool to rewrite a sentence in a more formal corporate register, and it rewrote something like "we are having issues with xyz" into "please provide more information and I'll do my best to help".

LLMs are trained so hard to be helpful that it's really difficult to confine them to other tasks.


uh. yes? it's far from uncommon, and sometimes it's ludicrously wrong. Grammarly has been getting quite a lot of meme-content lately showing stuff like that.

it is of course mostly very good at it, but it's very far from "trustworthy", and it tends to mirror mistakes you make.


Do you have any examples? The only time I noticed an LLM make a language mistake was when using a quantized model (gemma) with my native language (so much smaller training data pool).


Not GP, but I've definitely seen cutting edge LLMs make language mistakes. The most head scratching one I've seen in the past few weeks is when Gemini Pro decided to use <em> and </em> tags to emphasize something that was not code.


Because this "language learning tool" will be dominantly used to avoid actually learning the language.

