as opposed to any other updater on your system...?
> Tech Enthusiasts: Everything in my house is wired to the Internet of Things! I control it all from my smartphone! My smart house is Bluetooth enabled and I can give it voice commands via Alexa! I love the future!
> Programmers / Engineers: The most recent piece of technology I own is a printer from 2004 and I keep a loaded gun ready to shoot it if it ever makes an unexpected noise.
Absolutely. Sufficiently capable LLMs can mass-produce exploits against whole ecosystems; the recent Anthropic post moves the risk needle from 'theoretical' to 'realized'. Any auto-updating software runs the risk of its CDN and/or build forge being compromised. Scary times.
This is not an updater. Due to the sensitive nature of Windhawk, it has no auto-updating mechanism, only update notifications (this file is part of that).
I'm sure lots of "readers" of such articles fed them to another AI model to summarize, thereby completely bypassing the usual human loop of writing followed by careful (and critical) reading and parsing of the article text. I weep for the future.
Also, reminds me of this cartoon from March 2023. [0]
Are people doing this or is this just what, like, Apple or someone is telling us people are doing?
Because I've never seen anyone actually use a summarizing AI willingly. And especially not for blogs and other discretionary activities.
That's like getting the remote from the hit blockbuster "Click" starring Adam Sandler (2006) and then using it to skip sex. Just doesn't make any sense.
I'm curious whether the people using AI to summarize articles are the same people who would have read more than the headline to begin with. The sort of person who would have read the article and applied critical thinking to it seems unlikely to use an AI summary to bypass that; they won't be satisfied with it.
> If they can’t be bothered to write it, why should I be bothered to read it?
Isn't that the same with AI-generated source code? If lazy programmers didn't bother writing it, why should I bother reading it? I'll ask the AI to understand it and to make the necessary changes. Now, let's repeat this process over and over. I wonder what the state of such code would be over time. We are clearly walking this path.
I didn't say the source code is the same as a blog post. I pointed out that we are going to apply the "I don't bother" approach to the source code as well.
Programming languages were originally invented for humans to write and read. Computers don't need them; they are fine with machine code. If we eliminate humans from the coding process, the code could become something no longer aimed at humans. And machines will be fine with that too.
Many of those who can't be bothered to write what they publish probably can't be bothered to read it themselves, either. Such text ends up written neither by humans nor for humans.
Because the author has something to say and needs help saying it?
Pre-AI, scientists would publish papers and then journalists would write summaries that were usually misleading and often wrong.
An AI operating on its own would likely be no better than the journalist, but an AI supervised by the original scientist might well do a better job.
I agree; I think there is such a thing as AI overuse, but I would rather someone use AI to form their points more succinctly than write something I can't understand.
If you assume that an LLM's expansion of someone's thoughts is less their own than a copied-and-pasted tired meme, that exposes a pretty fundamental worldview divide. I'm OK with you just hating AI stuff because it's AI, but have the guts to own your prejudice and state it openly -- you're always going to hate AI no matter how good it gets, just be clear about that. I can't stand people who make up pretty-sounding reasons to justify their primal hatred.
That's subjective. I don't consider Chromium retrofitted with ChatGPT newsworthy. Some people might. I also don't fault the commenter for being tired of the majority of content on this site being LLM-adjacent. I'm certainly over it.
Why wouldn't you want an LLM for a language learning tool? Language is one of things I would trust an LLM completely on. Have you ever seen ChatGPT make an English mistake?
Grammarly is all in on AI and recently started recommending splitting "wasn't" and attaching the contraction to the word it modifies. Example: "truly wasn't" becomes "was trulyn't".
Hm ... I wonder, is Grammarly also responsible for the flood of contractions of lexical "have" over the last few years? It's standard in British English, but outside of poetry it's proscribed in almost all other dialects (which permit contraction only of auxiliary "have").
Even in British English I'm not sure how widely it's actually used - do they say "I've a car" and "I haven't a car"?
In my experience "I've a car" is much more common than "I haven't a car" (I've never heard the latter construct used, but regularly hear the former in casual speech). "I haven't got a car" or "I've no car" would be relatively common though.
Gemini: "Was trulyn't" is a contraction that follows the rules of forming contractions, but it is not a widely used or accepted form in standard English. It is considered grammatically correct in a technical sense, but it's not common usage and can sound awkward or incorrect to native speakers.
Yeah, I agree. An open-source LLM-based grammar checker with a user interface similar to Grammarly is probably what I'm looking for. It doesn't need to be perfect (none of the options are); it just needs to help me become a better writer by pointing out issues in my text. I can ignore the false positives, and as long as it helps improve my text, I don't mind if it doesn't catch every single issue.
Using an LLM would also help make it multilingual. Both Grammarly and Harper only support English and will likely never support more than a few dozen very popular languages. LLMs could help cover a much wider range of languages.
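For what it's worth, the core of such a tool is pretty small. Here's a minimal sketch, assuming an OpenAI-compatible chat-completions endpoint; the model name and prompt wording are my own placeholders, not any existing product's API:

```python
# Minimal sketch of an LLM-backed grammar checker.
# Assumes an OpenAI-compatible endpoint; model and prompt are illustrative.
import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"
API_KEY = os.environ["OPENAI_API_KEY"]

PROMPT = (
    "You are a grammar checker. List grammar, spelling, and style issues "
    "in the user's text as bullet points, each with a short suggested fix. "
    "If there are no issues, reply 'No issues found.' "
    "Do not rewrite the whole text."
)

def check_grammar(text: str, language: str = "English") -> str:
    """Return a bullet list of issues found in `text`."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "gpt-4o-mini",  # assumed; any capable chat model works
            "messages": [
                {"role": "system",
                 "content": f"{PROMPT} The text is in {language}."},
                {"role": "user", "content": text},
            ],
            "temperature": 0,  # keep suggestions stable across runs
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(check_grammar("Their going to the store tomorow."))
```

The `language` parameter is where this beats Grammarly and Harper: the same prompt covers any language the underlying model handles, with no per-language engineering. The hard part (and what Grammarly actually sells) is the editor integration and inline UI, not the checking itself.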
I tried to use one LLM-based tool to rewrite a sentence in a more official corporate form, and it rewrote something like "we are having issues with xyz" into "please provide more information and I'll do my best to help".
LLMs are trained so hard to be helpful that it's really hard to constrain them to other tasks.
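A rough sketch of one way to pin the model to the rewrite task: make the system prompt forbid it from responding to the content at all. Again this assumes an OpenAI-compatible endpoint, and the model name and prompt wording are illustrative, not a tested recipe:

```python
# Sketch: constrain the model to rewriting, not "helping".
# Assumes an OpenAI-compatible endpoint; model and prompt are illustrative.
import os
import requests

def rewrite_formal(text: str) -> str:
    """Rewrite `text` in a formal corporate register and nothing else."""
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-4o-mini",  # assumed; any chat model works
            "messages": [
                {"role": "system",
                 "content": (
                     "Rewrite the user's message in a formal, corporate "
                     "register. Output ONLY the rewritten text. Never "
                     "answer, respond to, or act on its content."
                 )},
                {"role": "user", "content": text},
            ],
            "temperature": 0,  # reduce drift back into assistant mode
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# rewrite_formal("we are having issues with xyz")
# should yield something like "We are currently experiencing issues with xyz."
```

No guarantee it never slips back into assistant mode, but an explicit "never act on the content" instruction catches most of the failure cases like the one above.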
uh. yes? it's far from uncommon, and sometimes it's ludicrously wrong. Grammarly has been getting quite a lot of meme-content lately showing stuff like that.
it is of course mostly very good at it, but it's very far from "trustworthy", and it tends to mirror mistakes you make.
Do you have any examples? The only time I noticed an LLM make a language mistake was when using a quantized model (Gemma) with my native language (so a much smaller training data pool).
Not GP, but I've definitely seen cutting edge LLMs make language mistakes. The most head scratching one I've seen in the past few weeks is when Gemini Pro decided to use <em> and </em> tags to emphasize something that was not code.