I find Zed has some really frustrating UX choices. I’ll run an operation and it will either fail silently, or keep running in the background for a while with no indication that it’s doing so.
> this supposed ability of determining whether or not content is AI-generated doesn't exist.
It seems like you’re just wrong here? Em dashes aside, the ‘style’ of LLM-generated text is pretty distinct, and is something many people are able to distinguish.
No, I'm not wrong. Someone could easily write in the default output style of ChatGPT by hand (which will probably become increasingly common the longer that style remains in place), and someone could easily collaborate with ChatGPT on writing that looks nothing like what you're thinking.
If organizations like schools are going to rely on tools that claim to detect AI-generated text with a useful level of reliability, they better have zero false positives. But of course they can't, because unless the tool involves time travel, that isn't possible. At best, such tools can detect non-ASCII punctuation marks and overly clichéd/formulaic writing, neither of which is academic dishonesty.
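To show how shallow that "at best" really is, here's a minimal sketch of the kind of surface-level check such a tool could do. The character list and stock phrases are my own illustrative picks, not any real detector's model:

    import re

    # Non-ASCII punctuation often associated with LLM output:
    # en dash, em dash, curly quotes, ellipsis.
    NON_ASCII_PUNCT = re.compile("[\u2013\u2014\u2018\u2019\u201c\u201d\u2026]")

    # Stock phrases chosen purely for illustration.
    CLICHES = ("delve into", "in today's fast-paced world", "it is important to note")

    def looks_llm_generated(text: str) -> bool:
        """Flag text on surface features alone."""
        lowered = text.lower()
        return bool(NON_ASCII_PUNCT.search(text)) or any(p in lowered for p in CLICHES)

    # A human who types smart quotes trips it immediately:
    print(looks_llm_generated("I wrote this myself \u2014 honestly."))  # True

That last line is the false-positive problem in miniature: anyone whose editor inserts smart punctuation, or who leans on stock phrases, gets flagged.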
Okay, you’re right that the LLM writing style isn’t singularly producible by LLMs. However, I’m not sure why this writing style would become increasingly common? I don’t see why people would mimic text that is seen as low quality or associated with academic dishonesty.
Additionally, I do think it is valuable to determine whether a piece of text is valuable, or more precisely, whether it’s what I’m looking for. As others have said, if I want info from an LLM about a subject, it is trivial for me to get that. Oftentimes, though, I am looking for text written by people.
> However, I’m not sure why this writing style would become increasingly common?
I was basing that on a few factors, off the top of my head:
1. Someone might pick up mannerisms while using LLMs to help learn a new language, similarly to how an old friend of mine from Germany spoke English with an Australian accent because of where she learned English.
2. Lonely or asocial people who spend too much time with LLMs might subconsciously pick up habits from them.
3. Generation Beta will never have known a world without LLMs. It's not that difficult to imagine that ChatGPT will be a major formative influence on many of them.
> As others have said, if I want info from an LLM about a subject, it is trivial for me to get that.
Sure, it's trivial for anyone to look up a simple fact. It's not so trivial for you to spend an hour deep-diving into a subject with an LLM and manually fact-checking the information it provides before eventually landing on an LLM-generated blurb that provides exactly the information you were looking for. It's also not trivial for you to reproduce the list of detailed hand-written bullet points that someone might have provided as source material for an LLM to generate a first draft.
These are all future concerns; if it happens, then people can change their heuristics. There's no point trying to predict all possible futures in everything that you do.
The comment you're replying to isn't related to the topic of heuristics. The first part is explicitly an answer to a question concerning a future prediction.
If the promise is that, when using the AT Protocol, you have control over your own data, then it is self-guaranteeing, since it is part of the spec that you can self-host a PDS.
The promise that Bluesky will always be compliant with the spec, or that the spec won’t ever change to disallow this, isn’t self-guaranteeing, but you could say something similar about any of these self-guaranteeing promises. For example, the promise that Obsidian will always use markdown isn’t self-guaranteeing.
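To make the self-guaranteeing part concrete: in atproto, your identity is a DID that resolves to a DID document, and it is that document, not Bluesky, that says which PDS serves your repo. A trimmed sketch of one (the DID and hostnames here are made up for illustration):

    {
      "id": "did:plc:abc123exampleonly",
      "alsoKnownAs": ["at://alice.example.com"],
      "service": [
        {
          "id": "#atproto_pds",
          "type": "AtprotoPersonalDataServer",
          "serviceEndpoint": "https://pds.alice.example.com"
        }
      ]
    }

Pointing serviceEndpoint at a server you run yourself is an ordinary protocol operation, so the data-control promise holds for as long as the spec does, which is exactly the caveat above.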
> The promise that Obsidian will always use markdown isn’t self-guaranteeing.
True, but Obsidian doesn't make that promise. The promise is "file over app": you control the files you create. In this way the promise is irreversible and self-verifiable.
"...will always use markdown" is not something any app can guarantee. At best an open source app can guarantee it for a specific version (assuming it doesn't require a connection, or the user can self-host the server).
I think the main difference is the expectation that you start signalling, if appropriate, before joining the roundabout. My main complaint (OK, one of my main complaints) is that drivers turning right start signalling right and then forget to signal left as they leave...
Might be a regionalism, but here in Oregon, we don't signal going in, but signal right before we intend to exit. That way the next incoming driver can enter the roundabout and keep traffic flowing. We have a LOT of roundabouts though, like dozens upon dozens, and many of them are oversaturated. It may be a local response to the traffic patterns here, not sure.
The most important thing is for everyone to speak the same protocol, provided that the protocol meets some minimum standard of fitness-for-purpose. But… yeah, I think you're doing it wrong.
Not a desktop. :) ChromeOS is closer to counting, but while it's technically Linux, that's really just an implementation detail, in the same way it would be for a mall kiosk. It's hard to do the actually-Linuxy things with it that desktop Linux enthusiasts envision.
The meaning and purpose of “desktop” has changed over the years. The “year of Linux on the desktop” was about Linux going mainstream and being used by average Joes. Well, the average Joe doesn’t even use a desktop on a daily basis anymore; the smartphone or tablet has replaced that, and the majority of these devices run Linux.
Also, I’m channeling my inner Stallman here, but Linux is just the kernel.
I think this question concedes that there is some possibility that one could experience an incorrect puberty.
Given that the definition of maturity is being fully grown, this comes across as an inherently unhelpful thing to ask. If we say “only once someone is fully grown can they determine whether they experienced the incorrect puberty,” then it becomes impossible to help children who are going to experience the incorrect puberty. Unless we have some way to determine that a child is trans without any input from them, there is no way to help them.
The possibility of being unable to help people is not an excuse for hurting them or others. Generally, if you can't know the correct action, then you should stick to the status quo.
What's next, gene therapy because the embryo might want to be a different race when it grows up?