And yet they'll push out AI-driven "message summaries" that are horrifically bad and inaccurate, often summarizing the intent of a message as the complete opposite of the full message, up to and including "wants to end relationship; will see you later"?
Was about to point out the same thing. Apple's desperate rush to market meant summarising news headlines badly and sometimes just plain hallucinating, causing many public figures to react when they ended up the target of such mishaps.
> And this is probably coming, a few years from now. Because remember, Apple doesn't usually invent new products. It takes proven ones and then makes its own much nicer version.
Except this doesn't stand up to scrutiny when you look at Siri. FOURTEEN years and it is still spectacularly useless.
I have no idea what Siri is a "much nicer version" of.
> Apple can integrate it with hardware and software in a way no other company can.
And in the case of Apple products, oftentimes "because Apple won't let them".
Lest I be called an Apple hater: I have 3 Apple TVs in my home, my daily driver is an M2 Ultra Studio with a Pro Display XDR, and an iPad Pro shows my calendar and Slack during the day and comes off at night. iPhone, Apple Watch Ultra.
In that list of Apple products that you own, do none of them match the OP's comment? You're saying none of those products are, or have been in their time on the market, a perfected version of other things?
There are lots of failed products in nearly every company’s portfolio.
AirTags were mentioned elsewhere, but I can think of others too. Perfected might be too fuzzy & subjective a term though.
Perhaps I'm misremembering, but I feel sure that Siri was much better a decade ago than it is today. Basic voice commands that used to work are no longer recognised, or now require you to unlock the phone in situations where hands-free operation is the whole point of using a voice command.
There were certain commands that worked just fine. But they, in Apple's way, required you to "discover" what worked and what didn't with no hints, and then there were illogical gaps like "this grouping should have three obvious options, but you can only do one via Siri".
And then some of its misinterpretations were hilariously bad.
Even now, I get at a technical level that CarPlay and Siri might be separate "apps" (although CarPlay really seems like it should be a service), and as such might have separate permissions, but then you have the comical scenario of:
Being in your car, with CarPlay running and actively navigating you somewhere, you press your steering wheel voice control button, say "Give me directions to the nearest Starbucks," and Siri dutifully replies, "Sorry, I don't know where you are."
Yeah, my step-daughter is a vegetarian. She cannot opt out of the several-thousand-dollar-a-year meal plan at her college despite the campus dining facilities often having only one not-particularly-good vegetarian option (I'm not vegetarian, but when visiting I've tried the options).
So we're left with paying her credit card to buy groceries and a largely unused meal plan.
> it’s worth examining why the school feels compelled to make the meal plan mandatory in the first place.
Well, that's often because Aramark and Chartwells (Compass) require that in their contracts. My partner is the Accounting Manager for another university in our state, and that's mandated in their contract (along with other clauses like "any event on campus must be offered to us for catering first, and we will either cater it or decline, and only then can you use another caterer").
There can be debates on why that is allowed in the negotiation, though.
I'd say it was the other way around: MAP is an attempt at avoiding the stigma of "pedophile", while CSAM is saying "pornography can be an entirely acceptable, positive, consensual thing, but that's not what 'pornography' involving children is; it's evidence of abuse or exploitation or..."
The term CSAM was adopted in the UK following outrage over the "Gary Glitter effect": soaring offence rates driven by news of people caught downloading images of unspeakable abuse getting mild sentences for "mere child porn".
This is why many feel strongly about defending the term "CSAM" from those who seek to dilute it to cover e.g. mild Grok-style child porn.
The UK Govt. has announced plans to define CSAM in law.
I had to make a choice to not even use Grok (I wasn't overly interested in the first place, but wanted to review how it might compare to the other tools), because even just the Explore option shows photorealistic photos and videos of CSAM, CSAM-adjacent material, and other "problematic" things (such as implied bestiality).
Looking at the prompts below some of those images shows that even now, there's almost zero effort on Grok's part to filter prompts that are blatantly looking to create problematic material. People aren't being sneaky and smart and wordsmithing subtle cues to try to bypass content filtering; they're often saying "create this" bluntly and directly, and Grok is happily obliging.
> That has never been the job. The job is to prevent guns, knives, swollen batteries, or anything else that could be a safety threat during air travel.
A job that, by their own internal testing, they succeed at well less than 5% of the time (some of their audits showed that 98% of fake/test guns sent through TSA checkpoints got through).