Today they are targeting people shooting rockets; tomorrow they will target people commenting on these posts; the day after, specific groups of people.
So you may be safe today, but what happens when they don't like your opinion?
I think that was somewhat the point of this: to simplify the complex scenarios that can happen in the future. The problems we need AI to solve will most of the time be ambiguous, and the more complex the problem is, the harder it is to pinpoint why the LLM is failing to solve it.
For their skill at accomplishing their job. Their jobs are primarily skill-based and customer-facing. A taxi driver who gets you where you want to go quickly and safely, a waiter who never lets your coffee cup get empty, a barber who makes you look... well dang, pretty nice!