This seems like the most likely explanation: legacy AI out in favour of LLM-focused AI. Also perhaps some cleaning out of the old guard and middle management while they're at it.
Asahi Linux is making great progress. The only thing left to make it a truly capable Linux environment is USB-C external display support. Once that lands, I plan to use my M-series Mac as a Linux machine.
Add to this list: the ability to verify a correct implementation by viewing the user interface, and to take a holistic, codebase- and interface-wide view of how best to implement something.
Running a platform where billions of users are able to communicate is a pretty remarkable technological feat.
Let's not forget when Hotz said he could easily fix Twitter's search functionality, only to give up after three months [1]. When immense scale is involved, things do become difficult.
Like the comment below taking a shot at Philip Morris [2]. Let's see you grow, process, and distribute cigarettes at 1/100 of their quality. The end product might not be great for society, but producing it isn't trivial either.
Withdrawal symptoms :P The buyer decides whether their problem is a good one to have and whether the solution is adequate. Even when, objectively, it isn't.
Thanks! Unlike a lot of our competitors who use search-inspired UX, we went with an agentic approach inspired by tools like Cursor - basically iterative user control.
Instead of just search query → final result (though you can do that too), you can step in and guide it. Tell it exactly where to look, what sources to check, how to dig deeper, how to use its notepad.
We've found this gets you way better results that actually match what you're looking for, as well as being a more satisfying user experience for people who already know how they would do the job themselves. Plus it lets you tap into niche datasets that wouldn't show up with just generic search queries.
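The iterative, user-steerable loop described above can be sketched roughly as follows. This is a hypothetical illustration, not our actual implementation: the `Agent` class, `step` method, and `guidance` dict are invented names for the sake of the example.

```python
# Minimal sketch of an agentic, user-steerable search loop.
# All names here (Agent, step, guidance) are illustrative, not a real API.

class Agent:
    def __init__(self, query):
        self.query = query
        self.notepad = []       # scratch space the agent accumulates across steps
        self.sources = ["web"]  # default places to look

    def step(self, guidance=None):
        """Run one iteration; optional user guidance redirects the search."""
        if guidance:
            if "source" in guidance:
                self.sources.append(guidance["source"])  # user: "look here too"
            if "note" in guidance:
                self.notepad.append(guidance["note"])    # user: "keep this in mind"
        # ...search self.sources, append findings to self.notepad...
        return {"sources": list(self.sources), "notes": list(self.notepad)}

agent = Agent("niche dataset on topic X")
agent.step()                                    # autonomous first pass
agent.step({"source": "internal-archive"})      # user steers where to look
agent.step({"note": "prefer primary sources"})  # user steers how to dig deeper
```

The point of the design is that each `step` call is a place where the user can intervene, rather than the whole search being a single opaque query-to-result hop.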
Everyone's feed is different.
It depends on how much you train the algorithm.
Yours is untrained, therefore slop.