I love Asianometry! The semiconductor history videos are incredible. The level of quality on that channel just blows my mind. It’s one of the best examples of the revolution happening in high-quality, independently produced content.
I actually first heard about it from the Acquired podcast, which is another great example of that same trend.
I get the frustration, but I think there’s a hidden assumption in this discussion: that everyone can write well in English.
Only about 5% of the world’s population are native English speakers, but more than twice that number use it daily. For many, AI rewriting isn’t about losing personal style—it’s about making sure they can communicate clearly in a language that isn’t their first.
It’s fair to dislike how AI flattens unique voices, but for a huge part of the world, it’s not erasing personality—it’s making participation possible.
If you can't translate your email into a foreign language, how is the AI going to rewrite your incorrect prose into correct prose? It's working on something incorrect. And after the AI rewrites it, how can you tell if it's saying what you want it to?
When I'm communicating with a non-native speaker, I intentionally use shorter / easier to translate words and sentences, and I give them more leeway with word usage / don't expect them to use the right words all the time. And that's fine! Communication still happens! We manage!
But if a non-native speaker starts running their text through an AI it makes communication harder, not easier. I can't tell if their word choice is intentional or if the AI did it. A tiny mistake I can understand gets expanded into multiple incorrect sentences.
>If you can't translate your email into a foreign language, how is the AI going to rewrite your incorrect prose into correct prose? It's working on something incorrect. And after the AI rewrites it, how can you tell if it's saying what you want it to?
Absolutely this. "Accessibility" and "participation" are great goals on paper, but the tools at hand are likely to introduce confusion because the user fundamentally isn't in a position to judge the quality of the output.
Last year I worked with someone who used AI tools like this to compensate for their lack of English. It was dreadful beyond belief and basically unworkable.
Lack of comprehension of what other people said was a big issue. So was getting four incomprehensible paragraphs thrown at me for what could have been six words (not infrequently based on a misunderstanding of a very basic sentence).
I'm not a native speaker either, but the only way to learn a language is to actually use it. For better or worse, English is the modern world's lingua franca.
How are you supposed to communicate clearly if you are relying on an AI to communicate for you? How could you even tell if it properly communicates your ideas if you couldn't communicate them properly in the first place?
AI translation is definitely a great enabler, both for written material and things like live subtitles, but people are already aware that translations are imperfect and can be disputed, something anime fans can get very heated about.
English is not my native language, yet somehow I share this sentiment towards AI. I'm fine with a spell checker; I don't need whatever I write completely rewritten, thank you very much.
The proper solution is to work with an editor that asks clarifying questions, not to rewrite the whole thing into something totally different.
For published work, if it's not worth editing then it's not worth reading. (I would go further personally and say that most published, edited, and peer-reviewed work in your area of interest isn't worth reading anyway.)
For unpublished work, like an email, ask the AI to translate the passage while maintaining style and tone. It will still flatten it, but not as much as the complete dogshit I read in the article.
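For what it's worth, a minimal sketch of that kind of prompt, assuming the OpenAI Python SDK (the model name and draft_email are placeholders, not a recommendation):

    # Sketch: translate while preserving voice, instead of a free rewrite.
    # Assumes OPENAI_API_KEY is set in the environment; model name is illustrative.
    from openai import OpenAI

    client = OpenAI()
    draft_email = "..."  # the author's original, non-English text

    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": (
                "Translate the user's text into English. Preserve the original "
                "style, tone, and sentence structure. Do not embellish or expand."
            )},
            {"role": "user", "content": draft_email},
        ],
    )
    print(resp.choices[0].message.content)

The point is the instruction, not the vendor: constrain the model to translation and it has far less room to flatten the writer's voice.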
Communication is a job requirement, faking it with AI is going to go about as well as someone faking programming skills. Not very!
Absolutely with you on the need for high-impact tests. I find that humans are still way better at coming up with the tests that actually matter, while AI can handle the implementation faster—especially when there’s a human guiding it.
Keeping a human in the loop is essential, in my experience. The AI does the heavy lifting, but we make sure the tests are genuinely useful. That balance helps avoid the trap of churning out “dumb” tests that might look impressive but don’t add real value.
Totally agree, especially about the need for well-architected, high-impact tests that go beyond just coverage. At Loadmill, we found out pretty early that building AI to generate tests was just the starting point. The real challenge came with making the system flexible enough to handle complex customer architectures. Think of multiple test environments, unique authentication setups, and dynamic data preparation.
There’s a huge difference between using an LLM to crank out test code and having a product that can actually support complex, evolving setups long-term. A lot of tools work great in demos but don’t hold up for these real-world testing needs.
And yeah, this is even trickier for higher-level tests. Without careful design, it’s way too easy to end up with “dumb” tests that add little real value.
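To make the "dumb test" point concrete, here's a hypothetical sketch in Python (the discount() function and both tests are invented for illustration):

    # A low-value test vs. a higher-impact one, against the same function.
    def discount(price: float, pct: float) -> float:
        # Clamp so an out-of-range percentage can't produce a negative price.
        pct = max(0.0, min(pct, 100.0))
        return round(price * (1 - pct / 100), 2)

    def test_discount_happy_path():
        # Low value: mirrors the implementation and inflates coverage numbers.
        assert discount(100, 10) == 90.0

    def test_discount_clamps_bogus_percentages():
        # Higher impact: encodes a business rule the happy path never exercises.
        assert discount(100, 150) == 0.0
        assert discount(100, -5) == 100.0

Both tests bump coverage, but only the second one would catch the kind of regression that actually hurts.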
It reminds me of a similar cooking technique in the Israeli Armored Corps called Transmission Spam.
We used to place a can of Spam on the tank's gearbox before a long drive, and by the time we reached our destination, the Spam was heated up and ready to eat.
That is why DIY is usually more expensive than managed services. Engineering hours are expensive and best spent on your core competencies.
DIY only makes sense at a very small scale or a very large scale; everything in between is usually best offloaded to those who do it as their core competency.
I would caution against sweeping generalizations like that. In this case the “DIY” part is basically just configuration management, which you will have to do with Datadog anyway. And sure, they make it slightly easier by providing defaults for most things, but Prometheus/Grafana do a decent job at it too.
More broadly, I’ve never used a managed service that would “just work” without substantial configuration and oftentimes a bunch of workarounds, but maybe those exist.
At every single org I've been where Datadog has been considered, the conclusion has been "Yes, it would be cool, but we really can't justify the price."
Yes, in theory, at the middle scale you should outsource things, but in practice it only works if the managed service is at the right price.
I see OpenTelemetry as an application of the same idea pushed by Google via the k8s revolution.
Create a great vendor-agnostic open source tech. Get everyone riled up about the dangers of vendor lock-in. Use the new tech to carve yourself a piece of the market from the current incumbent.
It is pretty great and all, but sometimes it is easier to build your app with a simple vendor-locked tech than a super generic agnostic technology.
It’s kinda important to understand who all this is meant for. If you’re a lean startup, just use the best/cheapest/quickest tool regardless of vendor lock-in. It’s when you get to a certain scale that vendor agnosticism becomes a real concern, but by then you probably have enough resources to hire folks who will rebuild your stack.
> It is pretty great and all, but sometimes it is easier to build your app with a simple vendor-locked tech than a super generic agnostic technology.
That's golden, for sure. Until the product is bought out, or there's a merger of companies, or you name it. Then you end up with a pile of products with different vendor-locked log formats and metrics. At that point you'd like some standardization, and OpenTelemetry is a perfect candidate for common ground. Thus support for OpenTelemetry becomes a major decision factor when selecting a vendor or OSS solution for your problem, doesn't it?
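As a rough sketch of what that common ground buys you, using the OpenTelemetry Python SDK (the service and span names here are made up): the instrumentation stays put, and only the exporter changes when you switch vendors.

    # Instrumentation is vendor-neutral; only the exporter is vendor-specific.
    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

    provider = TracerProvider()
    # To change vendors, swap ConsoleSpanExporter for an OTLP exporter pointed
    # at any OTLP-compatible backend; the spans below don't change at all.
    provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer("checkout-service")  # made-up service name

    with tracer.start_as_current_span("process_order"):
        pass  # application logic goes here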