Daiz's comments

Been using (and occasionally contributing to) Sharp for quite a while, both professionally and personally. Great library to have at hand when you need to deal with images.
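
For instance, a typical resize-and-convert task with Sharp looks something like this (a minimal sketch; the file names and option values are just placeholders):

    import sharp from "sharp";

    // Resize to 800px wide (aspect ratio preserved) and re-encode as WebP.
    await sharp("input.png")
      .resize({ width: 800 })
      .webp({ quality: 80 })
      .toFile("output.webp");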


The key to good endnotes is to make them a nice bonus for those interested rather than required reading for everyone. Basically, make the main text work on its own, then get into the weeds of translation details separately. It's the best of both worlds, though admittedly it requires quite a bit of extra work to pull off.


Well... the very first paragraph of the article does say, with highlighting, that "the presentation quality for translations of on-screen text has taken a total nosedive". It then shows visual examples of the new bad quality, followed shortly after by comparison screenshots demonstrating good quality.


A small suggestion: ask a friend of yours who doesn't regularly watch anime to read your article for 20 seconds, and see if they can explain what it is about.

I've spent quite a few hours on CR, yet it still took me maybe 30 seconds to realize what the "nosedive" refers to exactly. In fact, I kept thinking "quality" referred to "translation quality", and I was puzzled that I couldn't see any obvious issues.

It doesn't need to be that way. Anyone given side-by-side screenshots without additional context should be able to tell you immediately what's happening, and I've read lots of blog posts that manage exactly that.

More specifically, the article provides 4 bad screenshots at first. I actually went through 3 of them; I kind of guessed what you meant but wasn't sure. Then there is another gallery of good ones. Why? Just provide good vs. bad at the top, explicitly explain what the expectation is, and if needed, provide more examples. That would be 200% better than the current structure.


I made some revisions to the start of the article to make things clearer to a layman unfamiliar with anime and subtitling. Hopefully that clears things up!


The addition does indeed provide the clarity I sought. For reasons I won't bore you with, I truly could not discern what the issue was. Every one of my hunches was wrong.

Hopefully you can see I was disappointed because it was something I wanted to care about, I just wasn't sure what it actually was I was supposed to care about.

I appreciate your openness to feedback, and I think the article is better for it.


The article says there's a nosedive. But by what standard(s)? See the questions I already posted in my original response.

Both the "good" and "bad" quality examples contain subtitles with no discernible difference. All examples contain legible subtitles. So where's the "nosedive"?

There's clearly some anime-specific context and nuance that is NOT communicated with context-less screencaps.

Perhaps the article wasn't written for someone unfamiliar with anime, and I'm not meant to understand, but it would be helpful to have the difference explained. Not to mention the improved accessibility for screen readers or folks with sensory processing issues like myself. At a minimum, marking up the image would be helpful. Circle things. Arrows. Help me understand, don't drop me into unfamiliar territory and leave me to guess.


The difference is that when there is text in the original video material, the good examples position the translations in the proximity of the original text and style them similarly, which makes it easier to understand what is a translation of what, and generally improves immersion.

In the bad examples, the translations for the texts are mixed with the lines the characters are speaking, which makes it harder to follow.


That makes sense. Thanks very much for the clarification.


Good catch, thanks - I literally built the site alongside the article, so there are still some rough edges here and there.


Unfortunately, as the link describes, Netflix only makes this available for a very limited set of languages, while everyone else is stuck with the extremely limited text-based standards.

Frankly, those text-based subtitle standards are quite maddening on their own. Netflix's text-based subtitle rendering seems to support a much wider set of TTML features than what it actually allows subtitle providers to use - so if these restrictions were to be slightly relaxed, providers could start offering better subtitles for anime immediately with no additional effort from Netflix.
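
For a concrete idea of the kind of features in question: TTML can position cues in arbitrary regions via the standard tts:origin and tts:extent styling attributes, which is exactly what sign translations need. A hypothetical sketch (the helper function, region values, and text are made up for illustration; this is not Netflix's actual authoring spec):

    // Hypothetical sketch: emit a TTML document with one cue pinned to a
    // region near on-screen text. tts:origin / tts:extent are standard
    // TTML styling attributes for region placement.
    function signCue(begin: string, end: string, text: string): string {
      return `<tt xmlns="http://www.w3.org/ns/ttml"
        xmlns:tts="http://www.w3.org/ns/ttml#styling">
      <head>
        <layout>
          <!-- region covering the top-right area of the frame -->
          <region xml:id="sign" tts:origin="55% 10%" tts:extent="40% 15%"/>
        </layout>
      </head>
      <body>
        <div>
          <p region="sign" begin="${begin}" end="${end}">${text}</p>
        </div>
      </body>
    </tt>`;
    }

    console.log(signCue("00:00:05.000", "00:00:08.000", "Sign: Bakery"));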


What Netflix supports on their main website might not be what they care about, though; you used to be able to watch Netflix on the Nintendo Wii, and they probably still have some users on stupidly old smart TVs.


Also, fun fact - subtitles did not work on the Wii at all if you were running a video streaming service on it!

The BBC spent literally years trying to engineer a solution that wouldn't leave the Wii unable to play back video smoothly, and failed.


Fast forward to 2025 and the BBC's streaming app on Apple TV has only just added subtitles; vastly more powerful hardware, but so many restrictions from Apple on how developers can use it.


> CR could have used the ASS subs on their website and given the less-dynamic sub files to their vendors.

This is exactly what CR was doing for the past couple years, though you can't just automatically convert a fancy ASS file with typesetting into the limited kind of TTML subtitles that general streaming services expect, which is why Crunchyroll has been paying its subtitling staff extra to make those conversions semi-manually.

Though Crunchyroll could definitely improve its standard ASS workflows in ways that would make that conversion process significantly more automated with minimal extra effort on the subtitling staff's part. It wouldn't even be that hard; I've done something like that myself when I had to mangle ASS into limited WebVTT for some streaming work at one point.
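
For a rough idea of what that automation could look like, here's a minimal sketch of the ASS-to-WebVTT direction (assumptions: a fixed 1920x1080 script resolution and only {\pos(x,y)} overrides handled; this is not Crunchyroll's actual pipeline):

    // Minimal sketch of ASS -> WebVTT conversion (not CR's real pipeline).
    // Maps {\pos(x,y)} overrides to WebVTT position/line percentage
    // settings, assuming the script's PlayResX/PlayResY are known.
    const PLAY_RES_X = 1920; // assumed script resolution
    const PLAY_RES_Y = 1080;

    // "0:01:02.05" (ASS, centiseconds) -> "00:01:02.050" (WebVTT)
    function assTimeToVtt(t: string): string {
      const [h, m, s] = t.split(":");
      return `${h.padStart(2, "0")}:${m}:${s.padEnd(5, "0")}0`;
    }

    function dialogueToVttCue(line: string): string | null {
      if (!line.startsWith("Dialogue:")) return null;
      // Fields: Layer,Start,End,Style,Name,MarginL,MarginR,MarginV,Effect,Text
      const fields = line.slice("Dialogue:".length).split(",");
      if (fields.length < 10) return null;
      const start = assTimeToVtt(fields[1].trim());
      const end = assTimeToVtt(fields[2].trim());
      const text = fields.slice(9).join(","); // Text may itself contain commas

      // Pull a {\pos(x,y)} override out of the text, if present
      const pos = text.match(/\\pos\((\d+(?:\.\d+)?),(\d+(?:\.\d+)?)\)/);
      let settings = "";
      if (pos) {
        const x = (100 * parseFloat(pos[1])) / PLAY_RES_X;
        const y = (100 * parseFloat(pos[2])) / PLAY_RES_Y;
        settings = ` position:${x.toFixed(0)}% line:${y.toFixed(0)}% align:center`;
      }
      // Strip remaining {...} override tags; WebVTT can't express most of them
      const plain = text.replace(/\{[^}]*\}/g, "").replace(/\\N/g, "\n");
      return `${start} --> ${end}${settings}\n${plain}\n`;
    }

    const cue = dialogueToVttCue(
      "Dialogue: 0,0:00:05.00,0:00:08.00,Sign,,0,0,0,,{\\pos(960,120)}BAKERY"
    );
    console.log("WEBVTT\n\n" + cue);

Real typesetting obviously involves far more (alignment tags, rotations, layered signs), most of which WebVTT simply can't express, but the basic position mapping is mechanical.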


> This is exactly what CR was doing for the past couple years, though you can't just automatically convert a fancy ASS file with typesetting into the limited kind of TTML subtitles that general streaming services expect, which is why Crunchyroll has been paying its subtitling staff extra to make those conversions semi-manually.

Surely automatically converting into a lesser subtitle format is a much better use of AI than machine transcription. I disagree with the idea that "you can't just automatically convert" at today's technology level.


Both services explicitly disallow this by default in their delivery specifications, unfortunately.

Netflix: https://partnerhelp.netflixstudios.com/hc/en-us/articles/215...

> Netflix requires a non-subtitled version of the content. Netflix defines “non-subtitled” as the presence of main titles, end credits, location call-outs, and other supportive/creative text, but no burned-in subtitled dialogue, regardless of the language in the primary video.

Amazon: https://videocentral.amazon.com/support/delivery-experience/...

> Video

> Global packaging requires component asset packages to be delivered with a semi-textless video file that can be localized with discrete subtitles and audio dubbing.

> Also known as “Texted with no subtitles,” “Textless with main, ends, and graphic text,” and “Non-subtitled”, Prime Video defines semi-textless as a video master without burned-in subtitles, regardless of the language.


Quality typesetting is just as important for dubs as it is for subs, actually! All that on-screen text will be there regardless of which audio track you are using.


The lack of closed captions and the use of dubtitles are definitely very real issues as well, though this article is focused solely on subtitles.


The linked article here goes over all of that in great detail!

