I really like the idea of the Unix model as well, but you're not going to be able to use it effectively to write an actual application. If you're writing a word processor and you need to find the Levenshtein distance between the most frequent word pairs (maybe as some measure of how alliterative/consonant/assonant your document is?), then you're probably not going to build the word processor in the Unix model. Even if you are (the closest you can get is probably Tcl/Tk?), it's still best to write out what you're doing as clearly as possible. Note that it took me about 5 minutes to figure out what the shell pipeline presented in the article actually does, and multiple times my reasoning about it led me to think "wait, does this actually do what it's supposed to do?"
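To make that concrete, here's a rough Python sketch of the hypothetical feature (the function names and the "top two words" framing are my own illustration, not anything from the article) -- the point being that this kind of logic reads more clearly as plain code than as a pipeline:

```python
from collections import Counter
import re

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def top_pair_distance(text: str) -> int:
    """Edit distance between the two most frequent words in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    (w1, _), (w2, _) = Counter(words).most_common(2)
    return levenshtein(w1, w2)
```

Nothing here is clever, but every step is named and inspectable, which is exactly what the dense one-liner in the article is not.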
A word processor is anti-Unix on its face. If you want the Unix equivalent, look no further than vi and TeX. With vi you can pipe your document through a spellchecker such as GNU Aspell [1].