I have maintained a pandoc filter in Haskell for a while. Pandoc is written in Haskell, and so takes a similar approach: JSON API for arbitrary languages, but with a library for trivially parsing that JSON back into the same Haskell datatype that Pandoc uses under the hood.
When you sit down to write the filter for the first time, it’s amazing. You’re using a typed IR that’s well documented, a language that catches you when you’re making mistakes, etc. You have to do very little boring grunt work and focus only on what the filter needs to do.
Over time, the filter became feature complete, so I didn’t want to have to touch it anymore. But the library for parsing JSON releases a new version for every new feature in the AST, and the parsing function checks that the version your filter was compiled with is at least as new as the pandoc that produced the JSON. It has to: if the pandoc is newer, your older filter won’t know how to parse some of the nodes.
My filter is feature complete and shouldn’t need to look at those nodes or their new fields: needing to upgrade is just toil. But pandoc keeps releasing new versions, and each one means recompiling the filter against a newer parsing library. At those points I also found myself having to deal with library, build tool, OS, or compiler upgrades, all to recompile a filter that shouldn’t need to change at all.
Eventually I switched to Pandoc Lua filters, which eliminated the toil while also being platform agnostic (and not requiring any sort of notarization or executable quarantine on enterprise systems), at the expense of having to tolerate Lua the language. Now, new versions of Pandoc don’t require me to boop a version number in my filter. If that weren’t an option, then for any future filters I write, I’d write my own JSON parser that only parses as much of the JSON as I need, leaving the rest untouched. That way it wouldn’t matter if new changes came along; I could even tolerate backwards-incompatible changes as long as they didn’t alter the contract of the narrow slice that one filter cares about!
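If it helps, here’s a minimal sketch of that style of filter in Ruby (pandoc filters speak JSON over stdin/stdout, so the language is interchangeable; the upcasing of "Str" nodes is a hypothetical stand-in for a real transformation, and I’m assuming pandoc’s usual "t"/"c" node tagging):

  require "json"

  # Tolerant filter: only "Str" nodes are understood; every other node
  # (including node types added by future pandocs) round-trips through
  # the generic Hash/Array representation unmodified.
  def walk(node)
    case node
    when Hash
      if node["t"] == "Str" && node["c"].is_a?(String)
        node.merge("c" => node["c"].upcase) # the one transformation we own
      else
        node.transform_values { |v| walk(v) }
      end
    when Array
      node.map { |v| walk(v) }
    else
      node
    end
  end

  doc = JSON.parse($stdin.read)
  puts JSON.generate(walk(doc))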
There are of course other ways to deal with problems like these (protocol buffers, JSON parsers that ignore unrecognized fields instead of erroring, etc. etc.)
I have not looked at how mdBook plugins handle this. But if I were writing such a plugin, it’s the first thing I’d look at, and be sure to program around.
mdBook passes a version, and the renderer/preprocessor can (and should) do a version check. Since it uses semantic versioning, I would expect it to abide by those rules.
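For illustration, here's a hedged sketch of such a check in a preprocessor (assuming, per my reading of the mdBook preprocessor docs, that mdBook writes a `[context, book]` JSON array to stdin and that the context carries an `mdbook_version` field; verify both against your mdBook version):

  require "json"

  SUPPORTED = Gem::Version.new("0.4.0") # hypothetical: the version we were built against

  context, book = JSON.parse($stdin.read)
  seen = Gem::Version.new(context["mdbook_version"])

  # Semver: for 0.x releases, the minor number is the compatibility line.
  unless seen.segments[0] == SUPPORTED.segments[0] &&
         seen.segments[1] == SUPPORTED.segments[1]
    warn "preprocessor built for mdBook #{SUPPORTED}, got #{seen}"
  end

  puts JSON.generate(book) # pass the book through unchanged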
It's an interesting approach. From my skim, the way it works:
1. Parse the files with a Ruby parser, collect all method definition nodes
2. Using location information in the parsed AST, and the source text of the file that was parsed, splice the parameters into two lambda expressions, like this[1]:
"-> (#{method_node.parameters.slice}) {}"
3. Evaluate the first lambda. This lets you reflect on `lambda.parameters`, which will tell you the parameter names and whether they're required at runtime, not just statically
4. In the body of the second lambda, use the `lambda.parameters` of the first lambda in combination with `binding.local_variable_get(param_name)`. This allows you to get the runtime value of the statically-parsed default parameters.
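Putting steps 2–4 together, a self-contained sketch of the trick (the hard-coded parameter source and variable names are mine, not the library's internals):

  # Step 2's output: the parameter source text recovered from the parsed AST.
  param_source = "greeting: 'hello'"

  # Step 3: let Ruby itself parse the parameters; #parameters reports each
  # name and whether it's a required/optional positional or a keyword.
  probe = eval("-> (#{param_source}) {}")
  p probe.parameters # => [[:key, :greeting]]

  # Step 4: a second lambda whose body reads each parameter back out of its
  # own binding, recovering the runtime value of every default.
  names = probe.parameters.map(&:last)
  extract = eval("-> (#{param_source}) { #{names.inspect}.map { |n| binding.local_variable_get(n) } }")
  p extract.call # => ["hello"]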
This is an interesting and ambitious architecture.
I had thought in the past about how you might be able to get such a syntax to work in pure Ruby, but gave up because there is no built-in reflection API to get the parameter default values: the `Method#parameters` and `UnboundMethod#parameters` methods only give you the names of the parameters and whether they are optional or required, not their default values if they are optional.
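To illustrate the gap (a throwaway example of mine, not from the library):

  def greet(name, greeting: "hello"); end

  p method(:greet).parameters
  # => [[:req, :name], [:key, :greeting]]
  # the names and kinds are all you get; the default "hello" is nowhere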
This approach, being powered by `binding` and string splicing, suffers from problems where a name like `String` might mean `::String` in one context, or `OuterClass::String` in another context. For example:
  class MyClass
    include LowType

    class String; end

    def say_hello(greeting: String); end
  end

  MyClass.new.say_hello(greeting: "hello")
This program does not raise an exception when run, despite not passing a `MyClass::String` instance to `say_hello`. The current implementation evaluates the spliced method parameters in the context of a `binding` inside its internal plumbing, not a binding tied to the definition of the `say_hello` method.
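A small demonstration of why the evaluation site matters (class and constant names here are mine): constant lookup in `eval` follows the lexical scope of the binding you hand it.

  class Outer
    class String; end
    DEF_SITE = binding # captured inside the class body
  end

  p eval("String", Outer::DEF_SITE)  # => Outer::String
  p eval("String", TOPLEVEL_BINDING) # => String (i.e., ::String)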
An author could correct this by fully-qualifying the constant:
  class MyClass
    include LowType

    class String; end

    def say_hello(greeting: MyClass::String); end
  end

  MyClass.new.say_hello(greeting: "hello") # => ArgumentTypeError
and you could imagine a RuboCop rule saying "you must use fully qualified constant references like `::MyClass::String` in all type annotations" to prevent a problem like this from happening, if there does not end up being a way to solve it in the implementation.
Anyways, overall:
- I'm very impressed by the ingenuity of the approach
- I'm glad to see more interest in types in Ruby, both for runtime type checking and syntax explorations for type annotations
> how you might be able to get such a syntax to work in pure Ruby, but gave up because there is no built-in reflection API to get the parameter default values
> as opposed to whether the construct is allowed/part of the language
Arguably this is also semantics. Type checking and reporting type errors decides whether a construct is allowed or not, yet belongs squarely in the semantic analysis phase of a language (as opposed to the syntactic analysis phase).
> how it differs from syntax
Consider a language like C, which allows code like this:
  if (condition) {
    doWhenTrue();
  }
And consider a language like Python, which allows code like this:
  if condition:
      doWhenTrue()
The syntax and the semantics are both different here.
The syntax is different: C requires parens around the condition, allows curly braces around the body, and requires `;` at the end of statements. Python allows but does not require parens around the condition, requires a `:`, requires indenting the body, and ends statements with a newline rather than a `;` (a trailing `;` is legal but unidiomatic).
Also, the semantics are different: in C, `doWhenTrue()` only executes if `condition` compares unequal to zero, and the condition must be a scalar (an integer, floating-point number, or pointer).
In Python, `doWhenTrue()` executes if `condition` is "truthy," which is determined by `bool(condition)`: Python consults `__bool__`, falling back to `__len__` for containers. Values like `True`, non-zero numbers, non-empty containers, etc. are all truthy, which covers far more values than in C.
But you could imagine a dialect of Python that used the exact same syntax as C but kept the semantics of Python, e.g. a language where
  if (condition) {
    doWhenTrue();
  }
has the exact same meaning as the Python snippet above: that `doWhenTrue()` executes when `condition` is truthy, according to some internal `__bool__()` method.
Do you consider yourself a neovim terminal power user?
I tried a while back to invert my workflow (from tmux driving neovim to neovim driving terminals) because I thought it might be easier to only ever have one buffer open for a given file, instead of attempting to open a file in a given pane only to realize that it's already open in a different neovim instance in a different pane.
When I was testing that stuff out, I don't think I noticed particular issues with text reflow that would be solved by swapping to libghostty; rather, my pain points were about adjusting to the different paradigm. I'd be curious to hear more from someone who is all in on Neovim embedded terminals (and how libghostty might make it better).
I'm all in on Neovim terminals; having a remote development setup means it keeps my terminal alongside my Neovim window (I use nvim-qt).
Also not sure how ghostty would help, haven't noticed text reflowing issues.
It's not bad, just a little awkward to get used to:
- you might want a plugin to give you a "persistent" terminal across all tabs
- I still haven't found a way to clear the scrollback while a command is running
- I had to set up mappings to make exiting terminal mode easier (c-\ c-n really sucks)
- I had to set up autocommands so that whenever a terminal buffer is focused it immediately enters insert mode. While I love vim, I've never wanted modal editing in a terminal
I do indeed live in the terminal (all day due to work), but tmux adds too much value for me to do all terminal management in Neovim (tmux session-management being what I use most). I've just encountered too many visual "glitches" in the Neovim terminal to rely on it for everything. That's not to say, however, that I never use the built-in Neovim :terminal.
> I thought it might be easier to only ever have one buffer open for a given file, instead of attempting to open a file in a given pane only to realize that it's already open in a different neovim instance
I'd be curious to hear more about how tmux helps you. I tried it, and besides keeping a permanent session open on a remote server, I didn't find much use for it compared to regular terminal tabs.
I use it daily locally, and find it amusing how many people only think of it as being useful on remote servers (not to invalidate your use case -- I'm just contrasting my own use). As a precursor, I view UNIX as my IDE, of which tmux is a part: this IDE runs on Windows (WSL2), macOS, Linux, and Android (Termux). That aside, here are a few reasons I find tmux to be useful in this concoction of tools:
- Session management. I've written custom scripts for myself around this (zoxide + fzf). If you want to see how this can be used, look at ThePrimeagen's workflow. I don't use his scripts, but he has a good demo of how he harnesses sessions.
- Unified scrollback management - easily search the scrollback, yank it, etc. My favorite thing to do is to yank part of the scrollback, then `Prefix+B,=` to list everything I've yanked (think of this like a "clipboard manager" specific to tmux), select an entry, and press `e` to edit it in `$EDITOR`.
- This one might be a stretch, but I tend to try and use only terminal tools (without being utterly insane) because then tmux can be my "tiling window manager" no matter what OS I'm on. Oh, I have to use Windows for work? Not to worry, tmux runs in WSL2, as do most of my preferred tools, so I feel mostly at home even though I normally really dislike Windows.
- It's scriptable. Read `man tmux` and use your imagination!
Notwithstanding any of that, there are cons, the most apparent one being that I am limited to text-based tools this way. An example of this: getting images to work in tmux, though many modern terminal emulators support them, is a huge pane, so I haven't bothered.
I think my problem is when I realize that I had unsaved changes open in a different neovim instance. If the file was not dirty in any other open neovim instances then I don't have the same problem.
Various apps already do this, if you find yourself reading lots of PDFs on phones. For example I use PDF Expert on iOS to do this. It’s not perfect—depending on the quality of the PDF there might be weird artifacts in the reflowed text (e.g., “ff” ligatures getting mapped to the Unicode ligature character “Latin Small Ligature Ff (U+FB00)” which breaks copy/paste/search).
But for PDFs that are otherwise really hard to read on a phone, it’s well worth it.
Very neat! I was delighted to see that "drag to side of screen" tiled the window using that half of the screen. Then I opened a new window, and I was (unreasonably) surprised to see that there wasn't a tiling window manager that put my second window in the other half of the screen.
I’ve been loving Maestral so far, it’s just the syncing, none of the other stuff. It has some downsides (it can’t upload symlinks but it can download them, and it doesn’t have LAN sync) but it’s super lightweight.
This response article makes the opposite claim: that it was Derek Thompson who played fast and loose with sources. As evidence, the Stoller piece cites testimonials from both Lance Lambert and Luis Quintero saying that in their interviews with Thompson, they never went on record as repudiating claims made in the BIG newsletter.
I could commute to the office every day with nothing but the shirt on my back and the phone in my pocket if my work-provided device were my phone. I would not need a backpack or briefcase, which means that for any errands or dinner plans after work, I don't fumble with a backpack. I already leave my preferred keyboard+mouse at my office desk.
If I needed to fly to another office for a business trip, same story: I could sit down at any desk, grab a spare bluetooth keyboard from IT (if there isn't already one on the bookable desk).
If I'm over at a friend's house for dinner and get paged, I could just ask to sit down at their desk and plug my phone in.
I would love to not have to carry a laptop around to all the places that I do today.
I remember being somewhat sold on this story by the PinePhone, but it seems like it might not be possible to buy one new nowadays.
Having just looked up the PinePhone again for the first time in a while, it does look like the Ubuntu Touch project is still alive and kicking, and compatible with some modern commercially available phones!
The main thing preventing me from daily-driving a non-standard Android phone/distribution is access to mobile banking apps. I have yet to check for myself, but as I understand it, having an unlocked bootloader means banking apps will consider the device "compromised" and refuse to work.
This is cool, I learned about `:cb` today (populate quickfix from current buffer)
I find that if I’m already piping into a buffer, I just leave it as a buffer. Vim’s gf and gF keybindings let me jump to the filename under the cursor, and it being a buffer makes it easier for me to edit (reorganize, group, further filter, etc).
I do think people undervalue the quickfix buffer though!