> I really can't think of anything that comes close in terms of [...] developer experience.
Of all the languages that I have to touch professionally, C# feels by far the most opaque and unusable.
Documentation tends to be somewhere between nonexistent and useless, and MSDN's navigation feels like it was designed by a sadist. (My gold standard would be Rustdoc or Scala 2.13-era Scaladoc, but even Javadoc has been.. fine for basically forever.) For third-party libraries it tends to be even more dire and inconsistent.
The Roslyn language server crashes all the time, and when it does work.. it doesn't do anything useful? Like cross-project "go-to-definition" takes me to either a list of members or a decompiled listing of source code, even when I have the actual source code right there! (I know there's this thing called "SourceLink" which is.. supposed to solve this? I think? But I've never seen it actually work in practice.)
Even finding where something comes from is ~impossible without the language server, because `using` statements don't mention.. what they're even importing. (Assuming that you have them at all. Because this is also the company that thought project-scoped imports were a good idea!)
And then there's the dependency injection, where I guess someone thought it would be cute if every library just had an opaque extension method on the god object, that didn't tell you anything about what it actually did. So good luck finding where the actual implementation of anything is.
I almost exclusively work in C# and have never experienced the Roslyn crashes you mentioned. I am using either Rider or Visual Studio though.
> Like cross-project "go-to-definition" takes me to either a list of members or a decompiled listing of source code, even when I have the actual source code right there!
If these are projects you have in the same solution then it should never do this. I would only expect this to happen if either symbol files or source files are missing.
You can back up a debunking with receipts or reputation. Ideally, both.
You and anotherlogin448 have neither, but also show incredible aggression towards anyone pointing that out.
Your confidence might actually be warranted, but there's no reason for any one of us to take you on your word, and neither of you have given anything else.
But do you call that latter thing you do “an em-dash”? Do you tell a peer “You should put an em-dash here” when what you mean is a “space en-dash space”?
A couple of years ago the OCaml and Julia languages already had to deal with a content farm that created wikis for them, filled them with LLM-generated, blatantly wrong or stupidly low-quality content, and SEOed its way above actual learning materials. Cue the newbies to these languages being incredibly confused.
This at least tries to generate the text out of the actual project, but I'm pessimistic and think it'll cause similar confusion.
The former. If you intend to hard-fork then Git's model is already fine. If you're soft-forking and want to model your divergence explicitly then Lappverk might be for you.
Yeah, I mentioned Quilt in the post! Lappverk is effectively an exercise in "What if Quilt, but you could interact with it using any Git tooling, rather than Quilt's half-baked custom VCS?".
I haven't, no. But as far as I can tell from the documentation, it looks more like an alternative to stgit (with a similar lack of history or collaboration support)?
stgit is similar in that it sits on top of git, but it's not the same workflow. git-spice has a branch per feature, each based on another. It's more git-like than quilt-like.
What you get is git-styled patch-series development: branches on branches, where each branch maintains the history of its feature.
A git-spice workflow is compatible with GitHub-style PRs where a PR depends on another PR.
Or pair git-spice with format-patch to produce patches you can share with developers who prefer patch files. Or take patches from someone else and import each patch as a branch, then let git-spice track the stack position.
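That patch-file interop is plain git underneath. A minimal sketch of the format-patch / am round-trip in a throwaway repo (all names here — `demo`, `feature`, `f.txt` — are illustrative):

```shell
# Build a throwaway repo with a one-commit patch series on a feature branch.
git init -qb main demo && cd demo
git -c user.name=a -c user.email=a@b commit -q --allow-empty -m base
git switch -qc feature
echo one > f.txt && git add f.txt
git -c user.name=a -c user.email=a@b commit -qm "add f.txt"

# Export every commit on feature but not on main as mailbox-format patch files:
git format-patch -o patches/ main..feature

# A collaborator can replay those commits one by one with git am:
git switch -qc feature-copy main
git -c user.name=a -c user.email=a@b am -q patches/*.patch
```

Each patch file becomes a real commit again on the receiving side, author and message intact, which is what makes the "import each patch as a branch" workflow possible.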
> fork the repo (at whatever tag makes sense), then periodically sync with the latest code for that version.
Yeah, this is the workflow that Lappverk is trying to enable.
The problem is that neither of Git's collaboration models works well for this problem. Rebasing breaks collaboration (and history for the patchset itself), and merging quickly loses track of individual patches. Lappverk is an attempt to provide a safer way to collaborate over the rebase workflow.
But you can always create a new branch before rebasing if you want to keep the old revision's metadata, or run git format-patch if you don't want a bunch of branches lying around. So in what ways is it safer than that?
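For concreteness, that pre-rebase snapshot looks something like this (a minimal sketch in a throwaway repo; the names `work`, `patches-v1`, etc. are illustrative):

```shell
# A tiny repo: one upstream commit, one local patch on top.
git init -qb main repo && cd repo
echo a > upstream.txt && git add upstream.txt
git -c user.name=a -c user.email=a@b commit -qm upstream-1
git switch -qc work
echo p > patch.txt && git add patch.txt
git -c user.name=a -c user.email=a@b commit -qm my-patch

# Upstream moves on underneath us.
git switch -q main
echo b >> upstream.txt && git add upstream.txt
git -c user.name=a -c user.email=a@b commit -qm upstream-2
git switch -q work

# Snapshot the pre-rebase tip, then rebase freely.
git branch patches-v1
git -c user.name=a -c user.email=a@b rebase -q main

# The old series remains an ordinary branch you can push, or export as files:
git format-patch -o old/ main..patches-v1
```

The snapshot branch keeps the old tip reachable and pushable; nothing here is beyond stock git, which is exactly the question being asked.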
The difference is that git rebasing is a destructive operation, you lose track of the old version when you do it. (Yes, there's technically the reflog.. but it's much less friendly to browse, and there's no way to share it across a team.)
Maybe that's an okay tradeoff for something you use by yourself, but it gets completely untenable when you're multiple people maintaining it together, because constantly rebasing branches completely breaks Git's collaboration model.
I worked at a place that was allergic to contributing patches upstream. We maintained a lot of internal forks for things and had no problem collaborating.
You don't need to push the rebased branch to the same branch on your remote, if that's an issue (although I don't see how it is).
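For example, each rewritten revision can be published under its own name on the remote (sketch with a local bare repo standing in for the shared remote; all names illustrative):

```shell
# A local bare repo stands in for the shared remote.
git init -q --bare origin.git
git init -qb main repo && cd repo
git remote add origin ../origin.git
git -c user.name=a -c user.email=a@b commit -q --allow-empty -m v1
git push -q origin main:patches-v1      # publish the first revision

# After rewriting history locally, push the new tip under a new name,
# leaving patches-v1 untouched on the remote:
git -c user.name=a -c user.email=a@b commit -q --amend --allow-empty -m v2
git push -q origin main:patches-v2
```

No force-push is needed because each revision lands on a fresh remote branch, so collaborators never see a branch move out from under them.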
Maybe this is a case of "Dropbox is just rsync", but I feel like just learning git and using it is easier than learning a new tool.
> I feel like just learning git and using it is easier than learning a new tool
I would agree if this "new tool" we're talking about weren't just a simple wrapper over existing git commands. You can learn it in its entirety, including how it works (not just how to use it), in half an hour or less.
We do this for some of the components that are shared between Servo and Firefox. Firefox is upstream, and on the Servo side we have automated and manual syncing. The automated syncing mirrors the upstream `main` branch to our `upstream` branch daily, without changes. The manual syncing rebases our changes on top of a new upstream version; this happens monthly, and each sync is pushed to a new branch to maintain history.
Between monthly syncs we push our own changes to our latest monthly branch (which also get manually sent upstream when we get a chance).
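The rough shape of that workflow, compressed into a single throwaway repo (every branch name here is an illustrative stand-in; the real setup mirrors from a remote Firefox repository):

```shell
# One upstream commit to start from.
git init -qb main repo && cd repo
echo 1 > upstream.txt && git add upstream.txt
git -c user.name=a -c user.email=a@b commit -qm upstream-1

# `upstream` mirrors the upstream main branch unchanged (daily, automated).
git branch upstream main

# Our divergent patches live on their own branch.
git switch -qc our-patches upstream
echo p > patch.txt && git add patch.txt
git -c user.name=a -c user.email=a@b commit -qm our-patch

# Upstream moves on; the daily mirror catches up.
git switch -q main
echo 2 >> upstream.txt && git add upstream.txt
git -c user.name=a -c user.email=a@b commit -qm upstream-2
git branch -f upstream main

# Monthly manual sync: rebase the patches onto the new snapshot, on a fresh
# branch so the previous sync's history is kept.
git switch -qc sync-2024-06 our-patches
git -c user.name=a -c user.email=a@b rebase -q upstream
```

Because each monthly sync lands on its own branch, the pre-rebase series (`our-patches` here) stays reachable and shareable rather than being overwritten.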
I see — you’re doing more than “here’s a few patches to keep working across revisions”, you’re doing separate-path feature work on a different, actively-developed project.
To me that sounds like not a great idea, but if you must do it, I can see how this would be useful.
Yeah. For reference, this is a typical patchset for the project that motivated it.[0] Some of the patches are "routine" dependency upgrades, some of them are bugfix backports, some of them are original work that we were planning to upstream but hadn't got around to yet. Some are worth keeping when upgrading to a new upstream version, some aren't.
I agree that it's not ideal, but... there are always tradeoffs to manage.
> The difference is that git rebasing is a destructive operation, you lose track of the old version when you do it. (Yes, there's technically the reflog.. but it's much less friendly to browse, and there's no way to share it across a team.)
Just tag v1, v2, etc. Then push tags as normal for collaboration. git range-diff is excellent to inspect the changes if you want to see how a patchset changed.
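A minimal sketch of that tag-per-revision workflow (names illustrative):

```shell
# One upstream commit, one patch on top.
git init -qb main repo && cd repo
echo 1 > u.txt && git add u.txt
git -c user.name=a -c user.email=a@b commit -qm upstream-1
git switch -qc patches
echo p > p.txt && git add p.txt
git -c user.name=a -c user.email=a@b commit -qm my-patch
git tag v1                        # freeze revision 1 of the series

# Upstream advances; rebase the series and freeze the result as v2.
git switch -q main
echo 2 >> u.txt && git add u.txt
git -c user.name=a -c user.email=a@b commit -qm upstream-2
git switch -q patches
git -c user.name=a -c user.email=a@b rebase -q main
git tag v2

# Compare how the patchset itself changed between the two revisions:
git range-diff main v1 v2
```

The three-argument form of range-diff compares `main..v1` against `main..v2` patch-by-patch, so you see how each patch in the series evolved rather than a raw diff of the two tips.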