There are definitely a lot more, I think; they just don't all get posted there. I know I've seen various FOSS jobs, but they get posted or talked about on IRC, on mailing lists for a specific project, etc.
But you're right to say there aren't a ton relative to non-FOSS jobs.
tmux changed screen's C-a prefix to a more sensible C-b. C-a is pretty universally used as "jump to beginning of line", which for screen meant that you'd have to type "C-a a" instead all the time. Seriously annoying, and therefore lots of screen configs changed the prefix away from C-a for that reason.
Okay, but C-b is "go back one char" and that's also pretty common? I'd personally lean towards saying maybe either C-z or C-s are better? Because suspending a process is a lot less useful when you've got a full multiplexer, and C-s is just... I guess somebody uses it, but I don't think I've ever used it on purpose and it's kind of annoying to accidentally freeze your terminal (and again, tmux has an actual scrollback mode which seems like it covers that use).
Good point. Though C-s is fine unless you have software flow control enabled, and it's used very often in emacs, and in anything that follows its key bindings, for incremental search forward. (It's not used in the shell even though that typically implements emacs bindings, because there you usually want to search incrementally backward, i.e. C-r.) I think C-z is the perfect candidate for the reasons you mentioned.
Left arrow is not where my fingers are, but ctrl-b is.
I would have to move my whole hand over to the arrow keys, or I can use the edge of my right hand (on a full size keyboard, not laptop) to hit ctrl, and hit "b" with my left index finger.
I use F2. Unlike the arrow keys, I don't have to move my hands and I can hit it reliably. I use it enough that I want it within reach, but not so often that I need it on the home row, and all Ctrl combinations there are already tied to emacs operations in my mind.
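For anyone curious, rebinding the prefix is only a couple of lines in ~/.tmux.conf (a sketch; adjust the key to taste):

```
# ~/.tmux.conf: move the prefix from C-b to F2
unbind C-b
set -g prefix F2
bind-key F2 send-prefix
```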
Same reasons you might need Ctrl-a instead of Home...
1) Some keyboards don't have it, or have it in an awkward place. Most Android on-screen keyboards don't have it (good time to plug Hacker Keyboard). The gestures on BlackBerry (e.g. the Android-based models) physical keyboards act like scroll-wheel movements rather than cursor keys.
2) Some shells, systems, terminal emulators, TERM= settings, etc. just don't handle cursor or Home/End keys in the console, and instead splat out garbage like: ^[[C^[[D^[[7~^[[5~
What, and move my entire hand to the far side of the keyboard like some sort of savage? (But seriously, I actually do prefer avoiding moving my hands for ergonomic reasons.)
You should do whatever you want with your hands, but you should be aware that not moving your hands is bad for ergonomics. It promotes static posture, and staying in one position for too long can cause muscle stiffness and strain. Moving around promotes blood flow and reduces fatigue. Consistently repeating a small set of motions (i.e. never moving your hands) is what leads to RSI.
Of course, a bad setup is a bad setup, and if you're moving your hands a lot because your workspace is poorly set up, you'll also have issues.
For best results, you'll want to keep your hand movements natural and comfortable and your tools within easy reach. Take short breaks to move your hands, as well as your entire body.
What about effortless manipulation? I have CapsLock mapped to Control, and shell manipulation has become easier with readline keybindings. I can touch-type, but the extra keys are always in a different place, so I don't bother learning their placement. So I do rest my hands, but I prefer those keybinds because the already-short typing burst becomes even shorter.
I find most of shellcheck's opinions quite arbitrary.
Wider shell compatibility is a good reason to use backticks. Avoiding nesting is always possible. Quoting may be a concern in more complex usage. But yes, I know well, and often also use $( ) syntax.
Making a lot of noise about the use of (the shellcheck author's) non-preferred syntax, which works perfectly fine in a given context, is a design flaw that renders shellcheck useless to me. I'm perfectly capable of finding an opinionated pedant on my own who will also criticize the syntax I use (with little or no justification) and offer no actual help...
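To make the nesting point above concrete (a toy example; `hi` is just placeholder output):

```shell
# $( ) nests without any escaping:
dollar=$(echo $(echo hi))
# backticks require escaping the inner pair:
backtick=`echo \`echo hi\``
echo "$dollar $backtick"   # hi hi
```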
The critical-thinking part of me loves seeing takes like this, which immediately rev it up and get it to wonder whether its initial reaction is really correct. Thank you.
A slice operation s[i:] seems like it should be little more than an ADD instruction for a registerized slice where i is known to be in bounds, but a surprising little detail is that when i==cap(s) we really don't want to create an empty slice whose data pointer points one past the end of the original array, as this could keep some arbitrary object live. So the compiler generates a special branch-free sequence (NEG;SUB;ASR;AND) to compute the correct pointer increment, ((i - cap) >> 63) & (8 * i).
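A rough model of that branch-free sequence in plain Go (sliceOffset is a made-up name; the real compiler emits this as machine code, and this sketch assumes 8-byte elements and 64-bit ints):

```go
package main

import "fmt"

// mask is all ones when i < cap (keep the 8*i byte offset) and zero
// when i == cap (clamp the offset to 0 so the data pointer never
// points one past the end of the array).
func sliceOffset(i, capacity int64) int64 {
	mask := (i - capacity) >> 63 // arithmetic shift: -1 or 0
	return mask & (8 * i)
}

func main() {
	fmt.Println(sliceOffset(3, 5)) // 24: the normal byte offset of 3 elements
	fmt.Println(sliceOffset(5, 5)) // 0: i == cap, offset clamped
}
```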
Only really a gotcha if you pass a slice into a function and expect to see modifications in that slice after the function completes. It's helpful to remember that Go passes by value, not reference.
> Only really a gotcha if you pass a slice into a function and expect to see modifications in that slice after the function completes. It's helpful to remember that Go passes by value, not reference.
Slices are passed partly by value (the length), partly by reference (the data).
func takeSlice(s []int) {
    slices.Sort(s) // stdlib "slices" package; sorts the caller's data in place
}
From your explanation, you would expect that to not mutate the slice passed in, but it does.
This can have other quite confusing gotchas, like:
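Presumably something like the classic append gotcha, where whether the caller sees a write depends on spare capacity (addOne is a made-up example):

```go
package main

import "fmt"

// If the slice has spare capacity, append writes into the shared
// backing array; otherwise it allocates a new one and the caller
// never sees the subsequent write.
func addOne(s []int) {
	s = append(s, 1)
	s[0] = 99 // may or may not alias the caller's array
}

func main() {
	a := make([]int, 1, 2) // len 1, cap 2: append reuses the array
	addOne(a)
	fmt.Println(a[0]) // 99: the write was visible

	b := make([]int, 1, 1) // len 1, cap 1: append reallocates
	addOne(b)
	fmt.Println(b[0]) // 0: the write went to a new array
}
```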
Slices are passed only by value. It's just that the value is a struct containing a reference to the data. Once one understands that, the rest makes perfect sense.
I can see why it trips up newcomers, but it feels pretty basic otherwise.
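A sketch of what that value semantics means in practice (mutate is a made-up name):

```go
package main

import "fmt"

// A slice value is a small header: {data pointer, length, capacity}.
// Passing a slice copies that header. Writes through the pointer are
// shared with the caller; changes to the header copy itself are not.
func mutate(s []int) {
	s[0] = 42 // shared: writes the backing array the caller also sees
	s = s[:1] // local: only this copy of the header shrinks
	fmt.Println("inside:", s) // inside: [42]
}

func main() {
	x := []int{1, 2, 3}
	mutate(x)
	fmt.Println("outside:", x) // outside: [42 2 3]
}
```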
The fact that I can pass a slice to a func 'by value' and mutate the source slice outside the func is already surprising behavior to most people. The fact that it MIGHT mutate the source slice depending on the slice capacity is the part that really drives it home as bad ergonomics for me.
Overall I enjoy working with go, but there are a few aspects that drive me up the wall, this is one of them.
I think the key thing missing from go slices is ownership information, especially around sub-slices.
Make it so you can create copy-on-write slices of a larger slice, and a huge number of bugs go away.
Or do what Rust did, except at runtime, and keep track of ownership:
s := []int{1, 2, 3}
s[0] = 0 // fine, s owns data
s1 := s[0:2] // ownership transferred to s1, s is now read-only
s1[0] = 1 // fine, s1 owns data
s[0] = 1 // panic or compiler error, s1 owns data, not s
With, of course, functions to allow multiple mutable owners in cases where that's needed, but it shouldn't be the default.
I could have worded it better, but yes, slices have footgun potential, but they're simple to work with once you know how they work (and maps fall into the same category).
It is a surprisingly hard thing to implement well. I have no idea how many times I implemented slice-like things in C (back in the 1990s-2000s when I mostly wrote C), and it was one of those things I never managed to be happy with.
Good point. As for C and slices, I doubt that many care at this point. Many will use C alternatives that have slices, or are long-time C users who just deal with it.
An expressive type system also often means slower build times. I dislike working with Rust for this exact reason.
While most people highlight the difficulty of picking up the syntax, I find Rust to be an incredibly tedious language overall. Zig has a less expressive type system, but it compiles much faster (though not as fast as Go). I like what Zig and Odin folks are doing over there.
I like the balance Go strikes between developer productivity and power, though I dearly miss union types in Go.
An expressive type system absolutely, positively, unequivocally does not imply slower build times (especially with a Church-style type system). There are plenty of programming languages with advanced type systems which compile extremely quickly, even faster than Go, for example OCaml.
Don't make the fallacy of conflating Rust's slow compile time with its "advanced" (not really, it's 80's tech) type system. Rust compilation is slow for unrelated reasons.
Old doesn't mean non-advanced. GraalVM is based on a paper (Futamura) from fifty years ago. Off the top of my head I can't think of many language features younger than the eighties—maybe green threading? That would be surprising but might fit. I suppose you could also say gradual typing. Haskell has many recent innovations, of course, but very few of those have seen much use elsewhere. Scala has its implicits, I guess, that's another one.
Personally, I write java at my day job and the type system there makes me loooong for rust.
I prefer rust to all of them, but I also come from a very systemsy background. Plus it has the benefit of being much easier to embed inside or compose around basically any runtime you'd like than managed code, which is why I chose rust rather than basically any managed language.
But, it's just a tool, and the tools I choose reflect the type of stuff I want to build. The JVM is extremely impressive in its own right. You're just not going to find any one runtime or ecosystem that hits every niche. I'm happy to leave the language favoritism to the junior devs—for the vast majority of situations, what you're building dictates which language makes the most sense, not vice versa.
As a start, Go could separate container and slice types, the way C# did it with T[]/List<T>/other and Span<T>/Memory<T>. No lengthy build process required.
I'm not deeply familiar with those C# types, but I think maybe it already does. An array, which includes the size of itself in its type so that a four-element array is not the same type as an eight-element array, is already in Go. Go's language affordances make it easy to just have slices without the underlying arrays, since they're generally not useful on their own, but you can take arrays and slice into them if you like.
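A quick illustration of that distinction (toy values):

```go
package main

import "fmt"

func main() {
	var a [4]int // array: the length is part of the type ([4]int and [8]int are distinct)
	a[0] = 7

	s := a[:2] // slice backed by the array's storage
	s[1] = 9   // writes through to a

	fmt.Println(a, s) // [7 9 0 0] [7 9]
}
```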
Yeah, but at the same time, I find C# code a sigil soup. Go makes a different tradeoff.
I've been involved in a few successful large-scale projects and never felt like the type system of Go was holding me back too much. Sure, the error handling could be better, and union types would make interface munging easier, but the current state after generics isn't too bad.
Not sure if that's really a proof, as it could be the exact combination of language features that makes up the slowness. For example traits, non-ordered definitions in compilation units and monomorphization probably don't help. GHC also isn't a speed demon for compilation.
But sure, LLVM and interfacing with it is quite possibly a big contributor to it.
Haskell isn't the only language around with complex type systems.
However, it is actually a good example regarding tooling, as the Haskell ecosystem has interpreters and REPL environments available, for quick development and prototyping, something that is yet to be common among Rustaceans.
Indeed, but its compile times aren't much better than LLVM, at least one year ago.
Ideally we would be having the F# REPL/JIT, plus Native AOT for deployment, as comparable development workflow experience.
Naturally F# was chosen as example, because that's your area. :)
Not being negative per se, I also would like to have something like Haskell GHCi, or OCaml bytecode compiler, as options on rustup, so naturally something like this might eventually come.
Based on a first-hand account I read (but cannot source offhand), Rust's slow compiles are because any time there was a tradeoff involving compile time at the expense of something else, they'd always choose that something else. Not because they hated fast compilation; I guess it just wasn't high on their priorities.
I for one would be happy if the everyday software I use wasn't complete rubbish. It won't make the world meaningfully better, but it certainly won't make it worse. It's a start.
Why isn't there a browser which you can just install to have a good Internet experience? It would have to update itself every day with a new definition of "good" due to the arms race with advertisers.
It would have to block all ads and trackers and other bad JavaScript; automatically redirect YouTube to Invidious and so on, while seamlessly keeping all YouTube features; automatically open the chronological tab instead of the recommended tab on most social media; be blatantly illegal to possess; and update itself through Tor so nobody can do anything about it.
I agree. I detest that in the world I live in, there's an inordinate amount of human effort that goes into diverting my attention to make me behave against my own best interests: buying something, or even just being offered the opportunity to notice something buyable, either of which enriches someone else.
If all of that effort was focused on something meaningful, rather than something 'profitable', I wouldn't despise myself or the rest of humanity.
There's a massive difference between making the world a better place versus making the world a better place for a select few. Late stage capitalism has a massively heavy emphasis on the latter.
And now the USA has effectively reverted to monarchism. [1]
do what I do. every time grammarly interrupts my YouTube viewing, I say out loud clearly to myself "fuck off grammarly", and then I use YouTube-download to get the video without the ads.
every time Facebook shows me an advert for solar panels, I close Facebook (I only go for the pictures and stories of cats getting in awkward situations).
I've amassed a ton of books for my old age - some day I'm going to go offline and never go back online.
I actually pay for YouTube not to show me advertising, but I'm also a fan of uBlock Origin and yt-dlp [1], and also of physical media, though I do appreciate ebooks a lot.
Imagine you had a team of 10 average (not bad, but also not great) software engineers for a year. Choose between:
* Get them to implement the kubernetes ssd/disk attachment plugin for the Adobe cloud offering.
* Get them to implement the remote access, scheduling, and status control of a biochemistry centrifuge.
Both missions are boring, and neither will change much how the world runs.
Well, I lied. Because of incentives, it's actually a choice between 15 decent engineers for the Adobe cloud or 1-2 cheap & bad engineers for the centrifuge (I have worked in embedded; I know software is an afterthought in that industry).
And now let's make it more personal: would you rather work for Adobe cloud for 135k per year, or for Centrifuges R Us for 85k per year? Would your spouse agree with your decision, especially after your 3-year-old decided to grab the TV and throw it to the floor?