This is how I read it too. What's more, it looks like they're interested in going senior -> staff; at all Large Tech Companies, senior is a perfectly reasonable "terminal" role for a SWE, and many SWEs don't want to get promoted to staff. (Staff SWE is a different job from senior SWE; you might not want to do that job, and that's typically fine.)

So I think the lesson here is wrong too - when the manager said

> These tasks aren’t business priorities and had no impact on customers and other teams

that didn't mean they were worthless tasks - just that they weren't business priorities and had no impact on customers or other teams. Which is probably true(ish - I would have phrased it very differently if I were their manager).

Improving the release process is great, and helps the team a ton - and indirectly helps customers by enabling the team to ship faster. This is incredibly valuable! And at the right scale, it can be a staff job: at my Large Tech Company, I know several people who have been promoted to staff SWE for this kind of work, but it's for systems that hundreds of SWEs work on. I also know people who have been promoted to senior SWE for this kind of work - those are systems that tens of SWEs work on. It sounds like this example was more like the latter: this person was doing a good senior SWE job, and the manager saw no reason to course-correct, since they'd given no signal that they wanted to be promoted.


I agree, and even more so, it's easy to see the (low!) cost of throwing away an implementation. I've had the AI coder produce something that works and technically meets the spec, but that I don't like for some reason, and that isn't really workable to massage into a better style.

So I look at the token usage, see that it cost 47 cents, just `git reset --hard`, and try again with an improved prompt. If I had hand-written that code, throwing it away would have been much harder.


A very pedantic point, but merge keys are not part of the YAML spec [1]! Merge keys are a custom type [2], which may optionally be applied during the construction phase of loading. I definitely wouldn't say that merge keys are integral to anchors.

(Also, personal bias: I think merge keys are really bad because they're ambiguous, and I haven't implemented them in my C++ YAML library (yaml-cpp) because of that.)
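To make the ambiguity concrete: whether `<<` performs a merge or is just an ordinary key depends entirely on whether the loader opts into the merge type, so the same document has two valid readings. A minimal sketch (key names made up):

    # Hypothetical config reusing a base mapping via a merge key
    base: &base
      retries: 3
      timeout: 10
    derived:
      <<: *base    # merge type enabled: pulls in retries and timeout
      timeout: 30  # local key wins over the merged one

With the merge type enabled, `derived` loads as `{retries: 3, timeout: 30}`; without it, `derived` is a two-entry map whose first key is the literal string `<<`, mapped to the `base` node.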

[1]: https://yaml.org/spec/1.2.2/

[2]: https://yaml.org/type/merge.html


Yeah, I find the situation here very confusing: I agree that merge keys are not part of YAML 1.2, but they are part of YAML 1.1. The reason they don't appear in the "main" 1.1 spec itself is that they were added to 1.1 after 1.1 was already deprecated [1].

[1]: https://ktomk.github.io/writing/yaml-anchor-alias-and-merge-...


At one point at Google, there was a huge chunk of code that was hard to understand, probably at the wrong place in the network stack, and stubbornly hard to change. And it kept growing, despite our efforts. We renamed it "[Foo]Sorcery" (this was about 10 years ago); people stopped trying to add to it, and periodically someone would come in and remove parts of it, all thanks (I think) to the goofy (and somewhat scary) name.


How so?


1) Researching the problems far more than the solutions. This creates the impression that these problems don't have solutions when in fact they do.

2) Treating Deudney as authoritative in the space field when he is, at best, a quack who writes bad science-fiction scare stories and tries to pass them off as non-fiction.


Agreed, Dennett hit all of the high points of the who-am-I philosophy in a really entertaining story. It also reminded me of a great Greg Egan story, Learning To Be Me:

https://philosophy.williams.edu/files/Egan-Learning-to-Be-Me...

Egan focuses on just one aspect of the philosophical dilemma, and both he and Dennett touch on the horror of it in delightful ways.


Yes! I absolutely loved Egan's idea of the Jewel, and of course it turns out Dennett predates him by about a decade.


Near the end of Dennett's essay, this is exactly the story I was thinking of.


This was a very enjoyable read. Thanks for sharing.


This reminds me of the fantastic book, A Canticle for Leibowitz, by Walter Miller. He has a similarly pessimistic take.


The author mentions “The Inner Game of Tennis” as an example of how to train in images and feelings, not words, and I highly recommend it to basically anyone, not just tennis players.

If you're doing anything where you might hear "don't overthink it", then it's for you. It was recommended to me by a friend who played competitive foosball; I read it as instruction for playing chess; and later, after starting to play tennis, I read it again. I think it's equally applicable to all of those.


Do you mind expanding on how you applied it to chess? I'm curious how you translated it to that domain; watching the seams of the tennis ball doesn't seem to translate in an obvious way to me.

I've read the book, and I'm definitely one of those who tend to overthink. I tried reading one of the other books in the series and found it not nearly as helpful.


Here’s an example for chess: I found that I was repeating the same lines and the same ideas over and over during matches. One of the key points in the book is to simply observe what you’re doing without judgment. I realized that I’d fixate (especially under time pressure) on an idea that didn’t work.

I tried working on this with deliberate practice on cycling through different areas of the board. There’s a mental feeling (not too different from the physical feeling of swinging a racket) of working through a line, and then moving to another area and trying a different line.


For another take, there's "Thinking, Fast and Slow". Along with "The Happiness Hypothesis", these three books all talk about a similar split into two ways of thinking.


I was on board for points 1 and 2, but completely disagree with 3:

> The components might look wrong when rendered. Tests are very bad at this. We don't want to go down the esoteric and labor-intensive path of automated image capture and image diffing.

This is the main reason we have tests for our (Angular) TypeScript at all; all the tests roughly look like:

1. set up component with fake data

2. take screenshot and compare with golden

3. (maybe) interact with component and verify change

These are super easy to write, and also easy to review: most review comments are on the screenshots anyways. And since the goldens are checked in, it’s easy to see what different parts of the app look like without starting it up.
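For concreteness, here's roughly what such a test could look like using Playwright's built-in screenshot assertions (a minimal sketch, not our actual setup; the harness route, fixture name, and golden filenames are all made up):

    import { test, expect } from '@playwright/test';

    test('widget matches golden', async ({ page }) => {
      // 1. Set up the component with fake data (hypothetical harness route).
      await page.goto('/harness/widget?fixture=basic');

      // 2. Take a screenshot and compare it with the checked-in golden.
      await expect(page).toHaveScreenshot('widget-basic.png');

      // 3. (Maybe) interact with the component and verify the change.
      await page.getByRole('button', { name: 'Expand' }).click();
      await expect(page).toHaveScreenshot('widget-expanded.png');
    });

When a UI change is intentional, regenerating the goldens is one command (`npx playwright test --update-snapshots`), and the new images show up in the review diff.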


In my experience, bitmap comparison testing is pretty hard to keep going unless you have dedicated infrastructure: a service for maintaining the bitmaps and updating them when needed, a nice reporting system that shades the difference area, and people with enough time to review the results, figure out what each difference means, and determine root cause. I'm sure some of that has become easier over time with off-the-shelf tooling, but since I'm still seeing bespoke systems out there doing it, I doubt it's become turnkey easy yet.
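The diff-shading part, at least, has off-the-shelf pieces now; here's a sketch using the pixelmatch library (my choice of tool, not anything standard; it assumes two same-sized PNGs, and the filenames are hypothetical):

    import * as fs from 'fs';
    import { PNG } from 'pngjs';
    import pixelmatch from 'pixelmatch';

    const golden = PNG.sync.read(fs.readFileSync('goldens/widget.png'));
    const actual = PNG.sync.read(fs.readFileSync('output/widget.png'));
    const { width, height } = golden;
    const diff = new PNG({ width, height });

    // Fills `diff` with a copy of the image where differing pixels are shaded.
    const changed = pixelmatch(golden.data, actual.data, diff.data,
                               width, height, { threshold: 0.1 });
    fs.writeFileSync('report/widget-diff.png', PNG.sync.write(diff));
    console.log(`${changed} pixels differ`);

The hard parts are everything around that: triage, root-causing, and keeping the goldens current.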

It's also not something you want to do until the UI has solidified for that spot--which sometimes never happens, for some apps. It also has the issue that you can often only pixel-compare two shots from the same rendering system--that is, the same browser version if you're testing a browser app, or the same graphics drivers and rendering subsystem version if you're native. Frequently that means testing against frozen reference environments that become increasingly less relevant to in-field behavior over time, and that are also a maintenance load to qualify and update.

When thinking of the "test pyramid" and why, for example, UI tests sit at the tip (high fragility, low specificity, high effort to triage and maintain), I'd put bitmap testing at the very tippy-top, on an antenna above the pyramid. They're useful, but they have a large hump to set up, a long tail of rather heavy maintenance, a failure can mean almost anything under the hood, and they churn like a mofo during any flow or visual refactor whatsoever, with no possible way to abstract them to mitigate that.

At that point, it's not really about usefulness anymore, and more about the opportunity cost of not doing something different with the time. I think they're usually pretty questionable unless you're actually testing a rendering system where the bitmap is the primary output. A custom-rendered component ancillary to the application probably wouldn't meet that bar in most cases, unless it were complex and central enough to merit the operational risk, and stable enough to mitigate it.


FWIW, I turned this option on (to not quit on Cmd+Q) a long time before it was the default, and it's saved me so many times - I absolutely love it. Before, I would constantly try to hit Cmd+W, accidentally tap Q instead, and the whole browser would close.

