> How many times did you leave a comment on some branch of code stating "this CANNOT happen" and thrown an exception? Did you ever find yourself surprised when eventually it did happen? I know I did, since then I at least add some logs even if I think I'm sure that it really cannot happen.
I'm not sure what the author expects the program to do when there's an internal logic error that has no known cause and no definite recovery path. Further down the article, the author suggests bubbling up the error with a result type, but you can only bubble it up so far before you have to get rid of it one way or another. Unless you bubble everything all the way to the top, but then you've just reinvented unchecked exceptions.
At some level, the simplest thing to do is to give up and crash if things are no longer sane. After all, there's no guarantee that 'unreachable' recovery paths won't introduce further bugs or vulnerabilities. Logging can typically be done just fine within a top-level exception handler or panic handler in many languages.
Ideally, if you can convince yourself something cannot happen, you can also convince the compiler, and get rid of the branch entirely by expressing the predicate as part of the type (or a function on the type, etc.)
Language support for that varies. Rust is great, but not perfect. TypeScript is surprisingly good in many cases. Enums and algebraic data types are your friend. It'll never be 100%, but it sure helps fill a lot of holes in the Swiss cheese.
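To make that concrete, here's a minimal Rust sketch of the idea (the Connection type and its variants are made up for illustration): instead of a flag plus an optional address, where one combination "cannot happen", the enum makes the bad combination impossible to construct, and the compiler checks every match.

    use std::net::SocketAddr;

    // A bool flag + Option<SocketAddr> would allow the nonsensical
    // "connected but no address" state; this enum cannot express it.
    enum Connection {
        Disconnected,
        Connected { peer: SocketAddr },
    }

    fn describe(conn: &Connection) -> String {
        match conn {
            Connection::Disconnected => "not connected".to_string(),
            // No "this cannot happen" arm needed: the compiler verifies
            // every variant is handled, now and when variants are added.
            Connection::Connected { peer } => format!("connected to {}", peer),
        }
    }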
Because there's no such thing as a purely internal error in a well-constructed program. Every "logic error" has to bottom out in data from outside the code eventually-- otherwise it could be refactored to be static. Client input is wrong? Error the request! Config doesn't parse? Better specify defaults! Network call fails? Yeah, you should have a plan for that.
Not every piece of logic lends itself to being expressed in the type system.
Let's say you're implementing a sorting algorithm. After step X you can be certain that the values at locations A, B, and C are sorted such that A <= B <= C. You can be certain of that because you read the algorithm in a prestigious journal, or better, you read it in Knuth and you know someone else would have caught the bug if it was there. You're a diligent reader and you've convinced yourself of its correctness, working through it with pencil and paper. Still, even Knuth has bugs and perhaps you made a mistake in your implementation. It's nice to add an assertion that at the very least reminds readers of the invariant.
Perhaps some Haskeller will pipe up and tell me that any type system worth using can comfortably describe this PartiallySortedList<A, B, C>. But most people have to use systems where encoding that in the type system would, at best, make the code significantly less expressive.
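For what it's worth, in Rust that middle ground looks something like the sketch below (the function and index names are hypothetical): the invariant isn't in the type, but it is written down, checked in debug builds, and visible to the next reader.

    // After some step of the algorithm we believe v[a] <= v[b] <= v[c].
    // The assertion documents the invariant and checks it in debug/test
    // builds without trying to encode it in the type system.
    fn assert_pivot_order(v: &[i32], a: usize, b: usize, c: usize) {
        debug_assert!(
            v[a] <= v[b] && v[b] <= v[c],
            "expected v[{}] <= v[{}] <= v[{}], got {} / {} / {}",
            a, b, c, v[a], v[b], v[c]
        );
    }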
Yes, this has been my experience too! Another tool in the toolbox is property / fuzz testing. Especially for data structures, and anything that looks like a state machine. My typical setup is this:
1. Make a list of invariants. (E.g. if Foo is set, bar + zot must be less than 10.)
2. Make a check() function which validates all the invariants you can think of. It’s ok if this function is slow.
3. Make a function which takes in a random seed. It initializes your object and then, in a loop, calls random mutation functions (using a seeded RNG) and then calls check(). 100 iterations is usually a good number.
4. Call this in an outer loop, trying lots of seeds.
5. If anything fails, print out the failing seed number and crash. This provides a reproducible test so you can go in and figure out what went wrong.
If I had a penny for every bug I’ve found doing this, I’d be a rich man. It’s a wildly effective technique.
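For anyone who wants to see the shape of it, here's a minimal, self-contained sketch of that loop in Rust. The object under test is a toy "sorted, deduplicated Vec" and the RNG is a tiny xorshift so there are no dependencies; none of this is anyone's real harness.

    // Toy object under test: a sorted, deduplicated Vec<u32>.
    struct SortedSet {
        items: Vec<u32>,
    }

    impl SortedSet {
        fn new() -> Self {
            SortedSet { items: Vec::new() }
        }

        fn insert(&mut self, x: u32) {
            if let Err(pos) = self.items.binary_search(&x) {
                self.items.insert(pos, x);
            }
        }

        fn remove(&mut self, x: u32) {
            if let Ok(pos) = self.items.binary_search(&x) {
                self.items.remove(pos);
            }
        }

        // Step 2: validate every invariant you can think of. Slow is fine.
        fn check(&self) {
            assert!(
                self.items.windows(2).all(|w| w[0] < w[1]),
                "invariant violated: items must be strictly sorted"
            );
        }
    }

    // Tiny xorshift RNG so mutations are reproducible from a seed.
    struct Rng(u64);
    impl Rng {
        fn next(&mut self) -> u64 {
            self.0 ^= self.0 << 13;
            self.0 ^= self.0 >> 7;
            self.0 ^= self.0 << 17;
            self.0
        }
    }

    // Step 3: one run = init, then 100 random mutations, checking after each.
    fn fuzz_one(seed: u64) {
        let mut rng = Rng(seed | 1); // avoid the all-zero xorshift state
        let mut set = SortedSet::new();
        for _ in 0..100 {
            match rng.next() % 2 {
                0 => set.insert((rng.next() % 50) as u32),
                _ => set.remove((rng.next() % 50) as u32),
            }
            set.check();
        }
    }

    // Steps 4 and 5: try lots of seeds; on failure the panic from check()
    // plus the last printed seed gives a reproducible test case.
    fn main() {
        for seed in 0..1000 {
            println!("seed {}", seed);
            fuzz_one(seed);
        }
    }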
This is indeed a great technique. The only way it could be improved is to expand on step 3 by keeping a list of the random mutation functions called and the order in which they were called, then if the test passes you throw that list away and generate a new list with the next seed. But if the test fails then you go through the following procedure to "shrink" the list of mutations down to a minimal (or nearly minimal) repro:
1. Drop the first item in the list of mutations and re-run the test.
2. If the test still fails and the list of mutations is not empty, goto step 1.
3. If the test passes when you dropped the first item in the mutation list, then that was a key part of the minimal repro. Add it to a list of "required for repro" items, then repeat this whole process with the second (and subsequent) items on the list.
In other words, go through that list of random mutations and, one at a time, check whether that particular mutation is part of the scenario that makes the test fail. This is not guaranteed to reach the smallest possible minimal repro, but it's very likely to reach a smallish repro. Then in addition to printing the failing seed number (which can be used to reproduce the failure by going through that shrinking process again), you can print the final, shrunk list of mutations needed to cause the failure.
Printing the list of mutations is useful because then it's pretty simple (most of the time) to turn that into a non-RNG test case. Which is useful to keep around as a regression test, to make sure that the bug you're about to fix stays fixed in the future.
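In code, that shrinking pass is only a few lines. A sketch, assuming the harness has recorded the mutations as a list and that fails() replays a given list against a fresh object and reports whether the invariant check trips (both the Mutation type and fails() are hypothetical stand-ins); printing `required` at the end gives the shrunk mutation list described above.

    #[derive(Clone)]
    enum Mutation {
        Insert(u32),
        Remove(u32),
    }

    // Greedy shrink: not guaranteed to be minimal, but usually close.
    fn shrink(mutations: Vec<Mutation>, fails: impl Fn(&[Mutation]) -> bool) -> Vec<Mutation> {
        let mut required: Vec<Mutation> = Vec::new();
        let mut remaining = mutations;

        while !remaining.is_empty() {
            let candidate = remaining.remove(0);

            // Replay everything except the candidate mutation.
            let mut attempt = required.clone();
            attempt.extend(remaining.iter().cloned());

            if !fails(&attempt) {
                // The test passes without the candidate, so it is part of
                // the minimal repro; keep it. Otherwise it wasn't needed
                // and stays dropped.
                required.push(candidate);
            }
        }
        required
    }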
There is no inherent benefit in going and expressing that fact in a type. There are two potential concerns:
1) You think this state is impossible but you've made a mistake. In this case you want to make the problem as simple to reason about as possible. Sometimes types can help, but other times they add complexity when you have to force the problem to fit the type system.
People get too enamored with the fact that immutable objects or certain kinds of types are easier to reason about, other things being equal, and miss the fact that the same logic can be expressed in any Turing-complete language. These tools only give a net reduction in complexity if they are a good conceptual match for the problem domain.
2) You are genuinely worried about the compiler or CPU not honoring its theoretical guarantees -- in this case rewriting it only helps if you trust the code compiling those cases more for some reason.
I think those concerns are straw men. The real concern is that the invariants we rely on should hold when the codebase changes in the future. Having the compiler check that automatically, quickly, and definitively every time is very useful.
This is what TFA is talking about with statements like "the compiler can track all code paths, now and forever."
Sometimes the "error" is more like, "this is a case that logically could happen but I'm not going to handle it, nor refactor the whole program to stop it from being expressable"
Funny you should mention the floating point rounding mode, I actually had to fix a bug like that once. Our program worked fine, until you printed to an HP printer - then it crashed shortly after. It took forever to discover the cause - the printer driver was changing the floating point rounding mode and not restoring it. The fix was to set the mode to a known value each and every time after you printed something.
These are vanishingly unlikely if you mostly target consumer/server hardware. People who code for environments like satellites, or nuclear facilities, have to worry about it, sure, but it's not a realistic issue for the rest of us
Bitflips are waaay more common than you think they are. [0]
> A 2011 Black Hat paper detailed an analysis where eight legitimate domains were targeted with thirty one bitsquat domains. Over the course of about seven months, 52,317 requests were made to the bitsquat domains.
> Bitflips are waaay more common than you think they are... Over the course of about seven months, 52,317 requests...
Your data does not show them to be common - less than 1 in 100,000 computing devices seeing an issue during a 7 month test qualifies as "rare" in my book (and in fact the vast majority of those events seem to come from a small number of server failures).
And we know from Google's datacenter research[0] that bit flips are highly correlated with hard failures (i.e. they tend to result from a faulty DRAM module, and so affect a small number of machines repeatedly).
It's hard to pin down numbers for soft failures, but it seems to be somewhere in the realm of 100 events/gigabyte/year - and that's before any of the many ECC mechanisms do their thing. In practice, no consumer software worries about bit flips in RAM (whereas bit flips in storage are much more likely, hence checksumming DB rows, etc).
> ... A new consumer grade machine with 4GiB of DRAM, will encounter 3 errors a month, even assuming the lowest estimate of 120 FIT per megabit.
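Working that quoted estimate through: 4 GiB is 32,768 megabits, and 120 FIT (failures per 10^9 device-hours) per megabit gives 32,768 × 120 ≈ 3.9 million FIT, i.e. about 0.004 errors per hour, or roughly 3 per month - which is where the quoted number comes from.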
The guarantee offered by our hardware suppliers today is not "never happens" but "accounted for in software".
So, if you ignore it, and start to operate at any scale, you will start to see random irreproducible faults.
Sure, you can close all the tickets as user error or unable to reproduce. But it isn't the user at fault. Account for it, and your software has fewer glitches than the competitor's.
1 in 40,000 customer devices experiencing a failure annually is considerably better than 4 9s of reliability. So we are debating whether going from 4 9s to 5 9s is worth it.
And like, sure, if the rest of your stack is sufficiently polished (and your scale is sufficiently large) that the once-a-year bit flip event becomes a meaningful problem... then by all means do something about it.
But I maintain that the vast majority of software developers will never actually reach that point, and there are a lot of lower-hanging fruit on the reliability tree
A comment "this CANNOT happen" has no value on itself. Unless you've formally verified the code (including its dependencies) and have the proof linked, such comments may as well be wishes and prayers.
Yes, sometimes, the compiler or the hardware have bugs that violate the premises you're operating on, but that's rare. But most non pure algorithms (side effects and external systems) have documented failure cases.
> A comment "this CANNOT happen" has no value on itself.
I think it does have some value: it makes clear an assumption the programmer made. I always appreciate it when I encounter comments that clarify assumptions made.
`assert(false)` is pronounced "this can never happen." It's reasonable to add a comment with /why/ this can never happen, but if that's all the comment would have said, a message adds no value.
Oh I agree, literally `assert(false, "This cannot happen")` is useless, but ensuring message is always there encourages something more like, `assert(false, "This implies the Foo is Barred, but we have the Qux to make sure it never is")`.
Ensuring a message encourages people to state which assumptions were violated, rather than just flagging that some assumption (which one?) didn't hold.
debug_assert!() (and its equivalent in other languages, like C's assert with NDEBUG) is cursed. It states that you believe something to be true, but will take no automatic action if it is false; so you must manually implement the fallback behavior for when your assumption is false (even if that fallback is just falling through). But you can't /test/ that fallback behavior in debug builds, which means you now need to run your test suite(s) in both debug and release builds. While this is arguably a good habit anyway (although not as good a habit as just not having separate debug and release builds), deliberately diverging behavior between the two, and having tests that only work on one or the other, is pretty awful.
For example, I’m pretty sure some complex invariant holds. Checking it is expensive, and I don’t want to actually check the invariant every time this function runs in the final build. However, if that invariant were false, I’d certainly like to know that when I run my unit tests.
Using debug_assert is a way to do this. It also communicates to anyone reading the code what the invariants are.
If all I had was assert(), there’s a bunch of assertions I’d leave out of my code because they’re too expensive. debug_assert lets me put them in without paying the cost.
And yes, you should run unit tests in release mode too.
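As a small, self-contained illustration of that pattern (the merge function here is made up for the example): the pre- and post-condition checks add a full extra pass over the data, so they live behind debug_assert! and cost nothing in a release build, while still running under the unit tests.

    // Merge two sorted slices. The sortedness checks are linear scans --
    // cheap here, but the same pattern works for genuinely expensive
    // invariants (tree balance, heap property, cross-structure consistency).
    fn merge_sorted(a: &[i32], b: &[i32]) -> Vec<i32> {
        debug_assert!(a.windows(2).all(|w| w[0] <= w[1]), "input a must be sorted");
        debug_assert!(b.windows(2).all(|w| w[0] <= w[1]), "input b must be sorted");

        let mut out = Vec::with_capacity(a.len() + b.len());
        let (mut i, mut j) = (0, 0);
        while i < a.len() && j < b.len() {
            if a[i] <= b[j] {
                out.push(a[i]);
                i += 1;
            } else {
                out.push(b[j]);
                j += 1;
            }
        }
        out.extend_from_slice(&a[i..]);
        out.extend_from_slice(&b[j..]);

        // Postcondition: also only checked when debug assertions are on.
        debug_assert!(out.windows(2).all(|w| w[0] <= w[1]));
        out
    }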
There is no recovery. When an invariant is violated, the system is in a corrupted state. Usually the only sensible thing to do is crash.
If there's a known bug in a program, you can try and write recovery code to work around it. But it's almost always better to just fix the bug. Small, simple, correct programs are better than large, complex, buggy programs.
Correct. But how are you testing that you successfully crash in this case, instead of corrupting on-disk data stores or propagating bad data? That needs a test.
> Correct. But how are you testing that you successfully crash
In a language like rust, failed assertions panic. And panics generally aren't "caught".
> instead of corrupting on-disk data stores
If your code interacts with the filesystem or the network, you never know when a network cable will be cut or power will go out anyway. You're always going to need testing for inconvenient crashes.
IMO, the best way to do this is by stubbing out the filesystem and then using randomised testing to verify that no matter what the program does, it can still successfully open any written (or partially written) data. It's not easy to write tests like that, but if you actually want a reliable system they're worth their weight in gold.
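A sketch of what that looks like in practice, under heavy simplification: the "file" is just a Vec<u8> of length-prefixed records, a crash is modelled as truncating the bytes at an arbitrary point, and the record format and loader are invented for the example. The useful part is the final loop, which checks that every possible truncation still loads cleanly.

    // Toy format: each record is a 1-byte length prefix plus payload.
    fn append_record(file: &mut Vec<u8>, payload: &[u8]) {
        file.push(payload.len() as u8);
        file.extend_from_slice(payload);
    }

    // The loader tolerates a partially written trailing record by
    // dropping it instead of propagating garbage.
    fn load(file: &[u8]) -> Vec<Vec<u8>> {
        let mut records = Vec::new();
        let mut i = 0;
        while i < file.len() {
            let len = file[i] as usize;
            if i + 1 + len > file.len() {
                break; // torn final record
            }
            records.push(file[i + 1..i + 1 + len].to_vec());
            i += 1 + len;
        }
        records
    }

    #[test]
    fn survives_any_truncation() {
        let mut file = Vec::new();
        for r in 0..20u8 {
            append_record(&mut file, &vec![r; r as usize]);
        }
        // Simulate a crash after every possible number of bytes written.
        for cut in 0..=file.len() {
            let records = load(&file[..cut]);
            // Every record that does load must be intact.
            for (idx, rec) in records.iter().enumerate() {
                assert!(rec.iter().all(|&b| b == idx as u8), "record {} corrupted", idx);
            }
        }
    }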
I think the idea is that those asserts should never be hit in the first place, because the code is correct.
In reality, it's a mistake to add too many asserts to your code. Certainly not so many that performance tanks. There's always a point where, after doing what you can to make your code correct, at runtime you gotta trust that you've done a good enough job and let the program run.
You don't. Assertions are assumptions. You don't explicitly write recovery paths for individual assumptions being wrong. Even if you wanted to, you probably wouldn't have a sensible recovery in the general case (what will you do when the enum that had 3 options suddenly comes in with a value 1000?).
I don't think any C programmer (where assert() is just debug_assert!() and there is no assert!()) is writing code like:
    assert(arr_len > 5);
    if (arr_len <= 5) {
        // do something
    }
They just assume that the assertion holds and hope that something will crash later and provide info for debugging if it doesn't.
Anyone writing with a standard that requires 100% decision-point coverage will either not write that code (because NDEBUG is insane and assert should have useful semantics), or will have to both write and test that code.
>> A comment "this CANNOT happen" has no value on itself.
> I think it does have some value: it makes clear an assumption the programmer made.
To me, a comment such as the above is about the only acceptable time to either throw an exception (in languages which support that construct) or otherwise terminate execution (such as exiting the process). If further understanding of the problem domain reveals that what was thought impossible is actually just rare or unlikely, then introducing a disjoint union type capable of producing either an error or the expected result is in order.
Most of the time, "this CANNOT happen" falls into the category of "it happens, but rarely" and is best addressed with types and verified by the compiler.
Importantly, specifying reasoning can have communicative value while falling very far short of formal verification. Personally, I also try to include a cross reference to the things that could allow "this" to happen were they to change.
Do you not make such a tacit assumption every time you index into an array (which in most memory-safe languages throws an exception on bounds failure)? You always have to make assumptions that things stay consistent from one statement to the next, at least locally. Unless you use formal verification, but hardly anyone has the time and resources for that.
Decent code generally avoids indexing into arrays at all; if it does so, then it does so in ways where the bounds checks are so certain to succeed that you can usually explain it to the compiler (e.g. split an array into slices and access those slices).
that is what I thought you were saying, and it doesn't make much sense to me. AFAICT the point of arrays as a data structure is to allow relatively cheap indexing at the cost of more expensive resizing operations. What else would you do with an array other than index into it?
> e.g. split an array into slices and access those slices
How is this not indexing with a little abstraction? Aren't slices just a way of packaging an array with a length field in a standard way? I'm not aware of many array implementations without a length (and usually also capacity) field somewhere, so this seems like a mostly meaningless distinction (i.e. all slices are arrays, right?).
> AFAICT the point of arrays as a data structure is to allow relatively cheap indexing at the cost of more expensive resizing operations. What else would you do with an array other than index into it?
The main thing you do with arrays is bulk operations (e.g. multiply it by something), which doesn't require indexing into it. But yeah, I think they're a fairly niche data structure that shouldn't be privileged the way certain languages do.
> How is this not indexing with a little abstraction? Aren't slices just a way of packaging an array with a length field in a standard way?
Sure (well, offset and length rather than just a length) but the abstraction is safe whereas directly indexing into the array isn't.
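A small Rust example of the difference being described (the function is hypothetical): split once and let the slice types carry the bounds, rather than indexing with hand-computed offsets where each access is its own "this CANNOT be out of range" assumption.

    fn halves_sum(data: &[i64]) -> (i64, i64) {
        // split_at with len/2 can never be out of bounds, and iterating
        // the resulting slices needs no per-element index checks at all.
        let (left, right) = data.split_at(data.len() / 2);
        (left.iter().sum(), right.iter().sum())
    }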
If such an error happens, that would be a compiler bug. Why? Because I usually do checks against the length of the array or have it done as part of the standard functions like `map`. I don't write such assumptions unless I'm really sure about the statements, and even then I don't.
Unless you are in the extremely small minority of people who would actually be affected by it (in which case your company would already have bought ECC ram and made you work with three isolated processes that need to agree to proceed): you don't. You eat shit, crash and restart.
Well, bitflip errors are more of a vulnerability for longer-lived values. This could affect Fukushima-style robots or even medical equipment. ECC implemented outside of RAM would help versus triplication, but it was just a question related to the above idea of an array access being assumed to be in-bounds. Thank you.
> or have it done as part of the standard functions like `map`.
Which are all well and good when they are applicable, which is not always 100% of the time.
> Because I usually do checks against the length of the array
And what do you have your code do if such "checks" fail? Throw an assertion error? Which is my whole point, I'm advocating in favor of sanity-check exceptions.
Or does calling them "checks" instead of "assumptions" magically make them less brittle from surrounding code changes?
A comment has no semantic value to the code. Having code that checks for stuff is different from writing comments: the checks are executed by the machine, not just read by other humans.
Of course you should put down a real assertion when you have a condition that can be cheaply checked (or even an assert(false) when the language syntax dictates an unreachable path). I'm not trying to argue against that, and I don't think anyone else here is either.
I was mainly responding to TFA, which states "How many times did you leave a comment on some branch of code stating 'this CANNOT happen' and thrown an exception" (emphasis mine), i.e., an assertion error alongside the comment. The author argues that you should use error values rather than exceptions. But for such sanity checks, there's typically no useful way to handle such an error value.
False, it has value. It's actually even better to log it or throw an exception: print("this cannot happen.")
If you see it, you immediately know the class of error: it's purely a logic error, the programmer made a programming mistake. Logging it makes it explicit that your program has a logic bug.
What if you didn't log it? Then at runtime you would have to deduce the error from symptoms. The log tells you explicitly what the error is.
Worse: You may have created the proof. You may have linked to the proof. But if anyone has touched any of the code involved since then, it still has no value unless someone has re-done the proof and linked that. (Worse, it has negative value, because it can mislead.)
Git blame will show the commit and the date for each line. It's easy to verify if the snippet has changed since the comment. I use Emacs and its built-in vc package, which color-codes each block.
But what about anything that can affect relevant state, any dependencies that may have changed, any input validation that may have been modified? It's hard to know without knowing what assumptions the assertion is based on.
>Further down the article, the author suggests bubbling up the error with a result type, but you can only bubble it up so far before you have to get rid of it one way or another. Unless you bubble everything all the way to the top, but then you've just reinvented unchecked exceptions.
Not necessarily. Result types are explicit and require the function signature to be changed for them.
I would much prefer to see a call to foo()?; where it's explicit that it may bubble up from here, instead of a call to foo(); that may or may not throw an exception my way with no way of knowing.
Rust is absolutely not perfect with this though since any downstream function may panic!() without any indication from its function signature that it could do so.
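A minimal sketch of the contrast: the ? makes the possible failure visible in both the signature and at the call site, whereas a panic! buried in some callee would show up in neither.

    use std::num::ParseIntError;

    fn parse_port(s: &str) -> Result<u16, ParseIntError> {
        // The ? here is the explicit "may bubble up from here" marker.
        let port: u16 = s.trim().parse()?;
        Ok(port)
    }

    fn main() {
        match parse_port("8080") {
            Ok(p) => println!("listening on {}", p),
            Err(e) => eprintln!("bad port: {}", e),
        }
    }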
> At some level, the simplest thing to do is to give up and crash if things are no longer sane.
The problem with this attitude (that many of my co-workers espouse) is that it can have serious consequences for both the user and your business.
- The user may have unsaved data
- Your software may gain a reputation of being crash-prone
If a valid alternative is to halt normal operations and present an alert box to the user saying "internal error 573 occurred. please restart the app", then that is much preferred IMO.
> If a valid alternative is to halt normal operations and present an alert box to the user saying "internal error 573 occurred. please restart the app", then that is much preferred IMO.
You can do this in your panic or terminate handler. It's functionally the same error handling strategy, just with a different veneer painted over the top.
Crashing is bad, but silently continuing in a corrupt state is much worse. Better to lose the last few hours of the user's work than corrupt their save permanently, for example.
> Your software may gain a reputation of being crash-prone
Hopefully crashing on unexpected state rather than silently running on invalid state leads to more bugs being found and fixed during development and testing and less crash-prone software.
That should not need to be a consideration. Crashing should restore the state from just before the crash. This isn't the '90s, users shouldn't have to press "save" constantly to avoid losing data.
You should prefer to write unreachable!("because ...") to explain to some future maintenance engineer (maybe yourself) why you believed this would never be reached. Since they know it was reached they can compare what you believed against their observed facts and likely make better decisions.
But at least telling people that the programmer believed this could never happen short-circuits their investigation considerably.
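Something like this, for example (normalize_day() and the surrounding code are hypothetical): if the branch ever fires, the message tells the maintainer exactly which belief to go and re-check.

    fn day_length_hours(day: u8) -> u32 {
        match day {
            1..=7 => 24,
            _ => unreachable!(
                "day comes from normalize_day(), which only returns 1..=7; \
                 if you're seeing this, normalize_day() or its callers changed"
            ),
        }
    }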
Heh, recently I had to fix a bug in some code that had one of these comments. Feels like a sign of bad code or laziness. Why make a path that should not happen? I can get it when it's on some while loop that should find something to return, but in an if/else sequence it feels really wrong.
Strong disagree about laziness. If the dev is lazy they will not make a path for it. When they are not lazy they actually make a path and write a comment explaining why they think this is unreachable. Taking the time to write a comment is not a sign of laziness. It’s the complete opposite. You can debate whether the comment is detailed enough to convey why the dev thinks it’s unreachable, but it’s infinitely better than no comment and leaving the unreachability in their head.
Before sealed classes and ultra-robust type checking, sometimes a private function would have, say, 3 states that should be possible, but 3 years later a new state gets added and isn't checked, because the language at the time gave the compiler no way to catch it.
It's much better to have a `panic!("this should never happen")` statement than to let your program get into an inconsistent state and then keep going. Ideally, you can use your type system to make inconsistent states impossible, but type systems can only express so much. Even Haskell can't enforce typeclass laws at the compiler level.
A program that never asserts its invariants is much more likely to be a program that breaks those invariants than a program that probably doesn't.