
> debug_assert!() (only run in debug builds)

debug_assert!() (and its equivalent in other languages, like C's assert with NDEBUG) is cursed. It states that you believe something to be true, but takes no automatic action if it is false; so you must manually implement the fallback behavior for when your assumption is false (even if that fallback is just falling through). But you can't /test/ that fallback behavior in debug builds, which means you now need to run your test suite(s) in both debug and release builds. While this is arguably a good habit anyway (although not as good a habit as just not having separate debug and release builds), deliberately diverging behavior between the two, and having tests that only work on one or the other, is pretty awful.
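A concrete sketch of that divergence (the function and types here are invented for illustration, not taken from any real codebase):

    fn get_color(palette: &[(u8, u8, u8)], idx: usize) -> (u8, u8, u8) {
        debug_assert!(idx < palette.len(), "palette index out of range");
        // Manual fallback for the "impossible" case. In a debug build the
        // debug_assert! above panics first, so no debug-mode test can ever
        // reach this branch; in a release build it's the live code path.
        if idx >= palette.len() {
            return (0, 0, 0);
        }
        palette[idx]
    }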


I hear you, but sometimes this is what I want.

For example, I’m pretty sure some complex invariant holds. Checking it is expensive, and I don’t want to actually check the invariant every time this function runs in the final build. However, if that invariant were false, I’d certainly like to know that when I run my unit tests.

Using debug_assert is a way to do this. It also communicates to anyone reading the code what the invariants are.

If all I had was assert(), there’s a bunch of assertions I’d leave out of my code because they’re too expensive. debug_assert lets me put them in without paying the cost.
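Something like this sketch (the sorted-search example is mine, not anything from the article):

    // Invariant: the slice must be sorted, or the result is garbage.
    // Checking that is O(n); the search itself is O(log n), so the
    // check only runs in debug/test builds.
    fn find_sorted(haystack: &[u64], needle: u64) -> Option<usize> {
        debug_assert!(
            haystack.windows(2).all(|w| w[0] <= w[1]),
            "find_sorted requires sorted input"
        );
        haystack.binary_search(&needle).ok()
    }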

And yes, you should run unit tests in release mode too.


But how do you test the recovery path if the invariant is violated in production code? You literally can’t write a test for that code path…

There is no recovery. When an invariant is violated, the system is in a corrupted state. Usually the only sensible thing to do is crash.

If there's a known bug in a program, you can try to write recovery code to work around it. But it's almost always better to just fix the bug. Small, simple, correct programs are better than large, complex, buggy programs.


> Usually the only sensible thing to do is crash.

Correct. But how are you testing that you successfully crash in this case, instead of corrupting on-disk data stores or propagating bad data? That needs a test.


> Correct. But how are you testing that you successfully crash

In a language like rust, failed assertions panic. And panics generally aren't "caught".
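You can write a test for the crash itself. A sketch (the names here are made up, and in a debug-mode test run debug_assert! panics the same way):

    fn checked_ratio(num: u64, den: u64) -> u64 {
        assert!(den != 0, "denominator must be non-zero");
        num / den
    }

    #[test]
    #[should_panic(expected = "denominator must be non-zero")]
    fn violating_the_invariant_panics() {
        checked_ratio(1, 0);
    }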

> instead of corrupting on-disk data stores

If your code interacts with the filesystem or the network, you never know when a network cable will be cut or power will go out anyway. You're always going to need testing for inconvenient crashes.

IMO, the best way to do this is by stubbing out the filesystem and then using randomised testing to verify that no matter what the program does, it can still successfully open any written (or partially written) data. It's not easy to write tests like that, but if you actually want a reliable system, they're worth their weight in gold.
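A rough sketch of the shape of such a test (everything below is invented for illustration: the record format, the checksum, a Vec<u8> standing in for the file, and exhaustive truncation standing in for true randomisation):

    fn checksum(data: &[u8]) -> u32 {
        data.iter()
            .fold(17u32, |acc, &b| acc.wrapping_mul(31).wrapping_add(b as u32))
    }

    // Records on "disk": 4-byte length, 4-byte checksum, payload bytes.
    fn append_record(log: &mut Vec<u8>, payload: &[u8]) {
        log.extend_from_slice(&(payload.len() as u32).to_le_bytes());
        log.extend_from_slice(&checksum(payload).to_le_bytes());
        log.extend_from_slice(payload);
    }

    // Reading must never panic, even on a torn write: it returns whatever
    // complete, checksummed records survive at the front of the log.
    fn read_records(log: &[u8]) -> Vec<Vec<u8>> {
        let mut out = Vec::new();
        let mut pos = 0usize;
        while pos + 8 <= log.len() {
            let len = u32::from_le_bytes(log[pos..pos + 4].try_into().unwrap()) as usize;
            let crc = u32::from_le_bytes(log[pos + 4..pos + 8].try_into().unwrap());
            let Some(payload) = log.get(pos + 8..pos + 8 + len) else { break };
            if checksum(payload) != crc {
                break;
            }
            out.push(payload.to_vec());
            pos += 8 + len;
        }
        out
    }

    #[test]
    fn survives_a_crash_at_any_byte() {
        let mut log = Vec::new();
        for i in 0u32..100 {
            append_record(&mut log, &i.to_le_bytes());
        }
        // Simulate power loss after every possible prefix of the "file".
        for cut in 0..=log.len() {
            let recovered = read_records(&log[..cut]);
            // Whatever comes back must be an intact prefix of what was written.
            for (i, rec) in recovered.iter().enumerate() {
                let expected = (i as u32).to_le_bytes().to_vec();
                assert_eq!(rec, &expected);
            }
        }
    }

A real version would use a property-testing or fuzzing tool and a proper filesystem stub, but the shape is the same: write, cut the file anywhere, and assert that reading never panics and never returns garbage.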


> In a language like rust, failed assertions panic. And panics generally aren't "caught".

This thread was discussing debug_assert, where the assertions are compiled out in release code.
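For reference, debug_assert!(cond) is roughly equivalent to this (a simplified sketch, not the actual std source):

    macro_rules! my_debug_assert {
        ($($arg:tt)*) => {
            if cfg!(debug_assertions) {
                assert!($($arg)*);
            }
        };
    }

So the condition still has to type-check in release builds, but the branch is a compile-time false and gets optimized away, unless you turn it back on with rustc's -C debug-assertions flag.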


Ah I see what you're saying.

I think the idea is that those asserts should never be hit in the first place, because the code is correct.

In reality, it's a mistake to add too many asserts to your code, and certainly not so many that performance tanks. There's always a point where, after doing what you can to make your code correct, you have to trust at runtime that you've done a good enough job and let the program run.


You don't. Assertions are assumptions. You don't explicitly write recovery paths for individual assumptions being wrong. Even if you wanted to, you probably wouldn't have a sensible recovery in the general case (what will you do when the enum that had 3 options suddenly comes in with a value 1000?).

I don't think any C programmer (in C, assert() compiled with NDEBUG is effectively debug_assert!(), and there is no always-on assert!()) is writing code like:

    assert(arr_len > 5);
    if (arr_len <= 5) {
        // do something
    }
They just assume that the assertion holds and hope that something will crash later and provide info for debugging if it doesn't.

Anyone writing to a standard that requires 100% decision-point coverage will either not write that code (because NDEBUG is insane and assert should have useful semantics), or will have to both write and test that code.


