Well, no - you don't.

What you're describing is a very limited subset of testing, which presumably is fine for the projects you work on, but that experience does not generalise well.

Integration testing is of course useful, but generally one would want to create unit tests for every part of the code, and by definition it's not a unit test if it hits multiple parts of the code simultaneously.

Apart from that, databases and file access may be fast but they still take resources and time to spin up; beyond a certain project and team size, it's far cheaper to mock those things. With a mock you can also easily simulate failure cases, bad data, etc. - how do you test for file access issues, or the database server being offline?
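
To make that concrete, here's a minimal sketch using Python's unittest.mock - the ReportService and its query method are hypothetical names, but the failure-injection pattern is the point:

    from unittest import TestCase
    from unittest.mock import Mock

    class ReportService:
        def __init__(self, db):
            self.db = db

        def build_report(self):
            try:
                rows = self.db.query("SELECT * FROM sales")
            except ConnectionError:
                return "report unavailable"
            return f"{len(rows)} rows"

    class ReportServiceTests(TestCase):
        def test_database_offline(self):
            db = Mock()
            # One line simulates the server being unreachable - no
            # need to actually take a database down for the test.
            db.query.side_effect = ConnectionError("server offline")
            self.assertEqual(ReportService(db).build_report(),
                             "report unavailable")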

Using mocks properly is a sign of a well-factored codebase.



> Integration testing is of course useful, but generally one would want to create unit tests for every part of the code, and by definition it's not a unit test if it hits multiple parts of the code simultaneously.

The common pitfall with this style of testing is that you end up testing implementation details and couple your tests to your code and not the interfaces at the boundaries of your code.

I prefer the boundary between unit and integration tests to be the process itself. Meaning, if I have a dependency outside the main process (e.g. a database, an HTTP API, etc.) then it warrants an integration test where I mock this dependency somehow. Otherwise, unit tests test the interfaces with as much coverage of actual code execution as possible. In unit tests, out-of-process dependencies are swapped with a fake implementation, like an in-memory store instead of a full-fledged one, that covers only the part of the interface that I use. This results in much more robust tests that I can rely on during refactoring, as opposed to “every method or function is a unit, so unit tests should test these individual methods”.
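
For illustration, a minimal sketch of that fake-over-mock style in Python (InMemoryUserStore and register_user are hypothetical names, not from any particular codebase):

    from unittest import TestCase

    class InMemoryUserStore:
        """Fake covering only the part of the store interface we use."""
        def __init__(self):
            self._users = {}

        def save(self, user_id, name):
            self._users[user_id] = name

        def get(self, user_id):
            return self._users.get(user_id)

    def register_user(store, user_id, name):
        if store.get(user_id) is not None:
            raise ValueError("user already exists")
        store.save(user_id, name)

    class RegistrationTests(TestCase):
        def test_rejects_duplicates(self):
            store = InMemoryUserStore()  # swapped in for the real DB
            register_user(store, 1, "alice")
            with self.assertRaises(ValueError):
                register_user(store, 1, "bob")

The fake keeps the test exercising real logic end to end while staying fast and in-process, so it survives refactoring of the code behind the interface.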


Yeah I think that's a question of using the right tool for the job. Some projects are of a size that it's not really necessary to be more fine-grained, but as the number of moving parts increases, so too in my experience does the need to ensure those parts are individually working to spec, and not just the whole thing. A classic example might be something like a calculator that enshrines a complex piece of logic, and a piece of code that uses it. I would test both of those in isolation, and mock out the calculator in the second case so that I could generate a whole range of different return values and errors and prove that the calling code is also robust and behaves correctly. Separating them like this also potentially reduces the number of tests you need to write to ensure that you hit all possible combinations of inputs and outputs.
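
Roughly what I mean, sketched in Python with unittest.mock (checkout and the calculator's total method are made-up names for the pattern):

    from unittest import TestCase
    from unittest.mock import Mock

    def checkout(calculator, items):
        """Caller under test; the calculator is injected."""
        try:
            total = calculator.total(items)
        except OverflowError:
            return "error: total too large"
        return f"charged {total}"

    class CheckoutTests(TestCase):
        def test_normal_total(self):
            calc = Mock()
            calc.total.return_value = 42
            self.assertEqual(checkout(calc, ["book"]), "charged 42")

        def test_calculator_error(self):
            calc = Mock()
            # Drive the error path directly, without constructing
            # real inputs that would make the calculator fail.
            calc.total.side_effect = OverflowError
            self.assertEqual(checkout(calc, ["book"]),
                             "error: total too large")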


Why would I want to create tests for every part of the code? I did that for years because I was taught to, but I came to realize it never mattered - if a test breaks, it's because of the last thing I changed. I have a few flaky tests from time to time, but they haven't been too bad to track down, and they often taught me enough about how the system really worked to be worth the time anyway.


I'm sorry, I don't understand what you mean.

You seem to say it's not worth writing a lot of tests, but then you talk about tests breaking due to bad changes - if you don't write those tests in the first place, then how do you get into that situation?

I didn't word my earlier comment very well - I don't mean to advocate for 100% coverage, which I personally think is a waste of time at best, and a false comfort at worst. Is this what you're talking about? What I wanted to say is that unit tests should be written for every part of the code that you're testing, i.e. break it into bits rather than test the whole thing in one lump, or better, do both - unit tests and integration tests.


Write a lot of tests - mostly integration. Unit tests have proven more harmful than helpful - unit tests are great when the API is used so often that changing it would be painful, so you don't change it anyway. Otherwise I want to change the API and the code that uses it as requirements change. If I were writing a string or a list class I'd unit test that - but mostly those are in my standard library, so I'm not. Instead I'm writing code that is only used in a few places, and those places will all change every few years as requirements change.


I'm sorry, I don't really understand your argument: unit tests are bad because code changes? Surely if you're only testing small pieces then that works better when you make a change? You only need to update the tests for the part of the code that changed, not everything that touches it. That's another great thing about mocks - your interfaces should be stable over time, so the mocks don't need to change that often, and when you're changing MyCalculator, everything that depends on it still sees the same interface, so you only have to update MyCalculatorTests.
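
A rough sketch of that separation in Python, reusing the MyCalculator name from above (the Calculator protocol is my own illustrative addition):

    from typing import Protocol

    class Calculator(Protocol):
        """The stable interface everything depends on."""
        def add(self, a: int, b: int) -> int: ...

    class MyCalculator:
        """Real implementation - free to be refactored; only
        MyCalculatorTests care about its internals."""
        def add(self, a: int, b: int) -> int:
            return a + b

    class StubCalculator:
        """Test double written once against the interface; it only
        changes if the interface itself changes."""
        def add(self, a: int, b: int) -> int:
            return 0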

This is what I mean about a well-factored code base; separate things are separate, and can be tested independently.

> Unit tests have proven more harmful than helpful

Why? I find them very useful. If I change a method and inadvertently break it somehow, I have a battery of tests testing that method, and some of them fail. Nothing else fails because I didn't change anything else, so I don't need to dive into working out why those other tests are failing. It's separation of concerns.


Let me put it a different way: what is a unit?

I have concluded a unit needs to be large. Not a single class or function, but a large collection of them. When 'architecture astronauts' draw their boxes, they are drawing units. Often thousands of functions belong to a unit. Even then, it is often easier to use the real other unit than a test double.


You can make your own definition of unit if you like, but the usual definition is the smallest whole piece of something - a unit of currency, a unit of measure, etc.

If your unit is thousands of functions wide then you have a monolith, and there are widely discussed reasons why we try to avoid those.


> Using mocks properly is a sign of a well-factored codebase.

A well-factored codebase doesn't need mocks.


That's not a counter-argument. Why don't you need to mock out the interfaces that you're not testing?


Why would I? Either the code works either way, or I'm glad to know I broke things when a seemingly unrelated test breaks. Remember: if a test fails, the fault is almost always the last thing I changed.

Now, I do take care to avoid writing tests that depend on other code's results that would be likely to change. This hasn't proven to be the problem I've so often been warned about, though.


> Either the code works either way, or I'm glad to know I broke things when a seemingly unrelated test breaks

...or, you tested each thing individually, so only related tests break and you can quickly zero in on the problem. Isn't that easier?



