Unit Test or Integration Test and Why You Should Care

There remains a fair bit of confusion about what constitutes which kind of test.  Many developers are fairly new to testing and tend to call any tests of their code “unit tests,” even when they’re dealing with something substantially larger than a unit.  The tools don’t help much here, since the various test runner frameworks all call themselves unit test frameworks, and the test runners themselves almost universally refer to the tests they run as “unit tests,” whether they are or not.  For instance, Visual Studio 2010 starts every new Test Project with a class called UnitTest1 and lets you add a new Unit Test, but nowhere does it mention Integration Tests, Acceptance Tests, Smoke Tests, etc., even though you use the same code templates to create each of these.

[Screenshot: Visual Studio 2010]


ReSharper and most other add-in test runners follow the same convention – if you can run it as a test on your code, it’s probably going to be referred to as a Unit Test.

[Screenshot: ReSharper 6.1]



So what constitutes a unit test, and what constitutes an integration test?  What about other kinds of tests beyond these two?  There’s a decent StackOverflow answer related to this topic, which lists several kinds of tests and their definitions.  Here is what it has to say about Unit Tests and Integration Tests, specifically:

  • Unit test: Specify and test one point of the contract of a single method of a class. This should have a very narrow and well defined scope. Complex dependencies and interactions to the outside world are stubbed or mocked.

  • Integration test: Test the correct inter-operation of multiple subsystems. There is a whole spectrum there, from testing integration between two classes, to testing integration with the production environment.

I have my own definition of a unit test: it’s a test that exercises only a single path through a single method.  More importantly, it’s a test that has zero dependencies on infrastructure or on code outside of your control.  Unit tests should run fast – as in very, very fast – because they aren’t touching file systems, databases, networks, email servers, system clocks, etc.  They run your code.  Period.  If your code has dependencies, you need to remove them when running your unit tests, typically by using mocks, fakes, or stubs.  I’ve written before about dependencies, if you’re not sure what I mean.
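To make that concrete, here’s a minimal sketch of the idea (in Python for brevity – in .NET you’d do the same thing with an interface and a stub or a framework like Moq).  The class and method names here are made up for illustration: the service under test depends on an abstraction, and the unit test swaps in an in-memory stub, so no database is ever touched and the test runs in microseconds.

```python
class OrderService:
    """Depends on an abstraction, not on a concrete database."""
    def __init__(self, repository):
        self.repository = repository

    def total_due(self, customer_id):
        orders = self.repository.get_orders(customer_id)
        return sum(order["amount"] for order in orders)


class StubOrderRepository:
    """In-memory stand-in -- the unit test never opens a connection."""
    def get_orders(self, customer_id):
        return [{"amount": 10.0}, {"amount": 5.5}]


def test_total_due_sums_all_orders():
    # Pure unit test: only our code runs, no infrastructure involved.
    service = OrderService(StubOrderRepository())
    assert service.total_due(customer_id=42) == 15.5
```

Swap `StubOrderRepository` for a real database-backed repository and the very same `OrderService` code becomes the subject of an integration test instead.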

If you have a test that depends on any of the dependencies listed in the above posts, then you have an integration test.  Integration tests are great and necessary, but they’re generally at least an order of magnitude slower than unit tests, and as such you’re going to be able to run far fewer of them in a given amount of time.  Therefore, you want to write as many unit tests as you can, and write integration tests for things unit tests can’t do (like actually testing your infrastructure and interactions between components).  Basically, you want to follow the Test Pyramid, just like in the United States people are encouraged to eat based on the Food Pyramid (with one key difference being that the Test Pyramid is probably better advice and is less controversial).


Basically, you want a lot more servings of Unit Tests in your daily diet than Integration Tests, and remember that UI tests, being the most expensive and usually the most brittle, are a sometimes food.

How Can You Tell if a Test is a Unit Test or an Integration Test?

Here are some quick questions you can use to qualify your tests.  There may be some exceptions to these rules, but these are generally good guidelines.  It’s usually a good idea to separate your unit and integration tests, either as separate projects/assemblies, or at least using separate categories, so that you can run them separately.  You’ll want to run the unit tests all the time, and they should be fast enough that doing so isn’t too painful.  You’ll want to run the integration tests as often as you can bear to do so, but often these take long enough to run that you don’t want to run them with every compile or before every check-in.
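One lightweight way to get that separation, if distinct projects or assemblies feel too heavy, is to tag the slow tests and filter on the tag.  Here’s a hedged sketch in Python using only the standard library (in MSTest you’d use the `TestCategory` attribute for the same effect); the decorator and test names are hypothetical:

```python
import unittest


def integration(test):
    """Tag a test as an integration test so the fast suite can skip it."""
    test.__integration__ = True
    return test


class CheckoutTests(unittest.TestCase):
    def test_subtotal(self):
        # Fast unit test: safe to run on every compile.
        self.assertEqual(2 * 9.99, 19.98)

    @integration
    def test_checkout_against_real_db(self):
        # Slow integration test: run before check-in, not every build.
        self.skipTest("requires a configured database")


def unit_tests_only(suite):
    """Build a suite containing only the untagged (unit) tests."""
    fast = unittest.TestSuite()
    for test in suite:
        method = getattr(test, test._testMethodName)
        if not getattr(method, "__integration__", False):
            fast.addTest(test)
    return fast
```

Run `unit_tests_only(...)` constantly, and the full suite as often as you can bear.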

Q: Does the System Under Test (SUT) require an installed, configured, and available database in order to run the test successfully?

A: If yes, then it’s an Integration Test.

Q: Does the SUT talk to the file system?

A: If yes, then it’s an Integration Test.

Q: Does the SUT reference configuration files directly?

A: See previous question.  It’s an Integration Test.

Q: Does the SUT reference a service over the network?

A: If yes, then it’s an Integration Test.

Q: Does the test take more than 10ms to execute?

A: If yes, it’s very likely an Integration Test; at the very least, you may be able to refactor it to run faster.

Q: Does the SUT send emails as part of the test execution, even via a local SMTP server like smtp4dev?

A: If yes, then it’s an Integration Test.

Q: Does the SUT depend on the system clock?  Are there certain days of the week or times of day when it would behave differently?

A: If yes, then it’s an Integration Test.
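The fix for the system clock dependency is the same as for any other dependency: inject it.  A minimal sketch (hypothetical names; in C# you’d inject something like an `IClock` interface instead of calling `DateTime.Now` directly):

```python
from datetime import datetime


class DiscountPolicy:
    """Takes the clock as a dependency rather than reading it directly."""
    def __init__(self, now=datetime.now):
        self._now = now

    def weekend_discount(self, price):
        # weekday(): Monday is 0 ... Saturday is 5, Sunday is 6.
        if self._now().weekday() >= 5:
            return price * 0.9
        return price


def test_weekend_discount_applies_on_saturday():
    # Freeze "now" to a known Saturday so the test passes on any day.
    saturday = lambda: datetime(2012, 1, 7)
    policy = DiscountPolicy(now=saturday)
    assert policy.weekend_discount(100.0) == 90.0
```

With the clock injected, the test behaves identically on a Tuesday morning and a Saturday night – which is exactly the determinism a unit test needs.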

Q: Does the test make use of a mocking framework?

A: If yes, then it’s likely a Unit Test.  Generally you shouldn’t need to mock much in your integration tests, or you risk not actually testing your system.
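For instance, here’s what a mocking framework buys you in a unit test, sketched with Python’s standard `unittest.mock` (the .NET equivalent would be a framework like Moq; the `OrderNotifier` and its mailer are made-up names): the real email-sending dependency is replaced by a mock, and the test verifies the interaction without any SMTP server at all.

```python
from unittest.mock import Mock


class OrderNotifier:
    """Notifies customers; the mailer is injected so tests never send email."""
    def __init__(self, mailer):
        self._mailer = mailer

    def order_shipped(self, email, order_id):
        self._mailer.send(email, subject=f"Order {order_id} shipped")


def test_order_shipped_sends_one_email():
    mailer = Mock()  # stands in for the real SMTP-backed mailer
    OrderNotifier(mailer).order_shipped("a@example.com", 7)
    # Verify the interaction, not the infrastructure.
    mailer.send.assert_called_once_with("a@example.com", subject="Order 7 shipped")
```

An integration test of the same code would instead wire up a real (or local, like smtp4dev) mail server – which is precisely why it would no longer be a unit test.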


MSDN describes Unit Testing and Integration Testing, if you’d like an “official” Microsoft source.

“What is Unit Test, Integration Test, Smoke test, Regression Test?”

“What are unit testing and integration testing, and what other types of testing should I know about?”

  • Doug Mair

    I’m learning TDD. So I’m trying to follow along.

    You said a unit test tests a single path through a single method. That’s fine to start with, but as your code gets refactored, the link between the tests and the paths gets lost.

    And with TDD, you would ideally write the test before the code anyway.

    Maybe I’m wrong, but it seems the unit tests are tied more to requirements than code sections. That way code implementation can change without requiring test changes.

    I like the distinctions you make about dependencies regarding unit and integration tests.

  • David Karr

    It is perfectly reasonable for some integration tests to also use mocks and stubs, for the physical resources that you want to simulate. Many systems integrate with multiple physical resources, and it’s worthwhile to simulate some of those resources while testing your code against other real resources.

  • Jaco Pretorius

    The distinction between Unit and Integration Tests is extremely important, and more often than not ignored by developers new to automated testing.

    I usually describe a Unit Test as ‘testing a single method on a single object’ – which boils down to the same thing as what you said, but it’s maybe a bit more specific.

  • Jonathan Allen

    Why is the distinction important? And why should we focus on writing lots of small tests instead of a few comprehensive tests that cover the same amount of code?

    Like many others you are making a lot of claims about the supremacy of unit tests but no real arguments to support those claims.

  • ssmith

    @David Karr,

    Yes, I agree, that’s why I didn’t say you should never use mocks/fakes in integration tests. However, I think you should be careful with how you use them, and consider them suspect if you’re not sure they’re correct, as it *may* indicate that you’re not testing what you think you are.

  • ssmith

    @Jonathan Allen,

    There are several reasons why the distinction is important; the biggest two are performance and granularity. Performance, because unit tests are very fast and integration tests are much slower, so given a certain amount of time you’re willing to invest in running tests, you can run many more unit tests in that time than integration tests. Now, if you can write one integration test that does the work of 1000 unit tests and runs in 1 second, then by all means consider including it. But this leads to the issue of granularity.

    When your test suite fails, do you want to have 10 big tests and know that TestEverythingWorksOnTheCheckoutPage failed, or would you rather have hundreds of small tests and know that the SalesTaxCalculatorShouldIgnoreGroceryItemsInOhio() unit test has just failed? If you’re looking for the tests to provide insight into *what* is broken and *where* it is located in your code, as opposed to merely indicating *that* something is broken *somewhere*, then unit tests tend to be better suited.

  • Jonathan Allen

    I use the assertions in the test to tell me where and how a test has failed. That’s why they have message parameters.

    I will agree that test performance is important, but I fear that we take it too far. One doesn’t need to be constantly running every single test in a tight loop. More intelligent testing frameworks detect what has changed and only run tests impacted by those changes.

  • David Karr


    Actually, what you said is that you’d likely use mocks and stubs in unit tests, but you didn’t mention this at all in the context of integration tests. An inexperienced developer who’s at least paying attention might take the direction from this that you would use mocks and stubs in unit tests, but not in integration tests.

  • ssmith

    @Jonathan Allen,

    In my experience, your tests should be as small as possible, and you should usually be able to tell what is broken by looking at the test name, with the assert message simply providing some additional detailed info. Of course that’s not to say it’s the only way, and certainly if what you’re doing works for you, that’s great. I’m not saying you’re wrong – I’m just saying what works for me and for many others I’ve interacted with.

    Yes, there are some great testing tools coming along that are more intelligent about which tests they run in response to which code has been changed. I’ve used Mighty Moose for this, though at the time (a few months ago) it still wasn’t quite ready for prime time. I also just wrote another article on running tests in parallel, which can be a great way to speed up test run times, assuming they’re written such that they do not step on one another. (http://stevesmithblog.com/blog/run-your-unit-tests-in-parallel-to-maximize-performance/)

  • ssmith

    @David Karr,

    That’s a good point – I’ll update the article to note that mocks/stubs/etc can have a place in Integration Tests as well as Unit Tests. I generally try to minimize this, but certainly there are situations where it’s helpful.

  • Chris Marisic

    I’ve really decided to make 2012 the year for tests. My projects are all adopting BDD testing, both xBehavior testing using SpecFlow & Selenium WebDriver with Gherkin feature files to test the application’s direct user behavior, along with xSpecification unit tests to drive out code, using MSpec along with Machine.Fakes.Moq.

  • Run Your Unit Tests in Parallel to Maximize Performance : Steve Smith’s Blog

    Pingback from Run Your Unit Tests in Parallel to Maximize Performance : Steve Smith’s Blog