A unit test should have the following properties:
- It should be automated and repeatable.
- It should be easy to implement.
- It should be relevant tomorrow.
- Anyone should be able to run it at the push of a button.
- It should run quickly.
- It should be consistent in its results (it always returns the same result if you don’t change anything between runs).
- It should have full control of the unit under test.
- It should be fully isolated (runs independently of other tests).
- When it fails, it should be easy to detect what was expected and to pinpoint the problem.
- A test shouldn't contain logic, such as:
+ switch, if or else statements
+ foreach, for, while loops
A test that contains logic is usually testing more than one thing at a time, which isn't recommended
- A unit test should be a series of method calls with assert calls, but no control flows, not even try-catch
- Separate the test project from the production code project, because it makes all the rest of the test-related work easier.
- The NUnit runner needs at least two attributes to know what to run:
+ The [TestFixture] attribute that denotes a class that holds automated NUnit tests.
+ The [Test] attribute that can be put on a method to denote it as an automated test to be invoked.
- A unit test usually comprises three main actions:
1. Arrange objects, creating and setting them up as necessary
2. Act on an object
3. Assert that something is as expected
- The Assert class has static methods and is located in the NUnit.Framework namespace. It's the bridge between your code and the NUnit framework, and its purpose is to declare that a specific assumption is supposed to exist
- Name of test project: [ProjectUnderTest].UnitTests -- LogAn.UnitTests
- Name of class in test project: [ClassName]Tests -- LogAnalyzerTests
- Name of method: [UnitOfWorkName]_[ScenarioUnderTest]_[ExpectedBehavior] -- IsValidFileName_BadExtension_ReturnsFalse()
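A minimal sketch tying these pieces together (the two attributes, arrange/act/assert, the Assert class, and the naming conventions), assuming a LogAnalyzer class with an IsValidLogFileName method as in the book's LogAn example:
```csharp
using NUnit.Framework;

namespace LogAn.UnitTests
{
    [TestFixture]                       // marks a class that holds automated NUnit tests
    public class LogAnalyzerTests
    {
        [Test]                          // marks a method as an automated test to be invoked
        public void IsValidLogFileName_BadExtension_ReturnsFalse()
        {
            // Arrange: create and set up the object under test
            var analyzer = new LogAnalyzer();

            // Act: call the unit of work
            bool result = analyzer.IsValidLogFileName("filewithbadextension.foo");

            // Assert: declare the expected outcome
            Assert.That(result, Is.False);
        }
    }
}
```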
- The red/green concept is prevalent throughout the unit testing world and especially in test-driven development. Its mantra is "Red-Green-Refactor", meaning that you start with a failing test, then pass it, and then make your code readable and more maintainable
- Test code styling:
+ Naming convention.
+ Add empty line between the arrange, act and assert stages in each test.
+ Separate the assert from the act as much as possible: declare a "result" variable and assert on that value rather than asserting directly against a call to a function.
--> It makes the code much more readable
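For example, with the same hypothetical analyzer as above:
```csharp
// Harder to read: act and assert fused into one line
Assert.That(analyzer.IsValidLogFileName("a.foo"), Is.False);

// Easier to read: separate the act from the assert via a result variable
bool result = analyzer.IsValidLogFileName("a.foo");

Assert.That(result, Is.False);
```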
- [SetUp] attribute: it causes NUnit to run that setup method each time it runs any of the tests in your class
- [TearDown] attribute: denotes a method to be executed once after each test in your class has executed
- The more you use [SetUp], the less readable your tests will be, because people will have to keep reading test code in two places in the file to understand how the test gets its instances and what type of each object the test is using
- Don't use setup methods to initialize instances under test; it makes the tests harder to read. Instead, use factory methods to initialize the instances under test
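A sketch of the factory-method style (MakeAnalyzer is an illustrative name):
```csharp
[TestFixture]
public class LogAnalyzerTests
{
    [Test]
    public void IsValidLogFileName_GoodExtension_ReturnsTrue()
    {
        // The factory method makes it clear, inside the test itself,
        // where the instance comes from and how it's configured.
        LogAnalyzer analyzer = MakeAnalyzer();

        bool result = analyzer.IsValidLogFileName("file.slf");

        Assert.That(result, Is.True);
    }

    // Factory method instead of a [SetUp] method
    private static LogAnalyzer MakeAnalyzer()
    {
        return new LogAnalyzer();
    }
}
```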
- One common testing scenario is making sure that the correct exception is thrown from the tested method when it should be
- Checking for expected exceptions: use Assert.Catch(delegate) instead of [ExpectedException], because [ExpectedException] tells the test runner to wrap the execution of the whole test method in a big try-catch block and fail the test if nothing was caught. The big problem with this is that you don't know which line threw the exception. In fact, you could have a bug in the constructor that throws an exception, and your test would pass even though the constructor should never have thrown it --> the test could be lying to you when you use this attribute
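A sketch of the Assert.Catch style, assuming the method is expected to throw ArgumentException for an empty file name and that the exception message mentions the file name:
```csharp
[Test]
public void IsValidLogFileName_EmptyFileName_Throws()
{
    var analyzer = new LogAnalyzer();

    // Only the code inside the delegate is expected to throw,
    // so a bug in the Arrange step can't silently satisfy the test.
    var ex = Assert.Catch<ArgumentException>(() =>
        analyzer.IsValidLogFileName(string.Empty));

    StringAssert.Contains("filename", ex.Message);
}
```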
- Ignoring tests: sometimes you'll have tests that are broken; you can put an [Ignore] attribute on tests that are broken because of a problem in the test, not in the code
- Setting test categories: you can set up your tests to run under specific test categories, such as slow tests and fast tests, using [Category] attribute
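For example:
```csharp
[Test]
[Ignore("Broken test, not broken code; see issue tracker")]  // skipped by the runner, reason is reported
public void IsValidLogFileName_ValidName_ReturnsTrue() { /* ... */ }

[Test]
[Category("Fast")]                                            // runners can include or exclude this category
public void Sum_SimpleValues_Calculated() { /* ... */ }
```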
- Naming conventions of scenarios:
+ "ByDefault": when there's an expected return value with no prior action
+ "WhenCalled" or "Always": can be used in the second or third kind of unit of work results (change state or call a third party) when the state change is done with no prior configuration or when the third-party call is done with no configuration. Ex: Sum_WhenCalled_CallsTheLogger or Sum_Always_CallsTheLogger
- Using stubs to break dependencies
- An external dependency is an object in your system that your code under test interacts with and over which you have no control (common examples are filesystems, threads, memory, time, and so on)
- A stub is a controllable replacement for an existing dependency (or collaborator) in the system. By using a stub, you can test your code without dealing with the dependency directly
- Refactoring your design to be more testable: One way of introducing testability into your code base - by creating a new interface
- 2 types of dependency-breaking refactorings, and one depends on the other:
+ Type A - abstracting concrete objects into interfaces or delegates
+ Type B - refactoring to allow injection of fake implementations of those delegates or interfaces
- Extract an interface: you need to break out the code that touches the filesystem into a separate class. That way you can easily distinguish it and later replace the call to that class from your tests.
Next, you can tell your class under test that instead of using the concrete FileExtensionManager class, it will deal with some form of ExtensionManager, without knowing its concrete implementation. In .NET, this could be accomplished by either using a base class or an interface that FileExtensionManager would extend
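A sketch of the extracted interface and a handwritten stub, loosely following the book's FileExtensionManager example; the WillBeValid and WillThrow members are illustrative:
```csharp
public interface IExtensionManager
{
    bool IsValid(string fileName);
}

// Production implementation: the only class that touches the filesystem
public class FileExtensionManager : IExtensionManager
{
    public bool IsValid(string fileName)
    {
        // (the filesystem-reading code lives here; elided in this sketch)
        throw new NotImplementedException();
    }
}

// Handwritten stub used by the tests: fully controllable, no filesystem involved
public class FakeExtensionManager : IExtensionManager
{
    public bool WillBeValid = false;     // the value the test wants returned
    public Exception WillThrow = null;   // set by a test to simulate a failure

    public bool IsValid(string fileName)
    {
        if (WillThrow != null)
            throw WillThrow;
        return WillBeValid;
    }
}
```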
- Dependency injection: inject a fake implementation into a unit under test
- Inject a fake at the constructor level (constructor injection): In this scenario, you add a new constructor (or a new parameter to an existing constructor) that will accept an object of the interface type you extracted earlier (IExtensionManager). The constructor then sets a local field of the interface type in the class for later use by your method or any other
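A sketch of constructor injection using the interface and stub above (names illustrative):
```csharp
public class LogAnalyzer
{
    private readonly IExtensionManager manager;

    // The dependency is injected through the constructor and stored for later use
    public LogAnalyzer(IExtensionManager manager)
    {
        this.manager = manager;
    }

    public bool IsValidLogFileName(string fileName)
    {
        return manager.IsValid(fileName);
    }
}

[Test]
public void IsValidLogFileName_SupportedExtension_ReturnsTrue()
{
    // Arrange: inject a stub configured to return true
    var fakeManager = new FakeExtensionManager { WillBeValid = true };
    var analyzer = new LogAnalyzer(fakeManager);

    // Act
    bool result = analyzer.IsValidLogFileName("short.ext");

    // Assert
    Assert.That(result, Is.True);
}
```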
- Simulating exceptions from fakes: configure the fake to throw an exception and verify how the code under test behaves
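A sketch using the WillThrow field of the stub above, assuming the exception is expected to bubble up to the caller:
```csharp
[Test]
public void IsValidLogFileName_ExtensionManagerThrows_ExceptionBubblesUp()
{
    // Arrange: the stub is configured to throw when called
    var fakeManager = new FakeExtensionManager { WillThrow = new Exception("fake exception") };
    var analyzer = new LogAnalyzer(fakeManager);

    // Act + Assert: the simulated failure surfaces from the unit under test
    Assert.Catch<Exception>(() => analyzer.IsValidLogFileName("anything.txt"));
}
```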
- Injecting a fake as a property get or set: use this technique when you want to signify that a dependency of the class under test is optional, or when the dependency has a default instance that doesn't create any problems during the test
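A sketch of property injection (names illustrative):
```csharp
public class LogAnalyzer
{
    // Default instance is used in production; a test can overwrite it via the setter
    public IExtensionManager ExtensionManager { get; set; } = new FileExtensionManager();

    public bool IsValidLogFileName(string fileName)
    {
        return ExtensionManager.IsValid(fileName);
    }
}

// In a test:
// var analyzer = new LogAnalyzer { ExtensionManager = new FakeExtensionManager { WillBeValid = true } };
```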
- Use a factory class: The factory pattern is a design that allows another class to be responsible for creating objects
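A sketch of a test-aware factory (names hypothetical): the class under test always asks the factory for its dependency, and tests plant a fake in the factory beforehand.
```csharp
public static class ExtensionManagerFactory
{
    private static IExtensionManager customManager = null;

    // Called only by tests, to plant a fake implementation
    public static void SetManager(IExtensionManager manager)
    {
        customManager = manager;
    }

    public static IExtensionManager Create()
    {
        // Return the planted fake if there is one, otherwise the real thing
        return customManager ?? new FileExtensionManager();
    }
}
```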
- Use stubs to make sure that the code under test received all the inputs it needed so that you could test its logic independently
- Interaction testing using mock objects
- Value-based testing checks the value returned from a function
- State-based testing is about checking for noticeable behavior changes in the system under test, after changing its state
- Interaction testing is testing how an object sends messages (calls methods) to other objects. You use interaction testing when calling another object is the end result of a specific unit of work
- Always choose to use interaction testing only as the last option because so many things become much more complicated by having interaction tests
- A mock object is a fake object in the system that decides whether the unit test has passed or failed. It does so by verifying whether the object under test called the fake object as expected. There's usually no more than one mock per test
- A fake is a generic term that can be used to describe either a stub or a mock object, because they both look like the real object. Whether a fake is a stub or a mock depends on how it's used in the current test. If it's used to check an interaction (asserted against), it's a mock object. Otherwise, it's a stub
- The basic difference between a mock and a stub is that a stub can't fail a test; a mock can
- Creating and using a mock object is much like using a stub, except that a mock will do a little more than a stub: it will save the history of communication, which will later be verified in the form of expectations
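A sketch of a handwritten mock, loosely following the book's web-service example; LogAnalyzer2, Analyze, and the message text are assumptions:
```csharp
public interface IWebService
{
    void LogError(string message);
}

// Handwritten mock: records the communication so the test can verify it afterwards
public class FakeWebService : IWebService
{
    public string LastError;

    public void LogError(string message)
    {
        LastError = message;
    }
}

[Test]
public void Analyze_TooShortFileName_CallsWebService()
{
    // Arrange
    var mockService = new FakeWebService();
    var analyzer = new LogAnalyzer2(mockService);   // hypothetical class under test

    // Act: the end result of this unit of work is a call to the web service
    analyzer.Analyze("a.txt");

    // Assert: the assert is written against the mock, not hidden inside it
    StringAssert.Contains("too short", mockService.LastError);
}
```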
- You aren't writing the tests directly inside the mock object code, because:
+ You'd like to be able to reuse the mock object in other test cases, with other asserts on the message
+ If the assert were put inside the handwritten fake class, whoever reads the test would have no idea what you're asserting. You'd be hiding essential information from the test code, which hinders the readability and maintainability of the test
- It's perfectly OK to have multiple stubs in a single test, but more than a single mock can mean trouble, because you're testing more than one thing
- Having several asserts can sometimes be a problem, because the first time an assert fails in your test, it actually throws a special type of exception that is caught by the test runner. That also means no other lines below the line that just failed will be executed
- Having several asserts is usually a good indication that you should break the test into multiple tests. Alternatively, you could create a new EmailInfo object with the three attributes set on it, create an expected version of this object with all the correct properties in your test, and compare the two objects; that would then be a single assert
- One mock per test. Having more than one mock per test usually means you're testing more than one thing, and this can lead to complicated or brittle tests
- Isolation (mocking) framework: is a set of programmable APIs that makes creating fake objects much simpler, faster and shorter than hand-coding them
- A dynamic fake object is any stub or mock that's created at runtime without needing to use a handwritten (hardcoded) implementation of that object
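A sketch using NSubstitute as the isolation framework (Moq, FakeItEasy, and others work similarly); IWebService and LogAnalyzer2 are the same hypothetical types as above:
```csharp
using NSubstitute;
using NUnit.Framework;

[Test]
public void Analyze_TooShortFileName_CallsWebService()
{
    // Create a dynamic fake at runtime; no handwritten fake class needed
    IWebService service = Substitute.For<IWebService>();
    var analyzer = new LogAnalyzer2(service);

    analyzer.Analyze("a.txt");

    // Used as a mock: verify the expected interaction happened
    service.Received().LogError(Arg.Is<string>(msg => msg.Contains("too short")));
}
```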
- There are two common scenarios: tests that run as part of the automated build process, and tests that developers run locally on their own machines
- The safe green zone: Locate your integration and unit tests in separate places. By doing that, you give the developers on your team a safe green test area that contains only unit tests, where they know that they can get the latest code version, they can run all tests in that namespace or folder, and the tests should all be green.
- There are three basic patterns based on test class inheritance:
+ Abstract test infrastructure class
+ Template test class
+ Abstract test driver class
- Refactoring techniques that you can apply when using the preceding patterns:
+ Refactoring into a class hierarchy
+ Using generics
- List of possible steps for refactoring your test class:
1. Refactor: extract the superclass
+ Create a base class (BaseXXXTests)
+ Move the factory methods (like GetParser) into the base class
+ Move all the tests to the base class
+ Extract the expected outputs into public fields in the base class
+ Extract the test inputs into abstract methods or properties that the derived classes will create
2. Refactor: make factory methods abstract, and return interfaces
3. Refactor: find all the places in the test methods where explicit class types are used, and change them to use the interfaces of those types instead
4. In the derived class, implement the abstract factory methods and return the explicit types
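A sketch of where this refactoring can end up, using the generics variant of the abstract test driver class; IParser, XmlParser, and IniParser are assumed types:
```csharp
using System;
using NUnit.Framework;

// Base class holds the shared tests; derived classes supply the concrete type
public abstract class BaseParserTests<T> where T : IParser, new()
{
    protected virtual T GetParser()
    {
        return new T();
    }

    [Test]
    public void Parse_EmptyInput_ThrowsArgumentException()
    {
        IParser parser = GetParser();

        Assert.Catch<ArgumentException>(() => parser.Parse(string.Empty));
    }
}

// Each derived fixture reuses all the inherited tests against its own type
[TestFixture]
public class XmlParserTests : BaseParserTests<XmlParser> { }

[TestFixture]
public class IniParserTests : BaseParserTests<IniParser> { }
```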
- The tests that you write should have three properties:
+ Trustworthiness: Trustworthy tests don't have bugs, and they test the right things
+ Maintainability
+ Readability
- Together, the 3 pillars ensure your time is well used. Drop one of them, and you run the risk of wasting everyone's time
- Writing readable tests:
+ Naming unit tests: When I call method X with a null value, then it should do Y
+ Naming variables: don't use magic numbers like -100; replace them with a readable constant like COULD_NOT_READ_FILE = -100
+ Creating good assert messages
+ Separating asserts from actions
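For example (GetLineCount and the analyzer are hypothetical):
```csharp
// Readable constant instead of a magic number, plus a meaningful assert message
const int COULD_NOT_READ_FILE = -100;

int result = analyzer.GetLineCount("missing.txt");

Assert.That(result, Is.EqualTo(COULD_NOT_READ_FILE),
    "GetLineCount should signal an unreadable file with COULD_NOT_READ_FILE");
```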
- Code integrity: These practices are all part of code integrity:
+ Automated builds
+ Continuous integration
+ Unit testing and test-driven development
+ Code consistency and agreed standards for quality
+ Achieving the shortest time possible to fix bugs (or make failing tests pass)
I like to say "We have good code integrity", instead of saying that I think we're doing all these things well
- Working with legacy code (chapter 10):
+ Where do you start adding tests? You need to create a priority list of components for which testing makes the most sense: Logical complexity, Dependency level, Priority
+ Writing integration tests before refactoring
+ Important tools for legacy code unit testing (page 212)
- Design and testability