Unit testing code coverage - do you have 100% coverage?

Unit Testing: Code Coverage

Unit Testing Problem Overview


Do your unit tests constitute 100% code coverage? Yes or no, and why or why not.

Unit Testing Solutions


Solution 1 - Unit Testing

No, for several reasons:

  • Reaching 100% coverage is significantly more expensive than reaching 90% or 95%, for a benefit that is not obvious.
  • Even with 100% coverage, your code is not perfect (in fact, it depends on which type of coverage you are talking about - branch coverage, line coverage...). Take a look at this method:


public static String foo(boolean someCondition) {
    String bar = null;
    if (someCondition) {
        bar = "blabla";
    }
    return bar.trim();
}

and the unit test:

assertEquals("blabla", foo(true));

The test will succeed, and your code coverage is 100%. However, if you add another test:

assertEquals("blabla", foo(false));

then you will get a NullPointerException. And since you were already at 100% coverage with the first test, you would not necessarily have written the second one!
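One possible fix, sketched here under the assumption that the false path should return an empty string (the real fix depends on what foo is supposed to mean), is to give bar a non-null default:

```java
class Foo {
    public static String foo(boolean someCondition) {
        String bar = "";  // non-null default, so trim() can never throw
        if (someCondition) {
            bar = "blabla";
        }
        return bar.trim();
    }

    public static void main(String[] args) {
        System.out.println(foo(true));   // prints: blabla
        System.out.println(foo(false));  // prints an empty line
    }
}
```

A branch-coverage tool would have flagged the untested false branch even while line coverage reported 100%.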

Generally, I consider that critical code must be covered at almost 100%, while the rest of the code can be covered at 85-90%.

Solution 2 - Unit Testing

To all the 90% coverage testers:

The problem with doing so is that the 10% of code that is hard to test is also the non-trivial code that contains 90% of the bugs! This is the conclusion I have reached empirically after many years of TDD.

And after all, this is a pretty straightforward conclusion. That 10% of code is hard to test because it reflects a tricky business problem, a tricky design flaw, or both: the exact reasons that often lead to buggy code.

But also:

  • 100% covered code that decreases with time to less than 100% covered often pinpoints a bug or at least a flaw.
  • 100% covered code, used in conjunction with contracts, is the ultimate weapon for getting close to bug-free code. Code Contracts and automated testing are pretty much the same thing.
  • When a bug is discovered in 100% covered code, it is easier to fix. Since the code responsible for the bug is already covered by tests, it shouldn't be hard to write new tests to cover the bug fix.
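To illustrate the contracts point, here is a hedged sketch (the Account class and its invariants are invented for illustration) of how runtime contract checks complement tests:

```java
class Account {
    private int balance;

    // Precondition and postcondition enforced at runtime, contract-style.
    public void deposit(int amount) {
        if (amount <= 0) {
            throw new IllegalArgumentException("amount must be positive");
        }
        int before = balance;
        balance += amount;
        // Postcondition check; run with -ea to enable Java asserts.
        assert balance == before + amount : "postcondition violated";
    }

    public int balance() {
        return balance;
    }

    public static void main(String[] args) {
        Account a = new Account();
        a.deposit(100);
        System.out.println(a.balance());  // prints: 100

        // The contract rejects invalid input even on paths no test covers.
        try {
            a.deposit(-5);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

The contracts fire on every execution, including paths the tests never exercise, which is why the answer treats them as complementary to coverage.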

Solution 3 - Unit Testing

No, because there is a practical trade-off between perfect unit tests and actually finishing a project :)

Solution 4 - Unit Testing

It is seldom practical to get 100% code coverage in a non-trivial system. Most developers who write unit tests shoot for the mid-to-high 90s.

An automated testing tool like Pex can help increase code coverage. It works by searching for hard-to-find edge cases.

Solution 5 - Unit Testing

Yes we do.

How easy that is to achieve, though, depends on what language and framework you're using.

We're using Ruby on Rails for my current project. Ruby is very "mockable" in that you can stub/mock out large chunks of your code without having to build in overly complicated class composition and construction designs that you would have to do in other languages.

That said, we only have 100% line coverage (basically what rcov gives you). You still have to think about testing all the required branches.

This is only really possible if you include it from the start as part of your continuous integration build, and break the build if coverage drops below 100%, prompting developers to fix it immediately. Of course, you could choose some other number as a target, but if you're starting fresh, there isn't much difference in effort between getting to 90% and getting to 100%.

We've also got a bunch of other metrics that break the build if they cross a given threshold (cyclomatic complexity and duplication, for example); these all go together and help reinforce each other.

Again, you really have to have this stuff in place from the start to keep working at a strict level; either that, or set some target you can hit and gradually ratchet it up until you get to a level you're happy with.

Does doing this add value? I was skeptical at first, but I can honestly say that yes it does. Not primarily because you have thoroughly tested code (although that is definitely a benefit), but more in terms of writing simple code that is easy to test and reason about. If you know you have to have 100% test coverage, you stop writing overly complex if/else/while/try/catch monstrosities and Keep It Simple Stupid.

Solution 6 - Unit Testing

What I do when I get the chance is to insert statements on every branch of the code that can be grepped for and that record if they've been hit, so that I can do some sort of comparison to see which statements have not been hit. This is a bit of a chore, so I'm not always good about it.

I just built a small UI app to use in charity auctions, that uses MySQL as its DB. Since I really, really didn't want it to break in the middle of an auction, I tried something new.

Since it was in VC6 (C++ + MFC) I defined two macros:

#define TCOV ASSERT(FALSE)  // halts under the debugger: this branch has not been visited yet
#define _COV ASSERT(TRUE)   // no-op: this branch has been visited and reviewed

and then I sprinkled

TCOV;

throughout the code, on every separate path I could find, and in every routine. Then I ran the program under the debugger, and every time it hit a TCOV, it would halt. I would look at the code for any obvious problems, and then edit it to _COV, then continue. The code would recompile on the fly and move on to the next TCOV. In this way, I slowly, laboriously, eliminated enough TCOV statements so it would run "normally".

After a while, I grepped the code for TCOV, and that showed what code I had not tested. Then I went back and ran it again, making sure to test more branches I had not tried earlier. I kept doing this until there were no TCOV statements left in the code.

This took a few hours, but in the process I found and fixed several bugs. There is no way I could have had the discipline to make and follow a test plan that would have been that thorough. Not only did I know I had covered all branches, but it had made me look at every branch while it was running - a very good kind of code review.

So, whether or not you use a coverage tool, this is a good way to root out bugs that would otherwise lurk in the code until a more embarrassing time.
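The same bookkeeping works outside C++ and a debugger. Here is a hypothetical Java sketch (the CoverageMarkers helper is invented for illustration) that records unhit branch markers at runtime instead of halting on an assert:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

class CoverageMarkers {
    // Markers still present at the end of a run were never hit.
    private static final Set<String> unhit = ConcurrentHashMap.newKeySet();

    // Register one marker per branch you want tracked.
    static void register(String id) {
        unhit.add(id);
    }

    // Call at the branch itself to prove it ran.
    static void hit(String id) {
        unhit.remove(id);
    }

    static Set<String> remaining() {
        return unhit;
    }

    public static void main(String[] args) {
        register("positive-branch");
        register("negative-branch");

        int x = 5;
        if (x > 0) {
            hit("positive-branch");
        } else {
            hit("negative-branch");
        }

        // prints: Unhit branches: [negative-branch]
        System.out.println("Unhit branches: " + remaining());
    }
}
```

Instead of grepping for TCOV, you inspect the set of remaining markers after exercising the program.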

Solution 7 - Unit Testing

I personally find 100% test coverage to be problematic on multiple levels. First and foremost, you have to make sure you are gaining a tangible, cost-saving benefit from the unit tests you write. In addition, unit tests, like any other code, are CODE. That means that they, like any other code, must be verified for correctness and maintained. The additional time spent verifying that extra code, maintaining it, and keeping those tests valid in response to changes in the business code adds cost. Achieving 100% test coverage and ensuring you test your code as thoroughly as possible is a laudable endeavor, but achieving it at any cost... well, is often too costly.

Error and validity checks that guard against fringe or extremely rare, but definitely possible, exceptional cases are an example of code that does not necessarily need to be covered. The amount of time, effort (and ultimately money) that must be invested to achieve coverage of such rare fringe cases is often wasteful in light of other business needs. Properties, especially with C# 3.0, are often a part of code that does not need to be tested, as most, if not all, properties behave exactly the same way and are excessively simple (a single-statement return or set). Investing tremendous amounts of time wrapping unit tests around thousands of properties could quite likely be better invested somewhere where a greater, more valuable, tangible return on that investment can be realized.

Beyond simply achieving 100% test coverage, there are similar problems with trying to set up the "perfect" unit test. Mocking frameworks have progressed to an amazing degree these days, and almost anything can be mocked (if you are willing to pay money, TypeMock can actually mock anything and everything, but it does cost a lot). However, there are often times when dependencies of your code were not written in a mockable way (this is actually a core problem with the vast bulk of the .NET framework itself). Investing time to achieve the proper scope of a test is useful, but putting in excessive amounts of time to mock away everything and anything under the sun, adding layers of abstraction and interfaces to make it possible, is again most often a waste of time, effort, and ultimately money.

The ultimate goal with testing shouldn't really be to achieve the ultimate in code coverage. The ultimate goal should be achieving the greatest value per unit time invested in writing unit tests, while covering as much as possible in that time. The best way to achieve this is to take the BDD approach: Specify your concerns, define your context, and verify the expected outcomes occur for any piece of behavior being developed (behavior...not unit.)

Solution 8 - Unit Testing

No, because I spend my time adding new features that help the users rather than writing tricky, obscure tests that deliver little value. I say unit test the big things, the subtle things, and the things that are fragile.

Solution 9 - Unit Testing

On a new project I practice TDD and maintain 100% line coverage. It mostly occurs naturally through TDD. Coverage gaps are usually worth the attention and are easily filled. If the coverage tool I'm using provided branch coverage or something else I'd pay attention to that, although I've never seen branch coverage tell me anything, probably because TDD got there first.

My strongest argument for maintaining 100% coverage (if you care about coverage at all) is that it's much easier to maintain 100% coverage than to manage less than 100% coverage. If you have 100% coverage and it drops, you immediately know why and can easily fix it, because the drop is in code you've just been working on. But if you settle for 95% or whatever, it's easy to miss coverage regressions and you're forever re-reviewing known gaps. It's the exact reason why current best practice requires one's test suite to pass completely. Anything less is harder, not easier, to manage.

My attitude is definitely bolstered by having worked in Ruby for some time, where there are excellent test frameworks and test doubles are easy. 100% coverage is also easy in Python. I might have to lower my standards in an environment with less amenable tools.

I would love to have the same standards on legacy projects, but I've never found it practical to bring a large application with mediocre coverage up to 100% coverage; I've had to settle for 95-99%. It's always been just too much work to go back and cover all the old code. This does not contradict my argument that it's easy to keep a codebase at 100%; it's much easier when you maintain that standard from the beginning.

Solution 10 - Unit Testing

I generally write unit tests just as a regression-prevention method. When a bug is reported that I have to fix, I create a unit test to ensure that it doesn't resurface in the future. I may create a few tests for sections of functionality I have to make sure stay intact (or for complex inter-part interactions), but I usually wait for a bug fix to tell me one is necessary.

Solution 11 - Unit Testing

I usually manage to hit 93-100% with my coverage, but I don't aim for 100% anymore. I used to do that, and while it's doable, it's not worth the effort beyond a certain point, because testing the blindingly obvious usually isn't needed. A good example of this is the true branch of the following code snippet:

public void method(boolean someBoolean) {
    if (someBoolean) {
        return;
    } else {
        /* do lots of stuff */ 
    }
}

However, what's important is to get as close to 100% coverage as possible on the functional parts of the class, since those are the dangerous waters of your application: the misty bog of creeping bugs and undefined behaviour, and of course the money-making flea circus.

Solution 12 - Unit Testing

From Ted Neward's blog:

> By this point in time, most developers have at least heard of, if not considered adoption of, the Masochistic Testing meme. Fellow NFJS'ers Stuart Halloway and Justin Gehtland have founded a consultancy firm, Relevance, that sets a high bar as a corporate cultural standard: 100% test coverage of their code.
>
> Neal Ford has reported that ThoughtWorks makes similar statements, though it's my understanding that clients sometimes put accidental obstacles in their way of achieving said goal. It's ambitious, but as the ancient American Indian proverb is said to state:
>
> > If you aim your arrow at the sun, it will fly higher and farther than if you aim it at the ground.

Solution 13 - Unit Testing

I only have 100% coverage on new pieces of code that have been written with testability in mind. With proper encapsulation, each class and function can have functional unit tests that simultaneously give close to 100% coverage. It's then just a matter of adding some additional tests that cover some edge cases to get you to 100%.

You shouldn't write tests just to get coverage. You should be writing functional tests that test correctness and compliance. With a good functional specification that covers all the ground and a good software design, you can get good coverage for free.

Solution 14 - Unit Testing

Yes, I have had projects that have had 100% line coverage. See my answer to a similar question.

You can get 100% line coverage, but as others have pointed out here on SO and elsewhere on the internet, it's maybe only a minimum. Once you consider path and branch coverage, there's a lot more work to do.
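As a hypothetical illustration of that gap, a single-line method can reach 100% line coverage while half of its branches go untested:

```java
class Divider {
    // One test with b != 0 executes this single line, so line coverage is
    // 100%, yet the b == 0 arm of the ternary is never exercised: branch
    // coverage is only 50%.
    static int safeDiv(int a, int b) {
        return b == 0 ? 0 : a / b;
    }

    public static void main(String[] args) {
        System.out.println(safeDiv(10, 2));  // prints: 5
        System.out.println(safeDiv(10, 0));  // prints: 0
    }
}
```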

The other way of looking at it is to try to make your code so simple that it's easy to get 100% line coverage.

Solution 15 - Unit Testing

In many cases it's not worth getting 100% statement coverage, but in some cases, it is worth it. In some cases 100% statement coverage is far too lax a requirement.

The key question to ask is, "what's the impact if the software fails (produces the wrong result)?". In most cases, the impact of a bug is relatively low. For example, maybe you have to go fix the code within a few days and rerun something. However, if the impact is "someone might die in 120 seconds", then that's a huge impact, and you should have a lot more test coverage than just 100% statement coverage.

I lead the Core Infrastructure Initiative Best Practices Badge project for the Linux Foundation. We do have 100% statement coverage, but I wouldn't say it was strictly necessary. For a long time we were very close to 100%, and we just decided to do that last little percent. We couldn't really justify the last few percent on engineering grounds, though; those last few percent were added purely out of pride of workmanship. I do get a very small extra peace of mind from having 100% coverage, but really it wasn't needed: we were over 90% statement coverage just from normal tests, and that was fine for our purposes. That said, we want the software to be rock-solid, and having 100% statement coverage has helped us get there. It's also easier to get 100% statement coverage today.

It's still useful to measure coverage even if you don't need 100%. If your tests don't have decent coverage, you should be concerned. A bad test suite can still have good statement coverage, but if you don't have good statement coverage, then by definition you have a bad test suite. How much you need is a trade-off: what are the risks (probability and impact) from the code that is totally untested? By definition it's more likely to have errors (you didn't test it!), but if you and your users can live with those risks, that's okay. For many lower-impact projects, I think 80-90% statement coverage is okay, with higher being better.

On the other hand, if people might die from errors in your software, then 100% statement coverage isn't enough. I would at least add branch coverage, and maybe more, to check on the quality of your tests. Standards like DO-178C (for airborne systems) take this approach: if a failure is minor, no big deal, but if a failure could be catastrophic, then much more rigorous testing is required. For example, DO-178C requires MC/DC (modified condition/decision coverage) for the most critical software, the software that can quickly kill people if it makes a mistake. MC/DC is far more stringent than statement coverage or even branch coverage.
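To sketch why MC/DC demands more tests than branch coverage, consider a single two-condition decision (this example is illustrative, not from the answer). Statement coverage needs one test; branch coverage needs two (the decision evaluating true and false); MC/DC needs three, because each condition must be shown to independently flip the outcome:

```java
class McDc {
    static boolean decide(boolean a, boolean b) {
        return a && b;
    }

    public static void main(String[] args) {
        // A minimal MC/DC set for the decision (a && b):
        System.out.println(decide(true, true));   // prints: true  (baseline)
        System.out.println(decide(false, true));  // prints: false (a alone flips the outcome)
        System.out.println(decide(true, false));  // prints: false (b alone flips the outcome)
    }
}
```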

Solution 16 - Unit Testing

There's a lot of good information here; I just wanted to add a few more benefits that I've found when aiming for 100% code coverage in the past:

  • It helps reduce code complexity

Since it is easier to remove a line than to write a test case, aiming for 100% coverage forces you to justify every line, every branch, and every if statement, often leading you to discover a much simpler way to do things that requires fewer tests.

  • It helps develop good test granularity

You can achieve high test coverage by writing lots of small tests that test tiny bits of implementation as you go. This can be useful for tricky bits of logic, but doing it for every piece of code, no matter how trivial, can be tedious, slow you down, and become a real maintenance burden that makes your code harder to refactor. On the other hand, it is very hard to achieve good coverage with very high-level, end-to-end behavioural tests, because the thing you are testing typically involves many components interacting in complicated ways, and the permutations of possible cases become very large very quickly.

Therefore, if you are practical and also want to aim for 100% test coverage, you quickly learn to find a level of granularity where you can achieve high coverage with a few good tests: test components at a level where they are simple enough that you can reasonably cover all the edge cases, but complicated enough that you can test meaningful behaviour. Such tests end up being simple, meaningful, and useful for identifying and fixing bugs. I think this is a good skill, and it improves code quality and maintainability.

Solution 17 - Unit Testing

A while ago I did a little analysis of coverage in the JUnit implementation, code written and tested by, among others, Kent Beck and David Saff.

From the conclusions:

> Applying line coverage to one of the best tested projects in the world, here is what we learned:
>
> 1. Carefully analyzing coverage of code affected by your pull request is more useful than monitoring overall coverage trends against thresholds.
> 2. It may be OK to lower your testing standards for deprecated code, but do not let this affect the rest of the code. If you use coverage thresholds on a continuous integration server, consider setting them differently for deprecated code.
> 3. There is no reason to have methods with more than 2-3 untested lines of code.
> 4. The usual suspects (simple code, dead code, bad weather behavior, …) correspond to around 5% of uncovered code.
>
> In summary, should you monitor line coverage? Not all development teams do, and even in the JUnit project it does not seem to be a standard practice. However, if you want to be as good as the JUnit developers, there is no reason why your line coverage would be below 95%. And monitoring coverage is a simple first step to verify just that.

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

| Content Type | Original Author | Original Content on Stackoverflow |
| --- | --- | --- |
| Question | core | View Question on Stackoverflow |
| Solution 1 - Unit Testing | Romain Linsolas | View Answer on Stackoverflow |
| Solution 2 - Unit Testing | Patrick from NDepend team | View Answer on Stackoverflow |
| Solution 3 - Unit Testing | Andrew Hare | View Answer on Stackoverflow |
| Solution 4 - Unit Testing | Robert Harvey | View Answer on Stackoverflow |
| Solution 5 - Unit Testing | madlep | View Answer on Stackoverflow |
| Solution 6 - Unit Testing | Mike Dunlavey | View Answer on Stackoverflow |
| Solution 7 - Unit Testing | jrista | View Answer on Stackoverflow |
| Solution 8 - Unit Testing | RichH | View Answer on Stackoverflow |
| Solution 9 - Unit Testing | Dave Schweisguth | View Answer on Stackoverflow |
| Solution 10 - Unit Testing | SqlRyan | View Answer on Stackoverflow |
| Solution 11 - Unit Testing | Esko | View Answer on Stackoverflow |
| Solution 12 - Unit Testing | cetnar | View Answer on Stackoverflow |
| Solution 13 - Unit Testing | Ates Goral | View Answer on Stackoverflow |
| Solution 14 - Unit Testing | quamrana | View Answer on Stackoverflow |
| Solution 15 - Unit Testing | David A. Wheeler | View Answer on Stackoverflow |
| Solution 16 - Unit Testing | alzclarke | View Answer on Stackoverflow |
| Solution 17 - Unit Testing | avandeursen | View Answer on Stackoverflow |