Why is it so bad to mock classes?

Tags: Unit Testing, Testing, Mocking

Unit Testing Problem Overview


I recently discussed mocking with a colleague. He said that mocking classes is very bad, should not be done, and is acceptable only in a few cases.

He says that only interfaces should be mocked; otherwise it's an architecture fault.

I wonder why this statement (I fully trust him) is correct. I don't understand it yet and would like to be convinced.

Did I miss the point of mocking? (Yes, I read Martin Fowler's article.)

Unit Testing Solutions


Solution 1 - Unit Testing

Mocking is used for protocol testing - it tests how you'll use an API, and how you'll react when the API reacts accordingly.

Ideally (in many cases at least), that API should be specified as an interface rather than a class - an interface defines a protocol, a class defines at least part of an implementation.

On a practical note, mocking frameworks tend to have limitations around mocking classes.

In my experience, mocking is somewhat overused - often you're not really interested in the exact interaction; you really want a stub. But mocking frameworks can be used to create stubs, and you fall into the trap of creating brittle tests by mocking instead of stubbing. It's a hard balance to get right, though.
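The stub-versus-mock distinction above can be sketched without any framework. In this hand-rolled example (all names such as `AuditLog` and `Account` are illustrative, not from the original answer), the stub-style check verifies resulting state and survives refactoring, while the mock-style check pins down the exact interaction:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical collaborator the code under test talks to.
interface AuditLog {
    void record(String event);
}

// Test double that simply remembers what happened.
class AuditLogDouble implements AuditLog {
    final List<String> events = new ArrayList<>();
    @Override public void record(String event) { events.add(event); }
}

class Account {
    private final AuditLog log;
    private int balance;
    Account(AuditLog log) { this.log = log; }
    void deposit(int amount) {
        balance += amount;
        log.record("deposit");
    }
    int balance() { return balance; }
}

public class StubVsMock {
    public static void main(String[] args) {
        AuditLogDouble log = new AuditLogDouble();
        Account account = new Account(log);
        account.deposit(100);

        // Stub style (state verification): robust to refactoring.
        System.out.println(account.balance());   // 100
        System.out.println(log.events.size());   // 1

        // Mock style (interaction verification): asserting the exact
        // call sequence couples the test to this implementation.
        System.out.println(log.events.equals(List.of("deposit"))); // true
    }
}
```

The last assertion is the brittle one: rename the recorded event or batch the logging, and it fails even though the behavior users care about is unchanged.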

Solution 2 - Unit Testing

IMHO, what your colleague means is that you should program to an interface, not an implementation. If you find yourself mocking classes too often, it's a sign that you broke this principle when designing your architecture.
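A minimal sketch of "program to an interface" with constructor injection (the names `PaymentGateway` and `Checkout` are hypothetical). Because the dependency is an interface, a test double is just an ordinary implementation - no class mocking required:

```java
// The collaborator is declared as an interface, not a concrete class.
interface PaymentGateway {
    boolean charge(int cents);
}

class Checkout {
    private final PaymentGateway gateway;
    Checkout(PaymentGateway gateway) { this.gateway = gateway; }
    String pay(int cents) {
        return gateway.charge(cents) ? "PAID" : "DECLINED";
    }
}

public class InterfaceExample {
    public static void main(String[] args) {
        // In a test, any implementation of the interface will do -
        // here a lambda that approves charges up to 5000 cents.
        Checkout checkout = new Checkout(cents -> cents <= 5000);
        System.out.println(checkout.pay(1000)); // PAID
        System.out.println(checkout.pay(9000)); // DECLINED
    }
}
```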

Solution 3 - Unit Testing

Mocking classes (in contrast to mocking interfaces) is bad because the mock still has a real class in the background: the mock inherits from it, so it is possible that real implementation code is executed during the test.

When you mock (or stub or whatever) an interface, there is no risk of executing code that you actually wanted to mock away.

Mocking classes also forces you to make everything that could possibly be mocked virtual, which is very intrusive and can lead to bad class design.

If you want to decouple classes, they should not know each other; this is the reason why it makes sense to mock (or stub) one of them. So implementing against interfaces is recommended anyway, but others here have already covered that point.
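The risk described above can be demonstrated in plain Java (all names here are illustrative). A class-based mock is ultimately a subclass, so any method that cannot be overridden - `final` in this sketch - runs the real implementation, while an interface double has no hidden code to fall into:

```java
class RealRepository {
    // final: a subclass-based mock cannot intercept this call,
    // so the real body below always executes.
    final String load(String id) {
        return "expensive database hit for " + id;
    }
}

interface Repository {
    String load(String id);
}

public class ClassMockRisk {
    public static void main(String[] args) {
        // "Mocking" the class via subclassing: load() is final,
        // so the real implementation runs during the test.
        RealRepository classMock = new RealRepository() { };
        System.out.println(classMock.load("42"));

        // Implementing the interface: there is no real code behind it.
        Repository interfaceFake = id -> "canned result";
        System.out.println(interfaceFake.load("42")); // canned result
    }
}
```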

Solution 4 - Unit Testing

I would suggest staying away from mocking frameworks as far as possible. At the same time, I would recommend using mock/fake objects for testing as much as possible. The trick here is that you should create built-in fake objects together with the real objects. I explain this in more detail in a blog post I wrote about it: http://www.yegor256.com/2014/09/23/built-in-fake-objects.html
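A rough sketch of the "built-in fake" idea (my own illustration of the linked post, with hypothetical names): the fake ships alongside the interface, so every test reuses one well-maintained double instead of reconfiguring a framework mock in each test:

```java
import java.util.HashMap;
import java.util.Map;

interface UserStore {
    void save(String id, String name);
    String find(String id);

    // Fake implementation shipped and maintained together with the
    // interface; tests simply instantiate it.
    class Fake implements UserStore {
        private final Map<String, String> users = new HashMap<>();
        @Override public void save(String id, String name) { users.put(id, name); }
        @Override public String find(String id) { return users.get(id); }
    }
}

public class FakeExample {
    public static void main(String[] args) {
        UserStore store = new UserStore.Fake();
        store.save("1", "Alice");
        System.out.println(store.find("1")); // Alice
    }
}
```

Because the fake lives next to the interface, a change to the contract forces the fake to be updated in the same commit, keeping tests and production code in sync.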

Solution 5 - Unit Testing

Generally you'd want to mock an interface.

While it is possible to mock a regular class, it tends to influence your class design too much for testability's sake. Concerns like accessibility, whether or not a method is virtual, etc. end up being determined by the ability to mock the class rather than by true OO concerns.

There is one faking library, TypeMock Isolator, that allows you to get around these limitations (have your cake and eat it too), but it's pretty expensive. Better to design for testability.

Solution 6 - Unit Testing

The answer, like most questions about practices, is "it depends".

Overuse of mocks can lead to tests that don't really test anything. It can also lead to tests which are virtual re-implementations of the code under test, tightly bound to a specific implementation.

On the other hand, judicious use of mocks and stubs can lead to unit tests which are neatly isolated and test one thing and one thing alone - which is a good thing.

It's all about moderation.

Solution 7 - Unit Testing

It makes sense to mock classes so tests can be written early in the development lifecycle.

Mock classes (and stubs) are necessary early in a project, when some parts of the system have not yet been built. However, there is a tendency to continue using and developing against them even when concrete implementations become available.

Once a piece of the system has been built it is necessary to test against it and continue to test against it (for regression). In this case starting with mocks is good but they should be discarded in favour of the implementation as soon as possible. I have seen projects struggle because different teams continue to develop against the behaviour of the mock rather than the implementation (once it is available).

By testing against mocks you are assuming that the mock is characteristic of the system. Often this involves guessing what the mocked component will do. If you have a specification of the system you are mocking then you don't have to guess, but often the 'as-built' system doesn't match the original specification due to practical considerations discovered during construction. Agile development projects assume this will always happen.

You then develop code that works with the mock. When it turns out that the mock does not truly represent the behaviour of the real as-built system (e.g. latency issues not seen in the mock, resource and efficiency issues not seen in the mock, concurrency issues, performance issues, etc.), you are left with a bunch of worthless mocking tests you must now maintain.

I consider the use of mocks to be valuable at the start of development, but these mocks should not contribute to project coverage. It is best if the mocks are later removed and replaced with proper integration tests; otherwise your system will not be tested for the variety of conditions your mock did not simulate (or simulated incorrectly relative to the real system).

So the question is not whether to use mocks; it is a matter of when to use them and when to remove them.

Solution 8 - Unit Testing

Edit: Since you have clarified that your colleague meant that mocking classes is bad but mocking interfaces is not, the answer below is outdated. You should refer to this answer.

I am talking about mock and stub as defined by Martin Fowler, and I assume that's what your colleague meant, too.

Mocking is bad because it can lead to overspecification of tests. Use a stub if possible and avoid mocks.

Here's the difference between a mock and a stub (from the above article):

> We can then use state verification on the stub like this.
>
> ```java
> class OrderStateTester...
>   public void testOrderSendsMailIfUnfilled() {
>     Order order = new Order(TALISKER, 51);
>     MailServiceStub mailer = new MailServiceStub();
>     order.setMailer(mailer);
>     order.fill(warehouse);
>     assertEquals(1, mailer.numberSent());
>   }
> ```
>
> Of course this is a very simple test - only that a message has been sent. We've not tested it was sent to the right person, or with the right contents, but it will do to illustrate the point.
>
> Using mocks this test would look quite different.
>
> ```java
> class OrderInteractionTester...
>   public void testOrderSendsMailIfUnfilled() {
>     Order order = new Order(TALISKER, 51);
>     Mock warehouse = mock(Warehouse.class);
>     Mock mailer = mock(MailService.class);
>     order.setMailer((MailService) mailer.proxy());
>
>     mailer.expects(once()).method("send");
>     warehouse.expects(once()).method("hasInventory")
>       .withAnyArguments()
>       .will(returnValue(false));
>
>     order.fill((Warehouse) warehouse.proxy());
>   }
> ```
>
> In order to use state verification on the stub, I need to make some extra methods on the stub to help with verification. As a result the stub implements MailService but adds extra test methods.

Solution 9 - Unit Testing

It depends on how often you use mocks (or are forced to by bad design).

If instantiating the object becomes too hard (and it happens quite often), it is a sign that the code may need some serious refactoring or a change in design (a builder? a factory?).

When you mock everything, you end up with tests that know everything about your implementation (white-box testing). Your tests no longer document how to use the system; they are basically a mirror of its implementation.

And then comes the potential code refactoring. From my experience, this is one of the biggest issues related to overmocking: refactoring becomes painful and takes time, lots of it. Some developers become fearful of refactoring their code, knowing how long it will take. There is also the question of purpose: if everything is mocked, are we really testing the production code?

Mocks also tend to violate the DRY principle by duplicating code in two places: once in the production code and once in the tests. Therefore, as mentioned before, any change to the code has to be made in two places (and if the tests aren't written well, in even more).

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

| Content Type | Original Author | Original Content on Stackoverflow |
| --- | --- | --- |
| Question | guerda | View Question on Stackoverflow |
| Solution 1 - Unit Testing | Jon Skeet | View Answer on Stackoverflow |
| Solution 2 - Unit Testing | Pascal Thivent | View Answer on Stackoverflow |
| Solution 3 - Unit Testing | Stefan Steinegger | View Answer on Stackoverflow |
| Solution 4 - Unit Testing | yegor256 | View Answer on Stackoverflow |
| Solution 5 - Unit Testing | Anderson Imes | View Answer on Stackoverflow |
| Solution 6 - Unit Testing | Avdi | View Answer on Stackoverflow |
| Solution 7 - Unit Testing | Moa | View Answer on Stackoverflow |
| Solution 8 - Unit Testing | Graviton | View Answer on Stackoverflow |
| Solution 9 - Unit Testing | Piotr Niewinski | View Answer on Stackoverflow |