I’m looking for information about the successes and failures related to defining mocks, whether for testing (the typical case) or for dev activity.
The documentation for unit test tools seems to place a big focus on “create mocks and you won’t need access to the remote server you’re developing/testing the integration for!” (ignoring for the moment the view that one system calling another is by definition not a unit test). To me, the emphasis on this is usually overdone.
IME, creating really useful mocks, ones that will help uncover errors in your integration components and last longer than just the current project, is really, really challenging. Imagine mocking the SFDC API in any reasonable manner. Sure, you might be able to do so for a few objects and a few cases, but how long will that be useful? How much time and effort will you need to spend to do so? Will you spend more time maintaining the mock than the integration code? Is the bug that was reported in the integration code, or was it caused by the mock being wrong or out of date? Why not just connect to the real thing and deal with the trade-offs of that?
I’ve done some limited mocking in the past but abandoned it pretty quickly for the reasons above. It has always been easier and more robust just to connect to the real thing. There are certainly limitations (e.g. repeatability of data creation), but one gets real results, not fake tends-to-be-happy-path responses.
But I’m looking for alternate experiences and points of view.
If the objective of the test is to test the access to the external system, then it’s of course not very useful to mock it. But if you want to test the code that e.g. handles the results of such a call, then mocks are fine IMO. We use them regularly; they make tests possible that would not be possible (or only with MUCH more effort) without mocks.
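To make that concrete, here is a minimal sketch in plain Java of what “mock the call, test the handling” can look like. It is hand-rolled for illustration, not wM Test Suite; the names `AccountClient` and `AccountSync` are made up:

```java
import java.util.Map;

// Hand-rolled sketch: the response-handling logic is exercised against a
// canned response instead of the live backend. All names here are
// hypothetical, chosen only to illustrate the idea.
interface AccountClient {
    Map<String, String> fetchAccount(String id); // normally calls the remote system
}

class AccountSync {
    private final AccountClient client;

    AccountSync(AccountClient client) { this.client = client; }

    // The logic under test: interpret the remote response.
    String statusOf(String id) {
        return client.fetchAccount(id).getOrDefault("status", "UNKNOWN");
    }
}

public class AccountSyncTest {
    public static void main(String[] args) {
        // The mock is just a lambda returning a fixed response.
        AccountClient mock = id -> Map.of("id", id, "status", "ACTIVE");
        AccountSync sync = new AccountSync(mock);
        if (!"ACTIVE".equals(sync.statusOf("42"))) {
            throw new AssertionError("unexpected status");
        }
        System.out.println("handling logic tested without touching the backend");
    }
}
```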
We do not use wM Test Suite though and have our own implementation of the stuff (should I say that it’s much better than the wM Test Suite? :-)).
This applies to every test, not only to tests dealing with external systems. In order to provide some mock implementation, you’d have to either edit the code or provide some other implementation in another package. You’d then have to play around with packages (activating and deactivating them), etc. That’s too much to ask of a mere mortal, which renders the testing infeasible in practice. Writing tests should be easy; then there is a chance they will actually be written.
P.S. I’m honored to have a conversation with one of the oldest members of the wM community.
I must be missing something in your post. I don’t quite understand what you’re stating is more difficult.
“…to provide some mock implementation, you’d have to either edit the code or provide some other implementation in another package.”
My question was about mock vs. no mock. But your response seems to cover only a mock implementation. And activate/deactivate packages. Why would one need to do that?
I have some scenarios that I could present for discussion, but before I do, can you share the scenarios you’re describing?
The tool I use (not SAG) allows me to switch back and forth between real and mock in a second, and to maintain the mock values (input/output pairs) fairly easily.
However, it requires the actual code to be present; it cannot replace it (for instance, to mock SAP connections without actually installing the adapter and libraries).
Out of curiosity: What kind of mock is that? Does it step in at the transport level? Or even in the backend? I’m asking because, usually, mocks do replace the real code. Does your mock allow mocking any service? Or just a special kind of backend system (e.g. SAP)?
If the tests can be conducted using the real code only, then fine, no mocks are needed. However, this is not always possible. Hence we need some kind of replacement code.
This replacement code can be installed (and uninstalled) programmatically (on the fly) or manually. In the context of this thread, I call only the programmatically installable replacements “mocks”. The other things are also mocks in the sense that they provide a replacement implementation, but their handling is much more cumbersome and is more about configuration management in the software (bringing different lines of development onto the system). This, in my view, is too much of a hassle for anyone to use willingly.
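As a rough illustration of the “programmatically installable” idea: a test installs a replacement implementation at runtime and removes it again in teardown, with no package juggling. This sketch uses a hypothetical `ServiceRegistry` and service name; it is not an IS or wM Test Suite API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical registry of service implementations, keyed by name.
// Tests can swap an implementation in and out on the fly.
class ServiceRegistry {
    private static final Map<String, Function<String, String>> SERVICES =
            new ConcurrentHashMap<>();

    static void install(String name, Function<String, String> impl) {
        SERVICES.put(name, impl);
    }

    static void uninstall(String name) {
        SERVICES.remove(name);
    }

    static String invoke(String name, String input) {
        return SERVICES.get(name).apply(input);
    }
}

public class OnTheFlyMockDemo {
    public static void main(String[] args) {
        Function<String, String> real = in -> "real result for " + in;
        ServiceRegistry.install("sap.lookup", real);

        // Test setup: install the mock programmatically...
        ServiceRegistry.install("sap.lookup", in -> "canned result for " + in);
        System.out.println(ServiceRegistry.invoke("sap.lookup", "order-1"));

        // ...teardown: restore the real implementation, again programmatically.
        ServiceRegistry.install("sap.lookup", real);
        System.out.println(ServiceRegistry.invoke("sap.lookup", "order-1"));
    }
}
```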
I understand what you’re saying now. Thanks for the clarification.
I agree that switching between the mock and the real thing needs to be as simple as possible. My position has been to make it the simplest thing possible by not doing mocks at all and instead connecting to the real thing.
For the scenario of testing the code that handles responses, one approach is to split the response-handling code into its own service, then create unit tests for that. This eliminates the need to call either the real system or a mock to test that portion.
The view I’m promoting here: if it seems like a mock would be useful for a given scenario, look at how the component being tested can be further decomposed to isolate steps, so that unit tests that don’t call out to another system can be used instead (a sketch of this follows below).
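A minimal sketch of that decomposition, with made-up names and payloads: the extracted handler is a pure function, so it can be unit-tested with literal response strings captured from the real system, with no remote call and no mock:

```java
// Hypothetical example: classifyOrderStatus is the extracted response-handling
// step. It does no I/O, so testing it needs neither the real system nor a mock.
public class ResponseHandlerTest {

    static String classifyOrderStatus(String rawResponse) {
        if (rawResponse.contains("\"status\":\"SHIPPED\"")) return "close";
        if (rawResponse.contains("\"status\":\"PENDING\"")) return "retry-later";
        return "escalate";
    }

    public static void main(String[] args) {
        // Literal payloads, e.g. captured once from the real system.
        check("close", classifyOrderStatus("{\"status\":\"SHIPPED\"}"));
        check("retry-later", classifyOrderStatus("{\"status\":\"PENDING\"}"));
        check("escalate", classifyOrderStatus("{\"status\":\"LOST\"}"));
        System.out.println("handler tested without any external call or mock");
    }

    static void check(String expected, String actual) {
        if (!expected.equals(actual)) {
            throw new AssertionError(expected + " != " + actual);
        }
    }
}
```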
This is very true, and we strive to decompose our code into simple steps. This makes the code more readable and testable. But there are still cases where the code (even if it doesn’t talk to an external system) must be mocked to implement a test. Decomposing services into smaller pieces facilitates that, since you get more points to hook in.
So as always: Mocks are just a tool and should be used where needed (not where possible).
Interesting. I’ve never heard of creating a mock for anything other than mimicking (mocking) another system. Mocking other IS-hosted components seems odd to me, but food for thought.
The mock I’m using intercepts all calls, so any service will be diverted to the mock engine, which I think is just a simple lookup of input/output pairs, with some wildcards so it can match similar inputs (for instance, on ID fields).
This way the mocked service itself is not changed; when in mock mode, it simply is not invoked.
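For what it’s worth, here is a guess at what such a lookup might look like, with “*” acting as the wildcard (e.g. for ID fields). This is purely illustrative; it is not the actual tool’s implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical mock engine: recorded input patterns map to canned outputs,
// and "*" in a pattern matches any run of characters (useful for IDs).
public class MockEngine {
    // Insertion order is preserved, so the first matching recording wins.
    private final Map<String, String> recordings = new LinkedHashMap<>();

    void record(String inputPattern, String output) {
        recordings.put(inputPattern, output);
    }

    String invoke(String input) {
        for (Map.Entry<String, String> e : recordings.entrySet()) {
            // Translate the wildcard into a regex; the rest of the pattern
            // is assumed to contain no regex metacharacters.
            String regex = e.getKey().replace("*", ".*");
            if (input.matches(regex)) return e.getValue();
        }
        throw new IllegalStateException("no recording matches: " + input);
    }

    public static void main(String[] args) {
        MockEngine engine = new MockEngine();
        // One recording covers any customer ID thanks to the wildcard.
        engine.record("getCustomer id=*", "status=ACTIVE");
        System.out.println(engine.invoke("getCustomer id=12345"));
    }
}
```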