"Unit test" as term losing its focus?

Wanted to get perspectives from others on the scope of “unit test.”

Over the past year or so, I’ve noticed an increased use of “unit test” in contexts that are broader than expected. I won’t repeat the definitions from the web here, but most of them focus on what the Wikipedia article describes: “…but it is more commonly an individual function or procedure.” Or, as an article at SmartBear notes: “…the smallest piece of code that can be logically isolated in a system. In most programming languages, that is a function, a subroutine, a method or property.”

Within Integration Server, that would refer to a FLOW or Java service that is limited in scope. E.g. one could write unit tests for a good number of the services in WmPublic, such as the date, list, json, string folders and more.
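To make the “small, isolated unit” idea concrete outside of FLOW (which is graphical, so a Python sketch stands in here), consider a tiny date-reformatting helper analogous in spirit to the WmPublic date services. The `format_date` function and test names are hypothetical, purely for illustration:

```python
from datetime import datetime

def format_date(value: str, in_pattern: str, out_pattern: str) -> str:
    """Reformat a date string -- analogous in spirit to a WmPublic date service."""
    return datetime.strptime(value, in_pattern).strftime(out_pattern)

# Unit tests in the narrow sense: one small function, no network, no files.
def test_reformats_iso_to_us():
    assert format_date("2023-01-31", "%Y-%m-%d", "%m/%d/%Y") == "01/31/2023"

def test_rejects_garbage():
    try:
        format_date("not-a-date", "%Y-%m-%d", "%m/%d/%Y")
        assert False, "expected ValueError"
    except ValueError:
        pass  # invalid input is correctly rejected

test_reformats_iso_to_us()
test_rejects_garbage()
```

Nothing here touches a database, a socket, or the file system, so it satisfies even the strictest definitions quoted above.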

The SmartBear article goes on to state:

The isolated part of the definition is important. In his book “Working Effectively with Legacy Code”, author Michael Feathers states that such tests are not unit tests when they rely on external systems: “If it talks to the database, it talks across the network, it touches the file system, it requires system configuration, or it can’t be run at the same time as any other test."

This means tests for the services in the client folder, for example, would not be “unit tests,” since each of those “talks across the network.”

Alternatively, an article from Martin Fowler talks about how “unit test” is not as tightly defined as some people think. In addition to noting that unit testing is about “low-level” and “small part” of a software system, he points out there are variations of what is considered a “unit.” And then also describes “social” and “solitary” which leads to the use of “doubles” or “mocks.”
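Fowler’s “solitary” style is where doubles/mocks come in: the code under test is cut off from its collaborators so only its own logic is exercised. A minimal Python sketch (the `PriceService` and `get_rate` names are hypothetical, not from any real system):

```python
from unittest.mock import Mock

class PriceService:
    """Depends on a rate provider; in production that might be a remote lookup."""
    def __init__(self, rate_provider):
        self.rate_provider = rate_provider

    def convert(self, amount: float, currency: str) -> float:
        return round(amount * self.rate_provider.get_rate(currency), 2)

# "Solitary" style: the collaborator is replaced with a double, so only
# PriceService's own logic is tested -- no network, no real rate source.
def test_convert_uses_provider_rate():
    provider = Mock()
    provider.get_rate.return_value = 1.1
    svc = PriceService(provider)
    assert svc.convert(100.0, "EUR") == 110.0
    provider.get_rate.assert_called_once_with("EUR")

test_convert_uses_provider_rate()
```

A “sociable” test of the same class would instead wire in the real provider and accept the extra dependencies that come with it.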

I tend to fall into the camp of “if it is talking to another system, such as a DB, an app via API, etc., then it is not a unit test.” But I may be too rigid there.

My concern is that people seem to be using “unit test” to refer to any and all testing done by developers or in a development environment. I think that is too broad and dilutes the meaning (and hence understanding) of unit test. Say I’m working on a user story and report that “development and unit testing are complete.” What can you safely assume has been tested? “Integration testing” can have the same ambiguity.

I’m very interested in thoughts and experiences from others.

Hello Rob,
I do want to join your camp on this.

I usually make sure both unit and integration testing are covered before I consider a story complete.
I use unit tests, along with mocks, for regression on every deployment and in each environment. I also include proof of the unit test results in pull requests to show that the change does not break anything.

I use integration tests to gain enough confidence in the success and failure cases, though I might not be able to run them in all environments.

This is a very good topic to discuss with the forum.

How much time do you spend updating and confirming that the mocks are behaving as expected/needed? When a test fails, how often do you have to determine whether it is the code under test or the mock that has the issue?

Very little – usually not more than two hours for each system.
We focus most of our unit testing on transformations and validations, and we create mocks only to support that.

We strictly follow a Test-Driven Development approach, breaking the code into modules to produce testable code. When TDD is adhered to, the possibility of failures in the mocks is minimal.


Great topic. Not off-topic at all :slight_smile:

I think testing is a continuum. There will be some ‘bleed’ between unit, functional, load and stress testing. But you are right that a unit test implies testing a single module or unit of code.

Compare to a test that submits a cXML B2B order to an IS URL, waits 3 seconds, then calls wm.tn.doc:viewAs to check a downstream canonical order document was generated correctly. This is an integration test to me. Run this ‘cXML order integration test’ in parallel (say, 20 threads for 2 hours), with other similar tests (‘EDI order’ , ‘cXML invoice’)… impose a load similar to production … and that’s a load test. Impose an ‘unfair’ amount of load – that’s a stress test.

Another necessary condition, for me, to qualify something as a unit test is that it should be fast – 200 ms instead of 200 seconds. The reason is you normally want to run dozens of unit tests in parallel, and you want the total test time to be reasonable (maybe even run them on every commit).

To me, a test that invokes pub.client:http could well be a unit test – provided, it’s a single unit of work, it can be mocked reliably and it is fast. Ditto for database tests.

For instance, here’s a discussion between Perl coders on unit testing code that makes HTTP calls. The stated requirement was, “For the unit testing, I would like to cover the case where a remote host does not respond, does not have a file, and where the fetched file arrives truncated or corrupt”.

One answer suggests, “The harder but more versatile approach is to distribute and spawn a custom webserver that exhibits the intended (error) behaviours under well-known URLs.”

A similar unit test involving HTTP client calls could be built in webMethods IS. In fact, I’d say webMethods has an unfair advantage in constructing ‘mocks’, as many Integration Server components are ‘bidirectional’ by nature - ie. HTTP client and server, email client and port, FTP/S client and port, file write functionality and File Poller port, SAP RFC/IDoc client and server, JDBC client and … (well, here the analogy falls down. But it’s usually easy enough to requisition a reliable test support database).
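The “custom webserver with well-known error URLs” approach from that Perl thread is easy to sketch in any language. Here is a minimal Python stand-in (the route names `/ok`, `/missing`, `/truncated` are my own invention) that a client under test could be pointed at, covering the “no response body”, “file not found”, and “truncated download” cases without any real remote host:

```python
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class FaultyHandler(BaseHTTPRequestHandler):
    """Well-known URLs exhibit well-known (error) behaviours."""
    def do_GET(self):
        if self.path == "/ok":
            body = b'{"status": "ok"}'
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        elif self.path == "/missing":
            self.send_error(404)
        elif self.path == "/truncated":
            # Advertise more bytes than we send, then let the socket close,
            # so a strict client sees a truncated/corrupt download.
            self.send_response(200)
            self.send_header("Content-Length", "1000")
            self.end_headers()
            self.wfile.write(b"partial")
        else:
            self.send_error(500)

    def log_message(self, *args):  # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), FaultyHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_port}"

# The HTTP client under test can now be exercised against each behaviour.
with urllib.request.urlopen(f"{base}/ok") as resp:
    ok_status = resp.status
try:
    urllib.request.urlopen(f"{base}/missing")
    missing_code = None
except urllib.error.HTTPError as err:
    missing_code = err.code
server.shutdown()
```

Whether a test built on this still counts as a “unit” test is, of course, exactly the question this thread is about.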

For what it’s worth, my opinion is integration tests deliver more bang for buck than unit tests. So if you can only do one, do integration tests first – since by running one integration test, you’re implicitly ‘running’ dozens of unit tests that must pass to give you the result you want.

But there is absolutely a real need for unit testing – typically to test library or utility components. Here, unit tests shine when testing edge cases that may not be possible in integration testing.


Good info!

Regarding the “losing its focus” aspect – anyone else seeing “unit test” being used more and more for things that are not unit tests? Are we once again taking a term that had a perfectly fine definition and using it in more scenarios such that when someone says “unit testing is complete” we have no idea what that really means? As an industry, we tend to do that for some reason, the most recent victim being REST. :slight_smile:

Speaking of mocks, I’m not a fan. Mainly due to the relative complexity and the “yet another component to maintain, debug” factor. I get that they can potentially add value, but for us, that has not been the case so far. For integrations hosted in wM IS, I much prefer tests with the real thing. True, one cannot specifically or easily test some scenarios (e.g. the “data is partial or corrupted” cases), but the corner cases are just that – corner cases. As long as any error is detected/reported, and things do not silently continue, we’ve been good. Imagine mocking a complex API in any meaningful way. I would think mocking something like the Salesforce APIs would be a full-time job. Likely easier, and sufficient, to just call Salesforce (in an integration test, not a unit test :slight_smile: ).

It would be useful, perhaps, if a particular post/thread could be marked as “discussion” or something to indicate to the forum software to stop pestering the OP for “accepted answer/solution.” This thread is an example of where there will never be an answer or solution.


Hello Rob,

I agree with you on this.

We usually take a bookish approach to unit tests to solve this problem. I encourage my developers to break the codebase into the blocks of the VETO pattern (Validate, Enrich, Transform, Operate).
Ref - 11.1 The VETO Pattern | Enterprise Service Bus: Theory in Practice.
And it has worked very well for us. We never create too many mocks, since we only create unit tests for the Validate, Enrich, and Transform blocks – not Operate.
As we all know, middleware never owns the data, and we don’t need to test provider systems like a DB, Salesforce, etc.

And we only create mocks when we are performing lookups within the transformations.


Rob –

I’m no fan of mocks either. If someone comes by saying “Integrate this!”, my response is “Where is your test system?”. Sometimes that test system is unavailable, unreliable, or I know it will simply evaporate after the implementation project ends. I see mocks as a necessary evil for those circumstances.

Gotta say this – despite me lauding IS’s ability to host mocks, none of my mocks are IS-hosted. My ‘programmatic mocks’ are basically 3-5 Perl/PHP scripts hosted on a Linux server, which doubles as the host of SFTP/FTP mock ‘data receiver’ endpoints. The mock scripts are brutally simple – they just write the request to disk (to /tmp actually, so it ultimately gets cleaned up automatically) and return a “Thank you / come again” HTTP or API response. One script simulates API failure at random. Another retrieves files written to /tmp for test validation purposes. That’s it.
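For readers who want to picture that “brutally simple” mock, here is a rough Python equivalent of the spool-and-acknowledge idea (not the actual Perl/PHP scripts – the handler name, spool directory, and reply text are illustrative only):

```python
import os
import tempfile
import threading
import urllib.request
import uuid
from http.server import BaseHTTPRequestHandler, HTTPServer

SPOOL_DIR = tempfile.mkdtemp(prefix="mock-spool-")  # stand-in for /tmp

class ThankYouMock(BaseHTTPRequestHandler):
    """Write the request body to disk, then answer politely."""
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        # Spool the payload so a later test step can validate it.
        path = os.path.join(SPOOL_DIR, f"req-{uuid.uuid4().hex}.dat")
        with open(path, "wb") as f:
            f.write(body)
        reply = b"Thank you / come again"
        self.send_response(200)
        self.send_header("Content-Length", str(len(reply)))
        self.end_headers()
        self.wfile.write(reply)

    def log_message(self, *args):  # keep test output quiet
        pass

# Exercise the mock once: start it, POST a payload, read the acknowledgement.
server = HTTPServer(("127.0.0.1", 0), ThankYouMock)
threading.Thread(target=server.serve_forever, daemon=True).start()
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/orders",
    data=b'{"id": "A1"}', method="POST")
with urllib.request.urlopen(req) as resp:
    ack = resp.read()
server.shutdown()
```

A random-failure variant would simply return a 503 for some fraction of requests before spooling, which is all the “simulates API failure” script needs to do.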

For integration and load tests, I use a homegrown Perl wrapper around JMeter (http://metat.sf.net/). There’s also a second homegrown unit testing framework in the process of being released.

Regarding the “[unit testing] losing its focus” aspect – maybe that’s not a bad thing. I get the sense ‘pure’ unit tests are trumpeted too hard sometimes, with people not realising they are inferior to integration tests, because integration tests tend to test end-to-end functionality. In unit testing, all your ‘units’ can pass, but the application can still collapse in a flaming heap… with the units still returning “I’m fine, roger roger, over and out” in a cheery droid voice.

[ That last phrase adapted from this hilarious and insightful opinion piece: Stevey’s Google Platforms Rant ]

In the “overall solution” context, yes, they are inferior. But that is not the focus of unit tests. My objection to the apparently expanding scope of “unit test” is that unit tests are starting to be viewed as the same as integration tests. They are not the same. All unit tests passing is fine, but they should always be viewed for what they are – focused tests. All of them passing does not indicate anything about the overall solution.

My objection is based upon the history of our industry constantly taking terms and destroying them. Unit test seems to be another.

I know I’m tilting at windmills here – terminology evolves via the tyranny of the crowd. :slight_smile: Someone even posted something along the lines that “REST now means this other thing and rants from the original author are not going to change that and should be ignored.”

I second that, @toni.petrov, @marielavd. Perhaps we can have a new kind of section, just called “discussion.”

Also, I noticed that this topic is unlisted. This is an engrossing discussion, and it will benefit everyone.
It is not really off topic and can be listed.



You’re right of course. And you’re not tilting at windmills. The term ‘unit test’ still holds its usual meaning … I hope :slight_smile: .


This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.