Code review / quality review checklists

Does anyone have a checklist compiled for performing a code or quality review of IS flow, IS/TN configuration, security, or other topics?

If not, how about suggestions in this thread for items to review and what to look for? Here are a few to start:

  1. Proper error handling should be in place. Check for try/catch blocks.
  2. Flows should detect and handle connection problems with partners.
  3. An audit trail of transactions should be maintained. Check for proper logging.
  4. Should always use access.println instead of System.out.println.
  5. Review Java code - naming standards, adequate documentation
  6. Functional review. Does the service accomplish the goal?
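The error-handling and audit-trail points above can be sketched in plain Java. This is a minimal illustration of the try/catch/finally-plus-logging pattern a reviewer should look for; the class, method, and log messages are hypothetical and not part of any webMethods API:

```java
import java.util.logging.Logger;

public class OrderHandler {
    private static final Logger LOG = Logger.getLogger(OrderHandler.class.getName());

    public static String process(String orderId) {
        try {
            if (orderId == null || orderId.isEmpty()) {
                throw new IllegalArgumentException("orderId is required");
            }
            LOG.info("Processing order " + orderId);   // audit trail of the transaction
            return "OK";
        } catch (IllegalArgumentException e) {
            // Handle the error and log it rather than letting it escape silently
            LOG.severe("Validation failed: " + e.getMessage());
            return "ERROR: " + e.getMessage();
        } finally {
            // Runs whether the service succeeded or failed
            LOG.fine("process() finished for order " + orderId);
        }
    }
}
```

In a real IS Java service the same shape applies: validate inputs, catch and log failures, and release resources in `finally`.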


  7. Appropriate use of comments in Flow and Java code
  8. Proper balance in the use of the debugLog service; overuse degrades the usefulness of the server log
  9. Adequate logging strategy using either the built-in logging or the addition of log4j or the Commons Logging project
  10. Adequate instrumentation or exposure of integration attributes to enterprise monitoring or systems-management tools
  11. Effective restart/recovery planning at the integration level (e.g. what if my IS server dies at this step in the integration?)
  12. Validation of incoming XML documents and effective communication to the business partner of any encountered errors
  13. Duplicate checking of inbound XML documents against a transaction log (or audit table)
  14. Cleaning up the pipeline, dropping variables when they are no longer needed
  15. Clear variable naming patterns (i.e. value should not be dropped from Pipeline Out in favor of a real variable name such as concatOut)

  16. Caching of results is used, where appropriate

  17. Extraneous debugging steps such as pub.flow:savePipeline are removed from Flow

Whoops. #15 should read “… (i.e. value should be dropped from Pipeline Out and its value mapped from Service Out to a “real” variable name such as concatOut).”

Data Item Declaration Related:

1. Are the names of the variables meaningful?
2. If the programming language allows mixed-case names, are variable names confusing in their use of lower-case and capital letters?
3. Are the variables initialized?
4. Are there similar-sounding names? (A source of unintended errors.)
5. Are all the common structures, constants, and flags defined in a header file rather than separately in each file?

Data Usage Related:

1. Are values of the right data types being assigned to the variables?
2. Is access to data from any standard file, repository, or database done through a publicly supported interface?
3. If pointers are used, are they initialized properly?
4. Are bounds on array subscripts and pointers properly checked?
5. Has the usage of similar-looking operators been checked?


  1. Make sure that the code is operating-system independent (e.g. use "/" instead of "\" as the file separator, or use the system property "file.separator").

  2. Use streams over byte arrays for better performance.

  3. Always close file descriptors, I/O streams, etc. using the Java finally block.

  4. Use Java services only when you must invoke an API that is not accessible through Flow, when doing more complex exception handling, when Flow performance is not acceptable, or when the complexity of the service logic suits itself to Java rather than Flow.

  5. Avoid using Instead, create your own gateway service or create a wrapper service. This allows for greater flexibility.

  6. When executing services via the TN processing rules, execute them in asynchronous mode whenever possible. When a service is executed synchronously, the client who initiated the connection remains waiting until the service has completed.
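Items 1 through 3 above can be combined in one short plain-Java sketch. This is an illustrative utility (the class and method names are hypothetical): it builds paths with `File.separator`, streams bytes instead of reading the whole file into a byte array, and relies on try-with-resources to close the streams even when an exception is thrown:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class CopyUtil {
    // OS-independent path construction via the platform file separator
    public static String tempPath(String name) {
        return System.getProperty("java.io.tmpdir") + File.separator + name;
    }

    // Copy via a fixed-size buffer instead of loading the file into memory;
    // try-with-resources guarantees both streams are closed (equivalent to
    // closing them in a finally block).
    public static long copy(String src, String dst) throws IOException {
        try (InputStream in = new FileInputStream(src);
             OutputStream out = new FileOutputStream(dst)) {
            byte[] buf = new byte[8192];
            long total = 0;
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
                total += n;
            }
            return total;
        }
    }
}
```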

Eliza wrote: “1.Are values of right data types being assigned to the variables?”

I would offer that the "right" default data type is normally string. Strict data typing within integrations is typically not necessary. My approach is to keep everything as a string unless there is a specific "in the middle" need to do otherwise. If a different type is needed for some sort of manipulation, then convert, do the manipulation, and convert back to string (e.g. the math services).
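The convert-manipulate-convert-back pattern looks like this in plain Java (an illustrative helper, not an IS built-in; `BigDecimal` plays the role of the math services):

```java
import java.math.BigDecimal;

public class StringMath {
    // Values cross the integration as strings; convert only for the
    // arithmetic in the middle, then convert straight back to string.
    public static String addAmounts(String a, String b) {
        BigDecimal result = new BigDecimal(a).add(new BigDecimal(b));
        return result.toPlainString();
    }
}
```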

The one exception I have to this rule of thumb is for dates/timestamps. Generally, I use strings at the edges (converting to/from the string format desired by the end point) and java.util.Date while the data is in transit. When the end point is a database, this has the advantage of not needing to figure out the default string format for that DB; the driver handles the marshalling between java.util.Date and whatever form the DB wants.
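A minimal sketch of "strings at the edges, java.util.Date in transit" (the class name and both timestamp formats are illustrative assumptions, not anything a partner actually mandates):

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class DateEdges {
    // Inbound edge: parse the partner's string format into a Date
    public static Date fromPartner(String ts) {
        SimpleDateFormat f = new SimpleDateFormat("yyyyMMddHHmmss");
        f.setTimeZone(TimeZone.getTimeZone("UTC"));
        try {
            return f.parse(ts);
        } catch (ParseException e) {
            throw new IllegalArgumentException("bad timestamp: " + ts, e);
        }
    }

    // Outbound edge: format the in-transit Date for the target end point
    public static String toEndpoint(Date d) {
        SimpleDateFormat f = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss");
        f.setTimeZone(TimeZone.getTimeZone("UTC"));
        return f.format(d);
    }
}
```

Between the two edges, the Date object travels untouched, so only the edge services need to know about string formats at all.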

There are good options for code review and code-quality validation out there. One such solution is offered commercially by my company. The rules of the community do not allow me to elaborate, but if this is still an open question, feel free to connect on LinkedIn.

Regards, Christian

We have also developed a quite extensive set of test tools for internal use in our company. It includes static code analysis, unit tests, process tests, mocks, integration with Jenkins, and code coverage (currently not in use, but it works).

I have not heard of anything like this, but that does not mean such tools do not exist; many companies may have created similar tools for their internal needs and simply do not advertise them. Our company is one such example.

Yep, same situation at other organizations too: many host lots of custom automation tools built in-house to handle code-review and checklist scenarios as part of their DevOps build activities.