Whoops. #15 should read “… (i.e. value should be dropped from Pipeline Out and its value mapped from Service Out to a “real” variable name such as concatOut).”
Data Item Declaration Related:
1. Are the names of the variables meaningful?
2. If the programming language allows mixed-case names, are there variable names that mix lower-case and upper-case letters in a confusing way?
3. Are the variables initialized?
4. Are there similar-sounding names? (These can lead to unintended errors.)
5. Are all the common structures, constants, and flags defined in a shared header file (or constants class) rather than in each file separately? (See the Java sketch after this list.)
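For a Java-centric codebase such as Integration Server Java services, item 5 roughly translates to keeping shared constants in one class instead of redefining them in every file. A minimal sketch; the class and constant names are illustrative, not from the checklist:

```java
// Hypothetical shared-constants class; the names are illustrative only.
public final class OrderConstants {

    // Common flags and literals defined once, referenced everywhere else.
    public static final String STATUS_NEW       = "NEW";
    public static final String STATUS_PROCESSED = "PROCESSED";
    public static final int    MAX_RETRIES      = 3;

    private OrderConstants() {
        // Prevent instantiation; this class only holds constants.
    }
}
```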
Data usage related:
1. Are values of the right data types being assigned to the variables?
2. Is access to data in standard files, repositories, or databases done through a publicly supported interface?
3. If pointers (or object references) are used, are they initialized properly?
4. Are array subscripts and pointer bounds properly checked? (See the Java sketch after this list.)
5. Has the use of similar-looking operators been checked?
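Items 3 and 4 can be illustrated in Java, where “pointers” become object references and bounds checks apply to array indices. A small hedged sketch; the method and variable names are made up for illustration:

```java
// Illustrative only: guarding references and array bounds before use.
public final class BoundsCheckExample {

    public static String safeGet(String[] values, int index) {
        // Reference check: the array itself may be null.
        if (values == null) {
            return null;
        }
        // Bounds check: reject indices outside the array.
        if (index < 0 || index >= values.length) {
            return null;
        }
        return values[index];
    }

    public static void main(String[] args) {
        String[] names = {"alpha", "beta"};
        System.out.println(safeGet(names, 1));  // beta
        System.out.println(safeGet(names, 5));  // null instead of an exception
        System.out.println(safeGet(null, 0));   // null instead of a NullPointerException
    }
}
```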
Make sure that the code is operating-system independent (e.g. use “/” instead of “\” as the file separator, or better, use the system property “file.separator”).
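In Java this usually means building paths with File.separator or java.nio.file.Paths rather than hard-coding “\” or “/”. A minimal sketch; the directory and file names are placeholders:

```java
import java.io.File;
import java.nio.file.Path;
import java.nio.file.Paths;

public final class PathExample {
    public static void main(String[] args) {
        // Option 1: the platform-specific separator from the runtime.
        String sep = System.getProperty("file.separator");   // same value as File.separator
        String legacyPath = "data" + sep + "inbound" + sep + "order.xml";

        // Option 2 (often cleaner): let the NIO API join segments portably.
        Path path = Paths.get("data", "inbound", "order.xml");

        System.out.println(legacyPath);
        System.out.println(path);
    }
}
```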
Use streams over byte arrays for better performance.
Always close file descriptors, I/O streams, etc. using the Java finally block.
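The two tips above can be combined: copy data through buffered reads instead of loading whole byte arrays, and make sure the streams are closed even on failure. The sketch below uses a finally block as suggested (on Java 7+ a try-with-resources block does the same thing more concisely); the file names are placeholders:

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public final class StreamCopyExample {

    public static void copy(String source, String target) throws IOException {
        InputStream in = null;
        OutputStream out = null;
        try {
            in = new FileInputStream(source);
            out = new FileOutputStream(target);

            // Stream the data in small chunks instead of reading the
            // whole file into one byte array.
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        } finally {
            // Close both streams even if the copy failed part-way.
            if (in != null) {
                try { in.close(); } catch (IOException ignored) { }
            }
            if (out != null) {
                try { out.close(); } catch (IOException ignored) { }
            }
        }
    }
}
```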
Use Java services only when you must invoke an API that is not accessible through Flow, when doing more complex exception handling, when Flow performance is not acceptable, or when the complexity of the service logic lends itself to Java rather than Flow.
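For reference, an Integration Server Java service typically has the shape sketched below, using the public IData/IDataUtil pipeline API. This is a hedged outline only; the service name and the pipeline variable names (inString, outString) are assumptions for illustration:

```java
// Sketch of an Integration Server Java service using the public IData API.
// Assumes the standard webMethods client/server jars are on the classpath.
import com.wm.app.b2b.server.ServiceException;
import com.wm.data.IData;
import com.wm.data.IDataCursor;
import com.wm.data.IDataUtil;

public final class StringServices {

    // Designer generates a static method with this signature for a Java service.
    public static final void toUpper(IData pipeline) throws ServiceException {
        IDataCursor cursor = pipeline.getCursor();
        try {
            // Read the (assumed) input variable from the pipeline.
            String inString = IDataUtil.getString(cursor, "inString");
            if (inString == null) {
                throw new ServiceException("inString is required");
            }
            // Write the (assumed) output variable back to the pipeline.
            IDataUtil.put(cursor, "outString", inString.toUpperCase());
        } finally {
            cursor.destroy();
        }
    }
}
```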
Avoid using wm.tn:receive. Instead, create your own gateway service or create a wrapper service. This allows for greater flexibility.
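A wrapper can be a thin Flow or Java service that does its own validation or auditing and then delegates to wm.tn:receive. The Java sketch below shows the delegation with Service.doInvoke; treat the exact invocation style and any pre-processing shown as assumptions, since the post describes only the pattern, not an implementation:

```java
// Hedged sketch of a gateway/wrapper service that delegates to wm.tn:receive.
import com.wm.app.b2b.server.Service;
import com.wm.app.b2b.server.ServiceException;
import com.wm.data.IData;
import com.wm.lang.ns.NSName;

public final class TnGateway {

    public static final void receiveWrapper(IData pipeline) throws ServiceException {
        try {
            // Place for your own checks, auditing, or header manipulation
            // before the document is handed to Trading Networks.

            // Delegate to the built-in service (assumed invocation style).
            Service.doInvoke(NSName.create("wm.tn:receive"), pipeline);
        } catch (Exception e) {
            // Surface failures in a form callers of the wrapper can handle.
            throw new ServiceException(e.getMessage());
        }
    }
}
```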
When executing services via the TN processing rules, execute them in asynchronous mode whenever possible. When a service is executed synchronously, the client that initiated the connection remains blocked until the service has completed.
Eliza wrote: “1. Are values of the right data types being assigned to the variables?”
I would offer that the “right” default data type is normally string. Strict data typing within integrations is typically not necessary. My approach is to keep everything as a string unless there is a specific “in the middle” need to do otherwise. If a different type is needed for some sort of manipulation, then convert, do the manipulation, and convert back to string (e.g. for the math services).
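In Java terms, the “convert, manipulate, convert back” pattern might look like the sketch below, using BigDecimal so the intermediate math is exact. The variable names and the two-decimal rounding are illustrative assumptions:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public final class StringMathExample {
    public static void main(String[] args) {
        // Values travel through the integration as strings...
        String price = "19.99";
        String quantity = "3";

        // ...are converted only for the "in the middle" math...
        BigDecimal total = new BigDecimal(price)
                .multiply(new BigDecimal(quantity))
                .setScale(2, RoundingMode.HALF_UP);

        // ...and are converted straight back to a string for the pipeline.
        String totalAsString = total.toPlainString();
        System.out.println(totalAsString);  // 59.97
    }
}
```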
The one exception I have to this rule of thumb is for dates/timestamps. Generally, I use strings at the edges (convert to/from the string format expected by the end point) and java.util.Date while the data is in transit. When the end point is a database, this has the advantage of not needing to figure out the default string format for that DB: the driver handles the marshalling between java.util.Date and whatever form the DB wants.
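A hedged Java illustration of that approach: parse the edge format into java.util.Date, keep the Date while the data is in transit, and let the JDBC driver do the marshalling via a Timestamp parameter. The date pattern, table, and column names are assumptions for the example:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public final class DateEdgeExample {

    // Parse the string format used by the sending end point (pattern assumed).
    public static Date fromEdge(String value) throws ParseException {
        return new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss").parse(value);
    }

    // Hand the Date to the database; the driver handles the DB-specific format.
    public static void insertOrderDate(Connection conn, Date orderDate) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO orders (order_date) VALUES (?)");
        try {
            ps.setTimestamp(1, new Timestamp(orderDate.getTime()));
            ps.executeUpdate();
        } finally {
            ps.close();
        }
    }
}
```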
There are good options for code review and code quality validation out there. One such solution is offered commercially by my company. The rules of the community do not allow me to elaborate, but if this is still an open question, feel free to connect on LinkedIn.
We have also developed quite an extensive set of test tools for internal use in our company. It includes static code analysis, unit tests, process tests, mocks, integration with Jenkins, and code coverage (currently not in use, but it works).
I have not heard of anything like this, but that does not mean such tools do not exist; many companies may have created similar tools for their internal needs and simply do not advertise them. Our company is one such example.
Yep, same situation at other orgs I have seen: they also host a lot of custom automation tools built in-house to handle things like code reviews and checklists as part of their DevOps build activities.