Possible DES-related errors and resolutions [In Progress]

  1. “Error:0 errorMessageing Synchronizer cannot perform sync; Integration Server cannot connect to provider isSuccessful:false”
    • Make sure IS_DES_CONNECTION is enabled on the IS Administration page.
  2. What if the products are installed on different machines or in different directories?
    • Copy the .dx files from the source machine to the same location on all other machines.
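
The copy step above can be sketched as follows. This is a minimal helper, not an official tool; the source and destination directories are assumptions, since the actual location of the .dx files depends on your installation:

```python
import shutil
from pathlib import Path

def copy_dx_files(src_dir: str, dest_dir: str) -> list[str]:
    """Copy every .dx file from src_dir into dest_dir (created if missing).

    The directory paths are placeholders; point them at the .dx location
    of your own installation on the source and target machines.
    """
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    copied = []
    for dx in sorted(Path(src_dir).glob("*.dx")):
        shutil.copy2(dx, dest / dx.name)  # copy2 preserves timestamps
        copied.append(dx.name)
    return copied
```

For machines you cannot mount directly, the same files can be transferred with any file-copy mechanism (scp, shared drive), as long as they end up in the same location as on the source machine.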
  3. “The Integration Server was unable to commit these changes to this document type……..RepositoryFileCorruptedException:….Lexical analysis failed:”
    • Do not update or modify .dx files directly in the repository. If a file has been edited, delete the .dx files from the repository and run the sync again.
  4. “Trying to receive from DES, could not connect to UniversalMessaging [nsp://localhost:9000] (Realm is currently not reachable:Realm was still unreachable after max retry count - nSessionAttributes:conns=1/[nsp://localhost:9000/])”
    • The Universal Messaging server is not running. Start the UM server.
  5. “Unable to send message to webMethods Messaging alias IS_DES_CONNECTION:….. Cannot perform operation 'tryEmit' because Digital Event Services is not running.”
    • DES is not running. Log in to CCE, go to Instances -> ALL -> IS_default -> Digital Event Services, and start DES.
    • Apart from this, you are not allowed to start or stop DES manually; restarting DES is not supported.
  6. After exporting and importing an IS package that contains triggers, the triggers are not visible, and triggers with the same names cannot be created.
    • The usual cause is renaming the package while exporting it from Integration Server. If you rename the package, node.ndf is not updated with the new name. This is expected behavior.
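
To confirm this is the problem, you can scan the imported package for node.ndf files that still reference the old package name. This is a diagnostic sketch only; the directory layout and the exact content of node.ndf are assumptions about your installation:

```python
from pathlib import Path

def find_stale_package_refs(package_dir: str, old_name: str) -> list[str]:
    """Return paths of node.ndf files under package_dir that still contain
    the old package name.

    package_dir is assumed to be the root of the imported IS package;
    adjust to where your Integration Server keeps its packages.
    """
    hits = []
    for ndf in Path(package_dir).rglob("node.ndf"):
        # errors="ignore" in case the file mixes encodings
        if old_name in ndf.read_text(errors="ignore"):
            hits.append(str(ndf))
    return hits
```

If any hits come back, re-export the package from the source Integration Server without renaming it.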
  7. You updated the UM URL from CCE, but it still points to the old URL.
    • In the Apama project, check the DESConnectivity.properties file and make sure the value of the property “DigitalEventServices_replaceConfigWithRNAME=” is empty.
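
For reference, the relevant line in DESConnectivity.properties should look like the fragment below (the property name comes from the description above; the comment wording is mine, not from the product documentation):

```properties
# DESConnectivity.properties in the Apama project.
# The value must be left empty; a non-empty value overrides the UM URL
# configured through CCE.
DigitalEventServices_replaceConfigWithRNAME=
```

After clearing the value, apply the UM URL change from CCE again.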
  8. You are publishing data from IS but do not see it arriving in UM Enterprise Manager.
    • Make sure the “Service Groups” configuration in CCE points to the desired UM server.
  9. “com.softwareag.events.routing.QueueFullException: The store & forward queue is full.”
    • This error occurs when UM is down and the store-and-forward queue has reached the limits set by “Default On-Disk Capacity” and “Default In-Memory Capacity” in the runtime configuration. Increase both values to get rid of the error.
  10. Duplicate data may be received at the destination after Apama is restarted.
    • This happens when the Apama monitor code does not wait for a flush acknowledgement via “on FlushAck(requestId = c.flush())”. See the Apama online help for details.
  11. “com.softwareag.eda.estore.storage.impl.elasticsearch.v232.client.EsClientManagerException: Failed to create Elasticsearch transport client connection to cluster SAG_EventDataStore, index testindex on server [elasticsearch://localhost:9300] with user evsuser.”
    • In the Elasticsearch config page, under the Cluster Settings, Cluster URI will point to elasticsearch://localhost:9300 by default. Change the listening TCP port to 9340.
  12. “com.softwareag.eda.estore.storage.impl.elasticsearch.v232.client.EsClientManagerException: Failed to create Elasticsearch transport client connection to cluster EventDataStore, index testindex on server [elasticsearch://localhost:9340] with user evsuser.”
    • In the Elasticsearch config page, enter the Cluster Name exactly as it appears in the cluster.name field of elasticsearch.yml, e.g. “SAG_EventDataStore”.
    • Alternatively, update cluster.name in elasticsearch.yml to match the configured Cluster Name.
  13. After publishing an event, the data is not present in Elasticsearch.
    • Wait for the duration specified in Batch Write Timer (sec); the default is 15 seconds.
    • In CCE, make sure the “Service Groups” configuration points to Elasticsearch.
  14. “Error executing Elasticsearch bulk indexing request; operation will be re-tried; Exception repeated 1 times.NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{127.0.0.1}{localhost/127.0.0.1:9340}]]”
    • The Elasticsearch server may be down. Start the Elasticsearch server.

  15. “: java.sql.SQLException: Could not open connection to jdbc:hive2://localhost:10000/database?mapred.child.java.opts=-Duser.timezone=UTC: java.net.ConnectException: Connection refused: connect at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:203)”
    • The target database must be created in Hadoop before the event transfer.
    • Make sure the custom Hive SerDe and Joda-Time libraries are installed.

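
A “Connection refused” in this and the previous errors usually just means nothing is listening on the target port. Before digging deeper, a quick reachability check helps; this sketch is a generic TCP probe, with the host and port taken from the JDBC URL in the error message as an example:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timeout, or unresolvable host
        return False

# Example: port_open("localhost", 10000) returning False means nothing is
# listening on the HiveServer2 port, i.e. the server is down or is
# configured on a different port. The same check applies to UM (9000)
# and the Elasticsearch transport port (9340).
```

This only proves reachability; authentication or protocol problems still need the product logs.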
  16. Index names must be lowercase; Elasticsearch rejects index names containing capital letters.
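
The lowercase rule can be checked before creating the index. The sketch below also includes a few other index-name constraints that Elasticsearch commonly documents (no special characters, no leading -, _, or +, length limit); treat those extra checks as assumptions and verify them against the documentation for your Elasticsearch version:

```python
# Characters Elasticsearch is documented to reject in index names
# (assumption: verify against your Elasticsearch version's docs).
_FORBIDDEN = set('\\/*?"<>| ,#:')

def is_valid_index_name(name: str) -> bool:
    """Pre-validate an Elasticsearch index name before sending the request."""
    if not name or name in {".", ".."}:
        return False
    if name != name.lower():          # the lowercase rule from item 16
        return False
    if name[0] in "-_+":              # no leading -, _ or +
        return False
    if any(ch in _FORBIDDEN for ch in name):
        return False
    return len(name.encode("utf-8")) <= 255   # length limit is in bytes
```

For example, is_valid_index_name("testindex") passes, while "TestIndex" fails on the lowercase check.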