I have loaded a repository with approximately 2,000 Natural modules. A database field needs to be changed, so I am now trying to run the impact analysis for this change. After half an hour it hung on module 130, a program that uses the file but does not reference the field at all. The elapsed time and estimated time stopped updating (the estimate was over 500 minutes!), yet CPU usage stayed at 100%. It also seems to hang at 100% CPU when the DDM list is being populated, and on some other reports as well.
I have set up the database parameters as specified in the ‘Environment sizing’ documentation. Should anything be different? Is this repository too big? The HOSPITAL example system worked fine.
From my XREF data I can tell that only 128 modules are potentially affected. I am wondering whether I should just extract these 128 modules, plus all the DDMs and copycode in the library, and work on this smaller repository. I know there will be many missing modules, but does that matter as long as the field to be changed isn’t passed to any other module?
Do the objects need to be catalogued in the library for Natural Engineer to work?
Natural 611 PL 13
Natural Engineer 521
Adabas 333 PL 1
Windows XP - Pentium 4 2 GHz - 1 GB RAM