DEVICE type Size vs Performance


My question is related to Adabas 7.4.x ASSO/DATA block size and performance.

I have thought for many years that the ‘conventional wisdom’ is that a bigger block size gives better performance. But I’m not sure about that any more, because I read this passage, which says:
The time for moving or reading blocks into or out of the cache structure depends on the device type (block size) in use:

Small block sizes are moved synchronously to and from the cache structure.

Larger block sizes may be moved asynchronously. Asynchronous moves take much longer and always require more CPU time than synchronous requests.

Although earlier versions of Adabas often worked well with large block sizes, the buffer pool manager and forward index compression feature introduced with Adabas version 7 make smaller block sizes more attractive, especially in data-sharing mode.

Now that link does appear in the Cluster Services section of the manual and I am NOT a cluster site.

Can anyone please confirm that a smaller block size is indeed better? My goal is fast I/O, but NOT at the expense of CPU - I PAY $$$ FOR CPU - I don’t pay $$$ for I/O.

You see, before I next increase my production database I must ADAORD RESTRUCTUREDB/ADADEF DEFINE/ADAORD STORE it because I have 16.7m DATA RABNs already and I need to change RABNSIZE from 3 to 4.
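For context, the 16.7m figure is the ceiling of a 3-byte field: a RABN is a block number held in a fixed-width binary field, so the field width caps how many blocks the database can address. A quick sketch (illustrative arithmetic only, not Adabas internals):

```python
def max_rabns(rabnsize_bytes: int) -> int:
    """Largest block number representable in an unsigned field of this width."""
    return 2 ** (8 * rabnsize_bytes) - 1

print(max_rabns(3))  # 16777215  -> ~16.7m, the limit being reached
print(max_rabns(4))  # 4294967295
```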

At this ADADEF DEFINE stage I have the opportunity to change block size.

My current dev type is 3390 giving ASSO 2544 bytes/RABN and DATA 5064 bytes/RABN.

Any advice from vendor, or experience from customer, welcomed!



As you mention, that passage specifically discusses performance issues for Cluster Services, where movements between the cache structure and Adabas nuclei have a serious impact on overall CPU consumption.

This discussion does not apply to a non-cluster environment. But since I wrote that part of the documentation, let me be clear: even under Cluster Services, small block sizes (device type 3390) are generally not recommended.

Block sizes are trade-offs, and as a rule of thumb device type 3390 is usually not optimal for current database sizes. When most I/Os included an access to a physical disk, the argument for larger block sizes tended to be stronger, because the mechanical delays were independent of the block size. When I/Os are satisfied out of an I/O cache, this advantage is somewhat diminished.

Nevertheless, there are still some good arguments for larger block sizes:

On Asso you would like to avoid high index levels on your most heavily used files; these should have index levels of 3 or 4.
If index levels come down thanks to a larger block size on Asso, this reduces I/O as well as CPU. A block size of 4K on Asso is often a good compromise.
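To see why a larger Asso block can drop an index level, here is a rough back-of-the-envelope model: the fan-out is how many index entries fit in one block, and the depth grows with log base fan-out of the record count. The 20-byte entry size and the record count are illustrative assumptions, not Adabas internals:

```python
import math

def index_levels(record_count: int, block_size: int, avg_entry_bytes: int = 20) -> int:
    """Rough B-tree-style depth estimate: fan-out = entries per block."""
    fanout = max(2, block_size // avg_entry_bytes)
    return max(1, math.ceil(math.log(record_count, fanout)))

# Doubling the Asso block size roughly doubles the fan-out, which can
# take one level off the index: one less I/O per random access.
print(index_levels(5_000_000, 2544))  # 3390 Asso block -> 4
print(index_levels(5_000_000, 4096))  # 4K Asso block   -> 3
```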

On DATA it depends somewhat on the sizes of your compressed records, the amount of sequential processing, and a lot of other considerations such as the size of your buffer pool.
Again, a block size of a quarter track to half a track tends to work well in general.
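A toy comparison of how many compressed records fit in one DATA block for the sizes discussed here. The 5064-byte figure is the 3390 DATA block size from the question; the quarter-track and half-track figures, the per-block overhead, and the 300-byte compressed record length are illustrative assumptions:

```python
def records_per_block(block_size: int, compressed_len: int, overhead: int = 4) -> int:
    """Rough count of compressed records fitting in one DATA block."""
    return (block_size - overhead) // compressed_len

for name, bs in [("3390 DATA (5064)", 5064),
                 ("quarter track", 13682),   # assumed figure
                 ("half track", 27998)]:     # assumed figure
    print(name, records_per_block(bs, compressed_len=300))
```

Larger blocks mean fewer I/Os for sequential scans of the same data, at the cost of moving more bytes per random access.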

Making block sizes too large is usually less of a problem than keeping them too small. A large block size is unlikely to impact your CPU consumption in a non-cluster environment.

Rainer Herrmann

Hello Rainer

Thank you for your excellent reply. This helps immensely.

David Gurr