 # Sample Adabas -> Natural with decimals

Hi,

I would like a practical example of storing a decimal value such as U.S. $1,000.56, and of how this should be handled in Natural.

In Adabas I would have a field defined as P10, for example.

What would my DDM look like in Natural? And how would I program with this value with decimals?

Cláudio.

There is no transformation, a decimal value is a decimal value; the Natural definition will dictate what is stored on your Adabas file.

The Adabas FDT definition is a byte length; the Natural definition defines the digits before and after the decimal point.

The Adabas definition does not know about a decimal point and its location, so it is essential to use the same Natural definition whenever a specific P or N field is accessed.
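To make that concrete (a hypothetical sketch; the EMPLOYEES DDM and SALARY field are made up): if one program writes through a P9.2 view and another reads the same field through a P11.0 view, both definitions map onto the same 6 packed bytes in Adabas, but the second program sees the value multiplied by 100:

```
* Program 1 stores 1000.56 through a P9.2 view
DEFINE DATA LOCAL
1 EMP1 VIEW OF EMPLOYEES      /* hypothetical DDM
  2 SALARY (P9.2)             /* 6 packed bytes in the FDT
END-DEFINE
*
* Program 2 reads the very same bytes through a P11.0 view
* (same total number of digits, no decimal positions):
* SALARY then appears as 100056 - the decimal point only ever
* existed in the Natural definition, never in Adabas.
```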

First of all - always define packed (type P) fields with an odd number of digits, otherwise you are wasting space.

For a dollar value with 2 digits after the decimal point, define something like

``1 #mydollarvalue (P9.2)``

The corresponding Adabas definition would be 6,P (9 + 2 = 11 digits, plus a sign nibble = 12 half-bytes, divided by 2 = 6 bytes).
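A minimal sketch of how the original U.S. $1,000.56 example could then be displayed (the edit mask shown is just one way to format the output):

```
DEFINE DATA LOCAL
1 #MYDOLLARVALUE (P9.2) INIT <1000.56>   /* stored as 6 packed bytes in Adabas
END-DEFINE
WRITE 'U.S. $' #MYDOLLARVALUE (EM=Z,ZZZ,ZZ9.99)
END
```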

Thanks.

Next thing is: you can’t add digits after the decimal point later on. So if you define 9.2 in your DDM, you have to be sure that 2 digits after the decimal point are enough; you can’t simply turn it into a 9.4.

And if you are working on a multi-currency system, you have to store the currency in a separate field (e.g. CURRENCY (A3)).
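A hypothetical layout (the file and field names are made up) might look like:

```
DEFINE DATA LOCAL
1 PAYMENT VIEW OF PAYMENTS    /* hypothetical DDM
  2 AMOUNT   (P9.2)
  2 CURRENCY (A3)             /* e.g. 'USD', 'EUR', 'BRL'
END-DEFINE
```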

Not easily, but it is possible: one would simply read the whole file and multiply all values by 100.
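As a sketch (MYFILE and AMOUNT are hypothetical, and a real conversion would batch its END TRANSACTIONs): read every record through the old P9.2 view, multiply by 100 and update. Afterwards the DDM can be changed to 9.4, and the stored bytes then represent the same value with four decimal places:

```
* one-time conversion program
DEFINE DATA LOCAL
1 OLD-VIEW VIEW OF MYFILE     /* hypothetical DDM
  2 AMOUNT (P9.2)             /* watch for overflow: every value must fit *100
END-DEFINE
READ OLD-VIEW BY ISN
  COMPUTE AMOUNT = AMOUNT * 100
  UPDATE
  END TRANSACTION             /* in real life, commit every n records
END-READ
END
```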

Hi Wolfgang & Matthias,

Mathematically, what you have suggested is not correct; practically it does work, but it can be very messy.

Suppose I have a value like 123.45 in an N6.2 variable, which represents the length of something, and I wish to change it to N6.4.

Now I multiply the value by 100 to produce 12345.00.

To work with this, I must divide by 100. So now I have 123.4500.

The problem is the last two zeroes. Their presence basically says that the value was measured with four decimal places. BUT, it wasn’t, it was only measured to two decimal places.

Now suppose I divide 123.4500 by two and put the result in an N6.4, getting 61.7250.

Is it true that two “somethings” of length 61.725 will exactly fit in the space of something with length 123.45? The answer is no. The original 123.45 might really be 123.4572 (truncated) or 123.4496 (rounded up).

To expand on Matthias’ comment, the number of decimal places should always reflect the true “precision” of a variable.

To put this in terms more germane to Natural, consider the following program and output:

```
DEFINE DATA LOCAL
1 #A (N2.1) INIT <12.5>
1 #B (N2.5)
1 #C (N2.1)
END-DEFINE
*
COMPUTE #B = #A / 3
WRITE '=' #B
COMPUTE #C = #A / 3
WRITE '=' #C
END
```

```
Page 1                                13-04-22 06:56:27

#B: 4.16666
#C: 4.1
```

In older versions of Natural the precision of a divide was always determined by the precision of the numerator, which was mathematically correct. The value of #B, in older versions, would have been 4.10000: it says that, to one decimal place, 4.1 is a third of 12.5. The final four zeroes are meaningless.

Bowing to the user community, this was changed. Now the precision of a divide is the maximum of the precision of the numerator and the precision of the result. Mathematically, this is not correct. You cannot change the precision of a value by performing an arithmetic operation on it.

Can this cause a problem? Yes, I have seen such, especially in computations involving money (usually, there is code written to make things come out okay).
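One common piece of such “make it come out okay” code is Natural’s ROUNDED option, which rounds the result to the target field’s precision instead of truncating:

```
DEFINE DATA LOCAL
1 #A (N2.1) INIT <12.5>
1 #C (N2.1)
END-DEFINE
COMPUTE ROUNDED #C = #A / 3   /* rounds to 4.2 instead of truncating to 4.1
WRITE '=' #C
END
```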

Steve,

the story is this:

Suppose you have a U8 field defined on an Adabas file - Adabas knows nuffin about a decimal point.

Perfect, we are on the same page anyway then. Thanks for the clarification.