How to convert Binary to Numeric / Alphanumeric

I have a number stored as binary (B). How can it be converted to numeric (N) or alphanumeric (A)?

When a hex value is moved to a binary field and displayed, it comes out as a hex value.

MOVE H'0A' TO #BINARY
DISPLAY #BINARY gives the output 'A'.

I would like this to be displayed in binary, as a bit pattern.


/*  This program serves as an example of how to design a user-defined
/*  program to call 'USR1028N'.
/***********************************************************************
DEFINE DATA
LOCAL
   1 FUNCTION       (A1)          /* 'I' - byte to bit conversion
                                  /* 'Y' - bit to byte conversion
   1 NUM            (P5)          /* Number of bytes to be converted
   1 BITS           (A1/1:8,1:20) /* Array of bits to be converted
   1 REDEFINE BITS
     2 BITS-A8      (A8/1:20)
   1 BYTES          (A1/1:20)     /* Array of bytes to be converted
   1 REDEFINE BYTES
     2 BYTES-B      (B1/1:20)
   1 RESPONSE       (N4)          /* Error code
*
END-DEFINE
*
* Set up some defaults:
*
SET KEY ALL
FUNCTION := 'I'
NUM := 4
MOVE ALL '0' TO BITS(*,*)
BYTES(1) := 'A'
BYTES(2) := 'B'
BYTES(3) := 'C'
BYTES(4) := 'D'
RESET BYTES(5:6)
*
REPEAT
*
  INPUT (AD=MITL'_' CD=NE IP=OFF)
      'Bit/byte conversion:' (YEI)
    / '-' (20) (YEI) /
    / 'Function ..........' (TU) FUNCTION
      6X '("I" - byte to bit; "Y" - bit to byte)' (TU)
    / 'Number of bytes ...' (TU) NUM
    / 'Response ..........' (TU) RESPONSE (AD=O CD=TU)
  /// 'Bytes alpha .......' (TU) BYTES(01) 8X BYTES(02) 8X
                                 BYTES(03) 8X BYTES(04) 8X
                                 BYTES(05) 8X BYTES(06)
    / 'Bytes hex .........' (TU) BYTES-B(01) 7X BYTES-B(02) 7X
                                 BYTES-B(03) 7X BYTES-B(04) 7X
                                 BYTES-B(05) 7X BYTES-B(06)
    / 'Bits ..............' (TU) BITS-A8(01) BITS-A8(02) BITS-A8(03)
                                 BITS-A8(04) BITS-A8(05) BITS-A8(06)
  /// 'Press any PF-Key to stop.' (TU)
*
  IF *PF-KEY NE 'ENTR'
    STOP
  END-IF
*
  CALLNAT 'USR1028N'
    FUNCTION NUM BITS(1:8,1:NUM) BYTES(1:NUM) RESPONSE
END-REPEAT
*
END

First, a disclaimer. I like USRs. What I like most about them is that they provide “protection” against modifications to Natural structures, encoding, etc. I know that the latest USR for a particular function will reflect the current version of Natural.

However, I find that USRs that do not pertain to Natural structures can often be coded as inline statements which are much more efficient than a CALLNAT to a USR.

All that said, I would ask the original poster why they need to look at a bit map.

Now, to some code. It is quite simple to convert a byte to a bit map, as shown below:

DEFINE DATA LOCAL
1 #B (A1/1:256) INIT /* …etc
1 #A (A8/1:256) INIT <'00000000','00000001','00000010','00000011'>
1 #INDEX (I4)
END-DEFINE
*
INCLUDE AATITLER
INCLUDE AASETC
*
EXAMINE #B(*) FOR FULL H'03' GIVING INDEX #INDEX
*
WRITE 5T #INDEX // 5T #B(#INDEX) (EM=HHHHHHHH) // 5T #A(#INDEX)
END

PAGE #   1                    DATE:    Feb 04, 2009
PROGRAM: BIN01                LIBRARY: INSIDE

          4

03

00000011

Of course, you would have to define the full range of 256 values for #A and #B. But you do that just once, and it is not executable code.
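As a side note, the #B half of the table would not necessarily have to be typed in by hand: its entries are simply the byte values H'00' through H'FF', so a one-time loop could fill them at start-up. The following is only a rough, untested sketch; the REDEFINE and the #B-B and #I names are my own additions, not part of the program above.

DEFINE DATA LOCAL
1 #B (A1/1:256)            /* lookup keys: every possible byte value
1 REDEFINE #B
  2 #B-B (B1/1:256)        /* B1 view so the entries can be set numerically
1 #I (I4)
END-DEFINE
*
FOR #I = 1 TO 256
  #B-B(#I) := #I - 1       /* entry 1 holds H'00', entry 256 holds H'FF'
END-FOR
*
END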

Which brings us to performance. I compared the times to use the table lookup and to use USR1028. Here are the results for 1000 iterations:

PAGE #   1                    DATE:    Feb 04, 2009
PROGRAM: BIN02                LIBRARY: INSIDE

examine in line         1          14
usr1028                72         686

The first column (1 versus 72) shows the elapsed times (*TIMD) and the second column (14 versus 686) shows the CPU times (*CPU-TIME).
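For anyone who wants to reproduce the comparison, here is a rough, untested sketch of how such a measurement could be set up with SETTIME, *TIMD and *CPU-TIME. The loop body and all field names are placeholders of mine, not the actual benchmark program.

DEFINE DATA LOCAL
1 #CPU-START (I4)
1 #CPU-USED  (I4)
1 #I         (I4)
END-DEFINE
*
T1. SETTIME                          /* reference point for *TIMD
#CPU-START := *CPU-TIME
*
FOR #I = 1 TO 1000
  /* body under test: the EXAMINE lookup or CALLNAT 'USR1028N' ...
  IGNORE
END-FOR
*
#CPU-USED := *CPU-TIME - #CPU-START
WRITE 'elapsed (*TIMD) ...:' *TIMD(T1.)
    / 'CPU (*CPU-TIME) ...:' #CPU-USED
END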

The ratios are VERY significant. Having played with such comparisons quite a bit, my guess is that the time differences are not due solely to the CALLNAT overhead, but also to the code inside the subprogram (perhaps usr1028 does a progressive divide).

So, if this will be done a lot, you might want to write your own code. If it is a one-time thing (maybe for some debugging of one system), you might want to simply use usr1028.
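For illustration, here is roughly what such inline code might look like for a single byte, using a progressive divide. This is an untested sketch with my own field names, not the actual logic of USR1028N; it relies on B1-B4 fields being usable in arithmetic like numeric fields.

DEFINE DATA LOCAL
1 #BYTE (B1)               /* the byte to be converted
1 #WORK (I4)
1 #QUOT (I4)
1 #REM  (I4)
1 #BITS (A8)               /* resulting bit pattern, e.g. '00001010'
1 #I    (I4)
END-DEFINE
*
#BYTE := H'0A'             /* the value from the original question
#WORK := #BYTE             /* B1-B4 fields can take part in arithmetic
MOVE ALL '0' TO #BITS
*
FOR #I = 8 TO 1 STEP -1    /* fill the bit string from right to left
  DIVIDE 2 INTO #WORK GIVING #QUOT REMAINDER #REM
  IF #REM = 1
    MOVE '1' TO SUBSTRING(#BITS,#I,1)
  END-IF
  #WORK := #QUOT
END-FOR
*
WRITE 'byte' #BYTE 'as bits:' #BITS
END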

steve

You can convert a binary field to a numeric or alphanumeric field using REDEFINE.


DEFINE DATA
LOCAL      
1 VAR-A (A10)    INIT <'1977400001'>
1 REDEFINE VAR-A                    
  2 VAR-B (B10)                     
END-DEFINE                          
WRITE VAR-B
END   

Page 1 09-07-31 10:46:25

F1F9F7F7F4F0F0F0F0F1

REDEFINE is one of the most dangerous, and most abused facilities in Natural.

This is especially true for B (binary).

Binary variables of length 1-4 are “basically” treated as numeric in Natural, whereas lengths greater than four are basically treated as alpha. Here is a minor modification of your program:

DEFINE DATA
LOCAL
1 VAR-A (N10) INIT <1234567890>
1 REDEFINE VAR-A
  2 VAR-B (B4)
END-DEFINE
*
WRITE 5T 'BEFORE COMPUTE: ' VAR-B
COMPUTE VAR-B = 1234
WRITE 5T 'AFTER COMPUTE: ' VAR-B
END

PAGE #   1                    DATE:    09-07-31
PROGRAM: BIN01X               LIBRARY: IN-ARCH

BEFORE COMPUTE:  31323334
AFTER   COMPUTE:  000004D2

I am on my PC, so the hex values before the COMPUTE are a bit different from what you would see on the mainframe (ASCII rather than EBCDIC). Note the difference, though, between before and after.

Now try changing the redefinition to 2 VAR-B (B5). The program will not even compile: the COMPUTE is invalid, since VAR-B is no longer treated as numeric.
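For reference, this is the failing variant being described; it is shown only to illustrate the point and is not expected to compile.

DEFINE DATA
LOCAL
1 VAR-A (N10) INIT <1234567890>
1 REDEFINE VAR-A
  2 VAR-B (B5)             /* longer than 4 bytes: handled like an alpha field
END-DEFINE
*
COMPUTE VAR-B = 1234       /* rejected by the compiler: VAR-B is not numeric here
END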

This can be really dangerous unless the programmer knows what they are doing.

steve

I read about that in the documentation. Binary variables of length 1-4 look like unsigned big-endian integers to me, but you can't use them with all arithmetic statements.
Is there any reason for this special behaviour? Is it a “legacy” from an older Natural version?