First, a disclaimer. I like USRs. What I like most about them is that they provide “protection” against modifications to Natural structures, encoding, etc. I know that the latest USR for a particular function will reflect the current version of Natural.
However, I find that USRs that do not pertain to Natural structures can often be coded as inline statements, which are much more efficient than a CALLNAT to a USR.
All that said, I would ask the original poster why they need to look at a bit map in the first place.
Now, to some code. It is quite simple to convert a byte to a bit map, as shown below:
DEFINE DATA LOCAL
1 #B (A1/1:256) INIT <H'00',H'01',H'02',H'03'> /* ...etc.
1 #A (A8/1:256) INIT <'00000000','00000001','00000010','00000011'> /* ...etc.
1 #INDEX (I4)
END-DEFINE
*
INCLUDE AATITLER
INCLUDE AASETC
*
EXAMINE #B(*) FOR FULL H'03' GIVING INDEX #INDEX
*
WRITE 5T #INDEX // 5T #B (#INDEX) (EM=HHHHHHHH) // 5T #A (#INDEX)
END
The output:

PAGE # 1                                   DATE: Feb 04, 2009
PROGRAM: BIN01                             LIBRARY: INSIDE

4
03
00000011
Of course, you would have to define the full range of 256 values for #A and #B. But you do that just once, and it is not executable code.
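(A side note: if keying in 256 INIT values per table sounds tedious, the #B table at least can be built in a short one-time loop, at the cost of a little executable code. This is just a sketch; #I and #N1 are illustrative names. #A could be generated the same way with a divide loop like the one sketched further down.)

DEFINE DATA LOCAL
1 #B  (A1/1:256)
1 #N1 (B1)
1 #I  (I4)
END-DEFINE
*
FOR #I = 1 TO 256
  #N1 := #I - 1      /* entry 1 holds H'00', entry 256 holds H'FF'
  #B (#I) := #N1     /* the one-byte binary moves byte-for-byte into the A1
END-FOR
END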
Which brings us to performance. I compared the time needed for the table lookup with the time needed for USR1028. Here are the results for 1000 iterations:
PAGE # 1                                   DATE: Feb 04, 2009
PROGRAM: BIN02                             LIBRARY: INSIDE

                    *TIMD   *CPU-TIME
examine in line         1          14
usr1028                72         686
The first pair of numbers (1 versus 72) is the elapsed time (*TIMD); the second pair (14 versus 686) is the CPU time (*CPU-TIME).
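BIN02 itself is not shown above, but a timing skeleton along the following lines produces that kind of report. This is just a sketch: it uses SETTIME, *TIMD and *CPU-TIME, assumes the tables are initialized as in BIN01, and omits the CALLNAT leg and the USR1028 parameter list.

DEFINE DATA LOCAL
1 #B (A1/1:256)      /* byte-value table, initialized as in BIN01
1 #INDEX     (I4)
1 #I         (I4)
1 #CPU-START (I4)
1 #CPU-USED  (I4)
END-DEFINE
*
T1. SETTIME                     /* reference point for *TIMD
#CPU-START := *CPU-TIME
*
FOR #I = 1 TO 1000
  EXAMINE #B(*) FOR FULL H'03' GIVING INDEX #INDEX
END-FOR
*
#CPU-USED := *CPU-TIME - #CPU-START
WRITE 'examine in line' *TIMD(T1.) #CPU-USED
*
* Repeat the same SETTIME / loop / WRITE pattern around
* CALLNAT 'USR1028N' (with its documented parameter list)
* to get the second set of figures.
END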
The ratios are VERY significant. Having played with such comparisons quite a bit, my guess is that the time differences are not due solely to the CALLNAT overhead, but also to the code inside the subprogram (perhaps USR1028 does a progressive divide).
So, if this will be done a lot, you might want to write your own code. If it is a one-time thing (maybe for some debugging of one system), you might want to simply use USR1028.
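If you do roll your own, the progressive divide might look something like this. Again, just a sketch with illustrative names (#WORK, #HALF, #REM, #BITMAP); I am not claiming this is how USR1028 actually does it.

DEFINE DATA LOCAL
1 #BYTE   (B1)
1 #WORK   (I4)
1 #HALF   (I4)
1 #REM    (I4)
1 #BIT    (I4)
1 #BITMAP (A8)
END-DEFINE
*
#BYTE := H'2B'                /* example value, 00101011
#WORK := #BYTE                /* a B1 behaves as an unsigned 0-255 integer
*
FOR #BIT = 8 TO 1 STEP -1     /* fill the bit map right to left
  DIVIDE 2 INTO #WORK GIVING #HALF REMAINDER #REM
  IF #REM = 0
    MOVE '0' TO SUBSTRING(#BITMAP,#BIT,1)
  ELSE
    MOVE '1' TO SUBSTRING(#BITMAP,#BIT,1)
  END-IF
  #WORK := #HALF
END-FOR
*
WRITE #BITMAP                 /* 00101011
END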
steve