Thinking decimal about binary things


In my struggle to comprehend Y2K, I tried to become the machine: binary representation, "and" gates, "or" gates, and the whole bit. I quickly realized that at the level of the machine the 2K transition looks different. There are no two-digit century fields. The simplest is a 7-bit representation, which is sufficient through decimal 127; decimal 128 requires the 8th bit. So decimal 99 is 01100011 and decimal 100 is 01100100. The pattern is similar for the four-digit 1999 to 2000. This raised the question: how does the binary field become 00000000? It would have to be an input function or a deliberate test, e.g., IF cent > 01100011 THEN set cent = 00000000. The input function would be either operator entry (a person types in 00) or a real-time clock (RTC). The first is an obvious possibility, but it would seem an easily corrected one. The RTC input is more confusing. I went to the Dallas Semiconductor article on RTCs, and they speak of two-digit century fields, but this is a decimal format. What is actually there at the binary level? Do they have a 4-bit tens field and a 4-bit ones field, so that 99 is 1001-1001 and on rollover becomes 0000-0000, or is it a single 7-bit field? The only reason I can see for the two 4-bit fields would be in preparation for display or print functions, when things are presented in a decimal format.

So my question is: how does the century field get set to 00000000 in the machine, simply because it is an extra-digit event in decimal? Aside from the operator input of 00, why would the >99 test be made? What would be the point of deliberately making the binary mimic the decimal? What, in binary, does one really find when getting century data from the RTC?
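To make the two alternatives concrete, here is how I picture them in C. This is only a sketch of my own assumptions, not the register layout of any particular chip:

    #include <stdio.h>
    #include <stdint.h>

    /* Two ways an RTC might hold the year value 99.
       (Illustrative only -- not any specific part's layout.) */
    int main(void) {
        uint8_t binary_year = 99;    /* one 7/8-bit field: 0110 0011 */
        uint8_t bcd_year    = 0x99;  /* two 4-bit decimal digits: 1001 1001 */

        /* BCD rollover needs a decimal-style carry in the chip's logic: */
        uint8_t tens = bcd_year >> 4, ones = bcd_year & 0x0F;
        if (++ones > 9) { ones = 0; if (++tens > 9) tens = 0; }
        uint8_t bcd_next = (uint8_t)((tens << 4) | ones);

        printf("binary: 99 + 1 = %u\n", binary_year + 1u);  /* 100 */
        printf("BCD:    0x99 -> 0x%02X\n", bcd_next);        /* 0x00 */
        return 0;
    }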

The problem returns at the output, but only for purposes of display or print. Output to disk and time/date stamps remain binary in format. I can see how allowing only two display/print places could create a problem, but it seems almost incidental: all the data would be right, it just could not be presented in a two-place decimal display/print format. I get the strong impression that people are thinking decimal about a binary world, and because 99 to 100 is a special event in decimal representation, it is projected as equally special in the binary representation. The global positioning problem is a good example of a special binary event. The week 1023 to 1024 transition requires an extra binary digit and sets everything to 0's; 1023 to 1024 in binary is the equivalent of 99 to 100 in decimal. The time to worry in binary is, in decimal terms, 127 to 128 if 7 bits are used, or 255 to 256 if 8 bits are used.
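The GPS case can even be written down exactly: the week number travels in a 10-bit field, so the arithmetic wraps modulo 1024 whether or not anybody tests for it. A minimal sketch (the mask is the only assumption):

    #include <stdio.h>

    /* GPS carries its week number in a 10-bit field, so incrementing
       is inherently modulo 1024: week 1023 + 1 wraps to week 0. */
    int main(void) {
        unsigned week = 1023;
        unsigned next = (week + 1) & 0x3FF;  /* keep only the low 10 bits */
        printf("week %u -> week %u\n", week, next);  /* 1023 -> 0 */
        return 0;
    }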

-- Jerry Pudelko (gpudelko@oc.ctc.edu), March 08, 1999

Answers

OOOOOOOH!!! I love it when you talk dirty!

: )

-- Mrs. Pa Kettle (MaKettle@juno.com), March 08, 1999.


Most applications do not store the date in binary format. They store it as a character representation ('99') or, in mainframe terms, as 'zoned decimal'. If a system is using binary representations, Y2K is probably not a problem, but some other rollover date is (Unix systems will have a problem in 2038). This is what the FAA discovered on some of its old computers.
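For anyone who wants to see the Unix case, here's a small sketch, assuming the classic 32-bit signed time_t (seconds since January 1, 1970):

    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    int main(void) {
        int32_t last = INT32_MAX;              /* 2147483647 seconds */
        time_t  t    = (time_t)last;
        printf("last 32-bit second: %s", ctime(&t));  /* ~Jan 19, 2038 */

        /* Adding one wraps the 32-bit value negative... */
        int32_t wrapped = (int32_t)((uint32_t)last + 1u);
        t = (time_t)wrapped;
        char *s = ctime(&t);  /* may be NULL where negative times are rejected */
        printf("one second later:   %s", s ? s : "(unrepresentable)\n");
        return 0;
    }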

-- fran (fprevas@ccdonline.com), March 08, 1999.

Jerry,

Much numeric data is stored in character (ASCII or EBCDIC) format. E.g., "99" = character "9", followed in the next byte by character "9". Sometimes arithmetic or comparisons are done directly on this character format, and "00" (or "100") represents a special case. Sometimes the numeric data is converted to binary for internal processing, so that binary 01100011 is compared to binary 00000000.
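To make that concrete, here is a tiny sketch of the classic two-character year (ASCII assumed, names invented):

    #include <stdio.h>

    /* The classic two-character year field. The implied "19" prefix
       is the hidden Y2k assumption. */
    int main(void) {
        char yy[2] = { '9', '9' };                       /* the stored bytes */
        int year   = (yy[0] - '0') * 10 + (yy[1] - '0'); /* char -> binary: 99 */

        int birth = 65;                    /* a birth year stored as "65" */
        printf("age in 19%02d: %d\n", year, year - birth);  /* 34 -- fine */

        year = 0;                  /* after rollover the bytes read "00" */
        printf("age in 19%02d: %d\n", year, year - birth);  /* -65 -- broken */
        return 0;
    }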

There are many different ways in which the value 0 or 100 might not produce the same program action as 98 or 99 did, leading to Y2k problems. Sometimes a program doesn't properly handle a zero value: the programmer assumed the value was always nonzero and positive, overlooking that zero would take a different branch. Sometimes a program fails to provide the extra space needed to store the character form of a three-digit value like 100, after working just fine for decades when every value fit in two decimal digits.
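The second case looks like this in miniature -- a deliberately broken sketch, with the overflowing call left commented out:

    #include <stdio.h>

    /* A buffer sized for two decimal digits plus the terminator. */
    int main(void) {
        char field[3];
        int  year = 99;
        sprintf(field, "%d", year);   /* "99\0" fits -- and did for decades */
        printf("%s\n", field);

        year = 100;
        /* sprintf(field, "%d", year);   "100\0" is 4 bytes: it overruns
           field[] and corrupts whatever sits next to it in memory. */
        return 0;
    }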

The other veterans and I can construct more specific examples illustrating exactly how Y2k problems arise at the machine level if you wish.

-- No Spam Please (No_Spam_Please@anon_ymous.com), March 08, 1999.


Jerry,

Let me add a few reasons why dates (and other data fields) are often stored as character values rather than binary (a small sketch follows the list):

1) Independence from differences in number representation among differing brands of computers.

2) Human-readable on printouts without a conversion chart or calculation.

3) That's the way the input arrives when the user presses the keys -- as character values -- so don't convert it until necessary.

4) The other benefits of storage as character values outweigh the minuscule cost or speed savings from compressing to binary, now that storage media and transmission lines are rapidly falling in cost and steadily rising in speed.
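A side-by-side sketch of the two choices (both record layouts invented here for illustration):

    #include <stdio.h>
    #include <string.h>

    struct char_record   { char date[6]; };  /* "YYMMDD": readable in any dump */
    struct binary_record { unsigned short days_since_1900; }; /* compact, opaque */

    int main(void) {
        struct char_record   c;
        struct binary_record b = { 36226 };  /* an arbitrary day count */

        memcpy(c.date, "990308", 6);
        printf("character: %.6s (readable as-is)\n", c.date);
        printf("binary:    %u days (meaningless without conversion)\n",
               (unsigned)b.days_since_1900);
        return 0;
    }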

-- No Spam Please (No_Spam_Please@anon_ymous.com), March 08, 1999.

