Paul Milne gives a math lesson.

greenspun.com : LUSENET : TimeBomb 2000 (Y2000) : One Thread

Forum: comp.software.year-2000 >> Thread: One Thousand Times The 'Normal' Rate Of Failure >> Message 1 of 3
Subject: One Thousand Times The 'Normal' Rate Of Failure
Date: 1999/11/09
Author: fedinfo

From information received from the Year 2000 Symposium series in NYC. Click on: November 8, 1999: The Y2K Endgame http://www.audiocentral.com/rshows/mir/default.html Go listen to this report, don't take my word for it, and then read below......

As far as embedded systems are concerned, ask yourself this. In a 'normal' business environment, there are failures of these systems every day. And it is being handled without seeing an infrastructure collapse. Could they handle a failure rate ten times greater? Twenty? Thirty, without seeing catastrophic economic results? How about forty or fifty times greater? What about 100 or 200 times greater?

I think that by now you realize that an embedded systems failure rate of 50 or a hundred times greater would be pretty frightening, and a failure rate of 100 or two hundred times greater would be white-knuckle panic time. Take it a bit further, as if THAT was not bad enough. What if the rate of failure was FIVE HUNDRED times greater? I guess at that level even the most naive Pollyanna, even the most maliciously recalcitrant Pollyanna, would throw in the towel and say, you know, at five hundred times the normal rate we would be enscrewed.

Well, the rate of failure will actually be ONE THOUSAND times greater than normal. The pollyannas have contented themselves with saying that they thought the rate of failure, as measured by the NUMBER OF AFFECTED SYSTEMS, would be about 3-4% of embedded systems, and THEN that was downgraded to about 1% of embedded systems. And they sighed a breath of relief that the problem would not be so bad. But a failure rate of only ONE percent of embedded systems, measured against 'NORMAL' FAILURE RATES, would be running at 1000 times the normal rate of failure.

It is not possible for there NOT to be substantial and grievous economic results if 1000 times the normal amount of failures occur. There would be a total inability to deal with that. The cascading failures would overwhelm any attempt to remedy them. And this completely IGNORES whether the physical replacement parts even exist, OR if they could be purchased, OR if they could be transported, OR if there were competent technical personnel to do the job. It is idiotic to get a warm fuzzy feeling by saying 'WHEW!' only one percent are likely to fail, when that one percent is one thousand times the normal rate. It is still astronomical in nature.

The Federal government has STAUNCHLY maintained that all one must do is to have three days' worth of food, half a tank of gas, and some batteries. There is NO doubt whatsoever that in SOME place, SOMEwhere, there is going to be a catastrophic problem (let alone multiple, simultaneous ones). What do you think will be the response of the population after they have had the whole issue pooh-poohed for over two years? How will they react? They will immediately realize that they have been lied to. And they will panic.

What happens if there is no water and sewerage in NYC? LA, Boston, Chicago, Houston? Two weeks? Three weeks? The clear result would be rioting. The WORST thing that can happen is to tell people not to prepare and THEN they discover that they have been lied to. When they realize that they have been lied to, then it will be all over but the crying.

And Alan Dechert maintains that "There will be NO economic impact from Y2k. It is a hoax." Paul Milne "If you live within 5 miles of a 7-11, you're toast"

-- Ed (ed@lizzardranch.com), November 13, 1999

Answers

I'm sure there is a plan to handle Americans In Indignant Shock. Mr. Milne forgets that if the people were this easy to manipulate, the "Gobmint" has got their number and will most certainly handle them. I'm guessing a number of firebreaks will go into effect. "Gobmint" will swoop into the fire, give the Indignant plenty of attention, walk by their side, as it were, to alter their course, and take them to a predetermined perception or conclusion.

It is also possible people will be so busy and exhausted trying to survive day to day that they won't have the time or energy to make much of a fuss. If one is out rioting, one isn't finding water for the next day, and one isn't standing in the long line for the small bit of food on sale.

-- Paula (chowbabe@pacbell.net), November 13, 1999.


Ed --

There is another problem here, touched on only lightly. It is a point I tried to make in my post on Embedded Chips, but it kind of got 'glossed over' in the lectures on Power Companies from one of the tech types, and the back-and-forth on that topic.

This point is that failures of embedded chips are *not* common. They tend to fail due to physical things happening to them (water getting on them and shorting them out, being dropped, that sort of 'gross physical' event), and these types of events are not common. The chips tend to be protected inside a case. There are probably rare cases of controllers being burned out due to over- or under-voltage, but I would think these constitute a minuscule proportion.

What is going to happen on 1/1 and thereafter is of an entirely different order. The *SOFTWARE* in them is going to fail. No doubt there will be a higher mortality rate due to the more normal causes mentioned above, as some will suffer from power problems, if any, while others will be dropped, shorted, etc., while being removed to determine whether they are the cause of the original problem. But, by far, the greatest number will fail because the 'software' (not quite accurate, much of it is actually 'firmware') fails because of the CDC (century date change) problem.

In these cases, replacement of the part is not sufficient. The program that the part is running must be changed. For more detail on what this entails, please see the thread on Clarification on Embedded Chips below. In short, the program change will frequently require that the chip be removed, thrown away, and, *if* the part is still made, a new program created, transferred to the manufacturer as a binary bit pattern to be put in the ROM at time of manufacture, and the new part shipped. This is a long process. The 'mean-time-to-repair' (the MTTR) will typically be measured in months, like about 12-18 of them. (If the original source code still exists, the original part is still made, and the source can be modified to eliminate the CDC problem, then you are likely looking at *only* 18 weeks.)

This is a rather long time for the infrastructure to be 'out-of-service'. Particularly if the infrastructure in question is a railroad which supplies coal to the manufacturer of the required part. Or if it is the water system of NYC. And so on.

-- just another (another@engineer.com), November 13, 1999.


just another--

How are you? Could you give a guesstimate as to how many embedded chips may be affected by the software/firmware issue?

-- curious (karlacalif@aol.com), November 14, 1999.


Re. the guesstimate about embedded Y2K reliance, I'd guess that it's very low, say <1%. Unfortunately, this IS a guess, based on an anecdote I heard while working on nuke plant DMS systems. The UK nuke plant sector have for years been replacing ALL non-certified embedded systems (PLCs, etc.) regardless of whether they were date reliant or not, and there was some grumbling among the purse-string holders that this was a very poor use of resources, as the vast majority of them didn't need replacing.

But of course, most businesses have been replacing only the date-reliant systems, and the problem with these embedded systems really is that if you don't have a compliant replacement TO HAND when the tiny number of date reliant ones fail, then there's nothing you can do. Think about that. What business - or even specialist process control contractor - is going to have compliant replacements on hand? If they thought they'd need them, they would already have done the replacement!

-- Colin MacDonald (roborogerborg@yahoo.com), November 14, 1999.


In my opinion,

Y2K REALITY CHECK: Best Case Scenario "Bump In The Road" Will Be Painful For All U.S. Citizens.

From my research on the net, I have determined that embedded chips come in three flavors:

Chips that will have no problem.

Chips with problems that can be easily remediated pre or post Y2K.

Chips that will fail out-right and must be replaced or worked around.

There is a fourth flavor of chip, though not technically a "flavor" in and of itself. These are the chips that are never identified/located, are identified but incorrectly assessed/tested/evaluated, or are inaccessible or in use in such a manner that renders them untestable. Some unknown percentage of these chips will fail. And some of those that fail will result in a catastrophic failure of the system or device they control, regardless of whether it is a "mission critical" system in a nuclear power plant or a VCR.

CHIPS THAT WILL HAVE NO PROBLEM

Worldwide, there are approximately 50 billion chips in use today. Most of these chips (roughly 96.8% to 97.8%, given the estimates below) are okay and will have no problem rolling over to the year 2000. Enough said about these.

CHIPS WITH PROBLEMS THAT CAN BE EASILY REMEDIATED PRE OR POST Y2K

About 2-3% of these 50 billion chips (I use 2.5% in my calculation below) will exhibit some abnormal behavior on or about the year 2000 rollover, requiring "simple remediation by a human", such as physically re-setting the time clock of the device or system they are embedded in.

Do the math:

50,000,000,000 x .025 = 1,250,000,000

With the U.S. as the most technologically advanced country in the world, logic dictates that anywhere between 10% and 30% of these 1,250,000,000 chips are in embedded systems within the United States. (I use 20% in my calculation below.)

Do the math:

1,250,000,000 x .20 = 250,000,000

Okay, so how many of these 250 million chips (the ones that will be impacted within the United States) are in "mission critical" embedded systems of the U.S.'s 200,000 most vital physical infrastructure entities (power companies, drinking water and wastewater treatment facilities, chemical companies, oil and natural gas companies, and voice and data telecommunications companies)? Again, I use an overly conservative figure of 1% of these 250 million chips in my calculation below.

Do the math:

250,000,000 x .01 = 2,500,000

And finally, let's assume these 2,500,000 chips are distributed evenly among the 200,000 or so individual government, public and private entities whose mission critical systems comprise the vital infrastructure of the United States.

Do the math:

2,500,000 / 200,000 = 12.5

CONCLUSION: Using the very conservative figures above, the bottom line is that each of the 200,000 individual government, public and private entities comprising the vital infrastructure of the United States must 1) locate 12.5 embedded chips hiding throughout their thousands of embedded systems, and 2) make 12.5 "simple remediations". This is the easy part, unless Murphy rears his ugly head...and 1 of every 1,000 "simple remediation" tasks is botched or, worse, was never identified as needing "simple remediation" in the first place, possibly through oversight or through relying purely on vendor certification.

Do the math:

2,500,000 / 1000 = 2,500

Now we have 2,500 of these 200,000 individual government, public and private entities comprising the vital infrastructure of the United States experiencing a mission critical systems failure on or about 2000/01/01.
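
(To make the arithmetic easy to check, here is the whole chain written out as a small Python sketch. Every number in it is one of the assumed percentages above -- 2.5%, 20%, 1%, 1-in-1,000 -- not a measured figure.)

# GoldReal's chain of assumptions for the "easily remediated" chips.
CHIPS_WORLDWIDE = 50_000_000_000          # ~50 billion embedded chips in use
remediable      = CHIPS_WORLDWIDE * 0.025 # 2-3% misbehave; 2.5% assumed  = 1,250,000,000
in_us           = remediable * 0.20       # 10-30% assumed in the U.S.; 20% =  250,000,000
critical        = in_us * 0.01            # 1% assumed "mission critical"   =    2,500,000
per_entity      = critical / 200_000      # spread over 200,000 entities    =         12.5
botched         = critical / 1_000        # 1 in 1,000 botched or missed    =        2,500
per_industry    = botched / 5             # split over five industries      =          500
print(per_entity, botched, per_industry)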

Apply this to the real world, with each "critical" industry receiving an equal number of "critical systems" failures:

Power Production & Distribution = 500 "mission critical" Systems Failures

Oil & Natural Gas = 500 "mission critical" Systems Failures

Chemical Production & Storage = 500 "mission critical" Systems Failures

Water & Sewage Treatment = 500 "mission critical" Systems Failures

Telecommunications = 500 "mission critical" Systems Failures

total 2,500 "mission critical" Systems Failures

At best, each "mission critical" system failure results in a complete, albeit temporary, shut-down of that individual company/organization (because these chips CAN be easily remediated on the spot). Remember, these chips ARE embedded in "mission critical" systems.

At worst...you begin to have the domino effect and, well...it gets ugly quick.

That was the good news, now for the bad news. In addition to all the "mission critical" systems failures listed above that can be easily remediated after they fail...add the following...

...Worldwide, an additional 0.2% of these 50 billion chips will fail outright and cannot be remediated in any way. These chips (or the systems/devices they are embedded in) must be replaced or worked around prior to the Y2K rollover; otherwise the system or device they control will experience a catastrophic failure.

Do the math:

50,000,000,000 x .002 = 100,000,000

Applying the same conservative percentages as before, in these calculations, we get:

100,000,000 x .20 = 20,000,000 (In the U.S.)

20,000,000 x .01 = 200,000 (In the "mission critical" systems of the vital infrastructure of the U.S.)

200,000 / 200,000 = 1 (In each company/organization)

Now, let's say 1 out of every 1,000 (maintaining the same ratio as before) of these non-remediable embedded chips (which MUST be replaced or worked around because they WILL fail and can NOT be remediated) has a botched work-around/replacement, or is missed and never addressed.

Do the math:

200,000 / 1,000 = 200

Now we have 200 chips embedded in "mission critical" systems throughout the vital infrastructure in the U.S. that will fail on or about 2000/01/01. These chips/systems may take from 3 weeks to 3 months to design, produce and replace, and until they are replaced, the company/organization literally grinds to a halt (at best).
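
(The same funnel, applied to the 0.2% of chips assumed to fail outright -- again just a sketch of the assumptions above, nothing more.)

# GoldReal's chain of assumptions for the non-remediable chips.
outright     = 50_000_000_000 * 0.002  # 0.2% fail outright        = 100,000,000
in_us        = outright * 0.20         # 20% assumed in the U.S.   =  20,000,000
critical     = in_us * 0.01            # 1% in "mission critical"  =     200,000
missed       = critical / 1_000        # 1 in 1,000 missed/botched =         200
per_industry = missed / 5              # five industries           =          40
print(missed, per_industry)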

Again, with each industry receiving an equal number of non-remediable chips resulting in "mission critical" systems failures:

Power Production & Distribution = 40 "mission critical" Systems Failures

Oil & Natural Gas = 40 "mission critical" Systems Failures

Chemical Production & Storage = 40 "mission critical" Systems Failures

Water & Sewage Treatment = 40 "mission critical" Systems Failures

Telecommunications = 40 "mission critical" Systems Failures

total 200 "mission critical" Systems Failures

IN CONCLUSION:

Using information from various White Papers referenced on this forum, the Senate's 100 Day Report, a LITTLE common sense, and the (conservative) figures representing a "BEST CASE SCENARIO", I have concluded that:

1) There will be 540 "mission critical" systems failures in the POWER PRODUCTION AND DISTRIBUTION systems within the U.S. on or about January 1, 2000. Forty (40) of these will be catastrophic, requiring anywhere from 3 weeks to 3 months to design, build and install a new replacement system.

2) There will be 540 "mission critical" systems failures in the OIL & NATURAL GAS systems within the U.S. on or about January 1, 2000. Forty (40) of these will be catastrophic, requiring anywhere from 3 weeks to 3 months to design, build and install a new replacement system.

3) There will be 540 "mission critical" systems failures in the CHEMICAL PRODUCTION & STORAGE systems within the U.S. on or about January 1, 2000. Forty (40) of these will be catastrophic, requiring anywhere from 3 weeks to 3 months to design, build and install a new replacement system.

4) There will be 540 "mission critical" systems failures in the WATER & SEWAGE TREATMENT systems within the U.S. on or about January 1, 2000. Forty (40) of these will be catastrophic, requiring anywhere from 3 weeks to 3 months to design, build and install a new replacement system.

5) There will be 540 "mission critical" systems failures in the TELECOMMUNICATIONS systems within the U.S. on or about January 1, 2000. Forty (40) of these will be catastrophic, requiring anywhere from 3 weeks to 3 months to design, build and install a new replacement system.

I am fully aware some industries will have more failures than others. It's also true that some systems may experience multiple failures, any one of which would have caused that system to fail. Therefore, I assign a liberal error rate of + or - 10% to each industry listed above in 1-5. Again, taking the "best case scenario" of a -10% error rate, that still leaves 486 "mission critical" systems failures on or about January 1, 2000 in each of the five industries making up the vital infrastructure of the U.S.

And finally, the GRAND TOTAL of the combined "mission critical" systems failures (within the vital infrastructure of the U.S.) on or about January 1, 2000, under a BEST CASE SCENARIO, is:

2,430

Now, compare 2,430 "mission critical" system failures to the normal failure rate of "mission critical" systems in our vital infrastructure within the time span of a week or so. My utilities (electric, water, sewer and phone) work properly 99.99% of the time; therefore, I conclude it is extremely rare for my utility companies to experience a "mission critical" system failure. This is about to change.
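
(For scale, 99.99% availability works out to roughly 53 minutes of outage per utility per year -- a rough conversion, nothing more.)

# Rough conversion of the quoted 99.99% availability into annual downtime.
availability = 0.9999
downtime_minutes = (1 - availability) * 365 * 24 * 60
print(round(downtime_minutes, 1))  # about 52.6 minutes per year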

Oh yeah, I almost forgot to mention that there is also a Y2K problem with hundreds of thousands of Operating Systems, in addition to the computer software programs they run. But that is ANOTHER STORY ALTOGETHER.

Three days of preps? Try six months for the BITR Best Case Scenario, IMHO.

-- GoldReal (GoldReal@aol.com), November 14, 1999.



Goldreal, You said "At best, each "mission critical" system failure results in a complete shut-down, albeit temporary (because these chips CAN be easily remediated on the spot), of that individual company/organization. Remember, these chips ARE embedded in "mission critical" systems."

However, this is the very fallacy that makes the embedded chip issue appear so much worse than it really is. Mission critical systems can (and do) fail regularly without causing a shutdown of the process, let alone the company/organisation. Often the failure can be worked around for a short time while an alternative process is set up or the original one is repaired.

An example we had at one of our power stations recently was a software failure in a controller which operates our spillway gates. The operator using SCADA set a desired spillway flow of xxx cumecs. The gates started opening, but instead of stopping at the desired flow, they just continued opening. The operator took emergency action and stopped the gates manually, but by then they were already indicating a flow of yyyy cumecs, sufficient to flood a downstream town. He manually overrode all the control systems and closed the gates to the previous setting, then manually calculated what the correct gate setting should be for the flow he required, and manually operated the spillgate controller to achieve the desired position. Once all this had been done, the controller was still reporting a flow of yyyy cumecs, even though it could be proved the correct flow was xxx cumecs. Here is a case of a software failure in a mission critical embedded system, and the only negative effect was incorrect data being output to the SCADA.

OK, the gates had to be operated manually for a couple of days while the software was corrected, but it didn't result in a shutdown of anything.

Malcolm

-- Malcolm taylor (taylorm@es.co.nz), November 14, 1999.


GoldReal:

First, you are applying worst-case failure rates of *systems* to individual chips. Each system has anywhere from a few to many thousands of chips. So right off the bat, your analysis is incorrect by about 3 orders of magnitude. Oops, well, minor problem.

Second, you are assuming random incidence of date failures among all embedded systems. This is like saying if you have one foot in the icebox and one foot in the oven, then *on the average* you're comfortable. Major conceptual error. Oops, another minor problem.

Third, you are assuming that *nothing* has been done about those systems that do have date problems. Yet a sizeable chunk of those hundreds of billions of dollars went into finding, replacing or fixing embedded problems -- you aren't the only one in the world who was aware of such things, you know. Oops, more problems.

Correct these fundamental false assumptions *first*, THEN start playing with your calculator. Your input is pure garbage. What does this make your output?

-- Flint (flintc@mindspring.com), November 14, 1999.


Malcolm,

According to John Koskinen, 80% of companies in the U.S. have already experienced such "glitches" as you describe, albeit not in embedded systems. I contend the problem you describe is not "mission critical" in that its failure was remediable without a shutdown or loss of production.

To quote you from an article you posted some months ago, you wrote:

"A hydro turbine appears to be much simpler having only a single stage runner, and yet is considerably more efficient at recovering energy. In fact the hydro runner must be chosen even more carefully as it is often converting energy from a much heavier column of fluid.

"The inlet control mechanism, called the actuator, is operated by high pressure oil fed into hydraulic rams which are able to move the control gates from fully open to fully closed extremely rapidly. The amount of oil flowing into or out of the actuator rams is controlled by the governor, which is a device with two main roles. First, it is a speed control device. When the governor detects a change in turbine speed it will attempt to correct that change by allowing more or less fluid into the turbine. Its second function is generation control, because the total amount of power produced by the generator is closely equivalent to pressure multiplied by the weight of fluid passing through the turbine.

Hence the governor is the second control function available to power station operators in controlling the amount and quality of power produced. Nothing that I have talked about so far in this section has used any electronic controls at all, but once we start examining the governor and the way that it is controlled we can start to find electronics being used.

Early governors used rotating weights to control a pilot valve which in turn would allow more or less governor oil into the turbine actuator. But more recent governors use purely electronic controls to achieve the same result. There are electronic speed transducers on the turbine shaft to measure the speed, signal amplifiers and comparators to determine any speed drift from the load setpoint, and other components to dampen out any signal noise and adjust the speed droop characteristics of the governor. So at this stage in a modern governor we find the first opportunity for Y2K issues to occur. Some governors will have these control and data functions programmed in, but most will still be mainly mechanical or electro-mechanical in nature. Although there should be no need for any date/time function to be used, modern governors should be checked as a matter of course."

Malcolm, here is an example of a "mission critical" system failure: should a "modern governor" contain a date/time function which is not Y2K compliant, then the failure of the governor will result in a shutdown of that turbine. This is an example of the type of "mission critical" system failure I am speaking of in my lengthy post above.

You went on to write:

"However, it is the man machine interface between the governor controls, (and the excitation controls in the previous section) which are often part of SCADA or EMS, that have the greatest potential to be affected by Y2K issues."

Again, you have so eloquently pointed out yet another example of the type of embedded system failure which would result in a shutdown of the turbine, should an embedded chip in this system fail. And again, this is the type of failure I am speaking of in my lengthy post above.

If you read my lengthy post above, you will see I have allowed for "non-mission critical" embedded system failures (such as you describe) which do not result in a shutdown or loss of production.

These "non-mission critical" system failures are the other 99% or 19,800,000 of embedded chips that will fail outright in the U.S on or about 2000/01/01 and can NOT be remediated in any way. Sure, they will cause problems, but their failure will not result in the shutdown or loss of production. They can be "worked around".

Again, these type embedded systems failures are not included in my estimates of "mission critical" systems failures nor are they part of my estimates of how many "critical system" failures we will experience here in the U.S., IMHO.

-- GoldReal (GoldReal@aol.com), November 14, 1999.


Flint,

You just proved your inability to comprehend complex analysis of complex problems.

The reason you don't "Get It" is because the Y2K problem is beyond your mental ability to comprehend. I feel sorry for you, really. That's why I'm buying extra rice, so mentally challenged people like yourself don't starve to death, should Y2K be a BITR or worse.

Cheer up Flint! It could be worse, we could all be like you.

-- GoldReal (GoldReal@aol.com), November 14, 1999.


GoldReal:

First, you need to do a complex analysis. Simple linear projections based on false assumptions aren't "complex analysis" by any stretch. Defining a failure that could have wiped out a town as "non critical" because someone caught it in time is pushing self-serving circular reasoning beyond the point of absurdity. And oh yes, your assumption about my own preparations is also false. Try getting at least something right, OK?

Substituting personal attack for anything resembling addressing the issues I raised ought to give you a clue about the strength of your "reasoning". If you think about it, of course. Try it.

-- Flint (flintc@mindspring.com), November 14, 1999.



Okay Flint,

I'll "dumb it down" so you can comprehend.

My results are based on the assumption that 999 of every 1,000 "mission critical" non-compliant embedded chips will be successfully remediated/replaced prior to them failing.

-- GoldReal (GoldReal@aol.com), November 14, 1999.


GoldReal:

Maybe we have a problem with terminology. When you speak of a mission critical failure, I assume you mean a device or process that will completely disable the function of some larger (business-level?) operation, for some period of time, should it fail. For this to happen, this system must make a critical bad decision based on a calculation that uses dates. Is this correct?

If so, I feel your estimates are way too high. I expect quite a few such failures myself, but fewer than you estimate by at least a factor of 10. As Malcolm tried to say, our exposure to failures *of that magnitude* is substantially lower than your base assumptions lead you to calculate. I think you are making fairly realistic estimates of the number of instances of anomalous behavior we can expect. But you are overstating the practical impact of that behavior.

Part of the problem, as I see it (and in any area as broad and diverse as embedded systems, nobody has the big picture) lies in your assumption of randomness. Statistically, the assumption of randomness simplifies calculations, but there are several good reasons to suspect this assumption is incorrect. For one thing, remediation efforts have focused on systems that make use of the date. This is a tiny minority, and far from random. For another thing, of embedded systems that *do* use the date, focus has been placed on those systems whose failure would be most expensive (in lives as well as dollars).

As a result, you are imposing linearity onto a highly nonlinear situation. When you start with an estimate of total chips which for all anyone knows might be off by at least a factor of two, place these randomly into a number of systems which for all anyone knows might be off by an order of magnitude, and then apply linear assumptions, you end up with GIGO. We simply lack both the numbers and the vectors to apply anything close to a mathematical treatment of the problem.

But it's still a big problem, even if our limitations of knowledge permit reasonable assumptions to produce wildly different estimates. I expect embedded failures to be concentrated within a few days (for the worst of them) and to be highly newsworthy and occasionally fatal. And *very* expensive to clean up. My disagreement with you is only quantitative, not qualitative.

-- Flint (flintc@mindspring.com), November 14, 1999.


Karla --

Above this you will find GoldReal's analysis, along with Flint's rebuttal. Both are a whole lot more confident of their positions than I am.

I have heard estimates varying from 30 billion chips to 165 billion, world-wide. And I haven't any idea of where on that spectrum the actual number is. Nor have I any idea how many will fail, or what the failure mode would be.

I have read that the expected failure rate, as determined empirically in a test, I believe in a Georgia county, came to something like 3%. But I cannot remember where I saw this. (Been a lot written on it and I have read a bunch.)

Speaking only for the stuff I have written, I would guess that somewhere between 1/3 and 1/2 will likely fail. But this is only a guess. I cannot remember what sort of default cases there were in all of them, nor what all of the potential failure modes are.

The short answer to your question is that I don't know. And I don't think anybody does know. Not how many there are, what percentage of them will fail, what the failure modes are likely to be, or how many of them must be replaced with an entirely new system because the original source code no longer exists, the parts have no pin-compatible replacement, the compilers no longer exist, or the internals of the chip are invalid. That being the case, I cannot give you any satisfactory answer.

This is different from knowing *how* the things work. This is a matter of the logistics of the situation and it is, as far as I can determine, an unknown.

-- just another (another@engineer.com), November 14, 1999.


Flint,

"Part of the problem, as I see it (and in any area as broad and diverse as embedded systems, nobody has the big picture) lies in your assumption of randomness. Statistically, the assumption of randomness simplifies calculations, but there are several good reasons to suspect this assumption is incorrect. For one thing, remediation efforts have focused on systems that make use of the date. This is a tiny minority, and far from random. For another thing, of embedded systems that *do* use the date, focus has been placed on those systems whose failure would be most expensive (in lives as well as dollars)."

Applying "randomness" to the human factor is to point out the "known" unknown. We know humans are not perfect, therefore we can say with 100% accuracy that anything humans are involved with, can and will go wrong. We also know (from experience) at what rate they will go wrong. However, Y2K is a unique experience, therefore we don't have any hard data to support our conclusions. This is why I qualified everything I wrote.

As for your contention that because we "focused" on the mission critical systems everything is now fixed, might I remind you about:

The Challenger explosion. Chernobyl. Three Mile Island. Apollo 13. And more recently, the Mars mission that burned up due to miscalculations.

I'm quite sure these people were "focused with tunnel-vision intensity" on their tasks, yet they still made fatal errors in "mission critical" systems. It's the human factor. You are attempting to discount that it exists or has any effect on the end result.

You are arguing that:

1) every one of the hundreds of thousands of "mission critical" embedded systems in the vital infrastructure of the U.S. was inventoried without a miss. Not one miss.

2) every one of these inventoried chips/systems was perfectly tested, resulting in a perfect list of every faulty chip/system.

3) every one of these faulty chips/systems on every perfect list was perfectly remediated or replaced and are now Y2K compliant.

4) we know everything there is to know about how every chip will react when it rolls over to Y2K.

My question to you is: Where is the "known" human factor? You haven't calculated it in!

We have thousands of highly competent engineers working in a unique environment (never had this Y2K problem before); under a stressful, non-negotiable deadline; with (sometimes) sketchy, partial information from vendors, incomplete embedded systems inventory records, outdated schematics, unrealistic test environments, etc., and you end up with less than ideal circumstances in which errors will be made. We expect errors to be made. It is a known. At what rate will errors be made? That is subjective. I say 1 out of 1,000 and call that conservative. You say 1 out of 10,000 and I call that unrealistic.

Flint, I can't hope you are correct because you are not, given the known variables I have pointed out. I may not be correct either but if I have erred, it is on the side of caution.

-- GoldReal (GoldReal@aol.com), November 15, 1999.


GoldReal,

The embedded system that I described, a spill gate controller, is indeed a mission critical system. However the software failure that we experienced did not disable the system, it only resulted in a bad output. We were still able to operate the controller manually. To say that because we were able to manage a work around means that it is not a mission critical system is a complete misrepresentation of the facts.

You then mention a post that I made as part of the "Generation and Distribution 101" thread in which I pointed out the potential for Y2K issues to occur in some modern governors. However, nowhere did I say that a Y2K failure would cause the turbine to shut down. The effects of a Y2K failure may possibly result in a shutdown (I don't personally know of any cases where this would occur), however that is not the most likely consequence. What is likely to happen is that for a short period of time, the speed droop and response characteristics may be unreliable.

Again you go on to read into my words a meaning that is not there when you say that a failure of the SCADA or EMS would result in a turbine shutdown. This is not the case. Should our SCADA fail (as it has done regularly over the past few years) then all turbines and generators just continue at their last known good setpoint. This allows plenty of time to select the station from "remote" to "local" control, and to continue running everything manually.

I will reiterate that the failure of an embedded system does not necessarily result in a complete shutdown of its process. It may do so, but that is not a certainty. In the majority of cases the result will be a bad output which higher-level systems will recognise as being off normal and signal as a fault. The underlying process will usually continue without interruption.

Malcolm

-- Malcolm taylor (taylorm@es.co.nz), November 15, 1999.



Malcolm,

You stated:

"The embedded system that I described, a spill gate controller, is indeed a mission critical system. However the software failure that we experienced did not disable the system, it only resulted in a bad output. We were still able to operate the controller manually. To say that because we were able to manage a work around means that it is not a mission critical system is a complete misrepresentation of the facts."

My response:

My understanding of a "critical mission embedded system" is an embedded system that must function properly (as designed) in order for the mission to be successful. Should the embedded system fail, the mission would fail.

My understanding of a "non-mission critical embedded system" is one that if it should fail, it could either be ignored, easily remediated on the spot, or worked around and still complete the mission successfully without shut down or loss of production.

Malcolm, with all due respect, you are describing a non-mission critical embedded controller on a mission critical "spill gate". That is distinctly different from an embedded system which is critical to the successful operation of a device. Your spill gate controller has a manual backup workaround; therefore it is not "mission critical" that it function properly 100% of the time.

-- GoldReal (GoldReal@aol.com), November 15, 1999.


GoldReal:

Most of the failures you list involved mistakes people made as large contributing factors. I won't deny that will happen.

The good news is, your definition of "mission critical" is extremely narrow. According to your definition, no system is mission critical if anything whatsoever can be done to work around the problem in such a way as to continue the mission, even if the performance of the system is degraded. You should recognize that your definition is far more strict than any other. By your definition, from all I have read the power generation and distribution system contains NO mission critical devices at all. Just some "mission-useful" devices.

Say a fire alarm system fails, for example. This is defined as a mission critical system according to accepted definitions. But according to your definition, this system isn't mission critical at all, since it's irrelevant if there's no fire, and doesn't even count if there IS a fire but it gets noticed and extinguished in time. So your definition eliminates nearly *every one* of the systems considered mission critical by remediators and organizations. Your calculations don't take your narrow definition properly into account.

-- Flint (flintc@mindspring.com), November 15, 1999.


GoldReal missed Flint's point above.

GoldReal uses estimates of the number of embedded microcontrollers, then applies failure rates of microprocessors.

From GartnerGroup:

http://www.gartnerweb.com/public/static/y2k/00081966.html#0018

# of Microcontrollers: tens of billions
% of Microcontrollers with Y2K problems: 1 in 100,000, or 0.001%

# of Microprocessors: tens of millions
% of Microprocessors with Y2K problems: 0.25% to 7%

Disregarding the other problems with this analysis, just plug in .001% in place of the original 2-3%, and see the results.
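
(To make the substitution concrete: here is GoldReal's own chain rerun with the Gartner microcontroller rate in place of the 2-3%. The downstream percentages are GoldReal's assumptions; only the 1-in-100,000 figure comes from Gartner. A sketch, not a prediction.)

# GoldReal's funnel with the Gartner microcontroller problem rate plugged in.
chips    = 50_000_000_000
affected = chips * (1 / 100_000)   # 1 in 100,000 per Gartner       = 500,000 worldwide
in_us    = affected * 0.20         # GoldReal's 20% U.S. share      = 100,000
critical = in_us * 0.01            # GoldReal's 1% mission critical =   1,000
botched  = critical / 1_000        # GoldReal's 1-in-1,000 botches  =       1
print(affected, in_us, critical, botched)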

-- Hoffmeister (hoff_meister@my-deja.com), November 15, 1999.


Hoffy's numbers are as ludicrous as GoldReal's. Go back, read "just another" and keep buying beans. And hazmat protection.

-- A Calculating Fool (Just@Doing.The.Math), November 15, 1999.

Yes, argument by declaration is so much easier than providing documentation.

FYI, the info from Gartner:

5.0     Embedded-System Failure Rates

There are distinct and separate year 2000 problems for information systems and embedded systems. Information systems' problems affected the vast majority of business applications and required that significant percentages of code be remediated or replaced. It is easy to understand why information systems process dates, and therefore why they might have problems. Furthermore, IS problems have already started: they will happen over an extended period of time.

The contrast with embedded systems is marked. Year 2000 problems only affect a small percentage of embedded systems. Many people find it difficult to understand why embedded systems process dates, and therefore why they can be vulnerable. Also, embedded systems' problems have not started occurring in any statistically significant numbers: since embedded systems are largely "real-time" systems, those that do have problems are most likely to experience them at or around midnight on 31 December 1999.

In nearly every enterprise, year 2000 awareness and action start with information systems. This tends to lead to a position where the business believes that the IS department is responsible for year 2000 in general. However, IS staff have very little previous experience of real-time control systems, and, where IS staff have been given responsibility for running embedded-system activities, the projects almost inevitably stall. To emphasize that IS staff are unlikely to be able to make a substantial contribution to the embedded-system project, GartnerGroup defines an embedded system as "any electronic system not acquired with the IS budget."

5.1     Real-Time Clocks

Although there are many different types of electronic devices that can be considered embedded systems, one common feature links all the embedded systems that suffer from year 2000 problems: they must have access to a persistent source of date information. This is almost universally supplied by a real-time clock (RTC). An RTC is a device that uses a battery to oscillate a crystal and then counts the oscillations to maintain time and date. An embedded system may have its own RTC, or it may have access to date information by virtue of being network connected to another device that itself has an RTC. Any device that does not have an RTC and is not connected to another device is incapable of suffering a year 2000 failure.

Most RTCs provide a two-digit year. This is in itself not a problem. The function of the RTC is to supply date and time information, but in order for useful work to be done the information must be interpreted by a program. A program may read a two-digit year and quite correctly interpret "00" as "2000." Equally, a program reading a four-digit year may choose only to access the last two digits and misinterpret "00" as "1900." The key point to note is that there is nothing inherently wrong with RTCs that only provide two-digit year data.

5.2     Microcontrollers

The most numerous ES devices are microcontrollers. The particular characteristic of these devices is that they are not programmable: the program is burnt onto the chip at the point of manufacture. While there are billions of these devices in existence, they are very simple devices and are generally not capable of processing complex data like date and time. When people say that there are "chips" in domestic appliances like coffee machines, toasters and irons, what they actually mean is that there are microcontrollers in some of these machines. They are not at risk of year 2000 failures. Based on information received from many clients who have undertaken extensive research in this area, we believe that, at the century boundary, free-standing microcontrollers will experience a year 2000 failure rate of less than one in 100,000 (0.8 probability).

5.3     Microprocessors

Whereas microcontrollers are pre-programmed devices, microprocessors are considerably more complex. These are effectively "computers on a chip": they provide the ability to execute instructions contained in a program that comes from somewhere else. Therefore, microprocessors are neither compliant nor noncompliant: they are passive devices that need a program in order to become active. The typical configuration for a microprocessor is as the heart of a programmable logic controller (PLC). The program the microprocessor will execute will typically be found on a co-mounted chip such as a programmable read-only memory (PROM). It is the program that must be assayed for year 2000 compliance, not the microprocessor.

A PLC with no RTC and no connection to any other device with an RTC cannot generate a date from thin air and should be considered to have the same potential for year 2000 problems as a microcontroller: one in 100,000.

A PLC with no RTC but that is connected to another device (typically a PC) that does have an RTC and that can therefore theoretically pass date information to the PLC in a network message is slightly vulnerable to year 2000 anomalous processing. Information garnered from many clients suggests that, although it is unusual, some such devices can have problems. The numbers are small: through 2001, fewer than 0.25 percent of microprocessors not co-mounted with RTCs will demonstrate year 2000 anomalous processing (0.8 probability).

We use the term "anomalous processing" advisedly. It simply means that some function or process that should be supported by the device will not be supported in the expected manner. This is very different from "fail." A great many problems with embedded systems are cosmetic or minor in nature. For example, a PLC may be connected to a pressure sensor, and one of its functions may be to open a valve if the pressure reaches a certain threshold. A secondary function could be to write an audit record of the event, where the audit record has date and time as part of the information. If such a PLC has a year 2000 problem, it may well still open the valve under the correct conditions but write the date information in an incorrect format. The device is noncompliant, but it is questionable whether it has "failed." To make such a judgment, it would be necessary to discover the ramifications of the incorrect date format in the audit record.

The question inevitably arises: Why would a microprocessor process dates? The most common date-processing function in real-time control systems is "interval timing" — that is, calculating the interval in time between two events. For instance, a train goes through a set of points at Time A. Another train goes through a set of points at Time B. The PLC has to make a decision based on the time interval between the two events — e.g., if less than 20 minutes, perform Action X; otherwise, perform Action Y. There are many ways of programming this function. Where an RTC is available, the time of Event A could be captured and stored in a register, so that the time of Event B can be captured and the calculation made. Because nearly all RTCs support date as well as time, the programmer may store date and time in the registers.

Why is this of particular interest when considering year 2000 problems in PLCs? Because GartnerGroup has identified that many of the year 2000 problems in interval-timing algorithms can only ever occur if the first event (Event A) occurs in "99" and the second event (Event B) occurs in "00." We call this type of problem "transient noncompliance," because, although the PLC program may be noncompliant, the noncompliance can only happen once. In such cases, if the system is inactive at midnight — i.e., there is no Event A with a "99" date waiting for an Event B — the noncompliance will not be activated and the algorithm will function satisfactorily for another 99 years. Transient noncompliance is the most common form of noncompliance in microprocessor devices: at least 7 percent of microprocessors co-mounted with RTCs will demonstrate transient year 2000 anomalous processing at the century boundary (0.7 probability).
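
(As an illustration of the interval-timing pattern just described -- a contrived example, not code from the GartnerGroup note or from any actual PLC -- here is how a two-digit-year interval check goes wrong only when Event A falls in "99" and Event B in "00".)

# Contrived sketch of "transient noncompliance" in an interval-timing routine.
# The controller stores a two-digit year plus minutes-since-start-of-year for
# each event and computes the gap between them.

MINUTES_PER_YEAR = 525_600

def interval_minutes(yy_a, min_a, yy_b, min_b):
    # Naive arithmetic that treats the two-digit year as if it ordered time.
    return (yy_b - yy_a) * MINUTES_PER_YEAR + (min_b - min_a)

# Two trains 20 minutes apart, both inside 1999: the interval is correct.
print(interval_minutes(99, 1_000, 99, 1_020))     # 20

# The same 20-minute gap straddling midnight, 31 Dec 1999 to 1 Jan 2000:
# "00" - "99" = -99 years, so the "less than 20 minutes?" decision is made
# on a wildly wrong number. After rollover the routine works again.
print(interval_minutes(99, 525_590, 0, 10))       # -52,559,980 instead of 20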

It is important to note that there are many other miscellaneous reasons why microprocessors can suffer year 2000 problems. Some of these problems will be persistent. However, our research shows that, through 2001, fewer than 2 percent of microprocessors co-mounted with RTCs will demonstrate persistent year 2000 anomalous processing (0.8 probability).

5.4     Large-Scale Embedded Systems

While microcontrollers and microprocessors correspond to the conventional view of embedded systems as "chips," large-scale embedded systems (LSESs) generally look like much more traditional computers. LSESs are typically PCs or other dedicated computers with traditional configurations involving screens, keyboards, processors and disks. The family of LSESs incorporates supervisory control and data acquisition (SCADA) systems on the factory floor, distributed control systems (DCSs) at the heart of process control, and building management systems (BMSs) controlling heating, ventilation and air conditioning, lighting and security access systems in commercial property.

These systems are typically the hub of a network of lower-level devices, such as PLCs. Since they are based on conventional information systems architecture, with an operating system loaded from disk, with multiple programs that can also be loaded from disk, and complex data stores on disk, they have considerably greater complexity than the two other main families of embedded systems, microcontrollers and microprocessors. All LSESs are date sensitive. Our research shows that, through 2001, at least 35 percent of LSESs will demonstrate anomalous date processing (0.8 probability).

5.5     Embedded-System Comparative Failure Rates

The decomposition of the embedded-systems problem into the three families — microcontrollers, microprocessors and LSESs — shows the futility of attempting to provide a percentage failure rate for embedded systems as a whole. There are tens of billions of microcontrollers, but only tens of millions of microprocessors and only millions of LSESs.
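
(A rough way to see the point numerically. The device counts below are only order-of-magnitude assumptions consistent with "tens of billions", "tens of millions" and "millions"; the rates are the ones quoted in sections 5.2 through 5.4. Not part of the GartnerGroup text.)

# Order-of-magnitude sketch: anomaly counts by family, and why a single
# blended percentage is meaningless. Counts are assumptions; rates are
# the per-family figures quoted above.
families = {
    "microcontrollers":        (50_000_000_000, 1 / 100_000),
    "microprocessors w/ RTCs": (50_000_000,     0.07),   # transient, upper bound
    "large-scale systems":     (5_000_000,      0.35),
}

total_devices = sum(count for count, rate in families.values())
total_anomalies = 0.0
for name, (count, rate) in families.items():
    anomalies = count * rate
    total_anomalies += anomalies
    print(f"{name}: roughly {anomalies:,.0f} anomalous devices")

# The blended rate is swamped by the billions of benign microcontrollers,
# which is exactly why a single "embedded failure rate" says so little.
print(f"blended rate: {total_anomalies / total_devices:.4%}")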

Figure 12 illustrates some of the essential differences between the three families of devices.

[Figure 12: Differences Between Microcontrollers, Microprocessors and LSESs. Source: GartnerGroup]



-- Hoffmeister (hoff_meister@my-deja.com), November 15, 1999.

Flint,

You wrote:

"The good news is, your definition of "mission critical" is extremely narrow. According to your definition, no system is mission critical if anything whatsoever can be done to work around the problem in such a way as to continue the mission, even if the performance of the system is degraded. You should recognize that your definition is far more strict than any other. By your definition, from all I have read the power generation and distribution system contains NO mission critical devices at all. Just some "mission-useful" devices."

My Response:

Again, you are flat wrong. You are basing your definition of "mission critical" on an outdated definition. For example,

(snip) NEW YORK TIMES CO (most recent 10-Q, in a thread above this one)

Year 2000 Readiness Disclosure

The systems identified in the inventory were further categorized into five priority classifications:

o Shutdown - highest priority. If these systems (e.g., editorial systems, presses, and utilities) were to fail, the Company's ability to continue its operations would be seriously impaired. Approximately 8% of the identified systems are in this category.

o Impractical Workaround - If these systems were to fail, the available alternatives are too expensive to implement. Approximately 9%.

o Costly Workaround - If these systems were to fail, a feasible but costly alternative exists. Approximately 28%.

o Additional But Manageable Cost - If these systems fail, an alternative solution exists at a moderate cost. Approximately 22%.

o No Impact - Little if any consequence to the business if these systems fail. Approximately 33%.

(snip)

The New York Times "Gets It". They have identified 8% of their systems as being "mission critical".

To categorize the "flood gate controller" according to the New York Times' scheme, it would be listed in their third or even fourth priority category.

Flint, it seems you and Malcolm don't "Get It" because you have an erroneous self-proclaimed definition of what a "mission critical" embedded system is. According to your definition, the toilet is "mission critical".

-- GoldReal (GoldReal@aol.com), November 15, 1999.


Hoffmeister,

In response to:

"# of Microcontrollers: tens of billions % of Microcontrollers with Y2k problems: 1 in 100,000, or .001%

# of Microprocessors: tens of millions % of Microprocessors with Y2k problems: .25% to 7%"

"Wild Card" question #1 is:

What percentage of microprocessors were used in place of microcontrollers? It's a known fact that some were. They are interchangeable in that one direction, and that is the very reason why two identical devices with consecutive serial numbers MUST be individually tested: one could have a microprocessor (a computer on a chip with a built-in battery maintaining an internal date) functioning as a microcontroller (sealed inside a "black box") while the other device indeed has a microcontroller. This is one reason why our Y2K Czar is saying that just because it APPEARS a device isn't date capable doesn't mean it isn't.

Until you can answer this most important question, you are being irresponsible to apply a failure rate % to microcontrollers, IMHO. No one knows for sure how many microprocessors were used in place of microcontrollers, as far as I know, yet the statistics you cite do not factor in this MOST important variable. In fact, your statistics completely ignore it. Why is that?

"WildCard" question #2 is:

The rosy statistics you cite completely ignore the known "human factor". Even if you say you know how many chips will fail (a number you have obviously underestimated, IMHO), your statistics fail to account for how many of these faulty chips will be overlooked, misidentified, improperly tested, improperly remediated, etc., by us imperfect humans. Yet you conclude there will be no significant problems with the vital infrastructure of the U.S., based SOLELY on your (underestimated, IMHO) chip statistics. Where is the "human factor" included in your statistical conclusions? It isn't.

-- GoldReal (GoldReal@aol.com), November 15, 1999.


GoldReal:

I can make a couple of comments here, and ask a question. The question is, are you genuinely trying to determine the scope of our exposure, or are you trying to build a worst-case analysis? I ask because the two are very different.

As for using microprocessors ($15-$500 each) in place of microcontrollers (20 cents to $2 each), I can assure you this is a fundamental design decision on the basis of cost. One of the primary goals of modern (last couple of decades) engineering is to cram as much functionality into the cheapest part on the market. Engineers are acutely aware that hardware costs are recurring, while software costs (NRE) are not. In any case, misusing a microprocessor where a microcontroller would suffice doesn't increase the failure rate, since the system requirements haven't changed.

Your NYTimes stuff sounds about right, but should be evaluated in light of the Gartner categories of microcontroller, microprocessor, and LSES. The Gartner statistics are the best available, and I see no reason to reject them simply because they don't lead to the conclusion you are trying to build.

Assuming very roughly 50 billion microcontrollers and 50 million microprocessors, with roughly 1 in 100,000 anomalies among microcontrollers and 1 in 20 anomalies among microprocessors (and recognizing that *both* are found in LSES), this comes to somewhere in the neighborhood of a million systems (50,000 microcontrollers, 1 million microprocessors) that won't work as intended in some way. To be extremely conservative, let's call this 10 million misbehaved systems -- 10 times the calculated value.

Of these 10 million systems, let's go roughly with the NYTimes estimates and say 10% of these systems are truly mission critical. We're down to a million mission critical systems with *any* anomalies. That's a bit higher than the NYTimes estimate.

Now, how many of these anomalies are of the nature to disable these critical systems completely? Investigation has shown that most embedded systems that exhibit anomalous behavior don't do so in a manner that will disable the system -- logging errors, etc. So to be very conservative, let's say it's 50-50. We're down to half a million critical systems that would have gone belly up without remediation.

Now, let's use your estimate that 90% of these systems have been successfully remediated. This brings us to an estimate of 50,000 true critical failures.

The next step concerns workarounds. It's simply not the case, no matter how many times you say otherwise, that there cannot possibly be a workaround to ANY such failure. Clearly there will be strong motivation to find a workaround, however ugly and temporary. Malcolm addressed this -- a failure that would wipe out a town is critical. In general, depending on the nature of the failure, people are amazingly creative in finding ways to stick pennies in fuseboxes. I'd estimate (yes, this is a SWAG) that 60% of such failures will be amenable to workarounds within a day or two. So we're down to 20,000 total failures based on worst-case estimates, and somewhat less than 2,000 such failures based on best-estimates (NOT best case, which might be 200 failures total, worldwide).
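
(For reference, the successive reductions in the last few paragraphs, written out. The starting point is the deliberately inflated 10-million figure, and every percentage is the one stated above -- a sketch of the estimate, not a measurement.)

# Flint's chain of reductions from the inflated 10 million "misbehaved systems".
misbehaved       = 10_000_000
mission_critical = misbehaved * 0.10        # ~10% truly mission critical = 1,000,000
could_disable    = mission_critical * 0.50  # 50-50 the anomaly disables  =   500,000
unremediated     = could_disable * 0.10     # 90% assumed remediated      =    50,000
no_workaround    = unremediated * 0.40      # 60% worked around quickly   =    20,000
print(mission_critical, could_disable, unremediated, no_workaround)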

Now, can we tolerate this number of total failures? IMO, not comfortably. I personally expect 2,000-3,000 belly-up, drop-dead embedded failures worldwide. I expect this would have an economic impact even when considered (as I have so far) in a contextual vacuum.

Next we add in some bad shit. As you've already pointed out (indirectly), most of the significant embedded failures we've seen involved a good deal of just plain bad judgment by people. This won't change. Additionally, there is collateral damage. Fixing the proximate cause of the problem isn't the same thing as fixing the damage the problem caused. If the facility explodes, it's going to take more than a couple of days to install even an available compliant system. There's nothing to install it in anymore!

So I expect a good many newsworthy events in the immediate aftermath of the rollover. Exactly how these events will play out in the lives of the people not directly in the line of fire I can't predict.

-- Flint (flintc@mindspring.com), November 15, 1999.


Flint,

You wrote:

I can make a couple of comments here, and ask a question. The question is, are you genuinely trying to determine the scope of our exposure, or are you trying to build a worst-case analysis? I ask because the two are very different.

My Response:

I am attempting to uncover what a "best case scenario" would look like by applying ALL KNOWN variables to the estimated number of faulty chips in "mission critical" embedded systems within the vital physical infrastructure of the United States that will not be fixed prior to Y2K, for whatever reason. I personally haven't seen this information presented in this format anywhere.

All we get from TPTB are the "clinical" numbers, devoid of the ever-present human factor, which is the ultimate cause of this Y2K debacle in the first place. TPTB never learn, do they? So here they are, making the same fundamental error all over again in their sanitized (mis)calculations.

I have gone back and checked my numbers, and the percentages I used in my calculations are ROCK SOLID. The differences between microprocessors and microcontrollers are negligible when calculating across a broad spectrum as I have. What isn't applicable or relevant in one industry or entity turns out to be HIGHLY "mission critical" in another, which in turn has a direct impact on the first. Combined, they must ALL work together for the patient to remain alive and well.

Yeah, the power production facility may remain functional, but when a chemical plant upwind explodes because of an embedded system failure in a microprocessor-based system, it immediately becomes relevant to the operation of the power plant. Everyone working there dies. But that isn't going to happen, is it. Why? Because the power industry is not going to have any mission critical embedded systems failures! It's this kind of flawed "logic" that's being used TODAY. "WE" don't have any "mission critical" embedded systems of that nature, so the failure of any embedded systems of that type won't affect us. Uh, think again, stupid.

-- GoldReal (GoldReal@aol.com), November 15, 1999.


GoldReal:

You are of course perfectly free to reject actual, empirical observations in favor of your own numbers. You are free to create numbers 100 to 1000 times worse than investigation has shown. You are also free to make a worst-case definition of "mission critical" and pretend this is the definition others use. You are free to pretend that microcontrollers and microprocessors have similar failure rates despite all evidence. Finally, you are free to dub your numbers and percentages "ROCK SOLID" rather than correct blatant errors. This is all your choice, and obviously you aren't going to let any known facts stand in your way.

All I can say is, we are extremely fortunate that reality doesn't mind what you've done, one way or the other.

-- Flint (flintc@mindspring.com), November 16, 1999.


Flint, Hoffmeister, Goldreal --

In reading through the Gartner Group stuff posted above, I notice a problem. They classify 'microcontrollers' as being 'non-programmable', and having no RTC. This appears to be a rather 'radical' redefinition of the term from my experience.

My experience with microcontrollers defined them as 'computers-on-a-chip-with-on-board-I/O-control-pins'. That is, they were self-contained, with everything required right there, so there was as little need as possible for a bunch of peripheral chips, other than things like D/A and A/D chips (although some came with these, at a price), thus saving valuable board space, layout and design headaches, etc.

Examples of these with which I am familiar are: Motorola's MC68HC11 (and variants), the MC6801, -2, and -3 from the same manufacturer (although I don't believe these are made anymore), Intel's i80186 (and variants), TI's TMS9900 and TMS99000 series (and variants), and RCA's 1800 series (and variants). Each and every one of these came in *at least one* standard variation with an RTC on board, most usually included as a 'standard' feature. Check out the data books. (Although, admittedly, you may have to go back a few years for some of them.)

If we have redefined these as being 'microprocessors' rather than 'microcontrollers', then the above statistics are probably a lot more valid than I thought they were. Unfortunately, the flip side is that the remediation of the 'microprocessor' category just got a *WHOLE LOT* more complicated, and the numbers for that just went out the proverbial window. Because the 'microprocessor' problem just expanded to include a class of chip where the 'program' is an integral part of the device as delivered from the factory, and cannot be changed other than by changing the part.
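
To see how much that reclassification could matter, here is a minimal sketch in Python; the reclassified shares and the anomaly rates are purely hypothetical round numbers carried over from the estimates earlier in the thread, not Gartner figures.

    # Hypothetical sensitivity check: move a small share of the assumed
    # 50 billion "microcontrollers" into the higher-anomaly-rate
    # "microprocessor" bucket and see what happens to the count.

    MCU_POP  = 50_000_000_000
    MCU_RATE = 1 / 1_000_000
    MPU_RATE = 1 / 50

    for share in (0.0, 0.001, 0.01):    # 0%, 0.1%, 1% reclassified -- assumptions
        moved = MCU_POP * share
        anomalies = (MCU_POP - moved) * MCU_RATE + moved * MPU_RATE
        print(f"{share:>5.1%} reclassified -> {anomalies:,.0f} anomalous systems")

On these assumed numbers, reclassifying even 1% of the 'microcontroller' population produces roughly the entire 'extremely conservative' 10 million figure used above, so the definition really does drive the conclusion.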

Sounds like an 'oopsie' to me.

-- just another (another@engineer.com), November 16, 1999.


GoldReal, You wrote "Flint, it seems you and Malcolm don't "Get It" because you have an erroneous self-proclaimed definition of what a "mission critical" embedded system is. According to your definition, the toilet is "mission critical"."

I have posted our company's definition of mission critical here before; however, I shall summarise it again for your benefit. We have 4 levels of criticality due to Y2K.

Level 1: Generation would cease immediately on rollover to 01/01/00 if the system failed completely.

Level 2: Generation would not cease immediately, but would fail sometime after the rollover.

Level 3: Generation may continue, but with some loss of efficiency. Failure of business, administration or security systems may affect the company's operations.

Level 4: No immediate effects, but may have significant long term effects.

So I guess that you are right. Under our definition the toilet could be a level 4 system, depending on how long each individual can "hold on" for, and on who they wish to p*ss off.

Malcolm

-- Malcolm Taylor (taylorm@es.co.nz), November 16, 1999.
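
For what it's worth, Malcolm's four levels map naturally onto a simple triage list. Here is a minimal sketch in Python; the example systems and their level assignments are hypothetical, not taken from his plant.

    from enum import IntEnum

    class Criticality(IntEnum):
        """Malcolm's four Y2K criticality levels; lower number = more urgent."""
        LEVEL_1 = 1  # generation ceases immediately at rollover if it fails
        LEVEL_2 = 2  # generation fails some time after the rollover
        LEVEL_3 = 3  # efficiency loss, or business/admin/security impact
        LEVEL_4 = 4  # no immediate effect, possible long-term effect

    # Hypothetical inventory, purely illustrative.
    inventory = {
        "turbine governor":     Criticality.LEVEL_1,
        "fuel stock scheduler": Criticality.LEVEL_2,
        "billing system":       Criticality.LEVEL_3,
        "staff toilet cistern": Criticality.LEVEL_4,
    }

    # Remediate in order of criticality.
    for name, level in sorted(inventory.items(), key=lambda item: item[1]):
        print(f"Level {level.value}: {name}")

Sorting by level is the whole point: Level 1 items get attention first, and the toilet waits.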


just another,

wow...and I never say "wow".

"wow"...

"WOW"

-- GoldReal (GoldReal@aol.com), November 17, 1999.


Well, given the short timespans these devices tend to work with, most of our questions will be answered in 2 months. Enjoy them while they last.

-- Flint (flintc@mindspring.com), November 17, 1999.
