North Carolina Dept. of Transportation Found Massive 6% Failure Rate in Embedded Chips, Is This NORMal?

greenspun.com : LUSENET : TimeBomb 2000 (Y2000) : One Thread

At the Cary, NC, y2k summit this week, the y2k consultant for the State of NC Department of Transportation stated that their embedded chip testing revealed a failure rate of 6%. That's six percent, not point six percent.

This failure rate is much higher than any estimates I have seen by Gartner Group or others. I have previously seen estimates of global failure rates of 1.5% or lower. Later reports have frequently stated that the 1.5% estimates were too high.

IF NCDOT's experience were NORMal, then the world's 30 to 50 billion chips would experience 1.8 to 3.0 billion failures at rollover. That does not give me the warm fuzzies.
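
The arithmetic, as a quick Python sketch (assuming, surely wrongly, that a single failure rate applies uniformly to every chip on earth):

# Back-of-the-envelope: global failures if these reported rates held worldwide.
# Assumption: the rate applies uniformly to all chips (almost certainly false).
for chips_billion in (30, 50):
    for rate in (0.015, 0.03, 0.06):  # Gartner-ish, Wake Med, NCDOT figures
        print(f"{chips_billion}B chips at {rate:.1%} -> "
              f"{chips_billion * rate:.1f}B failures at rollover")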

Is there a particular reason the NCDOT would find an abNORMally high failure rate? Is there any reason to think this failure rate would exist among the entire universe of embedded systems? Did anyone else who attended the Cary meeting hear this differently?

Note: The Wake County Medical Center spokesman said they found a 3% failure rate. That's still alarming compared to what I've been hearing from Gartner.

-- Puddintame (dit@dot.com), March 25, 1999

Answers

Good work Puddin. NORM will be impressed. Where's the URL?

-- a (a@a.a), March 25, 1999.

Unfortunately there is no URL. I only have my notes from the live meeting. The meeting was videotaped for replay on the local access cable channel in Cary, so any Caryites out there could double check me on this. I might try to call NCDOT if I get a spare few minutes.

-- Puddintame (dit@dot.com), March 25, 1999.

Puddintame, according to the WRAL-TV report (at its site):

"Only about 3 percent of the devices are questionable," says Ben Steiniger with Wake Medical Center. "That's really a low number when you look at 5,000 devices."

It looks to me as if they haven't even tested them yet, that the 3 percent is an estimate. Even if 3 percent is accurate, that's a lot of devices, particularly if they're all critical, such as dialysis machines or respirators. Scary.

I'm trying to figure out what critical electronic devices DOT would have. Those computerized road warning signs come to mind, but I can't think of anything else. Traffic lights are city or county responsibilities. I doubt they're referring to sophisticated road-grading equipment or similar, since the Teer Co. seems to have a tight grip on road-building and repair.

-- Old Git (anon@spamproblems.com), March 25, 1999.


The failure rate depends on the applications. Some industries have a higher rate than 6% and some have a much lower rate.

-- Buddy (buddydc@go.com), March 25, 1999.

I called NCDOT a few minutes ago and got the typical state government "response", to wit: no clue.

I then called Mr. Bill Stice, the Cary Information Services Director who planned the meeting. He said that his notes also contained the 6% figure. He said that the lady who spoke on behalf of the NCDOT was an employee of Keane & Associates, who were consulting on y2k on behalf of NCDOT.

Isn't it a crying shame that journalism is a dead profession? It seems that a professionally trained journalist could take information like this, develop it, relate it to other aspects of the problem and maybe write a real story. I guess the spectre of mass panic, bank runs and societal breakdown just isn't sexy enough to merit anything other than note-taking. Until then it's just all of us half-wits on our own. Is it any surprise that "whacko" conclusions are being drawn? Factually, we've got a pretty clean slate to work with. Way to go, Fourth Estate!

-- Puddintame (dit@dot.com), March 25, 1999.



Well, there's a puzzle for you. One would first ask what the inventories and failure rates were for each category of device. And then one would need to look at the impact of failures for any particular device in a particular type of service.

These gross numbers are not really helpful, are they? But let's say they have a typical rate of devices needing replacement. And let's say they can afford to neglect 75% of these problems as trivial and focus on the remaining 25% - yet there are people all over the nation also trying to get these same types of systems fixed at the same time (which puts a world of hurt on the organizations that sold these systems). Still sounds kind of difficult to me.
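
To put that triage arithmetic in concrete terms (a sketch only; the inventory size is borrowed from Wake Med's figure above, and the 75/25 split is the guess just stated):

# Illustrative triage arithmetic using figures from this thread as assumptions.
inventory = 5000          # device count, borrowed from Wake Med's figure
failure_rate = 0.06       # NCDOT's reported rate
critical_fraction = 0.25  # the 25% guessed above that can't be neglected

failing = inventory * failure_rate
print(f"{failing:.0f} failing devices, {failing * critical_fraction:.0f} critical")
# -> 300 failing, 75 critical -- per organization, while everyone else is
#    ordering replacements from the same vendors at the same time.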

Someone's not going to get their important systems repaired. I just hope it's someone else and not the people I have to deal with, don't you???

-- David (ConnectingDots@Information.Net), March 25, 1999.


Old Git, the guy from Wake Medical Center did elaborate some on the types of failures. He said very few of the 3% were patient-critical. He gave as an example a defibrillator with a failed chip. The device still functioned perfectly for patient care, but the report printed by the machine would be erroneous in some way. He mentioned some device that would print out an incorrect date on Feb. 29 or something like that. I can't recall if it was the same defibrillator. For what it's worth.

-- Puddintame (dit@dot.com), March 25, 1999.

Good catch, Mr. Puddintame. While I generally buy the notion that embedded systems exposures are turning out to be less than feared (may it be so), I have these three strong caveats in my own mind:

... like Y2K generally, the quality of the info is highly suspect. Who deserves trust here? The person who cited 6% in Cary? Mebbe, but as pointed out, what is the quality of their info? Gartner Group? Mebbe, but I know from personal experience at Meta Group that research figures are generally "made up" (meaning that precisely, not as a charge of lying: they are concocted from a blend of data, assumptions and gut feel).

... Who is testing, and, taking the world as a whole, can we rely on largely anecdotal reports that are rare and scattered across widely varying industries (and, certainly, Buddy is right that the industry exposures also vary)? Even though I'm nervous, it's this testimony that has me somewhat more optimistic than I was last year. Is that smart or stupid on my part?

... The acknowledged weirdnesses of how embedded systems have rolled off the lines over the decades (identical batches with different chips, etc).

To my mind, embedded systems will remain THE wildcard until at least 1/1/2000. It is impossible, in principle, to predict the impact, even if we are hopeful. Unfortunately, if we're wrong on embedded systems, Y2K becomes TEOTWAWKI quickly and stays there for many, many years.

-- BigDog (BigDog@duffer.com), March 25, 1999.


Well, isn't this thread just a prime example of your crazed, extremist, pick-apart-anything-but-the-negative attitudes!

I can't believe how much emphasis you put on corraberation, looking up references and comparing notes. You must be a rabid cult of historians! Are you not asshamed of your level-headed, skeptical view of all information? How dare you suggest the 4th estate is biased and/ or lazy! Just because they come to different conclusions based the opinions of people who's stock prices would plummet if they said anthing but "All is exceptionaly, wonderfully well"!

Red Ermine Shame on you all. You are all crazies, spreading around the myth of reason!

-- Alison Tieman (fearzone@home.com), March 25, 1999.


"You are all crazies, spreading around the myth of reason!"

C'mon, cut us crazies some slack. We have problems too.

-- Tom Carey (tomcarey@mindspring.com), March 25, 1999.



Thanks, Puddintame.

When I attended the Oakland Y2K Around the Bay, one of the speakers was Bob Bennet, a former executive and techie from Cisco Systems. He's helping the City of Berkeley assess and organize remediation of all their systems -- s/w and embedded. (He's also testified to our State of California Y2K group, loosely spearheaded by State Senator John Vasconcellos.)

At any rate, our local electric utility, PG&E, will NOT release information on their problems and solutions; however, Bob Bennet did have meetings with them. As I recall he said they were finding about 13% of the embedded chips had problems, which is significantly higher than most people's 1-3% estimates.

Diane, not crazy ... concerned

-- Diane J. Squire (sacredspaces@yahoo.com), March 25, 1999.


Wow, Alison, what planet do you live on? Anyone who has taken even a cursory look at this forum, or done even minimal research on this problem, and comes to the conclusion that this is not a major problem, is the one who is crazy, in my humble opinion. We don't make this stuff up. Why do you suppose that for every "good news" story we have posted, there are God knows how many bad? Why don't you compare your notes with these... <:)=

Senate report - Utilities assessment

-- Sysman (y2kboard@yahoo.com), March 25, 1999.


If you can, please find out what kinds of devices these embedded failures are installed in. For the life of me, I can't figure out what kinds of DOT equipment would use embedded devices.

Until you include the NC State Patrol in the DOT, which is how NC has things organized. The State Patrol's speed radars, VASCARs, Breathalyzers and dispatch systems could account for a lot of these embedded device troubles.

If I recall correctly, there have been more than a few Breathalyzer failure stories already. I imagine that traffic radars and other clocking systems are potentially ripe for problems too.

WW

-- Wildweasel (vtmldm@epix.net), March 25, 1999.


"A rabid cult of historians!"

I love it!

I used to live in a hippie commune in the Haight-Ashbury district in San Francisco. We made a LOT of money selling computers, and donated much of it to various non-profit projects. When asked if we were a cult, we would say, "Yes, we're a cult of accounting..."

-- pshannon (pshannon@inch.com), March 25, 1999.


That would make for a clever movie trailer:

"THEY TRIED TO BURY THE PAST, BUT THEY CAME BACK WITH A VENGEANCE..."

ATTACK OF THE RABID HISTORIANS!

Guess I shouldn't have watched "Kentucky Fried Movie" last night.

-- Jenny (noSnart@GI.com), March 25, 1999.



puddintame,

one thing i've found is that the failure rate in health care systems varies wildly from one place to another. i've heard of some health care facilities with exceptionally few problems, while others have found difficulties with up to *one*third* of their devices.

Nearly One-Third Of Health Care Equipment Fails Y2K Tests In South Australia

in addition, it is very important *which* 3% fail. that 3% (or 6% or whatever) in a hospital may turn out to be very expensive equipment:

Vital Health Care Equipment Fails Y2K Tests In Australia; Suppliers Won't Respond To Government



-- Drew Parkhill/CBN News (y2k@cbn.org), March 25, 1999.


Uh, Sys -

I think Alison (typos and all) was just being facetious. Re-read her post - looks like a bit of irony there. She probably just forgot to include a "smiley" or two...

-- Mac (sneak@lurk.com), March 25, 1999.


Tom and Sysman: Don't you understand irony? Alison is agreeing with our alarmist conclusions, not attacking them.

-- cody (cody@y2ksurvive.com), March 25, 1999.

Drew has half the questions right - which devices fail? The other half is, how do these devices fail? Other good questions are, how many devices fall into the category of can't get along without and must replace? Lead times are starting to reach into next year for common, required devices with fatal failures.

And can we get along without breathalyzers? We may have to.

-- Flint (flintc@mindspring.com), March 25, 1999.


Puddintame -

Aside from the state patrol hardware someone else mentioned, Washington state also has local airports (non-international) under its jurisdiction, as well as freeway entry and lane-control message machines and lights; licensing bureaus - vehicle, driver and trucking - all under DOT. Then there is road construction (along with the bids, etc.), bridge maintenance (we have a lot of bridges in Washington), and snow removal, not to mention the large ferry system. 6%, if that is the case in Washington, is indeed a LOT to worry about. Groan - gives me a whole nuther set of night terrors to dream about!

-- (anon@please.net), March 25, 1999.


Wildweasel, I'm looking at the Raleigh phone book and I've found almost two full columns of DOT listings--but no Highway Patrol. That's listed separately under H. I suppose people don't expect to find HP under DOT. Anyway, I had completely forgotten that DOT is responsible for drivers' licenses, the DMV (with its huge collection of data used by police departments), truck safety, mapping, rail transportation matters, highway design, bridges, and a load of other stuff needing electronic devices of one sort or another. Now I understand why a 6% failure in embedded systems might be a problem.

-- Old Git (anon@spamproblems.com), March 25, 1999.

Don't forget to add in the truck weight scales on every main highway also. They come under DOT's jurisdiction too.

Yep, you all just ran out of luck - I'm back again.

Before y'all say "so what", remember that I am your worst nightmare come to life.

To the "new dudes/dudettes", "Hi, my name is Sweetolebob", and I'm really not. I am barely housebroken, and I don't play well with others. So, circle the wagons and hide all of the women. No one is safe now.

How is it that as soon as the back of my face is looking at you folks you go out and get yourselves named as "Enemas of the State", and recruit a whole new cast of actors too?

What else am I guilty of by association with you people?

Do I get to burn, loot, pillage, rape, shoot, or stab people?

Before I left we were busy doing some cannibal thing, (I think), and now I find that you folks are a danger to the entire Western world and will single handedly bring about the destruction of civilization by stockpiling food and "needful things".

I can't believe that you people are still in here trying to put reason and logic into an otherwise insane and irrational world.

I see that you have been visited by some new trolls, and re-visited by a few old trolls as well.

In general then, I don't appear to have missed much in the two weeks that I was gone.

But, I'm back now, so beware.

check 6

S.O.B.

-- sweetolebob (buffgun@hotmail.com), March 25, 1999.


Diane - 13% at PG&E!!!! - wow. This is way higher than my friend is talking about in his corner of PG&E's remediation, and I thought his was high.

Posted 20SEP98. This post has so far generated 63 replies at csy2k; you can track them using Deja News. Any argument you might have with this forwarded post, you can be _sure_, has been covered within the csy2k thread - any point needing clarification probably has too.

The last paragraph is the one with the gold. Any y2k managers should at least be aware of his view and experience on ES projects.

mb

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
http://x9.dejanews.com/getdoc.xp?AN=387752360&CONTEXT=906302275.39649320&hitnum=72

70 BBBBBBILLION Embedded Systems
Author: fedinfo
Email: fedinfo@halifax.com
Date: 1998/09/03
Forums: comp.software.year-2000

. . "What's the point in spending millions of dollars fixing your computer systems, if you don't have phones, elevators, or heat come January 1, 2000?" asks Michael Harden, president and CEO of Century Technology Services, a consulting company and vendor of Y2K remediation services in McLean, Va.

"This is a global Easter egg hunt, and you don't know how many eggs are out there so you'll never know if you've found all of them," says Brian Kishline, manager of systems engineering for Data Dimensions, a software vendor and consulting company in Bellevue, Wash.

Once you do sniff out these rotten eggs, the dubious payoff is the cumbersome process of contacting the vendor--if it's still in business--to determine the Y2K compliance status of the device (See "Caught in the Y2K time crunch? Compliance databases can help"). Then, if the device is not Y2K compliant, it's generally a matter of replacing or retiring the device altogether. Fixing the code is generally not an option, since usually you won't have access to the source code. With less than 500 days to go, time is too short for most companies to contemplate code repair for the myriad of distributed devices containing embedded systems at their premises.

The scope of the problem

It's easy to ignore embedded systems, since they are necessarily hidden from view (see "On the lookout: a 14-step methodology"). But, hard as it may be to fathom, experts say the Y2K problems inherent in embedded systems are much broader in scope--and potentially much more expensive to fix--than those of business computer systems. Like the Y2K issue in general, the embedded systems piece of the problem is fraught with uncertainty. No one can say for sure how many embedded systems are out there and how many will fail come 2000. Since it's impossible to determine the scope of the embedded systems problem, it's likewise impossible to specify how much it will cost U.S. businesses to fix the problem. From interviews with the top logic chip manufacturers, Harden estimates that approximately 5 billion of the 70 billion chips produced since 1972 are subject to Y2K problems. "The question is, how do you go through the 70 billion to find the 5 billion that will have a problem? It's the quintessential needle in a haystack," he says.

Andrew Bochman of the Aberdeen Group thinks the problem rate is much higher than Harden puts it. "From what my clients are saying, I'm looking at a trouble rate of about 20%" of all devices containing embedded systems, says Bochman, senior analyst for Year 2000 services at Aberdeen, a Boston-based IT consulting company. And Aberdeen's manufacturing clients report spending three to four times as much on their embedded systems remediation efforts compared with their computer systems. Whatever that figure might be will only be determined in hindsight, but whatever the amount is, it's a lot of money.
=================================

http://www.datamation.com/PlugIn/newissue/09y2k.html

-- Mitchell Barnes (spanda@inreach.com), March 25, 1999.




Twice Mitchell?

Yeah, that submit button 'ill getcha.

Yep, 13%. I was stunned too. Perhaps that's why PG&E can't be "ready" until next fall.

Diane

-- Diane J. Squire (sacredspaces@yahoo.com), March 25, 1999.


Mac & cody, I know (grin). <:)=

-- Sysman (y2kboard@yahoo.com), March 26, 1999.

Hi, S.O.B., good to see you back.

-- Old Git (anon@spamproblems.com), March 26, 1999.

Sorry 'bout the double post, my first one and wouldn't ya know it was a longer one.

-- Mitchell Barnes (spanda@inreach.com), March 26, 1999.

Hello,

A friend pointed me to this forum and asked me to comment. This is from my response, which I also forwarded to contacts at the Edison Electric Institute (http://www.eei.org) and the Nuclear Energy Institute (http://www.nei.org/) and some auditors at PG&E who may or may not be affiliated with PG&E's y2k process.

I guess my tendency is to agree with all of it. May I offer a little untangling?

It's relatively easy to get worldwide production figures for the chips themselves, and for which ones are real-time clocks (RTCs) or contain RTCs, as is the case with microcontrollers. Generally 10% of these 50 billion chips went or will go into computers, the rest into embedded systems. Say you have between 10 and 100 chips in the average embedded system. You are averaging over all sorts of functionality and several implementations of various technologies as you look back over some 40 years.
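
A sketch of the system count those assumptions imply (note Dean's correction further down the thread, which argues the divisor should be about 1):

# Implied number of embedded systems under the assumptions just stated:
# 50 billion chips, ~10% into computers, 10-100 chips per embedded system.
total_chips = 50e9
embedded_chips = total_chips * 0.9  # the 90% that go into embedded systems
for chips_per_system in (10, 100):
    systems = embedded_chips / chips_per_system
    print(f"{chips_per_system} chips/system -> {systems / 1e9:.2f}B systems")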

So, what are the sources of these "disagreements"?

Gartner looks at chips, not systems, and looks at the whole world. Gartner's figure of 1 in 100,000 failures for "free standing microcontrollers" is potentially misleading since it is but one point on a continuum of complexity. With a microcontroller, generally the whole system is on one chip, so the number of chips per system is 1. Microcontrollers became the rage after 1989, so there are several technologies that were already retired by the time they came along; you also get a slice in time when you select out microcontrollers.

Gartner cannot afford to release all of their findings and remain in business. In January I heard Lou Marcoccio, one of their research directors, talk about SCADA (supervisory control and data acquisition) systems. If you measure complexity in terms of the number of real-time clocks contained in the system (my ad hoc y2k figure of merit), these are clearly more complex as a class than microcontrollers. These devices contain at least one processor and a number of interfaces, sensors and actuators. Each element generally has the ability to time- and date-stamp the information it generates.

In electric power generating and distribution systems, they time-stamp down to the hundredth of a second or so, so that they can pinpoint failures to one of the sixty cycles that occur during each second. Thus, when they are doing their detective work, they are able to trace the root cause. This practice is indicative of a proactive risk-management attitude that is responsible for the U.S. having arguably the most reliable delivery of electric services in the world. Unforeseen at the time of implementation, it also allowed a way for the Y2k bug to enter the system.

PG&E looks at systems of typically 10 to 100 chips, not individual chips, and is looking at a very narrow slice of design years and technology in a specific industrial sector. (http://www.pge.com/resources/compliance/)
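
That difference in unit of analysis matters arithmetically. A sketch (my assumption of independent chip failures does the work here, and the batch weirdness noted above argues against it):

# If a system has N chips and each fails independently with probability p,
# the system fails if ANY chip does: P(system) = 1 - (1 - p)**N.
# Assumption: independence (dubious, but it shows how rates compound).
p_chip = 1 / 100_000  # Gartner's free-standing microcontroller figure
for n_chips in (1, 10, 100):
    p_system = 1 - (1 - p_chip) ** n_chips
    print(f"N = {n_chips:3d} chips -> system failure rate {p_system:.5%}")
# Even at N = 100 this is only ~0.1%, far below PG&E's reported 13% -- which
# suggests the failures live in system-level date handling, not the chips.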

Since we are talking about continua, it is useful to mention that the failures fall along a continuum as well, from "nuisance" (e.g. displays 00 to a human operator instead of 2000, but poses no consequential y2k effects) to "catastrophic". The technical term "catastrophic" means that the device enters a state where it cannot resume normal operations without outside intervention. It does not mean loss of life or property; that depends on other factors. Some people have taken the technical term "catastrophic failure" out of its proper context. This is not to say that catastrophic failures cannot cause catastrophes. They can. This is one reason for the focus on Y2k at the U.S. Chemical Safety and Hazard Investigation Board, http://www.cshib.gov/y2k/.

If there is a reset button and a handy operator, this reduces the chances of a catastrophic failure. If that reset button is in orbit or at the bottom of the Atlantic, then this may cause more severe consequences. Unfortunately, the back-up systems which are designed to improve the survivability of these remote systems are often identical to the main systems and will have the same y2k risks. In some cases, the main and back-up system will fail identically, passing control from the main to the back-up system and then back again, over and over. This is called "thrashing". When thrashing happens, whatever the embedded system is controlling will in all likelihood not be able to accept commands or perform its normal operations.
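
A toy model of that main/backup thrashing (purely illustrative; the unit names and cycle count are invented):

# Toy model of "thrashing": identical main and backup units share the same
# Y2K defect, so each faults at rollover and hands control to the other.
def faults_at_rollover(unit: str) -> bool:
    # Hypothetical: backup firmware is identical to main, so both fault.
    return True

active, standby = "main", "backup"
for _ in range(5):  # watch a few failover cycles
    if not faults_at_rollover(active):
        print(f"{active} running normally")
        break
    print(f"{active} faults -> control passes to {standby}")
    active, standby = standby, active
else:
    print("still thrashing: no commands accepted until outside intervention")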

Thanks for turning me on to this forum.

Mark Frautschi http://www.tmn.com/~frautsch/

My paper on embedded systems: http://www.tmn.com/~frautsch/y2k2.html

Y2k Week X Newsletter: http://www.tmn.com/y2k/

-- Mark Frautschi (frautsch@tmn.com), April 03, 1999.


I think you'll fit in quite well here Mark. Thanks for your input. <:)=

-- Sysman (y2kboard@yahoo.com), April 03, 1999.

Welcome, Mark Frautschi! You mean until now you had never visited this TimeBomb2000 Forum?! What a whole new weird & wonderful Y2K world you will discover! On the first page, New Questions, scroll down to "Older Messages (by category)." These are the venerable archives. New threads are immediately added, so it's not all "old."

Be sure to hit "New Answers" at the top of the New Questions page. That will shoot you to "Recent Answers," where you can see the latest hottest topics under debate. Use your "Refresh/Reload" button frequently!

Warning: We bite, fight, shout, scream, scratch, & claw, and yet this is *the* place to find the latest info on Y2K. It is also highly addictive. Have fun!

Ashton & Leska in Cascadia

xxxxxxxxx xxxxxxxxx xxxxxxxxx xxxxxxxxx xxxxxxxxx x

-- Ashton & Leska in Cascadia (allaha@earthlink.net), April 03, 1999.


Mark, I've admired (and under-used) your posted file of Y2k bookmarks for quite a while. - - - - - Folks, I recommend that you all bookmark Mark Frautschi's now-297K of Y2k bookmarks at http://www.tmn.com/~frautsch/y2k.html.

Yes. The Y2k bookmark file he has displayed is currently 297K bytes. Yes, I mean Y2k-only bookmarks.

-- No Spam Please (No_Spam_Please@anon_ymous.com), April 03, 1999.


Hi Mark F and All,

There seem to be some misconceptions about how many 'chips' there actually are.

Assuming the 70 billion figure is correct (and it's within the ballpark) -- it is a count only of CPUs or of chips with CPUs in them. That is, single-chip controllers with integrated ROM, PROM and I/O, and separate CPUs (x86, 68xx, 65xx, Z80x, etc.). Memories, RTCs and glue logic are NOT included in the count (that count would be much, much greater).

That means there are NOT 10 to 100 chips per embedded system; usually there's only one (or a few, for peripheral functions).

IOW, you can't divide the 50 billion (or 70 billion ...) by 10 or 100 to get the number of embedded systems.
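
Redoing the earlier sketch under Dean's reading (the 10% computer share is carried over from Mark's post as an assumption):

# Under Dean's reading, the 70 billion figure counts CPUs/microcontrollers,
# and a typical embedded system contains roughly ONE of them.
total_cpus = 70e9
computer_share = 0.10  # Mark's earlier assumption, reused here
embedded_systems = total_cpus * (1 - computer_share)  # ~1 CPU per system
print(f"~{embedded_systems / 1e9:.0f}B embedded systems, "
      f"not the 0.45-4.5B a 10-100 chip divisor would give")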

-- Dean -- from (almost) Duh Moines (dtmiller@nevia.net), April 03, 1999.


My thanks to the "No Spam Please" poster who corrected my statement about chips versus microprocessors and microcontrollers.

Sure enough, in July 1998, Harlan Smith had forwarded me a message with similar conclusions:

From: Marcia Stoklosa, 7/2/98 5:17 PM
To: hmanning@ti.com

This came from ICE's Status 1998. The figures are actually World Semiconductor Trade Statistics (WSTS).

Units (millions)   1991   1992   1993   1994   1995   1996   1997(est)   Total
Microcontroller    1722   1902   2221   2659   3067   3450     4167      19188
Microprocessor      136    143    167    170    212    249      286       1363
Both                                                                     20551

Harlan Smith

It is curious that there are more microcontrollers (popular since 1989) being made than microprocessors! I would like to have seen production figures going back to (1967?), when TI introduced the microprocessor and embedded systems were still built from discrete chips (Arithmetic Logic Units, ...) to make their processors.

-- Mark Frautschi (frautsch@tmn.com), April 05, 1999.


Hi Mark - good to hear directly from you.

I would strongly recommend that nobody get too hooked on the "percent failing" and "number of ... (chips, controllers, systems, processors, computers, or whatevers)" parts of this side of the Y2K problem.

For successful, low-spike, regulated power distribution, the entire system must run and regulate itself correctly - as it does now - in the post-2000 timeframe. So whether the source of the problem is thrashing (as it was in the Peach Bottom nuclear test, where two mutually non-compliant (failing) display controllers both failed as they "reset" and defaulted to each other repeatedly) or simple SCADA "record 00.00" errors, the potential for real problems comes from the action taken (and not taken) in response to the failure of the automatic system.

(As pointed out very well above) the response to a system-wide series of display and printout errors, goofs, laughable problems ("Hey, look what it says happened!") and real alarms will vary between operators and companies depending on their training, knowledge, background, and sense of "knowledge" of the response of the system. What happens when each of these occurs many thousand times in many thousand different plants: false shutdown commands (overridden); false alarms (overridden); real alarms (acknowledged, followed by shutdown); real shutdowns (not overridden); real alarms (not overridden, not acknowledged, no action taken); real alarms (followed by overridden emergency responses - thus the original sensor/processor failure is compounded by catastrophic system failure); and real alarms that are not sensed by the system at all (so no action is taken until catastrophic failure happens).

Failure could be anywhere - from the "compartment full" sensor of a warehouse's automatic retrieval system to a cement plant's natural-gas flame temperature sensor in the lime kiln. If it is automatic or is connected to any electronic device or sensor in any way, then to be certain it (the system) will properly perform next year, it must be checked for year 2000 performance. I am skeptical of a "vendor check" or "vendor certification" only - a certification not followed by an integrated "full-up" time-advanced test. The "vendor certification" is a needed first step, and may be all that can be done, but the full-up test is the only one that will detect most (and even then, not all) of the interface problems between systems.

Percents will definitely vary between industries and between applications in those industries:

EPRI has a database they've compiled from their group of member electric utilities and natural gas companies (and a few others) - their embedded systems coordinator indicated they are identifying 3% (on average) that have problems. SCADA systems also appear to average 3% in their experience.

Controllers and sensors themselves are lower, but the "reporting" programs (the computer and the program the sensors report to) appear to be much more likely to have "programmed-in" problems:

Investigating the problem at the company level indicates the same: Rosemount Instruments, for example, found no problems in its temperature and pressure sensors themselves, but needed to upgrade its control computer program to become Y2K compliant (this upgrade was not available until late September 1998) - so remediation and testing in any power plant using Rosemount instruments and programs could not begin until October-November at the earliest.

"Unmanned" enginerooms (and all the remote sensor and control instrumentation that implies) are very, very common in the shipping industry since the mid-and early 1980's - perticularly in the latest Euorpean and Japanese/Korean/Tawainese ships built since then - perhaps 50% of the world's ships. These simply no longer sail with enough crewmen to fully man their engine rooms anymore full time - (Note - emergency manning is available still.) So I was very alalrmed that surveys found 11% of embedded systems are "suspicious" when enginerooms were surveyed for Y2K compliance. (But is "suspicious" necessarily the same as "falinig"? Don't know.)

"Dan" is regular reader here - a Polyanna, but not an unreliable source thereby - who has checked "several hundred" embedded chips "in his power plant" - we know know the plant id or umber of plants surveyed - but he indicated he has found none that are Y2K failures. Good - 0 out of 100 in one test is statistically in the same ballpark as 3% out of several thousand when many other tests and reports are added in. But it indicates too that one person could be testing differently (or testing for a different level of compliance) for Y2K than others are. For example, if he only tested the Rosemount sensors, he too would have discovered 0% failure - if somebody else was reporting the compliance (or need for replacement) of the next higher level computer logging program.

But if nobody at his utility checked the next higher program - or thought "Dan" was checking that program also when he reported "0% failure in embedded chips" - then that utility may still be subject to alarms and potential failures.

Thus, is the "embedded chip" problem 0% or 25% ("The chip tested okay, but the patient - the process itself - still died")? We don't know absolutely - and I am still troubled a great deal by the simple age of the "processing computers" on plant floors - the older 286 and 386 "industrial computers" that run CNC machines, tensile stress gauges, and simple 10- and 12-year-old process controllers that may fail next year (as my old 386 at home might fail) or may not fail next year (as my 486 will only need rebooting and resetting).

But in the meantime, it (the 486) will not run correctly.

-- Robert A Cook, PE (Kennesaw, GA) (Cook.R@csaatl.com), April 05, 1999.


Late response here, as I've just gotten back to this thread.

Cody wrote (3/25) "Tom and Sysman: Don't you understand irony? Alison is agreeing with our alarmist conclusions, not attacking them."

Alison's point, and intent, were plain to see. My "Cut us crazies some slack" was intended as a mordant joke to follow it up, but it seems that wasn't apparent. Owel.

-- Tom Carey (tomcarey@mindspring.com), April 05, 1999.


Hi Tom, if you noticed my later reply... I know (grin), so was I. <:)=

-- Sysman (y2kboard@yahoo.com), April 05, 1999.
