Y2K May Cost Europe $210 Billion


Sorry if this has been posted before, but I find the description of the types of damage that will (if I may be so bold) occur interesting. Link

By Sylvia Dennis, Newsbytes August 02, 1999

Ongoing research from International Monitoring suggests that the Y2K issue could cost countries within the European Union (EU) as much as $210 billion.

Nick Gogerty, a spokesperson for the Y2K specialist research firm, told Newsbytes that the estimated damage will be caused by a total of 58 million individual hardware, software, and embedded system errors.

International Monitoring says that only 10 percent of those errors will occur on the New Year's date change itself, with the rest occurring "downstream."

To arrive at that figure, International Monitoring says it uses a quantitative model for estimating damages. The first step involves calculating the technological inventory of a country, including the estimated amount of hardware, software, and embedded systems.

The firm then isolates how much of that technology is likely to be date-sensitive. The final bug estimate is derived using a figure for software fix efficiency and the number of systems likely to go unrepaired because of late starts.

The report says that the final figure is the number of bugs likely to cause problems. Damage estimates are then made based on the economic profile and technology utilization within the various countries.
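To make the arithmetic concrete, here is a rough sketch in Python of the kind of model the article describes. Every number and name below is a placeholder assumption chosen for illustration, not a figure from International Monitoring's report.

    # Back-of-envelope sketch of the model described above.
    # All parameter values are illustrative assumptions only.

    def estimate_residual_bugs(inventory, date_sensitive_share,
                               fix_efficiency, unrepaired_share):
        """Bugs left after remediation: date-sensitive systems that were
        either never repaired (late starts) or missed by imperfect fixes."""
        date_sensitive = inventory * date_sensitive_share
        never_repaired = date_sensitive * unrepaired_share
        missed_by_fixes = date_sensitive * (1 - unrepaired_share) * (1 - fix_efficiency)
        return never_repaired + missed_by_fixes

    def estimate_damage(bug_count, average_cost_per_bug):
        """Damage scales the bug count by an average cost, which in the real
        model would be weighted by a country's economic profile and
        technology utilization."""
        return bug_count * average_cost_per_bug

    bugs = estimate_residual_bugs(
        inventory=500_000_000,      # hardware + software + embedded systems
        date_sensitive_share=0.15,  # fraction that actually handles dates
        fix_efficiency=0.95,        # share of date-sensitive systems fixed correctly
        unrepaired_share=0.20,      # systems never touched because of late starts
    )
    print(f"Residual bugs: {bugs:,.0f}")
    print(f"Estimated damage: ${estimate_damage(bugs, average_cost_per_bug=3_500):,.0f}")

Written out this way, it is clear the headline figure is only as good as the assumed shares and per-bug costs.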

Gogerty said that it is important to remember that most bugs will be small inconveniences, but some could cause failures in critical national infrastructures.

The research also highlighted that there will be three types of damage caused by the Y2K issue: direct damage, indirect damage, and ambient damage.

Direct damage, as is normal in the information technology (IT) industry, will consist of system failures or incorrect processing. International Monitoring says that many of the cases researched and reported so far have not involved full failures.

For example, the firm says, PCs whose clocks roll over to 1980 after December 31, 1999, do not always fail; they continue functioning, albeit with dates that may corrupt data files and databases.
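As a minimal illustration of that kind of silent corruption (my own sketch, not an example from the report), consider a program that assumes two-digit years belong to the 1900s:

    from datetime import date

    # A legacy two-digit-year convention maps "00" to 1900, so nothing
    # crashes, but record ordering is quietly corrupted.
    def parse_legacy_date(yy, mm, dd):
        return date(1900 + yy, mm, dd)

    invoices = [
        ("A-1001", parse_legacy_date(99, 12, 30)),  # Dec 30, 1999
        ("A-1002", parse_legacy_date(0, 1, 5)),     # intended as Jan 5, 2000
    ]

    # The "newest" invoice now sorts as the oldest entry in the file.
    for invoice_id, d in sorted(invoices, key=lambda record: record[1]):
        print(invoice_id, d.isoformat())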

Larger systems, however, can and do fail in much more significant ways. The report notes that direct damage can also include infrastructure failures.

Indirect damage is a less quantifiable problem, however. The term refers to problems of business partners, customers, vendors, regulators, or others having a relationship with the directly damaged organization.

For example, the report says, a just-in-time system failure in one firm may bring an assembly line on another continent to a halt. The failure of a residential emergency phone system could increase damages in many time-sensitive situations, such as fire, crime, or health emergencies.

The third problem - ambient damage - is an interesting issue. International Monitoring says that it is caused by non-standard behavior, and could include corporate or individual stockpiling which may induce shortages.

The report suggests that ambient damages can also include external actions needed to minimize impact from Y2K. One type of ambient damage would be the extra cost required to increase customer communications in an effort to assure clients of corporate sustainability.

The report goes on to note that duplicating records in paper or other formats is another example of ambient damages.

International Monitoring says that the country damage profiles and scenarios vary from nation to nation. Delays or outages of national infrastructure are considered highly probable for most states.

International Monitoring's Web site, where segments of the report can be reviewed and purchased, is at http://www.intl-monitoring.com.

-- y2k dave (xsdaa111@hotmail.com), July 31, 1999

Answers

Anything that comes out of International Monitoring is worthless to me. Who the heck are they? What credentials do they have? When did they come into existence? Seems like someone decided to rehash information and sell it for big bucks.

Their speculation is as valid as yours or mine; nobody knows what is going to happen, and nobody can guess the cost.

-- Doubting Thomas (DoubtingThomas@notso.com), July 31, 1999.


doubting,

Can you tell us what you think about the predictions from Cap Gemini and the Gartner Group? What information do you rely upon?

-- y2k dave (xsdaa111@hotmail.com), July 31, 1999.


Yeah, I've worked with such models. We really don't know how much code we're talking about, so let's use C for lines of code. We don't really know what the error percentage will be, so let's plug in E%. And we really don't know what percentage of those errors will be minor and which will be serious, so let's plug in s% for minor and S% for serious, with s+S=100. And we don't know how much these errors will cost in actual money, so let's plug in m$ for minor errors and M$ for serious errors. And we have no clue whatever about downstream costs, so let's use d$ and D$.

Now, depending on what values we used for C, E, M, m, D, d, and S, we come up with anywhere from 0 to 10x GWP (gross world product). Next, what are *reasonable* values for these variables? Well, let's look at some known metrics and some actual case studies, check the seat of our pants, and here's what the model says. Finally, who's paying for this study and what do they want to see? That's easy: pick values that match the desired results. Presto, 'scientific' evidence. Piece of cake.

-- Flint (flintc@mindspring.com), July 31, 1999.
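Flint's point can be made concrete in a few lines of Python. The model below is only a sketch of the parameterization he describes, and every value plugged into it is arbitrary, which is exactly the problem:

    # Sketch of the parameterized cost model described above.
    # Every input value is an arbitrary assumption.
    def y2k_cost(C, E, s, m, M, d, D):
        """C = lines of code, E = error rate per line,
        s = share of errors that are minor (serious share is 1 - s),
        m/M = direct cost per minor/serious error,
        d/D = downstream cost per minor/serious error."""
        errors = C * E
        minor, serious = errors * s, errors * (1 - s)
        return minor * (m + d) + serious * (M + D)

    # Same model, two sets of "reasonable" inputs, wildly different answers.
    low  = y2k_cost(C=1e11, E=1e-5, s=0.99, m=10,  M=1_000,  d=5,  D=500)
    high = y2k_cost(C=5e11, E=1e-4, s=0.90, m=100, M=50_000, d=50, D=25_000)
    print(f"Low-end scenario:  ${low:,.0f}")    # tens of millions
    print(f"High-end scenario: ${high:,.0f}")   # hundreds of billions

Pick the inputs to match the answer you want, and you have "scientific" evidence either way.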

