Hoff's Y2K error analysis (continued)


Hoff, this new thread continues our existing discussion. The following is taken from our former thread, but has been edited slightly.

OK, let's see if I get it (finally). The assumption of system replacements taking place during the 24 months of 1998 and 1999 was meant to have those replacements completed before the rollover period, which begins with the last week of 12/99. So the 1.13% figure would not apply to the last week of 12/99.

But even with this correction, the rollover period would be generating errors at 2.10% of function points per month, versus 1.13% per month for the period prior to rollover. I realize this nearly 2:1 ratio reflects your having doubled the Gartner Group estimate "to add a level of certainty," but if using their rate, rather than double their rate, is what's required to bring the two error rates into line, that would suggest greater uncertainty.

Question: why would errors in a replacement system not continue to surface after the system was installed, and why should these not be included in error rate estimates?

Another question: given the problems that can surface during installation, might not the effort to get the system up and running take several days (or longer), rather than occur strictly on the scheduled production date? This would mean that concentrating planned installations at the end of a month (for example) might not concentrate the actual effort involved to the same extent.

Finally, in distributing the unremediated or missed Y2K errors, there's a point in the analysis where you take 25% of 4.2%. I was wondering whether you might have meant to take 25% of the 5.25%. I noticed this when I was initially reading your calculations, but hadn't bothered to mention it because it would have a negligible effect compared to the other areas of discussion.

Thanks.

-- David L (bumpkin@dnet.net), September 27, 1999

Answers

Posted by: David L

Hoff, I took the liberty of posting again so I could expand on one of the points hinted at in my above post.

In your analysis, system replacement errors are allocated strictly to 1998 and 1999, based on those replacements being fully implemented prior to the rollover period.

And therein may lie a problem. Software errors can occur only if the software containing the errors is executed. The purpose behind much of the code in a system is to intercept and gracefully handle irregularities, such as an invalid value entered by a user, an interfacing system being unavailable, and other realities of an imperfect world. Well-crafted software handles contingencies that are unlikely to arise even once in the entire life of the system.

It follows, then, that a significant portion of the code within a system will not have to be executed during its first year. Further, I would suggest that the code that does get executed in that first year will produce a percentage of the total system errors that is considerably smaller than its percentage of the total lines of code in the system.

System testing focuses on those scenarios most likely to occur under field conditions. Therefore, the code that executes those scenarios will have had a higher percentage of its errors identified and rectified prior to deployment than will the code for the more obscure cases.

My conclusion is that the function point error rate for a replacement system (excepting a small allowance for those errors that would not ever surface) needs to be spread over the entire expected life of the system.

-- David L (bumpkin@dnet.net), September 27, 1999.


The main reason I made the conservative assumptions was to try to account for a number of variables that have no real estimates available. As you pointed out, one was doubling the error rate estimated by the Gartner Group.

As I said previously, another of these assumptions was the constant, uniform rate of implementations, as well as of remediated systems being reimplemented into production. I fully expect that these in actuality followed a more Normal distribution, which would increase the actual error rate substantially at the peaks.

On the flip side, as you point out, not all errors occur at implementation. However, the rate and severity of errors immediately following implementation are considerably higher than in subsequent periods. These errors tend to affect day-to-day operations, and as such have a much higher impact. Even so, I discounted the errors introduced through implementations by 85%, to again provide a conservative estimate and try to account for as many unknown variables as reasonably possible.

Note as well that the error metrics from Jones take into account only the software itself, not errors due to variables such as data conversions and the installation process itself.

-- Hoffmeister (hoff_meister@my-deja.com), September 28, 1999.


Posted by: David L

I realize you sought to make conservative assumptions, but it's hard to gauge their validity without seeing them applied.

Consider a hypothetical replacement system with 50 inputs, 50 outputs, 25 user inquiries, 100 data files updated by the system and 10 interfaces to other applications. Applying the function point calculation guidelines given by Capers Jones and referenced in your analysis results in the following FP counts:

Inputs: 50 x 4 = 200

Outputs: 50 x 5 = 250

Inquiries: 25 x 4 = 100

Data files updated: 100 x 10 = 1000

Interfaces: 10 x 7 = 70

for a total of 1620 FPs.

If, for the first two years of deployment, Y2K-equivalent errors in this system were discovered at an average rate of one per week, that would translate to 104 errors. In contrast, applying your assumptions would translate to 75% x 1620 = 1215 Y2K-equivalent errors found in those two years.
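
To make the arithmetic easy to check, here is a rough Python sketch of the same calculation (the counts, weights and rates are just the hypothetical figures above):

```python
# Hypothetical replacement system; component counts are invented for illustration.
# Weights follow the function point guidelines cited in the analysis.
components = {
    "inputs":             (50, 4),
    "outputs":            (50, 5),
    "user inquiries":     (25, 4),
    "data files updated": (100, 10),
    "interfaces":         (10, 7),
}

total_fp = sum(count * weight for count, weight in components.values())
print(total_fp)  # 1620 function points

observed_errors = 2 * 52           # one error per week over two years = 104
modeled_errors = 0.75 * total_fp   # 75% of FPs generating errors = 1215
print(observed_errors, modeled_errors)
```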

This example is intended to show the need to consider concrete instances in assessing the validity of a theoretical model.

-- David L (bumpkin@dnet.net), September 28, 1999.


David L: I agree with your conclusions about error rates that surface with replacement systems.

-- BigDog (BigDog@duffer.com), September 28, 1999.

Really, BigDog? You're agreeing that Software Errors due to the initial implementation occur at a uniform, constant rate from implementation? And that the initial code generates errors at this same rate for the life-span of the software? Pretty interesting stuff.

Anyway, David, there were already too many complaints that the post put people to sleep; delving deeper into relative estimates would only have made that worse. That's one reason I took conservative estimates and discounted software error rates so heavily: to simplify the post. But since you seem interested, let's expand the analysis.

I'll agree that errors do continue to occur after the implementation. But there is a definite slope, starting at implementation, that goes downward with time. And it is definitely not as simple as 1 per week forever.

Eventually, a state is reached where the errors are essentially being created by the fixes, and do not derive from the initially implemented code. At that point, you reach what is in essence a "stable" system, one that has bugs that require ongoing maintenance. But this is the relative base of errors in all software, and is essentially constant. That is, errors in "stable" systems are constant both before and after the rollover, disregarding Y2k errors themselves.

For estimation purposes, my best guess is that a typical system implementation requires at the outside 6 months to reach this stage. Breaking those six months into 2-week periods gives 12 two-week periods. As a starting point, assume 10% of the bugs are discovered in the initial two-week period, with the rate sloping downward to 0% after the 12th.

A rough approximation for the percentage of errors in any given two-week period is then given by the function Y = 10 - (5/6)*X, where X is the number of two-week periods from implementation.

So:

10.00% at implementation

08.33% at 1 month

06.67% at 2 months

05.00% at 3 months

03.33% at 4 months

01.67% at 5 months

00.00% at 6 months

Also, let's assume 10% of the total errors remain as the base error level after 6 months.
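
As a quick sketch of that slope in Python (the linear decay and the 10% starting point are just the assumptions stated above, nothing more):

```python
# Percent of implementation errors discovered in a given two-week period,
# per the assumed linear decay Y = 10 - (5/6) * X.
def discovery_rate(x: int) -> float:
    """x = number of two-week periods since implementation (0 at go-live)."""
    return max(0.0, 10.0 - (5.0 / 6.0) * x)

# Values at monthly boundaries (2 two-week periods per month):
for month in range(7):
    print(f"{month} month(s): {discovery_rate(2 * month):5.2f}%")
# 10.00%, 8.33%, 6.67%, 5.00%, 3.33%, 1.67%, 0.00%
```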

My original post calculated 24.75% of the function points generating errors of the same magnitude as Y2k through system implementations. But that number was after discounting implementation errors by 85%. Part of that discounting was to account for not all errors happening at once. So, instead of 85%, discount implementation errors by 75%. That leaves a new figure of 41.25% of the function points generating errors of the same magnitude as Y2k through system implementations.

From above, assume 10% of that is the base level of errors, or 4.125%, leaving 37.125% of the function points generating errors during the 6 months from implementation.
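
Spelled out as a quick sketch (the 165% undiscounted figure is implied by the original 85% discount rather than something I stated directly):

```python
# Working back from the original post's numbers (all values are percentages of FPs).
after_85_discount = 24.75                    # after discounting implementation errors by 85%
undiscounted = after_85_discount / 0.15      # implied undiscounted figure: 165.0
after_75_discount = undiscounted * 0.25      # relaxing the discount to 75%: 41.25
base_level = 0.10 * after_75_discount        # 10% kept as the ongoing base: 4.125
shakedown = after_75_discount - base_level   # spread over the 6-month shakedown: 37.125
print(undiscounted, after_75_discount, base_level, shakedown)
```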

Now, these need to be spread throughout the 24 months of 1998-1999. Previously, I used a uniform distribution to provide a level of conservatism. But a uniform rate certainly was not the case; a more Normal distribution is appropriate.

Given a 24-month period, a uniform distribution yields 4.167% of the systems implemented per month. Being conservative, assume a generally Normal distribution peaking on January 1, 1999, at a rate twice the uniform rate. Thus, January 1, 1999 had 8.34% of the implementations. In general, the percentage of systems implemented each month can be approximated by the function Y = .695*X for the period January 1, 1998 through January 1, 1999, where X = number of months from January 1, 1998, with the rate declining symmetrically thereafter.

So:

Jan 1998 0

Feb 1998 0.695%

Mar 1998 1.390%

Apr 1998 2.085%

May 1998 2.780%

Jun 1998 3.475%

Jul 1998 4.170%

Aug 1998 4.865%

Sep 1998 5.560%

Oct 1998 6.255%

Nov 1998 6.950%

Dec 1998 7.645%

Jan 1999 8.340%

Feb 1999 7.645%

Mar 1999 6.950%

Apr 1999 6.255%

May 1999 5.560%

Jun 1999 4.865%

Jul 1999 4.170%

Aug 1999 3.475%

Sep 1999 2.780%

Oct 1999 2.085%

Nov 1999 1.390%

Dec 1999 0.695%
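
Or, as a sketch in Python (the down-ramp after January 1999 simply mirrors the up-ramp, as in the table):

```python
# Percent of all system implementations going live in a given month,
# Jan 1998 (month 0) through Dec 1999 (month 23), ramping linearly to a
# peak of 8.34% in Jan 1999 (month 12) and back down.
def impl_rate(month: int) -> float:
    x = month if month <= 12 else 24 - month
    return 0.695 * x

rates = [impl_rate(m) for m in range(24)]
print(round(sum(rates), 2))  # ~100.08 -- the ramp sums to roughly 100% of implementations
```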

So, at the peak, the implementations going in during Jan 1999 would generate errors at .310% of the function points (37.125% * (.10 errors at implementation) * (.0834 of implementations)).

But, as you pointed out, errors are also generated past implementation. We used a 6-month shakedown period, so the actual error rate for a given month is given by: .10 * (37.125 * (Impl Rate for Month)) + .0833 * (37.125 * (Impl Rate for Month - 1)) + .0667 * (37.125 * (Impl Rate for Month - 2)) + ... + .0167 * (37.125 * (Impl Rate for Month - 5))

This gives the estimated percentage of function points generating errors as:

Jan 1998 0

Feb 1998 .0258

Mar 1998 .0731

Apr 1998 .1376

May 1998 .2150

Jun 1998 .3009

Jul 1998 .3911

Aug 1998 .4813

Sep 1998 .5711

Oct 1998 .6708

Nov 1998 .7520

Dec 1998 .8449

Jan 1999 .9325

Feb 1999 .9711

Mar 1999 .9667

Apr 1999 .9290

May 1999 .8634

Jun 1999 .7818

Jul 1999 .6915

Aug 1999 .6019

Sep 1999 .5113

Oct 1999 .4209

Nov 1999 .3307

Dec 1999 .2402

Jan 2000 .1502
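
For anyone who wants to check the table, here is a rough Python version of the same convolution (the profile and ramp are the assumptions above; expect small rounding differences from the table):

```python
# Monthly percent of function points generating errors from new implementations:
# convolve the implementation ramp with the six-month error-discovery profile.
SHAKEDOWN_FP = 37.125  # % of FPs generating Y2K-scale errors during shakedown

# Fraction of a system's shakedown errors surfacing 0..5 months after go-live.
discovery_profile = [0.10, 0.0833, 0.0667, 0.05, 0.0333, 0.0167]

def impl_fraction(month: int) -> float:
    """Fraction of all implementations going live in a month (Jan 1998 = 0)."""
    if month < 0 or month > 23:
        return 0.0
    x = month if month <= 12 else 24 - month
    return 0.695 * x / 100.0

def error_rate(month: int) -> float:
    return sum(SHAKEDOWN_FP * frac * impl_fraction(month - lag)
               for lag, frac in enumerate(discovery_profile))

for m in range(25):  # Jan 1998 (0) through Jan 2000 (24)
    print(m, round(error_rate(m), 4))
# Peaks around .97 at month 13 (Feb 1999); month 24 (Jan 2000) comes out near .15
```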

So, expanding the analysis yields a peak rate, in February 1999, of .9711% of function points generating errors due to software implementations. Adding the .1% from the initial post, due to unremediated Y2k errors and bad fixes, yields a rate of 1.0711%.

Note that to be complete, errors due to bad fixes during this period should also be calculated, which would add more. I just don't have the time or the will at the moment.

The baseline from the original post was 1.05% errors during the two-week rollover period. Add the .1502% above (the implementation-error rate carried into Jan 2000) to get 1.2002% of function points generating errors during rollover.

1.0711% vs 1.2002%. Hardly earth-shattering, especially since we've seen absolutely no evidence of "systemic cross-cascading failures" to date. And note, this still discounts errors in new software by 75%, and still doubles the estimated error rate from the Gartner Group.

-- Hoffmeister (hoff_meister@my-deja.com), September 28, 1999.



Really, BigDog? You're agreeing that Software Errors due to the initial implementation occur at a uniform, constant rate from implementation? And that the initial code generates errors at this same rate for the life-span of the software? Pretty interesting stuff.

It would be, except that I was giving an average rate and did not intend to imply that it was a uniform rate. I agree that there's a definite slope to errors occurring over time.

I think the main difference in our views concerns the percentage of errors that lie dormant in the code. I'm willing to agree to disagree on this point.

I appreciate the time you've taken to discuss this subject.

-- David L (bumpkin@dnet.net), September 28, 1999.


FYI: double-checking my calculations revealed that this model in actuality assumes almost 70% of the errors introduced through implementations lie dormant, instead of the 10% stated.

Beyond the 10%, when I distributed the errors through the 6 months, I accounted for only 35% of them (10% + 8.33% + 6.67% + 5% + 3.33% + 1.67%).

So, of the 41.25% of function points generating errors on the level of Y2k errors, only about 12.85% are actually distributed.
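
The arithmetic behind those two figures, for anyone checking (the 12.85% is just the sum of the monthly table in my previous post):

```python
# Discovery profile by month, as percentages of the shakedown errors.
monthly_weights = [10.0, 8.33, 6.67, 5.0, 3.33, 1.67]
print(round(sum(monthly_weights), 2))          # 35.0 -- only 35% of the errors were distributed

table_total = 12.85                            # sum of the monthly error-rate table above
print(round(100 * (1 - table_total / 41.25)))  # 69 -- roughly 70% effectively left dormant
```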

-- Hoffmeister (hoff_meister@my-deja.com), September 29, 1999.


A rational explanation for making Y2K preparations http://www.greenspun.com/bboard/q-and-a-fetch-msg.tcl?msg_id=001R UO

Sincerely,
Stan Faryna

Got 14 days of preps? If not, get started now. Click here.

Click here and check out the TB2000 preparation forum.



-- Stan Faryna (faryna@groupmail.com), October 01, 1999.

Well, Stan, appreciate the, ummm, contribution to the thread.

-- Hoffmeister (hoff_meister@my-deja.com), October 01, 1999.

