An Application Of The NERC Report

greenspun.com : LUSENET : TimeBomb 2000 (Y2000) : One Thread

Below is an excerpt from Bonnie Camp's NERC analysis:

http://www.cbn.org/y2k/insights.asp?file=990114o.htm

Seventeen percent of the utilities which responded to the NERC November survey estimate they have completed 10% or less of their fixing and testing of critical systems. 16 utilities have not done any remediation or testing yet. Are they at the starting line or still trying to get to the field? Just 34%, or one-third of the utilities have completed more than 50% of critical systems remediation and testing. Two thirds (66%) have completed 50% or less of their remediation for critical systems.

===================================================================

Let's take these numbers and try to make a real life application.

First, there are a couple of assumptions that we need to get out of the way for the hagglers and pickers of nits for no other reason than to be pickers of nits. We know that not all programs and/or applications and/or systems were created equally. They are not all the same size and complexity. But that is not the issue here and is not under discussion. We also know that there is a tendency to fix the 'easy' ones first and leave the 'hard' ones to last, and that, too, is not an issue here. As well, it is not an issue that the last part of a project is the hardest part. There are a lot of these points that can be brought to bear both for and against what I am going to argue, but they would be straining out the gnat and swallowing the camel. This also will not include FIFTY percent of the job, which is testing. We will leave that for later. So let's stick to the issue unless I have made a grievous error somewhere, in which case I am open to correction. Nit or no nit.

For the purposes of this discussion we will assume that all programs/systems are equivalent for remediation purposes.

Company A is one of those companies that is doing 'better'. They have 1000 systems and 200 are Mission Critical. (I won't even try to skew this by using a bad one.) It has fifty percent of its Mission Critical systems on the remediated side of the list and the other half to be addressed. Let's say that they began coding Jan 1, 1998. One year out of the remaining two has elapsed. To begin with, there already appears to be a problem. They have half the code finished and half to go. That would leave no time at all for full integrated testing. But that is just another issue. Let's believe, for the moment, that we are in a fantasy world and they don't need to test, making it even better for them. All things being equal, they have half the time to go and half the work to do. They are in a position to make it, but it would come right down to the wire. Not. There is something very, very important left out of the equation. Of the systems that have been placed on the 'done' list, how many of them actually required little or no remediation at all? If I say half, then the quibblers rush in. So I won't. I will say 25%. That means that they have spent ONE year on only 25% of all the mission critical systems. They have 50% to go, or TWICE as many.
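The arithmetic here can be checked in a few lines of Python; all of the figures are the post's illustrative assumptions, not reported data:

```python
# Company A, per the post's assumptions (illustrative figures only).
MISSION_CRITICAL = 200        # mission-critical systems
REPORTED_DONE = 0.50          # fraction reported complete after one year
NEEDED_NO_WORK = 0.25         # fraction of all systems needing little/no remediation

# Systems that actually required remediation work in year one:
worked_on = int(MISSION_CRITICAL * (REPORTED_DONE - NEEDED_NO_WORK))
# Systems still needing real remediation work in the remaining year:
remaining = int(MISSION_CRITICAL * (1 - REPORTED_DONE))

print(worked_on, remaining)    # 50 100
print(remaining / worked_on)   # 2.0 -> twice the work, same time left
```

One year bought 50 real fixes; the next year has to deliver 100.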

Here is where the first 'legitimate', if not illogical, quibble appears. If the systems were arbitrarily divided down the middle, then there would be just as many systems NOT needing remediation in the second half, thus equalizing the situation. NOT. And this is where you can see that this quibble falls apart. And this is one place where 'reality' has to come in. There is NO company, having arbitrarily drawn a line down the middle, having said it had fifty percent complete, that would not have also included the other 25% in the report of remediated systems. No. They would, at that point, actually be 75% complete. We all know, for a fact, that any remediated or substantially remediated system would AUTOMATICALLY be in that first half and dispensed with immediately. This is unquestionable. If you want to debate that point, then read no further.

The situation is that they do, in fact, have all the substantially remediated or remediated systems in the first half. The reality is that they DO in fact have twice as much work to do. And quite simply, there is not enough time. The real problem was the late start. The number one problem that causes more projects to fail than anything else. They started too late and had to create a schedule that did not reflect reality, but reflected only the deadline, and not very well at that.

Let's try to work this thing backwards to find out what their REAL schedule and rate of progress should have reflected. We know that the second half of the project will actually take TWICE the time of the first half, because the first half actually included half of 'those' systems already done. So the first half had to be one third of the allotted time, because it is actually one third of the total work needing to be done. They should have completed the first half by August of 1998, 8 months out of the 24 remaining. As of Jan 1, they are already 4 months late. They have twice the work to go.
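Working that backwards as a quick sketch (again using the post's assumed numbers: 25 units of real work hidden in the reported 'first half', 50 units in the second, 24 months in total):

```python
# Backwards schedule check, per the post's assumptions.
TOTAL_MONTHS = 24             # Jan 1998 through Dec 1999
first_half_work = 25          # units actually remediated within the reported "first 50%"
second_half_work = 50         # units still needing real remediation
total_work = first_half_work + second_half_work   # 75

# At a uniform work rate, months the first half should have taken:
first_half_months = TOTAL_MONTHS * first_half_work / total_work
print(first_half_months)      # 8.0 -> due end of Aug 1998; done Jan 1999 means 4 months late
```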

So where are they in reality? In reality they have twice as much work to be done as they have done and they have zero time at all for testing, a full half the job.

In order to have worked it backwards properly for testing to consume HALF the time, they actually should have completed the first half by the end of April, 1998. So, actually, they are 8 months behind a proper job. This goes hand in hand with even the most naive understanding of IT metrics.

They are not going to make it. And this is a company that says it is HALF done. Forget the rest. According to the NERC report, *that* 66% is in 'similar' shape to what I have described. Mileage may vary, but overall, not far off that mark. Does that mean that the other 34% WILL make it? Not even close. If the bulk of them were actually 90% complete, that would be one thing. But we know that that is not the case. It is very reasonable to presume that most of them will not make it. IT metrics, business-as-usual and Pareto's Law all militate against the majority of them making it.

Now, let's throw the budget issue in here as well. The SEC filings indicate that less than half of the filing companies had spent more than 44% of their proposed budget. Look at how that would figure in here.

If they have spent 44% of their budget and they have twice as much work to go, they have another 88% to spend. That gives us a total of 132% of the proposed budget. And then comes that nasty testing issue again. Does testing cost as much as coding? More? Less? Let's just say it is HALF of the proposed budget, which is not even half of what the 'coding' costs. That will serve to make it an even LOWER amount. Using a random proposed budget of $100,000, the coding costs $132,000 and the testing another $50,000, for a total of $182,000, a WHOPPING 82% over budget. If testing actually costs half of the coding costs, then they are off by about 100%.
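The budget sums work out the same way (the $100,000 proposal and the testing-at-half-the-budget figure are the post's assumptions):

```python
# Budget check, per the post's assumptions (integer dollars to keep it exact).
proposed = 100_000
spent = 44_000                 # 44% of the proposed budget already spent
remaining_coding = 2 * spent   # twice as much work left -> another 88%
testing = 50_000               # testing pegged at half the proposed budget

total = spent + remaining_coding + testing
overrun_pct = (total - proposed) * 100 // proposed
print(total, overrun_pct)      # 182000 82 -> roughly 82% over the proposed budget
# If testing instead costs half of coding (66_000), the total is 198_000: about 100% over.
```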

There are a number of curve balls that you can throw. I know that I have not covered 'all' the bases, and it was not my intent nor within the scope of my ability to do so. I am just talking about all things being relatively 'equal' in the normal everyday sense of IT experience. One could argue that, miracle of miracles, things go as planned. Usually, though, the last part of the job is hardest. Assurances from vendors fall flat, customers cross you off their list, programmers burn out, unanticipated problems crop up, etc., etc., causing slippages to pile up like cars careening into each other at the bottom of an icy hill. But, on the whole, the odds stack up heavily against successful remediation.

No matter how you slice it, these companies are in big big big trouble.



-- Paul Milne (fedinfo@halifax.com), January 15, 1999

Answers

This is a serious contribution and I'm gonna take a while to work through the assumptions based on my own big systems experience. For now, just this:

Our best hope (how pathetic is this) is that most orgs have actually completed 70-90% remediation but are too stupid to realize it and think and report they've only done 40%. LOL.

I ain't kidding. There is only the loosest possible connection (don't dare use the word "correlation") between the actual metrics (hey, let's get real: almost no one collects any meaningful metrics, period) and what is reported by the PR flacks. Check out the thread by good ol' "a" - "The Plan". Hilarious and real. Yes. So it is barely possible that the boys in the shop are actually doing great work but that the message hasn't gotten out the door. So, it's not necessarily that the situation is as bad as Paul says (wanna bet on it, though? More likely, FAR worse) but that the god-awfully-stupid compliance percentages are essentially meaningless. Meaning-less.

But, hey, compliance percentages pass for data, no? Thank you, Paul.

-- BigDog (BigDog@duffer.com), January 15, 1999.


Yup. A typical trick of the IT trade. For instance, this is how my state reports their progress at http://www.amrinc.net/nasire/y2k/results.cfm:

Assessment: 0%

Renovation: 0%

Validation: 0%

Implementation: 42%

Comments:

42% - overall percentage of ALL SYSTEMS WHICH ARE COMPLIANT. Mission Critical are included. Information unavailable for percentages of those systems which are assessed, renovated and validated.

This is not a pretty picture, despite the lame comment they tacked on as a disclaimer.

-- a (a@a.a), January 15, 1999.


a: ROTFLMAO

Or is it 41%? Is it 41.275%? Nice that no assessment was required. But if some bozo had put down 8% for assessment, what would that have meant?

Now, let's "compare" this 42% to organization B's 42%. LOL.

Oh, no, better, let's average all these compliance percentages. Hey, that will give us something great to feed the media!

This is directly on topic, guys. Because Paul's analysis starts from the PR flack that parades as data, says, "ok, guys, you said so" and then shows how inane it is against any historical measure of real-life IT.

Understanding this is CRUCIAL to understanding why Y2K's consequences are unknowable and why you must prepare. Even if reported compliance is 100% by October of 99 (and, psst, it probably will be, ROTFAgain), we will have 0% reason to trust it.

Deano's system may make it (many will, many will, thanks Deano for your work), but the words on this thread are written in almighty stone when it comes to the big picture.

Sure, we may escape TEOTWAWKI but it will be sheer comedy if we do. The Y2K effort is less serious, taken globally (again, I except the many "Deano's") than the efforts of Inspector Clouseau.

-- BigDog (BigDog@duffer.com), January 15, 1999.


note: copied this over from EUY2K thread meant to reference this thread. (Can you spell s-e-a-r-c-h?) ~C~

Perspectives are based on an individual's trust factor, as well as other things. If Mr. Milne's basic premise is that everything self-reported in utility 10Qs (concerning where they are in fixing critical systems) is untrue, then there is no proper argument to be made. In that case, we can't assume to know anything about what progress there is - or is not. Personally, I have never observed that any group of people perennially lies their butts off all the time. Some, yes, but this doesn't incline me to tar everyone with the same brush.

If the premise is that the reports are not lies, but that utilities are defining project completion estimates in a misleading way, then there is a counter argument to be made. If I understand Mr. Milne's assessment in the post, what he is saying is this: If a utility has 200 mission critical systems, only a portion of those will actually need to be fixed and the rest will require "little or no remediation". Yet those already compliant systems will be included in the utility's percentage of completion - which in effect hides the actual work they have left to do. As in, if 150 of those critical systems needed to be fixed, and 50 didn't, then the utility would report they were 50% finished if they had fixed just 50 out of the 150 which really needed fixing. (50 not needing remediation added to the 50 which were fixed = 100, or half of the total systems.)
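That 200-system reading works out like this (illustrative figures from the example above):

```python
# The restated argument, in numbers (illustrative only).
total_systems = 200
no_fix_needed = 50
need_fixing = total_systems - no_fix_needed     # 150
fixed_so_far = 50

reported_pct = (no_fix_needed + fixed_so_far) * 100 // total_systems
actual_pct = fixed_so_far * 100 // need_fixing
print(reported_pct, actual_pct)   # 50 33 -> "50% finished", but only a third of the real work
```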

This argument presumes that utilities are looking at all their systems as a whole and are issuing percentage reports *including* any systems that have been assessed as not having Y2K problems. I have found utility 10Qs which did note the percentage of systems needing fixing versus those that don't - in their Assessment Phase. In the Remediation Phase of utility projects, however, the percentages given are usually worded such as "conversions" done and "conversions" remaining, or "implementations" completed versus "implementations" done, or "modifications made" and "modifications still to be done". These words do not apply to systems which did not need fixing.

Some utilities estimated the work done versus work remaining in man-hours, and TECO Energy wrote: "Sixty-five percent of the renovations to the transmission and distribution system have been made, which represents 55 percent of the work required to achieve Year 2000 readiness for this part of the project."

You do not "renovate" that which doesn't need to be, and if you're trying to falsify completion percentages then you don't specify them using words which all indicate "fixes". You also don't tell people that 65% of the fixes equalled only 55% of the work. While I also have a great deal of skepticism concerning corporate reports, I do not believe that utilities included systems in their completion estimates that did not have any Y2K problems to begin with.

I do agree, however, with Mr. Milne's statement, "Usually though, the last part of the job is hardest. Assurances from vendors fall flat, customers cross you off their list, programmers burn out, unanticipated problems crop up, etc etc."

-- Bonnie Camp (bonniec@mail.odyssey.net), January 15, 1999.

-- Critt Jarvis (Wilmington, NC) (critt@critt.com), January 15, 1999.


NERC reported that all systems that actually affect power generation are remediated and tested. Can't have that, of course. Since we can't PROVE they're lying, let's apply FUD. Off we go:

1) Response to NERC. Let's lie. Nobody is verifying these assessments, and NERC can't do anything even if they knew. Anyway, self-assessments are either deliberate high-level fabrications, or misinformed misrepresentations by clueless dolts. We all know the management class is composed of nothing else, although some of them used to be pretty damn good engineers. So sad to see how stupid they've become. Who'd have thought?

2) Games with numbers. There are lies, damned lies, and statistics; we all know this. So let's say we're 50% done. This means that we've done just enough assessment to know that 50% of our systems are not y2k-affected; we've actually done no work. Is this true? Get outta here, we're not looking for the truth; we've got a job and a silly deadline. The boss wants to see good numbers pronto, so let's say we're half done before we start. Looks good on paper, and management never questions anything that they like to hear, do they?

3) Reporting. Here's a device. OK, was that assessment? It uses a date, is *that* part of assessment? Let's turn the date ahead and see what happens. Does this count as part of remediation, or is this the testing phase, or are we still assessing? Well, it works OK, but it says the year is 00 on the display. Is this noncompliance? What should it say? Let's mark it down as a failure just to be safe. OK, we've tested that one, we're done, it works, it's a failure, whatever, on to the next one. How do they report this at XYZ utility? Who cares? Just make it look good.

4) Back-office systems. Well, we can generate power, but the billing system is hosed. Is this critical? Damn right it is. And the system that keeps track of the KWH for each customer won't work either. Another critical failure. Will we continue to generate power even if we might not get paid for it, or can't monitor it? How about payroll, does it work? No? Mark it down, another critical system failure. Looking bad here. But we gotta report accounting problems as critical, the bigwigs understand this stuff. Nobody knows jack about the hardware, they downsized those guys a couple years back.

5) Vendor compliance. This vibration sensor in the flywheel is gonna go haywire, and we won't know what's wrong. Is there an upgrade? Who cares, the vendor is years backordered. Can we live with this? Order one anyway, so we can check it off the list, problem remediated. We'll probably get the new one before we get vibration problems anyway. Just ignore the reading for now.

6) External systems. Are the railroads ready? Will telecom still work? Will confused traffic signals and riots prevent people from showing up for work? Will checks bounce or get lost? Do we really have to worry about this? Nah, not our problem. Good thing this stuff isn't on the checklist.

----

Now, can we trust the NERC report? Probably not, but I'm not allowed to say this because it smacks of angst. Does this mean we'll have power or not? Nobody knows, but I'm not supposed to say that either. Why are those who distrust this report so convinced that we won't have power? Because when you don't know, it's safest to assume the worst?

The NERC report is bad news. Yeah, it looks good, but don't be fooled. It says we'll have power, but we won't. It says utilities will be finished in time, but we know better. We can't have a good catastrophe and still have power, so there'll be no power. Trust me, I can see the future, any moron can see the future. Just ask one.

(Sorry, it's late and I'm getting punchy)

-- Flint (flintc@mindspring.com), January 16, 1999.



Flint,

It's not that we won't have power. I am planning on the basis of Rick Cowles' and Bonnie Camp's analyses, which seem to project nothing *worse* than regional problems and longer-run capacity shortfalls. Naturally, I'm assuming it's my region, but you wouldn't flame me for that, no?

With due respect to Bonnie, I have seen people in orgs do the craziest things when "measuring": they do indeed include systems that "don't apply to the problem" into a mix. Some do it deceitfully, some from confusion, some because they thought that was part of the deal. So, I wouldn't rule out the darker side of Paul's analysis.

I have the utmost respect for the boy-girl geeks who are killing themselves to fix the stuff. The lies happen as the work moves up the chain (again, "The Plan" from "a"). Do the people at NERC think they are lying? I'll bet they don't. They believe they are providing a public service and, thanks to the ability of Rick and Bonnie to do the Sherlock work, they are. But, ethically and professionally, the NERC report amounts to a lie.

The entire way the Y2K "metrics" are being handled, analyzed (not) and reported is, I'll repeat it, worse than the worst Inspector Clouseau approach. That's precisely why Bonnie's analyses are so vital when, on 1/16/1999, they shouldn't be necessary.

Flint, did I misunderstand your post?

-- BigDog (BigDog@duffer.com), January 16, 1999.


BigDog,

I think you got what I intended. I view the NERC report as a composite compiled from many organizations torn between conflicting goals. They don't want the legal liability that assurances bring, they don't want to cause any panic, they don't know how to summarize their progress (which varies wildly from system to system), they can't agree on what's critical, and there are no clear standards to follow.

And those whose job it is to summarize all this and present conclusions are under political pressure to make the raw data fit a predetermined picture. But no matter how they choose to summarize it, the result is a spin of some sort. Kind of like drawing a regression line through a random scatterplot -- any line is as good (or bad) as any other, and we're not much wiser than when we started. We have no shortage of facts, but we're no closer to meaningful conclusions.

I've been trying to say for some time that this is the kind of data we're dealing with. But this doesn't change the fact that there are lots of problems out there, and that's bad. Some of them won't be fixed in time, and that's bad. We can't predict with any precision what the impact of those problems will be, but those impacts cannot be good -- problems are problems.

-- Flint (flintc@mindspring.com), January 16, 1999.


Point by point answer to:

-- Bonnie Camp (bonniec@mail.odyssey.net), January 15, 1999.

-- Critt Jarvis (Wilmington, NC) (critt@critt.com), January 15, 1999.

note: copied this over from EUY2K thread meant to reference this thread. (Can you spell s-e-a-r-c-h?) ~C~ Perspectives are based on an individual's trust factor, as well as other things. If Mr. Milne's basic premise is that everything self-reported in utility 10Qs (concerning where they are in fixing critical systems) is untrue, then there is no proper argument to be made.

Several points. In my post above I PRESUMED an accurate and TRUTHFUL depiction. You have missed the point. I said that PRESUMING that THIS IS the truth, you have every reason to see why they ARE failing.

Second, you respond with a 'hypothetical' issue. I DID NOT address that issue in my post, the issue of whether self-reporting can be taken as an accurate representation of the truth. Your reply is utterly non-responsive, and as it is based on your above 'if', your response falls apart.

In that case, we can't assume to know anything about what progress there is - or is not.

Although this is NOT what I addressed, I will answer it anyway.

Personally, I have never observed that any group of people perennially lies their butts off all the time. Some, yes, but this doesn't incline me to tar everyone with the same brush.

I made no mention whatsoever about the utilities lying in my post as the basis for my points. Reading comprehension is required. My entire argument was based on the fact that they have reported 'x' amount done and have left out a vital piece of information. And that is, out of how much has been done, how much of that is attributable to actual remediation? This is AMPLY explained in my post and totally ignored by you.

If the premise is that the reports are not lies, but that utilities' are defining project compliancy estimates in a misleading way, then there is a counter argument to be made.

Let's hear it.

If I understand Mr. Milne's assessment in the post, what he is saying is this: If a utility has 200 mission critical systems, only a portion of those will actually need to be fixed and the rest will require "little or no remediation". Yet those already compliant systems will be included in the utility's percentage of completion - which in effect hides the actual work they have left to do. As in, if 150 of those critical systems needed to be fixed, and 50 didn't, then the utility would report they were 50% finished if they had fixed just 50 out of the 150 which really needed fixing. (50 not needing remediation added to the 50 which were fixed = 100, or half of the total systems.)

This argument presumes that utilities are looking at all their systems as a whole and are issuing percentage reports *including* any systems that have been assessed as not having Y2K problems.

They are. Their reports on their remediation are of their OVERALL work. They are NOT reports based upon ONLY what they are remediating, as if to say they began with 10% compliant systems and then report that they are 40% done instead of 50% done. That is absolutely ludicrous. Their reports concern OVERALL remediation, and they are including that which took little or no work to do. That is fine, if it is just an overall overview of their composite progress. But when you EXTRAPOLATE to their conclusion, and compare what has been done to what has YET to be done, you must include ONLY that which they ACTUALLY remediated. This is NOT so in their overall reports. It leads one to conclude that they could finish what they have remaining on time, because they do not inform the public that what they have actually done is only a fraction of what 'is remediated'.

I have found utility 10Qs which did note the percentage of systems needing fixing versus those that don't - in their Assessment Phase.

They are in the wild minority.

In the Remediation Phase of utility projects, however, the percentages given are usually worded such as "conversions" done and "conversions" remaining, or "implementations" completed versus "implementations" done, or "modifications made" and "modifications still to be done". These words do not apply to systems which did not need fixing.

Some utilities estimated the work done versus work remaining in man-hours, and TECO Energy wrote: "Sixty-five percent of the renovations to the transmission and distribution system have been made, which represents 55 percent of the work required to achieve Year 2000 readiness for this part of the project."

Yes, 'some'. And that is all.

You do not "renovate" that which doesn't need to be, and if you're trying to falsify completion percentages then you don't specify them using words which all indicate "fixes". You also don't tell people that 65% of the fixes equalled only 55% of the work. While I also have a great deal of skepticism concerning corporate reports, I do not believe that utilities included systems in their completion estimates that did not have any Y2K problems to begin with.

-- Bonnie Camp (bonniec@mail.odyssey.net), January 15, 1999.

-- Critt Jarvis (Wilmington, NC) (critt@critt.com), January 15, 1999.

You have obviously missed my point entirely. It is one thing to discuss that which you ARE remediating. It is another to include compliant systems in the overall report and have them considered as that upon which effort was expended. When one tries to extrapolate from 50% of systems done in one year to being able to do the same 50% of systems in the following year, it is necessary and VITAL to know if that first 50% includes a portion of previously compliant systems. If it includes as much as 50% previously compliant systems, then you can readily see, as my post pointed out, that they have TWICE as much work left to go in an uphill battle.

Your post in NO way has addressed this issue. I am ready when you are for a rebuttal that addresses the points that I made.

I do agree, however, with Mr. Milne's statement, "Usually though, the last part of the job is hardest. Assurances from vendors fall flat, customers cross you off their list, programmers burn out, unanticipated problems crop up, etc etc."

Of course, this was only ancillary to my whole post. But at least you were responsive to the point.



-- Paul Milne (fedinfo@halifax.com), January 16, 1999.


[My comments in brackets. Am also posting to EUY2K]

FIRST MILNE POST

Company A is one of those companies that is doing 'better'. They have 1000 systems and 200 are Mission Critical. (I won't even try to skew this by using a bad one.) It has fifty percent of its Mission Critical systems on the remediated side of the list and the other half to be addressed. Let's say that they began coding Jan 1, 1998. One year out of the remaining two has elapsed.

[I'll take your assumption here but, arguably, most began coding June 1998 or later. Let's ignore how stupid that is, even though it might appear that the coding is going more rapidly than you imply. Coding is the least interesting part of this anyway]

To begin with, there already appears to be a problem. They have half the code finished and half to go. That would leave no time at all for full integrated testing. But that is just another issue. Let's believe, for the moment, that we are in a fantasy world and they don't need to test, making it even better for them.

[Lots of unit testing and much integrated subsystem testing can take place in parallel, though you are correct about fully integrated system test, which most utility industry dudes say can't be done for utilities anyway without shutting down, no? But I understand you aren't dealing with testing here, which will indeed be a laugh riot later this year]

... There is NO company, having arbitrarily drawn a line down the middle, having said it had fifty percent complete, that would not have also included the other 25% in the report of remediated systems. No. They would, at that point, actually be 75% complete. We all know, for a fact, that any remediated or substantially remediated system would AUTOMATICALLY be in that first half and dispensed with immediately. This is unquestionable. If you want to debate that point, then read no further.

The situation is that they do, in fact, have all the substantially remediated or remediated systems in the first half. The reality is that they DO in fact have twice as much work to do.

Let's try to work this thing backwards to find out what their REAL schedule and rate of progress should have reflected. We know that the second half of the project will actually take TWICE the time of the first half, because the first half actually included half of 'those' systems already done.

[Your analysis doesn't account for the fact that, even with coding apart from testing, honest coding work ALWAYS slows down as the completion percentage rises. I know from other posts that you are aware of this and realize you've oversimplified. I say this here, though, because even if one quarrels with your core assumption that only 25% of mission-critical systems were done in '98 (hey, make it 35%, 45% or the original 50% if you want), they still won't get there in '99 even if we focus on code alone. Code completion is itself non-linear.]

So where are they in reality? In reality they have twice as much work to be done as they have done and they have zero time at all for testing, a full half the job.

[No, let's now assume that they are doing lots of unit and subsystem testing in parallel but that they need *only* 20% of overall project calendar time for system testing. We're still into the end of 1Q 2000 at the end of a very cold winter]

No matter how you slice it, these companies are in big big big trouble.

[WE are in big trouble]

FIRST CAMP REPLY TO MILNE:

[My comments in brackets]

Perspectives are based on an individual's trust factor, as well as other things. If Mr. Milne's basic premise is that everything self-reported in utility 10Qs (concerning where they are in fixing critical systems) is untrue, then there is no proper argument to be made.

[From what I could see, Milne took the self-reported data as accurate. His point is that there is no way to get to compliance in '99 based on the self-reported data]

If I understand Mr. Milne's assessment in the post, what he is saying is this: If a utility has 200 mission critical systems, only a portion of those will actually need to be fixed and the rest will require "little or no remediation". Yet those already compliant systems will be included in the utility's percentage of completion - which in effect hides the actual work they have left to do. As in, if 150 of those critical systems needed to be fixed, and 50 didn't, then the utility would report they were 50% finished if they had fixed just 50 out of the 150 which really needed fixing. (50 not needing remediation added to the 50 which were fixed = 100, or half of the total systems.)

[Yes, this is my understanding of his argument as well, but they would report 75% complete: 50 didn't need fixing, 50 were fixed; 50 still to go]

This argument presumes that utilities are looking at all their systems as a whole and are issuing percentage reports *including* any systems that have been assessed as not having Y2K problems. I have found utility 10Qs which did note the percentage of systems needing fixing versus those that don't - in their Assessment Phase. In the Remediation Phase of utility projects, however, the percentages given are usually worded such as "conversions" done and "conversions" remaining, or "implementations" completed versus "implementations" done, or "modifications made" and "modifications still to be done". These words do not apply to systems which did not need fixing.

You do not "renovate" that which doesn't need to be renovated, and if you're trying to falsify completion percentages then you don't specify them using words which all indicate "fixes". You also don't tell people that 65% of the fixes equalled only 45% of the work. While I also have a great deal of skepticism concerning corporate reports, I do not believe that utilities included systems in their completion estimates that did not have any Y2K problems to begin with.

[Bonnie, is your statement above your "belief"? I know how diligent you are with the numbers. What percentage of reporting companies explicitly distinguished mission-critical systems that needed renovation from those mission-critical systems that were determined to have no such need?]

MILNE'S REPLY POST TO CAMP: [My comments in brackets]

(Milne) I made no mention whatsoever of the utilities lying in my post as the basis for my points. Reading comprehension is required. My entire argument was based on the fact that they have reported 'x' amount done and have left out a vital piece of information. And that is, out of how much has been done, how much of that is attributable to actual remediation? This is AMPLY explained in my post and totally ignored by you.

[Milne is correct here. That was his question.]

(Camp from earlier post) This argument presumes that utilities are looking at all their systems as a whole and are issuing percentage reports *including* any systems that have been assessed as not having Y2K problems.

(Milne) They are. Their reports on their remediation are of their OVERALL work. They are NOT reports based upon ONLY what they are remediating, as if to say they began with 10% compliant systems and then report that they are 40% done instead of 50% done.

[I agree with Milne here, barring Camp reporting statistics from reports as per my query to her above. In similar projects in the past, we would logically do the following: assess systems; determine which are mission-critical; determine which don't need any work (I'm simplifying) and immediately put them in the remediated bucket; begin work on the ones that need fixing and report them as remediated as we completed them, etc. We weren't purposely "front-loading" things (the good boy and girl geekettes don't lie, by and large; that comes further up the chain), just delighted to get some bang from our assessment as soon as possible. But it sure did have the subsequent effect on schedules that Milne described in his original post.]
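[That schedule effect - free assessment wins counted as remediation progress early on - can be sketched as a toy projection. The numbers reuse Milne's two-year example from his original post and are illustrative only:]

```python
# Toy projection, reusing Milne's example: remediation runs
# Jan 1998 - Dec 1999 (24 months). After 12 months the utility
# reports 50% of 200 critical systems "done", but 50 of those
# needed no work at all (the front-loaded assessment wins).
months_elapsed = 12
total_systems = 200
reported_done = 100          # 50 compliant + 50 actually fixed
free_wins = 50
actually_fixed = reported_done - free_wins  # 50 real fixes

# Naive reading of the reported rate: 100 done in 12 months,
# 100 to go, so 12 more months - right on schedule.
still_to_go = total_systems - reported_done
naive_months_needed = months_elapsed * still_to_go / reported_done

# Real fix rate: only 50 genuine fixes in 12 months, and all
# 100 remaining systems are ones that genuinely need fixing.
real_rate = actually_fixed / months_elapsed   # systems per month
real_months_needed = still_to_go / real_rate

print(f"naive projection: {naive_months_needed:.0f} more months")  # 12
print(f"real projection:  {real_months_needed:.0f} more months")   # 24
```

[On these numbers the reported rate says the project finishes exactly on deadline, while the true fix rate puts completion a full year past it - the slippage Milne's original post argues for.]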

(Camp from earlier post) I have found utility 10Qs which did note the rate of percentage of systems needing fixing versus those that don't - in their Assessment Phase.

(Milne) They are in the wild minority.

[Again, Bonnie, can you quantify this for us?]

BIG DOG GOES BOW-WOW IN BRACKETS

[My take on why this entire thread matters: compliancy percentages are the whole ball-game in the media charade about Y2K. "Bozo Inc is 80% compliant and says not to worry, they're confident." Yawn. "Taking all agencies of the Federal government as a whole, they are 73.454% compliant. No need to prepare. Go to sleep." Yawn. Yawn. I'm beginning to get drowsy...

Bonnie's work has been awesome. Cowles' too. I'm relying on it, for goodness sake: they'd better be right! Milne's post is an extension of that work, not a challenge, rebuttal or flame of it. Or am I out to lunch? It helps us understand even more clearly why the NERC report doesn't hold water. And, more generally, it helps us understand that compliancy percentages are largely a shell game. That's why Milne dropped testing out of the picture, though testing is where the compliancy story will be for the post-Y2K historians.

Some companies (oh, how few, how few) use serious metrics because they understand that accurate numbers are an extraordinary competitive advantage. Capers Jones has built a career on trying to get executive management (not IT mgmt, execs) to understand this. They understand it conceptually but refuse to require it of IT because there is a major front-loaded cost in collecting metrics before the huge payoff down the road shows up. And, yes, lots of IT geeks resist it (let's be honest) because it's a heck of a lot easier NOT to be measured ("quality code coming right up, boss... yeah, sure"). Try getting code inspections accepted by an IT organization!

Bonnie must use the numbers she has. So must Milne. So must we all. If I can do it without throwing up, I'm going to try to write a short, serious paper on the whole compliancy game. Tens of millions of fellow citizens are failing to prepare because of the compliancy charade.

Help me with those numbers, Bonnie.]

-- BigDog (BigDog@duffer.com), January 16, 1999.


Bonnie has signed off on EUY2K with respect to this discussion, which is her right. But reading my last post here, I realize I might have given an inaccurate impression (what goes around always comes around)! While my personal description from experience of how we would assess, code, report, etc. is accurate, it was based on work in the '80s/'90s prior to Y2K. I have not been involved directly in Y2K remediation itself, as that example may have seemed to imply. But as already posted (and again see Yourdon and, alas, evidence everywhere), there is no reason to believe Y2K remediation projects are following a different track than IT projects over the decades.

-- BigDog (BigDog@duffer.com), January 17, 1999.

