Dale Way's Response

greenspun.com : LUSENET : TimeBomb 2000 (Y2000) : One Thread

Herewith is Dale Way's generous invited response to some comments about his original essay:
Date: Thu, 04 Nov 1999 14:13:57 -0800
From: "Dale W. Way"
Subject: Re: your response...?

I don't have a lot of time, but here are my comments. Hope this helps. Dale W. Way
> Your response to the comment below would be most interesting.
> (If you do respond, please post to the thread URL below...)
>
> THANKS for a really excellent article!
>
> ---------------------------------------------------------------------
>
> http://www.greenspun.com/bboard/q-and-a-fetch-msg.tcl?msg_id=001fqh
>
> Another comment on Way's statement, and some comments on comments:
>
> 1)"If an organization goes off half-cocked, without complete,
> detailed knowledge of how its system of systems works altogether in
> all normal and possible abnormal situations, as the vast majority of
> remediators have done, yet make wholesale changes as if it did have
> that knowledge, they are doomed to failure..."
>
> This is the central point of Way's essay, and I disagree with it
> totally. For starters, "complete, detailed knowledge" of the workings
> of the macro "system of systems" in all possible situations is
> impossible.

I don't know how he knows this, but his attitude is most eloquent in explaining why few have ever tried to gain this knowledge. So many things in the past were thought to be impossible simply because nobody bothered to really think creatively about how we might go about it other than in traditional ways. This is not impossible. It just hasn't been done yet. But it will be, although not with the Independence Fallacy hanging around this guy's neck like a yoke. (See next point.) Many ideas and technologies are floating around out there that could be brought to bear.
> That Way correctly dismisses the concept of "compliance"
> (which is, after all, complete, detailed knowledge of the workings of
> INDIVIDUAL COMPONENTS in a system in all possible situations), and
> then demands the same understanding of the macro system is
> inexplicable.

I also don't know where he gets this definition of compliance from. What I see out there most often is a general statement of "correctly processing dates" and then some rather minimal list of test scenarios/criteria for which this is known to be true. This falls rather short of "complete, detailed knowledge of the workings."
Yet in his confusion this person missed a stark fact of reality: if you make changes anywhere in a large array of interlinked, active, decision-making components without having complete, detailed knowledge of the workings of the whole, you are very very likely to inject something in there that upsets an existing assumption one component has about others and cause some disruption to other parts. He may wish it was true that you don't need to know these things, or he may say it is impossible to know it, but that does not make the requirement go away. The only alternative to having this knowledge beforehand is to get it afterhand, and that is called integration or end-to-end testing. That is what software remediators have been doing -- pushing this part of the problem out of their domain into testing, for which we have neither the infrastructure nor the time.
> Secondly, I don't see how Y2K remediation fits the description
> "wholesale changes". Individual Y2K code changes are, by definition,
> trivial. Further, all the collective Y2K changes are aimed at making
> the system component work EXACTLY LIKE IT DOES TODAY. Systems are not
> being redesigned - I don't think this qualifies as "wholesale change"
> by anyone's definition.

"Wholesale" says nothing about the extent of any ONE change, trivial or not. It speaks to the ubiquity of changes, no matter how small each one may be. If this guy can assure that every component "works" "EXACTLY" like it does today, with no change in the format or meaning of any data element, he would be right. But how can you resolve the ambiguity of the missing century digits in a two-digit year system and have it behave EXACTLY like it did before your resolution without changes? Something had to change somewhere, or most likely, in many places. Change the date format and you have to adjust every piece of software that talks to that data (and theoretically every piece of software THAT software talks to, and so on) to make sure they properly interpret, for all of their uses, that new data format. Change the logic in a program to put in a pivot-point sliding window to retain the two-digit format BUT CHANGE THE MEANING of the data values, which is what a windowing "solution" does, and you have to make sure EVERYWHERE the RESULT of that logic goes, that meaning is correctly and consistently maintained. This is unpleasant but true. Anything less and you are opening yourself up to sucker-punch errors popping up all over the place.
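[Editor's illustration] The pivot-point sliding window Way describes can be made concrete with a minimal Python sketch. The pivot value of 50 is an assumption for illustration only, not taken from any particular remediated system:

```python
def expand_year(yy: int, pivot: int = 50) -> int:
    """Sliding-window expansion of a two-digit year.

    Values at or above the pivot are read as 19xx; values below it
    are read as 20xx. The pivot (50 here) is illustrative only.
    """
    return 1900 + yy if yy >= pivot else 2000 + yy

# The stored value keeps its two-digit format, but its interpretation
# now depends entirely on the window logic:
print(expand_year(0))    # interpreted as 2000
print(expand_year(99))   # interpreted as 1999
```

Every consumer of that two-digit field must apply the same window, which is exactly the ubiquity-of-change point being made above.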
> 2) "I believe that the essay stated that we are looking at the
> problem ONLY from the 'let's fix the hardware' perspective."

This person cannot read. That's a huge point of mine: Y2K is about software, the hardware is not really a problem. (Something many people who bought new "compliant" computer hardware are going to be upset to discover.)
> Well, I'd say it's 'let's fix the software', but OK.
>
> "However, if you give your statement about 20 seconds of thought, you
> will realize that a lot of organizations are not doing interface
> testing. (Remember that AT&T tested their stuff in isolation, saying
> it would be too difficult and time consuming to test with all the
> third party add-on folks.)"
>
> Outside interface "testing" is another practical impossibility for
> most organizations. But nobody's waiting until December 31 to field
> their remediated software - ours has been in the field for over a
> year. That's what I mean when I say that interface testing is going
> on now - if an interface gets screwed up, it'll be evident long
> before rollover.

The "screw up" may be evident, but that does not mean the location of the screw up will be accurately determined and the proper "fix" that will take care of that problem, AND NOT CAUSE MORE SOMEPLACE ELSE, will be known. Of course, as many things on the other side of the interface will likely be changing at the same time as they find their own problems, this gets really tricky when time is so short; WHICH IS WHY TRADITIONAL COMPLIANCE-BASED REMEDIATION WILL FAIL IN MANY CASES (but not all).
> 3) "If by "independently" you mean that the two sides of the
> interface do not need to come to explicit agreement on how to
> represent the year 2000, that I would debate. Adding the treatment of
> the year 2000 to an interface is a change to that interface, even if
> this changes neither field size nor type. How to treat the year 2000
> has to be explicitly discussed because there are several options,
> such as doing nothing (i.e., fix on failure), using "00" to represent
> the year 2000, or using 4-digit representation of year."

There has to be agreement because the two sides share data and are therefore dependent on each other in some way. There has to be explicit agreement of how dates will be represented. Whether that agreement is forced on each side or in the "interface" does not change the non-independent status of the two together. That he would "debate" this shows his confusion.
> An interface is the definition of the data layout for information
> being passed from one system to another. The "treatment" of "00" by a
> given system is immaterial. The interface only changes if you expand
> from 2 to 4 position year, or move the location of the date field
> within the layout. Do either of these without consulting with systems
> downstream, and you richly deserve the consequences.

Yes, an interface is passive; it only deals with layout. It is up to the processing logic on both sides to derive and interpret the semantic meaning of the contents of what passes through the interface. But "The "treatment" of "00" by a given system is immaterial" ONLY if it is consistent and compatible on both sides. From the point of view of the interface it may be immaterial, but not from the point of view of the reliable operation of the two sides together. Resolving incompatible date representations at the interface level requires an active element, called a bridge, but it can be done. His last sentence here shows he does understand the concept of interdependency and what can happen if you ignore it. But it illustrates the disjointedness of his mind that he cannot see it in terms of the limitations of component compliance.
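[Editor's illustration] The "bridge" Way mentions can be sketched as an active converter sitting at the interface. This is a hypothetical sketch only; the field format, the shared pivot of 50, and the function names are assumptions, not any real system's design:

```python
PIVOT = 50  # window pivot assumed to be agreed by both sides

def to_wire(year4: int) -> str:
    """Bridge, outbound: truncate a 4-digit internal year to the
    agreed 2-digit wire format."""
    return f"{year4 % 100:02d}"

def from_wire(yy: str) -> int:
    """Bridge, inbound: re-expand a 2-digit wire year using the
    agreed pivot."""
    y = int(yy)
    return 1900 + y if y >= PIVOT else 2000 + y

# The round trip preserves meaning only because both sides share
# the same explicit convention -- Way's non-independence point:
assert from_wire(to_wire(1999)) == 1999
assert from_wire(to_wire(2000)) == 2000
```

Note that the bridge is itself active logic that must be agreed on by both sides; the passive layout alone cannot resolve the century ambiguity.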
> 4) "He also clarified another concern about embedded systems. If
> there is no date calculation across the century boundary, the
> embeddeds will work fine. This causes a new worry on pipelines,
> natural gas transmission facilities, electrical power distribution
> systems where product flows per second will be measured to help
> calculate billing for the quantity of product delivered. This could be
> a one time problem for each system at the rollover and then it would
> be OK if there is no lookback into 1999."
>
> Well, embeddeds are hardly my strong suit, but I'll ask a couple of
> questions for anyone who might know:
>
> A) When you're talking about a flow measurement system, is there a
> reason why the embed's internal clock would need to be synched to
> EST, or whatever? It would seem to me to be unnecessary, plus a giant
> pain in the ass to adjust if it drifted off real time. And if it's
> not synched, then the embed's eventual rollover has no relation to
> Jan.1, right?

Synchronization is an issue in physical systems that typically have embedded devices and systems in them. But it is precisely because it is an issue that there are already means in place to keep things in synchronization when they fall out of it. These systems have had to be ready for this for a long time or they wouldn't be working in the field right now. They don't care what throws things out of synch, Y2K rollover or sun spots; they just put them back into synchronization, either automatically or, worst case, manually. But de-synchronizations happen all the time. Y2K is just a new, one-time source. As such it will have little impact there.
> B) Onetime rollover problems can be missed entirely by shutting down
> the system during rollover, which is exactly what a lot of companies
> are planning to do. Is there a reason I'm missing as to why this
> won't work?

No, it will work. That is another reason why Y2K is not about hardware, clocks or embedded devices, but software.

-- alan (foo@bar.com), November 05, 1999

Answers

I forget already. Who is Dale Way and why should I care what he thinks?

-- (hum@ho.hum), November 05, 1999.

Alan, can you format this to be more readable please.

-- OR (orwelliator@biosys.net), November 05, 1999.

No, but the SYSOP can. Once it re-caches, that should help a lot.

Number 3

-- I AM a number # 3 (sysops@re.us), November 05, 1999.


Sorry about the format problems. I did my best to insert the appropriate html "breaks"... cannot understand why this system does not understand two hard returns as blank line (paragraph break)...

-- alan (foo@bar.com), November 05, 1999.

Damage is done, no amount of postscripts can stop it!!!

Water under the bridge!! Nobody's listening except the community that's vigilant for this type of info from this type of source!!!!

-- D.B. (dciinc@aol.com), November 05, 1999.



Some good points, RC. I will invite Mr Way to respond.

-- alan (foo@bar.com), November 06, 1999.

for the record:

IEEE Y2K Chair Dale Way's original writeup http://ourworld.compuserve.com/homepages/roleigh_martin/end_game_critique.htm

IEEE Y2K Chairman's Personal, Pessimistic Take on Y2K http://www.greenspun.com/bboard/q-and-a-fetch-msg.tcl?msg_id=001fqh

Mr. Dale Way (IEEE)! Gary North and others are on to you Sir! http://www.greenspun.com/bboard/q-and-a-fetch-msg.tcl?msg_id=001hHC

Mr. Way's IEEE article and the myth of Y2k compliancy http://www.greenspun.com/bboard/q-and-a-fetch-msg.tcl?msg_id=001hM0

Interpretation of Dale Way's commentary on Yourdon's End Game article http://www.greenspun.com/bboard/q-and-a-fetch-msg.tcl?msg_id=001gqd

Dale Way's Response http://www.greenspun.com/bboard/q-and-a-fetch-msg.tcl?msg_id=001iAT

Dale Way http://www.greenspun.com/bboard/q-and-a-fetch-msg.tcl?msg_id=001iXs

-- alan (foo@bar.com), November 07, 1999.


I sent a copy of my post above to Mr. Way, and he generously took the time to knock me around a little bit more. Some good reading here if you can get past the cut-and-paste-athon structure, so I asked him if I could post it here. I may post a few short responses if I get a chance.

I will add this excellent point from Dale's preamble to his comments:

" Another major disrupter to communication is something a close colleague pointed out to me recently: "When knowledgeable, well-meaning people disagree, it is usually because they are talking about the same thing from different levels of abstraction." I tend to talk at the categorical, meta and meta-meta level of the situation, making generalities about categories derived from a coherent taxonomic framework that attempts to encompass the crisis from as close to its true width and depth as I can get. You and others, rightfully, are more grounded in the concrete instances of your personal or extended experience. Both are helpful, neither sufficient to deal with a concrete situation that also has to perform on a larger stage."

And awaaaay we go...

> > --from the original essay--
> > 1) "If an organization goes off half-cocked, without complete, detailed knowledge of how its system of systems works altogether in all normal and possible abnormal situations, as the vast majority of remediators have done, yet make wholesale changes as if it did have that knowledge, they are doomed to failure..."
> >
> > --from my response--
> > This is the central point of Way's essay, and I disagree with it totally. For starters, "complete, detailed knowledge" of the workings of the macro "system of systems" in all possible situations is impossible.
>
> --Way's response--
> "I don't know how he knows this, but his attitude is most eloquent in explaining why few have ever tried to gain this knowledge. So many things in the past were thought to be impossible simply because nobody bothered to really think creatively about how we might go about it other than in traditional ways. This is not impossible. It just hasn't been done yet. But it will be, although not with the Independence Fallacy hanging around this guy's neck like a yoke. (See next point.) Many ideas and technologies are floating around out there that could be brought to bear."
>
> --I say--
> Perhaps I should've said "practically" impossible. There are thousands of possible paths an input can take through a given program, millions through a given system, and who knows how many through a system of systems. Find a creative way to quantify all those paths. Bring ideas and technologies to bear to devise a method to test them all, in normal and abnormal conditions.

--Way responds again-- Why is it necessary to know all possible paths through a program? How about the actual paths that are used? Why is "the program" always the sun in this solar system, the beginning and end point of all discussion of this question? This is a very deep area, too deep to go into here, but I would say that a program-centric view, necessary for system CREATION, is not necessarily the right one for EVOLVING existing operational systems that today usually sit embedded in a web of other systems and shared databases. It is too weak to acquire the understanding necessary.

> Then convince my bosses to pay for it.

Your bosses pay every day with unreliable, very expensive to modify systems that do not give them the kind of information they need, in the way they need it, when they need it. Many bosses have addressed this by outsourcing. Many have continually paid for a succession of failed attempts to deal with these shortcomings cosmetically: Information Engineering, Decision Support Systems, Repositories, and most lately, Data Warehouses. All have bounced off the world of program-centric thought, with failed vendors, singed careers and unmet expectations littering the field. Then again, how many bosses actually understand the value of their systems and what they could be?

And why should "your boss" pay for it all himself? Where is academia with some new cognitive tools to address EXISTING OPERATIONAL SYSTEMS and not just creating NEW ones? Where is the equivalent of NASA and the Pentagon that funded the development of the semiconductor infrastructure, which could not possibly have been paid for by any private-sector "boss"? With all the investment made over 50 years into our application infrastructure, with all that depends on it, with all the additional insight that could be gotten out of it, and how recurring costs could be reduced if it were rationalized and consolidated and moved to a more modern foundation, don't you think it odd that the tools (cognitive and technological) we have to EVOLVE this massive thing have changed hardly a whit since the beginning of the Computer Age?

> Regarding the "Independence Fallacy":
> Let's see if I can pull myself up from my chair to type this... ugggh... damn yoke...
> From the beginning of my training as a programmer, I was taught top-down programming. Think modularly, they told me. A module should be an independent entity with one discrete input and one discrete output. A program is an independent collection of modules which perform a specific task. The entire concept of object-oriented programming is to break code down into independent objects with specific interfaces. Whole programming languages have been developed to facilitate the creation and use of independent objects.
>
> Independence is NOT a "Fallacy", it's a GOAL! A very desirable one, at that. Y2K could be seen as a test of how close we've come to achieving it.

--Way responds again-- And a very laudable goal it is. Has that "goal" been achieved across a large portion of the existing infrastructure? No. So what is the likelihood of success of basing an institution-wide program on an un-achieved goal, as if it had been achieved -- an unreality? Very little. Why that goal has not been achieved is interesting and, again, too deep to go into here. But it has something to do with discipline falling to expediency, with history and entrenched culture resisting truly new approaches, and with a lack of tools, all of which led to or allowed a breaching of the encapsulation of a "module," revealing, and enabling the use of, its inner workings, and forcing dependencies to exist between modules. This has gone on for decades. The concept of OO is great, exactly what is needed. But retrofitting the entire application software and data base to OO is something akin to what I was talking about above; an overly program-centric world view, like a Ptolemaic one, obscures an accurate understanding of that base and prevents any serious change.

> > --from my response--
> > That Way correctly dismisses the concept of "compliance" (which is, after all, complete, detailed knowledge of the workings of INDIVIDUAL COMPONENTS in a system in all possible situations), and then demands the same understanding of the macro system is inexplicable.
>
> --from Way's response--
> "I also don't know where he gets this definition of compliance from. What I see out there most often is a general statement of "correctly processing dates" and then some rather minimal list of test scenarios/criteria for which this is known to be true. This falls rather short of "complete, detailed knowledge of the workings.""
>
> --I say--
> I should have said compliance REQUIRES "complete, detailed knowledge", given the definition that says there will be NO date-related errors due to Y2K. Unless you've tested EVERY ONE of the millions of paths through your system's logic, you can't, or at least shouldn't (so say the corporate lawyers), claim 100% compliance. This sort of testing is also a "practical" impossibility.
>
> Now I realize that this definition of "compliant" doesn't match what you were going after in your essay ("Being compliant took on the more general meaning of "safe""). But regardless, it still seems to me you are saying that micro-compliance is meaningless, only macro-compliance matters. I am saying that without micro-compliance, macro-compliance is impossible. And this time I mean really, really impossible.

--Way responds again-- Why is this so hard to grasp? One CAN have macro-compliance without micro-compliance. It's basic systems theory 101; one part of the system can make up for another part of the system. I showed that conclusively (and Intel concurred) with the example of a non-compliant Real Time Clock (as all Pentium on-board clocks are) in a totally compliant macro system, because the OS takes the responsibility for century issues. This encapsulation concept can be, and has been successfully, used to reset the clock, intercept input data, convert dates back 28 years, process on old, unremediated, non-compliant code, intercept the data on the way out and reconvert back to current dates and voila! a compliant macro-system with totally non-compliant components. Just ask the U.S. Treasury Department. They have done this. Of course, this takes code (and programmers for that matter) out of its central role, so it is no wonder this simple, safe, cheap and very fast method is not considered by programmers charged with protecting their systems from Y2K.
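[Editor's illustration] The encapsulation Way describes works because shifting dates back by 28 years preserves both the leap-year pattern and the day of the week, so old two-digit code keeps computing correctly and never sees a year beyond 1999. A minimal sketch of the shift-at-the-boundary idea (the wrapper functions are illustrative assumptions, not the Treasury's actual implementation):

```python
from datetime import date

SHIFT = 28  # 28 years = 7 x 4-year leap cycle, so weekdays line up too

def encapsulate(d: date) -> date:
    """Shift a date back 28 years on the way into unremediated code."""
    return d.replace(year=d.year - SHIFT)

def decapsulate(d: date) -> date:
    """Shift the result forward 28 years on the way back out."""
    return d.replace(year=d.year + SHIFT)

# 2000 maps to 1972: same leap-year status, same weekday, and the
# old non-compliant code only ever handles pre-2000 dates.
d = date(2000, 2, 29)
inner = encapsulate(d)                    # 1972-02-29, also a leap day
assert inner.weekday() == d.weekday()     # weekday preserved
assert decapsulate(inner) == d            # round trip is lossless
```

The key design point is that all conversion happens at the system boundary, leaving the interior code, however non-compliant, untouched.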

> --Way's response goes on--
> "Yet in his confusion this person missed a stark fact of reality: if you make changes anywhere in a large array of interlinked, active, decision-making components without having complete, detailed knowledge of the workings of the whole, you are very very likely to inject something in there that upsets an existing assumption one component has about others and cause some disruption to other parts. He may wish it was true that you don't need to know these things, or he may say it is impossible to know it, but that does not make the requirement go away. The only alternative to having this knowledge beforehand is to get it afterhand, and that is called integration or end-to-end testing. That is what software remediators have been doing -- pushing this part of the problem out of their domain into testing, for which we have neither the infrastructure nor the time."
>
> --I say--
> Mr. Way, we make changes like this EVERY DAY. We make changes 1000 times more complex than any Y2K change EVERY DAY. And we do so armed only with a good GENERAL knowledge of how components IN OUR OWN SYSTEMS will be affected, and a vague idea of what some other company's systems will do with the results. And usually, end-to-end testing is not done. Sometimes we screw it up - sometimes our testers miss it - sometimes bad code makes it into production. But despite our worst efforts, the system stubbornly continues to work. What you're advocating here would mean paralysis - nothing would ever be changed.
> By the way, when did testing suddenly get pushed out of the "domain" of software remediation?

--Way responds again-- I am not advocating anything of the sort. I am merely pointing out that this approach pushes these issues out to the testing function, which you admit. But you miss a key point from the "wholesale" dispute. It is not the extent or complexity of any ONE change that matters -- because the change is still confined to a small corner of the whole, which remains unchanged, the trial-and-error approach to testing can work because there is general stability -- but the ubiquity, the total number and distribution of even small changes. Then the ratio of changed to unchanged is more nearly switched around and the trial-and-error test method TAKES TOO MUCH TIME TO RESOLVE because re-fixes are going on all over the place and there is insufficient stability to systematically squeeze out the direct, secondary and tertiary problems. Remember, for every 5 modifications a new error is introduced that was not there before. Do the math. How many changes, no matter how small, have there been to how many modules across the institution? Don't forget, years/dates are very ubiquitous in many applications, especially business/accounting/administrative computing (although more rare in physical and process control systems).

> > --I said--
> > Secondly, I don't see how Y2K remediation fits the description "wholesale changes". Individual Y2K code changes are, by definition, trivial. Further, all the collective Y2K changes are aimed at making the system component work EXACTLY LIKE IT DOES TODAY. Systems are not being redesigned - I don't think this qualifies as "wholesale change" by anyone's definition.
>
> --Way responds--
> ""Wholesale" says nothing about the extent of any ONE change, trivial or not. It speaks to the ubiquity of changes, no matter how small each one may be. If this guy can assure that every component "works" "EXACTLY" like it does today, with no change in the format or meaning of any data element, he would be right. But how can you resolve the ambiguity of the missing century digits in a two-digit year system and have it behave EXACTLY like it did before your resolution without changes? Something had to change somewhere, or most likely, in many places. Change the date format and you have to adjust every piece of software that talks to that data (and theoretically every piece of software THAT software talks to, and so on) to make sure they properly interpret, for all of their uses, that new data format. Change the logic in a program to put in a pivot-point sliding window to retain the two-digit format BUT CHANGE THE MEANING of the data values, which is what a windowing "solution" does, and you have to make sure EVERYWHERE the RESULT of that logic goes, that meaning is correctly and consistently maintained. This is unpleasant but true. Anything less and you are opening yourself up to sucker-punch errors popping up all over the place."
>
> --I say--
> What you're saying is that every place in the code that references a date field has to be changed in the same way in all programs in a given system, right? Thank you for defining "remediation".
> Windowing changes the MEANING of a data value? No, that data value still represents a 2-position year. All you have to allow for is: 1) 00 is greater than 99, 2) 00 minus 1 is 99, and 3) 99 plus 1 is 00 (slightly simplified, but not by much). Anything less and you ain't totally remediated. And you will get sucker-punched by the spots you missed - and you did miss a few spots. And more importantly, it can only be fixed by finishing your remediation! Eat your vegetables!

--Way responds again-- Does not change the meaning?? OK, two systems use different pivot points for their sliding-window remediation approach. The year value '22' leaves one system and goes to a shared database. Probably unknown to it, the second system reads that '22' year value from the database. What century does the second system assign to that year? It assigns the century based on ITS pivot point. But that may or may not be the MEANING assigned by the source system. If it isn't, the meaning has changed. Once again, the plus/minus 1 approach may work throughout your system, however you define 'system'. But when a date representation leaves your system, leaves your control, your assignment of meaning, it is at the mercy of the outside world, like a tadpole leaving the egg sac. That Independence Fallacy yoke must really be hurting by now ;-).
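[Editor's illustration] Way's '22' example is easy to demonstrate. Assuming, purely for illustration, that the source system windows with a pivot of 30 and the reading system with a pivot of 10:

```python
def expand_year(yy: int, pivot: int) -> int:
    """Sliding-window expansion: yy >= pivot means 19xx, else 20xx."""
    return 1900 + yy if yy >= pivot else 2000 + yy

raw = 22                               # two-digit value in the shared database
meant = expand_year(raw, pivot=30)     # source system means 2022
read = expand_year(raw, pivot=10)      # reader assigns 1922 instead

# Same bits in the database, two different meanings: a silent,
# century-sized disagreement with no format change anywhere.
assert meant == 2022 and read == 1922
```

Nothing at the interface level flags the mismatch, which is why Way insists the two sides must explicitly agree on meaning, not just layout.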

> > --somebody else said--
> > 2) "I believe that the essay stated that we are looking at the problem ONLY from the 'let's fix the hardware' perspective."
>
> --Mr. Way said--
> "This person cannot read. That's a huge point of mine: Y2K is about software, the hardware is not really a problem. (Something many people who bought new "compliant" computer hardware are going to be upset to discover.)"
>
> --I say--
> Spoken like a true hardware guy ;-)
>
> > --I said--
> > Well, I'd say it's 'let's fix the software', but OK.
> >
> > --the other guy said--
> > "However, if you give your statement about 20 seconds of thought, you will realize that a lot of organizations are not doing interface testing. (Remember that AT&T tested their stuff in isolation, saying it would be too difficult and time consuming to test with all the third party add-on folks.)"
> >
> > --I said--
> > Outside interface "testing" is another practical impossibility for most organizations. But nobody's waiting until December 31 to field their remediated software - ours has been in the field for over a year. That's what I mean when I say that interface testing is going on now - if an interface gets screwed up, it'll be evident long before rollover.
>
> --Mr. Way responds--
> "The "screw up" may be evident, but that does not mean the location of the screw up will be accurately determined and the proper "fix" that will take care of that problem, AND NOT CAUSE MORE SOMEPLACE ELSE, will be known. Of course, as many things on the other side of the interface will likely be changing at the same time as they find their own problems, this gets really tricky when time is so short; WHICH IS WHY TRADITIONAL COMPLIANCE-BASED REMEDIATION WILL FAIL IN MANY CASES (but not all)."
>
> --I say--
> It is our job to find and fix the screwup. That's what we get paid for. And I'll say it again: the interfaces are in production RIGHT NOW. No need to wait till January to see what'll happen.

--Way responds again-- It may be your job, and you may pursue it with dedication, and you may get paid for it. But what would you, or anyone, be willing to bet that you will be SUCCESSFUL, not just for your system, the one under your control and responsibility, but for your organization's systems and the business functions that rely on all of them, and your on-line partners' systems and your jointly managed functions?

The truth of the matter is that a lot of software will be put into production without testing at this level; problems will emerge; they will be chased down as best as possible while the system runs intermittently (the most common result, I believe); and if there are too many coming too fast, the system will have to be temporarily abandoned while whatever workarounds can be generated are applied, including de-committing some business function internally or externally to the organization. Then the organization will either wait for the window of vulnerability to Y2K to pass and restart the old software and data as best as possible (some data/transactions may be lost to it) and put it back in production, or the system will be abandoned permanently, to be replaced by a quick-and-dirty ad hoc hybrid of procedural and technical workarounds until a more fully functional system is developed or acquired. In the meantime, the net flow of transactions and information will generally slow down. This will have economic impacts but is unlikely to directly affect life and limb.

> >--another guy said, in response to an earlier post--
> > 3) "If by "independently" you mean that the two sides of the
> > interface do not need to come to explicit agreement on how to
> > represent the year 2000, that I would debate. Adding the treatment of
> > the year 2000 to an interface is a change to that interface, even if
> > this changes neither field size nor type. How to treat the year 2000
> > has to be explicitly discussed because there are several options,
> > such as doing nothing (i.e., fix on failure), using "00" to represent
> > the year 2000, or using 4-digit representation of year."
>
> --Way responds--
> "There has to be agreement because the two sides share data and are
> therefore dependent on each other in some way. There has to be
> explicit agreement on how dates will be represented. Whether that
> agreement is forced on each side or in the "interface" does not change
> the non-independent status of the two together. That he would "debate"
> this shows his confusion."
>
> --I say--
> There already is an explicit agreement on how dates will be
> represented. It's called an INTERFACE.
>
> >--I said earlier--
> > An interface is the definition of the data layout for information
> > being passed from one system to another. The "treatment" of "00" by
> > a given system is immaterial. The interface only changes if you
> > expand from 2 to 4 position year, or move the location of the date
> > field within the layout. Do either of these without consulting with
> > systems downstream, and you richly deserve the consequences.
>
> --Way responds--
> "Yes, an interface is passive; it only deals with layout. It is up to
> the processing logic on both sides to derive and interpret the
> semantic meaning of the contents of what passes through the interface.
> But 'The "treatment" of "00" by a given system is immaterial' ONLY if
> it is consistent and compatible on both sides. From the point of view
> of the interface it may be immaterial, but not from the point of view
> of the reliable operation of the two sides together. Resolving
> incompatible date representations at the interface level requires an
> active element, called a bridge, but it can be done. His last sentence
> here shows he does understand the concept of interdependency and what
> can happen if you ignore it. But it illustrates the disjointedness of
> his mind that he cannot see it in terms of the limitations of
> component compliance."
>
> --I say--
> I can best answer this with an example. Companies A, B and C each have
> independent computer systems which send data to the others' systems.
> They jointly decide that whatever Y2K changes they choose to make, the
> interfaces between their systems will remain as they are today. They
> then go off to do their own little independent remediation projects.
> Company A decides to expand their dates to four positions. Company B
> decides to use encapsulation. Company C decides to use windowing. Are
> they compatible, or are they screwed?
>
> Answer: they are compatible. Because they decided not to change the
> interface, Company A will lop off the two extra year positions before
> it sends data over the interface, and Company B will un-encapsulate
> the year field before it sends data over the interface.
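The Company A/B/C scenario above can be sketched in a few lines. This is a hypothetical illustration, not code from any real remediation tool: the function names, the 28-year encapsulation offset (a common choice in the period because it preserves day-of-week), and the windowing pivot of 50 are all assumptions made here for the example. The point is only that three different internal fixes can coexist behind one unchanged 2-digit interface.

```python
PIVOT = 50  # assumed windowing pivot: 00-49 -> 2000s, 50-99 -> 1900s

def to_interface_from_4digit(year: int) -> str:
    """Company A stores 4-digit years internally; lop off the
    century before the value crosses the interface."""
    return f"{year % 100:02d}"

def to_interface_from_encapsulated(stored_year: int) -> str:
    """Company B encapsulates by storing years shifted back 28
    years; undo the shift, then truncate to two digits."""
    return f"{(stored_year + 28) % 100:02d}"

def from_interface_windowed(yy: str) -> int:
    """Company C interprets incoming 2-digit years by windowing
    around the pivot."""
    y = int(yy)
    return 2000 + y if y < PIVOT else 1900 + y

# Company A's 2001, and Company B's stored 1973 (really 2001 minus
# the 28-year shift), both cross the wire as "01"; Company C then
# windows "01" back to 2001. All three remediations interoperate.
assert to_interface_from_4digit(2001) == "01"
assert to_interface_from_encapsulated(1973) == "01"
assert from_interface_windowed("01") == 2001
```

The design point mirrors the post: the interface layout itself stays passive and unchanged, and each party's conversion logic (the "bridge" in Way's terms) lives entirely on its own side of the boundary.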

--Way responds again--This is exactly what should happen and will most likely happen at MOST FORMAL interfaces. But what about informal interfaces, shared databases, and the hand-carried diskettes still used for data exchange (I know some dirty secrets about this in the U.S. defense establishment)? There are many databases shared among many different organizations. What about the "dates" you did not detect at the interface? The data elements that are not dates, but have dates encoded within them (like product expiration codes)? Just more problems to take care of when they emerge, if possible. (See point above.)
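Way's point about dates hidden inside non-date fields can be made concrete with a small sketch. The product-code format, field positions, and function name below are invented for illustration; the real lesson is that a remediation scan looking only for fields *typed* as dates would sail right past a field like this.

```python
PIVOT = 50  # assumed windowing pivot: 00-49 -> 2000s, 50-99 -> 1900s

def expiry_from_product_code(code: str) -> tuple:
    """Extract (year, month) from a hypothetical product code like
    'AB9912X', where positions 2-5 hold a 2-digit expiration year
    and month embedded in an otherwise alphanumeric field."""
    yy, mm = int(code[2:4]), int(code[4:6])
    year = 2000 + yy if yy < PIVOT else 1900 + yy
    return (year, mm)

# Any logic that instead compared the raw "99" and "00" substrings
# as numbers would conclude the 1999 stock outlives the 2000 stock.
assert expiry_from_product_code("AB9912X") == (1999, 12)
assert expiry_from_product_code("AB0001X") == (2000, 1)
```

Nothing about the field's declared type says "date," which is exactly why, as Way argues, such problems tend to surface only when they emerge in production.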

> P.S. Thanks for the clarification on embedded systems. It kind of
> verified what I thought - but again, it's not an area I know much
> about. In fact, when the subject comes up, I start getting fidgety.
>
> Randy Christopher - Large Programmer At Large

--Way says now-- Thanks, you are really welcome. This has been fun and I hope informative. No hard feelings here.

Dale W. Way
Chairman, Year 2000 Technical Information Focus Group
Technical Activities Board
The Institute of Electrical and Electronics Engineers (IEEE)

-- RC (randyxpher@aol.com), November 08, 1999.


I'd like to emphasize a point made in here: Interface Control Documents are *NOT* necessarily required, and even where they are required, they are not necessarily kept up to date.

A truism in the software industry at large is that if your only tool for design or requirements is 'Interleaf' or 'Frame' or 'Word', then it will be used *one time* to prepare a 'dust catcher'. And this is exactly what is done in all too many places.

This sort of thing may not happen in your particular industry, but it is commonplace elsewhere.

-- just another (another@engineer.com), November 08, 1999.

