stats on how many lines of code still need to be compliant?


Approximately how many lines of code still need to be made compliant? How many major corporations (including government agencies) are still at the awareness stage? Why is it not likely that companies will be compliant in time? I need details! Thanks!

-- Jen (emweinert@yahoo.com), April 16, 1998

Answers

Jen, I would suggest you read the book for which this forum is named. Many of your questions will be answered there. If you really want small details on lines of code, the numbers of different languages, and the distribution of such systems around the planet, go to Capers Jones' site (www.spr.com) and download "The Global Economic Impact of the Year 2000 Software Problem". It is from January 1997, so some forward extrapolation will be needed to answer your question. Furthermore, even Jones admits that much of the data has a large potential margin of error. However, I don't know of any other similar compilation. Also, you can check www.cio.fed.gov and look at all of their y2k documents. At last report, several federal agencies are projected to complete their y2k projects between 2003 and 2009 (no tongue in cheek).

-- P. Larson (ptrades@earthlink.net), April 17, 1998.

Jen, if I could clarify one point: "lines of code" is often used -- erroneously -- as THE measure of how much work is involved in making a system y2k-compliant.

The number of lines of code is only part of the story, since that figure is often taken to mean only the lines that have to be changed. What usually gets overlooked is that ALL programs in ALL systems have to be tested.

On the project I work on, there are about 10,000 legacy (mainframe) programs, plus an equivalent number in client-server systems. I work on the legacy side, where "only" about a third of the legacy programs need changes, but EVERYTHING has to be tested for compliance (how else would we know they're compliant?). Since testing represents roughly half the total project budget, well, you get the idea.
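To give a flavour of what "tested for compliance" means, here is a rough sketch -- program name, dates, and layout are purely illustrative, not our actual test bed -- of the kind of boundary dates that have to be pushed through every date routine in every program, whether or not that program was changed:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. Y2KDATES.
      * Illustrative only: the boundary dates a compliance test has to
      * feed through every program, not just 2000-01-01 -- the 9/9/99
      * "never expires" convention, the century rollover, the leap day,
      * and the first year-ends on either side of the century.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  WS-TEST-DATES.
           05  FILLER  PIC 9(8) VALUE 19990909.
           05  FILLER  PIC 9(8) VALUE 19991231.
           05  FILLER  PIC 9(8) VALUE 20000101.
           05  FILLER  PIC 9(8) VALUE 20000229.
           05  FILLER  PIC 9(8) VALUE 20001231.
           05  FILLER  PIC 9(8) VALUE 20010101.
       01  WS-TEST-TABLE REDEFINES WS-TEST-DATES.
           05  WS-DATE PIC 9(8) OCCURS 6 TIMES.
       01  WS-IX       PIC 9 VALUE 0.
       PROCEDURE DIVISION.
       MAIN-PARA.
           PERFORM VARYING WS-IX FROM 1 BY 1 UNTIL WS-IX > 6
               DISPLAY 'FEED TEST DATE ' WS-DATE (WS-IX)
                       ' THROUGH EVERY DATE ROUTINE'
           END-PERFORM
           STOP RUN.

Multiply that by every program, every file, and every interface, and the testing half of the budget starts to make sense.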

Hope I haven't discouraged you even further.

Steve

-- steve francis (sfrancis@sympatico.ca), April 17, 1998.


To further develop Steve Francis's answer (and others'), I see little discussion or measurement of the changes to underlying data structures, catalogs, and execution JCL (for IBM systems).

Code and interfaces are only part of the problem. Where explicit date changes (vs. implicit changes) are made, every record in the underlying files and databases must be converted to accommodate the increased field length. Record lengths, block sizes, etc. have to be changed to accommodate the longer records (in most cases). File definitions and working-storage maps have to change to accommodate the increased field and record lengths. Execution JCL 'DD' statements have to change as well.
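To make that concrete, here is a rough sketch of a one-shot conversion for a single file. The file names, field names, and the 50-year window are invented for illustration, not anyone's actual system; the point is that widening one 6-digit date field to 8 digits is exactly what drags along the record length, block size, working-storage, and JCL changes just described:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. EXPAND01.
      * Illustrative one-shot conversion: copy every record from an old
      * 80-byte layout to a new 82-byte layout, expanding a 6-digit
      * YYMMDD date to 8-digit CCYYMMDD with a fixed 1950-2049 window.
      * The FD RECORD CONTAINS clauses below, every program's
      * working-storage copy of this layout, and the LRECL/BLKSIZE on
      * the JCL DD statements for OLDDD and NEWDD all have to change
      * together, and every record already on disk or tape has to pass
      * through a job like this one.
       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
           SELECT OLD-FILE ASSIGN TO OLDDD.
           SELECT NEW-FILE ASSIGN TO NEWDD.
       DATA DIVISION.
       FILE SECTION.
       FD  OLD-FILE RECORD CONTAINS 80 CHARACTERS.
       01  OLD-REC.
           05  OLD-CUST-ID      PIC X(8).
           05  OLD-PAY-DATE     PIC 9(6).
           05  OLD-PAY-DATE-R REDEFINES OLD-PAY-DATE.
               10  OLD-PAY-YY   PIC 99.
               10  FILLER       PIC 9(4).
           05  OLD-CUST-DATA    PIC X(66).
       FD  NEW-FILE RECORD CONTAINS 82 CHARACTERS.
       01  NEW-REC.
           05  NEW-CUST-ID      PIC X(8).
           05  NEW-PAY-DATE     PIC 9(8).
           05  NEW-CUST-DATA    PIC X(66).
       WORKING-STORAGE SECTION.
       01  WS-EOF               PIC X VALUE 'N'.
       PROCEDURE DIVISION.
       MAIN-PARA.
           OPEN INPUT OLD-FILE OUTPUT NEW-FILE
           PERFORM UNTIL WS-EOF = 'Y'
               READ OLD-FILE
                   AT END MOVE 'Y' TO WS-EOF
                   NOT AT END PERFORM CONVERT-REC
               END-READ
           END-PERFORM
           CLOSE OLD-FILE NEW-FILE
           STOP RUN.
       CONVERT-REC.
           MOVE OLD-CUST-ID   TO NEW-CUST-ID
           MOVE OLD-CUST-DATA TO NEW-CUST-DATA
      * Window the two-digit year: 00-49 becomes 20xx, 50-99 stays 19xx.
           IF OLD-PAY-YY < 50
               COMPUTE NEW-PAY-DATE = 20000000 + OLD-PAY-DATE
           ELSE
               COMPUTE NEW-PAY-DATE = 19000000 + OLD-PAY-DATE
           END-IF
           WRITE NEW-REC.

Now multiply that by every date field in every file, and remember that the old and new copies of each file have to coexist somewhere while the conversion and parallel testing run.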

Accomplishing this in production environments running at 90% capacity, with tape files and disk storage in short supply, is seldom discussed. Further, since most legacy systems are finely tuned for optimal performance, changing record lengths changes I/O parameters, which may degrade production jobstreams and TP response time, impacting day-end, month-end, etc., with associated productivity delays for all the humans waiting to start the next day's work.

"Lines of code" doesn't begin to measure "compliance" or "remediation". A better question is "what % of your online and batch production systems are remediated and load tested in a production environment with performance + or - 5% of current process parameters?"

-- J. Scott Curran (JSCurran@aol.com), April 23, 1998.

