residual errors and "Infomagic"

greenspun.com : LUSENET : TimeBomb 2000 (Y2000) : One Thread

For what it's worth . . . (and be forewarned, this is a very long post!)

Once more, with feeling, on the subject of "residual errors" (Y2K errors not detected by testing, or glitches inadvertently introduced while "correcting" Y2K errors):

Infomagic (now apparently identified as Ivan Mingham) is right to remind us of the possibly severe consequences of residual errors. For that matter, Mr. Yourdon pointed out the threat of residual errors in the first edition of "Time Bomb 2000," raised the topic again in his August 26th interview on the Art Bell "Coast to Coast" radio program, and, of course, raised the subject yet once more in his excellent, long "Deja Vu" article recently. Infomagic, of course, takes these concerns much further than does Mr. Yourdon.

Where Infomagic may be challenged is in his basic assumptions and projections, and in several instances of faulty logic. One does wish that he'd forgo unsubstantiated, melodramatic utterances and stop trying to argue from analogy (a logical fallacy). It's nice that he has read Paul Ehrlich, say, but he should also remember that human beings aren't Kaibab mule deer. When he makes sweeping statements such as "the outcome of this situation might well be the total extinction of the human race," he does a disservice to his own arguments. He also apparently forgets that half of the current world population has never even used a telephone, let alone formed any direct dependency upon automated systems. (Yes, I know the arguments about hybrid seeds, threats to transportation and distribution systems, etc., but I still doubt that a rice farmer in the backwaters of, say, China, is likely to be seriously impacted by any Y2K disaster.)

Let us begin with a few basics. Mr. Yourdon has noted that, on average, a good company with reasonably rigorous testing methods might expect to reduce its error rate, post testing, to, say, one bug per 10,000 lines of total code. Yourdon adds that this is a bell curve phenomenon, obviously: less conscientious companies might have one bug every 100 lines (!), while really good companies (Lucent, Motorola, etc.) might have a bug every million lines or so. These sorts of figures are widely available in an industry that almost obsessively keeps records on itself; the software metrics "guru," of course, is Capers Jones, founder of SPR.

One of Infomagic's most noticeable weaknesses (i.e., assumptions) is that he doesn't try to work in any documented, rigorous way from such available software metrics to his statements that, say, there's a 5% or 1% or whatever chance that a given computer system will suffer a critical failure. Obviously, there is no such thing as the "average" program or application, and there's only so much that one can do with software metrics anyway, but Infomagic does leave the reader with the feeling that numbers are often plucked out of the air (a common pastime these days). He might want to plow through some of Jones's research and try again. I will grant Infomagic that, given the shortage of qualified programmers and the rush to complete many Y2K projects, we can expect error rates in excess of those normally encountered. In a recent article, Mr. Yourdon notes that three outside firms checking remediated and tested programs (in Y2K projects) have typically found residual (that is, post testing) errors on the order of 450-900 per million lines of code, which is an error rate 4.5X to 9X higher than the one bug per 10,000 lines statistic cited above. One can immediately think of at least three "high profile" cases of organizations running into problems because of residual errors. Last spring the "Washington Post" reported that BankAmerica, which was having to contract outside help in addition to its 1,000 in-house programmers, was finding that programs "corrected" by such outside programmers often crashed. In July, Samsonite (the luggage co.) was told by its computer consultants that its systems were fully corrected and ready to go; programs then crashed frequently, costing Samsonite millions of dollars in lost revenue in the third quarter. And finally, more recently, the FAA has had to take out its new ATC software (designed to replace Y2K-plagued software) at its towers in Chicago, Dallas-Ft. Worth, and elsewhere because the programs kept crashing.
(At Chicago O'Hare, one crash took down the entire ATC system for seven hours--an episode described by a controller as "sheer terror"--necessitating reliance upon the ATC at Midway Airport, 20 miles away, which created "blind spots.")
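For what it's worth, the 4.5X-9X comparison above is easy to check; here's a quick sketch in Python, using only the figures already cited (one bug per 10,000 lines as the baseline, 450-900 residual errors per million lines as the audited rates):

```python
# Baseline: one residual bug per 10,000 lines = 100 bugs per million lines.
baseline_per_million = 1_000_000 / 10_000

# Residual (post-testing) error rates reported by the outside auditing firms.
observed_low, observed_high = 450, 900

print(observed_low / baseline_per_million)   # 4.5 (times the baseline)
print(observed_high / baseline_per_million)  # 9.0 (times the baseline)
```

So the multipliers in Yourdon's figures do follow directly from the baseline statistic.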

So Infomagic is obviously right in one very basic point: you can't rush programming (whether remediation or new/replacement software) unless you want big trouble, and it's clear that at least some, perhaps many, organizations ARE rushing such work. A possible reason that you're hearing "miracle" stories these days of big companies and federal agencies doing, say, 14 months' work in 6 months' time is that such organizations may be relying heavily on automated program scanners to find and correct date fields. Unfortunately, as one CIO of an international investment bank lamented to Ed Yardeni at the Basle BIS conference last spring, these scanners often "leak like sieves." They vary widely in effectiveness according to type, language, and application, of course, but the few numbers I've seen suggest that even under good conditions they might miss as much as 5-20% of Y2K-related date fields. (As we all know, many programmers encoded and embedded dates in some very quirky ways that the scanners just aren't programmed to find; missing documentation and even missing source code in some instances multiply the difficulties.) That's why the testing phase of the typical Y2K project has grown from 30% to 60-70% of the entire project. But, as Yourdon's recent articles suggest, testing procedures and degree of rigor vary widely from organization to organization; and one has the unpleasant suspicion that in at least some cases the testing methodology may not be up to snuff. Even Greenspan, a former programmer himself, has noted that you can spend lots of time supposedly getting everything right, only to have it unravel on you when it's put into real-time operation. (He might have been thinking about the poor job federal agencies seem to be doing thus far on EDI, or electronic data interchange.) I doubt that we're going to see the kind of catastrophic "unraveling" that Infomagic expects, but the potential for major trouble is clearly there.
If we also note the increasing reliance upon type (sample) testing and vendor compliance statements for "assessing" embedded systems, especially by utility companies behind schedule, the sense of unease grows. Both type testing and vendor statements have been known to be inaccurate. (Type testing is when you automatically assume that because one embedded system checks out Y2K OK, all "identical" systems made by the same company would also check out fine and so need not be tested individually. Alas, chips for embedded systems have been made by literally thousands of small chip makers around the world, and companies making embedded systems often shop around for the chips they install--thus, two apparently identical systems may have some individual chips made by different chip makers.)

Infomagic could have bolstered his case by noting a Cap Gemini survey a few months ago that found that 50% of all American businesses intend to do NO testing of remediated or replacement software. Granted, most of these slackers are SMEs--but let us remember that even a "medium-sized" enterprise is defined as having 2,000 to 20,000 employees. Tiny it ain't. One expects, though, that most of the laggards will be the truly small businesses, of which there are roughly 23 million. You can do the math, given that the total population of the U.S. is about 270 million people: most of these businesses must be, quite literally, mom-and-pop outfits or one-person enterprises.
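For those who want to do that math explicitly, here's a rough Python sketch (the 23 million and 270 million figures are the approximations given above):

```python
businesses = 23_000_000   # approximate number of U.S. small businesses
population = 270_000_000  # approximate total U.S. population

# Roughly one business per dozen people, counting children and retirees,
# so most of these businesses can only be very small outfits indeed.
people_per_business = population / businesses
print(round(people_per_business, 1))  # about 11.7
```

With fewer than twelve people per business on average, the bulk of them must indeed be mom-and-pop or one-person shops.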

In fact, many of these tiny outfits plan to do absolutely nothing before 2000 but instead adopt an FOF ("fix on failure") strategy. Given that the U.S. has a programmer shortage of at least 300,000, and given the increasing scramble for limited programming resources, one may reasonably expect that some of these small businesses will find that FOF = DOA and so go belly up. Capers Jones has projected that the business bankruptcy rate from Y2K will be 5-7%. Infomagic, in his latest article, claims that the FDIC is quietly warning member banks to expect up to a 15% failure rate in their clients (chiefly small businesses, one presumes); I've not seen independent verification of that claim and would appreciate any documentation of it, though I suppose that as an "outside" or high estimate it's not out of the realm of possibility. By the way, the Gartner Group last spring projected a 20% business failure rate worldwide, though I suspect that Gartner has toned that estimate down a bit recently--they've toned down most everything else! Well, back to the U.S. of A.: those projected business failure rates sound very bad, but please remember that according to the Small Business Administration most U.S. small businesses fail within a few years of start-up anyway. Infomagic is correct to point to likely disruptions in supply chains (e.g., vendors to GM), though again he probably exaggerates the problem. Even granting an unusually high number of small business failures within a compressed time frame, there is more resiliency in the overall business "system" than Infomagic grants (a point made by Webster). Indeed, if we operated under Infomagic's assumptions, we should have been plunged into economic and social chaos many years ago, given that many small businesses fail each year. The American economy is considerably more fluid than Infomagic realizes (though not nearly as invulnerable to Y2K or to the global economic crisis as the American public blissfully assumes!).

Infomagic also unduly compresses the time frame for Y2K-related failures. Most failures won't occur on or near 1/1/2000. Lou Marcoccio, a Gartner research director, recently estimated that only 8% of total failures will occur right at the start of 2000, and while that sounds like another of those numbers snatched out of the air, it's true that most failures will be spread out over weeks and even months. (Indeed, in the case of many embedded systems, chips measuring "absolute time" but not performing strict date-dependent functions were usually not calibrated with the Gregorian calendar and so their failures may occur months, even years, after 1/1/2000; Dr. Frautschi has found that some failures will occur as late as 2006, and Qantas Airlines puts some failure dates as late as 2010.) Companies putting "corrected" software back into real-time operation in 1998 and 1999 (again, think of Samsonite) will be dealing with many of their problems long before 2000 arrives. Others won't be hitting their most critical residual errors until quite some time after 1/1/2000. It's still a shorter time frame than one would like--after all, there's a reason why an individual company, let alone the world, doesn't normally try to remediate all of its software systems within a period of months or even a few years!--but it's not quite the asteroid impact that Infomagic makes it out to be. On the other hand, it isn't likely to be the pollyannish scenario envisaged by the Gartner Group, with its claim that 90% of all detected errors will be corrected within three days! Gartner doesn't seem to have considered sufficiently the vendor supply problems and severe programmer shortages. If, for instance, very many power companies suddenly discover in January 2000 (or even in late 1999) that they need a particular embedded system to replace a Y2K-faulty one, they might temporarily overwhelm their vendors, who aren't set up to accommodate such a sudden surge in demand. 
(Roleigh Martin has called attention to this threat.)

The chief appeal of Infomagic's arguments has been in their illusion of "mathematical proof." Granted, the form of his basic equations is correct for determining the probability of an outcome occurring (in this case, the odds that at least one system will fail) given set probabilities of independent events (i.e., the odds that any given system might fail). These are the same sorts of equations that one uses for determining, say, the odds that a "three" will be rolled in four rolls of a die (each roll is an independent event, uninfluenced by the other rolls). Indeed, here again Infomagic could have actually bolstered his argument by noting that in many cases the events won't be independent: a failure in one system is likely to impact one or more other systems, making the final result perhaps worse than his equations (based on independent events, remember) would suggest.
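The form of that "at least one" calculation is standard, and easy to illustrate. Here's a short Python sketch using the die example above, plus a hypothetical Infomagic-style figure (the 1%-per-system failure rate and 100-system count are illustrative assumptions, not documented numbers):

```python
def p_at_least_one(p_event: float, trials: int) -> float:
    """Probability that an event of probability p_event occurs
    at least once in `trials` independent trials."""
    return 1 - (1 - p_event) ** trials

# Odds of rolling at least one "three" in four rolls of a fair die:
print(round(p_at_least_one(1/6, 4), 3))     # 0.518

# Hypothetically: if each of 100 systems independently has a 1% chance
# of critical failure, the odds that at least one of them fails:
print(round(p_at_least_one(0.01, 100), 3))  # 0.634
```

The equations themselves are unobjectionable; it's the probabilities and system counts fed into them that Infomagic never documents.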

The difficulties, though, arise with his assumptions. Does he have any documentation for his notion that the "average" small business has 5 computer systems? Given how very small most such businesses are (see above), I would think that one or two systems would be more likely the case. Similarly, without documentation or industry statistics, I've no way of knowing whether the "average" medium-sized enterprise has 25 systems, or the average large company has 100, though I can easily imagine that the largest corporations have well in excess of 100 computer systems (given that GM has roughly two billion lines just of mainframe code!). If Infomagic has any documentation for these numbers, he should provide it.

More troubling is a glaring error in his "Devolutionary Spiral" article. In the early part of the article, he assumes 5, 25, and 100 systems per small, medium, and large enterprises, respectively, as noted above; then, later, when he goes on to write about supposedly mission-critical systems, he uses the same numbers! Obviously most systems are NOT mission-critical. The federal govt. has roughly 73,000 computer systems overall, of which (at last count) only 6,700 or so are considered "critical"--though, since this number was originally over 8,500, one suspects that systems are getting "declassified" (so to speak) for not entirely logical or honest reasons. But the point is that fewer than 10% of all federal systems are currently considered critical. In the private sector, the only number I've seen in this regard is from North, and I've no idea where he dug it up or how reliable it is: approximately 15% of business computer systems are critical, according to North. Well, if we do assume that roughly 10-15% of all systems are critical, then obviously, even granting the 5, 25, and 100 numbers above (for the total number of systems in small, medium, and large enterprises, respectively), only 1, 3-4, and 10-15 systems are critical in small, medium, and large enterprises, respectively. These changes would affect Infomagic's final numbers considerably.
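Applying that 10-15% critical fraction to Infomagic's own per-enterprise system counts gives the corrected figures (a sketch; the 5/25/100 counts are his undocumented assumptions, and I round up on the theory that a fraction of a system still counts as one system):

```python
import math

# Infomagic's assumed total systems per enterprise size (his figures):
systems = {"small": 5, "medium": 25, "large": 100}

for size, total in systems.items():
    # 10-15% of systems assumed critical; round up to whole systems.
    low = math.ceil(total * 10 / 100)
    high = math.ceil(total * 15 / 100)
    print(f"{size}: {low}-{high} critical out of {total} total")

# small: 1-1 critical out of 5 total
# medium: 3-4 critical out of 25 total
# large: 10-15 critical out of 100 total
```

Even on his own assumptions, then, the critical-system counts shrink by a factor of roughly seven to ten.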

Another problem, as suggested earlier, is that there is no documentation or derivation from industry software metrics to support his assumption that "at least" 1% of critical computer systems will suffer a residual error sufficiently severe to take down the business itself. Not all bugs are created equal; and although it's true enough that many Y2K-related glitches are of the "boundary condition" type noted by Infomagic in his latest article (i.e., resulting in at least temporary system crashes), it's also true that many are not and will result in relatively minor problems. Granted, here again Infomagic might reply that even if a glitch isn't sufficiently serious to crash an entire system, it might have eventually more insidious consequences, like a program logic error that gradually corrupts a database. (See Yourdon's recent articles.) That, in fact, is one reason that Deputy Def. Sec. John Hamre (the Pentagon's point man on Y2K) once said that he'd prefer that Y2K glitches cause outright system crashes instead of causing more insidious problems. When you're dealing with early warning systems and nuclear weapons, for instance, you don't want corrupted databases.

Anyway, the main point is that Infomagic tends to make blanket, melodramatic assumptions about the nature of Y2K glitches. He also tends to assume that ANY error in a "critical" system will be "terminal" for the business, whereas, in fact, even granting the problems posed by a compressed time frame and limited resources and a world gone Y2K "buggy," most errors aren't likely to kill a given business. Or, more generally, the most plausible scenario is that some businesses will indeed fail but that most won't--a scenario, in short, somewhere between that projected by Infomagic and that projected by his more pollyannish critics. Of course, just where "in between" the actual outcome will be, is an open question!

This has been an unconscionably long post; worse, it hasn't reached any clear conclusions or even estimates. All I've tried to do is suggest why the more melodramatic projections are probably wrong, while also acknowledging that residual errors are indeed a very serious concern. We have an outline now for the debate, an outline furnished by Jones, Yourdon, Infomagic, Webster, and others. What we need now are as many specific examples, statistics, and hard data, and as much documentation, as possible.

Ah, it's easy to wish.



-- Don Florence (dflorence@zianet.com), January 05, 1999

Answers

Mr. Florence,

I am enormously impressed by your analysis. It is broadly thoughtful, intelligently argued, and brilliant in its bird's eye view of the large picture.

I was one of those people who was deeply troubled by Infomagic's supposed mathematically proven scenario. I sincerely thank you for helping to dispel the doubts I had about Infomagic's grim outcome.

May we hear more from you in the future on this post?

Kudos, man.

Mary R. Seattle

-- Mary R. (Ribeyed@eskimo.com), January 06, 1999.


Don, it's refreshing to see such a well thought out post. Just a few points to consider:

* Not all Y2K failures are "equal". For instance, if indeed on or about 1/1/2000, the power grid goes down and stays down, that pretty much outweighs anything else.

* A lot of the Y2K impact will not be so much about failures, per se, but rather a loss of confidence on the part of people. Bank computers may "work", but if they produce enough screwy problems that get high visibility, then people will no longer trust them. Such loss of confidence opens the floodgates for all kinds of problems, including riots in the cities.

* The entire notion of what is or is not a "critical" system seems to be open to a lot of question. Obviously, the pressure to declare a system as not critical is high, so as to keep the number of systems that have to be remediated down. Many have wondered, after all the downsizing and rightsizing and re-engineering, etc., etc., that has taken place, how we can seemingly have so many non-critical systems still running around. Also, perhaps I am wrong here, but I was under the impression that Gary North generally considered such "critical" versus "non-critical" system division as largely a programmer's fallacy, one that if actually implemented, would bankrupt any business, due to the inherent interdependencies. (And the 15% that you quote from North sounds awfully low.)

* I think that Infomagic makes plausible arguments, but that everyone is at a loss to actually try to assign any kind of weight as to how probable they are. But I sure don't think they should be discounted because they cannot be "proven". I don't think that you can really "prove" much about Y2K impact, there is just too much complexity, which must also account for human reactions (e.g., confidence in bank computers).

-- Jack (jsprat@eld.net), January 06, 1999.

Very good! Don you da man. Now the only thing that still concerns me is the fact that Russia and China will suffer a 66% critical failure of all systems. This data comes from NBC NEWS, so it has some merit. It is my hope that we all make it come 2000. Keep the faith, and great job, Don!

-- Terry (taylort2@nevada.edu), January 06, 1999.

Very well written Don, and a pleasure to read.

As you pointed out with your assertion that "Infomagic also unduly compresses the time frame for Y2K-related failures," the time frame in which the bell curve will unravel is the key factor and wild card that will determine the severity of the social/economic impact. At what point is recovery still possible? At what point is a collapse inevitable? Is your more moderate view enough to make a difference to prevent an eventual social collapse? Would it still happen a la Infomagic, only slower?

On small businesses, you said "One expects, though, that most of the laggards will be the truly small businesses, of which there are roughly 23 million. You can do the math, given that the total population of the U.S. is about 270 million people: most of these businesses must be, quite literally, mom-and-pop outfits or one-person enterprises." This statement needs clarification.

I read on a government site that SME's are businesses employing 100 people or less. What is the true percentage of "mom and pop" and one person outfits? Are they "most" or half or
-- Chris (catsy@pond.com), January 06, 1999.


Don,

You bring up good points. Honestly, no one knows just how bad Y2K will be, simply because we don't know what a certain number of failures will mean in a highly interconnected system. It could be that things won't be all that bad, or it could be TEOTWAWKI. As someone else already said, it depends on whether that "certain number of failures" is in utilities or not.

I know you said you don't like using analogies, but the following link uses an excellent analogy to show that in what sectors failures take place makes all the difference in the world:

http://www.garynorth.com/y2k/detail_.cfm/2947

"ANALOGY: The Indianapolis 500 (Efficiency Isn't the Same as Capacity)"

-- Kevin (mixesmusic@worldnet.att.net), January 06, 1999.



Agreed that this is an outstanding post: clear thinking, excellent writing, etc. Kudos.

Agree also with jsprat's counterpoints -- confidence and psychological factors play a large role, impacting in turn the "objective" stuff.

Something that no one has addressed: in the case of very small, one-person or mom/pop outfits, IS there such a thing as a "critical" computer system? Are [most/any] of them doing anything that really *depends* in a critical way on a computer? OK, so you have to sit down and make out the invoices by hand... so what? Some late nights, etc. The logistics are no big deal, unlike with larger enterprises. Comments?

-- alan (aelewis@provide.net), January 06, 1999.


I can give you an example of a family-owned retail food business -- a mom-and-pop operated by my inlaws in a small midwestern town.

The teenagers operate the computer-based cash register, but they could probably go to solar-powered handheld calculators (or even pencil and paper if needed) to ring up sales.

Only about 20% of the sales come from refrigerated items -- if electricity goes down, those products must be pitched out by public health order.

Stocking of shelves is done by my brother-in-law, who still uses a clipboard to manually count and record the stock. Weekly, he does an estimate of sales and calculates items needed for the next order.

Then he gets on the TELEPHONE, and calls approximately 5-7 companies HALF WAY ACROSS THE CONTINENT on the East Coast, and places orders.

He waits about 10 days for the order to be delivered by TRUCK.

The order printout is from the wholesale company's COMPUTER, and from this, he figures out what was sent and what needs to be reordered.

If the phones don't work, this business will die within a month. If the trucks don't run, ditto. The trucks depend on the diesel pumps, which depend upon electricity. If the wholesale companies have computer problems, maybe they can go back to manually pulling and stocking items on their floor -- but a shipment will then take more than 10 days to get from order desk to truck to shop. This will mean decreased sales, pulling the kids out of college, less local purchases and expenditures....a ripple effect throughout the small community that starts with one family....

And even though "mom-and-pop" gives us interesting images of kindly ol' gramps and granny selling individual eggs over the counter (or somesuch), this family is dependent upon good weekly revenues to support their home mortgage, their ailing grandmother who is in a rest home and requires 24-hour care, and their kids' educations. Pull the financial plug on these people, and their personal infrastructure collapses.

So, even a little mom-and-pop, operating by hand (mostly) in the heart of a rural area, could easily be impacted to death by computer faults at the other side of the continent.

To say that mom-and-pops will have minimal y2k problems ignores, once again, the interdependencies of the whole situation.

Anita E.

-- Anita Evangelista (ale@townsqr.com), January 06, 1999.


All the sophisticated arguments and intricate reasoning are quite stimulating to one's intellect but what if Infomagic is correct even though his reasoning be faulty? What if, in spite of the well reasoned proofs to the contrary, Y2K turns out to be the massive collapse of the worst case scenarios?

If he's wrong, everyone can heave a sigh of relief.

If he's right, or even close to it, what will your situation be?

Do you play poker for stakes or according to odds?

The stakes player will always have a place to live because he never will, "bet the ranch".

I have seen the odds player lose his life savings on the turn of a single card. . .

-- Hardliner (searcher@internet.com), January 06, 1999.


Anita, thanks for your example; it's eye-opening for me, a city dweller. One more tiny drop in my already close-to-overflowing awareness. It's getting hard to keep all the dots connected in my little brain. Easier to look at it with the big picture.

Hardliner, whether I'd think it's going to be bumpish or Infomagicious, I'd still not bet the ranch. Mine and my kids' lives are too precious. But it's interesting to discuss the details all the same.

-- Chris (catsy@pond.com), January 06, 1999.


Thanks, Anita, for your response.

You wrote: "To say that mom-and-pops will have minimal y2k problems ignores, once again, the interdependencies of the whole situation."

Indeed. I take it (and took it) as a given that interdependencies are the big wildcard in all this; of course these externals could bring down a mom/pop operation as readily as a larger one. My point had to do with the INTERNAL situation: are mom and pop really all that dependent in their own little world(s) on computers? Probably not, in most cases. Further, maybe mom and pop can adapt to some extent by relying more heavily on local sources (?). The shift in the center of gravity from distant to local would be traumatic for many businesses and individuals, but ultimately probably a good thing, IMO.

-- alan (aelewis@provide.net), January 06, 1999.



ps: I have no doubt that 12-24 months from now many local economies, to say nothing of non-local ones, will be devastated.

-- alan (aelewis@provide.net), January 06, 1999.

What a pleasure to read thoughtful, respectful exchanges. Civil society. I ran across this intriguing item last month. This gentleman seems to believe he is Infomagic. Judge for yourself:


-- Lewis (aslanshow@yahoo.com), January 06, 1999.


sorry for the hypertrash.

I meant:



-- Lewis (aslanshow@yahoo.com), January 06, 1999.


See that? I try to get fancy and look what happens!

Just cut and paste:

http://www.smu.edu/cgi-bin/Nova/get/gn/309/2/1/2.html

"Thanks to computers, we can now make more mistakes, faster than ever before!" -Unknown

-- Lewis (aslanshow@yahoo.com), January 06, 1999.


Perhaps you can update us with Mr. Altman's essay - thanks Alan.

-- Andy (2000EOD@prodigy.net), January 06, 1999.


My compliments on a thoughtful essay. However, let me give you a few things to think about.

First, the exact percentage of errors per lines of code (LOC) is far less important than their exact effect. A single error of type A in a million lines might be "fatal" to the function of a given system, while 100 errors of type B might merely be annoying. A weighted metric on the impact of any given error is very hard to quantify. It is logical to expect, though hard to prove, that if the total error rate per LOC increases, then the "lethality" of the sum of the errors also increases.

Second, one needs to be concerned with the 'connectedness' of errors within a given system. Some errors are independent of each other in their effect. Others can operate in concert to either increase or decrease the endpoint of a given error. These 'connected' errors have the potential to be more lethal to a system because they are harder to find and fix than an independent error. Y2K problems often fall into this category. Date calculations within separate modules can have subtle effects because the errors might nearly (but not quite) cancel each other out. The error may reside within allowed boundary conditions and be carried forward on various files.

Third, your argument for the time spread of failures is not convincing. The 8% number, as you point out, seems to be an 'out of thin air' argument. Yes, JAE errors will occur before 1/1/2000, as will some other types of look-ahead problems. And yes, much of the remediated code will have been in production for some months prior to 1/1/2000, and some types of residual errors, particularly non-date-related errors introduced accidentally, will have already been experienced. BUT, the rollover will be the FIRST time that much of this code has EVER seen a system year date of 2000 end to end in a production job stream. Much of the testing being done is spotty and incomplete. An example I know of is the use of vastly scaled-down test databases, systems libraries and job control instructions on a Time Machine. I can think of non-Y2K systems that underwent a full year of user testing, where everything seemed fine, and then failed miserably on the first day of actual production. For this reason, it's logical to conclude that, rather than a smooth increase in errors with a 2000 plateau, there will instead be a steady, but rather low-level, error rate through 1999 (I am excluding separate 99 errors), and then a sharp spike in 1/2000.



-- RD. ->H (drherr@erols.com), January 06, 1999.

MVI's Theorem:

For every known bug in a given program, there are a minimum of 10 unknown bugs which are potentially more damaging.

MoVe Immediate

-- MVI (vtoc@aol.com), January 06, 1999.


Attempting to close color tag.

-- Chris (catsy@pond.com), January 06, 1999.

Dang Chris, I was starting to like that red. :)

MoVe Immediate

-- MVI (vtoc@aol.com), January 06, 1999.


Thanks for cleaning up after me, Chris.

MVI, I'd be happy to share my clever technique for repainting a whole thread, but that would require me knowing how I did it....

Sorry for the Type B error...

-Lewis, who needs to practice ^c^v

-- Lewis (aslanshow@yahoo.com), January 06, 1999.


"Indeed. I take it (and took it) as a given that interdependencies are the big wildcard in all this; of course these externals could bring down a mom/pop operation as readily as a larger one. My point had to do with the INTERNAL situation: are mom and pop really all that dependent in their own little world(s) on computers? Probably not, in most cases. Further, maybe mom and pop can adapt to some extent by relying more heavily on local sources (?). The shift in the center of gravity from distant to local would be traumatic for many businesses and individuals, but ultimately probably a good thing, IMO." Well, my friend, if we want to get down to extreme basics, the m&p my in-laws run could certainly "function" without computers -- it does so largely, internally, today.

But, what would they sell?

To imagine that m&p can simply revert to selling products from the "local economy" is purest fantasy -- remember, the local people have well-established trade systems that date back generations: auction houses for livestock, roadside vegetable stands, barter agreements.

Mom and pop's business exists solely to provide MORE than the local economy can provide: the local economy has no cracker manufacturers, no makers of canned goods, no ocean fish, no cornflakes, no salt, no ice cream, no chocolate, no coffee, no vanilla....no fabric manufacturing (dresses, suits, winter jackets, etc), no metal mines (except for lead), no metalworking, no plastics...no movie makers, no match factories, not even a woolen mill...

Yes, we have lots and lots of cattle, some hogs and sheep and plenty of goats. We have backyard gardens and 25-bushel per acre corn (a pathetic output by any standard -- poor soil). We do not have bulk cheese plants (and no local metals or industry or investment to make them); we don't even have large slaughtering facilities to generate packaged meats -- all our big livestock is shipped elsewhere for fattening and slaughter.

Seventy years ago, the m&p's in the town sold hardware, fancy "city" clothing, bulk foods (salt, flour, etc.) and precious little else. Even then, these goods were shipped in (by train) from some other area. Unlike today, there was no music store, pet store, gourmet food store, balloon store, florist, book store, stationery store, decorator goods store, video rental shop, computer repair shop, auto parts store, or Radio Shack -- ALL of which are m&p in our town, and ALL of which are totally dependent upon shipping, electricity, and telephones, not to mention dependencies on manufacturing....which ALL have computers at their very hearts.

So, yes, you are correct -- a lot of mom-and-pop stores COULD function internally without computers.

Until they ran out of product. Then, they would have to close their doors.

Anita Evangelista

-- Anita Evangelista (ale@townsqr.com), January 08, 1999.


Anita, I bet you passed your NLN test at the 98th+ percentile ;-)

-- Chris (catsy@pond.com), January 08, 1999.

Ah, Golly, Chris (*blush* diddling toe in dust) -- them ol' critical thinkin' skills rear their heads at the strangest times....

Anita E.

-- Anita Evangelista (ale@townsqr.com), January 08, 1999.

