The problem of corrupt data fatally infecting the world-wide Banking system

greenspun.com : LUSENET : TimeBomb 2000 (Y2000) : One Thread

Thinking offline here, I need a better understanding of the imported data problem. I understand the nature of this issue at the embedded level, where (in my opinion) it represents the single most serious embedded issue there is. At my level, what's imported by higher layers in the control circuitry isn't 'bad data' exactly. Instead, it's perfectly valid erroneous data. That is, the robot server doesn't get an improperly formatted date or record; it gets a 'shut down now' message (or some such) with no way to judge the validity of this message.

I've read over and over that all banks have both exhaustive data filtering in place and standard procedures for handling data which fail to meet requirements (up to backing out the entire batch, correcting it, and starting over). There are multiple checks and balances in place to ensure that both parties to every transaction agree on what was transacted -- sometimes even heuristic checks on legitimate-looking transactions that seem 'unusual' for various reasons.

So it seems to me that your concern must be about one or both of two things: either y2k will lead to errors nobody has ever thought to check for before (I can't imagine one, since noisy lines have historically caused almost every possible error), and/or the error rate being trapped in transactions is so high that communication is effectively slowed below minimum requirements.

Am I missing something here? What is your actual worry? Can you give me a scenario? I'm really curious. Flint

-- Andy (2000EOD@prodigy.net), February 23, 1999

Answers

This is my reply to Flint.

Sure Flint,

Let me preface this by saying that I'm more of an airline realtime systems guy - however, I spent the last 5 years working with VISA, the last three of which were spent in the "Coverage" and "realtime back-office" depts., 24-hour "firefighting" of online transaction problems, so I have a pretty good understanding of the EDI/EFT scenario.

Yes, banks have ISO-formatted (and other) transactions that follow well-established protocols regarding data interchange - i.e. set field lengths, security keys, algorithms, checks and balances, etc. However, rogue transactions can and do get through the system - these are usually caught fairly quickly, and code can be changed or fallen back, ATM parameters for example can be dynamically altered or patched, or banks or credit card ranges can be blocked to prevent contamination, etc.

This can all be handled now by dedicated teams of "firefighters" - all the major players have them - VISA, MC, Amex etc., and each major bank has its own back office teams too, in addition to the normal IT staff not on duty but on call. The volume and nature of problems now can be handled - the staff is in place world-wide.

Now factor in y2k and the number of banks that are simply not ready and will effectively contaminate the system with buggy transactions...

You would obviously think that a compliant bank would, with all its failsafe software systems in place, be able to spot a rogue transaction and handle it appropriately - usually by issuing a decline, bouncing the TX, or whatever.

All my research suggests that these s/w systems have been beefed up - but it is my contention that correctly formatted transactions, that will have date arithmetic induced calculation errors, will get through all the checks and balances and be legitimately processed by compliant and non compliant banks alike. The effect of processing inaccurate and essentially corrupt transactions will be to cause a loss in confidence in the Banking system of "Systems" - the inaccuracies will beget inaccuracies ad nauseam and very quickly the whole system will be "contaminated" and effectively useless.

I would refer you to the Gary North site and his imported data archives for high level overviews from experts.

Also, last year I initiated a "VISA is toast" thread - this one gets specific - and I posted a discussion that took place on csy2k where my theories were supported and validated by many systems and EDI experts in the field.

More recently Bradley Sherman and Hoff Meister on csy2k issued me a challenge to provide a specific technical scenario of how this could happen. I borrowed a very good example from No Spam Please and posted it. The end result was that my detractors could not pick my theories apart.

Do a search on my name on Usenet or Deja News and you can read the thread together with replies from supporters and detractors.

Flint - I'm glad that you are taking an interest in this - very few people do. They are blindsided by PR from banks - don't believe it. As Alan Greenspan said - and I'm paraphrasing - "99% is not good enough, we must be 100%" (referring to the integrity of data interchange in the Banking Community).

Nobody has taken up MY challenge - i.e. explaining to me just how firewalls can be set-up to filter out the corrupt data from "the system" that I've been talking about.

With your permission Flint I'd like to post this discussion we are having one more time to see if any newbies on the forum can shed some light on this important problem. After electricity - banking is needed to run the wheels of commerce.

And the system is in grave danger of collapse, believe me.

Cheers,

Andy

-- Andy (2000EOD@prodigy.net), February 23, 1999.


This is Flint's reply:-

I know when I'm ignorant. I appreciate your efforts on my behalf. Sorry for so many questions here, and such simple-minded questions at that, but I would really like to understand this much better.

"This can all be handled now by dedicated teams of "firefighters" - all the major players have them - VISA, MC, Amex etc., and each major bank has its own back office teams too, in addition to the normal IT staff not on duty but on call. The volume and nature of problems now can be handled - the staff is in place world-wide."

So you're saying that errors are pretty common occurrences now? Is a firefighting team required to handle all of them, or just a minority? When a new type of error enters the system and causes corruption, is code added to handle this type of error as well, or is this impossible because the data met all the normal requirements and there was no way for software to notice that it was bad? And if software could NOT have caught it, how did it get caught? Is there some further process of error trapping beyond code of arbitrary sophistication (like, I don't know, a system of ACKs with the originator or those in the data path, which ultimately traps the error)?

"Now factor in y2k and the number of banks that are simply not ready and will effectively contaminate the system with buggy transactions..."

I confess I lack the knowledge to make any sense of this statement. From what you've said, presumably these bugs didn't corrupt the EDI protocols; those would be trapped easily. And presumably the bugs didn't make inadvertent changes to amounts, since these are double checked. And also presumably the bugs didn't garble the transaction date, since this would be trivial to put a range check on (these transactions have a lifetime of only a few days at most, right?) So I don't have the experience to picture just what sorts of errors y2k would introduce to these transactions that could not be flagged.

"All my research suggests that these s/w systems have been beefed up - but it is my contention that correctly formatted transactions, that will have date arithmetic induced calculation errors, will get through all the checks and balances and be legitimately processed by compliant and non compliant banks alike. "

What would be a date arithmetic induced calculation error? An incorrect interest calculation? Is there some way the incoming transaction could be scrubbed to see if the result was within some range?
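Flint's question can be made concrete with a toy sketch. This is an invented illustration, not any bank's actual code - the flat rate and the two-digit-year logic are assumptions - but it shows how broken date arithmetic yields a correctly formatted yet wildly wrong number.

```python
def interest_2digit(open_yy, close_yy, principal):
    """Non-compliant sketch: elapsed years derived from two-digit years."""
    return principal * (close_yy - open_yy) * 0.05  # flat 5%/year, invented

def interest_4digit(open_year, close_year, principal):
    """Compliant sketch: four-digit years."""
    return principal * (close_year - open_year) * 0.05

# A deposit opened in 1999 and closed in 2000 should earn one year's interest:
print(interest_4digit(1999, 2000, 1000.00))  # 50.0
# The two-digit version computes 00 - 99 = -99 "years" elapsed:
print(interest_2digit(99, 0, 1000.00))       # -4950.0, a well-formed number
```

The point is that -4950.0 is a perfectly valid number as far as any field edit is concerned; nothing in its format betrays the broken arithmetic behind it, so a range check would have to know the expected result in advance.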

A somewhat related question -- just how quickly would such errors show up during EDI testing of a reasonable sampling of test data? At my low level, I know even rudimentary testing shows up the real howlers in seconds, and I'm down to the more subtle things within a day or two. And the subtle things are both less critical and less common, so the device is usable but not reliable at that time (kind of like Windows, you know?)

Or are you convinced that even rudimentary EDI testing won't be done? That's a bit hard to swallow, and very scary if true.

"The effect of processing inaccurate and essentially corrupt transactions will be to cause a loss in confidence in the Banking system of "Systems" - the inaccuracies will beget inaccuracies ad nauseam and very quickly the whole system will be "contaminated" and effectively useless."

I take it that the firefighting teams are the last line of defense that prevents this from happening under ordinary circumstances? Are you saying that these teams will be overwhelmed by the scope of what we face? Do you think the banking system can adapt within a reasonable period of time by cutting the worst offenders out quickly, and switching accounts around among compliant banks? Already there are several large banks (not enough!) engaged in EDI testing of sample data with selected customers, other banks, and regulators. I expect a lot more to reach this point before the drop-dead date.

"More recently Bradley Sherman and Hoff Meister on csy2k issued me a challenge to provide a specific technical scenario of how this could happen. I borrowed a very good example from No Spam Please and posted it. The end result was that my detractors could not pick my theories apart."

I read that thread (and No Spam's example) and was utterly confused by it. I came away from it with the impression that the example itself relied on a rather fortuitous chain of circumstances, and was eminently correctable by the firefighters. It sounded to me like things would need to be one hell of a lot more hosed than expected for such cases to become too common to keep up with. But how would I know? Also, it didn't sound like the kind of error that would live undetected through much testing at all.

"Flint - I'm glad that you are taking an interest in this - very few people do. They are blindsided by PR from banks - don't believe it. As Alan Greenspan said - and I'm paraphrasing - "99% is not good enough, we must be 100%" (referring to the integrity of data interchange in the Banking Community)."

Of course, as you've already implied, we're a bit short of 100% as things stand. But I certainly agree that 99%, if this means 1 bad transaction out of 100, is completely unsupportable. Again I express my ignorance about the efficacy of testing. Again, testing at my level cleans out the serious errors very quickly indeed. My suspicion (how would I know?) is that a few months of error-prone and unreliable banking, if things get a lot better fairly quickly, would not be enough to undermine confidence in the whole system except on the part of those who already distrust it. And even a perfect banking system wouldn't satisfy them.

"Nobody has taken up MY challenge - i.e. explaining to me just how firewalls can be set-up to filter out the corrupt data from "the system" that I've been talking about."

I certainly couldn't (grin). I simply can't visualize how y2k will introduce whole new classes of error types transparent to current procedures.

"With your permission Flint I'd like to post this discussion we are having one more time to see if any newbies on the forum can shed some light on this important problem. After electricity - banking is needed to run the wheels of commerce."

Feel free to post any of this. I find it tremendously informative.

-- Andy (2000EOD@prodigy.net), February 23, 1999.


Would anyone like to wade in on this thread and perhaps answer Flint's questions on my behalf - I would really value the opinions of others on this forum as to the validity of my statements - techies or laymen are welcome to contribute. I'll try and get some more specific information out tomorrow.

Cheers,

Andy

-- Andy (2000EOD@prodigy.net), February 23, 1999.


This is the meat of the "VISA is toast" thread from last year...

Any comments welcome - Andy

I put my "Visa is toast" post on a more technical BB (comp.software.year-2000) to see what other programmers thought of my logic. I will post their answers below - one chap thought I was wrong and explained why, and two others weighed in on my side, with explanations from their own perspective of just exactly why I am right (unfortunately for us all...)

Response #1 On Fri, 13 Nov 1998 23:46:48 -0800, "Andrew Rowland" wrote:

>The consequences will be catastrophic in my opinion - unless I'm >missing something, maybe my logic is faulty?

Yes, your logic is faulty, in the following paragraph:

>The vast majority of these banks will in no way be compliant. Therefore they
>will be sending Visa bad data. Our "compliant" Visa programmes will route or
>modify this data according to the programme specifications, but the
>routed/modified data will in many/most cases be inaccurate/corrupt. Assuming
>it gets delivered to the next entity in a corrupt state, that next entity,
>which may or may not be compliant, will in turn route or modify the data. The
>result of this scenario, in my humble opinion, will be total, worldwide
>chaos in the banking community.

If company A is non-compliant and sends compliant company B non- compliant data, then B will simply reject the non-compliant data. This has worked for the last 40 years of data transfer between computers, and I expect that it will remain in force until at least 1/1/2000 - or do you know of some reason why Visa (or any other company) would start accepting "bad" data?

This is the misguided, unsupported Gary North idea of "corrupt" data spewing forth from computers on 1/1/2000, and "corrupting" computers that the data is transferred to. Anyone who has worked with communicating computers knows better and realizes that both the transferring and receiving machines have agreed on the data edit rules in advance, and I'm surprised that Andrew here feels otherwise.

I think everyone here would be surprised at how much data already moves across the data transfer wires that includes fully compliant dates that are well into the next century (think bonds and mortgage transactions as an example, and there are many more examples). These communication vehicles are already Y2K compliant.

If I remember correctly, the charge card companies are having the majority of their problems with POS (and hence will be concentrating tests in that area in 1999), and not with the bank-to-card-company data transfer mechanism per se. I am not suggesting that there are not compliance steps and tests to be carried out in the data transfer area, but these are not felt to be as problematic, and can be easily overcome if all participants adhere to standards.

Regards, DS

This was from Don Scott

In the next two posts you will see how Don has not quite grasped what I was trying to explain...

This is from Jack ShitAnanda!!!

Don Scott wrote:

>On Fri, 13 Nov 1998 23:46:48 -0800, "Andrew Rowland" wrote:
>
>If company A is non-compliant and sends compliant company B
>non-compliant data, then B will simply reject the non-compliant data.
>This has worked for the last 40 years of data transfer between
>computers, and I expect that it will remain in force until at least
>1/1/2000 - or do you know of some reason why Visa (or any other
>company) would start accepting "bad" data?

One good reason why bad data *will* be received is that it won't be bad in the sense you are talking about (scrambled - garbage). It might just be inflated or deflated numbers, a product of faulty calculations. No interface program is going to do validity checking like that. That would presume one knew the range of the numbers coming in before they came in.
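A minimal sketch of the kind of format-level edit being described here (the field names, lengths and ranges are all invented for illustration): both records pass, even though one amount was inflated by a factor of five upstream.

```python
import re

def passes_edit(record):
    """Format-level edit: field presence, lengths, types and ranges only.
    It cannot know whether the amount was *calculated* correctly upstream."""
    if not re.fullmatch(r"\d{16}", record.get("account", "")):
        return False
    if not re.fullmatch(r"\d{6}", record.get("date", "")):  # YYMMDD
        return False
    amount = record.get("amount")
    return isinstance(amount, int) and 0 < amount < 10**10  # amount in cents

correct  = {"account": "4000123412341234", "date": "991231", "amount": 4995}
inflated = {"account": "4000123412341234", "date": "991231", "amount": 4995 * 5}

print(passes_edit(correct), passes_edit(inflated))  # True True
```

To catch the inflated record, the edit would need to know the expected range of the numbers before they arrived, which is exactly the presumption Jack says no interface program can make.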

> This is the misguided, unsupported Gary North idea of "corrupt" data

See, you are calling invalid data corrupt data. Corrupt is corrupt. Invalid can be either corrupt or deceptively out of range.

Even corruption will be received, though, in some (maybe many) cases. That is because the interfaces (between bankA and bankB) are going to be buggy. IMO, the interfaces will be the last piece of the puzzle to get tested ..... way too late.

I've done interface testing and the number of "surprise" failure conditions that arise when interfaces change is astounding. There is no way to quantify it or explain it to somebody that has not done the testing of interfaces. They are quirky at best - total wild animals on their worst days. And 00 will bring in their worst days.

And finally this is the response from NorWester.... By the way Paul Milne is a bit like a more extreme Gary North, a real TEOTWAWKI merchant.......

Don Scott wrote in message <364d81d9.73051905@news.nbnet.nb.ca>...

>On Fri, 13 Nov 1998 23:46:48 -0800, "Andrew Rowland" wrote:
>
>If company A is non-compliant and sends compliant company B
>non-compliant data, then B will simply reject the non-compliant data.
>This has worked for the last 40 years of data transfer between
>computers, and I expect that it will remain in force until at least
>1/1/2000 - or do you know of some reason why Visa (or any other
>company) would start accepting "bad" data?
>
>This is the misguided, unsupported Gary North idea of "corrupt" data
>spewing forth from computers on 1/1/2000, and "corrupting" computers
>that the data is transferred to. Anyone who has worked with
>communicating computers knows better and realizes that both the
>transferring and receiving machines have agreed on the data edit rules
>in advance, and I'm surprised that Andrew here feels otherwise.

Hmm. I work with "communicating computers" and must say that Andrew is precisely right. Date-dependent calculations have nothing to do with data-interchange validation routines. What Andrew is pointing out is that non-compliant programs will produce data that is wrongly calculated; these errors will spread and magnify throughout the global financial system. Validation routines between data interchanges simply verify that the parameters are correct: not the calculations forming the data. This is the meaning of corrupt data: bad information, not bad parameters. Andrew (and Gary North) are precisely correct. You are espousing the "misguided, unsupported idea of 'corrupt' data" equalling bad parameter transfers. That is incorrect, and a straw dummy. Corrupt data = data correctly parametered yet wrongly calculated. Wrong calculations beget wrong calculations ad nauseam.

Within 24 hours of the turnover, the Global Financial System will either A) be completely corrupt or B) be completely shut down so as to avoid A. The result is the same in either case; even if we don't go Milne, you are going to see a mess bigger than you can imagine. Alan Greenspan was entirely correct when he stated that 99% is not good enough. We will be nowhere close - not even in the ballpark. The engines have shut down; the plane is falling - we simply haven't hit the ground yet.

Scoff if you must; as a professional working with professionals, I know the score. It's going down. This is why at least 61% of IT professionals are pulling their money out before it hits - of course, in 10 or 11 months, that number will rise to 100%; but then, it will be too late. We know for a fact that 50% of all businesses in this, the best prepared of countries, will not perform real-time testing.

As a Programmer/Test Engineer, I can therefore assure you that at least 50% of all businesses in this, the best prepared of countries, are going to experience mission-critical failures, Gartner's new optimistic spin notwithstanding. Remediation sans testing is not remediation. The code will still be broken, just in new and unknown ways.

Got wheat?

Bryan

-- Andy (andy_rowland@msn.com), November 14, 1998.

And one more for a little levity from Lane Core jr.

>If company A is non-compliant and sends compliant company B
>non-compliant data, then B will simply reject the non-compliant data.
>This has worked for the last 40 years of data transfer between
>computers, and I expect that it will remain in force until at least
>1/1/2000 - or do you know of some reason why Visa (or any other
>company) would start accepting "bad" data?

Conversation at Company B when B simply rejects the non-compliant data:

Helen: "Hey, Mabel. Do you have any orders to process?"
Mabel: "No, Helen. I don't have any orders."
Helen: "Let me call MIS and see what's up."

....

Helen: "The EDI programs have been rejecting all the orders from our three largest customers. Something about non-compliant data...."
Mabel: "When are they going to get it fixed?"
Helen: "They don't know when. Our customers are blaming us and we're blaming our customers...."

This is Don's rebuttal of the logic explained above, together with my replies.

>Where is this notion of wrong calculations coming from?

Visa and the member banks make complicated currency conversion calculations in the trillions every day. These will be the Achilles heel.

>The "amount" of a financial transaction is typically not "calculated"
>- it just "is". If I lay down a charge card for a $49.95 purchase -
>the amount is not "calculated" and then sent to Visa - it just "is"
>$49.95.

Unless you are in Timbuktu.

That's where the interconnecting banks with their currency conversion programs come into play.

>And to top it all off, the Visa machine has to echo back to me the
>amount that I agree to pay for - if there is some "miscalculation"
>(sic) or mis-transmission of the amount (and what would 1/1/2000 have
>to do with that?), then I will simply not agree to authorize that
>other amount.

Precisely. There will be a magnitude of "declines" in the trillions, ergo the collapse of the Banking System.

>As far as machines higher up in the chain (e.g., merchant banks
>sending batches of transactions to Visa, or whatever), I still fail to
>see how non-compliant programs will calculate data incorrectly.
>
>Can anyone give a real-life example? From your own experience?
>
>Regards,
>Don Scott

Is my logic faulty?

Don Scott wrote in message <364e0939.107713744@news.nbnet.nb.ca>...

>On Sat, 14 Nov 1998 14:18:57 -0800, "Andrew Rowland" wrote:
>
>>>Where is this notion of wrong calculations coming from?
>>
>>Visa and the member banks make complicated currency conversion calculations
>>in the trillions every day. These will be the Achilles heel.
>
>Oh, in the trillions every day, is it? I would never have guessed it
>would be that high. Especially when you consider that you are
>subsetting it to currency conversion transactions, only.
>
>But, if there are trillions of currency conversion transactions every
>day, then so be it.
>
>I guess I was wrong to question your wisdom in these matters.
>
>DS

No Don, you are not wrong per se with compliant US Banks submitting US Dollars to compliant Visa programmes, it's the other 140,000 endpoints that feed into Visa/Mastercard/Amex/SWIFT etc. that will be the problem. Who precisely knows what they will be submitting - the data fields may synch up ok but is the data valid?

In light of the references to Gary North's position on non-compliant data infecting other computers, the following is the text of an e-mail I sent to Gary North on July 27, which I believe he subsequently archived in his database, on the topic of how compliant computers can be corrupted by non-compliant data.

I have been a computer programmer since 1961, and programmed in at least 15 languages for mainframes and personal computers.

Several respondents, including Bill Moquin on July 27, have challenged your statement that non-compliant data can infect a Y2k compliant computer. However, their reasoning always seems to be that the non-compliant data would appear in the form of a date that will be edited by the receiving computer, found to be erroneous, and thus rejected.

They are failing to consider that the non-compliant data may be the result of date arithmetic done on a non-compliant computer, with an erroneous answer that appears to be compliant transmitted to the compliant computer.

For example, if a non-compliant system determines in error that some payment should be made or some product should be shipped, and forwards that information to a Y2k compliant computer system in error, the receiving system (bank?) may merely edit for accurate account number, valid amount, valid item number, etc, and perform debits and credits of the amount or issue a shipping manifest, thus compounding the error. The compliant receiving system would thus have participated in an erroneous, non-compliant transaction, and its accounts would not be correct.

Dan Hunt
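Dan Hunt's scenario can be sketched in a few lines (the ageing rule, threshold and record layout are all invented for illustration): the date error happens before the interchange, so what crosses the wire is a legitimate-looking instruction.

```python
def days_past_due_2digit(due_yy, today_yy):
    """Non-compliant ageing sketch: two-digit year subtraction only."""
    return (today_yy - due_yy) * 365

# A bill due in January 2000 (yy = 00), aged in December 1999 (yy = 99),
# looks roughly 99 years overdue:
age = days_past_due_2digit(0, 99)
print(age)  # 36135

# So the non-compliant system emits a payment instruction. Nothing about it
# is malformed; a compliant receiver's edits (valid account, positive
# amount) give it no grounds to reject, and the erroneous payment is posted.
instruction = {"account": "4000123412341234", "type": "PAYMENT",
               "amount_cents": 10000} if age > 30 else None
```

The compliant receiver has, exactly as Dan Hunt says, participated in an erroneous transaction without any of its edits ever firing.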

-- Andy (2000EOD@prodigy.net), February 23, 1999.


Just a thought. Is there any "agreed upon" standard for the "new" transaction? Is it possible that BANK A is using windowing, and BANK B is expanding date fields? If so, is there some sort of "flag" to indicate this? I don't know how the banking system works, but I would think transactions are "batched" and some sort of control total is used. If the batch is "out of balance", is the whole batch considered in error and bounced to the firefighters for correction even before it makes it into the system? If so, it's easy to see how just a "few" bad dates could lead to way too much manual work to keep up. Just my $.02 <:)=
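The batch-balancing guess above can be sketched like this (the control-total layout and the receiver's year window are invented assumptions): the totals balance, but one post-rollover date fails the item edit, and by the usual all-or-nothing rule the whole batch then goes back for manual correction.

```python
def batch_balances(items, header_count, header_total_cents):
    """Classic batch control check: item count and amount hash total
    must match the batch header."""
    return (len(items) == header_count and
            sum(i["cents"] for i in items) == header_total_cents)

def date_in_window(item):
    """A receiver still sanity-checking two-digit years as 19xx: '00' fails."""
    return 50 <= int(item["date"][:2]) <= 99

batch = [{"cents": 4995, "date": "991230"},
         {"cents": 1200, "date": "000102"}]  # one post-rollover date

print(batch_balances(batch, 2, 6195))         # True: the totals balance
print(all(date_in_window(i) for i in batch))  # False: one item fails the edit
```

If a single failed item really does bounce the whole batch to the firefighters, then even a small percentage of bad dates multiplies into a large manual workload, which is the point of the question.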

-- Sysman (y2kboard@yahoo.com), February 23, 1999.


Add to your scenario the possibility that the clearing bank for the selling company returns a decline and the client (cardholder) still gets his account debited for the amount allegedly declined. This is happening with one of the larger of the national clearing banks right now, in a surprising number of cases I am aware of. To my knowledge, the merchant's batch collection and batch deposit do NOT reflect the dollars debited, but the controls in the merchant I am aware of are, ummmm, shall we say rudimentary? Yeah, that's a good word.

So, there are other possibilities for error in the system, particularly dependent on which clearing house is used.

Chuck, whose wife is dealing with this currently

-- Chuck, night driver (rienzoo@en.com), February 23, 1999.


A friend told me a few weeks ago that you are still required to have the slips and hand machines as backups (he has two). He also said he got a letter from VISA saying that any non-Year 2000 compliant POS terminals would subject him to fines or revocation of VISA acceptance rights after a certain date (forget what date, think it was sometime in April).

-- Paul Davis (davisp1953@yahoo.com), February 23, 1999.

Sorry to intrude here, folks, but since Andy seems to be picking up the discussion again, I thought I'd respond.

One thing we never got around to is Andy's choice of example. I realize it was given to him here, but why was that?

Previously, and on the "Visa" threads, Andy constantly used the currency conversions as the "Achilles Heel" of the banking system, and that it was the failure of these conversions that would cause the corrupt data.

Now, it is understandable that Andy changed his example, and also a very telling point. The Euro conversion, according to many estimates, was comparable for the financial institutions involved to the Y2k effort. It affected a large percentage of the institutions' code, especially the code dealing with inter-company transfers. And it directly affected Andy's previous example, currency conversions.

Has the European banking system collapsed? No. Were there some problems? Yes, including the recent story of the Dutch bank. Will there be problems due to Y2k? Yes. But as far as banking goes, one need look no further than the Euro to get an answer to the question of collapse.

Hoffmeister

-- hoff_meister (hoff_meister@my-dejanews.com), February 23, 1999.


Stay here Hoff,

I want to get your opinion - you are not intruding in the least. By the way this thread came about as an offline request from Flint for more information on imported data. I wanted to open it up again to this forum for any newbies and old 'uns to contribute too.

You cannot compare the Euro to y2k, it's like a pimple on an Elephant's arse.

Currency conversion --- do date arithmetic calculations occur with the Euro now? No doubt. There HAVE been Euro problems - but nowhere near the Capers Jones estimates.

How far forward do Euro date arithmetic calculations take place? I have no idea. I would suspect that they rely on today's date, as all transactions are time-stamped in real time.

What will happen to Euro date arithmetic calculations when the date goes to '00'? Exactly the same as will happen to regular banks world-wide with unremediated code. Inaccuracies. Bogus data, correctly formatted and parametered, that WILL get through EDI and EFT checks. The data will be further propagated and corrupted as it flows through the system of systems.

That's all for now.

More to come.

Andy

-- Andy (2000EOD@prodigy.net), February 23, 1999.


Andy, Sysman, all

Just ran a 'what if' test on a non-compliant system. Not banking or EDI, but a typical batch job. Programs ran fine. For this app the elapsed period for the calcs is a constant of one year. Overdue amounts are stored fields that plug and print to output. The problem we found is with the sort. Due 00, Past Due 98, Past Due 99 is not the correct order. Don't mean to disappoint readers of this forum, but this is not a big deal. At this point we can remediate code, or we can modify the JCL sort to use an alternate collating sequence. Because of interface issues we will remediate rather than write intermediate programs to window and expand date fields.
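The sort problem described above translates directly to a small sketch (Python here rather than a JCL sort, and the 50-year windowing pivot is an invented but common convention):

```python
records = ["Past Due 98", "Past Due 99", "Due 00"]

# Sorting on the raw two-digit year puts the '00' item first:
naive = sorted(records, key=lambda r: r[-2:])
print(naive)  # ['Due 00', 'Past Due 98', 'Past Due 99']

# A windowed key restores the intended collating sequence:
def windowed_year(r):
    yy = int(r[-2:])
    return yy + 2000 if yy < 50 else yy + 1900

print(sorted(records, key=windowed_year))
# ['Past Due 98', 'Past Due 99', 'Due 00']
```

This is the mild end of the spectrum: the output order is wrong but the amounts are untouched, which is why the poster calls it "not a big deal."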

However, I wanted to point out, not all non compliant 'problems' are show stoppers. Last time I did EDI work, my data either corresponded to the formats and batching rules or it was rejected.

Inthe

-- Inthe Trenches (Nondisclosure@Please.gov), February 23, 1999.



Another thing to consider is that 'batch' environments generally have an 'edit' performed on incoming batches. Just to make sure it's in the proper format. Most batch systems will reject the data if it doesn't pass the edit. It won't feed it in anyway and 'infect' the system. Now real time systems may be a different story.

Deano

-- Deano (deano@luvthebeach.com), February 23, 1999.


At the risk of coming over as a more complete buffoon, here is a greatly simplified outline of my imported data theories.

I've written it in layman's terms as I would like everyone to be able to understand what I'm getting at.

It's really very simple in a complex sort of way :)

How can I put this? This is maybe an extremely dumb analogy, but it may work for you... :) [I KNOW I'm going to get flamed mercilessly for this, but what the heck... :) ]

You have a rogue granny whom you've asked to look after your bank account while you are in hospital, after the Sicilians have broken your fingers for late under-payment of a "loan"... You have certain regular set bills to pay, and you verbally tell old Doris to write cheques for certain amounts to certain people on certain days. You have made arrangements with other grannies to act on the amounts deposited.

Now Doris means well but is a little batty - she writes the cheques but gets them all wrong bar one or two. She messes up by a factor of +5. She also messes up by writing them from both your checking and savings accounts, which are linked together. She posts the cheques. The cheques arrive and are processed by other grannies (the banks, securities firms, stockbrokers and credit card companies).

Your account is debited by a factor of five. You know nothing about this until you read a statement weeks later. Worse, you have set up orders with other grannies to sell, hold or buy depending on how much money you give them. Your fingers were originally broken because Doris also messed up that transaction by a factor of -5. You didn't learn your lesson because you love old Doris and gave her a good talking-to!

See my point? No, OK, I'll continue...

You had enough money in your linked accounts so the bogus data got out of your holding bank - it passed the checks and protocols. Doris is the COMPUTER who cannot perform date arithmetic calculations correctly. She broke your fingers and will break your bank account and maybe your legs too.

This is where it gets interesting or more stupid depending on your ability to think out of the box and extrapolate a little ...

All the other grannies that received funds from you via Doris have beefed up their EDI (Electronic Data Interchange) and EFT (Electronic Funds Transfer) systems and public and private security keys, and put in place additional AI (Artificial Intelligence) programs to detect unusual transaction flows and/or amounts. All the transactions pass all the EDI, EFT etc. checks and are processed.

You buy five times the amount of Amazon shares than you wanted, Amazon tanks and you are in trouble. You buy other options and futures on margin, the Dow Jones is shaky due to worries about Japanese Banks (!) and your margin is called. You give your ex five times more than the bitch deserves and you are in trouble as you won't get THAT back. A couple of AI programs bounce some of the cheques, bills aren't paid, and you are in trouble with penalty fees to pay and the prospect of a couple of broken legs too from the Maltese!

Some grannies will be compliant, some won't be. The ones that are not compliant may perform further erroneous date arithmetic calculations and further propagate already bogus data. Some transactions will be negative and will obviously fail. The positive ones that have enough money in linked accounts will succeed.

Do you get my drift now? I AM oversimplifying this but the reality will be far more complex and worse than this scenario. For the benefit of Hoffmeister we'll throw in some currency conversions too as you send money around the world.

There will be hundreds of thousands of senile grannies out there screwing up left, right and centre. There will be an equal number getting things right. However, all grannies, after a certain point in time after rollover, will be processing essentially bogus data. The data will LOOK good enough to pass the inter-granny bridges at the "Twilight Home For Wrinklies (tm)". No amount of software technology can spot plausibly inaccurate amounts - how could it possibly do that? It may spot wildly inaccurate amounts, yes, but this will lead to further fatal problems as I will explain in due course.
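The 'plausibly inaccurate' point can be illustrated with a toy threshold filter; the 3x tolerance and the amounts are invented:

```python
def looks_unusual(amount: float, typical: float, factor: float = 3.0) -> bool:
    """Flag a transaction more than `factor` times above or below the
    account's typical amount. An arbitrary toy rule, not a real fraud
    heuristic."""
    return amount > typical * factor or amount < typical / factor

typical_bill = 100.00

# Doris's factor-of-five error is wild enough to trip the filter...
print(looks_unusual(500.00, typical_bill))  # True
# ...but a date bug that inflates a bill by 20% sails straight through.
print(looks_unusual(120.00, typical_bill))  # False
```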

The Banking System - it is essentially a system of trust after all. Fiat or electronic, it's all down to confidence and trust - THAT is the biggest danger. By and large, providing all checks and balances are passed and there is enough money in an account, an incorrectly calculated transaction will succeed.

So given enough time, and I think it will be very quick, the whole system of systems will be corrupt, with no-one knowing just how accurate their databases are. Ergo loss of confidence, ergo the system of systems grinds to a halt. Alternatively, if the AI systems and beefed-up systems are able to spot wayward transactions and issue mass declines (with human intervention as a follow-up - impossible, as there are too many transactions and not enough manpower), you will have precisely the same effect - you will have NO SYSTEM, as the bogus transactions will be declined en masse. Bogus data compounds exponentially as it moves around the world. The Charlotte's Web will collapse.

Now factor in communications problems - no dial tones, satellite links, frame relay outages, electricity outages and all the other dominoes ad nauseam - with the best will in the world it will take ages to even THINK about trying to unravel the mess. It will be like trying to untangle the Gordian Knot - best to just cut it and start over.

Andy

Two digits. One mechanism. The smallest mistake.

"The conveniences and comforts of humanity in general will be linked up by one mechanism, which will produce comforts and conveniences beyond human imagination. But the smallest mistake will bring the whole mechanism to a certain collapse. In this way the end of the world will be brought about."

Pir-o-Murshid Inayat Khan, 1922 (Sufi Prophet)



-- Andy (2000EOD@prodigy.net), February 23, 1999.


Andy, once again you miss the point.

The issue is *not* whether correctly parametered, but bad data, will be passed. It always has, it does, and it always will.

It is the attempt to extrapolate this into a collapse where you fail.

As an overall phenomenon, yes, the Euro is much smaller than Y2k. But, for those financial institutions involved in the Euro, the comparison is valid. If you wish, I can provide links to the estimates on effort comparing the two.

The point is, the Euro conversion, for those institutions affected, is very comparable to Y2k. Whether or not date arithmetic is involved in the Euro is irrelevant; the Euro stands as an example of a software conversion, affecting multiple institutions simultaneously, that directly affects amount calculations and inter-company transfers, the very transactions that you claim will collapse the system. Your hypothesis is the number of errors introduced will overwhelm the system. The fact that, even with errors introduced, the Euro has not collapsed the European Banking system stands as a direct refutation of your points. Banks have always had to address the problem of incorrect transactions. What I have seen indicates that Banks are well aware of the potential, and have plans in place to deal with it.

You say no bank has addressed this issue, but you are wrong. What I have seen documented is a dual approach; significant testing with major partners pre-Y2k, to verify compliance, and increased scrutinization of transactions post-Y2k, including the use of statistical sampling techniques.

Hoffmeister

-- Hoffmeister (hoff_meister@my-dejanews.com), February 23, 1999.


Thanks Hoff,

"The issue is *not* whether correctly parametered, but bad data, will be passed. It always has, it does, and it always will."

####### I agree, part of my job at VISA before I left was to trace transactions and find out how they got through the system. When we hit 2000 it is my contention that these types of transactions will completely overwhelm the system. #######

It is the attempt to extrapolate this into a collapse where you fail.

####### In your eyes.#######

As an overall phenomenon, yes, the Euro is much smaller than Y2k. But, for those financial institutions involved in the Euro, the comparison is valid. If you wish, I can provide links to the estimates on effort comparing the two.

####### Infinitely smaller. No comparison. We'll have to disagree on this but it is basic common sense. There are 11,000 banks in the USA alone. Way, way more world-wide. How many entities in Europe had to change their code? No comparison. Because y2k affects every entity that interacts electronically with Banks world-wide... This includes credit unions, credit card companies, post offices, government money centres, etc. etc. ########

The point is, the Euro conversion, for those institutions affected, is very comparable to Y2k. Whether or not date arithmetic is involved in the Euro is irrelevant;

####### "Whether or not date arithmetic is involved in the Euro is irrelevant;"...

Hoff - how wrong can you be... It is TOTALLY relevant. How would Euro currency transactions work NOW if the date was Jan 1st 2000? I suspect it would be a shambles as very very few banks have announced compliance to date. In addition (pun intended), it won't just be date arithmetic problems. There will be all sorts of other weird and strange code induced problems surfacing. #######

The Euro stands as an example of a software conversion, affecting multiple institutions simultaneously, that directly affects amount calculations and inter-company transfers, the very transactions that you claim will collapse the system.

####### Yes of course, but it isn't 2000 yet is it? Let me say that again - IT IS NOT 2000 YET!!! Currency conversion as in Euro-conversion does not extrapolate forward, unlike for example Galileo (United Airlines) which recently announced that it has successfully made a glitch-free booking 331 days ahead into January 2000. Currency conversion simply does not work this way, as the markets constantly fluctuate - all conversion is realtime, not batch, and takes place in a matter of seconds on the GMT date. Hoff, I think it's YOU who has missed the point here. Date arithmetic errors are NOT occurring NOW - they WILL occur in 2000. #######

Your hypothesis is the number of errors introduced will overwhelm the system. The fact that, even with errors introduced, the Euro has not collapsed the European Banking system stands as a direct refutation of your points.

####### Your original premise was and is wrong so all your points after are invalid. I said at the top of this page, and I quote...

"Yes banks have ISO formatted (and others) transactions that follow well established protocols regarding data interchange - i.e. set field lengths, security keys, algo's, checks and balances etc. etc. However rogue transactions can and do get through the system - these are usually caught fairly quickly and code can be changed or fallen back, or ATM parameters for example can be dynamically altered or patched, or banks or credit card ranges can be blocked to prevent contamination etc. etc.

This can all be handled now by dedicated teams of "firefighters" - all the major players have them - VISA, MC, Amex etc., and each major bank has its own back office teams too, in addition to the normal IT staff not on duty but on call. The volume and nature of problems now can be handled - the staff is in place world-wide.

Now factor in y2k and the number of banks that are simply not ready and will effectively contaminate the system with buggy transactions..."

The staff is in place to handle problems - just a riot in Marseilles and a buggy Dutch bank, all handled. We know nothing about what went on behind the scenes and probably never will. Again, the success-of-the-Euro argument is a weak straw man and should not give one false hope for the Banking y2k rollover; there is no comparison. #######

Banks have always had to address the problem of incorrect transactions. What I have seen indicates that Banks are well aware of the potential, and have plans in place to deal with it.

####### OH REALLY???

One final time Hoff - put up or shut up.

Exactly WHAT "plans" do they have "in place to deal with it."

HOW will they filter out incorrectly calculated data.

This is what you said above, and I quote...

""The issue is *not* whether correctly parametered, but bad data, will be passed. It always has, it does, and it always will."

SO HOW WILL "THEY" (ALL BANKS AND ENTITIES THAT FEED DATA INTO BANKS, EVERYWHERE, WORLD-WIDE, WITH 100% ACCURACY AS DEMANDED BY ALAN GREENSPAN) PULL OFF THIS MIRACLE???

I'VE ASKED THIS QUESTION OVER AND OVER AND ***NOT ONE*** POLLY HAS BEEN ABLE TO EXPLAIN, SO THAT WE ALL UNDERSTAND, HOW THIS CAN BE DONE.

I'm not holding my breath on this one :)#######

You say no bank has addressed this issue, but you are wrong. What I have seen documented is a dual approach; significant testing with major partners pre-Y2k, to verify compliance,

####### Notice the words "major partners". I'm sorry Hoff, this does not cut it. The nature of the system we have in place is that a transaction originating in an entity in Okinawa on a Wells Fargo account in Sacramento could quite easily pass through many, many entities before an approval and the TX gets back to Okinawa. Have "they" tested all these possible paths? Of course not - it cannot be done, you know that as well as I do. Corruption could occur anywhere along the data interchange routes.

Compliance simply cannot be verified until rollover. There is an overwhelming number of non-compliant banks world-wide. These NCBs will feed corrupt data into the system unless they are isolated. If NCBs are isolated there is NO BANKING SYSTEM ANY MORE. #######

and increased scrutinization of transactions post-Y2k, including the use of statistical sampling techniques.

####### Hoff, that's called shutting the door after the horse has bolted. You can statistically sample bad transactions up the proverbial wazoo but the damage will already have been done. You have no idea how impossible it will be to untangle the mess. Done deal - can't be done. #######

I'll ask this one more time...

How will 100% of allegedly compliant Banks world-wide co-ordinate filters or firewalls to prevent the propagation of y2k-induced corrupt imported data from non-compliant Banks and entities world-wide?

That's all I need to know and I'll shut up :)

Later,

Andy



-- Andy (2000EOD@prodigy.net), February 23, 1999.


Andy, I think that you have (as usual) given the pollyannas something to think about (or, most likely, forget about). I would just like to add one thing, which simply extends everything you have already said.

The big mistake people seem to make about the Y2K problem is that they see it as asking whether "systems" will "fail", and then they try to estimate how many, whether they are "mission critical", etc., etc. A big impact of Y2K may be that computers will become untrustworthy. That is, a system may not fail, per se, but simply will not produce reliable data (or "spew out bad data" as North puts it). If people lose confidence in computers, then it's Game Over for the banking system et al.

This is not as dramatic as "system failures", or the power grid collapsing, etc., etc., but in reality it could prove just as serious.

-- Jack (jsprat@eld.net), February 23, 1999.


"A big impact of Y2K may be that computers will become untrustworthy."

100% correct Jack, we are all hoping that we just have to endure disruptions.

But if the S really HTF then JQPublic will come to detest and hate both Computers and Programmers - in fact, anyone who they see as having anything to do with the problem.

-- Andy (2000EOD@prodigy.net), February 23, 1999.


Computers untrustworthy? Not. Every computer I've ever worked on did just what I told it to do - not necessarily what I meant.... People are the problem - greed, shortsightedness, procrastination, and plain stupidity.

jh

-- john hebert (jt_hebert@hotmail.com), February 23, 1999.


Andy,

I worked for a major international banking firm that used to transfer in excess of $500,000,000,000.00 in a day in its cash system alone, not counting all the other financial systems they have running. There are checks and balances built into the system from end to end. These reconciliations occur daily at all parties involved (banks, clearing houses, transferring agents, etc.). Anybody in this chain is liable to the others for interest and penalties if problems occur within their systems and the others incur losses. You see, there are other incentives as well to make sure systems function through 2000 and do so on time.

Also, keep in mind that computers work on the basic principle of GIGO (garbage in, garbage out): the date fields do not directly update, rewrite, fill in, etc. any of the monetary data fields. As we correspond through this thread there are financial computer systems all over the world already processing year 2000 dates and information. Contracts, options, etc. are presently in the systems as expiring in the year 2000 and beyond, as are 99% of all present mortgages. Yes, some glitches will occur, as they do now and will continue to do beyond the Y2k issue, and the people who deal with these systems will continue to do their jobs and address the problems that arise.

I realize that a lot of the posters here have a problem accepting this, but a lot of time, money and effort have gone into fixing Y2k and all of the significant institutions have been addressing the issue. I realize the frustration that some feel about major institutions and government not giving a 100% guarantee against problems, but let's face it, you do not have that guarantee now. Let us get through the event before you ask for more than you had before it. And I know this will be a let down, but I remain anonymous because I am working on Y2k for a major institution and am in the trenches and unfortunately (right or wrong) am liable if I mention who that is.
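The daily reconciliation ???? describes might look, in miniature, something like this (the ledger shape and transaction ids are hypothetical):

```python
def reconcile(ours: dict, theirs: dict):
    """Compare our record of transfers against a counterparty's.
    Returns (ids with differing amounts, ids they are missing,
    ids we are missing) - the discrepancies a party is liable for."""
    mismatched = {k for k in ours.keys() & theirs.keys() if ours[k] != theirs[k]}
    missing_theirs = set(ours.keys() - theirs.keys())
    missing_ours = set(theirs.keys() - ours.keys())
    return mismatched, missing_theirs, missing_ours

# Amounts in cents; T2 disagrees, T3/T4 appear on only one side.
ours = {"T1": 10000, "T2": 25000, "T3": 7500}
theirs = {"T1": 10000, "T2": 12500, "T4": 6000}

print(reconcile(ours, theirs))  # ({'T2'}, {'T3'}, {'T4'})
```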

Hope this gets you to relax a little.

P.S. Remain skeptical, but do not go overboard when a company gives a response that is not laden with details or does not give a 100% guarantee.

-- ???? (?@?.?), February 23, 1999.


Ok, one more time.

You keep trying to turn the argument back around, propagating the myth that 100% compliance is needed, or that 100% of invalid transactions must be caught. Obviously, this level of compliance is not required, as it is not the case today. So both of us agree that 100% is not required; that as a matter of normal business, banks do deal with this situation daily.

The burden of proof, then, is to show that the number of invalid transactions due to Y2k will overwhelm the system. To date, you have not provided a believable scenario - one, or even a combination of many, that would produce these transactions. The original one on c.s.y2k, provided by someone here, was partially believable. That interest calculations may produce the effect described could be correct, but the jump to the same error causing individual transactions to be inflated was not supported.

The Euro comparison is not based in any way on date calculations. The Euro comparison is based on the following:

1) The just-passed Euro conversion affected the vast majority of financial institutions in 11 countries. For those institutions, the conversion was at least on the same scale as Y2k. Remember, your hypothesis is the collapse of Banking due to invalid transactions passed between banks. The Euro affected the European banks with much the same magnitude as Y2k. Foxnews used to have a wire story, no longer in their archive, which stated that banks would spend three to five times as much on the Euro as on Y2k. The same article quoted Capers Jones, calling the Euro conversion "the second largest software challenge in the world behind the Y2K problem...and it's more sophisticated". An older link again estimates 5 times the cost, and another quotes the EMU director for IBM as saying that "80% of our systems will be impacted". Again, all of the above refers to the impacted banks. Your scenario involves banking transactions. Within Europe, the effect of the Euro conversion on the banking system is on par with Y2k.

2) The same set of metrics applied to Y2k can be applied to the Euro, for those institutions affected. We would expect to see errors introduced through programming changes. The software affected is the same software that generates the inter-company transfers that you propose will collapse the system. Errors are errors, whether introduced through bad date calculations or bad currency conversions. Within those European institutions, the effect on the EDI/EFT transactions is at least on par with the Y2k effect.

I say again, that for a comparison that speaks directly to your hypothesis, the Euro is very valid.

As far as how banks are attempting to address the problem, I deal with data interfaces as an everyday part of my job. Some errors are harder to reverse than others, but I know of no interface to or from an external source that does not have a procedure in place to back out errors. The sampling is to do just what you suggest; if a partner starts feeding bad data, the pipe will be shut. I say again, if you have some argument that the number of these errors will overwhelm the system, I haven't seen it. Just saying it doesn't make it so.
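A minimal sketch of that sampling idea, with an invented feed format and an assumed 5% shutoff tolerance:

```python
import random

SHUTOFF = 0.05  # assumed tolerance: above this sampled rate, close the pipe

def sample_error_rate(feed, sample_frac=0.1, seed=42):
    """Estimate a partner feed's error rate from a random sample
    instead of inspecting every transaction."""
    rng = random.Random(seed)
    sample = [tx for tx in feed if rng.random() < sample_frac]
    if not sample:
        return 0.0
    return sum(tx["is_bad"] for tx in sample) / len(sample)

# A feed where roughly one transaction in ten is bad.
feed = [{"id": i, "is_bad": i % 10 == 0} for i in range(10_000)]

rate = sample_error_rate(feed)
pipe_open = rate <= SHUTOFF  # this partner gets shut off
```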

Hoffmeister

-- Hoffmeister (hoff_meister@my-dejanews.com), February 23, 1999.


Sorry.

-- Hoffmeister (hoff_meister@my-dejanews.com), February 23, 1999.

????? and Hoff,

Thanks for the follow-ups. We are going around in circles again... :)

Hey, ?????, I am fully aware of the recon group's function, the clearing and settlement process etc. etc. I realise it works today, and I'm sure you would agree that it takes a lot of manpower to make the system work world-wide. My contention is that this will be a unique event - simultaneous (OK, over 24 hours) major cutovers on a world-wide basis - which will stretch resources to and beyond breaking point IMHO. The non-remediation figures for banks world-wide are appalling considering we have precious little time left. I'm pretty relaxed about all this at the moment - not too sure what I'll be like next December :)

Hoff - sorry, it's not my duty to literally hand-hold you through a believable scenario - whatever I do you shoot it down, so I won't bother any more. We'll just have to wait and see what happens :) Let's agree to disagree on the comparative effort between the Euro and y2k.

As Programmers, what do you make of the comments of the other three or four programmers I cited above that said that essentially the plane was going down and the code was not going to get fixed properly? That data interchanges were not being tested properly?

What do you make of all the following articles about the imported data problem? You have to agree that it's not talked about too much these days - I find this strange as the problem hasn't gone away.

"On April 16, 1996, the Assistant Secretary of Defense in charge of y2k testified before a Congressional committee. He offered this warning:

"The management aspects associated with the Year 2000 are a real concern. With our global economy and the vast electronic exchange of information among our systems and databases, the timing of coordinated changes in date formats is critical. Much dialogue will need to occur in order to prevent a 'fix' in one system from causing another system to 'crash.' If a system fails to properly process information, the result could be the corruption of other databases, extending perhaps to databases in other government agencies or countries. Again, inaction is simply unacceptable; coordinated action is imperative."

He was saying that if a compliant computer sends compliant data in a compliant format to another computer, this transfer may crash that computer. Or the recipient computer may not recognize the compliant format. The system breaks down.

In a July, 1998 report by the General Accounting Office of the U.S. Congress, we read: "In addition to using bridges, filters may be needed to screen and identify incoming noncompliant data to prevent it from corrupting data in the receiving system."

On August 6, 1998, Joel Willemssen of the General Accounting Office testified before the Technology Subcommittee of the House Science Committee. He made the following observation:

"Examination of data exchanges is essential to every Year 2000 program. Even if an agency's--or company's--internal systems are Year 2000 compliant, unless external entities with which data are exchanged are likewise compliant, critical systems may fail. The first step is to inventory all data exchanges. Exchange partners, once inventoried, must be contacted; agreements must be reached as to what corrections must be made, by whom, and on what schedule; and requisite testing must be defined and performed to ensure that the corrections do, in fact, work."

This, in my view, is the biggest unsolvable problem of the y2k challenge. If a company somehow revises its computer systems' legacy code, tests it by parallel testing, does not crash its systems during the testing, and transports all of its old data to the newly compliant system, it faces a problem: it is part of a larger system. Computers transfer data to other computers. If the compliant computer imports data from a noncompliant computer, the noncompliant data will corrupt the compliant system's data. A company may have spent tens of millions on its repair, but once it imports noncompliant data, or extrapolations based on bad data, it has the y2k problem again.

Understand, this is a strictly hypothetical problem. There is no compliant industry anywhere on earth. I am aware of no company in any industry that (1) has 10 million lines of code and (2) claims to be compliant. I argue that there is not going to be a compliant industry, where the participants are all compliant. But if there were one where half the participants were compliant -- and we will not see this -- the other half would pass bad data on to the others. And if the others could somehow identify and block all noncompliant data based on noncompliant code, the industry would collapse. The data lockout would bankrupt the industry. Banking is the obvious example.

This has been denied by a few of my critics, though not many. These people are in y2k denial. Here is the assessment of Action 2000, which the British government has set up to warn businesses about y2k. The problem is not just software; faulty embedded chips/systems can transmit bad data:

"In the most serious situation, embedded systems can stop working entirely, sometimes shutting down equipment or making it unsafe or unreliable. Less obviously, they can produce false information, which can mislead other systems or human users."

In short, a noncompliant computer's data can corrupt a compliant computer's data. But those in charge of the compliant computer may not recognize this when it happens. They may then allow their supposedly compliant computer to spread the data with others. Like a virus, the bad data will reinfect the system. I describe this dilemma as "reinfection vs. quarantine."

Every organization that operates in an environment of other organizations' computers is part of a larger system. If it imports bad data from other computers in the overall network, the y2k problem reappears. But if it locks out all data from noncompliant sources, it must remove itself from the overall system until that system is compliant. This threatens the survival of the entire system. Only if most of the participants in a system are compliant will the system survive.

Consider banking. A bank that is taken out of the banking system for a month -- possibly a week -- will go bankrupt. But if it imports noncompliant data, it will go bankrupt. A banking system filled with banks that lock out each other is no longer a system.

There is no universally agreed-upon y2k compliance standard. There is also no sovereign authority possessing negative sanctions that can impose such a standard. Who can oversee the repairs, so that all of the participants in an interdependent system adopt a technical solution that is coherent with others in the system?

Corrupt data vs. no system: here is a major dilemma. Anyone who says that y2k is a solvable problem ought to be able to present a technically workable solution to this dilemma, as well as a politically acceptable way to persuade every organization on earth to adopt it and apply it in the time remaining, including all those that have started their repairs using conflicting standards and approaches.

Some people say that y2k is primarily a technical problem. Others say it is a primarily managerial problem. They are both wrong. It is primarily a systemic problem. To fix one component of a system is insufficient. Some agency in authority (none exists) must fix most of them. Those organizations whose computer systems are repaired must then avoid bankruptcy when those organizations whose systems are not compliant get locked out of the compliant system and go bankrupt.

If there is a solution to this dilemma, especially in banking, I do not see it.

My critics offer no solution. All they offer is this refrain: "North is not a programmer." But what has this to do with the entries in this category? Nothing at all."
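North's reinfection-versus-quarantine dilemma can be restated as a toy model; every number and rule below is invented purely to spell out the argument, not to model real banking:

```python
def dilemma(compliant: int, total: int, quarantine: bool):
    """Return (banks still connected, banks with corrupt data)
    under each of the two strategies North describes."""
    if quarantine:
        # Locking out noncompliant partners keeps data clean,
        # but the network shrinks to the compliant minority.
        return compliant, 0
    # Accepting every feed keeps everyone connected, but in this
    # toy model bad data eventually reaches every participant.
    return total, total

print(dilemma(50, 100, quarantine=True))   # (50, 0): clean, half the system gone
print(dilemma(50, 100, quarantine=False))  # (100, 100): connected, all corrupt
```

Either branch loses something essential, which is the whole point of the "corrupt data vs. no system" framing.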

Updated - Subject 13-Jan-97 No Date Standard = Chaos in Year 2000

24-Feb-97 Corrupt Data: How to Ruin a Compliant Computer

25-Mar-97 There Is No Agreed-Upon Standard

19-Sep-97 The Navy Describes the Interconnection Problem

23-Sep-97 British Police Describe the Coordinated Repair Problem

02-Oct-97 Shared Data: The Nuclear Regulatory Commission's Warning

22-Oct-97 Suppliers and Buyers Must Adopt Coherent Y2K Solutions

25-Oct-97 Six Largest Canadian Banks Coordinate Y2K Repair

30-Oct-97 Destruction or Breakdown of Government Files

03-Nov-97 Defending Against Noncompliant Data

04-Nov-97 Health Care Industry: Bad Data Problem

08-Nov-97 Spreadsheets from Micros Can Corrupt Compliant Mainframes (if Any)

02-Dec-97 It's a Problem for Society, Senior IT Officer Says

20-Jan-98 No Standards for U.S. Government Until March 1, 1998

24-Feb-98 Toxic Data, Says Lawyer

16-Apr-98 Bad Data from Accounting Firms

20-Apr-98 New Zealand Government Report's Warning

22-Apr-98 Ignored Problem, Expert Says

22-Apr-98 More Interconnections Means Vastly More Vulnerability

29-Apr-98 Why Y2K Is a Systemic Problem: Fusion into One System

05-May-98 Thailand's Y2K Budget Blocked by Budget Bureaucrats

27-May-98 Y2K as Ebola: The Re-infection Problem

23-Jun-98 Yardeni Describes the Imported Data Problem

26-Jun-98 Systemic Social Forests, Digital Trees -- Gary North Replies

04-Aug-98 California Policy Planning Organization Offers Warning

24-Aug-98 GAO Warns: U.S. Government Has a Massive Connections Problem

11-Sep-98 The Blind Leading the Blind: The Collapse of Governments' Data Exchange

11-Sep-98 Filters Needed to Screen Out Noncompliant Data

16-Sep-98 GAO Official Warns on Threat of Imported Data

23-Jan-99 This Problem Is Real, Which Is Why Y2K Is Like a Virus

28-Jan-99 Oregon's CIO Warns of Noncompliant Imported Data

Link at

http://www.garynorth.com/y2k/results_.cfm/Imported_Data



-- Andy (2000EOD@prodigy.net), February 23, 1999.


???? >I worked for a major international banking firm that used to transfer in excess of $500,000,000,000.00 in a day in its cash system alone, not counting all the other financial systems they have running.

Oh no you didn't!

-- Rick (Concerned@thetruth.com), February 23, 1999.


Oh, you reckon ????? is a troll?

This is what Alan Greenspan had to say about Banking today Hoffmeister.

It backs up EXACTLY what I've been saying - take particular note of the last sentence and the implications for imported data to US Banks from overseas.

This statement is really quite shocking.......

"Greenspan: We are on a two pronged policy. One.. do as much as we can and we've done an awful lot and will continue to do so to prevent anything of significance happening where we have the capability of double testing and checking as we can. And secondly, to have a whole series of potential actions that we would take in the event that something does happen. And we will continue doing that obviously on an increasing manner through the rest of the year. Things are, I must say, going better than I was concerned about six months ago. People are getting serious. We are inter-relating with all of our banks and the testing systems are going well. So that the worst possibilities I think are behind us and most of those are really interfaced with the rest of the world...

Senator: right.

Greenspan: ...where we don't really know how well they are managing."

I CANNOT BELIEVE HE JUST SAID THIS TODAY...

"WE DON'T REALLY KNOW HOW WELL THEY ARE MANAGING"

Hoff and ?????,

What do you make of this? You assured me earlier that major player testing was well in hand. Greenspan says the complete opposite.

I tell ya, the systems are gonna tank at rollover, big time.

-- Andy (2000EOD@prodigy.net), February 23, 1999.


Andy --- tremendously important post, thanks. It seems to me that the two main (well, ok, three) bones of contention are:

1. Asserting that date formats are the only critical issue, rather than date formats plus date calculations.

2. Visualizing significant, common examples of date format problems.

3. The "success" of the Euro.

No comment on 3, except that Capers Jones has some egg on his face. Otherwise, I agree with everything you said.

On 2., I myself would appreciate several vivid, practical examples.

On 1., here is my hilariously crude attempt to model what I believe you have been saying:

Let's imagine that 80% of the world banking system (say, 95% in US, 70% elsewhere) is compliant by 1/1/2000 (systems, not individual banks, we'll take a positive approach here).

This means that 20% of the system will be transmitting incorrect date formats and these will be rejected and shot on sight (best case again). It probably means that somewhere between 15 and 25% of the corresponding institutions will go bankrupt. Let's say the same 20%. Result: huge chaos to confidence and hit to world economy by end of January 2000, all by itself.

Now, let's assume that the remaining 80% are giddily passing date formats to each other but that there is 1% bugginess (99% true compliance). Guessing 5T atomic transactions a day (??), this means that 50B date format errors will, again, be flagged and rejected. Arguably, this is a huge load over current operational stress, but let's just leave it for now as an unpleasant fact.

Next, let's assume that 1% of date calculation transactions slip through as invalid but unrecognizable. Guessing that there are 1T meaningful date calculations a day (2T? 5T? 10T? help?), this would yield 10B nasty transactions propagating through the system.

Or, more vividly:

1/1/2000: 10B

1/2/2000: 10B

1/3/2000: 10B

1/4/2000: 10B

1/5/2000: 10B

... and so it goes.

Of course, the reality will be 'x' degree nastier, since these errors will indeed be corrupting the chain. So, those five days of errors may generate another 50B in their wake (100B? 1T?).
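To make the arithmetic explicit, here is my model as a toy calculation (every volume and rate below is one of my guesses above, not a measured figure):

```python
# Toy model of the guesses above -- none of these numbers is measured.
ATOMIC_TX_PER_DAY = 5e12     # guessed 5T atomic transactions a day
DATE_CALCS_PER_DAY = 1e12    # guessed 1T meaningful date calculations a day
FORMAT_ERROR_RATE = 0.01     # 1% bugginess among "compliant" systems
CALC_ERROR_RATE = 0.01       # 1% bad date calculations slip through

format_errors = ATOMIC_TX_PER_DAY * FORMAT_ERROR_RATE   # flagged and rejected
silent_errors = DATE_CALCS_PER_DAY * CALC_ERROR_RATE    # invalid but unrecognizable

print(f"format errors rejected per day: {format_errors:.0e}")  # 5e+10 = 50B
print(f"silent bad calcs per day:       {silent_errors:.0e}")  # 1e+10 = 10B

# Five days of silent errors, plus a guessed 50B of follow-on corruption:
print(f"five-day total with wake:       {5 * silent_errors + 5e10:.0e}")  # 1e+11
```

Argue with the inputs all you like; the point is that even "optimistic" percentages produce staggering absolute volumes.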

Can they be found, given enough time, effort and intelligence? Obviously. But that is precisely the point in question, no? At what point does corruption overwhelm the entire system, that is, eliminate the confidence on which it is based?

I would think my figures have probably been on the highly optimistic side, but I am not sure about the worldwide daily transactions. It is really the percentages that matter. I believe worldwide compliance of 80% is generally conceded to be highly optimistic, and successful date processing at 99% would be a miracle.

Andy, have I caught the gist or no? Are the likely numbers better or worse? Others, fire away. IMO, this is one of the most important threads on the NG in the past month or more, because its applicability extends logically far beyond banking and brings the systemic nature of the problem back into play.

There will be lots of authentic, good, local Y2K news coming in over the next six months. But it don't matter none if Andy is correct, as I believe he is.

-- BigDog (BigDog@duffer.com), February 23, 1999.


Correction: for point 2. above, I meant to say, "date calculation issues," of course.

-- BigDog (BigDog@duffer.com), February 23, 1999.

In his remarks, Greenspan said the Fed had set up a test environment last summer, and over 6000 institutions have used it since. It wasn't clear whether these tests included EDI/EFT testing.

Can anyone give a clearer picture of how rapidly such testing can help 1) develop better error trapping, 2) increase reliability in contents of transactions, or 3) flag serious offenders?

I suppose it's possible that some of the questions I originally asked Andy (in I think the third post) have been answered, more or less. I can see the potential for big big trouble, but still lack a feel for its likelihood pending realistic testing.

-- Flint (flintc@mindspring.com), February 23, 1999.


As for testing, check out the SWIFT site, and others.

As for the rest, Andy your one statement says it all.

You postulate that these date arithmetic errors are going to overwhelm the system, causing the collapse of world-wide banking.

You've made these claims at multiple times, in multiple places.

Yet you cannot even build one realistic example of such a failure, and defend it? No question, bad data will be transmitted, just as it is today. But if you cannot describe one realistic scenario, how can you claim enough of these errors will be generated to overwhelm the system?

Hoffmeister

-- Hoffmeister (hoff_meister@my-dejanews.com), February 23, 1999.


Flint --- Agree with your point about testing, but suspect that late testing (4Q?) plus the usual PR obfuscation is likely to prevent the ability to make a hard assessment beyond the kind we're all trying to consider in this thread .... we'll know somewhat more, but how much more?

-- BigDog (BigDog@duffer.com), February 24, 1999.

Rick,

For the record, yes, I did work for such an institution, and believe it or not there are many out there that handle sums greater than that. It's a big world out there.

Andy,

I have tried to explain something without going into the technical details of how the systems function, are programmed, or designed. It is obvious by your last statement that no matter what anyone says you are going to refuse to believe them, and that you are convinced the end is at hand. So I can only believe that your only purpose for posting here is to spread additional panic to whoever listens, even though you yourself do not have evidence to support it.

Sorry, what a waste of time and effort. Be constructive, not destructive.

-- ???? (?@?.?), February 24, 1999.


Hoffmeister and ???? ---- Would your challenge to my own post be that there are no date calculation errors that will make it through? That there will be < 1%? That the ones which make it through won't propagate? That they will be found quickly and the load on the system reduced before it becomes threatening?

While I myself would appreciate more examples, it is axiomatic that date calculations take place and are transmitted worldwide - or do you challenge that as well?

On a related note, how many date format errors do you expect, quantitatively, from compliant-tested systems? 1%? .01%? .001%? Why?

Also, what is your best guess for worldwide banking system compliance? I have guessed 80%. 90%? 95%? 70%? Obviously, these are guesses. Just as obviously, your guesses will affect your judgments, just as they do Andy's or mine. If you assume 99% worldwide system compliance and 99.9% date format/date calculation compliance, say so.

Cough it up, quantitatively. I tried to give the most positive numbers possible, given what is known.

Otherwise, you guys are the ones who are refusing to be constructive.

-- BigDog (BigDog@duffer.com), February 24, 1999.


Big-Dog:

Let's take a look at your examples.

First, the postulated date format errors. A non-compliant bank will not necessarily have format errors. If data transfer is done using 2-digit years, there would be no reason for formatting errors. On the other hand, if 4-digit years are used, one must then assume that the '19' was being hard-coded; while that may be the case in some instances, it is just as likely that a form of windowing was applied. All of the above assumes the "all or nothing" approach as well. Being non-compliant as a whole does not mean that many areas have not been addressed. You also seem to like throwing out assumptions; on what do you base a 20% bankruptcy figure? While fix-on-failure is not a recommended method, correcting this form of error in the source system would be trivial.
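For what it's worth, the kind of windowing I mean can be sketched in a few lines; the pivot year of 50 is an arbitrary illustrative choice, not any bank's actual standard:

```python
def expand_year(yy, pivot=50):
    """Pivot-year windowing: two-digit years below the pivot are read
    as 20xx, the rest as 19xx. The pivot of 50 is illustrative only."""
    return 2000 + yy if yy < pivot else 1900 + yy

assert expand_year(0) == 2000    # '00' becomes 2000, not 1900
assert expand_year(99) == 1999
assert expand_year(49) == 2049
assert expand_year(50) == 1950
```

A system windowed this way can keep exchanging 2-digit years indefinitely without any format change at all, which is exactly why "non-compliant" does not automatically mean "format errors".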

As for date calculation errors, again, you throw percentages out as assumptions. To be honest, I do have a hard time visualizing date calculations and their effect on inter-bank transfers. I can readily imagine erroneous transactions generated in areas such as POs and Invoices, but those also are typically heavily monitored. Andy has in the past used currency conversions as an example; depending on the system, I suppose date lookups may be involved. This may be just due to my unfamiliarity with inter-bank transfers; my dealings with banks and electronic data transfers have always been on the external side, payments and lockbox receipts.

Again, just saying there will be errors does not make them happen, which is why I have requested examples. That none have been forthcoming merely underscores the fact that, while some may occur, even your guess at 1% is probably wildly overstated.

The only reasonable example I can guess at is in currency conversions, and that is why I bring up the Euro. In particular, the fact that some major systems seem to have generated large numbers of invalid transactions. That this caused problems for the institution is without question; yet they have not gone bankrupt, and those transactions did not systematically spread and infect the European Banking system, corrupting it.

Hoffmeister

-- Hoffmeister (hoff_meister@my-dejanews.com), February 24, 1999.


Hoffmeister -- I will do my best to answer your questions, but you didn't answer mine: what is your estimate of worldwide compliance by end of year, etc? Your judgments as revealed on those little, uh, details are hardly irrelevant.

For instance, I postulate 1% bugginess of format interfaces and date calc pass-throughs. Why would that be "probably wildly over-stated," given IT history? Actually, given the complexity and lateness, if done at all, of WORLDWIDE systems testing, my estimate is, perhaps, wildly under-stated. What is your estimate? Why?

I do agree my 20% bankruptcy estimate is very soft, though note that I estimated worldwide compliance of the entire system at 80% (I agree compliance isn't necessarily all-or-nothing). However, if one assumes that 15% of the world's banks only get, say, 30% of the way towards compliance (hardly a wild assumption), bankruptcies in this range are not impossible. Again, what is your estimate on bankruptcies? None? .01%? Still waiting ....

With respect to currency conversions, from earlier in the thread, "Visa and the member banks make complicated currency conversion calculations in the trillions every day. These will be the Achilles heel." And this challenge to that, "You are not wrong per se with compliant US Banks submitting US Dollars to compliant Visa programmes, it's the other 140,000 endpoints that feed into Visa/Amex/SWIFT, etc that will be the problem. Who precisely knows what they will be submitting - the data fields may synch up ok, but is the data valid?"

You bring up the euro again in this respect, so I will repeat Andy's response to you earlier in this thread,

"Currency conversion as in euro-conversion does not extrapolate forward .... all conversion is realtime, not batch, and takes place in a matter of seconds on the GMT date ... date arithmetic errors are NOT occurring NOW - they WILL occur in 2000."

Point: regardless of the general applicability of the euro (about which we largely disagree), there is zero applicability with respect to the core theme of this thread: date calculation errors.

Unless you are postulating 100% compliance (again, I'm waiting on your specific, quantitative judgments), the burden remains on you to show why Andy's scenario is not, in principle, all too possible.

-- BigDog (BigDog@duffer.com), February 24, 1999.


(Note: apologies beforehand if I screw up the HTML)

Big Dog:

Hoffmeister -- I will do my best to answer your questions, but you didn't answer mine: what is your estimate of worldwide compliance by end of year, etc? Your judgments as revealed on those little, uh, details are hardly irrelevant.

I didn't answer because compliance, in general, is not really all that important to this thread, depending on how you define the term. Not enough information is available to even begin to answer this on a worldwide basis. Where the information is available, here in the US, my answer would be that I expect upwards of 99% of the financial institutions to have either compliant or remediated systems in place by the rollover to handle inter-bank transfers. I base this on publicly available information, even the latest Weiss survey. Note that Weiss in no way implies these systems won't be done, just that they were not done by the 32% as of December 1998.

Again, information available internationally is just not conclusive. Even the article on the Senate report states that conclusions on Japanese Banking were made based on consultants, because the Japanese indicated far lower repair costs. Other sources explain this due to upgrades already being in place.

For instance, I postulate 1% bugginess of format interfaces and date calc pass-throughs. Why would that be "probably wildly over-stated," given IT history? Actually, given the complexity and lateness, if done at all, of WORLDWIDE systems testing, my estimate is, perhaps, wildly under-stated. What is your estimate? Why?

Given IT history? Since we don't have a starting point, as to the current percentage of "bugginess", I really have no way of estimating what it will be. For date formatting, this 1% would be far too high, as I described earlier. We'll get to the date calculations later.

I do agree my 20% bankruptcy estimate is very soft, though note that I estimated worldwide compliance of the entire system at 80% (I agree compliance isn't necessarily all-or-nothing). However, if one assumes that 15% of the world's banks only get, say 30% of the way towards compliance (hardly a wild assumption), bankruptcies in this range are not impossible. Again, what is your estimate on bankruptices? None? .01%? Still waiting ....

Again, you are probably right that some bankruptcies will occur. But again, on what do you base this estimate? The only reasonable comparison is with the Dutch Bank having Euro problems, and they have not declared bankruptcy. So my estimate would have to be somewhere less than 1%, worldwide. This is obviously just a guess, as is your estimate, and has absolutely no basis. But you seem to require number guessing, so I guess I'll play.

With respect to currency conversions, from earlier in the thread, "Visa and the member banks make complicated currency conversion calculations in the trillions every day. These will be the Achilles heel." And this challenge to that, "You are not wrong per se with compliant US Banks submitting US Dollars to compliant Visa programmes, it's the other 140,000 endpoints that feed into Visa/Amex/SWIFT, etc that will be the problem. Who precisely knows what they will be submitting - the data fields may synch up ok, but is the data valid?"

Again, no realistic scenario is given. Take the currency conversion. As far as I know, no date "calculation" is required. I researched this in SAP, for example. Now, SAP uses 4-digit years, but I assume the mechanism is fairly similar, in that an exchange rate and effective date are entered in a lookup table. The logic would do a comparison on the date in question, such as (current date) >= (effective date). Nothing implies this must produce invalid data, even using 2-digit years. If no effective date was available in 00, the program would error, not finding an exchange rate, unless very poorly written. If an exchange rate *is* available with a 00 year, this comparison would work.

Unless I am missing something regarding currency conversions, even given no remediation, I would expect a very small percentage of transactions actually producing erroneous data.
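A minimal sketch of the lookup I am describing (the table, dates, and rates are invented for illustration; this is not SAP's or any bank's actual code):

```python
# Effective-dated exchange rates, keyed by (yy, mm, dd) with 2-digit years.
RATES = [
    ((99, 7, 1), 1.60),     # effective 1999-07-01
    ((99, 12, 15), 1.62),   # effective 1999-12-15
]

def rate_on(current):
    """Return the latest rate whose effective date <= the current date."""
    candidates = [(eff, rate) for eff, rate in RATES if eff <= current]
    if not candidates:
        raise LookupError("no exchange rate effective on this date")
    return max(candidates)[1]

assert rate_on((99, 12, 31)) == 1.62   # 1999 lookups work fine

# On 2000-01-03 the raw 2-digit year makes (0, 1, 3) sort *before*
# every effective date, so the lookup finds nothing. The program
# errors out rather than silently producing a bad rate -- the
# "unless very poorly written" case noted above.
try:
    rate_on((0, 1, 3))
except LookupError:
    pass
```

The point being: the likely failure mode here is a loud error, not a quietly wrong number.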

You bring up the euro again in this respect, so I will repeat Andy's response to you earlier in this thread,

"Currency conversion as in euro-conversion does not extrapolate forward .... all conversion is realtime, not batch, and takes place in a matter of seconds on the GMT date ... date arithmetic errors are NOT occurring NOW - they WILL occur in 2000."

Point: regardless of the general applicability of the euro (about which we largely disagree), there is zero applicability with respect to the core theme of this thread: date calculation errors.

One more time. I bring up the Euro for the following reasons:

1) With no Y2k-related error scenario postulated, the errors that will arise must be attributed to residual errors - things either fixed badly, missed, or causing other errors.

2) Residual errors are a result of software changes, not Y2k. Therefore, within context, comparison of a like software project, affecting the exact same systems as proposed for Y2k, is more than valid.

3) Based on comments by people such as Capers Jones, the changes relating to the Euro were more complex than for Y2k, which would presume that more residual errors and failures should occur with the Euro than with Y2k.

4) I don't completely discount Capers Jones's estimates as to the number of Euro errors, at least not yet. However, it is safe to say that these errors have not collapsed the European Banking system, and they definitely did not propagate through the system, causing a collapse.

5) The one fairly documented case of a non-Euro compliant bank also stands as an example. If I remember, they encountered problems with something like 10% of their transactions. Problems, yes. Bankruptcy, no. Given that example, I would say one would have to estimate a higher than 10% error rate, before talking of bankruptcies.

Unless you are postulating 100% compliance (again, I'm waiting on your specific, quantitative judgments), the burden remains on you to show why Andy's scenario is not, in principle, all too possible.

If you read through the thread, I in no way postulate 100% compliance. Systems do not work at 100% today. And I disagree with the burden of proof argument. The system works today, at less than 100%. Given no realistic scenario, how can one postulate overwhelming errors?

Hoffmeister

-- Hoffmeister (hoff_meister@my-dejanews.com), February 24, 1999.


????, This is what you accused me of doing.......

"I have tried to explain something without going into the technical details of how the systems function, are programmed, or designed. It is obvious by your last statement that no matter what anyone says you are going to refuse to believe them and you are convinced the end is at hand. So I can only believe that your only purpose for posting here is to spread additional panic to whoever listens even though you yourself do not have evidence to support it."

???? - I have been studying this problem for several years now. I have read all the articles in the Gary North Imported Data folder amongst many others - have you? I have worked at the sharp end in Banking for five years, I have worked internationally in IT for 21 years, and I have dealt with buggy transactions, tracing them on their journey from entity to entity around the world on a day-to-day basis - have you? I described Alan Greenspan's latest shocking statement on Monday of this week - I'll repeat it again - he said, and I quote

"The worst possibilities I think are behind us and most of those are really interfaced with the rest of the world where we don't really know how well they are managing."

Now in light of this, where our Banking point man effectively says that the 11,000 US Banks have no idea how well the other 100,000+ Banks in the world are managing, you wonder why I'm concerned???

He said, and I quote "the WORST possibilities are really interfaced with the rest of the world."

Do you not find that a teensy weensy bit alarming ???? or does the USA exist as an island unto itself? The fact that the 11,000 allegedly compliant US banks have to interface with 100,000 + Banks where Greenspan has no idea "how well they are managing"... the fact that invalid data gets through the system we have in place now, the fact that this invalid data streaming into the 11,000 US Banks on Monday morning, the 3rd of January 2000, will be processed by these 11,000 US Banks AS IF IT WERE VALID... The fact that this invalid data due simply to y2k bugginess will EXPLODE magnitudinally on the 3rd of January 2000 as compared to the 31st of December 1999... The fact that the sheer volume of this invalid data if not caught and fixed IMMEDIATELY will propagate throughout the world-wide financial system like a systemic world-wide Ebola outbreak...

I take umbrage at your statement above that the only reason I'm posting is "to spread additional panic to whoever listens even though you yourself do not have evidence to support it" - your rather insulting words to me and this group.

???? I don't even know who you are, you come into this forum with a fake e-mail id and despite all the sources I've quoted and evidence I've presented in this thread and others that I've cited (go back and read them) you have the audacity to accuse me of spreading panic?

I've been saying on this forum and csy2k that Bankers have not come up with a solution to this problem for a long time now. So ????, whoever you are, I'll ask you this question (AGAIN), and if you can answer it I will retract publicly everything I've said on this matter both here and on csy2k...

This is my question, which both you and Hoff and ALL the Banking shills that monitor this forum and csy2k have conveniently IGNORED...

"How will 100% of allegedly compliant Banks world-wide co-ordinate filters or firewalls to prevent the propagation of y2k induced corrupt imported data from non-compliant Banks and entities world-wide?"

I CHALLENGE ????, Hoff or anybody to answer this question and present a technically workable solution to this dilemma, as well as a politically acceptable way to persuade every organization on earth to adopt it and apply it in the time remaining (circa 150 - 180 working days left), including all those that have started their repairs using conflicting standards and approaches.

If you guys cannot do this then my and many many other experts' consistent prediction is that corrupt data will HOSE THE SYSTEM OF BANKING SYSTEMS. If firewalls and filters are not put into place then this WILL happen - don't just take my word for it read ALL the articles on imported data - it is a very real and devastatingly catastrophic problem for all linked computer systems.

So come on, cut the personal crap, just answer this one itty bitty little old question.

Hoff, all the to-ing and fro-ing over the specifics and semantics of date calculation, windowing, date arithmetic etc. essentially are side issues.

The ULTIMATE issue is the question that I postulated above and have postulated many many times without even ONE attempt at an answer from ANYONE, let alone the Banking shills.

I'm waiting, I'm not holding my breath.

Best regards,

Andy

Two digits. One mechanism. The smallest mistake.

"The conveniences and comforts of humanity in general will be linked up by one mechanism, which will produce comforts and conveniences beyond human imagination. But the smallest mistake will bring the whole mechanism to a certain collapse. In this way the end of the world will be brought about."



-- Andy (2000EOD@prodigy.net), February 25, 1999.


Andy, Hoff, ???. I won't even try to compete in your league regarding all the above. I just have one question--While all those major banks in Europe were preparing for the Euro conversion....Who was getting them ready for y2k?? It has taken our banking industry 4-6 years to get this far (and I don't think it's far enough) so how can Europe get ready in 12 months? Lobo

-- Lobo (hiding@woods.com), February 25, 1999.

Lobo,

You are of course 100% correct on this point and I have posted exactly your thoughts several times regarding the Euro cutover.

The introduction of the Euro on January the 1st 1999 was mandated by know-nothing bureaucrats at the Maastricht Treaty meeting in, I believe, 1992.

This was a reasonable enough decision at the time as in all fairness y2k was on nobody's radar. However as we got nearer to the late '90's it became quite obvious that this huge Euro software project should have been shelved and all resources directed towards y2k remediation. Of course this never happened as the god-like bureaucrats in Brussels are amazingly stupid and always resist any sort of change whatsoever - that's why they are bureaucrats in the first place, rule number one, place brain in bucket...

Consequently Europe will suffer catastrophically at rollover. France is way behind the curve, Italy has just formed a y2k committee (whup-ee-dooo!), Germany is woefully ill-informed, with 40% of its power coming from fix-on-failure Russia...

The history books of the future will recall that the Euro took priority over y2k - how dumb can a no-brainer decision like this be?

Hey - the Europeans have made their bed...

At least the Brits had the sense to stay the hell out, but all their financial institutions had to do all the code changes anyway just to compete in the arena, so the UK is in trouble too.

What a stramash.

-- Andy (2000EOD@prodigy.net), February 25, 1999.


Still no example, Andy? I mean, c'mon, after the years of research, combined with the many many other experts, you must be able to come up with at least one, right?

Hoff, all the to-ing and fro-ing over the specifics and semantics of date calculation, windowing, date arithmetic etc. essentially are side issues.

Huh? Isn't this what you're proposing? That these transactions are going to start propagating through the system, with bad data? But these are side issues?

I CHALLENGE ????, Hoff or anybody to answer this question and present a technically workable solution to this dilemma, as well as a politically acceptable way to persuade every organization on earth to adopt it and apply it in the time remaining (circa 150 - 180 working days left), including all those that have started their repairs using conflicting standards and approaches.

Well, Andy, again without some specifics, which you seem unable to provide, it is hard to give a technical solution. But again, what I've seen is this:

1) Prior testing with partners. This allows a relative degree of comfort that, with at least your major partners, the interfaces will work.

2) Statistical sampling of transactions. Not complete by any means, but will tend to find systematic errors that exist.

Not being involved directly with banking, I can only base my opinions on previous experience with interfaces. The sampling could occur prior to actual posting of transactions in the recipient system.
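As a rough sketch of the sampling idea in point 2 (the 1% rate and the plausibility check are invented for illustration; real checks would be far more involved):

```python
import random

def plausible(tx):
    """Crude plausibility check on one transaction: value dates
    should fall in a sane window around the rollover (illustrative)."""
    return 1995 <= tx["value_year"] <= 2005

def sample_batch(batch, rate=0.01, seed=42):
    """Inspect a random ~1% of incoming transactions before posting.
    A systematic error (e.g. every date mapped to 1900) will surface
    even in a small sample."""
    rng = random.Random(seed)
    sampled = [tx for tx in batch if rng.random() < rate]
    return [tx for tx in sampled if not plausible(tx)]

# A feed in which every year was mangled to 1900 fails the sample:
bad_feed = [{"id": i, "value_year": 1900} for i in range(10_000)]
assert sample_batch(bad_feed)          # systematic error caught

# A clean feed sails through:
good_feed = [{"id": i, "value_year": 1999} for i in range(10_000)]
assert sample_batch(good_feed) == []
```

Sampling like this won't catch a one-off bad record, but the claim here is about *systematic* corruption, and that is exactly what sampling surfaces quickly.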

But I have to keep returning to the basic fact, that without some example of just what type of errors will occur, how do you expect an answer as to how to address them? And how will these errors propagate through the system? The Dutch ABN/Amro bank apparently experienced about a 5% failure rate of international transactions, yet has not gone bankrupt. There apparently have been problems with other commercial banks, as well. Why did these not "infect" the banking system?

As for your statement about conflicting standards and approaches, again, what is your concern? Or is this just another attempt to cloud the issue?

Hoffmeister

-- Hoffmeister (hoff_meister@my-dejanews.com), February 25, 1999.


Gee, guys, would it not be nice if Y2K were a non-issue? I mean, if banks could say something like, "Oh, yeah, the Year 2000 problem -- of course we saw it coming, and we took care of that years ago". And they did, but they didn't, and so here we are today, February 25, 1999, debating as to what is going to happen.

The banking system, as a whole, is not ready today. It probably will not be ready by 1/1/2000. There is a plausible risk that banks that are not ready will adversely affect those that are (if any), since independent of date formats per se, banks still rely on other banks for financial information that might be incorrect due to Y2K problems. No amount of quoting Greenspan, or presenting case histories of other software problems (none that even remotely are in the same league as Y2K), can change these simple facts.

-- Jack (jsprat@eld.net), February 25, 1999.

All I'm asking for is one plausible example of a Y2k error that will propagate through the system. Should that be so hard?

I mean, these theories of imported data spreading and corrupting the banking system, causing its collapse, have been floating around for quite a while. There must be an originating basis for them, right?

Given an example, we can then look at how it would propagate, and what, if any, mechanisms are in place to keep it from happening.

Spreading grandiose theories of collapse may make for entertaining reading, but without some underlying basis, it just serves to increase the FUD surrounding Y2k. Y2k is a big enough problem on its own.

Hoffmeister

-- Hoffmeister (hoff_meister@my-dejanews.com), February 25, 1999.


You've got it backwards, Hoffmeister: the burden of proof, as it were, is on the banks to demonstrate beyond a reasonable doubt that they will be ready for Y2K and that our assets will not turn to electronic mush on or about 1/1/2000. Maybe someone in the banking industry can construct the example that you seek, and I would agree that it would be informative to have. But where our very lives are at stake, quite frankly (and make no mistake about it, things can break down very quickly when basic monetary transactions cease), the only prudent and responsible course is to assume the worst until proven otherwise. (And, of course, this extends to electricity, clean water, food supply, etc.)

-- Jack (jsprat@eld.net), February 25, 1999.

The point behind discussions such as this is to determine the likelihood of events. The amount of misinformation that flows regarding Y2k, I believe, dwarfs anything we have seen.

Y2k is a serious problem. The point is to try and separate the fact from fiction, and be able to base your decisions on the facts. This is difficult, not only because of the PR spin, but also because of the FUD being spread.

Hoffmeister

-- Hoffmeister (hoff_meister@my-dejanews.com), February 25, 1999.


Still no example, Andy? I mean, c'mon, after the years of research, combined with the many many other experts, you must be able to come up with at least one, right?

####### Hoff. let me preface my reply with a reference to the following link. Take a look at it (although I'm sure you know it backwards...)

The 25 Rules Of Disinformation - The Politicians Credo

Disinformation Strategies 101

I'm going to answer your points anyway, although I've answered them all before ad nauseam, as you well know.

Hoff. Seeing as you refuse to answer adequately (or even remotely approximate an acceptable answer to) the question that I've posed for you over and over, I will give you an example, which you will find below. Take a look at it. You will find that it is a realistic example of what can happen. Remember - each Bank has remediated its own code. There is no set standard to meet, and each Bank has approached the fix with different techniques, as you well know. The code is broken.

Therefore each Bank that is not compliant will imprint its own unique spin on its data. Some will be garbage. Some will look perfectly OK but still be garbage. It is what the receiving bank does with the valid-looking data that is key. As you well know. #######

Hoff, all the to-ing and fro-ing over the specifics and semantics of date calculation, windowing, date arithmetic etc. essentially are side issues.

Huh? Isn't this what you're proposing? That these transactions are going to start propagating through the system, with bad data? But these are side issues?

####### Yes Hoff. Essentially they ARE side issues. It matters not a whit how a Bank has tried to fix its code if the end result is mangled data. Do you understand this concept? I don't think you do, as I have to keep repeating this over and over. It's the data, Hoff. It will look OK, it will pass all the edits. But it will be bogus. DO YOU UNDERSTAND? THE DATA WILL LOOK OK BUT IT WILL BE BOGUS. #######
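Here is a toy illustration - a hypothetical, unremediated interest accrual, not any actual bank's code. The result is a perfectly well-formed amount that would pass every format edit, yet it is wildly wrong:

```python
def days_between_2digit(y1, d1, y2, d2):
    """Naive day count in a system that stores 2-digit years
    (hypothetical unremediated code): 365*(year diff) + (day-of-year diff)."""
    return 365 * (y2 - y1) + (d2 - d1)

def accrued_interest(principal, annual_rate, days):
    return round(principal * annual_rate * days / 365, 2)

# Accrue from 1999 day 360 to 2000 day 3: year 99 rolls over to year 00.
days = days_between_2digit(99, 360, 0, 3)       # -36492 instead of +8
bogus = accrued_interest(1_000_000, 0.05, days)
right = accrued_interest(1_000_000, 0.05, 8)

# 'bogus' is a well-formatted signed amount that passes the edits,
# but it is roughly -4,998,904 where the correct figure is 1,095.89.
print(bogus, right)
```

No filter keyed on record layout catches this: the field is a valid number in a valid format. Only a plausibility check on the *value* would flag it.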

I CHALLENGE ????, Hoff or anybody to answer this question and present a technically workable solution to this dilemma, as well as a politically acceptable way to persuade every organization on earth to adopt it and apply it in the time remaining (circa 150 - 180 working days left), including all those that have started their repairs using conflicting standards and approaches.

Well, Andy, again without some specifics, which you seem unable to provide, it is hard to give a technical solution. But again, what I've seen is this:

####### Hoff, please don't try to fool this NG. They are not stupid. I've said that I posted specifics on csy2k, and those interested in this thread have checked this out. However, I will give you an example - the same one I posted before, which you, BKS and Don Scot failed to pick apart. The example is valid, but essentially there will be a near-infinite number of routes bad data can take when received and processed by an allegedly compliant or non-compliant Bank. As you well know. #######

1) Prior testing with partners. This allows a relative degree of comfort that, with at least your major partners, the interfaces will work.

####### This is an outright lie. The 11,000 US Banks may have tested with one or two or even three major partners. This does not cut it at all. Each of these 11,000 Banks can potentially receive bad data from over 100,000+ Banks world-wide - banks whose progress, as Greenspan has admitted, the US Banking community has no idea about. This is in addition to interfacing among themselves, 11,000 banks. So the US Banks HAVE NOT TESTED REMOTELY ADEQUATELY AT ALL.

THEY HAVE NOT TESTED IN THE REAL WORLD. THEY ARE IGNORING THE POTENTIAL DANGER FROM OVER 100,000+ OVERSEAS BANKS. EVEN THE US BANKS WILL HAVE EXTREMELY HIGH RATES OF CODE PROBLEMS AT ROLLOVER. THERE WILL BE A LARGE PERCENTAGE OF US BANKS THAT WON'T MAKE THE ROLLOVER IN A STATE OF Y2K READINESS. #######

2) Statistical sampling of transactions. Not complete by any means, but will tend to find systematic errors that exist.

####### This is garbage, Hoff, and you know it. Why repeat these two hoary old chestnuts? Statistical sampling of an explosion of errors will NOT solve those errors. This is absolutely laughable, Hoff. Maybe you think this is the famed silver bullet? #######

Not being involved directly with banking, I can only base my opinions on previous experience with interfaces. The sampling could occur prior to actual posting of transactions in the recipient system.
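One way such pre-posting sampling might work (a hypothetical sketch; the plausibility check and the 1% tolerance are invented for illustration, and no bank's actual procedure is implied): hold back a random sample of each incoming batch, and quarantine the whole batch for review if the sample's error rate exceeds the tolerance.

```python
# Hypothetical sketch of pre-posting statistical sampling. The
# plausibility check and the 1% tolerance are illustrative assumptions.

import random

def batch_needs_review(batch, sample_size, looks_wrong, max_error_rate=0.01):
    """Return True if a random sample of the batch suggests a systematic
    error, i.e. the batch should be held rather than posted."""
    sample = random.sample(batch, min(sample_size, len(batch)))
    errors = sum(1 for txn in sample if looks_wrong(txn))
    return errors / len(sample) > max_error_rate

# Illustrative check: flag interest postings more than 10x the expected amount.
looks_wrong = lambda txn: txn["amount"] > 10 * txn["expected"]

batch = [{"amount": 5.55, "expected": 5.55}] * 95 \
      + [{"amount": 5555.55, "expected": 5.55}] * 5  # 5% corrupted

print(batch_needs_review(batch, sample_size=len(batch), looks_wrong=looks_wrong))
# True -- the 5% sample error rate exceeds the 1% tolerance, so hold the batch
```

As Hoffmeister frames it, this catches systematic errors (the same bug hitting many transactions) rather than isolated ones; it stops a bad batch from posting, it does not repair the underlying code.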

####### COULD??? You are not making any sense whatsoever. I don't think you have any idea how inter-related Banks are in this system of systems. Your paragraph above is absolute garbage. What if the sampling finds that all the data is garbage? Then it won't get posted, will it? And if this happens at enough Banks then you will not have a system any more, will you? But the fact is that this mythical sampling will not take place at all, as you well know. Why??? Because there is no one coordinating all this, no one enforcing it. How are you gonna insist that the Bank of Tonga Taboo or Timbuktu is going to statistically sample all of its transactions on Monday the 3rd of January 2000? Total codswallop, my friend. #######

But I have to keep returning to the basic fact, that without some example of just what type of errors will occur, how do you expect an answer as to how to address them? And how will these errors propagate through the system? The Dutch ABN/Amro bank apparently experienced about a 5% failure rate of international transactions, yet has not gone bankrupt. There apparently have been problems with other commercial banks, as well. Why did these not "infect" the banking system?

####### Because the back-office firefighting folks were ready and waiting at the Euro cutover to do just what they were employed to do. Fix the code. With electricity and infrastructure in place. With a small number of Banks affected in comparison to the y2k rollover. With a small number of endpoints feeding in transactions.

The BIG difference at the y2k rollover will be the HUGE volume of corrupt transactions flowing around the system. There may or may not be the infrastructure in place to allow the Coverage and back-office staff to do their job. The volume, however, will simply overwhelm them. Remember that the 11,000 mix of compliant and non-compliant Banks in the US will be receiving corrupt data from largely unremediated banks world-wide, instantaneously, on Monday morning. The longer the corrupt data sloshes around the system, the more corrupt it will become - much like a "core-walker" in mainframe parlance. #######

As for your statement about conflicting standards and approaches, again, what is your concern? Or is this just another attempt to cloud the issue?

####### No Hoff it is YOU that is clouding the issue. Read the disinformation link above.

Hoff - you are exhibiting all the signs of the classic disinformation shill.

Anyone reading this thread can make up their own minds.

You have not answered the question I posed to you.

You have repeated two lame chestnuts - these do not answer my question.

You keep trying to side-track me from the issue at hand.

Your tactics are pretty obvious Hoff.

This is an example of data from a non-compliant Bank A corrupting data at an allegedly-compliant Bank B.

The allegedly compliant Bank B SIMPLY SHOULD NOT HAVE ACCEPTED THE DATA FROM BANK A, BUT IT DID. IF THIS EXAMPLE FLIES, THEN ANY OTHER SCENARIO WILL FLY TOO. BAD DATA BEGETS BAD DATA, EXPONENTIALLY, THE LONGER IT REMAINS UNCAUGHT.

You do not need a degree in rocket science to figure this out Hoff.

From csy2k - check out all the posts prior to and after this one.

Link at

http://ww2.altavista.com/cgi-bin/news?id@780nkq%24bo3u%241@newssvr04-int.news.prodigy.com

"> Apparently you're not bothering. I thought you were going to propose a hypothetical transaction originating at an unremediated bank that would somehow corrupt the system at a remediated bank.
>
> --bks

Bradley - I see you are being very specific with the term "corrupt the system at the remediated bank..."

Now I hope you're not going to try and play a game of semantics with me on this, although I suspect you will.

I was planning on a full question tomorrow - but since you asked, why not check out this scenario for starters.

More to follow this week. Example -- Bank A (unremediated Bank of Potsylvania) sends Bank B (allegedly compliant ANZ Bank) a funds transfer with the following data:

Date: 2000-01-06
Time: 09:00:00
Type of transaction: Transfer checking-to-checking
Source account: Bank A account number 123-456-789
Destination account: Bank B account number 987-654-321
Amount: $5,435.43

Looks okay. All field data correctly formatted, all numeric fields within bounds. Date is in ISO format with four-digit year. Bank A account 123-456-789 had to have sufficient funds in order for the transfer software to send the transfer request. All fields and parameters pass EDI checks.
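A minimal sketch of edit-checking of this kind (the field patterns and bounds are illustrative assumptions, not any real EDI standard) shows why every check passes: the record is syntactically perfect, and the corruption is in the value, not the format.

```python
# Sketch of a syntactic edit-check (hypothetical field layout and bounds).
# It can verify form, but not whether the amount was corrupted upstream.

import re

def passes_edit_checks(txn):
    """Return True if the transfer record is well-formed and in bounds."""
    checks = [
        re.fullmatch(r"\d{4}-\d{2}-\d{2}", txn["date"]),    # ISO date, 4-digit year
        re.fullmatch(r"\d{2}:\d{2}:\d{2}", txn["time"]),
        re.fullmatch(r"\d{3}-\d{3}-\d{3}", txn["source"]),  # account number format
        re.fullmatch(r"\d{3}-\d{3}-\d{3}", txn["dest"]),
        0 < txn["amount"] < 10_000_000,                     # bounds only
    ]
    return all(checks)

corrupted = {"date": "2000-01-06", "time": "09:00:00",
             "source": "123-456-789", "dest": "987-654-321",
             "amount": 5435.43}  # should have been 5.43

print(passes_edit_checks(corrupted))  # True -- valid in form, wrong in fact
```

Catching the bad amount would require semantic context (the account's history, the expected bill amount) that the receiving bank's interface does not have.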

Except --- What is not apparent in the transfer data is that Bank A account 123-456-789 was incorrectly credited on 2000-01-06 at 08:31:00 with interest in the amount of $5,555.55 instead of the correct amount, $5.55, because of a Y2k bug. Thus its available balance at 09:00:00 is $5,550.00 larger than it should be. At 08:00:00 the Bank A account's correct balance was $321.00, and at 08:59:59 its balance should have been $326.55, quite insufficient for a transfer of $5,435.43 out of it. But because of that not-yet-detected Y2k bug, that Bank A account's balance appeared to have been $5,876.55 at the time the transfer was requested, and thus was deemed to have had sufficient funds.

Oh, and the Bank A account owner wasn't trying to get away with ill- gotten gains -- she didn't yet know that her account balance was so high, and she was trying to transfer merely $5.43 (an automated request to pay a monthly bill). A Y2k bug related to the other Y2k bug caused her requested transfer amount to be changed from $5.43 to $5,435.43!

To recap the amounts:

Bank A account 123-456-789 balance as of 08:00:00 = $321.00
Correct interest amount to credit at 08:31:00 = $5.55
Correct account balance at 08:59:59 = $326.55
Correct transfer amount at 09:00:00 = $5.43
Correct account balance at 09:00:01 = $321.12

Actual (incorrect) interest amount credited at 08:31:00 = $5,555.55
Actual (incorrect) account balance at 08:59:59 = $5,876.55
Actual (incorrect) transfer amount at 09:00:00 = $5,435.43
Actual (incorrect) account balance at 09:00:01 = $441.12

Bank B account 987-654-321 is credited with $5,430.00 too much as a result of the Y2K bugs. Bank A account 123-456-789 winds up with $120.00 too much as a result of the Y2K bugs. In the given example, Bank A was not Y2K-compliant. Bank B was Y2K-compliant, but now has incorrect data not detectable by edit-checking of the transfer from Bank A.
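The recap arithmetic checks out, as a quick sketch confirms (the two Y2K corruptions are injected as givens, since the scenario stipulates them):

```python
# Verifying the recap figures. The two corruptions (wrong interest,
# wrong transfer amount) are taken as givens from the scenario above.

from decimal import Decimal as D

opening = D("321.00")

# Correct path: $5.55 interest in, $5.43 transfer out
correct_balance = opening + D("5.55") - D("5.43")      # $321.12

# Buggy path: $5,555.55 interest in, $5,435.43 transfer out
buggy_balance = opening + D("5555.55") - D("5435.43")  # $441.12

excess_at_bank_a = buggy_balance - correct_balance     # $120.00
excess_at_bank_b = D("5435.43") - D("5.43")            # $5,430.00

print(correct_balance, buggy_balance, excess_at_bank_a, excess_at_bank_b)
# 321.12 441.12 120.00 5430.00
```

`Decimal` is used rather than floats, since exact cents matter in monetary arithmetic.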

Bank B's database is now corrupted - yes or no? It has inaccurate data.

Bank A's database is now corrupted - yes or no? It also has inaccurate data.

To further muddy the waters, the account at Bank B has an automatic order to post 20% of incoming funds to charity at unremediated Bank C. Obviously the wrong amount gets sent, and who knows what Bank C will do with the data. It will be an almighty mess to manually trace back and fix all transaction flows through back-office intervention. Multiply this back-office intervention at all entities world-wide and you have chaos.

Bank C's database is now corrupted - yes or no? It has inaccurate data.

And so on. Inaccuracies beget inaccuracies. What does Bank C do with the data? Who knows.

Gives it to the Charity :)

This is the tip of the iceberg.

In many cases transactions will not complete due to, for example, insufficient funds.

If this happens with sufficient magnitude - i.e. mass declines in the credit card world and an inability to trust the validity of data - the banking system will collapse.

More.

The scenario I envision for allegedly compliant Bank B (ANZ) at rollover is as follows:

Bank B will have EDI software in place. This software will allow data into Bank B from non-compliant entities world-wide that is correctly formatted and parametered but corrupt. The possibilities are staggering.

Does anyone dispute this - and if so, why?

If this corrupt data can somehow be weeded out then there will be no problem - I accept this.

Can anyone tell me how this can be achieved, at all financial entities, world-wide, in the <190 working days left before rollover (fewer in most other countries)?

Now if the type of transaction I described at Bank B runs into serious numbers in terms of quantity of transactions, Bank B is in serious trouble. It will process inaccurate data and may fire off this data to umpteen other financial entities, which will all be facing exactly the same problem, at the same time, world-wide. Databases will be corrupted world-wide. These umpteen other entities may or may not be compliant - they may or may not be processing with faulty date arithmetic - therefore the validity of data throughout the world financial system will be suspect.

Result = cross-contamination/infection.

Result = distrust of data accuracy.

Result = Meltdown. Grid-lock. Deadly embrace. No more Banking system."



-- Andy (2000EOD@prodigy.net), February 25, 1999.


Well, I for one have plenty of Fear, Uncertainty and Doubt as to whether the banking system will be able to survive Y2K problems. I base this not on a mathematical proof, but on common sense: I am certain that banks depend on computers; I understand that banks are Working Very Hard and Spending Big Bucks on trying to fix the computers that they admit have a "built-in" flaw that will be enabled on or about 1/1/2000; I doubt that they will make it, since time is so short and we seemingly do not have any significant number ready for the Year 2000 today. This FUD is, I claim, completely reasonable, and not being blown out of proportion -- Y2K is indeed a big problem, and the banking system is a great example of it.

-- Jack (jsprat@eld.net), February 25, 1999.

Ahh, Andy, pulling out the "25 Rules", are we? All because I ask for one realistic example, out of the millions that will supposedly crash the system?

As long as we're playing semantics, you might want to check out this link:

Logical Fallacies - Hasty Generalizations

Definition: The size of the sample is too small to support the conclusion.

Examples:
(i) Fred, the Australian, stole my wallet. Thus, all Australians are thieves. (Of course, we shouldn't judge all Australians on the basis of one example.)
(ii) I asked six of my friends what they thought of the new spending restraints and they agreed it is a good idea. The new restraints are therefore generally popular.

Proof: Identify the size of the sample and the size of the population, then show that the sample size is too small. Note: a formal proof would require a mathematical calculation. This is the subject of probability theory. For now, you must rely on common sense.

But let's get on to your post.

Yes Hoff. Essentially they ARE side issues. It matters not a whit how a Bank has tried to fix its code if the end result is mangled data. Do you understand this concept? I don't think you do, as I have to keep repeating this over and over. It's the data, Hoff. It will look OK, it will pass all the edits. But it will be bogus. DO YOU UNDERSTAND? THE DATA WILL LOOK OK BUT IT WILL BE BOGUS

Yes, Andy, I understand your proposal. I've said many times that this happens today, so of course it will happen in the future.

The collapse is based on the assumption that the number of these errors is going to EXPLODE on rollover, overwhelming the system. To believe this, I at least need to have examples of the types of errors that will occur, and why the number of these errors will EXPLODE. The only reasonable explanation is that date-related processing will fail on rollover, but how will this cause the errors?

This is an outright lie. The 11,000 US Banks may have tested with one or two or even three major partners. This does not cut it at all. Each of these 11,000 Banks can potentially receive bad data from over 100,000+ Banks world-wide - banks whose progress, as Greenspan has admitted, the US Banking community has no idea about. This is in addition to interfacing among themselves, 11,000 banks. So the US Banks HAVE NOT TESTED REMOTELY ADEQUATELY AT ALL.

THEY HAVE NOT TESTED IN THE REAL WORLD. THEY ARE IGNORING THE POTENTIAL DANGER FROM OVER 100,000+ OVERSEAS BANKS. EVEN THE US BANKS WILL HAVE EXTREMELY HIGH RATES OF CODE PROBLEMS AT ROLLOVER. THERE WILL BE A LARGE PERCENTAGE OF US BANKS THAT WON'T MAKE THE ROLLOVER IN A STATE OF Y2K READINESS.

Testing? Ok, let's take a look at some examples.

Bank of NY The Bank has identified several classifications of business partners. These include global sub-custodians, correspondent banks, ADR custodians, and trading counter-parties. Through a program including identification, survey, and correspondence, an effort continues to monitor individual compliance efforts and the progress of these entities. For select partners, testing will be mandatory. If in the Bank's opinion, monitoring measures indicate an existing business partner is not making sufficient progress toward Year 2000 compliance, alternatives will be sought.

It also lists the test plan and dates for FED and SWIFT testing.

From the FED, old, from Oct 28th (http://www.bog.frb.fed.us/y2k/bos1028.htm): More than 3000 depository institutions have conducted over 9000 tests with the Federal Reserve. "The strong participation thus far is significant and welcome," said Connolly. "However, we urge all depository institutions to conduct extensive Year 2000 testing of their systems, as soon as possible."

All lies, right? If you want, we can go on, with the SWIFT testing, other banks, ....

As well, any backup for the statement that a large percentage of US Banks won't make the rollover in a state of Y2k readiness? Or is this just a "guess"?

This is garbage, Hoff, and you know it. Why repeat these two hoary old chestnuts? Statistical sampling of an explosion of errors will NOT solve those errors. This is absolutely laughable, Hoff. Maybe you think this is the famed silver bullet?

No, no one said it would solve the errors. That's not the purpose. The purpose is to identify the errors before they are posted, or to identify which need to be reversed.

COULD??? You are not making any sense whatsoever. I don't think you have any idea how inter-related Banks are in this system of systems. Your paragraph above is absolute garbage. What if the sampling finds that all the data is garbage? Then it won't get posted, will it? And if this happens at enough Banks then you will not have a system any more, will you? But the fact is that this mythical sampling will not take place at all, as you well know. Why??? Because there is no one coordinating all this, no one enforcing it. How are you gonna insist that the Bank of Tonga Taboo or Timbuktu is going to statistically sample all of its transactions on Monday the 3rd of January 2000? Total codswallop, my friend

Hmmm, let me think. Which is better, posting bogus transactions, or stopping them first?

We aren't talking about global coordination here. Your "theory" involves a compliant bank being corrupted by bogus transactions. The individual banks will do the sampling. No, there is no silver bullet. This is just one method of identifying systematic errors.

Because the back-office firefighting folks were ready and waiting at the Euro cutover to do just what they were employed to do. Fix the code. With electricity and infrastructure in place. With a small number of Banks affected in comparison to the y2k rollover. With a small number of endpoints feeding in transactions.

The BIG difference at the y2k rollover will be the HUGE volume of corrupt transactions flowing around the system. There may or may not be the infrastructure in place to allow the Coverage and back-office staff to do their job. The volume, however, will simply overwhelm them. Remember that the 11,000 mix of compliant and non-compliant Banks in the US will be receiving corrupt data from largely unremediated banks world-wide, instantaneously, on Monday morning. The longer the corrupt data sloshes around the system, the more corrupt it will become - much like a "core-walker" in mainframe parlance.

Again, we're back to these "large number" of errors, with the addition of the "infrastructure" collapse. Tell me, assuming you're correct, how long will these systems run without electricity? My experience with UPS usage has been they are good to enable a safe shutdown, not indefinite use. Large Mainframe shops do not run themselves.

But finally, on to your example. I won't repeat the details here.

First, I have no problem with the interest calculation error. I can readily envision this type of error, given an unsigned numeric field used as the exponent in the calculation of compound interest.
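For concreteness, here is one hedged sketch of how such an exponent error could arise (the 8-bit unsigned wraparound, the opening balance, and the 1.7% rate are all illustrative assumptions): a two-digit year subtraction goes negative at rollover, and an unsigned field reinterprets the result as a large positive number of compounding periods.

```python
# Hypothetical sketch: two-digit year arithmetic goes negative at rollover
# and an unsigned field turns it into a huge compounding exponent.
# The 8-bit width and all figures are illustrative assumptions.

def elapsed_years_unsigned(start_yy, now_yy):
    """Mimic an 8-bit unsigned subtraction of two-digit years."""
    return (now_yy - start_yy) % 256  # C-style unsigned wraparound

n = elapsed_years_unsigned(99, 0)     # '99 to '00: -99 wraps to 157

principal, rate = 321.00, 0.017
interest_correct = principal * ((1 + rate) ** 1 - 1)  # one year: ~$5.46
interest_buggy = principal * ((1 + rate) ** n - 1)    # 157 "years": absurd

print(n, round(interest_correct, 2))  # 157 5.46
```

Under these assumptions the bogus posting comes out orders of magnitude larger than the correct one - the same shape of error as the $5,555.55-for-$5.55 figure in the scenario, though the exact amounts here are invented.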

But the first stretch is that this happens, and nobody notices. Y2k is not a surprise. Banks, and virtually everybody else, will be on heightened alert at the rollover. Even during normal processing, based on my experience with the bean-counters, I would have a hard time imagining a thousandfold jump in interest payments without someone raising a red flag.

Following this, you provide no evidence for how a Y2k error would also increase the transfer amount. How does this happen? What date-related error can occur to cause this?

If it is not a date-related error, but a bug introduced through modification, it would show up prior to the rollover. Banks - like everyone else - are not just sitting on remediated systems. They are being deployed back into production.

How will these EXPLODE?

My impression is that the final link, the automatic transfer of a percentage of deposit, is a stretch as well. I haven't verified with a bank, but does this setup really occur? It is this link that allows the "propagation" part of the theory, but how frequent is this? I know I've never set anything like this up. Anybody else?

I'm sorry, Andy, but at least to me, this example doesn't cut it. Even if this could occur, you can't seriously be claiming it will be anywhere frequent enough to overwhelm the system, can you?

Hoffmeister

-- Hoffmeister (hoff_meister@my-dejanews.com), February 25, 1999.


Hoffmeister, your last post is fascinating; it reminds me of a defense lawyer attempting to convince a jury that the prosecution has failed to meet its burden of proof beyond a reasonable doubt. Not only can no one show the actual error that Y2K might induce; even if one could, it still remains to be proven that such an error would be frequent enough to overwhelm the system, etc.

And you are right: that Y2K will cause the banking system to collapse, or the power to go out, or whatever, can never be proven beyond a reasonable doubt. There are too many unknowns, too many complexities.

But, as I have said once already, you have it backwards. We need proof that Y2K will not cause such problems -- our very lives may depend on it.

-- Jack (jsprat@eld.net), February 25, 1999.

When someone proposes a theory, is it not their burden to provide a proof? At least, this is what I've been taught.

The theory is based on date-related errors, in sufficient quantity to collapse the system. And yes, to believe such a theory, I would need evidence of a) date-related errors and b) why they would occur in sufficient quantity as to overwhelm the system.

Is that unreasonable? I've said previously, in other areas I can readily find reasonable scenarios. Automated production planning systems, for example, if non-compliant, could easily erroneously generate EDI transactions for PO's. But again, my experience has been that these types of interfaces are limited in number, and typically heavily controlled. So no, I don't think I am asking for any unreasonable level of proof.

How can you subscribe to a theory of millions of erroneous transactions overwhelming the system, without being able to provide a single, reasonable example?

Hoffmeister

-- Hoffmeister (hoff_meister@my-dejanews.com), February 25, 1999.


Well, you were taught wrong. In a criminal trial, we put the burden of proof beyond a reasonable doubt on the prosecution, because we recognize that incarceration or even death of the defendant might be the result, and we want to minimize the possibility of having an innocent person erroneously convicted. I think that it is a fair statement to say that the banking system (as well as electric utilities, etc.) has life-and-death ramifications to our society, and that it is prudent to require that these institutions prove that they have taken proper measures to fix the Y2K problem. Somehow, the approach of, "Well, we didn't see any reason to worry about it, because nobody could ever prove that it would be a problem", just does not cut it when contrasted to the penalty of being wrong.

-- Jack (jsprat@eld.net), February 25, 1999.

Taught wrong? To think of all the centuries wasted using logic and math.

If you are speaking on a more general level, I agree. Banks have the obligation to you, as a customer, to demonstrate that your money is safe with them. I do not keep the money I have in a bank because of a meager interest rate, but because it is safer there than with me personally.

Y2k is being addressed because it is a demonstrable problem. I base my decisions and actions on real risks, not unsubstantiated theories. Y2k has many real risks.

One doesn't have to look very hard to find hundreds of doomsday scenarios to fear. I don't spend my time, or efforts, in fear of those that have no basis in fact, or that pose no reasonable risk.

Again, Y2k in general is not included in the above. Many real issues and risks are involved. But that is not a reason to leave unquestioned the many forms of FUD being propagated. In fact, it is all the more reason to separate the fact from fiction.

Hoffmeister

-- Hoffmeister (hoff_meister@my-dejanews.com), February 25, 1999.


OK, let's reverse this a bit: Prove that Y2K is real, prove that it is not hype. And beyond a reasonable doubt, bub.

-- Jack (jsprat@eld.net), February 25, 1999.

Hmmm, could be interesting.

I take it you aren't just looking for proof that the year 2000 will come, correct?

Before we start, let's agree on what is considered legitimate evidence. All I asked of Andy is a reasonable description of how a single example of such an error would occur, and a basis for the number of these errors dramatically increasing. Will the same apply here?

Tell me what you will consider legitimate evidence.

Hoffmeister

-- Hoffmeister (hoff_meister@my-dejanews.com), February 25, 1999.


I hope this link is helpful to the discussion:

http://www.house.gov/banking/11497lea.htm

CURRENCY

The Committee on Banking and Financial Services U.S House of Representatives, 105th Congress James A. Leach, Chairman

Phone: (202) 226-0471 Fax: (202) 226-6052 Internet: http://www.house.gov/banking

For Immediate Release: Tuesday, November 4, 1997
Contact: David Runkel or Andrew Biggs, 226-0471

Opening Statement Rep. James A. Leach Chairman, House Banking and Financial Services Committee Hearing on the Millennium Bug: Banking and the Year 2000 Computer Problem

Although few now recognize it, the nation stands on the threshold of one of the most challenging technical problems the banking and financial services industry has ever faced - the so-called "Year 2000 Problem."

Computer logic is simply unprepared to deal with a steadily ticking logic bomb.

When the clock strikes midnight on December 31, 1999, many computers could malfunction or even shut down. At financial institutions, it could mean errors in checking account transactions, interest calculations, or payment schedules. It could mean problems with ATM systems or credit and debit cards. It could affect bank recordkeeping, investments, currency transfers, and legal liability. It might interfere with payment systems, both here and abroad, and affect EFT transfers for payroll or pension recipients. It takes little imagination to picture the ricochet effects that malfunctioning computer systems could have on important bank operations.

Ironically, the cause of all this potential confusion sounds simple. For many years, computer systems were designed to record only the last two digits of the year in the date field in order to save costly computer data storage space. Hence, 1997 is recorded simply as "97." This design concept serves us well as long as we are still in the 1900s but has left us ill prepared for the century date change to the Year 2000. The problem, while not a virus, can act as what some have called a logic bomb which can effectively infect interrelated computer systems. As a result, millions of lines of computer code at banks across the nation need to be checked, and thousands of computer programs converted or replaced. Further, the problem isn't confined to computer systems. Everything that has a computer chip in it may be vulnerable - the time lock on a bank vault, the telecommunications system by which data is exchanged, and the computer-controlled elevator in a bank office building. Virtually every institution appears likely to be affected in some way, and costs may be significant. For example, Chase Manhattan has publicly estimated its Year 2000 remediation costs to be in the range of $200 - $250 million. A consultant to the industry, the Tower Group, has estimated the total cost of Year 2000 conversion for US commercial banks at around $7.2 billion.

Experts also emphasize that the problem must be fixed properly and on time if Year 2000 related problems are to be avoided. I was intrigued by a statement Federal Reserve Chairman Alan Greenspan made a couple of weeks ago. He pointed out that 99 percent readiness for the Year 2000 will not be enough. It must be 100 percent. Thus, the message seems clear: all financial institutions must be ready; federal and state regulatory agencies must be ready; data processing service providers and other bank vendors must be ready; bank customers and borrowers must be ready; and international counterparties must be ready.

Unfortunately, the fact that success or failure in meeting the Year 2000 challenge won't be evident until just over two years from now has led some to ignore or downplay its importance. For this reason, the Committee is obligated to lay out a clear record on the state of readiness of financial institutions and regulators to deal with the Year 2000 issue and to make sure that all possible precautions are being taken as early as possible. We need to establish how pervasive the Year 2000 problem is at financial institutions and whether we can have reasonable confidence it will be fixed on time. We need to know whether the industry is facing minor inconvenience, serious disruption of service, or total computer meltdown. We hope in the process of defining the parameters of the Year 2000 problem to avoid the pitfalls of either exaggerating or understating its consequences.

The Committee also intends this hearing to be but the first of a series of oversight hearings on the issue. We plan to monitor closely the pace and quality of progress in Year 2000 remediation efforts. If a large number of institutions have not finished repairs in preparation for testing by roughly this time next year, the safety and soundness consequences may be severe, with attendant systemic risk ramifications. Here, I am particularly concerned with the pace of smaller domestic and most foreign institutions in addressing the problem. It is unclear at this point whether vendor dependence is a liability or a Godsend for the banking system.

Today the Committee will hear from Senator Robert Bennett and two panels of witnesses. The first consists of the Honorable Edward W. Kelly, Jr., a member of the Board of Governors of the Federal Reserve, and the Honorable Eugene A. Ludwig, Comptroller of the Currency and Chairman of the interagency Federal Financial Institutions Examination Council (FFIEC). At the invitation of the Committee, written statements are also being provided by the FDIC, NCUA, and OTS. Our second panel provides a non-governmental perspective. We will hear from Mr. James Devlin of Citibank about that institution's experience with Year 2000 conversion programs, from Mr. John Meyer of Electronic Data Systems Corporation (EDS) about their role as a vendor of data processing services to banks, and from Mr. Lou Marcoccio of the Gartner Group on the broader Year 2000 picture and how the financial sector fits into it.

I would like to take the opportunity to commend the GAO for the helpful Year 2000 assessment guide they have prepared and the technical assistance they have provided in recent weeks. I have asked the GAO to prepare a formal report for the Committee, assessing the strategies and progress of the federal banking agencies in addressing Year 2000 challenges internally as well as at institutions they supervise. We expect to be briefed regularly by the GAO on subsequent updates to that report.

Finally, I would notify the committee that I am drafting legislation -- on which I seek comment today -- to address several discrete aspects of the Year 2000 problem as it relates to regulated financial institutions.

The proposed legislation would direct Federal banking agencies to hold seminars for their regulated financial institutions on the Year 2000 problem and require the regulators to provide financial institutions with model approaches to addressing it. The bill also would:

- Give the Office of Thrift Supervision regulatory parity with the other regulators in the specific area of oversight of service corporations or vendors providing Year 2000-sensitive services to thrifts.

- Amend Federal copyright laws to allow regulated financial institutions, or Year 2000 vendors they authorize, to temporarily copy the institution's computer software for the sole purpose of Year 2000 compliance if the appropriate consent is difficult to obtain in a timely fashion.

- Authorize Federal banking agencies to waive any civil monetary penalties and work towards reducing any damages that otherwise might be imposed by a federal court for inadvertent technical violations of law directly caused by failure to correct a Year 2000 problem.

Despite reasonable efforts by institutions to correct all Year 2000 issues, it seems inevitable that some unforeseen problems will arise, and if institutions correct them on a timely and forthright basis upon discovery it does not seem reasonable that they be held liable. This provision is intended only as a statutory clarification to ensure that such technical errors would be treated, for example, as bona fide errors for the purposes of Section 130 (c) of the Truth in Lending Act or Section 271(c) of the Truth in Savings Act and in no way should be construed as an effort to indemnify any institution against negligence. ********

-- Kevin (mixesmusic@worldnet.att.net), February 25, 1999.


Thanks, Kevin, that's a start.

I can assume then, for purposes of this discussion, that statements issued by the Government can be taken as "facts"?

Hoffmeister

-- Hoffmeister (hoff_meister@my-dejanews.com), February 25, 1999.


Hoffmeister: By "Y2K", I did indeed mean, of course, the so-called Year 2000 Problem. And when I said prove that it is real and not hype, once again, to clarify, I mean prove that it is going to cause any significant disruptions. I.e., we accept that 1/1/2000 will come, on time at that; and we accept that there exists the well defined technical problem that computer software, hardware and firmware will have due to: two-digit rather than four-digit year representation; incorrect recognition of 2000 as a leap year; use of date data in a procedural rather than data context (e.g., treatment of "00" as a special flag that indicates that the procedure is to take some special action). The trick is to actually "prove beyond a reasonable doubt" that this technical problem is going to cause any significant disruptions. (And remember: I'm a card carrying doom-and-gloomer, rank 15 on the 0:low to 10:high scale, I am looking for TEOTWAWKI. But I don't think such a burden of proof can be met, it is just too speculative due to the complexities involved.)

-- Jack (jsprat@eld.net), February 25, 1999.
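The three failure modes Jack lists (two-digit years, the 2000 leap-year exception, and "00" reused as a special flag) can be sketched in a few lines. This is an illustrative reconstruction of the well-known bug patterns, not code from any actual banking system; all function and variable names are invented:

```python
# Illustrative sketch of the three classic Y2K failure modes Jack lists.
# Hypothetical examples only; no real banking code is reproduced here.

def years_elapsed_2digit(start_yy: int, end_yy: int) -> int:
    """Naive two-digit arithmetic: a loan opened in '97 and closed in '01
    appears to run for -96 years instead of 4."""
    return end_yy - start_yy

def is_leap_2digit(yy: int) -> bool:
    """Common buggy rule: 'divisible by 4, but not by 100'. Applied to the
    two-digit year 00, it wrongly says the year is not a leap year, but
    2000 IS a leap year (divisible by 400)."""
    return yy % 4 == 0 and yy % 100 != 0

SENTINEL_NEVER_EXPIRES = 0  # '00' reused as a magic flag in old records

def record_expired(expiry_yy: int, current_yy: int) -> bool:
    """Date used procedurally: '00' means 'never expires', so a record
    genuinely dated 2000 is silently treated as permanent."""
    if expiry_yy == SENTINEL_NEVER_EXPIRES:
        return False
    return current_yy >= expiry_yy

print(years_elapsed_2digit(97, 1))   # -96, not 4
print(is_leap_2digit(0))             # False, though 2000 is a leap year
print(record_expired(0, 99))         # False: a year-2000 expiry never fires
```

Each function is syntactically valid and runs without error, which is exactly why these bugs survived for decades: nothing fails until the data crosses the century boundary.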

Hoff, what kind of problems would you envisage for a bank that did NO remediation? I'm sure you can imagine them having all sorts of problems, and you could probably describe them in geek-speak too. You do agree that the remediation is necessary, right? And what about a bank that completed 25% of its remediation... how would it fare? 35%, 50%, 80%? A bank that was 90% fixed would have some problems, don't you agree? Well, whatever kind of problems you envisage for a 90% company or for a 25% company, y2k banking doomers say that the 90-something-percent remediated banking industry (using generous figures) will encounter enough of those same sorts of problems to be paralysed.

This doesn't exactly cover what you and Andy were discussing before I lost interest, I know. And also, thanks for your input Hoff. Kevin, Jack, Andy, Big Dog - you're in my will.

-- humptydumpty (no.6@thevillage.com), February 25, 1999.


Humpty:

Yes, remediation is necessary. No question there.

I think you must be very careful with percentages, and have to understand what they represent. My (recent) background is with SAP. I know many companies have been implementing SAP, partially as a Y2k response. Contrary to the common belief, many companies saw the Y2k problem long ago, and did begin addressing it. But they didn't fix systems, they replaced them. Over 9,000 companies, in fact, most being of the Fortune 1000 variety.

My point is, if the percentages represent remediation work to be done, then there is no real way of knowing overall how a company sits. Not all systems require remediation.

Yes, I believe some companies are going to have significant problems due to Y2k. But truthfully, I consider this a business failure, more than anything else. A company that fails to address Y2k, in my mind, is no different than a company that fails to expand production to meet demand, or a hundred other reasons for failure.

More specifically, as to banking, based on current knowledge, I don't foresee major problems. Yes, some foreign banks will have problems, but I truthfully expect them to be more internal than external. To be more specific, my money is staying in my bank (BB&T, more precisely).

Honestly, I cannot envision the type of propagation errors Andy keeps proposing, which is why I entered this discussion.

Jack:

Given your conditions, I will decline. I thought you were asking me to prove Y2k problems exist. I really don't foresee significant disruptions, but that is my opinion. It would be fruitless for me to attempt to prove something I do not believe will happen.

Hoffmeister

-- Hoffmeister (hoff_meister@my-dejanews.com), February 25, 1999.


Hoff --- Underneath your cool air of rationality, the sum of your posts here reveals merely that you are an irrational idiot. Net-net, bottom line.

-- BigDog (BigDog@duffer.com), February 26, 1999.

Hoff: one of your problems is you are looking at each facet of the banking problem in a vacuum. This problem is huge and it won't all fit in a person's head at once. You have to step back and look at the big picture:

Code complexity - Is it just me, or has anyone else ever noticed how it's a lot easier for modified code to work incorrectly than correctly? As the complexity increases, weird, very unpredictable things happen. There no longer are enough hours in a day to figure out why a system is misbehaving, only to patch it. How long have these 100,000+ institutions been treating the symptoms instead of fixing the problems?

Interdependence - the main thrust of this thread. Greenspan was right: 100% is required. Note that's 100% of the correctly performing system as it exists today. Of course there are errors now. But it's keeping codeheads and techs busy almost 100% of the time RIGHT NOW to keep them in check. Another 1% is going to bust the kitty.

Testing - major issues concerning how much will be done (if any, in some cases) and what the results prove. And of course the real test is production...

Rollover - a singular, simultaneous, monstrous event. What will happen to banks that didn't quite finish, let alone those that weren't even close? Just how long of a bank holiday are we talking about here?

Bank runs - this will put severe pressure on the banking industry as confidence dwindles even before the actual failure events

Continuing Global economic downturn - the banking system will probably be reeling later this year and into the next with more examples of bad banking practices that have come home to roost. Can you say hedge fund?

Infrastructure failures - telecom, power, transportation, etc. It doesn't have to be the loss of the grid. Sporadic utility failures coupled with a shortage of gas to move employees to and from the cities will be enough to cripple ongoing remediation/repair efforts.

It's been an interesting thread, but my 16 years experience with complex systems says the smart money is with Andy.

-- a (a@a.a), February 26, 1999.


BigDog:

Good Job. Got me there. If you can't make an argument, then just write me off as an idiot. I think there's even a term for that type of argument.....

a:

Yes, there are many interdependencies within the system as a whole.

This is just my impression, but it seems people get overwhelmed by the size of the effort. They attempt to analyze the details of every aspect of the problem, across industries, and come to the conclusion that it can't be done.

As an analogy, one of the hardest things I've ever done was move from grunt to project management. I've always been the "doer", getting things done. I had (and still have) a very hard time relying on others to get the work done. I see the same thing in regard to Y2k. Few, if any, posts I've seen are from people who say my company isn't going to make it. Their concern is always for the other guy. My point is, you can't step back and try to grasp the big picture, without relying for the most part on information from other sources, and relying on those sources to perform.

But, with individual aspects, you have to look at the details. A failure has to have a cause. You can postulate as many complex failures as you like, but if there is no basis for the underlying cause, then there is no basis for the complex failure. Just saying it will fail doesn't cut it. And just saying it will fail because it is complex doesn't cut it, either.

I disagree that we are at 100% capacity keeping the current systems running. It just isn't my experience. Agreed, there are isolated cases where this is true. I remember my first true project, circa 1985. We were putting in a remote retail stores system, which eventually would have about 2000 installations. It was a Series/1 based system, and it was buggy as hell. We got 30 stores up, and it kept a team of 20 almost 100% busy, just fixing production errors. But the system eventually stabilized. Besides, the plans I have seen call for all staff to be available for maintenance on the rollover, which should about double the geek capacity to fix errors.

I also think the "massiveness" of the rollover is overdone. Look at the Gartner Group study, which estimates only 8% of the potential errors at rollover. By then, they estimate about 37% of the potential errors will already be encountered. Granted, there will be a spike. It's just not as large as some make it out.

I am concerned about Bank Runs. There is obviously one person who is actively trying to bring this about, along with the collapse of the banking system as a whole. And again, I think separating the fact from fantasy plays one part in allowing people to make rational decisions.

I stay away from the economics. There are far too many pseudo-economists floating around, making dire predictions. I really have nothing to contribute there.

As for the infrastructure, I guess at this point my concern is not for electric power, or telecomms. Those are topics for different threads. Oil and Gas still seems to be a concern, yet the latest survey results are pretty encouraging. Again, I try not to make judgements based on incomplete information.

Hoffmeister

-- Hoffmeister (hoff_meister@my-dejanews.com), February 26, 1999.


Hoffmeister: I can certainly appreciate your position, and understand completely as to why you are declining. I thought that you in fact did see serious disruptions coming, it was just the one issue regarding "The problem of corrupt data fatally infecting the world-wide Banking system" that you were not convinced on.

I think that this has been a very worthwhile thread. It goes a long way to show that nobody knows, nor can anybody really prove, what will happen in the months ahead. All you can do is hedge your bets.

-- Jack (jsprat@eld.net), February 26, 1999.

Hoff: I agree with your premise that a lot can be accomplished by many people working together, and that problems can look overwhelming when one person tries to analyze them. But management of a few programmers is different than management of a group the size we are talking about. Managing large groups of programmers is like herding cats, and is largely responsible for the poor metrics of large projects.

Also, here are a few more areas of concern for you to defuse:

Cyber-terrorism - the financial community will undoubtedly be exposed to serious security breaches as professional criminals and disgruntled programmers use the period of chaotic y2k code modification as an attempt to extract personal gain.

Programmer migration/preparation - as we approach the hour of reckoning, more and more programmers will become involved with making personal preparations, and some percentage of programmers, however small will "bug out". This will impact the remediation/testing/installation process significantly.

Complications from external influences - this includes things such as buggy code changes from the Euro, code merges necessitated by merger/acquisitions, and new requirements such as "Know thy Customer".

-- a (a@a.a), February 26, 1999.


So, you need a resident "defuser"?

My intention in posting here was not that. I was following up a discussion that began on c.s.y2k, after I saw Andy's posts. While this forum is one of the places I read, I don't get the impression that people here want to hear another side. Not talking of this particular thread, but if I'm going to get flamed, might as well do it on the mother-ship, c.s.y2k.

Besides, this thread is getting too long. Takes too long to load.

Hoffmeister

-- Hoffmeister (hoff_meister@my-dejanews.com), February 26, 1999.


The person actively trying to bring about the downfall of the Banking community is presumably Gary North and not me???.......

As far as the y2k "project" is concerned, this is my semi-humorous take on "Y2K Project Management" [now THERE'S an oxymoron for ya] (or the resounding lack thereof :).......)

Have You Ever Worked On A Project Like This :)

Have you ever worked on a project of oh, say a couple of million programmers, spread all over the world, in different time zones and each speaking a different language, each given a set deadline, each starting at different times, none of them particularly talking to each other, nobody supervising the work done, nobody coordinating the work, no-one integrating the work, no set standards amongst the programmers, some programmers almost finished, one or two have finished, seven or eight are playing Doom, one or two are jerking off, one or two have finished jerking off, some will fix on failure, some have their heads up their asses, some see the urgency of the problem, some don't, some have worked on the Euro as well as y2k, some will be working on both throughout 1999, some are giving up, some are getting rich, some are bailing desperately, some are bailing out, some are sticking it out, many are pulling their hair out?

Sound familiar???

We have a snowball's chance in hell, folks, of this y2k world-wide project turning out peachy.

Andy - getting balder by the minute... (No I'm not a pointy-haired M#@**#r!)

Less than 180 working days to go...

Andy

-- Andy (2000EOD@prodigy.net), February 26, 1999.


Hoff --- If you have been on this NG regularly, you know I very rarely call people idiots. It's not my style. In fact, I think this is the first time.

Unfortunately, there are times when a person (Mitch Radcliffe comes to mind with ZD Net) stubbornly and willfully uses so-called arguments (for example, the way you "extrapolate" from the euro while refusing to see the blatant differences in scale) to avoid dealing with the actual subject in question. And don't give me the same content-less post you have been repeating yet one more time, please.

What I maintain is that anyone with any serious understanding of IT systems (and I've been involved with them for over 20 years myself), even if they don't understand banking (which I've admitted I don't), would see that the sum total of your OWN posts on this thread are, in fact, idiotic. I invite others to make that judgment based on their own study.

Sadly, not for you, but for all of us, Andy's challenge remains entirely on the table until or if someone with any brains can answer it. You're not that person. You're not even intelligent enough to realize that Andy would be delighted if someone could, in fact, show his hypothesis to be wrong.....

Despite your pose, you haven't the least degree of interest in actually examining this matter, and why should you? You've already said there will be 99% compliance in the U.S. What this is really about is your thought that you might save people/the banking system from North-Andy.

I repeat, with the utmost soberness: you are an idiot masquerading as someone who purports to be speaking rationally. Or, you are a purposeful source of disinformation here. By all means, go back where you came from.

-- BigDog (BigDog@duffer.com), February 26, 1999.


BigDog:

Hoff --- If you have been on this NG regularly, you know I very rarely call people idiots. It's not my style. In fact, I think this is the first time.

Sorry, didn't want to start a precedent.

Unfortunately, there are times when a person (Mitch Radcliffe comes to mind with ZD Net) stubbornly and willfully uses so-called arguments (for example, the way you "extrapolate" from the euro while refusing to see the blatant differences in scale) to avoid dealing with the actual subject in question. And don't give me the same content-less post you have been repeating yet one more time, please.

Content-less posts? Others, more experienced than I, have made the comparison, among them Capers Jones.

My posts have been directly on the actual subject, being Andy's theory of collapse of the banking system.

What I maintain is that anyone with any serious understanding of IT systems (and I've been involved with them for over 20 years myself), even if they don't understand banking (which I've admitted I don't), would see that the sum total of your OWN posts on this thread are, in fact, idiotic. I invite others to make that judgment based on their own study.

Then, you should have no problem pointing out my "idiocy". I won't get into a pissing contest about IT experience; there is always someone better than you. I'm confident of mine.

Sadly, not for you, but for all of us, Andy's challenge remains entirely on the table until or if someone with any brains can answer it. You're not that person. You're not even intelligent enough to realize that Andy would be delighted if someone could, in fact, show his hypothesis to be wrong.....

I can't comment on Andy's personal feelings.

Despite your pose, you haven't the least degree of interest in actually examining this matter, and why should you? You've already said there will be 99% compliance in the U.S. What this is really about is your thought that you might save people/the banking system from North-Andy.

Part of my purpose in discussing this is to shed light on the actual facts. I have no delusions that I can "save" anything.

I repeat, with the utmost soberness: you are an idiot masquerading as someone who purports to be speaking rationally. Or, you are a purposeful source of disinformation here. By all means, go back where you came from.

As I said previously, it is not my impression that people here want their beliefs challenged.

BigDog, you have called me an idiot multiple times, and questioned my intelligence, claiming it is readily evident from my posts. Point out for all to see where I am an "idiot". I realize it is easier to call someone names, than to back it up. Put up or stop the insults.

You call me an idiot, yet would rather believe a theory of millions of erroneous transactions, when not even one plausible example can be established. Actually, I was too kind in using the "Hasty Generalization" example. That fallacy, at least, requires the establishment of at least one case, before extrapolating to the many.

Hoffmeister

-- Hoffmeister (hoff_meister@my-dejanews.com), February 26, 1999.


Nice try once again, Hoff, but it's over. Bye.

-- BigDog (BigDog@duffer.com), February 26, 1999.

I guessed as much.

Hoffmeister

-- Hoffmeister (hoff_meister@my-dejanews.com), February 26, 1999.


Hoffmeister, I for one welcome you to this forum and hope that you will continue to post here. I find that the arguments that you have presented have not been idiotic, but as I have already said, seemingly designed to cast as much doubt as possible on the premise of this thread -- that non-compliant banks could adversely "infect" compliant banks. It is up to each of us to decide what amount of "proof" is needed to believe something.

-- Jack (jsprat@eld.net), February 26, 1999.

Well.

It seems that when Hoffmeister said people here didn't want their beliefs challenged, he was selecting his words very carefully. He didn't say opinions, he didn't say conclusions, he said beliefs. Precisely so.

The attitude expressed by some in this thread can best be summed up like this: "I don't know shit about banks, but I know bad data will hose them. I don't know how, I don't need to, and I don't care. Anyone whose direct knowledge contradicts my belief is an idiot."

Altogether, as shameful a display of rock-headed dogmatism as I've seen yet. Some of you emotional 2-year-olds are doing no more than sticking your fingers in your ears and shouting I CAN'T HEAR YOU!

My tentative conclusion is that bad data poses a real but likely not crippling threat to the banking system, and that without a serious infrastructure collapse, these problems can be isolated and remedied within an acceptable period of time. We are best advised to keep hard copies of everything and have cash on hand. The banking system faces a more serious threat from nonperforming loans than from internal bookkeeping errors.

-- Flint (flintc@mindspring.com), February 26, 1999.


Here ya go Hoff - from GN site today:

The Senate Banking Committee heard testimony from Federal Reserve Board Chairman Alan Greenspan on February 24. Members asked questions that avoided specifics, and then they refused to ask follow-up questions that would have required embarrassing answers.

Here, for the sake of future meetings with Mr. Greenspan, are some suggested questions:

"You have said that 99% compliance is not good enough for the banking industry; it must be 100%. How many banks in the United States are 100% compliant?"

"None? I see. Well, then, how many of them are y2k-ready?"

"None? I see. Well, then how do you know whether any bank with over 10 million lines of code can become compliant?"

"What good does it do for a depositor to have a print-out of last month's records when his bank's computer is down? Can the teller hand him cash if she does not know what his balance is today?"

"How will credit cards work when they access records of a bank account when the bank's computer is down?"

"How will a seller verify that a check is good if the computer of the check-writer's bank is down?"

"How will sellers verify that a bank is still solvent and checks drawn on it will clear during the early months of 2000? Will the FED publish a daily list of still-solvent banks?"

"Do you regard the banking system as international?"

"You do. Fine. How many banks are compliant abroad?"

"You don't know. I see. Would it be fair to say that if half of the banks outside the United States are not compliant in 2000, then there is no way that the U.S. banking system can become compliant, given the problem of shared data?"

"How can noncompliant data in a noncompliant bank be locked out of the computers of a compliant bank?"

"What percentage of banks in the U.S. can be locked out without causing a collapse of the U.S. banking system?"

"What percentage of foreign banks can be locked out before the U.S. banking system collapses?"

"We have been told that the 19 largest banks in Japan plan to spend, collectively, about $1 billion to fix y2k, compared to Citibank, which will spend $850 million to $925 million. Would you say that the Japanese banks are facing about 6% of the y2k remediation problems that Citibank is?"

"So, you think they are facing much the same degree of challenge. Then do you think the largest Japanese banks are likely to meet the deadline, given the fact that Citibank began in 1995 to fix the problem and is not yet compliant, and also given the fact that half of these banks have fixed only 25% of their main computers?"

"Would you say that if 100% of the banks in Japan do not meet the deadline, that this will have repercussions on banks outside of Japan? Would you describe some of these problems?"

"What happens to the capital markets if there is a bank run in Japan, and Japan sells U.S. Treasury debt and other nations' debt to buy yen to hand out to Japanese depositors? Will interest rates rise? If so, what happens to corporate earnings and the stock market?"

"Is there some reason to believe that every other nation will not be experiencing similar bank runs? If not, what can the Federal reserve System do to save the U.S. banking system from a collapse?"

"How long can a bank be cut off from electronic communications from the banking system and still remain solvent?"

"If a bank shuts down in 2000 because of computer problems, how can the FDIC or any other organization absorb the assets and liabilities of that bank without also absorbing its bad computer code? Will the data have to be entered by hand? If so, what happens to depositors in the interim?"

"If the FDIC does not achieve compliance by Jan. 1, 2000, will the Federal Reserve replace it as the insurer of all accounts up to $100,000?"

"With 100 million households in the U.S., how much currency per household will the FED have on hand in late 1999 to meet demand?"

"We have a $7 trillion economy that is sustained by about $6 trillion, if we use M-3 as a measure of the money supply. What size economy would you predict if we had only one-third of the $500 billion in currency now circulating inside the U.S. (two-thirds of the $500 billion is outside the country), plus the $150 billion you have in reserve, plus the $50 billion you expect to add to reserves this year, plus the $45 billion in use today as vault cash held as legal reserves?"

"If the world were to go back to an all-currency economy in 2000, would you expect the velocity of money to drop? If so, would this be deflationary?"

"How could debtors repay today's level of debt in a cash-only scenario?"

"Banks are debtors to depositors. How could they repay?"

"FED Board member Edward Kelley, Jr., has testified that the FED has 90 million lines of code. As late as August, 1997, the FED had barely begun code repair, forming a task force to study the problem. How is it that the FED is now 99% compliant, when no organization on earth with 90 million lines of code has reached compliance, no matter when it began the repairs?"

"The FED has over 316,000 data exchanges with computers outside the FED. How many of these have been corrected and tested?"

"If some of these outside computers are not compliant on January 1, 2000, how will the FED protect its systems from being made noncompliant by imported noncompliant data?"

"If you block data coming in from these outside noncompliant computers, what percentage of them is the maximum that you can cut off before the operations of the Federal Reserve System cease to be reliable?"

But senators asked nothing like these questions. They never do. They are terrified of looking stupid when Mr. Greenspan goes into his famous Dwight Eisenhower routine, and they can't follow up with additional questions. The only person who can understand him then is Al Haig, and no one can understand Haig.

I love to see Greenspan in action. I also loved to see Professor Irwin Corey, the World's Foremost Authority. Greenspan is missing only the tuxedo and the tennis shoes.

This is from WIRED (Feb. 25).

* * * * * * * * * *

. . . Senator John Edwards (D-NC): One last question. I was very interested in your comments about Y2K and Americans' concerns about Y2K. What advice would you give a senior citizen in the United States in December of this year about what to do with their savings account?

Greenspan: I would say the most sensible thing is to leave it where it is. That's probably the safest thing. There's almost no conceivable way in which I can envisage that computers will break down and records of people's savings accounts will disappear.

I mean, that's not what the problem is. That's easy to prevent happening, and everyone will do that. And there is, fortunately, so many different checks and balances that if it gets knocked out in one place, it's available 20 other places.

The real problem, basically, is to the issue as to whether in fact a usual means of withdrawing currency will be blocked by whether technology breaks down, whether something freezes up, whether or not the safe in the bank can't be opened or something like that.

That's a really minor concern that I'm aware of. And while I do not deny that if you have currency rather than the money temporarily locked up in your bank because the doors don't open, your money is perfectly safe. You just can't get it. It's an issue as to whether you are safer taking it out and waving it around or whether you're better to just leave it there.

-- a (a@a.a), February 26, 1999.


a,

You have surpassed yourself Sir! Bravo! The best post I've read so far this year.

Why don't you submit it to Cory Hamasaki for his next DC WRP - I haven't seen a better summation of Banking problems yet...

Flint,

In view of a's post, are you going to stick with your parting shot, or reconsider? You too, Hoffmeister.

Thanks everyone,

I think this is a wrap.

-- Andy (2000EOD@prodigy.net), February 27, 1999.


Andy, you are too funny. Do you always get so thrilled at cut and paste?

Yes, I think this thread is done. BigDog refuses to address the actual issue, instead resorting to insults. I am not a banking expert, so attempting to answer the list of Gary North's questions would be pretty futile.

Jack, I may stick around. To be honest, c.s.y2k has seriously degraded. The signal to noise ratio, always low, has become almost non-existent.

Hoffmeister

-- Hoffmeister (hoff_meister@my-dejanews.com), February 27, 1999.


Ahh,

So GN wrote the questions... Bravo GN then :)

Hoff - I'll let you get the last little dig in, better luck next time.

-- Andy (2000EOD@prodigy.net), February 27, 1999.


Andy,

>Date: 2000-01-06 Time: 09:00:00 Type of transaction: Transfer checking-to-checking Source account: Bank A account number 123-456-789

Whenever you copy the EFT example I constructed, please point out that it was constructed for the limited purpose of illustrating how incorrect data could get past edit checks in a transaction, NOT that incorrect data would be propagated endlessly.

My example does not refer to the methods already available for correcting the erroneous amounts later, because it was not intended to cover that, but those methods do exist and are used to stop propagation of such errors in the EFT systems.

Now, it is valid to question whether the _capacity_ of the EFT error-correcting mechanisms will suffice for Y2k. But my example is not particularly germane to that issue.

-- No Spam Please (No_Spam_Please@anon_ymous.com), April 12, 1999.
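No Spam's point, that a syntactically valid transaction can carry a wrong amount straight past edit checks, can be sketched as follows. This is a hypothetical filter written for illustration only; the field names and rules are invented and do not reflect any real EFT system's validation logic:

```python
# Hypothetical EFT edit-check sketch: format validation passes, but a
# well-formed transaction carrying a wrong amount sails through, which
# is No Spam's point. All field names and rules are invented.
import re
from datetime import date

def passes_edit_checks(txn: dict) -> bool:
    """Structural checks only: the filter can verify form, not truth."""
    try:
        y, m, d = map(int, txn["date"].split("-"))
        date(y, m, d)                     # must be a real calendar date
    except (KeyError, ValueError):
        return False
    if not re.fullmatch(r"\d{3}-\d{3}-\d{3}", txn.get("account", "")):
        return False                      # account-number format check
    amt = txn.get("amount")
    return isinstance(amt, int) and amt > 0  # positive integer cents

good = {"date": "2000-01-06", "account": "123-456-789", "amount": 50000}
# Suppose interest was miscomputed upstream (say, over a -99 year span)
# and a wrong, but positive and well-formed, amount landed here:
wrong_but_valid = {"date": "2000-01-06", "account": "123-456-789",
                   "amount": 4987500}

print(passes_edit_checks(good))             # True
print(passes_edit_checks(wrong_but_valid))  # True: the filter cannot tell
print(passes_edit_checks({"date": "2000-02-30", "account": "123-456-789",
                          "amount": 100}))  # False: impossible date caught
```

The filter catches malformed records (an impossible date, a bad account format) but has no way to distinguish a correct amount from an incorrect one, so catching the latter falls to the downstream reconciliation and error-correction mechanisms No Spam mentions.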


Moderation questions? read the FAQ