UPDATE: OT: New Technologies Imperil Humanity - U.S. Scientist


****UPDATE****

Announcement from Kurzweil Technologies, Inc.

Ray Kurzweil will be a guest on National Public Radio (NPR) "Science Friday" this Friday (March 17) at 3 PM EST (for one hour). Host Ira Flatow will be interviewing Ray and Bill Joy (Cofounder and Chief Scientist of SUN Microsystems).

The topic will be Bill Joy's cover story for the April issue of WIRED titled "Why the Future Doesn't Need Us." Joy's story in WIRED has itself received wide coverage in other publications, including an article on the front page of the New York Times Business Section on Monday (March 13).

Link: THIS WEEK ON SCIENCE FRIDAY: Hour Two: Perils of Technology

See original thread: OT: New Technologies Imperil Humanity - U.S. Scientist

Should be interesting...

-- Jim Morris (prism@bevcomm.net), March 16, 2000

Answers

as an inventor and creator of new technologies, I've been concerned about this for a long time;

suffice to say that I can attest that Mr. Joy is on the right track to be concerned: spent 17 years on the Security Force at the USAEC's Nevada Test Site; saw plenty;

let there be no mistaking the fact that bomb designers are [were] at the top of the heap of the "technologically [or intellectually] arrogant"!!!

[this thread should be a hundred miles long...]

-- Perry Arnett (pjarnett@pdqnet.net), March 16, 2000.


This sounds very interesting, Jim. For those who can't pick this up on the radio, you can go to the following page (WNYC AM 820) and listen to the live feed via your computer [with Windows Media Player].

http://www.wnyc.org/audioindex/audiomenu.html

Looks like this broadcast will also be archived by Monday and can then be accessed via RealPlayer at... http://www.sciencefriday.com/pages/RealAudio.html

Thanks for the "heads-up" Jim.

-- CD (costavike@hotmail.com), March 16, 2000.


Perry,

I agree -- this thread should be a hundred miles long, if only because the essay opens up a lot of theoretical possibilities.

Does anyone know if there is another geek/tech board somewhere on the net that is actively discussing Joy's essay?

-- Celia Thaxter (celiathaxter@yahoo.com), March 17, 2000.


CD,

Thanks for the links! :-)

Celia,

There's been some discussion on the extropian mailing list, but they have pretty much pooh-poohed Joy's article as being too pessimistic. This link will allow you to subscribe to the list(s), view the archives, or do both.

And I know that Slashdot.org has quite a thread going.

-Jim

-- Jim Morris (prism@bevcomm.net), March 17, 2000.


Jim,

Thanks for the links.

Did anyone hear the radio program? I thought it devolved rather too quickly into a dialectical debate instead of a working discussion about how to safeguard the future. Even while admitting that the threshold for human extinction was quite high, they couldn't seem to move beyond that to what might be done to ensure survival.

-- Celia Thaxter (celiathaxter@yahoo.com), March 17, 2000.



Yeah, I listened to it. I agree - nothing was really discussed about how to ensure these technologies don't get into the wrong hands. Of course, one hour doesn't leave you with much time to get down to the heart of the matter. :-/

I do hope NPR (and others) devotes more time to this issue in the near future. But I guess this was a start - at least the subject is now on a few more people's radar screens.

-- Jim Morris (prism@bevcomm.net), March 17, 2000.


Tried to listen via computer, but poor connection ("high network traffic") required continual rebuffering. Gave up on it early but will listen once it gets archived.

Definitely an interesting subject. Would appreciate updates from you folks should you come across future media coverage.

Thanks,

-- CD (costavike@hotmail.com), March 17, 2000.


On - The Singularity

For you, Celia...

At the risk of being boring, will you allow me to comment?

Part of the definition of being human involves the ability to see, observe, remember, see anew, to integrate new observations with old ones, and from those integrations to conceive yet newer concepts, processes, products, things, events, etc.

Man is a thinker; a tool maker. Our nature is to be curious; to want to know the how and why of the natural world around us.

And surprisingly, the more difficult a problem is to solve, the more effort one will devote to the solution! Likewise, the more a solution seemed like it might cost to develop, the more cheaply one finds a way to make it work!

Ask any inventor for his most valued possession and he'll tell you it's his junk yard...that place where he can make 'stuff' from other discarded stuff in lots less time and for lots less money than if he had a new hardware store at his disposal.

Given the above, what's my point?

That men have cloned life forms means that they will clone human life forms.

That men have conceived nano technologies means they will be created.

That men have conceived AI and IA means they will be created.

That the above will happen means that among the ethical, the 'righteous', the moral and the altruistic, there will also be the evil, the mercenary, the diabolical and the criminal - it's in the nature of what it is to be human to do these things.

The irony is that we, the individual taxpayers of this nation, will, as we have done for so long, finance our own demise:

The young and bright have charisma; they use that charisma to schmooze and sway politicians, who enact legislation to confiscate our wealth through taxation. The politicians become convinced that research into these areas is good for their districts, so they fund the research; the young 'arrogant intellectuals' do the research and create the monsters that cannot be controlled.

No government, nor professional society, nor ethical standard, nor any other force will stop the above from happening - it's in the nature of what it is to be human to do these things.

That the above will happen means also that 'accidents' will happen; that the Baneberrys, the Three Mile Islands, the Chernobyls, the Love Canals and the Bhopals of the world WILL happen.

I'm 60. I've seen in my lifetime the shift from a time when human intellect and human physical labor - talented hands and fingers - were of a certain value to a capitalist, to a time, now, when that value has shifted lower and lower - to the point at which very few of us humans have any intrinsic economic value to our employers; with CNC, CAD, CAM, FEA, etc., fewer and fewer tasks remain that really cannot be accomplished better, cheaper, more reliably, etc. by machines than by humans. [Playing a Chopin impromptu, smelling a rose, tasting a chocolate and making love not yet included.]

My perspective suggests to me that the young who think the threats to humanity suggested by Joy are of no consequence are TOO young, or too immature, to have gained a broad enough perspective on the problem. [flame away!] [Oh yes, here it comes:] When they have lived, seen, and experienced what I and others of my generation have, maybe they too will believe that Joy's concerns are valid.

I guess I'm old enough to know the difference between possessing information and possessing wisdom. The young are smart, damned smart; they hold at their fingertips loads of information; but wisdom comes, I think, from time and experience, like fine wine in old barrels. Wisdom is that stuff of time, and humility, and, sorry, but the young don't have it, and won't, I fear, until it may be too late. When I didn't have it, I didn't know what it was!! But that too is nature's way; the young are designed to be bright, brash, arrogant, quick, frisky; we all were once. But when we ponder the stuff of import, like Joy's essay, then the Oxford library with red leather overstuffed chairs, a snifter of fine cognac, a cigar, and time - time to think, to ponder, to muse, to reflect, to integrate, to distill - that's what the young don't have - yet.

If Joy and Drexler and Vinge and others are correct, if I live another 30 years, when I reach age 90, I may witness what they are concerned about. The irony is that I've already witnessed some of that with the nuclear age during the 60's and 70's. Different drummer, same tune.

Am I hopeful? I'm sorry to say not. The observations I've made during these 60 years suggest to me that the only difference between men and boys is the destructive power of their toys; and that the questions of what life is and how to create it WILL be answered; that self-replicating sentient machines will be first created, then will self-replicatingly evolve to surpass the human standard; that one can access lots more information than one can wisely use; that when the nuclear genie was let out of the bottle, all the other genii slipped past and the stopper has never been put back in - nor can it be.

In fact, I'm so 'not hopeful' that I've taken a new tack in my life and am smelling lots more roses, enjoying the company of lots more lovely ladies, savouring the tastes of lots more wines, and generally backing away from the mechanistic things and tasks I've taken for granted for so long, and am consciously trying to find out more about what it is to be human, to explore life and culture and art and thought and touching and feeling and experiencing. [wow - is THAT 'over-the-hill'...]

I've designed, invented, patented, made machines that put people out of work, loved technology for the fun of it; own too many computers and machine tools and widgets and gadgets, but Joy's essay has reaffirmed to me that there is a fundamental difference between using and playing with and enjoying tools and technologies, and being caught up by them to the point of forgetting who I am and what I am.

My challenge for any of you is: spend enough time in research, think long enough about the premises, and ponder long about the possible conclusions, then see if you don't come up with concerns very close to what Joy suggests.

If you agree, then come back with some suggestions for the rest of us on what you are doing to make your life more rich and rewarding and fulfilling - while you are alive, human and here.

Thanks for the opportunity to rant.

Perry



-- Perry Arnett (pjarnett@pdqnet.net), March 18, 2000.


Excellent post, Perry!

You've put into words much of what I have been thinking about.

Thanks for sharing it. :-)

-- Jim Morris (prism@bevcomm.net), March 18, 2000.


Perry,

It seems you have found a solution to the awful sense of doom one might feel when reading Joy's essay. I re-read it tonight (in print form, in an old-fashioned magazine), and Joy's call to action is quite compelling.

So is yours. In addition to learning more and taking a stand against these technologies (I agree with Joy that we should work toward relinquishment), I think I will make a conscious effort to enjoy life more from now on, just as you are.

Thanks for a thoughtful and wise essay.

-- Celia Thaxter (celiathaxter@yahoo.com), March 19, 2000.



The Kurzweil/Joy debate is now available in realaudio format:

http://search.npr.org/cf/cmn/cmnpd01fm.cfm?PrgDate=03/17/2000&PrgID=5

-- Jim Morris (prism@bevcomm.net), March 19, 2000.


Ditto to Perry.

You sure are right about wisdom coming from experience. You have posted a thought-provoking opinion - real food for thought. I love to see this.

As for the 'flames', my dear, why would anyone wish to flame an intelligent post?

Please post more, I KNOW I for one can learn from you.

Yes, I am young. But I'm old enough to know intelligence and humility when I see them.

Keep on posting Perry, and if anyone flames ya, I'll call for "the dog"

(smiling with most respectful eyes) -----consumer

-- consumer (shh@aol.com), March 19, 2000.


"Will Spiritual Robots Replace Humanity by 2100?"

----------------

Douglas Hofstadter presents...
Kurzweil/Moravec Symposium
FREE and open to the public
April 1, 2000, 1pm - 5:30. TCSEQ room 200.
(Parking in "A" lots OK on Saturdays)

Primary speakers:

  • Ray Kurzweil (inventor of reading machine for the blind, electronic keyboards, etc., and author of "The Age of Spiritual Machines")
  • Hans Moravec (a pioneer of mobile robot research, and author of "Robot: Mere Machine to Transcendent Mind")
  • Bill Joy (co-founder of, and chief scientist at, SUN Microsystems)

    Panel members:

  • Ralph Merkle (well-known computer scientist and one of today's top figures in the explosive field of nanotechnology)
  • Kevin Kelly (editor at "Wired" magazine and author of "Out of Control", a study of bio-technological hybrids)
  • John Holland (inventor of genetic algorithms, and artificial-life pioneer; professor of computer science and psychology at the U. of Michigan)
  • Frank Drake (distinguished radio-astronomer and head of the SETI Institute -- Search for Extraterrestrial Intelligence)
  • John Koza (inventor of genetic programming, a rapidly expanding branch of artificial intelligence)

    Symposium organizer and panel moderator:

  • Douglas Hofstadter (professor of cognitive science at Indiana; author of "Gödel, Escher, Bach", "Fluid Concepts and Creative Analogies", etc.)

    ----------------

    In 1999, two distinguished computer scientists, Ray Kurzweil and Hans Moravec, came out independently with serious books that proclaimed that in the coming century, our own computational technology, marching to the exponential drum of Moore's Law and more general laws of bootstrapping, leapfrogging, positive-feedback progress, will outstrip us intellectually and spiritually, becoming not only deeply creative but deeply emotive, thus usurping from us humans our self-appointed position as "the highest product of evolution".

    These two books (and several others that appeared at about the same time) are not the works of crackpots; they have been reviewed at the highest levels of the nation's press, and often very favorably. But the scenarios they paint are surrealistic, science-fiction-like, and often shocking.

    According to Kurzweil and Moravec, today's human researchers, drawing on emerging research areas such as artificial life, artificial intelligence, nanotechnology, virtual reality, genetic algorithms, genetic programming, and optical, DNA, and quantum computing (as well as other areas that have not yet been dreamt of), are striving, perhaps unwittingly, to render themselves obsolete -- and in this strange endeavor, they are being aided and abetted by the very entities that would replace them (and you and me): superpowerful computers that are relentlessly becoming tinier and tinier and faster and faster, month after month after month.

    Where will it all lead? Will we soon pass the spiritual baton to software minds that will swim in virtual realities of a thousand sorts that we cannot even begin to imagine? Will uploading and downloading of full minds onto the Web become a commonplace? Will thinking take place at silicon speeds, millions of times greater than carbon speeds? Will our children -- or perhaps our grandchildren -- be the last generation to experience "the human condition"? Will immortality take over from mortality? Will personalities blur and merge and interpenetrate as the need for biological bodies and brains recedes into the past? What is to come?

    To treat these disorienting themes with the seriousness they deserve at the dawn of the new millennium, cognitive scientist Douglas Hofstadter has drawn together a blue-ribbon panel of experts in all the areas concerned, including the authors of the two books cited. On Saturday, April 1 (take the date as you will), three main speakers and five additional panelists will publicly discuss and debate what the computational and technological future holds for humanity. The forum will be held from 1 PM till 5:30 PM, and audience participation will be welcome in the final third of the program.

    ----------------


    This event is brought to you with the gracious support of
    Stanford's Symbolic Systems Program,
    the Center for the Study of Language and Information,
    the Computer Science and Philosophy Departments,
    the Center for Computer-Assisted Research in the Humanities,
    Channel 51,
    and the GSB Futurist Club.



    -- Jim Morris (prism@bevcomm.net), March 19, 2000.

...and go to the following Greenspun forum if you would like to stay current with this issue:

    Human-Machine Assimilation

    -- Jim Morris (prism@bevcomm.net), March 19, 2000.


    Jim

Also a good post. I am not as smart as you folks are; I am pretty much a been-around-the-block type of person. I call it as I see it, and I love this board because I learn a lot.

    With this said, I hope the intelligence here will rub off on me...smile

Also, to add my 2 cents' worth: being in the occupation I am in, I do know that computers are taking away many jobs.

And as a footnote, if these computers make everything so fast, how come we still have no time to enjoy things?

    Such as our families, the beauty of the outdoors, etc...

Yes, my job is now easier because of technology; however, every time I learn the 'new way', it changes...

    sigh

    -- consumer (shh@aol.com), March 19, 2000.



    PERRY:

I was moved by the news, as you were, and thought of posting, but reflected on it further and instead started thinking of going back to some of my earlier loves: American folk music performing (with guitar), and oil painting.

    I too have learned what you have: the difference between intellectual power -- and wisdom. I too am on in years (74 next month), and have been an inventor in various fields.

If I had decided to post my thoughts, they would have been yours, but you've said it with an eloquence that goes to the mark.

    Sincerely,

    Bill

    -- William J. Schenker, MD (wjs@linkfast.net), March 19, 2000.


    Shockwaves from an extraordinary article in Wired magazine on the violent pace of technology have reached right up to the White House:

    (UK 'Observer' report, March 19)

Here is a portrait of the future. A future where humankind is under threat from hugely powerful and self-determining computers. This was the world of Arnold Schwarzenegger's Terminator, of Philip K. Dick and Dr Who, of mad scientists - until last week. Now it is a deadly serious projection of a technological Armageddon, being examined in the office of the world's most powerful man.

    A 20,000-word essay written by leading software innovator Bill Joy, published in the magazine Wired, has proved so provocative that President Bill Clinton's office requested a copy. The tract, titled 'Why The Future Doesn't Need Us,' argues that new twenty-first century technologies of digital, biological and material science present us with dangers at least as fearsome as nuclear, chemical and biological weapons, if not more so, and suggests that accelerating technological change could cause 'something like extinction' of human beings within two generations.

Joy's critique is remarkable not so much for its message - there have always been social critics and marginalised scientists warning of danger in the progress of technology - but because it came from within the highest reaches of a technological establishment that rarely breaks ranks in offering an optimistic future of the new sciences as wealth- and health-enhancing.

    'Finally, someone who is responsible for creating significant parts of today's technological infrastructure is voicing the concerns that have been labelled "Luddite" when voiced by others,' says Howard Rheingold, founder of The Well, a pioneering Bay Area Internet community in San Francisco. 'Technology is a wonderful vehicle, but we have no road map, headlights, or rear-view mirror.'

    Joy, co-founder and Chief Scientist at Sun Microsystems, the Silicon Valley software giant, said his warning was meant to be reminiscent of Albert Einstein's famous 1939 letter to President Franklin D. Roosevelt, alerting him to the possibility of an atomic bomb.

    But Einstein's warning at the dawn of the nuclear age was directed towards a small number of decision makers. Joy, by contrast, set his sights on a far larger community of technologists and scientists, funded by private enterprise, unconstrained by government - potentially more difficult forces to rein in. In the twentieth century, he points out, 'building nuclear weapons required, at least for a time, access to both rare - indeed, effectively unavailable - raw materials and highly protected information.

    'But the twenty-first century technologies - genetics, nanotechnology and robotics - are so powerful that they can spawn whole new classes of accidents and abuses. For the first time, these accidents and abuses are within the reach of individuals or small groups. Knowledge alone will enable the use of them.'

    And Joy certainly fears the worst: 'I think it is no exaggeration to say we are on the cusp of the further perfection of extreme evil, an evil whose possibility spreads well beyond that which weapons of mass destruction bequeathed to the nation-states, on to a surprising and terrible empowerment of extreme individuals.'

    Within scientific and technological communities, Joy's doomsday scenario has been welcomed as an articulate exercise but one that underestimates the profound benefits technology has brought and promises in the future.

'Bill Joy is probably the most influential commentator at present because he's spreading a message that makes people anxious,' says Charles Platt, a science writer at Wired. 'Anything that creates fear gets a lot of attention, but it's bad news for people if they want, say, genetically engineered medicine to avoid age-related diseases, or to avoid getting killed by the next global viral plague that comes along.'

    Nathan Myhrvold, a theoretical physicist who studied under Stephen Hawking and is currently on sabbatical from his job as chief technology officer at Microsoft, pointed out that such dire predictions are usually inspired by fear of change. 'Every other "unprecedented" challenge of the past has been overcome,' he says. 'Is this case really different? Or are we once again falling into the trap of overestimating technology's downside and underestimating people's ability to cope?'

    Bruce Sterling, the techno guru who was among the first cyberpunk writers, takes a wry approach: 'I'm touched by the guy's sincerity. It's clear he's troubled by this issue and thought it was something he ought to share with the rest of us. He's clearly gone to pains to research the issues, to look for other people who might have shared his problems in the past, and he hit on an excellent one with Robert Oppenheimer. But if you're going to have an Oppenheimer moment my feeling is you should have it before you detonate the device.'

    The author's apocalyptic vision of the future centres on new technologies which will be able to replicate themselves, without human help - nightmarish scenarios in which technologies exceed man's power to control them.

    By using the struggle to limit nuclear, chemical and biological weapons as his inspiration, Joy concludes by proposing that we stop to consider where our progress is leading before we enable our own extinction.

    'The only realistic alternative I see is relinquishment: to limit development of the technologies that are too dangerous, by limiting our pursuit of certain kinds of knowledge.'

    If Joy's fear of twenty-first century technologies is exaggerated, his arguments have at least opened up a debate over whether technology fits the society we want. 'He is making the case for technological determinism - if I can do something, I will do it - but that has been proven not to be the case,' says Dr Charles Bugliarello, chancellor of Polytechnic University in Brooklyn.

    'There is a constant see-saw between technological advances and societal advancement. Sometimes machines are ahead, then society adjusts. In other cases, society advances and then machines catch up. The essential issue is that machines have not had the benefit of the evolutionary process that biological organisms have had. Their development has been so rapid that we don't have a way of checking whether their development makes sense.'

For Jaron Lanier, a pioneer in the field of virtual reality, one remedy would be to reject machine-centred designs that supplant human decision-making. A simple example of this is the common function in word processors that inserts capital letters at the start of sentences and periods at the end.

Joy's concerns centre on three areas, many of which have been addressed by science-fiction writers and technologist-authors such as Ray Kurzweil in The Age of Spiritual Machines: When Computers Exceed Human Intelligence, Hans Moravec in Robot: Mere Machine to Transcendent Mind, and George Dyson in Darwin Among the Machines: The Evolution of Global Intelligence.

Joy predicts that by 2030, computers will be a million times more powerful than today, raising the possibility that robots may exceed humans in intelligence and have the ability to replicate themselves. Primitive examples of the kind of thing Joy is talking about include 'replication attacks' in the virtual world of computer viruses, the kind which have caused shutdowns of Internet sites and e-mail systems.
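[For reference, the 'million times' figure is straightforward Moore's Law arithmetic, assuming - as a rough illustration, since the article does not state a doubling period - that computing power doubles about every 18 months:]

$$2^{30/1.5} = 2^{20} = 1{,}048{,}576 \approx 10^{6}$$

That is, thirty years of doublings at that pace gives roughly a million-fold increase.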

He also points to nanotechnology - the science of creating minute machines atom by atom - that could in time produce tiny, self-reproducing mechanisms, and warns these could wreak havoc if released accidentally or by malicious intent. In genetics, he supposes that the widening availability of knowledge about powerful genetic engineering could lead to a 'white plague' - an artificially designed disease that can kill selectively.

    In making his argument, Joy finds he's walking in the shoes of Theodore Kaczynski, the lapsed scientist widely discredited as a madman who conducted a 17-year bombing campaign against scientists in protest at technology, the so-called 'Unabomber'.

    Though he unequivocally condemns Kaczynski's use of violence, he notes he now shares the jailed bomber's terror of the 'unintended consequences, a well-known problem with the design and use of technology'.

    Like Kaczynski, Joy fears the potential positive applications of science are outweighed by the dangers: 'I have always believed making software more reliable, given its many uses, will make the world a safer and better place,' Joy writes. 'If I were to come to believe the opposite, then I would be morally obligated to stop this work. I can now imagine that such a day may come.'



    -- Risteard Mac Thomais (uachtaran@ireland.com), March 20, 2000.


    Well, this thread may not be a hundred miles long like Perry talked about earlier but it's a beginning. ;-)

    Consumer,

    Don't be too hard on yourself - All of us participating on this thread are trying to make sense of this subject. :-)

    Risteard,

    Thanks for posting the UK article. I've put a copy of it up at the Human-Machine Assimilation forum along with another article titled: Transhuman: The Face of Things to Come?

    -- Jim Morris (prism@bevcomm.net), March 20, 2000.


Our robotics will become "not only deeply creative but deeply emotive."

    But many emotions seem to occur for no reason, or only in relation to certain environmental cues. The range of human emotions evoked from the last three centuries of classical music, for example, is vast. Nearly any human emotion that can be named is reflected somewhere in classical music. But how can a machine become emotive when it is the nature of emotions to be irrational, strange, fleeting, unprovoked?

If there is no longer any disease, aging, or environmental destruction (the promise of nanotechnology), what happens to grief, regret, and despair? In short, if there is no longer any human loss or suffering, what emotion could a machine feel other than a kind of neutral self-satisfaction?

    Moreover, if our greatest creative achievements derive not only from knowledge but from emotional impulse, why would machines want or need to create?

    What use would machines find in the creative arts?

    Feeling emotions such as anxiety or guilt can be messy, painful, and disruptive. Furthermore, there's no profit in it. A few "nanotechnologists," for example, may find Joy's sincere concern irrational and highly inconvenient.

    Emotions are useless. They stand in the way of progress.

    -- Celia Thaxter (celiathaxter@yahoo.com), March 20, 2000.


    Prof. Dr. Hugo de Garis, Head, Brain Builder Group, STARLAB NV, Belgium, has also written a book about the future, The Artilect War, which is perhaps even more sombre than Joy's. From the introduction:

    I believe that 21st century global politics will be dominated by the issue of "species dominance".

    I believe that 21st century technologies will allow the creation of "artilects" (artificial intellects, artificial intelligences, ultra intelligent machines), with intellectual capacities zillions of times greater than those of human beings.

    I believe that this technological possibility will force the issue of whether artilects should be built or not. [italics added]

    I believe that humanity will split into two major ideological camps, one in favor of building artilects (the "Cosmists") and those opposed (the "Terrans").

    I believe that the ideological disagreements between these two groups on this issue will be so strong, that a major "artilect" war, killing billions of people, will be almost inevitable before the end of the 21st century.

    The question most thinking people will be asking themselves within a few decades will be "Is humanity prepared to see its status as dominant species on our planet be undermined by the artilects?" Can we always be sure that the artilects, if we build them, will treat human beings in a way that will make us feel secure, despite their enormous intelligence levels, or, would we always harbor the suspicion that they might one day decide that human beings are so inferior to them, and such a pest on the surface of the planet that we should be exterminated?

    -- Celia Thaxter (celiathaxter@yahoo.com), March 20, 2000.

