
The Singularity Challenge

I lead a silly life, and this journal has largely been a record of silly things. But about this, I'm very serious and I hope you'll lend me your attention for just a few minutes.

Mankind's future is inextricably intertwined with technology. Human physiology has changed relatively little over the millennia, but our capacity to change the world has grown exponentially because of the tools that we create. The vast power with which we change our environment, and indeed the whole world, comes not from what we can do with muscle and bone - but with minds and tools. And there's something big coming in the near future that's going to make gunpowder, the printing press, flight, and computers seem quaint. Some futurists call it "The Singularity". What's the Singularity?

Our capacity to invent new things is limited by our intelligence and our knowledge. Each successive generation has the accumulated knowledge of previous generations to build on, and as such, the progression is linear. But human intelligence itself has not much changed - certainly no linear progression. The smartest person in the world today might be marginally smarter than the smartest person in the world 5,000 years ago. (Then again, he might not be.) For the most part, human intelligence is a fixed quantity, confined to a range that has not varied. That's about to change.

Sometime in the near future, we'll create a computer that's as smart as a human being. With the advent of quantum computing and ever-increasing computing power, we get closer and closer to fully modelling the human brain. Even today, computer scientists have successfully modelled the brain of a mouse in its entirety. The whole thing! A virtual mouse-brain! It worked slowly - much slower than an actual mouse-brain - but it worked. From there it's just a matter of Moore's law catching up with the difference between mice and men. This is exciting for a hundred reasons - testing cures for neurological afflictions, for instance, through experiments that would be unethical on a human but harmless on a virtual model.
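
To put "Moore's law catching up" in rough numbers, here's a back-of-the-envelope sketch in Python. Both figures - the compute gap between a mouse brain and a human brain, and the 18-month doubling period - are illustrative assumptions on my part, not measurements:

    # Rough sketch: years for compute to scale from a mouse-brain
    # simulation to a human-brain simulation under Moore's law.
    # Both figures below are assumed for illustration, not measured.
    import math

    compute_gap = 1000.0   # assumed human-brain/mouse-brain compute ratio
    doubling_years = 1.5   # classic Moore's law doubling period

    doublings = math.log2(compute_gap)
    print(f"~{doublings:.1f} doublings, ~{doublings * doubling_years:.0f} years")
    # -> ~10.0 doublings, ~15 years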

But that's not even 1/1000th of what's exciting about this. No, it's the singularity that's thrilling - because there will come a moment when mankind creates an intelligence that is greater than his own. Intelligence was always the limiting factor on human progress - but no longer. Soon there will be no limit to human progress at all. Because the first thing a higher-order intelligence could do would be to design an even higher-order intelligence. Each successive generation would be more intelligent, and thus capable of making a greater leap than the generation before it. That means that the growth of intelligence will be exponential, not linear. The change and progress of the last thousand years will be as nothing compared to the changes of the next 100.
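
As a toy illustration of that linear-versus-exponential difference, here's a sketch in which a fixed designer makes constant progress each generation while a self-improving designer compounds its gains. The 10% per-generation improvement is an arbitrary number I picked; only the shape of the curves matters:

    # Toy model of recursive self-improvement: a fixed designer adds a
    # constant step; a self-improving designer compounds its gains.
    # The 10% per-generation gain is an arbitrary illustrative choice.
    fixed, compounding = 1.0, 1.0

    for generation in range(100):
        fixed += 0.1          # constant step: linear growth
        compounding *= 1.1    # the designer improves too: exponential growth

    print(f"after 100 generations: fixed {fixed:.1f}, "
          f"compounding {compounding:.1f}")
    # -> after 100 generations: fixed 11.0, compounding 13780.6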

This is the most important thing that will have happened in human history. And you can be a part of it. This isn't science-fiction, and it's entirely possible it will happen in your lifetime. Are the hairs on the back of your neck standing up? They are for me. Limitless power. Instantaneous travel. No more aging or disease. Humanity will be forever altered by the singularity - and there are many people working steadily towards making this happen. Today, the Singularity Institute for Artificial Intelligence announced that Peter Thiel (co-founder of PayPal and a prominent philanthropist) will match up to $400,000.00 in contributions to their work to bring about the Singularity.
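
And a dollar-for-dollar match is simple arithmetic: every dollar given is doubled until the cap is reached. A minimal sketch - the $400,000 cap is from the announcement, while the sample donations are made up:

    # Effect of a dollar-for-dollar matching grant with a fixed cap.
    # The $400,000 cap is from the announcement; the donations are made up.
    MATCH_CAP = 400_000.00

    def matched_total(donations):
        """Total received: the donations plus the match, up to the cap."""
        total = sum(donations)
        return total + min(total, MATCH_CAP)

    print(matched_total([50.0, 100.0]))  # -> 300.0: each dollar doubled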

The singularity isn't just the pipe-dream of a bunch of slashdotters and transhumanists - there have been summits at Stanford, books written, and a lot of serious, academic thought about how the future of AI can be made to serve, rather than threaten, humanity. I'm a big believer in this. I'm going to contribute, and I hope you will too.

Comments

houseofvirgo
May. 10th, 2007 06:37 pm (UTC)
The hairs on the back of my neck are standing up, but not for the reason you think - and for more than a few reasons:

1. With current inequalities in the world - not just in intelligence, but also social and economic - this will further divide people rather than lift them as a whole. If the world were so benevolent, I would like to see this happen. But most people who are not connected to the internet or do not have an advanced education are not going to see past their own basic needs for survival and immediate wants. Lifting the intelligence of humankind does not necessarily create a bias towards compassion. Such a project will still require stewards and will be marked by individual hunger for power.

2. With such inequality, there will be a scenario where AI is used to do society's thinking for it, since it will be presented that these structures "know better" than the inventors themselves. An individual will not be permitted to learn anything naturally or through trial and experience. People tend to move towards convenience, and a society infused with AI will have its thoughts thought for it, rather than people thinking for themselves.

3. In a capitalist society, no invention or progress can be attained without financial backing. Should the Singularity happen, there is no doubt that corporations will use such agendas to garner profit and significantly influence thought processes. Battles over artificial intelligence will often be linked to intellectual property, possibly halting the very progress that you would like to see happen.

Those are my concerns. I like the idea of the Singularity but I do not trust anyone to carry it out so benevolently.
aghrivaine
May. 10th, 2007 06:43 pm (UTC)
Inoculations, light-bulbs, airplanes, telephones - regardless of individual access on a day-to-day basis, these have made the world better for pretty much everybody.

The rate at which people benefit may be uneven, but the point is that the progress does occur. Should researchers not develop treatments for cancer because someone in Sub-Saharan Africa won't have access to them? Should we scrap developments in space travel because not everyone will be able to jump on a shuttle?

So this is important - you could bury your head in the sand and leave the Singularity to well-funded corporations and governments, or you could support ethical, non-profit organizations that seek to develop and guide the process. Which is superior?
houseofvirgo
May. 10th, 2007 06:52 pm (UTC)
So this is important - you could bury your head in the sand and leave the Singularity to well-funded corporations and governments, or you could support ethical, non-profit organizations that seek to develop and guide the process. Which is superior?

In an ideal world, the Singularity would be brought about by a combination of corporations, government, and an outside organization (an NPO) in a checks-and-balances scenario. Funding and research would occur corporately, government would regulate to ensure that reasonable profit and investment are made, and the NPO would counter-regulate (is that a word?) both to make information public and to keep it accessible to people who want to know about and participate in it.

But your point is well-made about the uneven benefit and the various inventions.
aghrivaine
May. 10th, 2007 06:56 pm (UTC)
Here is a chance for you to support the NPOs in guiding a human-friendly and ethical approach to a world-changing event.

Like nanotechnology, AI offers a huge danger along with huge potential. It's very important that all of us be aware of, and as much as possible, encourage an ethical approach to progress.
(Deleted comment)
laughingwolf042
May. 10th, 2007 10:37 pm (UTC)
I have to say that I agree. It's an exciting thought that we might be able to make the world better, but it's also frightening to think (ha-ha) that we might create something more intelligent than its creators - which would then create something even smarter than itself, ad nauseam - simply because we would then lose the ability to control it. Super-smart and thus uncontrollable AI? Sounds like a dangerous idea to me. I'm all for changing and bettering the world. Change is good. But rapid, radical change - which this will quickly become, should it be successful - leads, I think, to oversights and miscalculations, which could be deadly.
aghrivaine
May. 10th, 2007 10:39 pm (UTC)
The thing is, it will happen. And it's a good thing, not a bad thing. So, best to guide it in as human-friendly and ethical a direction as possible, right?
laughingwolf042
May. 10th, 2007 10:52 pm (UTC)
Whether it is a good thing or a bad thing remains to be seen. My point is, I don't think that can be determined for certain until it actually happens and we see the results of our actions. Granted, greater minds than mine have put much more thought into this than I have and undoubtedly understand the idea much better than I do, but it just seems like a dangerous and frightening concept to me. I have to wonder: are the rewards of such a venture worth the risks - which are numerous and not exactly insignificant? At this point it is a hypothetical debate because, as you said, it's not a question of "if", only "when". In my mind, the real question to be examined here is "should". Just because we have the capability to create something like this does not automatically mean that we should do so.

aghrivaine
May. 10th, 2007 11:00 pm (UTC)
In my mind, the real question to be examined here is "should". Just because we have the capability to create something like this does not automatically mean that we should do so.

Progress is not something that it's really possible to stop. When, in the past, human cultures have tried to do so, it's been disastrous, and generally costly for all involved.

Let us not stand in the way of progress - let's guide it to be as constructive and beneficial as possible, no?
laughingwolf042
May. 10th, 2007 11:11 pm (UTC)
I have nothing against progress. A world that does not move forward, change, and adapt will stagnate and die. I do think, however, that there are better ways to move forward than creating a hyper-intelligent AI, which will then create more AIs of even greater intelligence. I was discussing this with a coworker of mine, and she suggested using the funds to research ways to use the massive amounts of grey matter that humans already possess and yet are, as yet, physiologically unable to use or access. I think that's a brilliant idea. I'd much rather have that than an AI system with no system of morals in place to guide the decisions and actions it takes. Especially since, sometimes, the fastest and most efficient solution to a conflict may be the most brutal, heartless, and immoral - logic without compassion - the most inhuman, which is exactly what this AI would be.
aghrivaine
May. 10th, 2007 11:16 pm (UTC)
Well, the thing is - as an inevitable side-effect of our ever-increasing computing power, we will eventually create higher-order AI. Without specific steps taken to actively limit or squash that research, it will happen, should happen, and will be a good thing.

And, by the way - the thing about humans not using all of our brains? Just a myth - we use it all, pretty much all the time. For the most part we're processing visual input, which takes up a lot of our cranial rocket-power.

Our brains are limited. Computer brains are not.

Ad astra per computers!
crapdaddy
May. 10th, 2007 09:39 pm (UTC)
Ummm...

"boobs are cool".

Carry on. Carry on.
maeris
May. 11th, 2007 05:11 pm (UTC)
I agree that the advancement of AI is both inevitable and smart (to an extent). I also agree that since my previous statement is true, it should be handled delicately and ethically.

And perhaps I've seen too many movies, but I just envision the good, ethical researchers at the Institute spending years developing this mind-bending technology and having it all fall apart in the largest, bloodiest AI uprising ever witnessed by humankind (not that there have been any to date). After all, if the AI is smarter than us, then it will certainly know how to destroy us and might enjoy doing so. I mean, if humans take such delight in killing things, why not the AI?

Though I did recently hear a stand-up bit about how scientific progress has seemed to plateau in recent decades, and the guy had a point. We're lacking in scientific breakthroughs on all fronts. (The U.S. especially.) So, is this because our intelligence has plateaued, or is it because we just don't throw money at the sciences as we once did? Or something else?

I saw an interview with the guy who mapped the genome (name eludes me; recently died) on Charlie Rose a few years ago, and he said that we could absolutely cure cancer if the researchers were given the money. They know how to do it, but they don't have the money to put it into practice in the labs, test it, etc.

So that creates a problem for me, the potential funder - do I give money to an NPO that will create computers smarter than me (and possibly spew out a cure for cancer), or do I give my money to the human researchers who are simply lacking the funding to rid the world of the disease? I don't know.
aghrivaine
May. 11th, 2007 05:18 pm (UTC)
Well, it's a dilemma. I do think the Singularity is going to change human history. I think we have to proceed with the assumption that it won't herald the apocalypse. I mean, if it DOES, there's jack we can do about it, right? But if it doesn't, let's get it here ASAP!

I don't know about your "lack of scientific progress" - the discoveries in astronomy over the past ten years have been breathtaking. Quantum entanglement. Practical fusion reactors. SCIENCE!

There's a lot of depth to science, these days - no lightning leaps forward due to savants, but lots of diligent workers filling in the cracks. It's Ben Franklin's science - patience and hard work. I'm all for it.
maeris
May. 11th, 2007 05:29 pm (UTC)
Yes, you're right. But at the same time, we are struggling to fund space missions to go out and explore all this stuff we're discovering. I mean, god, how long has it been since we've been to the moon? Granted, there's not really been a reason to go back, but now we're in crunch time because we want to go to Mars and have to set up a moon base first. We should have had that done already!

What was the last disease we cured? The last big one - you know, like polio or cholera. I honestly don't know (and sincerely hope I'm just blanking).

I guess I get frustrated in general because we have no qualms about floating money into religious institutions, but when something comes up that CAN really make a difference we're like uhhhh...maybe next month...

I guess the strongest case here is that AI advancements are going to happen anyway, so we might as well have our voices heard (by way of our checkbooks) that we support the ethical variety.

If I send them a check the memo line is going to say, "Please don't turn this into a Jerry Bruckheimer movie."