The Singularity Challenge

I lead a silly life, and this journal has largely been a record of silly things. But about this, I'm very serious and I hope you'll lend me your attention for just a few minutes.

Mankind's future is inextricably intertwined with technology. Human physiology has changed relatively little over the millennia, but our capacity to change the world has grown exponentially because of the tools that we create. The vast power with which we change our environment, and indeed the whole world, comes not from what we can do with muscle and bone - but with minds and tools. And there's something big coming in the near future that's going to make gunpowder, the printing press, flight, and computers seem quaint. Some futurists call it "The Singularity". What's the singularity?

Our capacity to invent new things is limited by our intelligence and our knowledge. Each successive generation has the accumulated knowledge of the previous generations to build on, and as such, progress is linear. But human intelligence has not changed much - certainly it shows no such progression. The smartest person in the world today might be marginally smarter than the smartest person in the world 5,000 years ago. (Then again, he might not be.) For the most part, human intelligence is a fixed quantity, varying only within a narrow range. That's about to change.

Sometime in the near future, we'll create a computer that's as smart as a human being. With the advent of quantum computing and ever-increasing computing power, we get closer and closer to fully modelling the human brain. Even today, computer scientists have successfully modelled the brain of a mouse in its entirety. The whole thing! A virtual mouse-brain! It worked slowly - much slower than an actual mouse-brain - but it worked. From there it's just a matter of Moore's law catching up with the difference between mice and men. This is exciting for a hundred reasons - modelling cures for neurological afflictions, for instance, with experiments that would be unethical on a human but are no problem at all with a virtual model.
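
To put rough numbers on that "Moore's law catching up" idea, here's a back-of-the-envelope sketch in Python. Both inputs - the speed gap and the doubling period - are assumptions chosen purely for illustration, not figures reported from the actual simulation:

    # How long until steady doubling closes a raw-speed gap?
    # Both numbers below are illustrative assumptions, not reported figures.
    import math

    speed_gap = 10_000     # assume the virtual brain runs 10,000x slower than real time
    doubling_years = 1.5   # assume computing power doubles every 1.5 years

    years = math.log2(speed_gap) * doubling_years
    print(f"gap closes in roughly {years:.0f} years")  # ~20 years

Swap in your own guesses for either number; the point is that even a gap of four orders of magnitude melts away in a couple of decades of steady doubling.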

But that's not even 1/1000th of what's exciting about this. No, it's the singularity that's thrilling - because there will come a moment when mankind creates an intelligence that is greater than his own. Intelligence was always the limiting factor on human progress - but no longer. Soon there will be, quite literally, no limit to human progress. Because the first thing a higher-order intelligence could do would be to design an even higher-order intelligence. Each successive generation would be more intelligent, and thus capable of improving on itself by a greater margin than the generation before it. That means the growth of intelligence will be exponential, not linear. The change and progress of the last thousand years will be as nothing compared to the changes of the next hundred.
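
The difference between the two growth curves is easy to see in a toy model. This is just a sketch of the argument above - the 10% improvement per generation is an arbitrary assumption, not a prediction:

    # Toy model: fixed-intelligence designers make constant-sized steps of
    # progress; self-improving designers make steps that compound.
    RATE = 0.10  # assumed improvement per generation - purely illustrative

    fixed, compounding = 1.0, 1.0
    for generation in range(50):
        fixed += RATE              # linear: same-sized step every generation
        compounding *= (1 + RATE)  # exponential: each step builds on the last

    print(f"fixed intelligence after 50 generations: {fixed:.1f}x")       # 6.0x
    print(f"self-improving after 50 generations:     {compounding:.1f}x") # ~117.4x

Same number of generations, same effort per step - the only difference is whether each generation gets to improve the improver.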

This is the most important thing that will have happened in human history. And you can be a part of it. This isn't science fiction, and it's entirely possible it will happen in your lifetime. Are the hairs on the back of your neck standing up? They are for me. Limitless power. Instantaneous travel. No more aging or disease. Humanity will be forever altered by the singularity - and there are many people working steadily towards making this happen. Today, the Singularity Institute for Artificial Intelligence announced that Peter Thiel (co-founder of PayPal and a prominent philanthropist) will match up to $400,000 in contributions to their work to bring about the Singularity.

The singularity isn't just the pipe-dream of a bunch of Slashdotters and transhumanists - there have been summits at Stanford, books written, and a lot of serious, academic thought about how the future of AI can be made to serve, rather than threaten, humanity. I'm a big believer in this. I'm going to contribute, and I hope you will too.

Comments

laughingwolf042
May. 10th, 2007 10:37 pm (UTC)
I have to say that I agree. It's an exciting thought that we might be able to make the world better, but it's also frightening to think (ha-ha) that we might create something that is more intelligent than its creators - and that it would then create something even smarter than itself, ad nauseam; this is simply because we would then lose the ability to control it. Super-smart and thus uncontrollable AI? Sounds like a dangerous idea to me. I'm all for changing and bettering the world. Change is good. But rapid, radical change - which this will quickly become, should it be successful - leads, I think, to oversights and miscalculations, which could be deadly.
aghrivaine
May. 10th, 2007 10:39 pm (UTC)
The thing is, it will happen. And it's a good thing, not a bad thing. So, best to guide it into as human-friendly and ethical a way as possible, right?
laughingwolf042
May. 10th, 2007 10:52 pm (UTC)
Whether it is a good thing or a bad thing remains to be seen. My point is, I don't think that can be determined for certain until it actually happens and we see the results of our actions. Granted, greater minds than mine have put much more thought into this than I have and undoubtedly understand the idea much better than I do, but it just seems like a dangerous and frightening concept to me. I have to wonder: are the rewards of such a venture worth the risks - which are numerous and not exactly insignificant? At this point, it is a hypothetical debate, because, as you said, it's not a question of "if", only "when". In my mind, the real question to be examined here is "should". Just because we have the capability to create something like this does not automatically mean that we should do so.

aghrivaine
May. 10th, 2007 11:00 pm (UTC)
In my mind, the real question to be examined here is "should". Just because we have the capability to create something like this does not automatically mean that we should do so.

Progress is not something it's really possible to stop. When human cultures have tried to do so in the past, it's been disastrous, and generally costly for all involved.

Let us not stand in the way of progress - let's guide it to be as constructive and beneficial as possible, no?
laughingwolf042
May. 10th, 2007 11:11 pm (UTC)
I have nothing against progress. A world that does not move forward and does not change and adapt will stagnate and die. I do think, however, that there are better ways to move forward than creating a hyper-intelligent AI, which will then create more AIs of even greater intelligence. I was discussing this with a coworker of mine and she suggested using the funds to research ways to use the massive amounts of grey matter that humans already possess and yet are physiologically unable, as yet, to use or access. I think that's a brilliant idea. I'd much rather have that than an AI system with no system of morals in place to guide the decisions and actions it takes. Especially since, sometimes, the fastest and most efficient solution to a conflict may be the most brutal, heartless and immoral - logic without compassion - the most inhuman, which is exactly what this AI would be.
aghrivaine
May. 10th, 2007 11:16 pm (UTC)
Well, the thing is - as an inevitable side-effect of our ever-increasing computing power, we will eventually create higher-order AI. Without taking specific steps to actively limit or squash that research, it will happen, should happen, and will be a good thing.

And, by the way - the thing about humans not using all of our brains? Just a myth - we use it all, pretty much all the time. For the most part we're processing visual input, which takes up a lot of our cranial rocket-power.

Our brains are limited. Computer brains are not.

Ad astra per computers!
laughingwolf042
May. 10th, 2007 11:32 pm (UTC)
Ok, thanks for letting me know about the myth, I appreciate that. But that doesn't change or answer the question of why we don't fund research to help us think *better* for ourselves, rather than creating an AI that will function with cold, hard logic, encourage a drone-like mentality (since it would be smarter, so obviously it would know better), and that has a very real possibility of being extremely dangerous for society in general. You seem so certain that this will be a good thing, and so absolutely unwilling to consider that it *might* not be - and that, in itself, only further lends credence to my point.
aghrivaine
May. 10th, 2007 11:39 pm (UTC)
You're operating with an implicit assumption - that an AI would necessarily be amoral, and further that people would obey it without thought.

The Singularity isn't about creating a machine that replaces human thought - but rather one that enhances it. *Human* progress is limited by human intelligence and human knowledge. Human knowledge increases, but human intelligence does not. But now it will. Progress will be far less limited than before.

How many human inventions are either universally good or universally bad? As always with any tool, it depends on what the people who use it decide to do with it. Therefore, I want the people who develop higher-order AI to be responsible, ethical people with a sincere desire for the betterment of humanity - like the people at the Singularity Institute.

If you can find and donate to research to make humans think better, I think that's a GREAT idea. I've never heard of any such thing, though. I am advocating the research of the Singularity Institute as being desirable, and worth supporting - and ultimately something that will alter human history forever, for the better.

You are free to disagree - though it seems your disagreement is based on a fear of being supplanted, or a general resistance to change and progress. While such fears are rational - the folks at the Singularity Institute are actually the answer to those fears - not the cause of them.
laughingwolf042
May. 10th, 2007 11:48 pm (UTC)
::nod:: I do disagree and I think that we would be better served by simply agreeing to disagree. As a parting thought, however, as I stated before and will now re-state, I do not fear or have issue with change OR progress. Both are vital for our society and planet to function. I do think, however, (as I also stated previously) that radical and rapid change - such as this - is dangerous and will lead only to disaster.
aghrivaine
May. 10th, 2007 11:51 pm (UTC)
Radical and rapid change *will* happen. It is inevitable, unless we are repressive, like the government currently is with stem cell research. You may not like it, but it will happen.

I strongly believe that we *must* guide this process to an end which serves the greater good. The same goes for nanotechnology, another vastly powerful area of research that will change the world.
bookofnod
May. 10th, 2007 11:51 pm (UTC)
I'm going to respond to Aghrivaine:

"You're operating with an implicit assumption - that an AI would necessarily be amoral, and further that people would obey it without thought."

Computers are based on programming. Systems of logic and programming. Hate to say it, but that's the truth. Explain to me, if I am wrong, how a computer or computer program can come to a solution that's NOT based on the variables it's programmed with.


"The Singularity isn't about creating a machine that replaces human thought - but rather that enhances it. *Human* progress is limited by human intelligence, and human knowledge. Human knowledge increases, but human intelligence does not. But now it will. Progress will be less limited than previously."

If you believe in evolution, then intelligence isn't necessarily limited. I think genetics and biology have more of a chance of increasing intelligence than creating a computer that does it better. We use about 10% of our brain at one time. But what if we isolated the genes that trigger evolution? Or how about the genes that determine intelligence? Tell me how we can limit ourselves when we better ourselves (as opposed to bettering external systems)?

"How many human inventions are either universally good or bad? As always, with any tool, it depends on how the people who use them decide. Therefore, I want the people who develop higher order AI to be responsible, ethical people with a sincere desire for the betterment of humanity - like the people at the Singularity Institute."

Have we not enough authoritarian people in this society who would simply "follow the leader", regardless of how the programming may be flawed or the judgement lacking in ethics? And again, why improve external systems of thought/intelligence instead of improving our own?


"If you can find and donate to research to make humans think better, I think that's a GREAT idea. I've never heard of any such thing, though. I am advocating the research of the Singularity Institute as being desirable, and worth supporting - and ultimately something that will alter human history forever, for the better."

I'd donate for genetic research and engineering. Not this. We're more likely to map and identify genes in our lifetime.

"You are free to disagree - though it seems your disagreement is based on a fear of being supplanted, or a general resistance to change and progress. While such fears are rational - the folks at the Singularity Institute are actually the answer to those fears - not the cause of them."

No fear. But a logical argument based on the behavior of our society, the needs of it, and, well... simple genetics.
aghrivaine
May. 10th, 2007 11:58 pm (UTC)
Your argument rests on some factual inaccuracies, which I've explained to both of you and which you persist in. There's no sense in having an argument - just don't donate!

In fact, I'm NOT going to argue - it's my New Year's resolution not to argue with anyone about anything. I've stated my case, and you've misconstrued and misunderstood. Please, there's nothing more to say, you shouldn't contribute to any research you don't support. Find a worthy cause and make a difference.

And when you DO donate for genetic research and engineering, let me know to whom you make your donation, I'm all about supporting the sciences.
maeris
May. 11th, 2007 04:57 pm (UTC)
If you believe in evolution (or rather, are educated about the subject), you know that evolution does not necessitate a move forward (as we understand "forward" to be). Evolution can move in either direction at any given time. So we could actually be getting stupider as we speak.

And we may very well be - for some reason, the untruth that humans only use 10% of their brains continues to be perpetuated. In fact, Aghrivaine just stated above that it WAS a myth, and you still typed it anyway.

So I'll say it again: there is absolutely no truth to the statement that humans only use about 10% of their brains. Those in the neurosciences have all but mapped out each part of the human brain and its function. Thanks to technological advances like MRI and PET scans, we now know that all of our brain gets used regularly.
bookofnod
May. 10th, 2007 11:41 pm (UTC)
I agree.... that boobs are cool....

Oh... and while AI is an interesting concept, think about the many, many other problems in society, science, economics, and technology that we need to deal with first. While it's an interesting idea to "build a brain" that might aid us or solve problems better than we've been able to, I have to agree with the awesome and beautiful Laughingwolf on this one: Why? Why not perfect ourselves first and use our full potential?

Also, a robot or computer functions with logic (or a formula, for anyone who understands computer programming). It's based on variables, not on the things that make humans unique, such as (sometimes rational, sometimes irrational) feelings and moral/ethical systems. Take for example the situation in Iraq: a human being might find that the solution to the problem is simply to withdraw our troops, while a robot with no moral or ethical system or true regard for human life might see a mass slaughter or nuclear action as the most logical and efficient solution. I don't think AI would necessarily result in a "Matrix"-like future, but perhaps one where people "follow" the machine's ideas (disregarding morality to an extent), since it's designed to be better or more accurate than its human counterparts.
aghrivaine
May. 10th, 2007 11:46 pm (UTC)
The whole point of higher-order AI is that it is *more* sophisticated than the human mind, not less. There is no reason why such a machine shouldn't also value human life - probably even more highly than many humans do, who are generally untroubled by the suffering of others unless it touches them or their immediate relations. AI, on the other hand, can be built from the ground up to enhance human thought - but not to endanger human values. Just as I said above - the Singularity Institute is the solution to those fears, not the cause of them.

If you intend to contribute money or time to causes that you find more pressing and more important to you, personally, I can't but applaud that. Please do contribute to worthy causes!

If instead, you're just afraid of computers, afraid of the future - and trying to pick a fight, I'd simply ask you to desist.
