singularity skeptic

by fledgling otaku on March 2, 2008

I am not a Luddite by any means, but I just have to state my position plainly: I think all talk of a “Singularity” (of the Kurzweil variety) is nothing more than science fiction. I do not have an anti-Singularity manifesto, just a skeptical reaction to most of the grandiose predictions by Singularitarians. I’d like to see someone articulate a case for the Singularity that isn’t yet another fancy timeline of assertions about what year we will have reverse-engineered the human brain, or have VR sex, or foglets, or whatever. I am also leery of the abusive invocation of physics terms like “loop quantum gravity” and “energy states” as if they were magic totems (Heisenberg compensators, anyone?).

If I were to break down the concept of the Singularity into components, I’d say it relies on (a) genuine artificial intelligence and (b) transhumanism; the Singularity would be the supposed union of the two. But I guess it’s not much of a surprise that I am an AI skeptic as well. AI is artificial by definition – a simulation of intelligence. AI is an algorithm, whereas true intelligence is something much less discrete. I tend towards a stochastic interpretation of genuine intelligence rather than a deterministic one, myself – akin to Hofstadter’s Careenium model, though even that was too easily discretized. Let me invoke an abused physics analogy here – I see artificial intelligence as a dalliance with the energy levels of an atom, whereas true intelligence is the complete (and for all practical purposes, infinite) energy state of a 1 cm metal cube.

The proponents of AI argue that if we just add levels of complexity, eventually we will have something approximating the real thing. The approach is to add more neural-net nodes, add more information inputs, and [something happens]. But my sense of the human brain (which is partly religious and partly derived from my career as an MRI physicist specializing in neuroimaging) is that the brain isn’t just a collection of N neurons wired a certain way. There are layers, structures, and systems within it whose complexities multiply against each other.

Are there any neuroscientists working in AI? Do any AI algorithms make an attempt to include structures like the “arcuate fasciculus” or the “basal ganglia” in their models? Is there any understanding of the difference between gray and white matter? I don’t see how a big pile of nodes is going to have any more emergent structure than a big pile of neurons on the floor.

Then we come to transhumanism. Half of transhumanism is the argument that we will “upload” our brains or augment them somehow, but that requires the same knowledge of the brain as AI does, so the same skepticism applies. The other half is physical augmentation, but here we get to the question of energy source. I think Blade Runner did it right:

Tyrell: The light that burns twice as bright burns half as long. And you have burned so very very brightly, Roy.

Are we really going to cheat thermodynamics and get multipliers to both our physical bodies and our lifespans? Or does it seem more likely that one comes at the expense of the other? Again, probably no surprise here that I am a skeptic of the Strategies for Engineered Negligible Senescence (SENS) program promoted by Aubrey de Grey – the MIT Technology Review article about his work gave me no reason to reconsider my judgment that he’s guilty of exuberant extrapolation (much like Kurzweil). I do not dismiss the research, but I do dismiss the interpretation of its implications. And do they address the possibility that death itself is an evolutionary imperative?

But OK. Let’s postulate that death can simply be engineered away; that human brains can be modeled in the Cloud and data can be copied back and forth from wetware to silicon. Then what do we become? A race of gods? Or just a pile of nodes, acting out virtual fantasies until the heat death of the universe pulls the plug? That’s not post- or trans-humanism, it’s null-humanism.

I’d rather have a future akin to Star Trek, or Foundation, or even Snow Crash – one full of space travel, star empires, super hackers, and nanotech. Not a future where we all devolve into quantum ghosts – or, worse, are no better than the humans trapped in the Matrix, living out simulated lives for eternity.

{ 17 comments… read them below or add one }

Jeffrey Boser March 2, 2008 at 10:11 am

I have yet to see any AI research address the real development question of intelligence, that of expectation and irritation.

Every single thing computers have done so far is carry out instructions given to them by a human. Not one – not a single solitary computer anywhere – has any awareness of its environment, any expectation that things are as they should or shouldn’t be.

There is no motivation for a computer to do anything. If it is not told to do something, it sits there; if told to do something, it does just what it is told and no more. It is not capable of getting irritated at its environment and noticing that there is a problem for it. It can’t even recognize that there is a problem in its environment.

It cannot notice, it cannot choose to act, it cannot explore possible problem solving techniques, even if we laid those out for it.

People modeling neurons are doing AI research ass-backwards, as far as I can tell. As humans we cannot deal with complexity very well, and even evolved programming, such as field-programmable gate arrays, is task-oriented. I sometimes wonder if AI researchers ever had children. Until they can somehow ‘irritate’ a computer, no ‘intelligence’ will ever be produced.


FhnuZoag March 2, 2008 at 9:08 pm

Well, it’s certainly true that the origin of awareness is a question outside the domain of science. Maybe it hasn’t been addressed by the AI makers, but I’m not sure that it can be. For example, I could respond to the previous poster by suggesting that *he* is not aware either, but is just giving the appearance of awareness by following various genetic codes – it’s easy enough to script a computer to Complain() when intAnnoy > 10. It’s worth noting that even if we appeal to non-physical sources of intelligence, the means by which we normally distinguish conscious from unconscious (by asking whether it looks physically human, whether it gives the right responses in text, etc.) are totally materialistic.
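[Editor’s sketch: the “Complain() when intAnnoy > 10” quip above really is trivially scriptable. The following toy is purely illustrative – the class and names (Agent, annoyance, complain) are invented, not from any real AI system.]

```python
# Toy sketch: outward "irritation" as a scripted threshold on internal state.
# All names here are illustrative inventions, not a real system's API.

class Agent:
    def __init__(self, threshold=10):
        self.annoyance = 0          # internal "intAnnoy" counter
        self.threshold = threshold

    def perceive(self, event_is_bad):
        # Each bad event nudges the internal state upward.
        if event_is_bad:
            self.annoyance += 1

    def complain(self):
        # Behaviour indistinguishable, from outside, from "being irritated".
        return self.annoyance > self.threshold

agent = Agent()
for _ in range(11):
    agent.perceive(event_is_bad=True)
print(agent.complain())  # prints True: annoyance (11) exceeds threshold (10)
```

Whether such a script has anything to do with genuine irritation is exactly the question at issue; the point is only that the observable behaviour is cheap to produce.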

Then again, it is my opinion that true AI is essentially a technical problem, not a metaphysical one. (The practicalities might prevent it from ever being done, sure.) My reason for believing this is a thought experiment:

First, note that the behaviour of individual neurons is well understood. Hence, in theory, we can take a person and, while he is still alive, replace one neuron with an artificial copy that behaves identically. We would all agree that the awareness of the individual isn’t affected. Now repeat, step by step, until every neuron in his brain is artificial. Now the question: at what point does awareness go away? There are three options: there is a sudden switching point, which seems kind of absurd; awareness gradually declines, which doesn’t match the 1-0 feel of consciousness; or it never goes away, which seems to me the most convincing expectation.

Another argument is to realise that artificial intelligence already exists – in our children. The human reproductive system is ultimately just chemistry, DNA just a molecule, so it seems silly to say that we can’t consciously do using metal and electricity what we can already unconsciously do using amino acid bases, sugar molecules and a bit of heat and water, etc.


fledgling otaku March 2, 2008 at 11:11 pm

Jeffrey, I think that “irritation” is something that can be modeled in one sense, i.e., as a problem. For example, the Mars rover robots have a rudimentary AI that assists in navigating around obstacles en route to a destination. As far as the emotion of irritation is concerned, you are assuming that emotion is a necessary part of intelligence, which may or may not be true. I don’t know.
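[Editor’s sketch: the kind of rudimentary obstacle-avoidance mentioned above, reduced to a toy – breadth-first search around blocked cells on a grid. The grid, start, and goal are invented for illustration; real rover navigation is far more involved.]

```python
# Toy obstacle avoidance: breadth-first search on a grid.
# 0 = passable terrain, 1 = obstacle.
from collections import deque

def route(grid, start, goal):
    """Return a list of cells from start to goal, steering around obstacles."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(path + [(nr, nc)])
    return None  # destination unreachable

# A wall in the middle column forces the path to thread around it.
terrain = [[0, 1, 0],
           [0, 1, 0],
           [0, 0, 0]]
print(route(terrain, (0, 0), (0, 2)))
```

Note how the “problem” here (the wall) is handled without anything resembling irritation – which is the distinction being drawn above.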

Fhnu, your thought experiment is awesome. But I think it’s a lot easier to magically replace a neuron with an “artificial” one than it is to do so in practice. You are also implicitly assuming that neurons are discrete entities that function individually, but that is not quite true. A given neuron is enmeshed in its neighborhood in such a way that by removing it you’d cause damage to surrounding ones, or at the very least upset the regional equilibrium that makes that local area’s processing function. A neuron by itself is easy to model, but no neuron is an island, and that’s what I mean by multiplicative complexity.

As a corollary to your thought experiment, replace one human being with an AI. Then two, then N… at what point does human civilization become something else? The question is flawed, of course.


fledgling otaku March 2, 2008 at 11:13 pm

The human reproductive system is ultimately just chemistry, DNA just a molecule, so it seems silly to say that we can’t consciously do using metal and electricity what we can already unconsciously do using amino acid bases, sugar molecules and a bit of heat and water, etc.

True, but you might as well argue that a human being is just a pile of water and some trace materials – given a big pile of that stuff, you don’t get a human, either :) The process by which those raw materials become more is encoded as information in DNA, and is also an emergent property. That’s all part of billions of years of evolution, and not something we can just replicate without effort. It’s like breaking an encryption code – just numbers, right? But which numbers?


TallDave March 3, 2008 at 11:44 am

Are there any neuroscientists working in AI?

Quite a few, actually.


TallDave March 3, 2008 at 11:47 am

Every single thing computers have done so far is carry out instructions given to it by a human.

Every single thing humans have done is carry out instructions given to us by evolution.

It cannot notice, it cannot choose to act, it cannot explore possible problem solving techniques, even if we laid those out for it.

Sure it can. If we can give it eyes, it can notice things. If we give it motivations, it can choose between them (just as we choose between eating, sex, work, etc at a given moment). Computers can already solve chess problems better than we can.


fledgling otaku March 3, 2008 at 5:08 pm

I guess that was a rhetorical question – I kind of assumed there were some, but the leading proponents of AI like Kurzweil betray no knowledge of neuroscience in their public writings that I have seen. My question about arcuate fasciculi, basal ganglia, and gray matter is a loaded one – these aren’t minor details when it comes to brain function. If we plan to “evolve” AI towards the brain as a target, then we need to actually model the brain itself. I don’t see this, but if I am wrong please point me in the right direction.

I totally disagree that “every thing humans do is the result of instructions given us by evolution”. That’s arguing that humans are no more sentient than machines are. If you believe that AI can happen, then you have to believe in the I part first. If you think we are just bags of meat and chemical stimuli, then what point is there in developing AI?

We certainly can give an artificial organism sensors and program it for behaviors, but the choice is going to be like your Netflix recommendations – derived from weighted averages, not a choice. Case in point is chess – computers don’t play chess. They massively brute-force the game to assess every possible branching move N moves downstream of the present board, calculate a metric of how “good” the outcome is along each of those branches, and then “choose” the highest-scoring branch. That’s chess as algorithm, not chess as a game.
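[Editor’s sketch: the brute-force procedure described above is essentially minimax search. This toy version works over an abstract game supplied as functions; real chess engines add elaborate evaluation and pruning on top of exactly this skeleton. All names are illustrative.]

```python
# Minimal minimax: walk every branch `depth` plies deep, score the leaves
# with a heuristic, and "choose" the highest-scoring move.

def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Exhaustively score `state` by searching `depth` plies ahead."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)  # the heuristic "goodness" metric at a leaf
    scores = [minimax(apply_move(state, m), depth - 1, not maximizing,
                      moves, apply_move, evaluate)
              for m in legal]
    return max(scores) if maximizing else min(scores)

def best_move(state, depth, moves, apply_move, evaluate):
    # The "choice" is just an argmax over brute-forced branch scores.
    return max(moves(state),
               key=lambda m: minimax(apply_move(state, m), depth - 1, False,
                                     moves, apply_move, evaluate))
```

Whether an argmax over a scoring function deserves to be called a “choice” is precisely the question raised above.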

I will be a lot more impressed when an AI beats a 1-dan professional at Go.


Anachronda March 4, 2008 at 3:15 pm

“If you believe that AI can happen then you have to believe in the I part first. If you think we are just bags of meat and chemical stimuli then what point is there in developing AI?”

That, to me, is the big problem with AI. If we know how it works, we won’t consider it to be intelligent.


Anachronda March 4, 2008 at 11:55 pm

I’m skeptical about the ability to ever upload our minds to computers for a different reason, though.

Moving software around from machine to machine only works because computers are mass-produced. People, on the other hand, are unique. I expect that if there were some way to take a checkpoint of my brain, you would not be able to find another brain that could execute it.

I do expect that it will eventually be possible to move large quantities of *data* in and out; if you Google “BrainPort” you’ll find some intriguing stuff that points that way. Ultimately, I expect this to be successful only when it’s possible to install implants in infants so that as their brains learn about the world around them, they will also learn how to manipulate the implant. For them, moving large chunks of data will be as natural as speaking is for us.

I don’t expect the technology to be cheap; however, I suspect that there will be a variety of implants with different features at a range of prices. At some point, a child’s future may be determined by how far into debt their parents were willing to go to have a data port installed immediately after birth…


Michael Brazier March 6, 2008 at 4:58 am

When it comes to the transhumanist hopes of uploading our minds into computers, I stand with Roger Penrose. If our minds are algorithmic, so that they could be accurately emulated on computers, the theorems of Turing and Gödel apply to our minds; and those theorems then imply that we cannot reach a complete understanding of ourselves. But we would need just such a complete understanding to create a means of translating our mental state to a form that runs on a computer. Therefore uploading ourselves is infeasible in principle. A mind superior to ours could emulate ours on different hardware, but we could not understand the emulator; we would be forced to trust the word of the one who made it that it really does emulate our minds.

Paradoxically, transhumanism’s goal can be reached only if transhumanism’s assumption of philosophical materialism is wrong, and mind is more than an epiphenomenon of matter …


Clayton Barnett March 13, 2008 at 1:20 pm

I question the motive behind singularity; I’d call it fear. When I was in my 20s, there was nothing I feared more than death. Now in my 40s, I find the idea of living forever to be horrible. While the science behind it all is quite interesting (and increasingly useful), those with a hard-on for “singularity” should take more walks.


arrogantb October 8, 2009 at 5:37 am

I’ve recently fumbled into the whole “singularity” discussion – and have found it fascinating and somewhat alarming.

I decided to google up “singularity skeptic” – to try to find refutations of the notion – and found this page.

Having a little trouble with some of the arguments presented here…

“AI is artificial by definition – a simulation of intelligence.”

I think this is some kind of semantics game. AI is real intelligence. AI is currently very good at some things like playing chess. You might say that’s not intelligence – but what about in 2025 when an AI handles 90% of the design of a new airplane on its own? Or in 2030 when it designs the first commercially viable fusion reactor? These are of course hypothetical – but saying it’s just “simulated” doesn’t mean it’s not intelligence.

“Do any AI algorithms make an attempt to include structures like an “arcuate fasciculus” or a “basal ganglia” into their model? Is there any understanding of the difference between gray and white matter?”

Yes – we don’t currently have a good enough understanding of the brain to try to simulate it at all levels (or at all, to any real extent). That doesn’t mean we won’t in the mid-term (or further) future.

“augment them somehow, but that requires the same knowledge of the brain as AI does, so the same skepticism applies. The other half is physical augmentation, but here we get to the question of energy source.”

Right – energy source is a question – along with a billion other things – but that doesn’t mean there aren’t answers. It also assumes the power requirements for anything useful would be high, which I don’t think is the case at all. A computer providing a few gigabytes of storage and a few MHz of CPU could vastly expand someone’s abilities – massive mathematical skill and a literal photographic memory – and use very little power.

“Then what do we become? A race of gods? or just a pile of nodes, acting out virtual fantasies until the heat death of the universe pulls the plug? That’s not post- or trans-humanism, its null-humanism.”

I actually share some of your concerns here. I’m not sure what the answer is – but I suspect we’ll be facing these questions in the future – not sure how soon – it may or may not be this century. Now seems to be a good time to start thinking about this though…


Zack M. Davis January 5, 2010 at 12:30 am

“I’d like to see someone articulate a case for Singularity that isn’t yet another fancy timeline of assertions about what year we will have reverse engineered the human brain or have VR sex or foglets or whatever.”

Cf. “Artificial Intelligence as a Positive and Negative Factor in Global Risk”.


PeteyH August 6, 2010 at 1:46 am

My question is: “how can we reverse engineer something we don’t understand?” I could ask you to coach an NFL game, but you would do a terrible job with no-to-limited knowledge of the intricacies of the sport. To solve the problem you would ask an NFL coach to teach you about the sport. Unfortunately for humans, we do not have a higher power to teach us. We have a student but no professor. Because of this, A.I. will fail just like society would fail without teachers. We are stuck with what we know. We have brain imaging technologies to aid us, but we face the prospect of reaching a point of incomprehensibility.

This is where futurists say biotechnology will come to the rescue. But as fledgling pointed out, we would have to understand how the brain works to be able to build something that could work with it. This is the hole in the arguments of futurists: they state that it will happen, but fail to explain how it will happen. To create biotechnology we would already have to have the assistance of it. This is the brick wall we run into.

As of now, neuroscientists are discussing the possibility of the existence of the soul. Some say that the chemicals in our brain could be cooperating with a supernatural force (what we identify as the mind); others say that what we identify as the soul is simply chemical and nothing more. With such inconclusiveness, how can we possibly hope to move forward any time soon? More importantly, how could we possibly prove either side’s argument? There are so many things that need to be worked out before we can hope to progress to transhumanism’s sunlit “utopia”. The rate of advancement of computational technology is rapid, but the advancement of our knowledge of what we hope to model with this technology is slower than ever. For example, many neuroscientists say that neuroscience is making “anti-progress” – the more we find out, the less we seem to know.

As stated in this article http://www.skeptic.com/the_magazine/featured_articles/v12n02_AI_gone_awry.html , we have no unifying theory of neuroscience. We don’t know what to build, much less how to build it. The article also states this: “The retina (of a single eye) contains about 120 million rods and 7 million cones. Even if each of those 127 million neurons were merely binary, like the beloved 8×8 input grid of the typical artificial neural network (that is, either responded or didn’t respond to light), the number of different possible combinations of input is a number greater than 1 followed by 38,230,809 zeroes. (The number of particles in the universe has been estimated to be about 1 followed by only 80 zeroes.) Testing an artificial neural network with input consisting of an 8×8 binary grid is, by comparison, a small job: such a grid can assume any of 18,446,744,073,709,551,616 configurations – orders of magnitude smaller, but still impossible.” This is a terrific example of the task we are facing and its intricacies.
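[Editor’s check: the figures quoted from the article can be verified with a few lines of arithmetic, using logarithms rather than constructing the 38-million-digit number itself.]

```python
# Verify the combinatorics quoted above.
import math

retina_cells = 127_000_000   # ~120M rods + ~7M cones, treated as binary
grid_cells = 8 * 8           # the 8x8 toy input grid

# The number of decimal digits of 2**n is floor(n * log10(2)) + 1.
retina_digits = math.floor(retina_cells * math.log10(2)) + 1
print(retina_digits)         # 38230810 digits, i.e. greater than
                             # "1 followed by 38,230,809 zeroes"

print(2 ** grid_cells)       # 18446744073709551616 possible 8x8 patterns
```

Both quoted figures check out: 2^127,000,000 has 38,230,810 decimal digits, and 2^64 is exactly 18,446,744,073,709,551,616.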


PeteyH September 2, 2010 at 11:04 am

I would also like to add that it pisses me off that some neuroscientists will dedicate their entire careers to researching how ONE protein in the brain works, while there are guys like Kurzweil who think they can rebuild the entire thing in no time. Anybody else notice that Kurzweil is the archetypal necrophobe? I read the article you linked to, and he takes care of his health fastidiously and obsessively. His claims seem to be made more in blind hope than in logical scientific reasoning.


Alex Tyuluman September 25, 2011 at 1:26 pm

I’d like to take that question! Because it has happened at least 2 times before.

At first, there were only atoms banging around. Not very efficient, but from all this banging around came a couple of little tidbits that could actually replicate themselves. Thus, from bare quantum physics arose the evolution of living species. Looking at quantum physics without any other data, it would be a non-trivial problem to figure out that evolution would arise from it. Anthropomorphised quantum physics would see the creation of life as a “technological singularity”.

Now that there were animals around, things could make copies of themselves, which they did for a very long time. However, after a while, a particular mammalian species was put through a series of trials during an ice age which required them to figure out how to modify their environment to survive the extreme conditions. Thus, intelligence was created: Homo sapiens sapiens, capable of influencing their environment instead of adapting to it – logical thought, manual dexterity. We can build things on purpose and change things about ourselves within a generation. Anthropomorphised evolution would see the creation of human intelligence as a “technological singularity” as well.

To me, the singularity is the same. We will create something that will grow beyond our understanding. I don’t think we can avoid it.

Because honestly, isn’t it a little anthropocentric to believe that humans are the end product of the universe, and that nothing more complex will arise?

I’m going to put a word out there, take it how you want.

Bootstrapping.


Chris January 22, 2012 at 11:14 am

Plenty of neuroscientists are working on AI, even if some of them don’t know it yet. They’re at places like IBM or the Blue Brain project.

