This is an astonishing statistic: YouTube users now upload one hour of video every second.

The video (and accompanying website) is actually rather ineffective at conveying why this number is so astounding. Here’s my take on it:

* Assume that the rate of video uploads stays constant from here on out (obviously conservative, since the rate is growing).

* The ratio of “YouTube time” to real time is then 1/3600 (there are 3600 seconds in an hour).

* So how long would it take to upload 2,012 years’ worth of video to YouTube?

Answer: 2012 / 3600 = 0.56 years = 6.7 months = 204 days

Let’s play with this further. Let’s assume civilization is 10,000 years old. It would take 10,000 / 3600 ≈ 2.8 years, or about 33 months, to document all of recorded human history on YouTube.
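A quick Python sketch of the two conversions above (assuming a 365.25-day year and the constant 1:3600 upload ratio):

```python
# At one hour of video uploaded per second, video accumulates
# 3,600 times faster than real time.

RATIO = 3600            # years of video uploaded per year of real time
DAYS_PER_YEAR = 365.25

def upload_time_years(video_years):
    """Real time, in years, needed to upload `video_years` of video."""
    return video_years / RATIO

# 2,012 years of video:
t = upload_time_years(2012)
print(f"{t:.2f} years = {t * 12:.1f} months = {t * DAYS_PER_YEAR:.0f} days")
# -> 0.56 years = 6.7 months = 204 days

# 10,000 years of video (all of recorded history):
t = upload_time_years(10_000)
print(f"{t:.2f} years = {t * 12:.1f} months")
# -> 2.78 years = 33.3 months
```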

Let’s go further with this: Let’s assume that everyone has an average lifespan of 70 years (note: not life expectancy! human lifespan has been roughly constant for millennia). Let’s also assume that people sleep for roughly one-third of their lives, and that of the remaining two-thirds, only half is “worth documenting”. That’s 70 × 2/3 × 1/2 ≈ 23.3 years per person, and (23.3 / 3600) years ≈ 57 hours of data per human being uploaded to YouTube to fully document an average life in extreme detail.

Obviously that number will shrink as the rate of upload increases. Right now it takes YouTube about 57 hours to upload the equivalent of a single human lifespan; eventually it will be down to 1 hour. And from there, it will shrink to minutes and even seconds.

If YouTube ever hits, say, the 1 sec = 1 year mark, then the documentable lifespans of all of the 7 billion people alive as of Jan 1st 2012 would require about 5,200 years of data upload (7 billion × 23.3 years ≈ 1.6 × 10^11 seconds at that rate). That is still an astonishing compression, and I assume YT will get to the 1 sec = 1 year mark in less than ten years, especially if data storage continues to follow its own cost curve (we are at 10c per gigabyte for data stored on Amazon’s cloud now).
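The lifespan arithmetic, worked in the same sketch style from the stated assumptions (70-year lifespan, one-third of it asleep, half of the waking remainder worth documenting, 365.25-day years):

```python
# Documentable fraction of a life: awake 2/3 of the time,
# half of that worth recording.
LIFESPAN_YEARS = 70
documentable_years = LIFESPAN_YEARS * (2 / 3) * (1 / 2)   # ~23.3 years

# At the current 1:3600 ratio, real upload time per lifetime:
HOURS_PER_YEAR = 365.25 * 24
hours_per_life = documentable_years / 3600 * HOURS_PER_YEAR
print(f"{hours_per_life:.1f} hours per lifetime")          # ~57 hours

# At a future 1 sec = 1 year ratio, archiving all 7 billion people:
SECONDS_PER_YEAR = 365.25 * 24 * 3600
total_video_years = 7e9 * documentable_years
upload_years = total_video_years / SECONDS_PER_YEAR   # 1 yr of video/sec
print(f"{upload_years:,.0f} years to archive everyone")    # ~5,200 years
```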

Another way to think of this is: in 50 years, YouTube will have collected as many hours of video as have passed in human history since the Industrial Revolution. (I’m not going to run the numbers, but that’s my gut feel of the data.) These are 1:1 hours, after all – just because one hour of video is uploaded every second doesn’t mean that the video took only one second to produce; someone, somewhere had to actually record that hour of video in real time.

Think about how much data is in video. Imagine if you could search a video for images, for faces, for sounds, for music, for locations, for weather, the way we search books for text today. And then consider how much of that data is just sitting there in YT’s and Google’s cloud.

## remembering memory

Nicholas Carr (not to be confused with Paul Carr) has a tremendous essay that follows the general theme of his writing: skepticism of Google and the modern information era. Just a teaser:

Our embrace of the idea that computer databases provide an effective and even superior substitute for personal memory is not particularly surprising. It culminates a century-long shift in the popular view of the mind. As the machines we use to store data have become more voluminous, flexible, and responsive, we’ve grown accustomed to the blurring of artificial and biological memory. But it’s an extraordinary development nonetheless. The notion that memory can be “outsourced,” as Brooks puts it, would have been unthinkable at any earlier moment in our history. For the Ancient Greeks, memory was a goddess: Mnemosyne, mother of the Muses. To Augustine, it was “a vast and infinite profundity,” a reflection of the power of God in man. The classical view remained the common view through the Middle Ages, the Renaissance, and the Enlightenment—up to, in fact, the close of the nineteenth century. When, in an 1892 lecture before a group of teachers, William James declared that “the art of remembering is the art of thinking,” he was stating the obvious. Now, his words seem old-fashioned. Not only has memory lost its divinity; it’s well on its way to losing its humanness. Mnemosyne has become a machine.

The shift in our view of memory is yet another manifestation of our acceptance of the metaphor that portrays the brain as a computer.

It’s entitled “killing Mnemosyne”. I reject that metaphor, and this ties into my own skepticism about the Singularity as well.

UPDATE – Mark comments, and discusses the relevance to Exformation. Now there’s a Carrian concept! I also agree that our blogs are probably our modern-day “commonplace books”, but I am tempted to try to actually keep one on paper. My problem is that my handwriting speed is not fast enough to record my thoughts, and the result is usually illegible. So the blog is probably the best outlet. This is kind of ironic.

## for great justice

Steve Gillmor celebrates Independence Day by heralding the arrival of the Enterprise iPhone (or ePhone) by Apple. I guess I was wrong – the Singularity and Transhumanism really are here after all.

Via Nick Carr: Douglas Hofstadter recently had a critique of the concept of the Singularity that I found refreshing and utterly unsurprising.

Indeed, I am very glad that we still have a very very long ways to go in our quest for AI. I think of this seemingly “pessimistic” view of mine as being in fact a profound kind of optimism, whereas the seemingly “optimistic” visions of Ray Kurzweil and others strike me as actually being a deeply pessimistic view of the nature of the human mind.

The entire interview is an excellent read, and later on the interviewer points to some similarities in both Kurzweil’s and Hofstadter’s views of sentience as “software”. Hofstadter answers with a critique that I think echoes my earlier skepticism:

Well, the problem is that a soul by itself would go crazy; it has to live in a vastly complex world, and it has to cohabit that world with many other souls, commingling with them just as we do here on earth. To be sure, Kurzweil sees those things as no problem, either—we’ll have virtual worlds galore, “up there” in Cyberheaven, and of course there will be souls by the barrelful all running on the same hardware. And Kurzweil sees the new software souls as intermingling in all sorts of unanticipated and unimaginable ways.

Well, to me, this “glorious” new world would be the end of humanity as we know it. If such a vision comes to pass, it certainly would spell the end of human life.

not trans-humanism, but null-humanism, indeed.

Of course, for a more rigorous critique of the Singularity, the recent IEEE special issue had some excellent critical articles alongside the fluffy vision pieces. Highly recommended: “Waiting for the Rapture”, “The Consciousness Conundrum”, and best of all, “Singular Simplicity”. All of these pieces level specific, scientific, and physical arguments that undercut the grandiose hand-waving of the Singularitarians.

## singularity skeptic

I am not a Luddite by any means, but I just have to state my position plainly: I think all talk of a “Singularity” (of the Kurzweil variety) is nothing more than science fiction. I do not have an anti-Singularity manifesto, but rather just a skeptical reaction to most of the grandiose predictions by Singularitarians. I’d like to see someone articulate a case for the Singularity that isn’t yet another fancy timeline of assertions about what year we will have reverse-engineered the human brain or have VR sex or foglets or whatever. I am also leery of the abusive invocation of physics terms like “quantum loop gravity” and “energy states” as if they were magic totems (Heisenberg compensators, anyone?).

If I were to break down the concept of the Singularity into components, I’d say it relies on (a) genuine artificial intelligence and (b) transhumanism. The Singularity would be the supposed union of the two. But I guess it’s not much of a surprise that I am an AI skeptic also. AI is artificial by definition – a simulation of intelligence. AI is an algorithm, whereas true intelligence is something much less discrete. I tend towards a stochastic interpretation of genuine intelligence rather than a deterministic one, myself – akin to the Careenium model of Hofstadter, though even that was too easily discretized. Let me invoke an abused physics analogy here – I see artificial intelligence as a dalliance with the energy levels of an atom, whereas true intelligence is the complete (and for all purposes, infinite) energy state of a 1cm metal cube.

The proponents of AI argue that if we just add levels of complexity, eventually we will have something approximating the real thing. The approach is to add more neural net nodes, add more information inputs, and [something happens]. But my sense of the human brain (which is partly religious and partly derived from my career as an MRI physicist specializing in neuroimaging) is that the brain isn’t just a collection of N neurons, wired a certain way. There are layers, structures, and systems within, whose complexities multiply against each other.

Are there any neuroscientists working in AI? Do any AI algorithms make an attempt to include structures like an “arcuate fasciculus” or a “basal ganglia” into their model? Is there any understanding of the difference between gray and white matter? I don’t see how a big pile of nodes is going to have any more emergent structure than a big pile of neurons on the floor.

Then we come to transhumanism. Half of transhumanism is the argument that we will “upload” our brains or augment them somehow, but that requires the same knowledge of the brain as AI does, so the same skepticism applies. The other half is physical augmentation, but here we get to the question of energy source. I think Blade Runner did it right:

Tyrell: The light that burns twice as bright burns half as long. And you have burned so very very brightly, Roy.

Are we really going to cheat thermodynamics and get multipliers to both our physical bodies and our lifespans? Or does it seem more likely that one comes at the expense of the other? Again, probably no surprise here that I am a skeptic of the Strategies for Engineered Negligible Senescence (SENS) stuff promoted by Aubrey de Grey – the MIT Technology Review article about his work gave me no reason to reconsider my judgment that he’s guilty of exuberant extrapolation (much like Kurzweil). I do not dismiss the research, but I do dismiss the interpretation of its implications. And do they address the possibility that death itself is an evolutionary imperative?

But OK. Let’s postulate that death can simply be engineered away. That human brains can be modeled in the Cloud, and data can be copied back and forth from wetware to silicon. Then what do we become? A race of gods? Or just a pile of nodes, acting out virtual fantasies until the heat death of the universe pulls the plug? That’s not post- or trans-humanism, it’s null-humanism.

I’d rather have a future akin to Star Trek, or Foundation, or even Snow Crash – one full of space travel, star empires, super hackers, and nanotech. Not a future where we all devolve into quantum ghosts – or worse, are no better than the humans trapped in the Matrix, living out simulated lives for eternity.