Twitter doesn’t innovate

Twitter may be at the peak of its innovation. They haven’t really made any substantial improvements in user interface or functionality in a while; hiding replies on user profiles is basically a minor hack. What would be far more useful is marking a user (or hashtag) as “read,” temporarily hiding their tweets from your stream (analogous to marking emails as read in your inbox).

But fundamentally, there really isn’t much more Twitter CAN innovate on. It’s a micro-messaging service. They missed the boat on becoming an identity service; Facebook Connect beat them to it. They seem to have de-emphasized SMS as an interface (at least for the US market) – imagine if they had aimed at taking on BBM and WhatsApp. And they still insist on the 140-character limit, though they could easily allow for “read more” overflow text (the way blog software does it) or take other simple measures to ease the crunch, such as not counting the http:// in links toward the limit, or even treating links as metadata the way photos and videos are (the link would still be displayed in-line). You can’t even do simple markup like bold or italic. If links were metadata like photos, you could even have a “Recent links” sidebar the way your profile has one for photos – but nope.

Twitter has no built-in emoticons, no elegant way to show a conversation between more than two people, and the new “view conversation” link doesn’t show you the exchange in its original chronological order. Twitter is deprecating RSS, which means most bloggers use a plugin to embed tweetstreams; lists are usually not supported.

And of course, as Dave Winer has been saying all along, Twitter isn’t open. You can’t export your data, and you can’t really even access older tweets (and did I mention that search is broken?).

It’s also worth pointing out that Twitter’s network-effect advantage isn’t permanent. Look at what happened to MySpace and FriendFeed. Users will leave if they have a better option; you just need to woo the early adopters like Scoble and make a big splash at SXSW. Plus, media attention will be lavished upon any company that has the balls to actually say, “we are out to eat Twitter’s lunch.”

Twitter doesn’t need to be beaten; it just needs to be threatened so it gets out of its comfort zone. Right now it’s chasing after NASCAR and trying to give users tailored content; that’s a fool’s game. Users will never warm to an algorithm’s suggestions – just ask Netflix (or better yet, ask a user).

And Twitter isn’t thinking big at all. What could they achieve if they wanted to? How about aiming for the moon – like becoming a de facto replacement for email?

The end of Facebook? not if it goes Prime

Is Facebook toast? I’m not asking because of its IPO, which despite whining from the tech pundits was perfectly calibrated. I’m asking because its basic business model is still such a clunker:

Facebook currently derives 82 percent of its revenue from advertising. Most of that is the desultory ticky-tacky kind that litters the right side of people’s Facebook profiles. Some is the kind of sponsorship that promises users further social relationships with companies: a kind of marketing that General Motors just announced it would no longer buy.

Facebook’s answer to its critics is: pay no attention to the carping. Sure, grunt-like advertising produces the overwhelming portion of our $4 billion in revenues; and, yes, on a per-user basis, these revenues are in pretty constant decline, but this stuff is really not what we have in mind. Just wait.

It’s quite a juxtaposition of realities. On the one hand, Facebook is mired in the same relentless downward pressure of falling per-user revenues as the rest of Web-based media. The company makes a pitiful and shrinking $5 per customer per year, which puts it somewhat ahead of the Huffington Post and somewhat behind the New York Times’ digital business. (Here’s the heartbreaking truth about the difference between new media and old: even in the New York Times’ declining traditional business, a subscriber is still worth more than $1,000 a year.) Facebook’s business only grows on the unsustainable basis that it can add new customers at a faster rate than the value of individual customers declines. It is peddling as fast as it can. And the present scenario gets much worse as its users increasingly interact with the social service on mobile devices, because it is vastly harder, on a small screen, to sell ads and profitably monetize users.

The basic problem is that Facebook’s major innovation is to facilitate social interactions, but unless you charge people a penny per like, you can’t actually monetize those interactions (and any attempt to do so would act as a brake on them).

But there is an obvious way to monetize Facebook that I am surprised few are talking about. Consider the numbers: Facebook is valued at $100 billion, has about a billion users, so each user is “worth” $100. But Facebook only makes $5/user annually in revenue from ads. So, why not offer users a paid option? If Facebook followed Amazon’s example and offered a “Prime” service, they could charge users $75/year (or $8/month ongoing). In return, that user could get a pile of perks:

  • no ads anywhere, of course
  • free digital gifts and an expanded menu of “pokes” (bring back the sheep!)
  • a “premium” version of the Facebook app with built-in Skype functionality
  • more search filters and automated searches for friends (akin to LinkedIn’s subscriptions)
  • the ability to track who views your profile
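
Here’s a quick back-of-the-envelope sketch in Python of the arithmetic above. All of the figures are the round numbers from this post, not Facebook’s actual financials, and the “subscribers needed” comparison is just my own illustration:

```python
# Rough, hypothetical numbers from the post above -- not Facebook's actual financials.
users = 1_000_000_000           # ~1 billion users
valuation = 100_000_000_000     # ~$100 billion
ad_revenue_per_user = 5         # ~$5 per user per year from ads
prime_price = 75                # proposed $75/year "Facebook Prime" subscription

print(f"Implied 'worth' per user: ${valuation / users:,.0f}")

# How many Prime subscribers would it take to match today's entire ad business?
total_ad_revenue = users * ad_revenue_per_user
subscribers_needed = total_ad_revenue / prime_price
print(f"Subscribers needed to equal current ad revenue: "
      f"{subscribers_needed / 1e6:.0f} million ({subscribers_needed / users:.1%} of users)")
```

In other words, if even roughly 7% of users signed up, the subscription would rival the entire ad business – which is exactly the point.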

This is just a basic and obvious list, and I’m sure there are other perks that could be offered. For example, given that Craigslist hampers innovation in the classifieds space, Facebook can and should leverage the social graph and offer its own classifieds (as well as compete with Angie’s List, or buy them outright). Facebook Prime users could be rewarded with better access or free listings.

And then there’s the coupon space – Facebook has all the data it needs to outdo Groupon or LivingSocial. In fact, if Facebook acquired the latter it would have a head start, and again Facebook Prime users would benefit with specialer-than-special offers or early access to deals.

People have already called Zuckerberg the next Bezos, but unlike Amazon with its many revenue streams, Facebook remains stubbornly focused on one thing. It’s time to diversify and leverage that social data in ways people will actually use. And let the users pay for it!

mind your b’s and K’s: the arcane art of measuring download speeds

I’ve just upgraded to the 30 Mbps internet plan at Charter cable (and added HBO so we can watch Game of Thrones), so here are the obligatory speedtest results.

It occurs to me that the units for download speeds can be incredibly confusing. Charter advertises the plan’s download speed in Mbps. So the question naturally arises: how long should it take to download something 18.3 GB in size? (And a related question: if I’m downloading something at 300 KB/s, am I getting my maximum download speed?)

Here, 1 GB refers to a gigabyte (10^9 bytes), since we are talking about file sizes and network speeds. (If we were talking about RAM, “GB” would conventionally mean 2^30 bytes – properly a gibibyte.) However, 1 Mb is a megabit (10^6 bits), not a megabyte (10^6 bytes), because of the lower-case b. So 1 Mb is actually 1/8 MB (since there are 8 bits per byte).

So 18.3 GB downloading at 30 Mbps should require:

(size) / (speed) = (time)

(18.3 × 10^9 bytes) / ((30 × 10^6 bits/sec) × (1 byte / 8 bits)) = (18.3 × 10^9 × 8) / (30 × 10^6) seconds ≈ 4,880 seconds ≈ 81.3 minutes
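
The same calculation in a few lines of Python – a minimal sketch using the figures above:

```python
# Expected download time for a file, given an advertised line speed in megabits/s.
file_size_bytes = 18.3e9    # 18.3 GB (decimal gigabytes)
line_speed_bps = 30e6       # 30 Mbps = 30 million bits per second

seconds = file_size_bytes * 8 / line_speed_bps   # 8 bits per byte
print(f"{seconds:.0f} seconds = {seconds / 60:.1f} minutes")
# -> 4880 seconds = 81.3 minutes
```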

Wolfram Alpha gets the answer right, too (and I like the natural language query – very intuitive).

Now, suppose I’m rocking 300 KB/s according to a certain beta software download client. How am I really doing? The capital B means kilobytes, so that’s actually 300 × 10^3 bytes/s × 8 bits/byte = 2.4 × 10^6 bits/s = 2.4 Mbps. Wait, what??
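
Or, as a sketch going the other way (same assumed numbers):

```python
# Convert a download client's KB/s readout into megabits per second.
reported_kBps = 300                      # 300 kilobytes per second
mbps = reported_kBps * 1e3 * 8 / 1e6     # bytes -> bits -> megabits
print(f"{mbps:.1f} Mbps, i.e. {mbps / 30:.0%} of a 30 Mbps plan")
# -> 2.4 Mbps, i.e. 8% of a 30 Mbps plan
```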

I’m only getting less than a tenth of my advertised download speed for this??

This is why it’s important to do the math. Of course, the download speed may be limited by a lot of other factors, most notably how fast the server at the other end can deliver the data. I clocked almost 40 Mbps doing a speedtest against some local, low-ping server somewhere, but this big file is probably coming from a lot further away, and that server has a lot more to do than humor my ping requests. I guess I should be satisfied.

(But, I’m not. grrr….)

renting bytes: the case for digital non-ownership

Cutting off the DRM nose to spite the reader's face?
In the course of my search for free ebook content, I found an advocacy group called Librarians Against DRM. I found its existence puzzling, because without DRM, libraries’ free ebook-lending programs wouldn’t exist. Note, in fact, that Macmillan – which is among the publishers that refuse to give ebooks to libraries – is also one of those moving to DRM-free ebooks. These facts have a powerful relationship to each other that I think is being ignored by most DRM activists. The knee-jerk reaction to DRM (“it’s always bad! cheer when it’s gone!”) misses the point that its absence might have negative consequences of its own.

It’s an article of faith that DRM is bad and that when you buy something digital, you should own it just as you do in the physical realm. Ebooks are probably the most vibrant front in this war between Big Content and the End User. And I have to admit that I do prefer it this way, in an entitled sort of way. The idea that I’d have to pay $9.99 every time I wanted to read my digital copy of Reamde is of course utterly absurd and offensive – I should be able to read it whenever I want, precisely because I paid so much for it. Ditto the MP3s and videos I buy on Amazon or iTunes.

And yet there’s an assumption here that we actually reuse our content. Obviously with music we do – and that’s facilitated by the low price of the media; MP3 tracks don’t cost $9.99. But what about books and video? How often do we really rewatch or reread? The answer is, it depends. I don’t think I’m ever going to re-read 90% of the books I physically own, and that percentage will only increase for digital copies. For video, especially long series like Game of Thrones or Battlestar Galactica, the main experience is watching it without foreknowledge, and the rewatch value is low (though there are exceptions, such as Farscape or Firefly). Movies have the lowest rewatch value of all, apart from a handful of favorites (Star Wars, LOTR, The Princess Bride, etc.).

It’s worth noting that rewatch potential is inversely related to price. But as you move up the chain from music to ebooks to video to movies, the production cost of the content also goes up, which is why the cost of owning that media increases as well (obviously, not all of these forms of digital content are truly DRM-free such that we fully own them outright).

So, let’s factor in the rewatch potential, and ask ourselves, is ownership really useful to us? Are we getting our money’s worth? I don’t really think so. As a consumer, what if I had more choice, and paid accordingly?

The scheme I propose is just a starting point for a thought exercise, of course, but imagine if (using ebooks as an example) we paid significantly less for the “first read” and then paid a small amount for each reread, up to some threshold at which the content is “unlocked.” So instead of paying $9.99, what if I paid only $1.99, which earns me one complete read-through? The second read-through would cost $1.07, and subsequent read-throughs would be $0.99 each, until I’ve read the book nine times – at which point the book unlocks for unlimited further reading, and I’ve paid $9.99 total.
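
A little sketch of that hypothetical price ladder – the dollar figures are just the ones I made up above:

```python
# Hypothetical pay-per-read ladder: $1.99 for the first read, $1.07 for the
# second, $0.99 thereafter, unlocking once cumulative spend reaches $9.99.
UNLOCK_TOTAL = 9.99
FIRST_PRICES = [1.99, 1.07]   # later read-throughs cost $0.99 each

def price_of_read(n):
    """Price of the n-th read-through (1-indexed)."""
    return FIRST_PRICES[n - 1] if n <= len(FIRST_PRICES) else 0.99

paid = 0.0
for n in range(1, 12):
    if round(paid, 2) >= UNLOCK_TOTAL:
        print(f"read {n:2d}: unlocked  (total paid ${paid:.2f})")
        continue
    paid += price_of_read(n)
    print(f"read {n:2d}: ${price_of_read(n):.2f}   (cumulative ${paid:.2f})")
```

Nine paid read-throughs, $9.99 total, and everything after that is free – the heavy rereader ends up no worse off than today’s buyer, while the one-time reader pays a fifth as much.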

The advantage of this is that the barrier to purchasing a book is much lower, so publishers would see more sales – not of the individual title, perhaps, but of more titles overall. It wouldn’t be hard to do some numerical estimates based on reasonable assumptions: suppose, for example, that the number of people willing to buy a book at a given price follows a Zipf-like distribution.
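
Here’s a toy version of that estimate. Everything in it – the Zipf-like exponent, the scale factor, the assumption that the average buyer pays for about one and a half read-throughs – is made up for illustration, not data:

```python
# Toy demand model: the number of buyers at price p falls off like a power law,
# buyers(p) = K / p**S (a Zipf-like assumption, not measured data).
K = 100_000    # arbitrary scale factor
S = 1.3        # assumed price-sensitivity exponent

def buyers(price):
    return K / price ** S

# Flat $9.99 ownership vs. a $1.99 entry price plus paid rereads,
# assuming the average buyer pays for ~1.5 read-throughs.
flat_revenue = 9.99 * buyers(9.99)
ladder_revenue = (1.99 + 0.5 * 1.07) * buyers(1.99)

print(f"flat $9.99 model:     ${flat_revenue:10,.0f}")
print(f"$1.99 + reread model: ${ladder_revenue:10,.0f}")
```

Under these (entirely invented) assumptions the lower entry price roughly doubles revenue; the real question is what the demand curve actually looks like.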

The bottom line is that when it comes to digital goods, maybe we as consumers should stop focusing on theoretical rights and instead focus on actual expenses and cost-benefit, the same way we do for toilet paper and cereal. If we think of digital content as consumables rather than durable goods, that would be more in line with how we actually use it.

the singular implication of uploading one hour every second to @youtube …

This is an astonishing statistic: YouTube users now upload one hour of video every second.

The video (and accompanying website) is actually rather ineffective at really conveying why this number is so astounding. Here’s my take on it:

* assume that the rate of video uploads is constant from here on out. (obviously over-conservative)

* the ratio of real time to “YouTube time” is 1/3600 – there are 3,600 seconds in an hour, so an hour of video arrives for every second of real time

* so how long would it take to upload 2,012 years’ worth of video to YouTube?

Answer: 2012 / 3600 = 0.56 years = 6.7 months = 204 days

Let’s play with this further. Let’s assume civilization is 10,000 years old. It would take 10,000 / 3,600 = 2.8 years ≈ 33 months to document all of recorded human history on YouTube.

Let’s go further with this: Let’s assume that everyone has an average lifespan of 70 years (note: not life expectancy! human lifespan has been constant for millennia). Let’s also assume that people sleep for roughly one-third of their lives, and that of the remaining two-thirds, only a quarter is “worth documenting.” That’s (70 / 6) / 3,600 years ≈ 28.4 hours of data per human being uploaded to YouTube to fully document an average life in extreme detail.
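
For the record, here are those calculations in one place – a sketch under the assumptions stated above (the 3,600:1 ratio, the 70-year lifespan, and the one-sixth “worth documenting” fraction); none of this comes from YouTube itself:

```python
HOURS_PER_YEAR = 365.25 * 24      # ~8,766 hours in a year
RATIO = 3600                      # 3,600 hours of video arrive per real-time hour

def real_time_hours(video_years):
    """Real time (in hours) needed to ingest `video_years` worth of footage."""
    return video_years * HOURS_PER_YEAR / RATIO

print(f"2,012 years of video:  {real_time_hours(2012) / 24:.0f} days")
print(f"10,000 years of video: {real_time_hours(10_000) / HOURS_PER_YEAR * 12:.0f} months")

# One "documentable" life: 70 years, a third of it asleep,
# and only a quarter of the waking time worth recording.
documentable_years = 70 * (2 / 3) * (1 / 4)
print(f"one documentable life: {real_time_hours(documentable_years):.1f} hours")
```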

Obviously that number will shrink as the rate of upload increases. Right now it takes YouTube about 28 hours to ingest the equivalent of a single human lifespan; eventually it will be down to one hour, and from there it will shrink to minutes and even seconds.

If YouTube ever hits, say, the 1 sec = 1 year mark, documenting the lives of all 7 billion people alive as of Jan 1st, 2012 would require roughly 2,600 years of upload – and at a 1 sec = 1 lifespan rate, only about 37 years. No, I am not using the word “only” in a sarcastic sense… I assume YT will get to the 1 sec = 1 year mark in less than ten years, especially if data storage continues to follow its own cost curve (we are at 10c per gigabyte for data stored on Amazon’s cloud now).
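
And the planet-scale projection, with the same caveats – the 7 billion head count and the per-person documentable fraction are carried over from above, and the “1 sec = 1 lifespan” rate is just an extrapolation for comparison:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600
PEOPLE = 7e9
DOCUMENTABLE_YEARS = 70 * (2 / 3) * (1 / 4)   # ~11.7 years per person, as above

video_years = PEOPLE * DOCUMENTABLE_YEARS     # total footage to ingest, in years

# At "1 second of upload = 1 year of footage":
print(f"1 sec = 1 year:     {video_years / SECONDS_PER_YEAR:,.0f} years of uploading")
# At "1 second of upload = 1 lifespan (70 years) of footage":
print(f"1 sec = 1 lifespan: {video_years / 70 / SECONDS_PER_YEAR:,.0f} years of uploading")
```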

Another way to think of this: in 50 years, YouTube will have collected as many hours of video as have passed in human history since the Industrial Revolution. (I’m not going to run the numbers, but that’s my gut feel.) And these are 1:1 hours – just because one hour of video is uploaded every second doesn’t mean the video only took one second to produce; someone, somewhere had to actually record that hour of video in real time.

Think about how much data is in video. Imagine if you could search a video for images, for faces, for sounds, for music, for locations, for weather, the way we search books for text today. And then consider how much of that data is just sitting there in YT’s and Google’s cloud.

it’s SOPA day on the Internet

Google's doodle for SOPA Day
Anyone else see the irony in this? Google.com, Wikipedia.org, WordPress.org, and hundreds of other websites large and small are going all-out against SOPA. Google has its logo censored by a black bar, and Wikipedia is actually offline. Lots of other sites and blogs are following their example. The idea is to symbolically register dissent against censorship by using self-censorship.

When you click the link from Google’s homepage, you are taken to a cool infographic which states:

Fighting online piracy is important. The most effective way to shut down pirate websites is through targeted legislation that cuts off their funding. There’s no need to make American social networks, blogs and search engines censor the Internet or undermine the existing laws that have enabled the Web to thrive, creating millions of U.S. jobs.

I think I disagree with all three statements. First, fighting online piracy is NOT important. Piracy will always exist and will always stay a step ahead of measures to prevent it. In fact, those measures ultimately end up facilitating casual piracy – look at Napster, deCSS, and now BitTorrent. All were solutions designed to evade such measures, and each ultimately made even more piracy possible.

Second, the LAST thing we need is “targeted legislation” that “shuts down funding” for websites of any type. Besides OBVIOUSLY being a First Amendment issue, such legislation would set a precedent far more damaging – and far more capable of leading to true censorship – than SOPA itself (which is targeted at foreign websites and DNS).

Finally, while I agree we don’t want to force American blogs or websites to censor themselves, the implication is that SOPA would do this, which it does not do. SOPA is explicitly targeted at foreign websites. US-based websites (and this includes all .org and .net domains as well) are not affected by SOPA at all.

(Read the actual SOPA bill here – PDF)

I’m a big supporter of network neutrality (unless the network operators are willing to forgo their government subsidies), but what we have here is basically SOPA Theater (analogous to the Security Theater we have for airline travel).

Looks like the DNS provisions in SOPA are getting pulled, and the House is delaying action on the bill until February, so it’s gratifying to see that the activism had an effect. However, that activism would have been put to better use educating people about why DRM is harmful, and why piracy should be fought not with law but with smarter, pro-consumer marketing by content owners (lower prices, more options for digital distribution, removal of DRM, fair use, and ubiquitous time-shifting). Look at the ridiculous limitations on Hulu Plus – even if you’re a paid subscriber, some shows won’t post episodes until a week after they air, old episodes are not always available, and some episodes can only be watched on the computer and are restricted from mobile devices. These are utterly arbitrary limitations on watching content that just drive people into the pirates’ arms.

All that priceless real estate on Google and Wikipedia could have been used to educate millions of people about these issues, and instead it is mostly wasted on a pointless battle that’s already won. The real battle is being lost.

Addendum: Color me skeptical of Google’s commitment to free speech, by the way. Here’s a question for them: if SOPA were to pass, would they comply with takedown requests that don’t meet the safe-harbor provisions of the DMCA? (The argument is that SOPA would lower the bar for claiming infringement, but the bill is vague on that point.) Would Google fight SOPA and be willing to go to court if their users were unfairly targeted – say, for using a snippet of copyrighted music in a personal YouTube video (the stark scenario that Tom’s Hardware painted last week)?

UPDATE: vigorous discussion at Shamus’ place, but as one commenter puts it, full of “fashionable anti-Americanism” and chest-thumping about “freedom”.

Why SOPA might kill commenting, and is that such a bad thing?

UPDATE: I think the anti-SOPA blackouts at Google, Wikipedia, etc. are a gigantic wasted opportunity to educate people about DRM. And I’m skeptical of Google putting its money where its mouth is.

I get it: the Stop Online Piracy Act (SOPA) is bad because it doesn’t actually do anything to stop piracy. There are various screeds online, from left and right alike. It’s basically an article of faith that SOPA will “kill the internet,” but I’m not entirely convinced. The best and most convincing argument against SOPA comes not from the political sites but from the techsphere, specifically Tom’s Hardware:

As an example, imagine a user posts a video clip to the Tom’s Community of a step-by-step guide on how to set up water cooling on an overclocked i7 CPU. Playing in the background behind the voiceover is “Derezzed” by Daft Punk. The studio representing Daft Punk could issue a complaint, without being required to notify us or request a take-down. Tom’s Hardware would be liable and prosecuted solely on a good faith assertion of the copyright owner, without notification, with the site operators subject to possible jail time for not preventing the video from being posted. In short order, the http://www.tomshardware.com/ domain in the United States would no longer resolve to our servers and visitors attempting to come to Tom’s Hardware would be redirected to a “This site under review for piracy/copyright violations” page.

To conform to these new restrictions would mean that Tom’s Hardware would have to switch to a review/approval process for any and all new posts to our forums and articles. Our community team would have to approve every single news comment, every new thread, and every new response before it went live and filter them for potentially infringing material. Even so, we would still possibly be under threat from violations not caught – a user posting a paragraph from “Unix for Dummies” as an example or a snippet of software news from another website in excess of a certain summary threshold. That’s just here on Tom’s. The effect on sites like YouTube, Google, Facebook, Twitter, Reddit and the rest of the internet would be devastating, and progress and innovation would grind to a halt under the cumbersome new restrictions.

I’m not sure the scenario above would be as cut-and-dried as Tom’s states. In that example, the offending post would likely be flagged by the IP owner and that information passed to Tom’s. If Tom’s wants to shut down its whole site, that’s its choice, but a targeted hiding of the offending post would probably suffice instead. We are living quite comfortably in an era where content violations are surgically removed from YouTube all the time, and yet the Internet hasn’t collapsed.

But the broader issue as I see it is simply: are websites liable for their users? Which might be restated more broadly as: is there a right to comment? I think the answer to the former is yes, and to the latter, no.

Parislemon has already closed his comments, Dave Winer uses Disqus, and Ars Technica’s top user forums are only available to paid subscribers. These are all different mechanisms for signal-to-noise filtering. Killing off user content is only necessary when the userbase is essentially random, uncontrolled, and hostile (the default state of most user spaces towards their hosts). But SOPA would kill the anonymous, seething mass of commentary and force everyone into more regulated userbase management. Why is that bad?

Arguably, increased liability for users’ content might even lead to a rebirth of blogging – after all, if you have something to say, better to say it in a space you control rather than someone else’s. The first company to offer blog hosting with the services and security of wordpress.com while letting the user retain the complete control of a wordpress.org install is going to cause a new revolution. Blork, maybe?

Related: Dave Winer says SOPA will lead to a Disneyfied web. We just got back from Disney World, and it’s called the Happiest Place on Earth for a reason – it’s tightly scripted, carefully managed, and meticulously designed to be that way (not unlike using Apple ecosystem products, but I digress…). It’s only we power users who are ever really unhappy – the vast bulk of the userbase will stand in line for 100 minutes to ride Peter Pan, or accept limitations on bandwidth and copyright takedowns, as long as Hulu gets them their weekly fix of Gossip Girl.

In fact, in the longer term, having our capitalist overlords clamp down on the web might actually force some innovation beyond this aging platform. Leave the Disney-web to the world and let’s have new parallel networks tailored for specific niches, built on new technologies and standards. Why do we force video to travel over HTTP, for example? Or file sharing? Shadow internets already exist – the mobile web, Facebook, the torrent community. Having one network to rule them all is a gigantic kludge.

Google+ is closed, Facebook and Twitter are open

There’s a simple reason that Google+ cannot be a Facebook killer – it adds to social noise and creates a walled garden from which data cannot be exported and into which it cannot be imported. There are no RSS feeds generated by Google+ that you can pipe into Twitter using Twitterfeed, nor can you import tweets into Google+ the way you can with Facebook. There is no Google+ equivalent of the Facebook API that allows data to be imported into the service from other services.

This is a huge, critical flaw in Google+ that guarantees it won’t be a Facebook killer.

A better use of Google+ would be to unify Gmail and Circles so that you can create whitelists for email with a single click. No email service at present lets a user create a whitelist easily – you have to tediously set up manual filters instead, and even then there’s no simple way to say “send all emails (except some) to Trash.” Simple whitelist functionality is the real way to declare email independence. I fully support what MG Siegler is trying to achieve here, but until we can say “receive mail ONLY from X, Y, Z” we will never be free of the tyranny of the inbox.

Maybe Google+ is the first step. But we need to stop treating it like Facebook and start thinking about how it can be used to improve the original social network – email. If Circles can be used to define whitelists, that’s real value.

Related: a little slideshare I put together a few years back about managing social noise. Still relevant, if a little outdated.

Apple Cloud of FUD: it just works

What the hell is Techcrunch smoking?

ooooooh! The Cloud! The Truth is in the Cloud!

With iCloud, Apple is transforming the cloud from an almost tangible place that you visit to find your stuff, to a place that only exists in the background. It’s never seen. You never interact with it, your apps do — and you never realize it. It’s magic.

Compare this to Google, the company perhaps most associated with the cloud. Google’s approach has been to make the cloud more accessible to existing PC users. They’re doing this by extending familiar concepts. Google Docs is Microsoft Office, but in the cloud. Your main point of interaction is a file system, but in the cloud. Gmail is Outlook, but in the cloud. Etc.

Meanwhile, another company now largely associated with the cloud, Amazon, has essentially turned it into one giant server/hard drive that anyone can use for a fee. But it takes developers to build something on top of it to give users a product to use. Some are great. But many again just extend the idea of the cloud as a remote hard drive.

While the fundamentals are the same, Apple’s approach to the concept of the cloud is the opposite of their competitors. Apple’s belief is clearly that users will not and should not care how the cloud actually works. When Jobs gave a brief glimpse of their new North Carolina datacenter that is the centerpiece of iCloud, he only noted that it was full of “stuff” — “expensive stuff,” he quipped.

How on earth can Apple’s approach to the cloud be the same and also the opposite? There’s a cloud alright, and it’s being smoked big time.

Someone explain to me how Amazon or Google force the user to care how the cloud actually works. When I read books in the Kindle app, “it just works” on iPad, BlackBerry, or iPod – I put one device down, pick up the other, and start reading right where I left off. When I open a document in Google Docs in a web browser at work, save it, then go home and open the same document from my PC, “it just works.”

OK, I think Gruber had a better insight in pointing out that for Google, the Cloud is accessed through a browser window, whereas for Apple, it’s accessed through your entire screen. But then again, have we forgotten about AWS? Or App Engine?

Whatever. Get ready for endless droning by the MG Sieglers of the world about how the Truth is in the Cloud. Ooooooh!