my Netgear WNDR3700 (v1) router’s 5 GHz network just went missing – bad antenna?

As this is a geekblog, I might as well document my woes here in public. Here is the support ticket I filed with Netgear just now.

Hello,

I purchased a WNDR3700 on 1/11/2011 – serial number 21840B550A390. I have registered the router on my.netgear.com.

This week the 5 GHz wireless network stopped working entirely. I have updated to the latest firmware, and have confirmed the following:

– the 5 GHz blue light is on
– the settings on the configuration dashboard (192.168.1.1) indicate the 5 GHz network is active
– SSID for 5 GHz is set to “broadcast SSID name: on”
– the 2.4 GHz network works fine; connected computers can access the internet
– computers attached to the router via ethernet can also access the internet normally

However, no 5 GHz-capable device is able to detect the 5 GHz SSID. The scanning software inSSIDer does not detect any 5 GHz network being broadcast either.

Logically, perhaps the antenna or antenna amplifier has burned out; I can think of no software explanation for why 5 GHz is missing. The router itself is convinced that 5 GHz is working, but it isn’t. That suggests a hardware problem to me.

The router is only 2 1/2 years old and my previous Netgear routers are still going strong at my relatives’ homes after 5-6 years so this is very surprising. I am hoping Netgear support will not disappoint me.

It looks like other users have reported similar issues, with the 2.4 GHz network also mysteriously vanishing, which is why I think my diagnosis of a bad antenna is correct. To be honest I was never enamoured of the speed of this router to begin with, as my initial tests showed.

ASUS RT-N66U router

I am skeptical that Netgear will be willing to replace the unit but if they make some kind of gesture that will go a long way towards persuading me to buy a Netgear replacement. I’m not going to bother with a draft 11ac router, all I need is a solid 11abgn machine with some MIMO and I’ll be happy. Unless they make me a good deal, I am very tempted to ditch Netgear. For example, that ASUS RT-N66U “Dark Knight” got a nice review. External antennas, too!

a semi-skeptical view of Google Glass

Dave Winer takes a semi-luddite view about Google Glass (which he refers to as Google glasses, minus branding and capital G). He writes,

I think they will make an excellent display device for the obvious reason that they’re mounted in front of your eyes, the organ we use for vision. The idea of moving your fingers to the side of your head, of winking to take a picture, well I don’t like that so much. I admit I might be a luddite here, and am going to keep my eyes and ears open for indications that I’m wrong. It happens, quite a bit when it comes to brand-new tech.

I think they could be a great part of a mobile computing platform. With more computing power and UI in my pocket, in the form of my smart phone, or in a big pocket, in the form of a tablet. They communicate over Bluetooth, and together form a more useful reading and communication device, but probably still not a very good writing tool.

I totally agree with Dave that a mouse/keyboard will be a requirement for any serious content creation, which is why I still prefer a Blackberry (lusting after the Q10, to be precise). But Google Glass is not going to be a content creation device so much as the initial, baby step towards true Augmented Reality. Note that Google describes Glass as having a primarily voice-directed interface, for initiating search queries, taking a picture, or real-time language transcription. The main function of Google Glass is to record video and take pictures (not content creation, but content acquisition), to facilitate access to information, and most importantly to overlay data onto the visual field, such as maps or translations. It’s the latter that is the “augmentation” of reality part, and is very, very crude.

Dennou Coil, episode 1

A much more sophisticated vision of Augmented Reality is the one in the anime series, Dennou Coil. I’ve written a number of posts reviewing the series, including a review of my favorite episode where digital, virtual lifeforms colonize a character’s bald head (not unlike the Futurama episode Godfellas) and my closing thoughts on the series as a whole. The screenshot at right is from the first episode, which clearly lays out the technology paradigm: people wear special glasses that let them see virtual realities overlaid onto our real, physical world. Sound familiar?

But it’s cooler than that. In the screencap, the main character is using a cell phone that she draws in the air. There’s no need for physical technology anymore like cell phones or PDAs or even iPods or tablets. Literally, the entire world is your canvas and you consume your content through your regular senses. This is a vision that transcends mere augmentation of reality and becomes more akin to an extension of reality itself.

And it’s not limited to tech gadgetry – the concept extends to virtual pets, to virtual homes, even ultimately to the evolution of virtual lifeforms that inhabit the same geographic space as we do but are invisible unless your glasses reveal them. I will be astonished if no one on the Google Glass team has seen this series.

So, Google Glass really is a tentative step towards something new, and there is enormous potential in where it might lead. But as a device itself, Glass won’t be very transformative, because as Dave points out it will be an adjunct to our existing devices. And the content that people pay to consume won’t be created on Glass any more than it is created on iPads or Galaxy phones. Every single major technological advance of the past ten years has been in content consumption devices, not creation. Glass will be no different in that regard.

But content creation vs consumption is the old paradigm. The new one has less to do with “content,” which is passively consumed, and more to do with “information”: a dynamic, contextual flow.

Budget Gaming PC build for spring 2013

Corsair Carbide 400R (upgrade)
One of my hobbies here at Haibane is blogging about computer hardware, and I’ve decided to put some of that hobbyist energy towards creating my own spec sheets for PC builds, mainly because I’ve been asked to do that a few times recently for friends and family anyway. I’ve created a page here at Haibane called the Budget Gamer Build that specs out an entry-level box that should be capable of running most modern games at medium resolution, at a target price of $800. There’s also an upgraded version of the build that comes in at $1200, which offers better graphics performance, better audio, and an SSD.

The writeup goes into detail about why I chose each component, but I also have direct links to Lists at Amazon to facilitate ordering:

(I get a few percent back from any purchase at Amazon via those links or the affiliate links on the spec page here at haibane.)

I will probably update that page every quarter so I stay within the price envelope and add new components as applicable. Hopefully I will also find time to spec out a higher-end build in the $2000 range and a home-theater build in the $1000 range as well. If you are looking to build a PC, I’d appreciate the opportunity to advise you as well – just drop me a line or comment.

Will Amazon update the Kindle DX?

The Amazon Kindle DX: an iPad-sized eInk screen.

Looks like Amazon is going to have a number of new Kindle models, including next-generation versions of the Kindle Fire in both 7 and 10 inch versions, and also an updated Kindle Touch that incorporates screen illumination (for parity with the new Nook version that came out a few months ago). Amazon is even rumored to be working on a Kindle phone. But the Kindle DX (with a 10 inch screen) is still stuck in its previous-generation, overpriced ghetto. You can buy a DX today but you’re getting the older version of the eInk screen, not the newer one on the latest eInk Kindles with faster refresh times and better contrast. And you’re paying a monstrously inflated price reminiscent of the first-generation Kindle hardware. The DX doesn’t even have the same software as its smaller brethren, including the advanced PDF support. For these reasons the DX is basically a dinosaur that has been unchanged for almost 3 years. One of the reasons I held out for so long in buying a Kindle of my own is that I kept hoping for a DX refresh, but they still haven’t even discounted the aging hardware.

I would still buy the old DX if they dropped the price in half. And if they came out with a new version, I’d find it compelling at the same price point it is now – imagine how amazing a Kindle DX Touch would be? It would be smaller, lighter, thinner than an iPad 3 and would have 100 times the battery life. It would be a much more natural platform for reading digital newspapers and magazines. And we can dream even bigger: what if the DX had a more advanced touch screen to allow note-taking with a stylus? Suddenly it would be more compelling than an iPad for hundreds of thousands of students. In fact given the cheaper hardware and longer battery life, a note-taking DX would be a real game-changer.

Debating Dyson spheres

A wonderfully geeky debate is unfolding about the practicality of Dyson spheres – or rather, of a subset type called a Dyson swarm. George Dvorsky begins by breaking the problem down into 5 steps:

  1. Get energy
  2. Mine Mercury
  3. Get materials into orbit
  4. Make solar collectors
  5. Extract energy

The idea is to build the entire swarm in iterative steps and not all at once. We would only need to build a small section of the Dyson sphere to provide the energy requirements for the rest of the project. Thus, construction efficiency will increase over time as the project progresses. “We could do it now,” says Armstrong. It’s just a question of materials and automation.

Alex Knapp takes issue with the idea that step 1 could provide enough energy to execute step 2, with an assist from an astronomer:

“Dismantling Mercury, just to start, will take 2 x 10^30 Joules, or an amount of energy 100 billion times the US annual energy consumption,” he said. “[Dvorsky] kinda glosses over that point. And how long until his solar collectors gather that much energy back, and we’re in the black?”

I did the math to figure that out. Dvorsky’s assumption is that the first stage of the Dyson Sphere will consist of one square kilometer, with the solar collectors operating at about 1/3 efficiency – meaning that 1/3 of the energy it collects from the Sun can be turned into useful work.

At one AU – which is the distance of the orbit of the Earth, the Sun emits 1.4 x 10^3 J/sec per square meter. That’s 1.4 x 10^9 J/sec per square kilometer. At one-third efficiency, that’s 4.67 x 10^8 J/sec for the entire Dyson sphere. That sounds like a lot, right? But here’s the thing – if you work it out, it will take 4.28 x 10^28 seconds for the solar collectors to obtain the energy needed to dismantle Mercury.

That’s about 120 trillion years.
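Knapp’s single-collector arithmetic is easy to reproduce. Here’s a quick sanity check in Python, using only the constants quoted above (the 1 km² collector, 1/3 efficiency, and 2 x 10^30 J figure):

```python
# Knapp's scenario: a single 1 km^2 collector at 1 AU, 1/3 efficiency
sun_flux = 1.4e9                  # J/s per km^2 at 1 AU (1.4e3 J/s per m^2)
area_km2 = 1.0                    # the first-stage collector
eff = 1.0 / 3.0
power = sun_flux * area_km2 * eff # ~4.67e8 J/s, matching Knapp
e_mercury = 2e30                  # J needed to dismantle Mercury
seconds = e_mercury / power
years = seconds / (3600 * 24 * 365)
print(f"{power:.3g} W -> {years:.3g} years")
```

This lands on the order of 10^14 years, in the neighborhood of Knapp’s “about 120 trillion years” – so his conclusion follows from his premise of a single, never-growing collector.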

I’m not sure that this is correct. As I understood Dvorsky’s argument, the five steps are iterative, not linear. In other words, the first solar panel wouldn’t need to collect *all* the energy to dismantle Mercury; rather, as more panels are built, their increased surface area would supply the energy for future mining and construction.
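To see why iteration changes the picture so dramatically, here is a toy bootstrap simulation. The energy cost of building a square kilometer of collector is a made-up placeholder – only the exponential shape of the result matters, not the exact numbers:

```python
# Toy bootstrap model: each day, half the energy collected builds new
# collector area, and half is banked toward dismantling Mercury.
flux = 1.4e9 / 3.0   # usable W per km^2 at 1 AU, at 1/3 efficiency
build_cost = 1e15    # J per km^2 of new collector -- a made-up placeholder
e_mercury = 2e30     # J needed to dismantle Mercury
area, bank, t = 1.0, 0.0, 0.0
day = 86400.0
while bank < e_mercury:
    collected = area * flux * day
    bank += collected / 2.0                 # toward mining Mercury
    area += (collected / 2.0) / build_cost  # reinvested in new collectors
    t += day
print(f"{t / (3600 * 24 * 365):.1f} years, {area:.3g} km^2 of collectors")
```

With these placeholder numbers the job finishes in a handful of years; a non-reinvesting build with the same assumptions would take many orders of magnitude longer. Exponential reinvestment is the whole argument.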

However, the numbers don’t quite add up. Here’s my code in SpeQ:


sun = 1.4e9 W/km2
sun = 1.4 GW/km²

AU = 149597870.700 km
AU = 149.5978707 Gm

' surface area of a full Dyson sphere at 1 AU (for reference; not used below)
areaDyson = 4*Pi*(AU^2)
areaDyson = 281229.379159805 Gm²

' collector area assumed for the calculation below (a small fraction of the full sphere)
areaDyson2 = 6.9e13 km2
areaDyson2 = 69 Gm²

' solar power efficiency
eff = 0.3
eff = 0.3

' energy absorbed W
energy = sun*areaDyson2*eff
energy = 28.98 ZW

' total energy to dismantle Mercury (J)
totE = 2e30 J
totE = 2e6 YJ

' time to dismantle mercury (sec)
tt = totE / energy
tt = 69.013112491 Ms

AddUnit(Years, 3600*24*365 seconds)
Unit Years created

' years
Convert(tt, Years)
Ans = 2.188391441 Years

So I am getting 2.9 x 10^22 W, not the 4.67 x 10^8 W that Knapp does – the difference is the assumed collector area (6.9 x 10^13 km² above, versus Knapp’s single square kilometer). At that power, instead of 120 trillion years it takes only 2.2 years to gather the energy we need to dismantle Mercury.
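The same arithmetic translates directly into Python, as a double check of the SpeQ session above:

```python
flux = 1.4e9              # W per km^2 at 1 AU
area = 6.9e13             # km^2 of collectors, the figure used in the SpeQ session
eff = 0.3                 # solar power efficiency
power = flux * area * eff            # ~2.9e22 W
seconds = 2e30 / power               # energy to dismantle Mercury / power
print(f"{power:.3g} W, {seconds / (3600 * 24 * 365):.2f} years")
```

This reproduces the 28.98 ZW and ~2.2 year figures from the SpeQ output.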

Of course, with the incremental approach of iteration you don’t have access to all of that energy at once. But it certainly seems feasible in principle – the engineering issues, however, are the real showstopper. I don’t see any of this happening until we are actually able to travel around the solar system using something other than chemical reactions for thrust. Let’s focus on building a real VASIMR drive first, rather than counting our Dyson spheres before they hatch.

Incidentally, Dvorsky points to this lecture titled “von Neumann probes, Dyson spheres, exploratory engineering and the Fermi paradox” by Oxford physicist Stuart Armstrong for the initial idea. It’s worth watching:

UPDATE: Stuart Armstrong himself replies to Knapp’s comment thread:

My suggestion was never a practical idea for solving current energy problems – it was connected with the Fermi Paradox, showing how little effort would be required on a cosmic scale to start colonizing the entire universe.
[…]
Even though it’s not short term practical, the plan isn’t fanciful. Solar power is about 3.8×10^26 Watts. The gravitational binding energy of Mercury is about 1.80 ×10^30 Joules, so if we go at about 1/3 efficiency, it would take about 5 hours to take Mercury apart from scratch. And there is enough material in Mercury to dyson nearly the whole sun (using a Dyson swarm, rather than a rigid sphere), in Mercury orbit (moving it up to Earth orbit would be pointless).

So the questions are:

1) Can we get the whole process started in the first place? (not yet)

2) Can we automate the whole process? (not yet)

3) And can we automate the whole process well enough to get a proper feedback loop (where the solar captors we build send their energy to Mercury to continue the mining that builds more solar captors, etc…)? (maybe not possible)

If we get that feedback loop, then exponential growth will allow us to disassemble Mercury in pretty trivial amounts of time. If not, it will take considerably longer.

my @AmazonKindle Touch: well worth the wait

I’ve wanted a Kindle since version 2.0, and it’s hard to imagine that these devices once cost several hundred dollars. At long last, I’ve joined the club, with this little beauty:

my Kindle Touch WiFi

With a retail price of $99 it is almost a no-brainer now. Especially since buying a hardware Kindle gets you access to the Kindle Lending Library (assuming you are an Amazon Prime customer), which lets you read one book a month for free. I’m working my way through The Hunger Games now.

In addition, public libraries have ebook lending programs that work just like regular borrowing (though, as with physical books, you have to put a hold on the popular ones and wait a while). And of course there is Project Gutenberg and the vast public domain. I’m not averse to buying books, but the same rules apply in my mind to buying an ebook as to buying a physical one: unless it’s a must-read, I can wait to borrow it from my library. The fact that the library lending model extends to ebooks is just pure unadulterated awesome. But if there’s something I really want to read, I can wait a month and get it via Amazon’s program, so that’s an advantage over the physical realm.

Of course, with the recent news that the Department of Justice is going after Apple and the big-name publishers for price-fixing collusion on ebooks, the price of ebooks will likely drop significantly at Amazon. The first book of The Hunger Games trilogy is already marked down to $5 (though the sequels are higher). The omnibus collection of the Game of Thrones books still says “price set by the publisher” at $29.99 which is less than $8/book, and I wouldn’t be surprised if that drops in the next week to $25 as well.

It’s a good time to own a Kindle. I have the same feeling of loyalty towards the Amazon ecosystem as most Apple stalwarts do towards theirs.

in defense of Microsoft Word

It’s the post-PC era, where we use apps and mobile phones and tablets and ultrabooks, e-books, iBooks, and Nooks. We Kindle and we Hulu and we tweet and tumblr and like. Everything is in a cloud somewhere. This is quite a change from the halcyon days when computing meant sitting down at your computer and launching a program to do something; now all it seems we do (if you live in the digerati echo chamber, that is) is consume and critique.

That’s the context I perceive for this piece by Tom Scocca (@tomscocca) in Slate mocking Microsoft Word, which quickly went viral. Of the many Top Tweets about it, I found these two rather illustrative:

Most of the other tweets just repeat the author’s assertion that Word is “cumbersome, inefficient, and a relic of obsolete assumptions about technology.” The tweets above are useful in that they are explicit in their counter-assumptions about technology; namely, that the only real writing happens on the Web. It’s certainly true that using Word for simple text like email or blog posts is overkill, in much the same way that using a jet engine to drive your lawnmower is. What’s peculiar is that rather than using simpler tools for their simpler tasks, these people have declared that the more complex and capable tool is “obsolete” and “must die”. This attitude betrays a type of phobia towards technology that I suspect has grown more prevalent as our technology interfaces have become increasingly “dumbed down”.

In actuality, most of the writing in the real world is the complex variety that requires more than a few buttons for bold, italics and blockquote. Ask any lawyer writing a brief, a scientist writing a grant, or a student writing a dissertation how useful Word is and you’ll get a very different perspective than that of people writing tweets about how Word is too complicated for their blogging. Scocca himself acknowledges that he used Word when he wrote his book, which is a pretty telling reveal that completely undercuts his argument that Word has outlived its utility.

If I were to match Scocca’s hyperbole, I’d have to contend that Word is possibly the finest piece of software ever written, in terms of its general utility to mankind. That statement is arguably more true than claiming Word must “die” – especially since as of fiscal year 2011, Office 2010 had sold over 100 million licenses and drove record revenue growth. And note that the division inside Microsoft that releases Office for the Mac is actually the largest OS X software developer outside of Apple, Inc. itself.

The reason that Word has outlived all its competitors, including the dearly departed WordPerfect and Word Pro, is that it has evolved over time into an indispensable tool that saves writers time and keeps them organized. Here’s a great list of 10 features in Word that any serious writer should be intimately familiar with. And even for casual use, some basic knowledge of Word’s features can let you do amazing things with simple text.

However, let’s suppose that you really don’t want to do anything fancy at all. You just want to write a plain text document, which is the basis of Scocca’s argument. Is Microsoft Word really as bad as he makes it out to be? Here’s a quick summary of Scocca’s complaints, with my comments:

* Too many features that are left “on”. As examples, he uses the infamous Clippy (which hasn’t been in Word since 2003) and the auto-correct function (which is also enabled by default in Gmail, as well as TextEdit and OS/X Lion). If you really hate the autocorrect, though, it’s almost trivially easy to turn it off – a small blue bar always appears under the autocorrected word when the cursor is next to it. You can use that to access a contextual dropdown that lets you immediately undo the autocorrect or turn it off entirely, for example:

autocorrect options in Microsoft Word

* Scocca finds certain features irritating, specifically the “st” and “th” superscripts on ordinal numbers (1st, 2nd, 3rd, etc.) and auto-indented numbered lists. This is largely a matter of personal taste. Style manuals tend to recommend against superscripted ordinals out of concern about line spacing, but modern word processors like Word can easily handle a superscript without breaking the paragraph’s layout.

* He thinks that Word incorrectly uses apostrophes and quotes. He’s mistaken; see the image below where I demonstrate single and double quotes. Note that if you insist on using “dumb” quotes, you can immediately revert by using CTRL-Z (which every Word user should be familiar with, hardly “hidden under layers of toolbars”).

smart quotes in Microsoft Word

* For some reason, the logo for the Baltimore Orioles uses a backwards apostrophe, and for some reason Scocca believes this is Word’s fault. Try typing O-apostrophe-s (O’s) into Word and you’ll see that the apostrophe is indeed facing the right way. I’m frankly unclear on why the backwards apostrophe in the Orioles’ logo is a threat to civilization, but even if it is, it’s not Word’s fault.

* Word uses a lot of metadata to keep track of its detailed and complex formatting. This has the effect of increasing file sizes by a trivial and negligible amount (the files taking up space on your hard drive aren’t Word documents; they are MP3 files, video, and photos). Bizarrely, Scocca tries to cut and paste the metadata back into Word as proof of excess, but this is a completely meaningless exercise which proves nothing. It’s true that if you try to open a native Word file in a plaintext editor, you’ll see a lot of gobbledygook, but why would you do that? If you open a JPG file in a text editor you’ll see the same stuff. Every file has metadata, and this is a good thing when you use the file in the software for which it is intended. Of course, Word lets you export your data to any number of file formats, including web-friendly XML and plain text, so Scocca’s ire here is particularly misplaced and mystifying.

* Scocca sneers that Word still uses the paradigm of a “file” on a single “computer”. He says it’s impossible to use Word to collaborate or share. Perhaps he’s unaware that, as of last month, email-based file attachments have been around for 20 years? Microsoft is also launching a cloud-based version of Office, called Office 365, and with the advent of tools like Dropbox and Live Mesh the old one-file-one-PC paradigm is no longer a constraint. It’s actually better that Word focus on words and not include network-based sharing or whatnot; there are tools for that, and isn’t feature bloat one of Scocca’s chief complaints anyway?

* and finally, he calls the Revision Marking feature of Word “psychopathic” and “passive-aggressive”. I wonder if he’s ever actually collaborated on a document? The revision feature has literally transformed how I collaborate with my colleagues and is probably the single most useful feature in Word. It’s trivially easy to accept a single specific change or to do a global “Accept All” between revisions and users. The interface, with color-coded balloons for different users in the margin rather than in-line is elegant and readable. Scocca gripes that “No change is too small to pass without the writer’s explicit approval” – would he rather the software decide which revisions are worthy of highlighting and which aren’t? This complaint is utterly baffling to anyone who has ever actually used the feature.
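Incidentally, the smart-quote rule behind the “backwards apostrophe” complaint above is simple enough to sketch: a quote that follows whitespace (or starts the text) opens, and anything else closes, which is exactly why an apostrophe typed after an “O” curls the right way. Here’s a toy version (my own naive reimplementation, not Word’s actual algorithm):

```python
def smarten(text: str) -> str:
    # Naive smart-quote pass: a quote after start-of-text or whitespace
    # becomes an opening quote; otherwise it becomes a closing
    # quote/apostrophe.
    out = []
    for i, ch in enumerate(text):
        opening = (i == 0) or text[i - 1].isspace()
        if ch == '"':
            out.append("\u201c" if opening else "\u201d")
        elif ch == "'":
            out.append("\u2018" if opening else "\u2019")
        else:
            out.append(ch)
    return "".join(out)

print(smarten("O's"))   # the apostrophe curls the right way: O’s
```

Even this crude rule gets the Orioles case right; Word’s real AutoFormat logic handles more edge cases than this.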

Frankly, as a regular Word user for years myself, I find it pretty hard to sympathize with Scocca’s rant. None of his feature complaints are really valid, apart from some stylistic preferences (he’d rather bullet his own lists, etc.) which are easily modified in Word’s settings. If the menus are really so intimidating, it’s trivially easy to google things like “disable autocorrect”, and if your google-fu isn’t up to that task then you can always leave a post at Microsoft’s super-friendly user forums, where ordinary users themselves will be glad to walk you through it.

If Microsoft Word were to truly die, then we’d lose one of the most productive tools for complex and professional writing in existence. If that’s the future of the written word, where anything above the level of complexity of a tweet, email or blog post is considered too hard to deal with (and software gets dumber to match), then it’s a grim future indeed.

Long live Microsoft Word!