Consider playing in the NFL as the epitome of sports – and being an astronaut as the epitome of a STEM career. In both cases, postulate that college is where you can reasonably draw a line for determining basic qualification for application. In the case of the NFL, to reasonably apply to the NFL you must at minimum play NCAA football. In the case of an astronaut, you must at minimum have a Bachelor’s degree in a STEM-related field. Fair enough?
The NFL statistics are summarized in this graphic (via @GatorsScott) –
The relevant numbers are: 15,588 NCAA seniors playing football, of which 256 are drafted to the NFL, or 256/15,588 = 1.6%. (Note: these numbers are from 2013, via a study commissioned by the NCAA.)
This year’s astronaut corps application had a total of 18,300 applications. The minimum education requirements to apply are “a bachelor’s degree from an accredited institution in engineering, biological science, physical science, computer science or mathematics. An advanced degree is desirable” (about a third of astronauts have an MS, and a third have PhDs). There will be 8-14 open slots, so let’s assume the maximum for the best possible probability: 14/18,300 = 0.07%.
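The odds above are simple ratios; here’s a quick script double-checking them with the figures quoted in this post:

```python
# Back-of-the-envelope odds from the figures quoted above.
ncaa_seniors = 15588      # NCAA seniors playing football (2013 study)
nfl_draftees = 256        # drafted to the NFL that year
applicants = 18300        # astronaut corps applications this cycle
astronaut_slots = 14      # best case: the maximum of the 8-14 openings

nfl_odds = nfl_draftees / ncaa_seniors       # ~0.016, the 1.6% figure
astro_odds = astronaut_slots / applicants    # ~0.00077, the ~0.07% figure

print(f"NFL draft odds: {nfl_odds:.2%}")
print(f"Astronaut odds: {astro_odds:.3%}")
print(f"NFL odds are roughly {nfl_odds / astro_odds:.0f}x better")
```

Even granting the astronaut corps its most generous numbers, making the NFL is about twenty times more likely.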
Now, this doesn’t disprove the so-called STEM shortage – the evolution of the modern-day disposable academic suffices to do that on its own. It is, however, a cautionary tale about the rhetoric we use when we tell children to “reach for the stars”. That’s good advice for *children*, but as advice to college students, it’s terrible. A child should be encouraged to dream, and dream big. A college student is practically an adult and deserves to hear stark realities about the job market, because that is precisely the moment in time when they have to make decisions about the rest of their life – decisions that should be informed by those dreams, but not dictated by them.
There are a lot of astronauts and NFL players who decided from day one that was what they were going to do, and succeeded. And that is amazing. But there just isn’t enough room for everyone who is equally capable and has the same amount of sheer determination and talent to do the same. We don’t need 18,300 astronauts, nor do we need 15,588 NFL players drafted every year.
Following my router troubles earlier, I have finally upgraded to the Asus RT-AC56U. I’ve been using an old Linksys WRT54GL as an access point for legacy 802.11g connections, so here is the baseline for comparison, from a desktop machine located two feet away, using built-in wifi antennas:
Linksys WRT54GL, 802.11g, 2.4 GHz
Here’s the result using the new router:
ASUS RT-AC56U, 802.11n, 2.4 GHz
Here’s the result from my main workstation PC in the basement, first using a PCI wifi adapter with the old router:
Linksys WRT54GL, 802.11g, 2.4 GHz
and using the new router, with a USB AC-1200 wifi adapter (ASUS USB-AC56):
As this is a geekblog, I might as well document my woes here in public. Here is the support ticket I filed with Netgear just now.
I purchased a WNDR3700 on 1/11/2011 – serial number 21840B550A390. I have registered the router on my.netgear.com.
This week the 5 GHz wireless network stopped working entirely. I have updated to the latest firmware, and have also confirmed the following:
– the 5 GHz blue light is on
– the settings on the configuration dashboard (192.168.1.1) indicate the 5 GHz network is active
– SSID for 5 GHz is set to “broadcast SSID name: on”
– the 2.4 GHz network works fine, computers connected can access internet
– computers attached to the router via ethernet can also access the internet normally
However, no device capable of 5 GHz is able to detect the 5 GHz SSID. The scanning software inSSIDer does not detect any 5 GHz network being broadcast either.
Logically, the antenna or antenna amplifier may have burned out; I can think of no other explanation in software for why 5 GHz is missing. The router itself is convinced that 5 GHz is indeed working, but it isn’t. That suggests a hardware problem to me.
The router is only 2 1/2 years old and my previous Netgear routers are still going strong at my relatives’ homes after 5-6 years so this is very surprising. I am hoping Netgear support will not disappoint me.
I am skeptical that Netgear will be willing to replace the unit, but if they make some kind of gesture, that will go a long way towards persuading me to buy a Netgear replacement. I’m not going to bother with a draft 11ac router; all I need is a solid 11abgn machine with some MIMO and I’ll be happy. Unless they make me a good deal, I am very tempted to ditch Netgear. For example, that ASUS RT-N66U “Dark Knight” got a nice review. External antennas, too!
I think they will make an excellent display device for the obvious reason that they’re mounted in front of your eyes, the organ we use for vision. The idea of moving your fingers to the side of your head, or of winking to take a picture – well, I don’t like that so much. I admit I might be a luddite here, and am going to keep my eyes and ears open for indications that I’m wrong. That happens quite a bit when it comes to brand-new tech.
I think they could be a great part of a mobile computing platform, paired with more computing power and UI in my pocket (in the form of my smartphone) or in a big pocket (in the form of a tablet). The two communicate over Bluetooth, and together form a more useful reading and communication device – but probably still not a very good writing tool.
I totally agree with Dave that a mouse/keyboard will be a requirement for any serious content creation, which is why I still prefer a Blackberry (lusting after the Q10, to be precise). But Google Glass is not going to be a content creation device so much as the initial, baby step towards true Augmented Reality. Note that Google describes Glass as having a primarily voice-directed interface, for initiating search queries, taking a picture, or real-time language transcription. The main function of Google Glass is to record video and take pictures (not content creation, but content acquisition), to facilitate access to information, and most importantly to overlay data onto the visual field, such as maps or translations. It’s the latter that is the “augmentation” of reality part, and is very, very crude.
A much more sophisticated vision of Augmented Reality is the one in the anime series, Dennou Coil. I’ve written a number of posts reviewing the series, including a review of my favorite episode where digital, virtual lifeforms colonize a character’s bald head (not unlike the Futurama episode Godfellas) and my closing thoughts on the series as a whole. The screenshot at right is from the first episode, which clearly lays out the technology paradigm: people wear special glasses that let them see virtual realities overlaid onto our real, physical world. Sound familiar?
But it’s cooler than that. In the screencap, the main character is using a cell phone that she draws in the air. There’s no need for physical technology anymore – no cell phones or PDAs or even iPods or tablets. Literally, the entire world is your canvas and you consume your content through your regular senses. This is a vision that transcends mere augmentation of reality and becomes more akin to an extension of reality itself.
And it’s not limited to tech gadgetry – the concept extends to virtual pets, to virtual homes, even ultimately to the evolution of virtual lifeforms that inhabit the same geographic space as we do but are invisible unless your glasses reveal them. I will be astonished if no one on the Google Glass team has seen this series.
So, Google Glass really is a tentative step towards something new, and there is enormous potential in where it might lead. But as a device itself, Glass won’t be very transformative, because as Dave points out it will be an adjunct to our existing devices. And the content that people pay to consume won’t be created on Glass any more than it is created on iPads or Galaxy phones. Every single major technological advance of the past ten years has been in content consumption devices, not creation. Glass will be no different in that regard.
But content creation vs. consumption is the old paradigm. The new one has less to do with “content”, which is passively consumed, and more to do with “information”: a dynamic, contextual flow.
One of my hobbies here at Haibane is blogging about computer hardware, and I’ve decided to put some of that hobbyist energy towards creating my own spec sheets for PC builds, mainly because I’ve been asked to do that a few times recently for friends and family anyway. I’ve created a page here at Haibane called the Budget Gamer Build that specs out an entry-level box that should be capable of running most modern games at medium resolution, at a target price of $800. There’s also an upgraded version of the build that comes in at $1200 which offers better graphics performance, audio and an SSD drive.
The writeup goes into detail about why I chose each component, but I also have direct links to Lists at Amazon to facilitate ordering:
(I get a few percent back from any purchase at Amazon via those links or the affiliate links on the spec page here at haibane.)
I will probably update that page every quarter so I stay within the price envelope and add new components as applicable. Hopefully I will also find time to spec out a higher-end build in the $2000 range and a home-theater build in the $1000 range as well. If you are looking to build a PC, I’d appreciate the opportunity to advise you as well – just drop me a line or comment.
Looks like Amazon is going to have a number of new Kindle models, including next-generation versions of the Kindle Fire in both 7 and 10 inch versions, and also an updated Kindle Touch that incorporates screen illumination (for parity with the new Nook version that came out a few months ago). Amazon is even rumored to be working on a Kindle phone. But the Kindle DX (with a 10 inch screen) is still stuck in its previous-generation, overpriced ghetto. You can buy a DX today, but you’re getting the older version of the eInk screen, not the new one with faster refresh times and better contrast on the latest eInk Kindles. And you’re paying a monstrously inflated price reminiscent of the first-generation Kindle hardware. The DX doesn’t even have the same software as its smaller brethren, including the advanced PDF support. For these reasons the DX is basically a dinosaur that has been unchanged for almost 3 years. One of the reasons I held out for so long in buying a Kindle of my own is that I kept hoping for a DX refresh, but they still haven’t even discounted the aging hardware.
I would still buy the old DX if they cut the price in half. And if they came out with a new version, I’d find it compelling at the same price point it is now – imagine how amazing a Kindle DX Touch would be. It would be smaller, lighter, and thinner than an iPad 3, and would have 100 times the battery life. It would be a much more natural platform for reading digital newspapers and magazines. And we can dream even bigger: what if the DX had a more advanced touch screen to allow note-taking with a stylus? Suddenly it would be more compelling than an iPad for hundreds of thousands of students. In fact, given the cheaper hardware and longer battery life, a note-taking DX would be a real game-changer.
A wonderfully geeky debate is unfolding about the practicality of Dyson Spheres – or rather, a subset type called a Dyson Swarm. George Dvorsky begins by breaking the problem down into 5 steps:
Get materials into orbit
Make solar collectors
The idea is to build the entire swarm in iterative steps and not all at once. We would only need to build a small section of the Dyson sphere to provide the energy requirements for the rest of the project. Thus, construction efficiency will increase over time as the project progresses. “We could do it now,” says Armstrong. It’s just a question of materials and automation.
Alex Knapp takes issue with the idea that step 1 could provide enough energy to execute step 2, with an assist from an astronomer:
“Dismantling Mercury, just to start, will take 2 x 10^30 Joules, or an amount of energy 100 billion times the US annual energy consumption,” he said. “[Dvorsky] kinda glosses over that point. And how long until his solar collectors gather that much energy back, and we’re in the black?”
I did the math to figure that out. Dvorsky’s assumption is that the first stage of the Dyson Sphere will consist of one square kilometer, with the solar collectors operating at about 1/3 efficiency – meaning that 1/3 of the energy it collects from the Sun can be turned into useful work.
At one AU – which is the distance of the orbit of the Earth, the Sun emits 1.4 x 10^3 J/sec per square meter. That’s 1.4 x 10^9 J/sec per square kilometer. At one-third efficiency, that’s 4.67 x 10^8 J/sec for the entire Dyson sphere. That sounds like a lot, right? But here’s the thing – if you work it out, it will take 4.28 x 10^28 seconds for the solar collectors to obtain the energy needed to dismantle Mercury.
That’s about 120 trillion years.
I’m not sure that this is correct. The way I understood Dvorsky’s argument, the five steps are iterative, not linear. In other words, the first solar panel wouldn’t need to collect *all* the energy to dismantle Mercury; rather, as more panels are built, their increased surface area would help fund the energy costs of future mining and construction.
However, the numbers don’t quite add up. Here’s my code in SpeQ:
' energy absorbed W
energy = sun*areaDyson2*eff
energy = 28.98 ZW
'total energy to dismantle mercury (J)
totE = 2e30 J
totE = 2e6 YJ
' time to dismantle mercury (sec)
tt = totE / energy
tt = 69.013112491 Ms
AddUnit(Years, 3600*24*365 seconds)
Unit Years created
Ans = 2.188391441 Years
So, I am getting 2.9 x 10^22 W, not the 4.67 x 10^8 W that Knapp does. So instead of 120 trillion years, it only takes about 2.2 years to collect the energy we need to dismantle Mercury.
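For anyone who wants to check the arithmetic, here is the same comparison in Python. The 2.9 x 10^22 W swarm figure is my SpeQ result above, and 4.67 x 10^8 W is Knapp’s single square kilometer:

```python
# Compare the two timescales for banking the 2e30 J needed to
# dismantle Mercury: Knapp's single 1 km^2 collector vs. the
# swarm-scale power from my SpeQ calculation above.
SECONDS_PER_YEAR = 3600 * 24 * 365

dismantle_mercury_j = 2e30   # J, energy to dismantle Mercury
knapp_power_w = 4.67e8       # W, one km^2 at 1 AU at 1/3 efficiency
swarm_power_w = 2.9e22       # W, swarm-scale figure from SpeQ

knapp_years = dismantle_mercury_j / knapp_power_w / SECONDS_PER_YEAR
swarm_years = dismantle_mercury_j / swarm_power_w / SECONDS_PER_YEAR

print(f"1 km^2 collector: {knapp_years:.3g} years")  # on the order of 10^14
print(f"Swarm-scale:      {swarm_years:.3g} years")  # about 2.2
```

The two answers differ only in the assumed collector area; the physics is the same simple division either way.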
Of course, with the incremental approach of iteration you don’t have access to all of that energy at once. But it certainly seems feasible in principle; the engineering issues, however, are the real showstopper. I don’t see any of this happening until we are actually able to travel around the solar system using something other than chemical reactions for thrust. Let’s focus on building a real VASIMR drive first, rather than counting our Dyson spheres before they hatch.
Incidentally, Dvorsky points to this lecture titled “von Neumann probes, Dyson spheres, exploratory engineering and the Fermi paradox” by Oxford physicist Stuart Armstrong for the initial idea. It’s worth watching:
UPDATE: Stuart Armstrong himself replies to Knapp’s comment thread:
My suggestion was never a practical idea for solving current energy problems – it was connected with the Fermi Paradox, showing how little effort would be required on a cosmic scale to start colonizing the entire universe.
Even though it’s not short term practical, the plan isn’t fanciful. Solar power is about 3.8×10^26 Watts. The gravitational binding energy of Mercury is about 1.80×10^30 Joules, so if we go at about 1/3 efficiency, it would take about 5 hours to take Mercury apart from scratch. And there is enough material in Mercury to dyson nearly the whole sun (using a Dyson swarm, rather than a rigid sphere), in Mercury orbit (moving it up to Earth orbit would be pointless).
So the questions are:
1) Can we get the whole process started in the first place? (not yet)
2) Can we automate the whole process? (not yet)
3) And can we automate the whole process well enough to get a proper feedback loop (where the solar captors we build send their energy to Mercury to continue the mining that builds more solar captors, etc…)? (maybe not possible)
If we get that feedback loop, then exponential growth will allow us to disassemble Mercury in pretty trivial amounts of time. If not, it will take considerably longer.
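The feedback loop Armstrong describes is just compound growth, and a toy model makes the point. The seed collector power and the yearly doubling rate below are purely hypothetical numbers I picked for illustration, not Armstrong’s figures:

```python
# Toy model of the self-replicating collector loop: each year's
# output is reinvested into building more collectors, doubling
# capacity. The seed power (1 GW) and the doubling rate are
# made-up illustrative values, not Armstrong's figures.
TARGET_J = 2e30              # rough energy to dismantle Mercury
SECONDS_PER_YEAR = 3600 * 24 * 365

power_w = 1e9                # hypothetical seed collector output (W)
banked_j = 0.0               # energy accumulated so far (J)
years = 0
while banked_j < TARGET_J:
    banked_j += power_w * SECONDS_PER_YEAR  # energy gathered this year
    power_w *= 2.0                          # reinvest: double capacity
    years += 1

print(f"Target reached after {years} years of doubling")
```

At a fixed 1 GW with no reinvestment, the same seed collector would need over 10^13 years; with yearly doubling it hits the target in under 50. That is the difference the feedback loop makes.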