binary thinking

Cognition is more complicated than IQ.

I try to stay out of political theory on this blog, but Vox Day’s essay on the differences between the “VHIQ” and the “UHIQ” struck me as intellectually interesting enough that I felt like exploring it further. Personally, I don’t know what my IQ is, so that means I am merely above average*, since only people with very/ultra-high IQ seem to be motivated to willingly take the test. VD lists a number of plausible qualitative traits, of which the following caught my eye:

VHIQ inclines towards binary either/or thinking and taking sides. UHIQ inclines towards probabilistic thinking and balancing between contradictory possibilities.

VHIQ is uncomfortable with chaos and seeks to impose order on it, even if none exists. UHIQ is comfortable with chaos and seeks to recognize patterns in it.

VHIQ is competitive. UHIQ doesn’t keep score.

VD later goes on to quote Wechsler, the founder of the IQ test, at length and summarizes:

Wechsler is saying quite plainly that those with IQs above 150 are different in kind from those below that level. He is saying that they are a different kind of mind, a different kind of human being.

The division into binary groups here – “normal human” (sub-150 IQ) and the Next (150+), and then at the next iteration between VHIQ and UHIQ – is confusing to me, particularly since it is IQ itself being used to classify people into the binary choices. In the comments, VD clarifies (?) that “It’s entirely possible for a 175 IQ to be VHIQ and for a 145 IQ to be UHIQ” but that just moves the binary classification to a relative scale rather than an absolute one. Since he also asserts that you need to be at least +3 SD (i.e., an IQ of 145) to even qualify as VHIQ, it’s clear that the numbers do matter.

There’s a glaring circularity here that I am doing a poor job of articulating. I’ll just make note of it and move on.

VD’s excerpted passage from Wechsler is, however, nonsense. He created an empirical test, intended to assess “varying amounts of the same basic stuff (e.g., mental energy)” and then made it into a score. I have worked with neurologists before and they make the same category error that psychologists like Wechsler do, in ascribing quantitative rigor to tests like the Expanded Disability Status Scale (EDSS). Just because you can ask someone a bunch of qualitative questions and then give them a “score” based on a comparison of their answers to those of a “baseline” person, does not mean you have actually magically created a quantitative test. Wechsler’s very use of the word “quantitative” is an abuse of language, a classic soft-sciences infatuation with concepts best left to hardsci folks. There’s nothing quantitative about the WAIS whatsoever, until you look at aggregate results over populations. Wechsler lacked even a basic understanding of what human cognition’s base units might be – certainly not hand-wavy bullshit like “mental energy”. Volumetric imaging with DT-MRI is probably the only actual quantitative method the human race has yet invented to probe that “basic stuff” of which Wechsler dreams; but there are some serious engineering constraints on how far we can go in that direction.**

Human cognition isn’t so easily captured by a single metric, even one built on such a muddy foundation as the WAIS. It’s chaotic, and emergent, and inconsistent. This infatuation with pseudo-quantitative testing isn’t limited to the WAIS; people overuse Myers-Briggs and over-interpret fMRI all the time. Do qualitative metrics like the WAIS or EDSS have value in certain contexts? Of course. As a signpost towards Homo Superior, however, it’s no better than Body Mass Index.

* Why bother with false modesty? I do have a PhD in an applied physics field, after all, and I scored higher than VD on that one vocab test, so empirically it seems reasonable to suppose I am somewhat ahead of the curve.

** spouting off about fMRI in this context is a useful marker of a neurosci dilettante.

The Hugo Awards and political correctness


The Hugo Awards are science fiction’s most celebrated honor (along with the Nebula Awards). This year there’s a political twist: the accusation that the Hugos are “politically correct” and favor liberal writers over those with conservative political leanings.

The fact that Orson Scott Card won the Hugo in both 1986 and 1987 for Ender’s Game and Speaker for the Dead, or that Dan Simmons won a Hugo in 1990 for Hyperion, is sufficient evidence to prove that no such bias against conservative writers exists [1].

The current controversy is a tempest in a teapot, originating because two conservative writers (Larry Correia and Theodore Beale aka “Vox Day”) have decided to make an example out of the entrenched political correctness that both are convinced exists (see: confirmation bias). Here is Correia’s post about his actions and here is Beale’s. One of the common mantras of these people is that their hero, Robert Heinlein, would not be able to win a Hugo in today’s politically correct world.

Past SFWA president, Hugo winner, and all-around good guy on the Internet, John Scalzi definitively refutes the idea that Heinlein would not have won a Hugo and does so with genuine insight and understanding of who Heinlein was, what he wrote, and how Heinlein himself promoted SF as a literary genre. Key point:

When people say “Heinlein couldn’t win a Hugo today,” what they’re really saying is “The fetish object that I have constructed using the bits of Heinlein that I agree with could not win a Hugo today.” Robert Heinlein — or a limited version of him that only wrote Starship Troopers, The Moon is a Harsh Mistress and maybe Farnham’s Freehold or Sixth Column — is to a certain brand of conservative science fiction writer what Ronald Reagan is to a certain brand of conservative in general: A plaster idol whose utility at this point is as a vessel for a certain worldview, regardless of whether or not Heinlein (or Reagan, for that matter) would subscribe to that worldview himself.

They don’t want Heinlein to be able to win a Hugo today. Because if Heinlein could win a Hugo today, it means that their cri de coeur about how the Hugos are really all about fandom politics/who you know/unfairly biased against them because of political correctness would be wrong, and they might have to entertain the notion that Heinlein, the man, is not the platonic ideal of them, no matter how much they have held up a plaster version of the man to be just that very thing.

Read the whole thing.

In fact, the whole idea that the Hugos are biased against conservatives is a form of political correctness in and of itself. Steven just linked this article about how political correctness is a “positional good” and summarizes:

briefly, a positional good is one that a person owns for snob appeal, to set oneself apart from the rabble. Ownership of the positional good is a way of declaring, “I’m better than you lot!” And it continues to be valued by the snob only as long as it is rare and distinctive.

The idea, then, is that being one of the perpetually aggrieved is a way of being morally superior. I’m open-minded and inclusive, which makes me better than all those damned bigots out there.

Of course, Steven is invoking this idea as a critique about liberals crying racism; he overlooks the same dynamic at work by conservatives crying about exclusion, possibly because he is sympathetic to the “Hugos are biased” claim.

Regarding that claim, Scalzi had meta-commentary on the controversy overall (“No, the Hugo nominations were not rigged“) that is worth reading for perspective. It’s worth noting that Scalzi’s work was heavily promoted by Glenn Reynolds, of Instapundit fame, back in the day, a debt Scalzi is not shy about acknowledging publicly. This should, but won’t, dissuade those inclined (as Correia and Beale are) to lump Scalzi in with their imaginary “leftist” oppressors.

I’ve decided to put my money where my mouth is and support the Hugos by becoming a contributing supporter [2] for the next year. This will allow me to vote on nominees, and I will receive a packet of nominees prior to the actual voting, which, if you think about it, is an incredible value. If you’re interested in supporting the Hugos against these claims of bias, consider joining me as a contributor yourself. Now that I’m a member, I plan to blog about the nominations process as well, so it should be fun.

RELATED: Scalzi’s earlier post about The Orthodox Church of Heinlein. Much like the Bible, and history, the source material often gets ignored.

[1] To be fair, Card and Simmons aren’t really conservative – they are certifiable lunatics. See here and here.

[2] Here’s more information about becoming a member for the purposes of voting for the Hugos. This year’s convention will be in London, “Loncon3” so membership is handled through their website.

Reason is a limited process and can never explain everything objectively

Reason is a limited process because it arises from consciousness, which observes the universe through filters. The mind has physiological filters (e.g., the wavelengths of light you can perceive, the frequencies of sound you can hear), chemical filters (the specific biochemistry of your brain, your mood and emotions, etc.), and mental filters (pre-existing ideas and biases, the fidelity of your mental models and assumptions, simple lack of knowledge). These are all best understood as filters between you and the “objective” truth of reality. The universe is vastly more complex than the version of it you observe and understand. The process of reason operates only on the information that survives that chain of filters, so you are always working from an insufficient dataset.

The brain has actually evolved to extrapolate and infer information from filtered input. It often fills in the gaps and makes us see what we expect to see rather than what is actually there. Simple examples are optical illusions and the way the brain can still make sense of the following sentence:

Arinocdcg to rencet rseaerch, the hmuan brian is plrectfey albe to raed colmpex pasasges of txet caiinontng wdors in whcih the lrettes hvae been jmblued, pvioedrd the frsit and lsat leetrts rmeian in teihr crcerot piiotsons.
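Generating text like that takes only a few lines of code. Here is a quick sketch in Python – the `jumble` helper is purely illustrative (not from any of the research alluded to), implementing the rule of shuffling everything except each word’s first and last letters:

```python
import random

def jumble(text):
    """Shuffle the interior letters of each word, keeping the first
    and last letters (and trailing punctuation) in place."""
    out = []
    for word in text.split():
        core = word.rstrip(".,!?;:")   # peel off trailing punctuation
        tail = word[len(core):]
        if len(core) > 3:              # short words have nothing to shuffle
            mid = list(core[1:-1])
            random.shuffle(mid)
            core = core[0] + "".join(mid) + core[-1]
        out.append(core + tail)
    return " ".join(out)

print(jumble("According to recent research, the human brain is perfectly "
             "able to read complex passages of jumbled text."))
```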

As a result, there are not only filters on what we perceive but also active transformations – imagination and extrapolation – that modify what we perceive. These filters and transformations all happen “upstream” of the rational process, so reason can never operate on an untainted, objective reality. Despite them, the mind does a pretty good job, especially in the context of human interactions on planet Earth (which is what our minds, with their filters and transformations, are optimized for, after all). However, the farther up the metaphysical ladder we go, the more we deviate from the optimal scenario for which we evolved (or were created, or were created to have evolved, or whatever – I’ve said nothing to this point that most atheists and theists need disagree on).

A good analogy is that Newton’s mechanics were a fantastic model for classical mechanics, but do not suffice for clock timing of GPS satellites in earth orbit. This is because Newton did not have the tools available to be aware of general relativity. Yes, we did eventually figure it out, but Newton could not have ever done so (for one thing, his civilization lacked the mathematical and scientific expertise to formulate the experiments that created the questions that Einstein eventually answered).
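To put numbers on the GPS example: the clock drift that a purely Newtonian model misses is easy to estimate from textbook constants. This is a back-of-the-envelope sketch, not a precision calculation:

```python
import math

GM = 3.986e14        # m^3/s^2, Earth's gravitational parameter
c = 2.998e8          # m/s, speed of light
R_earth = 6.371e6    # m, Earth's mean radius
r_gps = 2.656e7      # m, GPS orbital radius (~20,200 km altitude)
day = 86400.0        # seconds per day

v = math.sqrt(GM / r_gps)  # circular orbital speed, ~3.9 km/s

# Special relativity: the moving satellite clock runs slow.
sr_loss = v**2 / (2 * c**2) * day
# General relativity: the clock higher in Earth's gravity well runs fast.
gr_gain = GM / c**2 * (1 / R_earth - 1 / r_gps) * day

print(f"SR: -{sr_loss * 1e6:.1f} us/day")               # ~7 us/day slow
print(f"GR: +{gr_gain * 1e6:.1f} us/day")               # ~46 us/day fast
print(f"net: +{(gr_gain - sr_loss) * 1e6:.1f} us/day")  # ~38 us/day
```

Uncorrected, that ~38 microseconds/day of drift accumulates into position errors of roughly 10 km per day, which is why GPS clocks are deliberately tuned before launch.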

Gödel’s theorem makes this more rigorous by demonstrating that there will always be statements that can neither be proved nor disproved by reason. In other words, as Douglas Hofstadter put it, Gödel proved that provability is a weaker notion than truth, no matter what axiom system is involved. That applies to math and it applies to philosophy; it applies to physics and it applies to the theism/atheism debate.

Related – a more detailed explanation of Gödel’s theorem and its implications for reason.

pathological chemistry

By way of this entertaining tall tale about how really nasty chemical compounds make for the best rocket fuels (with some conspiracy theorizing about “red mercury” and Chernobyl thrown in for fun), I ended up reading about FOOF, and was treated to one of the more entertaining lines of text I’ve read in some time:

If the paper weren’t laid out in complete grammatical sentences and published in JACS, you’d swear it was the work of a violent lunatic.

Context is king, so start here and then go here. Any chemists in the house?

(here’s the paper online with link to full text PDF!)

Just Another Day #goodbyeEureka – thank you, @SyFy

Eureka, the scifi show on Syfy about a crazy town full of geniuses, has ended. They gave us 5 great seasons and I am grateful to Syfy for allowing them to produce the “series finale” episode as a send-off to all the characters, something that Stargate: Universe never did get.

The best thing about Eureka wasn’t the science fiction or the high concept. It was the characters – they had more heart and were more authentic than those of most scifi shows. Firefly was full of wisecrackin’ badasses, but the only genuinely sincere person was Kaylee; Eureka had an entire cast full of Kaylees. Stargate Universe was character-driven but was more about the high concept of true exploration of the Unknown, which it did brilliantly, but the appeal was different. You can’t compare Eureka to SGU in that way. If anything, the template for Eureka was The Cosby Show, which informed mainstream America that here was an upper-class African American family with the same dreams and problems as everyone else. Eureka took that template and applied it to Science and scientists, normalizing them the same way. The only way you do that is with a cast of genuinely interesting people, with an authenticity to the chemistry and camaraderie that clearly isn’t limited to the screen.

Regardless of why it was great, it’s over, and though of course I have my usual issues about the broken model of television and cable and the perverse incentives that seem to bury the shows I want to watch while rewarding the ones I don’t, I can accept it. Eureka and Farscape and SGU still exist, I did watch them, and I loved them. And I can recommend them to others here on my blog in the hope that others will be enriched by them as I was.

Debating Dyson spheres

A wonderfully geeky debate is unfolding about the practicality of Dyson Spheres – or rather, a subset type called a Dyson Swarm. George Dvorsky begins by breaking the problem down into 5 steps:

  1. Get energy
  2. Mine Mercury
  3. Get materials into orbit
  4. Make solar collectors
  5. Extract energy

The idea is to build the entire swarm in iterative steps and not all at once. We would only need to build a small section of the Dyson sphere to provide the energy requirements for the rest of the project. Thus, construction efficiency will increase over time as the project progresses. “We could do it now,” says Armstrong. It’s just a question of materials and automation.

Alex Knapp takes issue with the idea that step 1 could provide enough energy to execute step 2, with an assist from an astronomer:

“Dismantling Mercury, just to start, will take 2 x 10^30 Joules, or an amount of energy 100 billion times the US annual energy consumption,” he said. “[Dvorsky] kinda glosses over that point. And how long until his solar collectors gather that much energy back, and we’re in the black?”

I did the math to figure that out. Dvorsky’s assumption is that the first stage of the Dyson Sphere will consist of one square kilometer, with the solar collectors operating at about 1/3 efficiency – meaning that 1/3 of the energy it collects from the Sun can be turned into useful work.

At one AU – which is the distance of the orbit of the Earth, the Sun emits 1.4 x 10^3 J/sec per square meter. That’s 1.4 x 10^9 J/sec per square kilometer. At one-third efficiency, that’s 4.67 x 10^8 J/sec for the entire Dyson sphere. That sounds like a lot, right? But here’s the thing – if you work it out, it will take 4.28 x 10^21 seconds for the solar collectors to obtain the energy needed to dismantle Mercury.

That’s about 120 trillion years.

I’m not sure that this is correct. The way I understood Dvorsky’s argument, the five steps are iterative, not linear. In other words, the first solar panel wouldn’t need to collect *all* the energy to dismantle Mercury; rather, as more panels are built, their increased surface area would help fund future mining and construction.

However, Knapp’s numbers don’t quite add up. Here’s my code in SpeQ:

sun = 1.4e9 W/km2
sun = 1.4 GW/km²

AU = 149597870.700 km
AU = 149.5978707 Gm

' surface of dyson sphere
areaDyson = 4*Pi*(AU^2)
areaDyson = 281229.379159805 Gm²

areaDyson2 = 6.9e13 km2
areaDyson2 = 69 Gm²

' solar power efficiency
eff = 0.3
eff = 0.3

' energy absorbed W
energy = sun*areaDyson2*eff
energy = 28.98 ZW

'total energy to dismantle mercury (J)
totE = 2e30 J
totE = 2e6 YJ

' time to dismantle mercury (sec)
tt = totE / energy
tt = 69.013112491 Ms

AddUnit(Years, 3600*24*365 seconds)
Unit Years created

' years
Convert(tt, Years)
Ans = 2.188391441 Years

So I am getting 2.9 x 10^22 W, not the 4.67 x 10^8 W that Knapp does. Instead of 120 trillion years, it takes only about 2.2 years to collect the energy we need to dismantle Mercury.
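For anyone without SpeQ, here is the same arithmetic in plain Python, using the 6.9 x 10^13 km² collector area assumed above and Knapp’s 2 x 10^30 J figure for dismantling Mercury:

```python
# Reproducing the SpeQ calculation in plain Python.
sun_flux = 1.4e9      # W per km^2 of collector at 1 AU
area = 6.9e13         # km^2, assumed collector area
eff = 0.3             # collector efficiency (~1/3)
tot_energy = 2e30     # J, energy to dismantle Mercury (Knapp's figure)

power = sun_flux * area * eff        # W actually harvested
seconds = tot_energy / power         # time to bank the dismantling energy
years = seconds / (3600 * 24 * 365)

print(f"power = {power:.3g} W")      # ~2.9e22 W
print(f"time  = {years:.2f} years")  # ~2.19 years
```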

Of course, with the iterative approach you don’t have access to all of that energy at once. But it certainly seems feasible in principle – the engineering issues, however, are the real showstopper. I don’t see any of this happening until we are actually able to travel around the solar system using something other than chemical reactions for thrust. Let’s focus on building a real VASIMR drive first, rather than counting our Dyson spheres before they hatch.

Incidentally, Dvorsky points to this lecture titled “von Neumann probes, Dyson spheres, exploratory engineering and the Fermi paradox” by Oxford physicist Stuart Armstrong for the initial idea. It’s worth watching:

UPDATE: Stuart Armstrong himself replies to Knapp’s comment thread:

My suggestion was never a practical idea for solving current energy problems – it was connected with the Fermi Paradox, showing how little effort would be required on a cosmic scale to start colonizing the entire universe.
Even though it’s not short term practical, the plan isn’t fanciful. Solar power is about 3.8×10^26 Watts. The gravitational binding energy of Mercury is about 1.80 ×10^30 Joules, so if we go at about 1/3 efficiency, it would take about 5 hours to take Mercury apart from scratch. And there is enough material in Mercury to dyson nearly the whole sun (using a Dyson swarm, rather than a rigid sphere), in Mercury orbit (moving it up to Earth orbit would be pointless).

So the questions are:

1) Can we get the whole process started in the first place? (not yet)

2) Can we automate the whole process? (not yet)

3) And can we automate the whole process well enough to get a proper feedback loop (where the solar captors we build send their energy to Mercury to continue the mining that builds more solar captors, etc…)? (maybe not possible)

If we get that feedback loop, then exponential growth will allow us to disassemble Mercury in pretty trivial amounts of time. If not, it will take considerably longer.
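Armstrong’s feedback loop is just exponential growth, and a toy simulation shows why even a tiny seed collector gets there quickly. The build cost per km² below is an illustrative guess (the 2 x 10^30 J total spread over the 6.9 x 10^13 km² collector area used earlier), not a figure from Armstrong:

```python
# Toy model of the bootstrap: every joule harvested goes into mining
# Mercury, and the mined material immediately becomes new collectors.
sun_flux = 1.4e9                    # W per km^2 at 1 AU
eff = 0.3                           # collector efficiency
energy_total = 2e30                 # J to fully dismantle Mercury
build_cost = energy_total / 6.9e13  # J of mining per km^2 of collector gained

area = 1.0        # seed: a single square kilometer of collector
spent = 0.0       # J of mining energy delivered so far
days = 0
while spent < energy_total:
    harvested = sun_flux * area * eff * 86400  # one day's harvest, J
    spent += harvested
    area += harvested / build_cost             # reinvest into new collectors
    days += 1

print(f"bootstrap complete in ~{days / 365:.0f} years from a 1 km^2 seed")
```

With these assumptions the loop finishes in about 70 years – and because the growth is exponential, the answer is only weakly sensitive to the size of the seed, which is why Armstrong frames the feedback loop, not the energy total, as the real question.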

Transparent aluminum? That’s the ticket, laddie

So, it’s actually a thing – called ALON. It’s not so much a metal as an aluminum-based ceramic called aluminum oxynitride, but the point is, it’s aluminum, and it’s transparent:

there be no whales here

and this stuff is strong – 1.6″ is enough to stop a .50 AP bullet that easily passes through twice that thickness of laminated glass armor:

aye, ol’ Scott woulda been proud. And just for old times’ sake:

Let’s take this opportunity to correct a misconception: they did NOT use transparent aluminum for the whale tank. They traded the “matrix” for it to the engineer at the plate-glass manufacturer in exchange for enough conventional plate glass to build the tank. Which was a lot.