binary thinking

Cognition is more complicated than IQ.

I try to stay out of political theory on this blog, but Vox Day’s essay on the differences between the “VHIQ” and the “UHIQ” struck me as intellectually interesting enough that I felt like exploring it further. Personally, I don’t know what my IQ is, which means I am merely above average*, since only people with very/ultra-high IQ seem motivated to take the test willingly. VD lists a number of plausible qualitative traits, of which the following caught my eye:

VHIQ inclines towards binary either/or thinking and taking sides. UHIQ inclines towards probabilistic thinking and balancing between contradictory possibilities.

VHIQ is uncomfortable with chaos and seeks to impose order on it, even if none exists. UHIQ is comfortable with chaos and seeks to recognize patterns in it.

VHIQ is competitive. UHIQ doesn’t keep score.

VD goes on to quote Wechsler, the creator of the WAIS IQ test, at length and summarizes:

Wechsler is saying quite plainly that those with IQs above 150 are different in kind from those below that level. He is saying that they are a different kind of mind, a different kind of human being.

The division into binary groups here – “normal human” (sub-150 IQ) and the Next (150+), and then at the next iteration VHIQ versus UHIQ – is confusing to me, particularly since it is IQ itself being used to sort people into the binary choices. In the comments, VD clarifies (?) that “It’s entirely possible for a 175 IQ to be VHIQ and for a 145 IQ to be UHIQ,” but that just moves the binary classification to a relative scale rather than an absolute one. Since he also asserts that you need to be at least +3 SD (i.e., an IQ of 145) to even qualify as VHIQ, it’s clear that the numbers do matter.

There’s a glaring circularity here that I am doing a poor job of articulating. I’ll just make note of it and move on.

VD’s excerpted passage from Wechsler is, however, nonsense. He created an empirical test intended to assess “varying amounts of the same basic stuff (e.g., mental energy)” and then turned it into a score. I have worked with neurologists before, and they make the same category error that psychologists like Wechsler do, ascribing quantitative rigor to tests like the Expanded Disability Status Scale (EDSS). Just because you can ask someone a bunch of qualitative questions and then give them a “score” based on comparing their answers to those of a “baseline” person does not mean you have magically created a quantitative test. Wechsler’s very use of the word “quantitative” is an abuse of language, a classic soft-sciences infatuation with concepts best left to the hard sciences. There’s nothing quantitative about the WAIS whatsoever until you look at aggregate results over populations. Wechsler lacked even a basic understanding of what human cognition’s base units might be – certainly not hand-wavy bullshit like “mental energy”. Volumetric imaging with DT-MRI is probably the only actual quantitative method the human race has yet invented for probing that “basic stuff” of which Wechsler dreams, but there are some serious engineering constraints on how far we can go in that direction.**
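To make the aggregate-over-populations point concrete, here is a minimal sketch of how a deviation IQ is actually produced: the raw test score has no units of anything and only becomes an “IQ” once it is ranked against a norming sample and rescaled to a mean of 100 and an SD of 15. The norming numbers below are invented for illustration; they are not actual WAIS norms.

```python
# A minimal sketch of deviation-IQ scoring. The raw score is just a tally of answers;
# it becomes an "IQ" only by comparison with a norming population. The norming
# sample below is invented for illustration, not actual WAIS data.
import statistics

def deviation_iq(raw_score, norm_sample):
    """Convert a raw score to a deviation IQ (mean 100, SD 15) relative to a norming sample."""
    mu = statistics.mean(norm_sample)
    sigma = statistics.stdev(norm_sample)
    z = (raw_score - mu) / sigma   # position within the norming population
    return 100 + 15 * z            # rescale so the population mean maps to 100

# Hypothetical norming sample of raw scores from an age-matched reference group
norm_sample = [42, 47, 51, 53, 55, 58, 60, 62, 66, 71]
print(round(deviation_iq(68, norm_sample)))   # a raw score of 68 comes out around 120
```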

Human cognition isn’t so easily captured by a single metric, even one built on as muddy a foundation as the WAIS. It’s chaotic, emergent, and inconsistent. This infatuation with pseudo-quantitative testing isn’t limited to the WAIS; people overuse the Myers-Briggs and over-interpret fMRI all the time. Do qualitative metrics like the WAIS or EDSS have value in certain contexts? Of course. However, as a signpost towards Homo Superior, the WAIS is no better than Body Mass Index.

* Why bother with false modesty? I do have a PhD in an applied physics field, after all, and I scored higher than VD on that one vocab test, so empirically it seems reasonable to suppose I am somewhat ahead of the curve.

** spouting off about fMRI in this context is a useful marker of a neurosci dilettante.

the Ummm… Drive

[Figure 19 from the paper: averaged forward and reverse thrust data with a linear fit]

So, there is now a peer-reviewed paper on the fabled EmDrive, which empirically measured a statistically significant thrust. The important results are in Figure 19 up above, and here is what the paper has to say about it:

Figure 19 presents a collection of all the empirically collected data. The averaging of the forward and reverse thrust data is presented in the form of circles. A linear curve is fitted to the data and is shown with the corresponding fitted equation. The vacuum test data collected show a consistent performance of 1.2 ± 0.1 mN/kW.

It’s not clear if the fit was to the averaged data or the raw data. I suspect the averaged, because looking at the raw data, at no time did the thrust exceed 130 µN, even when the power was increased from 60 to 80 W. In fact, the 80 W data points average out to roughly the same thrust as the 60 W points, and the error bars are a textbook example of the difference between accuracy and precision.
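For what it’s worth, here is a toy sketch of the averaged-versus-raw question, using invented numbers (emphatically not the values from Figure 19) just to show that the two choices give the same slope when the groups are balanced, but different uncertainty estimates on that slope.

```python
# Toy illustration of fitting averaged points vs. raw runs. All thrust/power numbers
# below are invented for illustration; they are NOT the data from Figure 19.
import numpy as np
from scipy import stats

# Hypothetical raw thrust runs (uN) at three power levels (W)
power_raw  = np.array([40, 40, 40, 60, 60, 60, 80, 80, 80])
thrust_raw = np.array([35, 55, 70, 60, 90, 120, 70, 95, 128])

# One averaged point per power level, which is roughly what the figure appears to plot
power_avg  = np.array([40.0, 60.0, 80.0])
thrust_avg = thrust_raw.reshape(3, 3).mean(axis=1)

for label, (p, t) in (("raw", (power_raw, thrust_raw)),
                      ("averaged", (power_avg, thrust_avg))):
    fit = stats.linregress(p, t)
    print(f"{label:>8}: slope = {fit.slope:5.2f} +/- {fit.stderr:.2f} uN/W")
```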

These results are peer-reviewed, and there is a “statistically significant” linear fit to the data that does demonstrate a correlation between the input power and the observed thrust, but this data does not show that the EmDrive actually works. As Chris Lee at Ars Technica put it, the drive still generates more noise than thrust:

The more important point is that the individual uncertainties in their instrumentation don’t account for the variation in the thrust that they measure, which is a very strong hint that there is an uncontrolled experimental parameter playing havoc with their measurements.

Lee also points out that there are a lot of experimental questions left unanswered, including:

  • Why are there only 18 data points for an experiment that only takes a few minutes to perform?
  • Where is the data related to tuning the microwave frequency for the resonance chamber, and showing the difference between on-resonance mode and an adjacent mode?
  • What is the rise-time of the amplifier?
  • What is the resonance frequency of the pendulum?

on that last point, Lee elaborates:

The use of a pendulum also suggests the sort of experiment that would, again, amplify the signal. Since the pendulum has a resonance frequency, the authors could have used that as a filter. As you modulate the microwave amplifier’s power, the thrust (and any thermal effects) would also be modulated. But thermal effects are subject to a time constant that smears out the oscillation. So as the modulation frequency sweeps through the resonance frequency of the torsion pendulum, the amplitude of motion should greatly increase. However, the thermal response will be averaged over the whole cycle and disappear (well, mostly).

I know that every engineer and physicist in the world knows this technique, so the fact that it wasn’t used here tells us how fragile these results really are.

This is really at the limit of my empirical understanding, but it’s a question that the authors of the paper (not to mention anyone over at /r/emdrive) should be able to field with no worries.
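To make Lee’s point a bit more concrete, here is a minimal sketch of the resonance trick under assumed numbers (the pendulum frequency, damping ratio, and thermal time constant below are made up, not the actual rig’s parameters): a prompt thrust modulated at the pendulum’s resonance gets the full resonant amplification, while a thermal force is first smeared by its slow time constant and barely couples in at that frequency.

```python
# Sketch of the lock-in-style trick Lee describes: modulate the drive power and watch
# how the pendulum responds as the modulation frequency is swept. A prompt (thrust-like)
# force rides the modulation directly; a thermal force is low-pass filtered by its slow
# time constant before it ever reaches the pendulum. All parameters are assumed values
# for illustration, not the actual test rig's.
import numpy as np

f0   = 0.1    # pendulum resonance frequency [Hz] (assumed)
zeta = 0.05   # damping ratio (assumed)
tau  = 30.0   # thermal time constant [s] (assumed)

def pendulum_gain(f):
    """Amplitude gain of a damped oscillator driven at frequency f, normalized to DC."""
    r = f / f0
    return 1.0 / np.sqrt((1 - r**2)**2 + (2 * zeta * r)**2)

def thermal_attenuation(f):
    """First-order low-pass: how much a slow thermal effect is smeared at frequency f."""
    return 1.0 / np.sqrt(1 + (2 * np.pi * f * tau)**2)

print(f"{'f_mod [Hz]':>10} {'thrust response':>16} {'thermal response':>17} {'ratio':>7}")
for f in [0.01, 0.05, f0, 0.2, 0.5]:
    thrust  = pendulum_gain(f)                           # prompt force follows the modulation
    thermal = pendulum_gain(f) * thermal_attenuation(f)  # thermal force is low-passed first
    print(f"{f:10.2f} {thrust:16.2f} {thermal:17.3f} {thrust / thermal:7.1f}")
```

At the assumed resonance the prompt response is boosted by the quality factor while the smeared thermal response is knocked down by an order of magnitude, which is exactly why sweeping through resonance would separate the two.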

Basically, this paper doesn’t answer any of the substantive questions, but it does at least validate the notion that there is something going on worth investigating. Still, let’s be real about the likely outcome, because we’ve seen this before:

For faster-than-light neutrinos, it was a loose cable. For the BICEP2 results, it was an incorrect calibration of galactic dust. For cold fusion, it was a poor experimental setup, and for perpetual motion, it was a scam. No matter what the outcome, there’s something to be learned from further investigation.

and that’s why we do science. It’s not as if scientists are fat cats out to protect their cash cow. (Seriously. I wish it were so). Maybe we are on the verge of another breakthrough, but it will take a lot more than this paper to convince anyone. And that’s as it should be.

Stargazing at Sandstone Peak

I went stargazing last night at Sandstone Peak in Malibu with my friend Huzaifa – here are some of the post-processed long-exposure shots he took:

Huzaifa has two scopes, and a local named Bob showed up with his own rig. All together, we viewed Saturn’s rings, Jupiter’s moons and bands, and Mars, not to mention a few Messier globular clusters, an open cluster in Hercules, and Coma Berenices (Berenice’s Hair).

Here’s the location – the ocean was due south and offered the darkest skies, though we left around midnight, well before the bulk of the Milky Way rose. The western sky was slightly contaminated by glow from Oxnard. Due east was pretty poor thanks to light from Thousand Oaks and the Valley beyond. The bulk of Los Angeles proper was to the southeast, but too far away to really interfere. For a site only 30 minutes from home, this was an absolutely superb location, especially toward the south and southeast.

definitive proof that time travel is impossible

If time travel is possible, then the present is the past for an infinite number of futures. (Assuming the time stream is changeable by travelers, and not fixed).

Of those infinite futures, an infinite subset contains a time traveler who finds today, the day you are reading this blog post, a fascinating and pivotal moment in history.

Therefore, even if only a small fraction of those infinite future travelers obsessed with our today actually bother/have the means to travel to today, there are still an infinite number of them.

Therefore, today there should have been an infinite number of time travelers appearing from an infinite number of different futures. Or, as Douglas Adams would have said, “whop”.

Of course the same argument holds for every moment of every day in all of recorded history, so basically we should be inundated with infinite numbers of time travelers arriving at every moment of time for all time.

Since that is clearly not happening, time travel must be impossible.

I’d love to see a What-If XKCD on the idea of an infinite number of time travelers arriving today, actually… it would probably mean a mass extinction, the Earth would suffer gravitational collapse, and we’d end up inside a black hole. I think.
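Just for fun, here is a back-of-the-envelope check on the gravitational-collapse part of that joke, assuming a hypothetical 70 kg per traveler: the pile collapses once its total mass has a Schwarzschild radius equal to Earth’s radius, which works out to roughly 6 × 10^31 travelers, a bar an infinite crowd clears trivially.

```python
# Back-of-the-envelope: how many 70 kg time travelers would it take, piled within
# Earth's radius, for the total mass to sit inside its own Schwarzschild radius?
# Constants are standard; the 70 kg per traveler is an assumption.
G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
c = 2.998e8          # speed of light [m/s]
R_earth = 6.371e6    # Earth's radius [m]
m_traveler = 70.0    # assumed mass per traveler [kg]

M_collapse = c**2 * R_earth / (2 * G)   # mass whose Schwarzschild radius equals R_earth
n_travelers = M_collapse / m_traveler

print(f"Collapse mass: {M_collapse:.2e} kg (~{M_collapse / 5.97e24:.0e} Earth masses)")
print(f"Travelers needed: {n_travelers:.2e}")
```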

Self-Assembled Plasmonic Nanoparticle Clusters

(In addition to MRI and medical physics, it’s worth keeping an open mind and keeping tabs on various other branches of physics and science. To that end, I’ll highlight interesting papers or research that strikes my fancy from time to time.)

Eric Berger aka SciGuy, a science columnist at the Houston Chronicle, points to a new paper in Science introducing “metamaterials” that can manipulate light and are, in principle, easy to fabricate. Eric makes the analogy that this could be as much of a game-changer as lasers were when they were invented almost exactly 50 years ago.

Here’s the abstract of the paper:

Self-Assembled Plasmonic Nanoparticle Clusters

The self-assembly of colloids is an alternative to top-down processing that enables the fabrication of nanostructures. We show that self-assembled clusters of metal-dielectric spheres are the basis for nanophotonic structures. By tailoring the number and position of spheres in close-packed clusters, plasmon modes exhibiting strong magnetic and Fano-like resonances emerge. The use of identical spheres simplifies cluster assembly and facilitates the fabrication of highly symmetric structures. Dielectric spacers are used to tailor the interparticle spacing in these clusters to be approximately 2 nanometers. These types of chemically synthesized nanoparticle clusters can be generalized to other two- and three-dimensional structures and can serve as building blocks for new metamaterials.

and here’s a link to the full text of the article. As with lasers when they were first introduced, it’s a challenge to the imagination to envision how this might be used or applied. What possible medical imaging applications could this be exploited for? That’s the billion dollar question 🙂
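As an aside on the “Fano-like resonances” mentioned in the abstract: that is the characteristically asymmetric lineshape produced when a narrow resonance interferes with a broad continuum. Here is the textbook Fano profile as a quick sketch; the parameter values are arbitrary, and this is not a model of the paper’s nanoparticle clusters.

```python
# Textbook Fano lineshape: F(eps) = (q + eps)^2 / (1 + eps^2), with reduced detuning
# eps = 2*(omega - omega0)/gamma and asymmetry parameter q. The values of omega0,
# gamma, and q below are arbitrary illustrative choices.
import numpy as np

def fano(omega, omega0=1.0, gamma=0.1, q=2.0):
    """Fano profile as a function of frequency omega."""
    eps = 2 * (omega - omega0) / gamma
    return (q + eps)**2 / (1 + eps**2)

for w in np.linspace(0.7, 1.3, 7):
    print(f"omega = {w:4.2f}   response = {fano(w):7.2f}")
```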

NIH funding running dry

This isn’t exactly a surprise, but worth mentioning anyway:

Before the ink was dry on the government’s 2007 budget (or even completed for that matter), the Bush administration’s proposal for the 2008 budget was submitted on February 5th, and the news for biomedical researchers was not very good. According to sources the NIH is slated to receive a $500 million budget cut, before inflation is factored in—assuming a bill inflating their budget for 2007 passes through congress.

Making this even more dire for biomed researchers is the fact that over 10,000 NIH extramural grants are up for renewal in 2008. Those contending for extensions or renewals of such grants are now faced with double difficulty: less money to go around and more people vying for the same number of spaces. Constraints such as these have driven the average age of first-time grant recipients to over 40 years old, barely a young researcher anymore.

The simple truth is that, apart from NASA, the NIH is probably the single greatest investment of public funds in knowledge generation for the benefit of society that the world has ever seen. Less funding means less research, fewer Ph.D.s choosing an academic career, less innovation, and less risk-taking. That means more orthodoxy, more entrenched and defensive peer review, and ultimately more echo-chambering.

Even with new funding programs aimed at transitioning postdocs to faculty, it’s hard to justify doing a post-doc to people in the field nowadays – if they have the flexibility, they can make more than double the salary working for industry. What does the future of our field, medical physics and MRI in particular, look like?