Goodbye, Cassini

I met Cassini in 1996 at JPL before it departed for Saturn. For 20 years I have cheered its mission. That mission is over, and Cassini’s watch has ended.

I posted this six years ago here at haibane, but it’s worth reposting in salute: an incredible compilation of a flyby of the Saturnian system:

5.6k Saturn Cassini Photographic Animation from stephen v2 on Vimeo.

the hype about Hyperloop

Elon Musk comes to town

This article makes an important point about the Hyperloop:

America has the means to reduce traffic and connect people to where they want to go in less time — but solving these problems entails politically difficult choices to shift travel away from cars and highways. Any high-tech solution that promises a shortcut around these thorny problems is probably too good to be true.

I can’t help but see an echo of the wishful thinking surrounding the EMDrive in the Hyperloop marketing campaign. Maybe I’ll be proven wrong.

Here’s the original white paper PDF from Elon Musk, and here’s a rather detailed critique by mathematician and transit analyst Alon Levy. Anyone who takes Hyperloop seriously should read both.

We Just Got Our ’30s Sci-Fi Plots Back

By now, you’ve heard that seven – count ’em, seven – terrestrial planets have been discovered orbiting the ultra-cool M8 star Trappist-1.  According to the paper that the research team released yesterday, all of them could potentially have liquid water on their surfaces, although only three are judged to be good candidates: the authors’ model considers it likely that the three innermost planets have succumbed to a runaway greenhouse effect and that the outermost is too cold.  But that still leaves three potentially habitable planets in a single system.

Those three – Trappist-1e, 1f and 1g – range from 0.62 to 1.34 estimated Earth masses, and as one would expect from a red-dwarf system, they’re tidally locked and orbit close to their star with periods of 6 to 12 days.  Their orbits are also very close to each other.  The distance between the orbits of 1e and 1f is 0.009 AU – about 830,000 miles – and 1f passes within 750,000 miles of 1g.  This is a system that, even according to its discoverers, shouldn’t exist – their model gives it only an 8.1 percent chance of surviving for a billion years – but as they point out, it obviously does.
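Those gap figures are easy to sanity-check. A minimal sketch (the only input is the standard miles-per-AU conversion):

```python
# Quick check of the orbit-gap conversion quoted above.
AU_MILES = 92_955_807.3            # miles in one astronomical unit

gap_e_f = 0.009 * AU_MILES         # quoted gap between the orbits of 1e and 1f
print(f"0.009 AU = {gap_e_f:,.0f} miles")
```

That works out to roughly 837,000 miles, in agreement with the rough figure above.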

There are many more fascinating details about the Trappist-1 system and still more that we have yet to learn.  The discoverers hope that further research, and the launch of the James Webb Space Telescope next year, will enable them to confirm the details of the planets’ atmospheres and possibly look for biological signatures.  But in the meantime, for those of us who write SF, the discovery of the Trappist-1 system means this: we just got our pulp-era plots back.

We’ve all read stories from the heady days of the 1930s in which the intrepid heroes travel to Mars or Venus in a few days, take off their space suits, breathe the air, encounter exotic life forms and interact with non-human societies.  As we learned more about our solar system, that all got taken away.  The jungles of Venus and the canals of Barsoom have long since been relegated to the realm of nostalgia, and if we want aliens in our stories, we have to cross impossible interstellar distances to find them.

But now, there’s a system where all that can happen!  Three habitable worlds with orbits less than a million miles apart, Hohmann transfers that can be done in a few weeks with inspired 1950s tech – we’ve got the ingredients for interplanetary travel that’s almost as easy as pulp writers imagined it.  And a citizen of Trappist-1f might actually find that Old Venus jungle world one planet in and an arid Old Mars one planet out, and generations of its people could watch their neighbors’ fields and cities grow and dream of one day visiting them.  All we need to do to make pulp stories into hard SF again is move them 40 light years.
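How quick would interplanetary travel in this system be? A back-of-the-envelope sketch, assuming the discovery paper’s approximate values (a stellar mass of ~0.08 solar masses, and semi-major axes of ~0.028 AU for 1e and ~0.037 AU for 1f); the minimum-energy transfer time is half the period of an ellipse spanning the two orbits:

```python
import math

# Back-of-the-envelope Hohmann transfer between TRAPPIST-1e and 1f.
# Assumed inputs (approximate 2017 discovery-paper values): stellar mass
# ~0.08 solar masses, semi-major axes ~0.028 AU (1e) and ~0.037 AU (1f).
GM_SUN = 1.32712440018e20          # solar gravitational parameter, m^3/s^2
AU = 1.495978707e11                # metres per astronomical unit

mu = 0.08 * GM_SUN                 # TRAPPIST-1 gravitational parameter
a_e = 0.028 * AU
a_f = 0.037 * AU
a_transfer = (a_e + a_f) / 2.0     # transfer-ellipse semi-major axis

# Transfer time is half the period of the transfer ellipse (Kepler's third law).
t_transfer = math.pi * math.sqrt(a_transfer ** 3 / mu)
print(f"Hohmann transfer 1e -> 1f: {t_transfer / 86400:.1f} days")
```

The minimum-energy trip comes out to roughly four days; a real mission, with launch windows, phasing and finite thrust, would take longer, which is where the “few weeks” figure lives.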

All right, we’d need to do a little more than that.  The planets are tidally locked – and with zero eccentricity, they don’t have libration-generated twilight zones – so we’d need to model the day-side and night-side weather.  We’d need to account for the tidal and geological effects of so many worlds so close together, and the atmosphere had better have plenty of ozone to protect against UV and X-ray emissions.  But none of those constraints are deal-breakers, and within them, Weinbaum-punk is suddenly acceptable.

That may not last, of course.  By this time next year, the research team might have found that the Trappist-1 planets have reducing atmospheres or that there’s insufficient protection from stellar radiation or that some other factor makes pulp SF as impossible in that system as in our own.  But right now, it’s wide open to stories of the imagination.  We’ve found one spot in the universe where it’s the Golden Age all over again.

binary thinking

Cognition is more complicated than IQ.

I try to stay out of political theory on this blog, but Vox Day’s essay on the differences between the “VHIQ” and the “UHIQ” struck me as intellectually interesting enough that I felt like exploring it further. Personally, I don’t know what my IQ is, so that means I am merely above average*, since only people with very/ultra-high IQ seem to be motivated to willingly take the test. VD lists a number of plausible qualitative traits, of which the following caught my eye:

VHIQ inclines towards binary either/or thinking and taking sides. UHIQ inclines towards probabilistic thinking and balancing between contradictory possibilities.

VHIQ is uncomfortable with chaos and seeks to impose order on it, even if none exists. UHIQ is comfortable with chaos and seeks to recognize patterns in it.

VHIQ is competitive. UHIQ doesn’t keep score.

VD later goes on to quote Wechsler, the creator of the WAIS IQ test, at length and summarizes:

Wechsler is saying quite plainly that those with IQs above 150 are different in kind from those below that level. He is saying that they are a different kind of mind, a different kind of human being.

The division into binary groups here – “normal human” (sub-150 IQ) and the Next (150+), and then, at the next iteration, VHIQ versus UHIQ – is confusing to me, particularly since it is IQ itself being used to classify people into the binary choices. In the comments, VD clarifies (?) that “It’s entirely possible for a 175 IQ to be VHIQ and for a 145 IQ to be UHIQ” but that just moves the binary classification to a relative scale rather than an absolute one. Since he also asserts that you need to be at least +3 SD (i.e., an IQ of 145) to even qualify as VHIQ, it’s clear that the numbers do matter.

There’s a glaring circularity here that I am doing a poor job of articulating. I’ll just make note of it and move on.

VD’s excerpted passage from Wechsler is, however, nonsense. He created an empirical test, intended to assess “varying amounts of the same basic stuff (e.g., mental energy)” and then made it into a score. I have worked with neurologists before and they make the same category error that psychologists like Wechsler do, in ascribing quantitative rigor to tests like the Expanded Disability Status Scale (EDSS). Just because you can ask someone a bunch of qualitative questions and then give them a “score” based on a comparison of their answers to those of a “baseline” person, does not mean you have actually magically created a quantitative test. Wechsler’s very use of the word “quantitative” is an abuse of language, a classic soft-sciences infatuation with concepts best left to hardsci folks. There’s nothing quantitative about the WAIS whatsoever, until you look at aggregate results over populations. Wechsler lacked even a basic understanding of what human cognition’s base units might be – certainly not hand-wavy bullshit like “mental energy”. Volumetric imaging with DT-MRI is probably the only actual quantitative method the human race has yet invented to probe that “basic stuff” of which Wechsler dreams; but there are some serious engineering constraints on how far we can go in that direction.**

Human cognition isn’t so easily captured by a single metric, even one built on such a muddy foundation as the WAIS. It’s chaotic, and emergent, and inconsistent. This infatuation with pseudo-quantitative testing isn’t limited to the WAIS; people overuse Myers-Briggs and over-interpret fMRI all the time. Do qualitative metrics like the WAIS or EDSS have value in certain contexts? Of course. However, as a signpost towards Homo Superior, the WAIS is no better than Body Mass Index.

* Why bother with false modesty? I do have a PhD in an applied physics field, after all, and I scored higher than VD on that one vocab test, so empirically it seems reasonable to suppose I am somewhat ahead of the curve.

** spouting off about fMRI in this context is a useful marker of a neurosci dilettante.

the Ummm… Drive


So, there is now a peer-reviewed paper on the fabled EmDrive, which empirically measured a statistically significant thrust. The important results are in Figure 19 up above, and here is what the paper has to say about it:

Figure 19 presents a collection of all the empirically collected data. The averaging of the forward and reverse thrust data is presented in the form of circles. A linear curve is fitted to the data and is shown with the corresponding fitted equation. The vacuum test data collected show a consistent performance of 1.2 ± 0.1 mN/kW.

It’s not clear if the fit was to the averaged data or the raw data. I suspect the averaged, because looking at the raw data, at no time did the thrust exceed 130 µN, even when power was increased from 60 to 80 W. In fact, the 80 W data points average out to the same thrust as the 60 W ones, and the error bars are a textbook example of the difference between accuracy and precision.
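One way to see why the distinction matters: with the same number of runs at each power level, a least-squares line through the per-level averages has exactly the same slope as one through the raw runs; averaging changes how clean the fit looks, not the fitted line. A toy illustration with made-up numbers (not the paper’s data):

```python
# Toy illustration (synthetic numbers, not the paper's data): fitting the
# per-power-level averages gives the same slope as fitting the raw runs,
# but the averaged fit hides the run-to-run scatter.

def ols_slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Three runs at each power level, with run-to-run scatter comparable to
# the spread between levels.
raw = {40: [49, 62, 71], 60: [68, 83, 94], 80: [80, 96, 118]}

xs_raw = [p for p, runs in raw.items() for _ in runs]
ys_raw = [t for runs in raw.values() for t in runs]
xs_avg = list(raw)
ys_avg = [sum(runs) / len(runs) for runs in raw.values()]

print(ols_slope(xs_raw, ys_raw))   # same slope as...
print(ols_slope(xs_avg, ys_avg))   # ...the fit through the averages
```

The slopes are identical; what the averaging buys you is three tidy points instead of nine scattered ones, which is exactly why it matters which version the fit statistics were computed from.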

These results are peer-reviewed, and there is a “statistically significant” linear fit to the data that does demonstrate a correlation between the input power and the observed thrust, but this data does not show that the EmDrive actually works. As Chris Lee at Ars Technica put it, the drive still generates more noise than thrust:

The more important point is that the individual uncertainties in their instrumentation don’t account for the variation in the thrust that they measure, which is a very strong hint that there is an uncontrolled experimental parameter playing havoc with their measurements.

Lee also points out that there are a lot of experimental questions left unanswered, including:

  • Why are there only 18 data points for an experiment that only takes a few minutes to perform?
  • Where is the data related to tuning the microwave frequency for the resonance chamber, and showing the difference between on-resonance mode and an adjacent mode?
  • What is the rise-time of the amplifier?
  • What is the resonance frequency of the pendulum?

on that last point, Lee elaborates:

The use of a pendulum also suggests the sort of experiment that would, again, amplify the signal. Since the pendulum has a resonance frequency, the authors could have used that as a filter. As you modulate the microwave amplifier’s power, the thrust (and any thermal effects) would also be modulated. But thermal effects are subject to a time constant that smears out the oscillation. So as the modulation frequency sweeps through the resonance frequency of the torsion pendulum, the amplitude of motion should greatly increase. However, the thermal response will be averaged over the whole cycle and disappear (well, mostly).

I know that every engineer and physicist in the world knows this technique, so the fact that it wasn’t used here tells us how fragile these results really are.

This is really at the limit of my empirical understanding, but it’s a question that the authors of the paper (not to mention anyone over at /r/emdrive) should be able to answer without breaking a sweat.
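Lee’s resonance trick is standard lock-in practice, and a minimal simulation shows why it works. All parameters below are invented for illustration: a damped pendulum is driven by a square-wave-modulated force, plus a low-pass-filtered copy of the same force standing in for slow thermal effects. Modulating at the pendulum’s resonance amplifies the real signal by roughly the quality factor, while the smeared thermal channel barely responds:

```python
import math

def simulate(f_mod, f0=0.5, q=50.0, tau_thermal=20.0, t_end=400.0, dt=0.001):
    """Torsion pendulum driven by a square-wave-modulated force plus a
    low-pass-filtered copy of that force (standing in for thermal drift).
    Returns the peak steady-state excursion about the mean deflection."""
    w0 = 2.0 * math.pi * f0
    gamma = w0 / q                     # damping rate
    x = v = 0.0
    f_thermal = 0.0                    # thermally smeared force channel
    t = 0.0
    xs = []
    for _ in range(int(t_end / dt)):
        drive = 1.0 if math.sin(2.0 * math.pi * f_mod * t) > 0 else 0.0
        f_thermal += (drive - f_thermal) * dt / tau_thermal
        a = -w0 * w0 * x - gamma * v + drive + f_thermal
        v += a * dt                    # semi-implicit Euler step
        x += v * dt
        t += dt
        if t > t_end / 2:              # record only the steady-state half
            xs.append(x)
    mean = sum(xs) / len(xs)
    return max(abs(s - mean) for s in xs)

on_res = simulate(f_mod=0.5)    # modulate right at the pendulum resonance
off_res = simulate(f_mod=0.05)  # modulate well below resonance
print(f"on/off resonance response ratio: {on_res / off_res:.0f}x")
```

The on-resonance response dwarfs the off-resonance one, while the thermal channel contributes only a static offset that subtracts out: a real thrust would pop out of the noise, and a thermal artifact would not.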

Basically, this paper doesn’t answer any of the substantive questions, but it does at least validate the notion that there is something going on worth investigating. Still, let’s be realistic about the likely outcome, because we’ve seen this before:

For faster-than-light neutrinos, it was a loose cable. For the BICEP2 results, it was an incorrect calibration of galactic gas. For cold fusion, it was a poor experimental setup, and for perpetual motion, it was a scam. No matter what the outcome, there’s something to be learned from further investigation.

and that’s why we do science. It’s not as if scientists are fat cats out to protect their cash cow. (Seriously. I wish it were so). Maybe we are on the verge of another breakthrough, but it will take a lot more than this paper to convince anyone. And that’s as it should be.