We Just Got Our ’30s Sci-Fi Plots Back

By now, you’ve heard that seven – count ’em, seven – terrestrial planets have been discovered orbiting the ultra-cool M8 star Trappist-1.  According to the paper that the research team released yesterday, all of them could potentially have liquid water on their surfaces, although only three are judged to be good candidates: the authors’ model considers it likely that the three innermost planets have succumbed to a runaway greenhouse effect and that the outermost is too cold.  But that still leaves three potentially habitable planets in a single system.

Those three – Trappist-1e, 1f and 1g – range from 0.62 to 1.34 estimated Earth masses, and as one would expect from a red-dwarf system, they’re tidally locked and orbit close to their star with periods of 6 to 12 days.  Their orbits are also very close to each other.  The distance between the orbits of 1e and 1f is 0.009 AU – about 830,000 miles – and 1f passes within 750,000 miles of 1g.  This is a system that, even according to its discoverers, shouldn’t exist – their model gives it only an 8.1 percent chance of surviving for a billion years – but as they point out, it obviously does.

There are many more fascinating details about the Trappist-1 system and still more that we have yet to learn.  The discoverers hope that further research, and the launch of the James Webb Space Telescope next year, will enable them to confirm the details of the planets’ atmospheres and possibly look for biological signatures.  But in the meantime, for those of us who write SF, the discovery of the Trappist-1 system means this: we just got our pulp-era plots back.

We’ve all read stories from the heady days of the 1930s in which the intrepid heroes travel to Mars or Venus in a few days, take off their space suits, breathe the air, encounter exotic life forms and interact with non-human societies.  As we learned more about our solar system, that all got taken away.  The jungles of Venus and the canals of Barsoom have long since been relegated to the realm of nostalgia, and if we want aliens in our stories, we have to cross impossible interstellar distances to find them.

But now, there’s a system where all that can happen!  Three habitable worlds with orbits less than a million miles apart, Hohmann transfers that can be done in a few weeks with inspired 1950s tech – we’ve got the ingredients for interplanetary travel that’s almost as easy as pulp writers imagined it.  And a citizen of Trappist-1f might actually find that Old Venus jungle world one planet in and an arid Old Mars one planet out, and generations of its people could watch their neighbors’ fields and cities grow and dream of one day visiting them.  All we need to do to make pulp stories into hard SF again is move them 40 light years.

All right, we’d need to do a little more than that.  The planets are tidally locked – and with zero eccentricity, they don’t have libration-generated twilight zones – so we’d need to model the day-side and night-side weather.  We’d need to account for the tidal and geological effects of so many worlds so close together, and the atmosphere had better have plenty of ozone to protect against UV and X-ray emissions.  But none of those constraints are deal-breakers, and within them, Weinbaum-punk is suddenly acceptable.
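Incidentally, the “Hohmann transfers in a few weeks” claim is easy to sanity-check with Kepler’s third law. Here’s a back-of-the-envelope sketch in Python; the star mass (~0.08 solar masses) and semi-major axes for 1e and 1f are rounded values from the discovery paper, so treat the result as approximate:

```python
import math

G_M_SUN = 1.327e20      # gravitational parameter of the Sun, m^3/s^2
AU = 1.496e11           # meters per astronomical unit
DAY = 86400.0           # seconds per day

# Approximate TRAPPIST-1 parameters (rounded from the discovery paper)
M_STAR = 0.0802         # stellar mass, in solar masses
A_1E = 0.02817 * AU     # semi-major axis of Trappist-1e
A_1F = 0.0371 * AU      # semi-major axis of Trappist-1f

def hohmann_time(a1, a2, mu):
    """One-way Hohmann transfer time: half the period of the
    transfer ellipse, whose semi-major axis is the average of the two."""
    a_transfer = 0.5 * (a1 + a2)
    return math.pi * math.sqrt(a_transfer**3 / mu)

mu = M_STAR * G_M_SUN
t = hohmann_time(A_1E, A_1F, mu) / DAY
print(f"Trappist-1e -> 1f Hohmann transfer: {t:.1f} days")
```

Under these assumptions the one-way trip comes out to just under four days, which is even more forgiving than the pulp timetable.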

That may not last, of course.  By this time next year, the research team might have found that the Trappist-1 planets have reducing atmospheres or that there’s insufficient protection from stellar radiation or that some other factor makes pulp SF as impossible in that system as in our own.  But right now, it’s wide open to stories of the imagination.  We’ve found one spot in the universe where it’s the Golden Age all over again.

binary thinking

Cognition is more complicated than IQ.

I try to stay out of political theory on this blog, but Vox Day’s essay on the differences between the “VHIQ” and the “UHIQ” struck me as intellectually interesting enough that I felt like exploring it further. Personally, I don’t know what my IQ is, so that means I am merely above average*, since only people with very/ultra-high IQ seem to be motivated to willingly take the test. VD lists a number of plausible qualitative traits, of which the following caught my eye:

VHIQ inclines towards binary either/or thinking and taking sides. UHIQ inclines towards probabilistic thinking and balancing between contradictory possibilities.

VHIQ is uncomfortable with chaos and seeks to impose order on it, even if none exists. UHIQ is comfortable with chaos and seeks to recognize patterns in it.

VHIQ is competitive. UHIQ doesn’t keep score.

VD later goes on to quote Wechsler, the creator of the WAIS, at length and summarizes:

Wechsler is saying quite plainly that those with IQs above 150 are different in kind from those below that level. He is saying that they are a different kind of mind, a different kind of human being.

The division into binary groups here – “normal human” (sub-150 IQ) and the Next (150+), and then at the next iteration between VHIQ and UHIQ – is confusing to me, particularly since it is IQ itself being used to classify people into the binary choices. In the comments, VD clarifies (?) that “It’s entirely possible for a 175 IQ to be VHIQ and for a 145 IQ to be UHIQ,” but that just moves the binary classification to a relative scale rather than an absolute one. Since he also asserts that you need to be at least +3 SD (i.e., an IQ of 145) to even qualify as VHIQ, it’s clear that the numbers do matter.

There’s a glaring circularity here that I am doing a poor job of articulating. I’ll just make note of it and move on.

VD’s excerpted passage from Wechsler is, however, nonsense. Wechsler created an empirical test, intended to assess “varying amounts of the same basic stuff (e.g., mental energy),” and then turned it into a score. I have worked with neurologists before, and they make the same category error that psychologists like Wechsler do: ascribing quantitative rigor to tests like the Expanded Disability Status Scale (EDSS). Just because you can ask someone a bunch of qualitative questions and then give them a “score” based on comparing their answers to those of a “baseline” person does not mean you have magically created a quantitative test. Wechsler’s very use of the word “quantitative” is an abuse of language, a classic soft-sciences infatuation with concepts best left to the hard sciences. There’s nothing quantitative about the WAIS whatsoever, until you look at aggregate results over populations. Wechsler lacked even a basic understanding of what human cognition’s base units might be – certainly not hand-wavy bullshit like “mental energy.” Volumetric imaging with DT-MRI is probably the only actual quantitative method the human race has yet invented to probe that “basic stuff” of which Wechsler dreams; but there are serious engineering constraints on how far we can go in that direction.**

Human cognition isn’t so easily captured by a single metric, even one built on such a muddy foundation as the WAIS. It’s chaotic, and emergent, and inconsistent. This infatuation with pseudo-quantitative testing isn’t limited to the WAIS; people overuse Myers-Briggs and over-interpret fMRI all the time. Do qualitative metrics like the WAIS or EDSS have value in certain contexts? Of course. But as a signpost towards Homo Superior, the WAIS is no better than Body Mass Index.

* Why bother with false modesty? I do have a PhD in an applied physics field, after all, and I scored higher than VD on that one vocab test, so empirically it seems reasonable to suppose I am somewhat ahead of the curve.

** spouting off about fMRI in this context is a useful marker of a neurosci dilettante.

the Ummm… Drive

[Figure 19 from the paper: averaged forward and reverse thrust data versus input power, with linear fit]

So, there is now a peer-reviewed paper on the fabled EmDrive, which empirically measured a statistically significant thrust. The important results are in Figure 19 up above, and here is what the paper has to say about it:

Figure 19 presents a collection of all the empirically collected data. The averaging of the forward and reverse thrust data is presented in the form of circles. A linear curve is fitted to the data and is shown with the corresponding fitted equation. The vacuum test data collected show a consistent performance of 1.2 ± 0.1 mN/kW.

It’s not clear if the fit was to the averaged data or the raw data. I suspect the averaged, because looking at the raw data, at no time did thrust exceed 130 µN, even when power was increased from 60 to 80 W. In fact, the 80 W data points average out to the same thrust as the 60 W ones, and the error bars are a textbook example of the difference between accuracy and precision.
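To see why the averaging matters, here’s a toy fit in Python. The numbers are invented purely to mimic the pattern described above (thrust flat from 60 to 80 W); they are not the paper’s actual measurements. The point is that a linear fit can come out with a positive slope even when the top end of the data has plateaued:

```python
import numpy as np

# Invented, illustrative numbers only -- NOT the paper's data.
# Pattern mimics the description above: thrust plateaus from 60 W to 80 W.
power_w   = np.array([40.0, 40.0, 60.0, 60.0, 80.0, 80.0])   # input power, W
thrust_un = np.array([45.0, 55.0, 75.0, 85.0, 78.0, 82.0])   # thrust, uN

# Degree-1 least-squares fit, like the linear curve in Figure 19
slope, intercept = np.polyfit(power_w, thrust_un, 1)
print(f"fitted slope: {slope:.2f} uN/W")

# The fit is 'significant', yet the top of the data is flat:
print("mean thrust at 60 W:", thrust_un[power_w == 60].mean())
print("mean thrust at 80 W:", thrust_un[power_w == 80].mean())
```

The fit happily reports a positive slope even though the last power step bought no extra thrust, which is exactly the kind of thing averaging can paper over.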

These results are peer-reviewed, and there is a “statistically significant” linear fit to the data that does demonstrate a correlation between the input power and the observed thrust, but this data does not show that the EmDrive actually works. As Chris Lee at Ars Technica put it, the drive still generates more noise than thrust:

The more important point is that the individual uncertainties in their instrumentation don’t account for the variation in the thrust that they measure, which is a very strong hint that there is an uncontrolled experimental parameter playing havoc with their measurements.

Lee also points out that there are a lot of experimental questions left unanswered, including:

  • Why are there only 18 data points for an experiment that only takes a few minutes to perform?
  • Where is the data related to tuning the microwave frequency for the resonance chamber, and showing the difference between on-resonance mode and an adjacent mode?
  • What is the rise-time of the amplifier?
  • What is the resonance frequency of the pendulum?

on that last point, Lee elaborates:

The use of a pendulum also suggests the sort of experiment that would, again, amplify the signal. Since the pendulum has a resonance frequency, the authors could have used that as a filter. As you modulate the microwave amplifier’s power, the thrust (and any thermal effects) would also be modulated. But thermal effects are subject to a time constant that smears out the oscillation. So as the modulation frequency sweeps through the resonance frequency of the torsion pendulum, the amplitude of motion should greatly increase. However, the thermal response will be averaged over the whole cycle and disappear (well, mostly).

I know that every engineer and physicist in the world knows this technique, so the fact that it wasn’t used here tells us how fragile these results really are.

This is really at the limit of my experimental understanding, but it’s a question that the authors of the paper (not to mention anyone over at /r/emdrive) should be able to field without trouble.
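For the curious, the resonance-filtering technique Lee describes is essentially lock-in detection. A toy simulation (every number here is invented for illustration) shows the idea: modulate the drive at the pendulum’s resonance frequency, then demodulate against the reference, and a small periodic signal survives while a slow thermal drift averages away:

```python
import numpy as np

# Toy lock-in detection sketch; all parameters are invented.
fs, T = 1000.0, 50.0                  # sample rate (Hz), duration (s)
t = np.arange(0, T, 1 / fs)
f_mod = 2.0                           # modulation frequency = torsion
                                      # pendulum resonance (Hz)

thrust_signal = 1.0 * np.sin(2 * np.pi * f_mod * t)   # tracks the modulation
thermal_drift = 5.0 * (1 - np.exp(-t / 20.0))         # slow, smeared-out
noise = np.random.default_rng(0).normal(0.0, 2.0, t.size)
measured = thrust_signal + thermal_drift + noise

# Demodulate: multiply by the reference and average.
# Since mean(sin^2) = 1/2, the factor of 2 recovers the amplitude.
ref = np.sin(2 * np.pi * f_mod * t)
locked = 2.0 * np.mean(measured * ref)
print(f"recovered thrust amplitude: {locked:.2f} (true value 1.0)")
```

The drift here is five times larger than the signal, yet the demodulated output lands close to the true amplitude, which is why the technique would have been such a natural amplifier for this experiment.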

Basically, this paper doesn’t answer any of the substantive questions, but it does at least validate the notion that there is something going on worth investigating. Still, let’s be real about the likely outcome – we’ve seen this before:

For faster-than-light neutrinos, it was a loose cable. For the BICEP2 results, it was an incorrect calibration of galactic gas. For cold fusion, it was a poor experimental setup, and for perpetual motion, it was a scam. No matter what the outcome, there’s something to be learned from further investigation.

and that’s why we do science. It’s not as if scientists are fat cats out to protect their cash cow. (Seriously. I wish it were so). Maybe we are on the verge of another breakthrough, but it will take a lot more than this paper to convince anyone. And that’s as it should be.

Stargazing at Sandstone Peak

I went stargazing last night at Sandstone Peak in Malibu with my friend Huzaifa – here are some of the post-processed long-exposure shots he took:

Huzaifa has two scopes, and a local named Bob showed up with his own rig. All together, we viewed Saturn’s rings, Jupiter’s moons and bands, and Mars, not to mention a few Messier globular clusters, an open cluster in Hercules, and Berenice’s Hair.

Here’s the location – the ocean was due south and offered the darkest skies, though we left around midnight, well before the bulk of the Milky Way rose. The western sky was slightly contaminated by glow from Oxnard. Due east was pretty poor due to light from Thousand Oaks and the Valley beyond. The bulk of Los Angeles proper was southeast and too far away to really interfere, however. For a site only 30 minutes from home, this was an absolutely superb location, especially for the southeastern sky.

definitive proof that time travel is impossible

If time travel is possible, then the present is the past for an infinite number of futures. (Assuming the time stream is changeable by travelers, and not fixed).

Among that infinite number of futures, there is an infinite subset in which a time traveler exists who finds today, the day you are reading this blog post, a fascinating and pivotal moment in history.

Therefore, even if only a small fraction of those infinite future travelers obsessed with our today actually bother/have the means to travel to today, there are still an infinite number of them.

Therefore, today there should have been an infinite number of time travelers appearing from an infinite number of different futures. Or, as Douglas Adams would have said, “whop”.

Of course the same argument holds for every moment of every day in all of recorded history, so basically we should be inundated with infinite numbers of time travelers arriving at every moment of time for all time.

Since that is clearly not happening, time travel must be impossible.

I’d love to see a What-If XKCD on the idea of an infinite number of time travelers arriving today, actually… it would probably mean a mass extinction, the Earth would suffer gravitational collapse, and we’d end up inside a black hole. I think.

Self-Assembled Plasmonic Nanoparticle Clusters

(In addition to MRI and medical physics, it’s worth keeping an open mind and keeping tabs on various other branches of physics and science. To that end, I’ll highlight interesting papers or research that strikes my fancy from time to time.)

Eric Berger, aka SciGuy, a science columnist at the Houston Chronicle, points to a new paper in Science that introduces “metamaterials” which can manipulate light and which are, in principle, easy to fabricate. Eric makes the analogy to this being as much a game-changer as lasers were when they were invented almost exactly 50 years ago.

Here’s the abstract of the paper:

Self-Assembled Plasmonic Nanoparticle Clusters

The self-assembly of colloids is an alternative to top-down processing that enables the fabrication of nanostructures. We show that self-assembled clusters of metal-dielectric spheres are the basis for nanophotonic structures. By tailoring the number and position of spheres in close-packed clusters, plasmon modes exhibiting strong magnetic and Fano-like resonances emerge. The use of identical spheres simplifies cluster assembly and facilitates the fabrication of highly symmetric structures. Dielectric spacers are used to tailor the interparticle spacing in these clusters to be approximately 2 nanometers. These types of chemically synthesized nanoparticle clusters can be generalized to other two- and three-dimensional structures and can serve as building blocks for new metamaterials.

and here’s a link to the full text of the article. As with lasers when they were first introduced, it’s a challenge to the imagination to envision how this might be used or applied. What possible medical imaging applications could this be exploited for? That’s the billion-dollar question 🙂