Author: Otaku-kun

  • high score: 1026 on Entanglement

    woah.

    not too shabby

    Clearly, my new strategy is on the right track. My goal is basically to create my endgame path, and wind my red path in semi-helical fashion around it for as far as I can go. I dropped all my earlier strategies (go concave, isolate the center, etc.) and as you can see, with some luck I found myself with a very low score right before the finale (click to enlarge):

    just before the final play… 80 measly points

    You can see how I shepherded the long path around the board and for the most part kept the red path close to it. Here’s the final board (click to enlarge):

    the final board – added 43 segments in one move

    I think if I am careful to save a really good tile with a U and a V and some crossovers in my Swap, and try for slightly more tortuous endpath winding, I can go even higher. This was literally my first try at this new strategy, and of course I really lucked out that I was able to connect the two long path segments at the end. But amazing as this was, it’s humbling to look at how my score fares on the leaderboards:
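    Incidentally, if I have the scoring rule right – each successive segment completed in a single move is worth one point more than the last, so n segments score 1 + 2 + … + n – then the 43-segment finale accounts for the jump exactly:

```python
# Sketch of Entanglement scoring as I understand it (unverified):
# the n segments added in one move score 1 + 2 + ... + n points.
def move_score(segments):
    return segments * (segments + 1) // 2

final_move = move_score(43)  # the 43-segment final play
print(final_move)        # 946 points from that one move
print(80 + final_move)   # 946 on top of the 80 I had going in = 1026
```

    946 points from a single move, on top of the 80 I had before it, gives exactly the 1026 high score – which makes me trust this reading of the rules.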

    I’m #2 for today!

    I’m #79 this week. Yay.

    I’m… not even in the top 100, all-time. Sigh.

  • and now, let’s talk about mechanical keyboards

    UPDATE – There are a number of other Filco Majestouch keyboards available on Amazon right now – supplies are probably limited, as these are no longer in production.

    Anyone following my hardware posts as I build my new workstation (named PREFECT, for reasons which are unlikely to become clear at the moment) will notice a pattern: every single component was chosen after agonizing and masochistic research into determining the optimal choice, balancing performance, cost, and my projected usage pattern – mainly WoW, MATLAB, and Office. (Though I will also be dabbling in programming and web app development.)

    At any rate, the few components I have not had to research were the monitors, mouse, and keyboard. I actually did look into new monitors, but was flummoxed by the fact that it’s nearly impossible to find 1920×1200 widescreens anymore – almost all new monitors are 1920×1080, to fit the HDTV aspect ratio of 16:9. I really want my extra 120 lines of vertical real estate, so I took my father’s old pair of Sceptre X24WG Naga 16:10 screens while he went out and bought himself a honking 32-inch HDTV monitor. My wireless keyboard and mouse were also hand-me-downs, a Logitech media set which worked well enough but wasn’t the greatest thing in the world to type on (especially in comparison to my Thinkpad T42).

    However, I’ve been having intermittent issues with that aging keyboard. First of all, it runs on 2 AAA batteries, which it positively devours. Lately I’ve also been having issues where keypresses don’t always register, which makes my already-high typo rate even worse. And the keyboard is flat as a board, which is another obstacle to me finally learning how to type. Like most keyboards, it’s a rubber-dome mechanism, which is essentially a throwaway technology. Given that 99% of my interaction with my computer is via the keyboard, I’ve decided to make the switch to a mechanical keyboard instead. Overclock.net has a great primer on mechanical keyboard technology which really makes the case. It turns out that one of the best switches made is by a German company called Cherry, whose MX-series key-switches are used in virtually all the mechanical keyboards on the market. But these switches come in different variants, which vary in their tactile response and audible sound. The main issue then boils down to deciding what type of profile I want, and then finding a vendor who makes that type.

    There are other switch technologies like Alps and Topre, but for simplicity I am sticking with Cherry-based keyboards (which are also generally a little cheaper, though not always). Let’s go through the various Cherry switches (I am assuming the reader is familiar with mechanical keyboard technology or has read the primer I linked earlier).

    Cherry MX Blue – a tactile, “clicky” switch. The audible sound is a very loud click, which gives you auditory feedback but may not be the best thing for a quiet environment. The tactile and audible feedback let you move on to your next keypress quickly, which is optimal for typing. These have moderate actuation force (50g), meaning you can rest your fingers on the keys but it’s still easy to initiate a keypress – this is the switch to get if you’re a high-speed typist. However, it is not optimal for gaming, since the release point is above the actuation point, which means that if you are double-tapping a key or pressing the same key a number of times in quick succession, it may not register.

    The Das Keyboard and the Razer BlackWidow series both use Cherry MX Blues. The BlackWidow comes in a regular version for $70 or an Ultimate version for $120 with backlighting and a USB hub. The Das Keyboard is $130, in both a standard lettered version and a blank, featureless version, both of which have USB hubs.

    UPDATE – You can also get Filco Majestouch 104-key blues at Amazon for $149 – these are no longer manufactured so supplies are limited.

    However, I’m pretty certain I’m not interested in a blue-based keyboard because I don’t want a loud “click”. Also, I’m not going to be using it exclusively for typing, so I do want a bit more linear response. So that rules out the Razer for me. Moving on…

    Cherry MX Black – basically the opposite of the Blue, with no tactile feedback and no audible clicky sound. The black switches have a linear response where the point of activation is the same as the point of release, which makes it optimal for gaming where you might be pushing the same key a number of times in succession. These are reported to have a very smooth feel, but are supposedly not as great for extended typing. They also have a very high actuation force – 60g, which means a keypress must be very deliberate (minimizing accidental keypresses).

    A lot of mechanical gaming keyboards out there use blacks, the most notable of these being the Steelseries 6GV2 for $100 and its big brother the Steelseries 7G, which adds audio ports and a ginormous palmrest. These keyboards have superior NKRO and make the deliberate decision to exclude a Windows key. Deck Legend keyboards featuring backlighting can also be found using blacks, in the Fire ($149), Toxic ($159) and Ice ($159) variants.

    UPDATE – Amazon has tenkeyless Filco Majestouch keyboards with black MX switches available now for $139.

    The silent non-clicky nature of the MX black appeals to me. However, since I am not exclusively a gamer, I’m not sure if a black-based keyboard would be ideal for me. Fortunately there are other options, such as…

    Cherry MX Brown, Cherry MX Clear – These are hybrids of blacks and blues, both with a tactile response but no clicky sound. The main difference between them is that browns have less actuation force than blues (45g), whereas clears have actuation force in between blues and blacks (55g). Thus browns are for warp-speed typists and clears are a good hybrid for gaming and typing. Neither has the linear response of blacks.

    You can order Deck Legend keyboards with clears, in Frost ($176) and Ice ($169) variants.

    Das Keyboard also comes in a “silent” variant using browns, again in lettered or non-lettered variants, both $135.

    UPDATE – Filco Majestouch 104-key Brown keyboards are also in stock for $149 at Amazon. As mentioned above, these are discontinued boards so supplies are limited.

    Cherry MX Red – these have the same linear response as the blacks, but with lower actuation force of 45g akin to browns. No tactile response and no audible click.

    I actually could not find any keyboards for sale at the usual retail outlets or online using these switches.

    UPDATE – The 87-key (tenkeyless) Filco Majestouch with red switches is available at Amazon for $165 with free shipping – extremely rare, and worth snapping up if you have even a passing interest. Possibly the ideal hybrid keyboard for typing and gaming.

    UPDATE 2 – the tenkeyless Filco Reds are out of stock now but they have full-size 104-key Filco Reds instead for $179. Still worth snapping up!

    All of the keyboards above, with the exception of the Decks, have sculpted keys, where different rows have different heights, to accommodate the different distances fingers must travel from the home row. And all of them support n-key rollover (also called anti-ghosting), where multiple simultaneous keypresses will register without the “beep”, to varying extents. You always get better NKRO using the PS/2 port than USB, so you should use a USB-to-PS/2 adapter (one is included with the Steelseries; I’m unsure about the others).

    SUMMARY – So, as usual, I need to make a decision. I don’t like the lack of sculpting and the higher cost of the Decks. And I don’t want a blue, as I’m not a warp-speed typist and am not interested in the clicky sound, which rules out the Razer. All things being equal I’d lean towards a clear-based board, but only Deck makes those. Brown might also work, if I can live with accidental keypresses, but the Das Keyboard is expensive. The Steelseries seems to be the best balance of features and cost, but it’s only available with black switches. However, as I am not a great typist, maybe that would be OK. I’m just not sure. Probably any of these boards (well, apart from the blue – just not a fan of the loud click) would be a dramatic improvement over my current membrane-based Logitech for all my writing and gaming. There are some comparative reviews of mechanical keyboards from BenchmarkReviews and from Tom’s Guide, but these aren’t much help.

    Sigh. Decisions, decisions. I think the Steelseries is probably my best bet. Any advice?

  • hard drive and storage woes

    Figuring out the optimal solution for backup and storage has been really difficult for PREFECT, not least because both the original WD Caviar Black and then the replacement Samsung Spinpoint F3 drives I purchased as the main drive seemingly failed. In the former case it was BSOD after BSOD, and in the latter it was repeated disk read errors. The WD was from NewEgg and the Samsung from Amazon, so yesterday in frustration I drove to Best Buy and bought an overpriced Seagate Barracuda. If this drive starts throwing disk read errors, then I know it’s a software issue, as I’ve cycled through all the major retailers and vendors at this point.

    I had earlier decided against RAID, but now I wonder if that might be a solution after all. I have this Barracuda in place, which gives me some breathing room (and a 30-day return window). Given that Spinpoints are on sale for $55 apiece right now at NewEgg, what if I bought two of them and set them up in RAID-1? That would be about the same price as this single Barracuda, and the Spinpoint is a faster drive (see HD Tune benchmarks for the Samsung, the Barracuda, and also the 2TB Caviar Green I am using as a data store, below).

    My backup strategy is to have a 2TB drive in the system (the Caviar Green) where I store Windows backup files, a copy of all my backups of the other PCs in the house, and assorted files like VDI and ISO and torrents. I also have a 1 TB external drive, where I also store a copy of the old backups. And then my primary drive has my OS, apps, and documents in current use. I also am evaluating Backblaze which seems to be a little more robust than Carbonite and less expensive than Mozy, for off-site cloud storage.

    If I replace the primary drive (currently the Barracuda) with two Spinpoints in RAID-1, then, if I understand it correctly, I might even see some slight read-speed advantages, while gaining redundancy against disk failure. My biggest fear is that a disk failure leads me to lose some short-term data which isn’t captured by my backups or by Backblaze.
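    For intuition on that read-speed point, here’s a toy model – not how a real RAID controller schedules I/O, but the shape of the argument: every write must hit both mirrors, while reads can be spread across them, so the read load per disk is roughly halved.

```python
# Toy model of RAID-1 behavior: writes go to both "disks", reads
# alternate between them, and either disk can serve any block.
class Raid1:
    def __init__(self):
        self.disks = [{}, {}]   # two mirrored "disks"
        self.ops = [0, 0]       # per-disk operation counters
        self.next_read = 0

    def write(self, block, data):
        for i, disk in enumerate(self.disks):  # writes hit both mirrors
            disk[block] = data
            self.ops[i] += 1

    def read(self, block):
        i = self.next_read                     # round-robin the reads
        self.next_read = 1 - i
        self.ops[i] += 1
        return self.disks[i][block]

array = Raid1()
for b in range(100):
    array.write(b, f"data{b}")
for b in range(100):
    array.read(b)
print(array.ops)  # [150, 150]: 100 writes each, but only 50 reads each
```

    The same structure shows the redundancy: any block can still be read from the surviving disk if the other one dies.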

    Am I being overly paranoid? I’d like to solicit some opinions from you all. I’m not interested in spending more money aside from potentially replacing the Barracuda with the pair of Spinpoints. I could see an argument for buying a single SSD for just the OS, however (though not right now – later). What do you think? Go for the Spinpoints? Do RAID or not?

    benchmarks from HDTune below the fold…

  • Rage against the machine: Watson isn’t elementary

    IBM’s supercomputer Watson is playing Jeopardy against human champions, and the first round was a tie. This is spawning two narratives in the media.

    The first narrative – by far the most widespread – might be summarized thus: OMFG the Machines are kicking our asses! I bow to our mechanical overlords. Where’s my Matrix pod? Fear Skynet!

    The second, however, seems to me the more interesting one, and to be honest I haven’t actually found any examples of it out there yet, but I am hopeful that someone (besides me, anyway) is writing about it. That narrative might be summarized thus: You mean, with all those gigabits/sec, petabytes, petaflops, and nanoseconds at its disposal, the best the machine can do is tie?

    Let’s keep in mind that the combined total of the entire world’s CPU power – the total, across all computers on the planet – is estimated to be the equivalent of one human brain. One.

    You might argue that this fact supports narrative #1, the OMFG one. After all, measured in CPU capacity, Watson is kind of toast. But human brains compute using chemical reactions, whereas computers compute using electronics. That means that computers can compute about a hundred to a thousand times faster than we can. (I am reminded of the aliens living on a neutron star in Robert Forward’s landmark scifi novel, Dragon’s Egg, arguably the progenitor of the hard-sci-fi genre). Also, information retrieval using search algorithms on indexed data is obviously far more accurate than the vagaries of memory. So mere compute capacity isn’t the issue. An enormous desert full of rocks has more compute power than my desktop, but you still can’t play Warcraft in any reasonable timescale.

    So for a game like Jeopardy – which is full of questions that can be answered quite simply using Google, unlike other trivia contests I could mention – a computer really should walk all over the poor bags of mostly water who literally have (chemical) soup for brains. I’m not privy to Watson’s architecture, but I suspect it goes something like this: Alex reads the question; it’s converted to a text string by some straightforward speech-to-text algorithm, processed by natural language algorithms to extract the keywords and rudimentary context, sent off to Google or IBM’s in-house equivalent to query Watson’s database, and then the results are ranked using some sort of fuzzy logic (again influenced by the context of the question). Watson takes the most probable answer, sends it through another natural language filter, and constructs a response in the form of a question, as per Jeopardy’s rules. With answer in hand, the “buzzer” is activated, and if the humans haven’t already buzzed in by now, that response is vocalized using a Stephen Hawking Box. Nothing about this requires any intelligence, just clever code – and if you are a machine, but your performance depends entirely on the code written by your human handlers, then that’s the digital equivalent of holding HAL by the short hairs.
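    To make the guesswork concrete, here’s a toy sketch of that hypothesized pipeline – the stopword list, the two-entry “database”, and the bag-of-words ranking are all stand-ins I made up, nothing like Watson’s actual machinery:

```python
# Toy sketch of the guessed clue-to-response pipeline (pure speculation;
# the index and the ranking here are made-up stand-ins, not IBM's design).

def extract_keywords(clue):
    """Crude natural-language step: drop stopwords, keep the rest."""
    stopwords = {"the", "a", "an", "of", "in", "on", "this", "is", "was", "for"}
    words = clue.lower().replace(",", "").split()
    return [w for w in words if w not in stopwords]

def rank_candidates(keywords, index):
    """Score each candidate answer by how many clue keywords its entry shares."""
    scores = []
    for answer, text in index.items():
        entry_words = set(text.lower().split())
        scores.append((sum(k in entry_words for k in keywords), answer))
    return sorted(scores, reverse=True)

def respond(clue, index):
    """Take the top-ranked candidate and phrase it as a question, per the rules."""
    _, best = rank_candidates(extract_keywords(clue), index)[0]
    return f"What is {best}?"

# A two-entry stand-in for the database:
index = {
    "Chicago": "city in Illinois on Lake Michigan famous for deep dish pizza",
    "Batavia": "village in Illinois and site of the Fermilab accelerator",
}
print(respond("This Illinois city on Lake Michigan is known for deep dish pizza", index))
# -> What is Chicago?
```

    The real system obviously also handles categories, wagers, and confidence thresholds – but the point stands that each stage is ordinary code, not intelligence.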

    So please, Watson merely tied the humans? Even if he beats them in the end, anything less than a total rout is indeed a soft bigotry of low expectations. To quote Kirk, “I’m laughing at the Superior Intellect.”

  • entangled by Entanglement

    an Entanglement tile
    Google Chrome has started featuring the Entanglement game by Gopherwood Studios on its website. You can link your Google account to the game so your scores can be posted online under your moniker (which you choose using three haiku-esque phrases – I am Tidy Folded Landscape).

    The game – coded entirely in HTML5, not Flash – is quite simple: place hexagonal tiles on a hexagonal board to construct a long pathway. The path can (and must) cross itself, but cannot intersect the darker tiles around the periphery or at the center. Each tile has twelve nodes and six paths connecting them, and you can rotate the tile as you decide where to place it. You are given randomized tiles one at a time, and you cannot “look ahead” to the next one, but you can “swap” a tile with an extra one kept aside.

    Strategy for this game is deceptively complicated. I’ve developed a strategy whereby I try to isolate the central tile, always preferentially double back rather than curve my path outwards, and try to maintain escape routes along the edges. But of course sometimes you have to choose between adhering to one rule and violating another, since the tiles are random and you can’t plan ahead very far.

    My maximum score thus far is 311, and that’s with a lot of luck. Today’s top 100 scores begin at 500+, and the all-time leaderboard starts at 1200 for the #100 slot, rising to 3500 for #1. The maximum theoretical score is 9000+, but that is probably impossible to achieve in practice. Still, I am absolutely astonished at how people can get 500 and above. There clearly must be some additional strategies at work here which I haven’t discovered. I’m not alone in my frustration – there’s a whole thread on Reddit about scoring in Entanglement, but no one shares their strategies, alas.

    Path length is another metric by which you can keep score. My best is 102, today’s best is 122, and the theoretical maximum should be 169. Oddly there are scores on the leaderboard with path length of 400, which just makes no sense to me at all.

  • diet, cholesterol, and heart disease skepticism?

    I’m involved in a debate over diet and health over at Dean’s, and in the course of that debate was encouraged to read a paper by Corr et al. that suggests low-fat diets are essentially useless for reducing heart disease. This post started out as a comment, but it grew enough to warrant a post in its own right. So, let’s look at what the Corr paper is actually saying, shall we?

    The international bodies which developed the current recommendations based them on the best available evidence[1-3]. Numerous epidemiological surveys confirmed beyond doubt the seminal observation of Keys in the Seven Countries Study of a positive correlation between intake of dietary fat and the prevalence of coronary heart disease[4] although recently a cohort study of more than 43,000 men followed for 6 years has shown that this is not independent of fiber intake[5] or risk factors. The prevalence of coronary heart disease has been shown to be correlated with the level of serum total and low density lipoprotein cholesterol (LDL) as well as inversely with high density lipoprotein.

    So, high intake of dietary fat indeed has a positive correlation with coronary heart disease. Corr is conceding this at the very start!

    Further, coronary heart disease is also indeed associated with high LDL and low HDL. So far I am not seeing any Cholesterol Conspiracy here… the ADA seems to be right on the ball.

    So, we’ve already established that CHD is associated with high fat, high LDL, and low HDL. So, what’s left to argue about?

    As a consequence of these studies, it was assumed that the reverse would hold true: reduction in dietary total and especially saturated fat would lead to a fall in serum cholesterol and a reduction in the incidence of coronary heart disease. The evidence from clinical trials does not support this hypothesis.

    Hmm. Two sentences here. One is a reasonable inference from the conceded association of fat and LDL with CHD. But OK, let’s call the question of whether the reverse is true Question A – “does reducing fat and LDL in the diet reduce CHD?”

    And then another sentence, about evidence from clinical trials not supporting that inference. What about those clinical trials, exactly?

    It can be argued that it is virtually impossible to design and conduct an adequate dietary trial. The alteration of any one component of a diet will lead to alterations in others and often to further changes in lifestyle so it is extremely difficult to determine which, if any, of these produce an effect. Dietary trials cannot generally be blinded and changes in the diet of the ‘control’ population are frequently seen: they may be so marked as to render the study irrevocably flawed. It is also recognized that adherence to dietary advice over many years by large population samples, as for most people in real life, is poor and that the stricter the diet, the worse the compliance.

    Ah. So the available evidence from clinical trials is fundamentally susceptible to systematic error. Fair enough. So, any conclusions we draw from them should be tempered by that, right?

    (long analysis of clinical trials in literature follows)

    The message from these trials is that dietary advice to reduce saturated fat and cholesterol intake, even combined with intervention to reduce other risk factors, appears to be relatively ineffective for the primary prevention of coronary heart disease and has not been shown to reduce mortality.

    OK, so the trials focusing on low-fat diets alone didn’t show any primary prevention benefit. Well, see caveat above, right? (and Corr’s noted exception about the MRFIT study…)

    However, what about secondary prevention?

    Well, good! But still, is there some reason that maybe we aren’t seeing better results here? Is diet necessary, or sufficient? Let’s look at studies that not only remove fat, but also add HDL:

    The first successful dietary study to show reduction in overall mortality in patients with coronary heart disease was the DART study reported in 1989[20]. The three-way design of this ‘open’ trial compared a low saturated fat diet plus increased polyunsaturated fats, similar to the trials above, with a diet including at least two portions of fatty fish or fish oil supplements per week, and a high cereal fibre diet. No benefit in death or reinfarctions was seen in the low fat or the high fibre groups. In the group given fish advice there was a significant reduction in coronary heart disease deaths and overall mortality was reduced by about 29% after 2 years, although there was a non-significant increase in myocardial infarction rates. The reduction in saturated fats in the fish advice group was less than in the low fat diet group and there was no significant change in their serum cholesterol.

    Finally, the more recent Lyon trial[21] used a Mediterranean-type of diet with a modest reduction in total and saturated fat, a decrease in polyunsaturated fat and an increase in omega-3 fatty acids from vegetables and fish. As in the DART study there was little change in cholesterol or body weight, but the trial was stopped early following a 70% reduction in myocardial infarction, coronary mortality and total mortality after 2 years.

    In other words, adding HDL to your diet helps a lot, whereas reducing saturated fat (or just increasing fiber) still doesn’t seem to do anything. We’ve established that a modest increase in HDL can help. But have we established that a modest reduction in LDL will not help?

    Unfortunately, the design and conduct of these trials are insufficient to permit conclusions about which polyunsaturates and other elements of these diets are the most beneficial. The long term effects of these trials[20,21] and the compliance with the dietary regimes remain to be seen.

    So, we don’t really know if these studies answer that question. It’s possible that lowering LDL has a longer-timescale benefit than increasing HDL. These studies don’t answer the question either way, because of the limitations Corr concedes – certainly we haven’t proven that lowered LDL is not genuinely helpful yet.

    Anyway, how much LDL was really reduced anyway?

    An important aspect of the lipid-lowering dietary trials is that on average they were only able to achieve about a 10% reduction in total cholesterol. The results of recent drug trials have demonstrated that there is a linear relation between the extent of the cholesterol, or LDL, reduction and the decrease in coronary heart disease mortality and morbidity, and a significant effect seen only when these lipids are lowered by more than 25%[23].

    Ahhhh. Corr goes on to quote a bunch of studies that show frankly awesome improvements in mortality using drugs to lower LDL by 25% or more.

    (in other words, definitively proving that lower LDL does indeed reduce heart disease. We just answered Question A from above).

    So, let’s summarize:

    conceded by Corr at the outset:
    – increased HDL reduces CHD.
    – increased fat increases CHD.
    – increased LDL increases CHD.

    dietary trials:
    – somewhat lowered LDL does not reduce CHD.

    drug trials:
    – significantly reduced LDL does reduce CHD.

    caveats:
    – dietary trials have systematic errors.
    – long-term trials on reducing LDL have not been performed.

    special note: The MRFIT trial follow-up focused on reducing LDL through diet alone, and did show reduced myocardial infarctions over a longer term.

    My conclusion from this would be to (a) increase HDL now for immediate benefit, and (b) reduce fat and LDL in my diet for long-term benefit. Seems obvious enough, and fully in accord with what the ADA recommends.

    Corr’s conclusion?

    diets focused exclusively on reduction of saturated fats and cholesterol are relatively ineffective for secondary prevention and should be abandoned.

    umm.. what?!?!

    This is where they cross over into vaccines-autism and fluoridated-water territory, frankly.

    What would have made the Corr paper immeasurably stronger would have been for them to devise an experiment that would answer these questions and fill the gaps. That’s always my challenge to these self-styled “skeptics” of the scientific consensus. What’s the experiment you propose? What would you do to make your case?

    That’s how science works. Theory drives experiment, experiment refines theory, and back again. If your claim is that the available evidence (in this case, clinical trials) doesn’t support the contention, that’s not enough. You need to come up with an experiment that actually refutes the contention. Formulate your hypothesis and test it! Anything else is just nitpicking from the sidelines, which is how most of these agenda-driven meta-analyses end up reading.

    Frankly, I am very much eager to be able to dispense with the low-fat, low-cholesterol crap. Here’s why in a nutshell.

    So please, Dr. Corr and any other “cholesterol skeptics” out there: show me the proposal for your experiment, and I guarantee you the fast food industry will show you the money.

  • comparing wireless router speeds

    Using the same dual-band wireless card on the same PC, I am getting surprising differences in wireless speed between two different wireless networks. See below.

    Router #1 is an old single-band (2.4 GHz) Linksys WRT54GL router configured as an access point (DHCP disabled) and plugged into Router #2, a new dual-band (2.4 GHz and 5 GHz) Netgear WNDR3700. The PC (PREFECT) has a static IP from the Netgear router.

    Here’s results from various online speed tests.

    Linksys WRT54GL, 2.4 GHz 802.11b connection:

    DSLreports.com

    speedtest.net

    Netgear WNDR3700, 5 GHz 802.11n connection:

    DSLreports.com

    speedtest.net

    (I am paying Charter for a 12 Mbps connection)

    The bottom line seems to be that the older router gives me better throughput than the new one. Before I pack up the new one and send it back, any speculation as to why?

    Some thoughts: I may be biasing the test somehow by using the Linksys as an AP rather than a full router. Also, the fact that they are on different bands might be a factor – I could try running the same test with the Netgear’s 2.4 GHz radio instead of its 5 GHz radio. There could also be some cache in Windows that is biasing the results (I tried to do the tests in different orders, but I wasn’t diligent about this). Other thoughts?

    UPDATE: here are the results for the Netgear on the 2.4 GHz radio, using 802.11g.
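    For anyone wanting to take Charter (and any browser caching) out of the equation entirely: measure raw LAN throughput between two machines instead of hitting the internet. Here’s a minimal sketch of the idea behind tools like iperf – the port number and loopback address are arbitrary stand-ins:

```python
# Minimal LAN throughput tester (a sketch of the iperf idea): run serve()
# on a wired PC, measure() on the wireless one; the traffic never leaves
# the LAN, so the ISP link doesn't matter. Port 50007 is arbitrary.
import socket
import threading
import time

CHUNK = 64 * 1024

def serve(host, port, total_bytes):
    """Accept one connection, send total_bytes of zeros, then close."""
    with socket.socket() as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind((host, port))
        s.listen(1)
        conn, _ = s.accept()
        with conn:
            sent = 0
            buf = b"\0" * CHUNK
            while sent < total_bytes:
                conn.sendall(buf)
                sent += len(buf)

def measure(host, port):
    """Receive until the sender closes; return throughput in Mbit/s."""
    with socket.socket() as s:
        s.connect((host, port))
        start = time.time()
        received = 0
        while True:
            data = s.recv(CHUNK)
            if not data:
                break
            received += len(data)
        return received * 8 / (time.time() - start) / 1e6

# Demo over loopback (on the real network, use the wired PC's LAN address):
server = threading.Thread(target=serve, args=("127.0.0.1", 50007, 8 * 2**20))
server.start()
time.sleep(0.2)  # give the server a moment to start listening
mbps = measure("127.0.0.1", 50007)
server.join()
print(f"{mbps:.0f} Mbit/s over loopback")
```

    Running this once per radio would show whether the 5 GHz link is actually slower, or whether the online speed tests were just measuring the Charter pipe.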

  • well, that takes care of Zombo.com

    Today’s strip:

    Today’s victim:

    Zombo.com thanks to XKCD

  • make war, not bosons

    I try to keep things apolitical around here, and it’s not my intent to change that policy. But this is an issue of science funding as a national priority, so I feel it is relevant: Fermilab funding ends in September.

    U.S. researchers will soon abandon their search for the most coveted particle in high-energy physics because of a lack of funding.

    Researchers working at Fermi National Accelerator Laboratory (Fermilab) in Batavia, Illinois, had wanted to run their 25-year-old atom smasher, the Tevatron, through 2014 in hopes of spotting the so-called Higgs boson before their European counterparts could discover it with their newer, more powerful atom smasher. But officials at the U.S. Department of Energy (DOE), which funds Fermilab, informed lab officials this week that DOE cannot come up with the extra $35 million per year to keep the Tevatron going beyond September.

    “Unfortunately, the current budgetary climate is very challenging and additional funding has not been identified. Therefore, … operation of the Tevatron will end in [fiscal year 2011], as originally scheduled,” wrote William Brinkman, head of DOE’s Office of Science, in a letter to Melvyn Shochet, chair of DOE’s High Energy Physics Advisory Panel (HEPAP) and a physicist at the University of Chicago in Illinois.

    Fermilab is, as far as I am concerned, a national treasure like the Hoover Dam or Mount Rushmore. It’s about 50 miles from my home growing up and I still remember a childhood visit there 20 years ago.

    The worst thing about this is how science is a victim of the political climate. As others have pointed out, even with the reduced spending on Afghanistan as we draw down there, we still spend more in six hours there than we’d need to keep Fermilab funded through 2014. I’m not saying we shouldn’t spend the money in Afghanistan (which puts me at odds with my other blog communities, as some of you are aware). But I am saying that maybe, in the grand scheme of things, with a deficit in the trillions anyway, we shouldn’t be penny wise and pound foolish.

    end rant.