Should Have Seen it Coming a Mile Away

Here’s a question for you:

Did any futurologists from last century predict online harassment? All I remember are VR, full-body haptic interfaces, and video-telephones that would translate as you talked.

I think with some predictions, like flying cars, nobody would be surprised that some drivers would still be dicks if we ever got them. But as to the point that there are few (if any) predictions that easier communication would lead to the online behavior we see, I think there are a couple of reasons.

One is that it would be too dark to write about, unless the point of the article/story was a dystopian future, and then it would be filed under that heading. Another is that authors who weren’t already on the receiving end of such behavior simply wouldn’t project it into the future. I’m guessing that the people who are least surprised by people being (pardon my use of the vernacular) assholes online are the people who already experience it regularly in real life. The internet just made it easier to behave that way, and to find a community of like-minded individuals to insulate oneself from social feedback.

On a tangent, this reminded me of an incident from the days of my second post-doc (a lower-paid version of my current job). We were talking about VR, and fresh in my mind was an incident from my first post-doc at TRIUMF. A colleague needed to do some work in a high-radiation environment, so it had to happen in a short amount of time, lest he exceed the allowable exposure limit. A proper mock-up for practice didn’t exist, but that’s exactly the kind of thing VR would be great at. Practice makes perfect, or at least a reasonable facsimile thereof. The same goes for other dangerous situations, where you could train for many contingencies. You could use VR for all sorts of training that can’t be done for real, or do it more cheaply than with a mechanical system. I posited that such simulation would be an important use for VR.

At that point the others in the conversation smirked a little, and said, “Tom, face it. The primary use of VR will end up being porn.” The (im)moral of the story: the basest behavior will prevail.

A Million Prescient Monkeys

A History of Books that Forecast the Future

As interesting as this is, it’s also an example of selection bias. Also: 2013 is the year for government spying on individuals? Like this wasn’t happening earlier? Really? But I digress…

Lots of stories appear to make predictions of the future, but are they really predictions, or just fanciful things thought up by the author? What sci-fi devices haven’t come to pass? (How many stories have flying cars, or superluminal travel of some sort, etc.?) That’s the context that’s missing, because looking only at successful predictions (more on that in a moment) is the wrong way to look at it. If the author is truly a visionary maker of predictions, s/he has to be right more often than chance. That’s tough to measure in an open-ended medium like storytelling, but one could at least attempt a systematic measurement. Regardless, with myriad predictions, some are bound to be right. So what’s the success rate?
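
As a toy illustration of that last point (my own sketch, with made-up numbers, not anything from the article): if enough authors make enough long-shot guesses, a good fraction of them will land at least one “prescient” hit by luck alone.

```python
# Toy simulation (hypothetical numbers): even if each individual
# prediction has only a 2% chance of panning out, an author who makes
# enough of them will probably score at least one "hit".
import random

random.seed(1)

P_HIT = 0.02       # assumed chance any single guess comes true
PREDICTIONS = 50   # guesses per author
AUTHORS = 10_000   # authors in the simulated genre

# Count authors who get lucky at least once.
lucky = sum(
    any(random.random() < P_HIT for _ in range(PREDICTIONS))
    for _ in range(AUTHORS)
)

# Analytically: P(at least one hit) = 1 - (1 - p)^k, about 64% here,
# so "some are bound to be right" without any real foresight.
print(f"Authors with >= 1 hit: {lucky / AUTHORS:.0%}")
print(f"Expected fraction:     {1 - (1 - P_HIT) ** PREDICTIONS:.0%}")
```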

Also, how do you define success? For vague predictions it’s much easier to argue that they were successful, but of course vague predictions are next to useless precisely because they are vague. This is one element of how so-called psychics and their ilk make their living – be vague enough that you can throw up your hands and declare success no matter what happens. I’m not familiar enough with the stories to know how much leeway the authors are being given.

The next step, and the real trick — much harder, IMO — is whether the author was able to capture how society exploited the technology.

At the Tone, it Will be 'Now' O'Clock

The Problem of Now

I don’t spend much effort thinking about this sort of issue, since I’m much more interested in the experimental aspects of measuring time than the philosophical ones, but I’ve run across some folks who think this problem of “Now” is so perplexing they can’t get past it. (Again, because my interests lie elsewhere, this seems more of a dorm-room discussion, or possibly one involving a professor who looks like Donald Sutherland discussing whether atoms can be universes.) My view of the utility of this is that while “It’s always now” may or may not be deep thinking, it doesn’t help GPS tell you where you are (unless “You are here” is an acceptable answer).

[R]egardless of whether you use an external definition of time (some coordinate system) or an internal definition (such as the length of the curve), every single instant on that curve is just some point in space-time. Which one, then, is “now”?

Later on there’s also an interesting point about memory not needing consciousness.
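
To put a number on the GPS quip above (a back-of-envelope sketch of my own; the figures are standard, but none of this is from the linked piece): a GPS receiver turns signal transit times into distances, so any clock error becomes a position error at the speed of light.

```python
# Back-of-envelope sketch: a GPS range measurement is essentially
# pseudorange = c * (receive_time - transmit_time), so a timing error
# becomes a ranging error scaled by the speed of light.
C = 299_792_458.0  # speed of light, m/s

def ranging_error(clock_error_s: float) -> float:
    """Ranging (position) error produced by a given clock error."""
    return C * clock_error_s

for ns in (1, 10, 100):
    err = ranging_error(ns * 1e-9)
    print(f"{ns:>4} ns clock error -> ~{err:.1f} m of range error")
```

A nanosecond of clock error is already about 30 cm of range error, which is why the measurement side of timekeeping matters more to GPS than the metaphysics of “now.”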

Decisions, Decisions

I read this bit on McDonald’s Theory recently:

An interesting thing happens. Everyone unanimously agrees that we can’t possibly go to McDonald’s, and better lunch suggestions emerge. Magic!

It’s as if we’ve broken the ice with the worst possible idea, and now that the discussion has started, people suddenly get very creative. I call it the McDonald’s Theory: people are inspired to come up with good ideas to ward off bad ones.

Two thoughts came to mind.

First, this is a variation on the restaurant choice aspect of the dinner diffusion problem, wherein people don’t want to be the one caught making a decision about where to go to dinner at a conference.

The other thing is that, when the author ties this in to the broader decision-making process, it’s partly the blank page syndrome — tasks are more daunting when an empty page is staring at you, and it’s better to just get started, somewhere — anywhere — even if you have to completely revise the work, because you’ve gotten the ball rolling.

But the hesitancy to float ideas in front of colleagues is somewhat foreign to me, and I wonder if that’s simply due to my little corner of science, or if that’s broader. Scientists are used to people trying to shoot down their ideas because that’s how peer review works, so there is a distance between the person and the idea, or there is supposed to be. It’s a bad dynamic to have someone who won’t accept criticism of their ideas and/or gets personally invested in them. Pursuing wrong ideas is a waste of time and resources, so you’d prefer to know the problems with an idea as early on as you can. So not taking the criticism personally makes it easier to bring ideas up. If someone finds a flaw, you fix it and move forward, or if it’s fatal, you discard the idea and move on to something else. (Of course, it’s possible I’ve just lucked into the right situations all these years)

You Keep Using That Word…

Something I ran across last week was the so-called periodic elements of Star Wars Ep. IV, V, and VI.

It’s very pretty, and a lot of effort obviously went into the graphic presentation of it. However, that’s apparently where the effort stopped. What’s wrong with it?

It’s not periodic.

The periodic table has such power because of the similarity of properties and the trends one can identify — gaps in the layout even helped predict elements that hadn’t yet been discovered. Those properties are completely missing on this table — any you might glean are there purely by accident.

A truly periodic table might, for example, put all the Jedi into a column, and all the droids into another. The pilot identifiers (Red/Gold/Rogue) belong in a column too, not a row.
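
If you wanted a layout that is actually periodic in that sense, the organizing principle takes only a few lines to sketch (the entries below are my own hypothetical examples): group by a shared property and let the columns carry the meaning.

```python
# Minimal sketch: "periodic" means a column is a group sharing a
# property, so the table is built by grouping on that property rather
# than by filling a pretty grid with whatever names are handy.
from collections import defaultdict

characters = [
    ("Luke", "Jedi"), ("Obi-Wan", "Jedi"), ("Yoda", "Jedi"),
    ("R2-D2", "droid"), ("C-3PO", "droid"),
    ("Red Five", "pilot"), ("Gold Leader", "pilot"), ("Rogue Two", "pilot"),
]

columns = defaultdict(list)
for name, group in characters:
    columns[group].append(name)  # same property -> same column

for group, members in columns.items():
    print(f"{group:>6}: {members}")
```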

There are other tables out there like this, where the creators seemingly mistake “periodic” for “collection” or something like that. It is a table, and if you happen to have around a hundred names to put on it, you might think it would be clever to geek it up this way. But when you represent it as, or call it, a periodic table, what you’ve shown is that you weren’t paying attention in chemistry class.

Are We Our Own Worst Enemy?

Why the Scientist Stereotype Is Bad for Everyone, Especially Kids

To many – too many – science is something like North Korea. Not only is it impossible to read or understand anything that comes out of that place, there are so many cultural differences that it’s barely worth trying. It’s easier just to let them get on with their lives while you get on with yours; as long as they don’t take our jobs or attack our way of life, we’ll leave them in peace.

That’s very frustrating to scientists, who often bemoan the lack of public interest in what science has to say. They’re right to be frustrated: all our futures are dependent on proper engagement with science. So, how to solve this problem?

One thing to which I object is the charge that we did this to ourselves:

[T]he problem doesn’t lie with the science. It lies with the scientists. Or rather the myth the scientists have created around themselves.

The author makes several good points in the article but never backs this one up, which would have been nice, because I don’t see it as being true.

Networking

On Networking: A rant.

Ok, then! I am told to go up to the people I am interested in meeting, and INTRODUCE MYSELF! We all have name tags! I’m sure it’ll be fine! And I’ll just go up and say who I work for and drop some pithy comment that they will think is totally cool and in line with current perspectives on the field. Then I will smoothly invite them to my poster.

Except it doesn’t go like that at all. You go up to the person you want to meet at a conference or seminar? They WILL be talking to someone else. You can hover and look annoying or weird, or try to butt in without interrupting and look annoying and weird. They will give you a sideways look to inquire WHY you are interrupting, and inform you with that look that you are annoying and weird.

I have no answers for this. When I was in grad school, I went to a conference or two with my prof, and he was really bad at introducing me/us to people he knew. He was just starting out, so I couldn’t drop his name when I was at a conference alone — few people knew him. So I really developed no contacts in grad school. I, too, felt the awkwardness of trying to introduce myself (and try not to forget that now that I’m in a more senior situation). My best progress was made at conferences where I gave a talk: a few people would come up to me afterwards to discuss details, and you have an excuse to talk to the others who spoke in your session, since they’re now quite likely to know who you are and to be working in similar fields.

In my current job, there was a deliberate attempt to have me give talks at conferences when I first started, to give me exposure, and so that people would identify me as being with our group. That’s part of a much better atmosphere of having colleagues who introduce me to people they know.

There’s also part of networking where the people come to you — lab visits and seminars/colloquia, where you can have your professor make the introductions. Once you’ve done that, the second meeting (perhaps at a conference) is easier, since you can mention that you’ve already met and remind them of the circumstances. Even if they don’t remember, you’ve still gotten yourself into a conversation.

A Day at the Mad Science Fair

“Teratogenic Effects of Pure Evil in Ursus Teddius Domesticus.”

Winning entry in the Mad Science Fair by Dr. Allison von Lonsdale of the Institute for Dangerous Research.

1. A sample of Pure Evil was obtained from the ruins of an exploded toaster in the south of England.

2. Pure Evil was administered, via drinking water, to pregnant laboratory teddy bears for the duration of their pregnancy (4 months).

3. Dosage varied from 0 parts per million (ppm) to 1000 ppm, titrating upwards by steps of 100 ppm.

4. Offspring were euthanized and mounted for display.

Did you get the Time Bandits reference?