Understanding my limits and being willing to acknowledge them is, in fact, one of my strengths. I don’t think it should be pathologized alongside the very real problem of “impostor syndrome”.
In fact, it is the opposite behavior, the belief that you can do anything (including things you are blatantly not qualified for, or are straight-up lying about), that should be pathologized.
When you say we should work harder, I hear two things: 1) we aren’t working hard, and 2) we don’t think we have to. Professors seem like an easy target. We have good job security, we’re paid well, we often come from privileged backgrounds. We appear to have little to do but teach a class for a few hours a week, and we have extended vacations. It’s easy to see us as cloistered in the Ivory Tower, without much experience with the “real world” and the concerns of average folks.
The picture I’ve painted for you is incomplete, though.
All those other enterprises, though, seem to have come to terms with the fact that there are going to be missteps along the way, while scientists continue to bemoan every little thing that goes awry. And keep in mind, this is true of fields where mistakes are vastly more consequential than in cosmology. We’re only a week or so into July, so you can still hear echoes of chatter about the various economic reports that come out in late June: quarterly growth numbers, mid-year financial statements, the monthly unemployment report. These are released, and for a few days suck up all the oxygen in discussion of politics and policy, often driving dramatic calls for change in one direction or another.
But here’s the most important thing about those reports: They’re all wrong.
Chad makes an excellent point, but if I’m reading the post correctly it’s an admonition toward scientists, and I think that’s misplaced, or at least too narrow a focus. As a group, I think we have a decent handle on the different levels of confidence one places in results at different stages of confirmation. Many scientists I follow on Twitter were saying we needed to be cautious about the BICEP2 results, and that we needed to wait for further analysis and confirmation; that’s the protocol, and it needs to be more widely acknowledged.
What’s missing is restraint in the media chain, which often includes the principal scientists. One should understand that they and the attached PR machine may tend to be a little aggressive in touting their results, and may have a bias to which they are blind; that’s why replication of experiments is important. However, everyone else involved has to slow down a little and consider the shortcomings of the system as well.
Is this a preliminary result/small sample size, or is this further down the line in terms of confirming the original discovery? (I’m assuming we’re over the hurdle of this being peer reviewed). If it’s early in the game, then these are much like the preliminary economic numbers Chad discusses — there will be revisions, and that needs to be explained. More data require more experiments, preferably by different research teams. Results have a way of disappearing when more data are examined — which is exactly what you should expect! But this doesn’t get much prominent discussion when BIG RESULT™ has been announced.
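The point that results tend to disappear under more data is easy to demonstrate with a toy simulation (all the numbers here are invented for illustration): run a pile of small-sample "experiments" where the true effect is exactly zero, and a handful will look like discoveries anyway, only to evaporate on follow-up.

```python
import random

random.seed(42)  # reproducible illustration; the exact counts are not the point

def measure(n):
    """Mean of n noisy samples (sigma = 1) of a quantity whose true value is 0."""
    return sum(random.gauss(0, 1) for _ in range(n)) / n

# 1000 preliminary "experiments", each with only 10 samples.
# The mean of 10 such samples has sigma ≈ 1/sqrt(10) ≈ 0.316.
small = [measure(10) for _ in range(1000)]

# Count apparent "2-sigma discoveries": |mean| > 2 * 0.316 ≈ 0.63.
flukes = [m for m in small if abs(m) > 0.63]
print(f"{len(flukes)} 'discoveries' out of 1000 null experiments")

# Follow up each "discovery" with 100x more data: the effects vanish.
followups = [measure(1000) for _ in flukes]
survivors = [m for m in followups if abs(m) > 0.63]
print(f"{len(survivors)} survive the follow-up")
```

Roughly 5% of the null experiments clear the 2-sigma bar by luck alone, and essentially none of them survive the larger sample, which is exactly the pattern of a preliminary result getting revised away.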
In the case of economic reporting, the public has been seeing this same style of reporting for decades; they’re used to it. They expect a certain level of wrongness from the folks who have predicted twelve of the last five recessions. What they’re used to in science reporting is a hyperbolic headline promising a flying car really soon (and then, of course, the flying car never materializes), presented in the same fashion as science with a much longer pedigree of confirmation.
Scientists need to do better in getting the word out properly, to be sure. But my feeling is that the entire system needs to be reined in.
I don’t spend much effort thinking about this sort of issue, since I’m much more interested in the experimental aspects of measuring time than the philosophical aspects of it, but I’ve run across some folks who think this problem of “Now” is so perplexing they can’t get past it. (again, because my interests lie elsewhere, this seems more of a dorm-room discussion, or possibly one involving a professor who looks like Donald Sutherland discussing whether atoms can be universes). My view of the utility of this is that while “It’s always now” may or may not be deep thinking, it doesn’t help GPS tell you where you are. (unless “You are here” is an acceptable answer)
[R]egardless of whether you use an external definition of time (some coordinate system) or an internal definition (such as the length of the curve), every single instant on that curve is just some point in space-time. Which one, then, is “now”?
Later on there’s also an interesting point about memory not needing consciousness.
Today was a Very Bad Day™, in a depressingly long and growing list of Very Bad Days™. And while there are bound to be proclamations of “it’s too soon to talk about the implications” countered by “if not now, when?” and so forth, and also some shooter(s) was/were (religion) and (ethnicity) and this has profound implications because of (generalization and/or inappropriate extrapolation), that discussion really doesn’t interest me right now, because we don’t know everything yet.
Of more immediate import to me is the one certain fact: that we don’t know everything yet. That was true all day, as the stories poured out — they were sketchy and often wrong. One shooter, two shooters, three shooters — the number kept changing. One shooter was down, and then that was withdrawn, and then confirmed, but nobody could say if “down” meant dead or arrested. Shooting at Bolling AFB was reported, and then dismissed as being false.
Information dissemination is fast: Twitter and internet news were reporting this very soon after it started. Information collection is slow, and since it’s also imperfect it requires confirmation, making it even slower. And this is one thing that tends to get glossed over in the aftermath: that while all of this was going on, we didn’t really know what was happening. It was true after the Boston Marathon bombing and manhunt, and it was true in all of the other incidents before that. If you are going to get involved in any sort of discussion, don’t fall prey to the notion that anyone had more than scant knowledge, or that anything about this should have been obvious. That’s hindsight bias.
(One note, since you may not be familiar with DC at all: I don’t work at the Navy Yard. Emotions aside, in the grand scheme of things this event had only a minor impact on my day, in that we were in a heightened-security situation.)
What I, and many other physical scientists, object to is the notion that math and science are cleanly separable. That, as Wilson suggests, the mathematical matters can be passed off to independent contractors, while the scientists do the really important thinking. That may be true in his home field (though I’ve also seen a fair number of biologists rolling eyes at this), but for most of science, the separation is not so clean.
As much as I agree with Wilson’s statement about the need for detailed knowledge to constrain math, even in physics, there is also some truth to the reverse version of the statement, which I have often heard from physicists: If you don’t have a mathematical description of something, you don’t really understand it. Observations are all well and good, but without a coherent picture to hold them all together, you don’t really have anything nailed down. Big data alone will not save you, in the absence of a quantitative model.
Yeah, what he said.
One interesting subset of the discussion is the ice cream question.
Chunky Monkey is the best possible ice cream.
The ice cream question is the one that is closest to the issue of morality. Again, one might suggest that all we need to do is collect neurological data relevant to the functioning of pleasure centers in the brain when one eats different kinds of ice cream, and decide which does the best job. But that’s the question “What effect do different flavors of ice cream have on the brain?” (which is scientific), not “What flavor of ice cream is the best?” (not). To answer the latter question, we would have to know how to translate “the best ice cream” into specific actions in human brains. We can (and do) discuss how that might be done, but deciding which translation is right is — you guessed it — not a scientific question. If I like creamy New-England-style ice cream, and you prefer something more gelato-y, neither one of us is wrong in the sense that it is wrong to say that the universe is contracting. Even if you collect data and show beyond a reasonable doubt that New York Super Fudge Chunk lights up my brain more effectively in every conceivable way than Chunky Monkey does, I’m still not “wrong” to prefer the latter. It’s a judgment, not a statement about empirically measurable features of reality. We can talk about how we should relate such judgments to reality — and we do! — but that talk doesn’t itself lie within the purview of science. It’s aesthetics, or taste, or philosophy.
One thing Sean doesn’t say (possibly because it’s tangential to his discussion) is simply this: an assertion of your ice cream preference is an opinion, and personal opinions, if they truly are opinions, are neither right nor wrong. Where some people go off the rails is when they assert opinions as if they were facts. If you start from the position that “Chunky Monkey is the best possible ice cream” is objectively true, then you’re building a house of cards; the argument is not going to hold up. Yet this seems to happen quite a bit, at least in certain discussions in which I have participated.
[A]s I got more experienced in New Guinea, I realized, every night I sleep out in New Guinea forest. At some time during the night, I hear the sound of a tree crashing down. And, you see tree falls in New Guinea forest, and I started to do the numbers. Suppose the chances of a dead tree crashing down on you the particular night that you sleep under it is only one in 1,000. But suppose you’re a New Guinean, who’s going to sleep every night in the forest, or spend 100 nights a year sleeping out in the forest. In the course of 10 years, you will have spent a thousand nights in the forest, and if you camp under dead trees, and each dead tree has a one in 1,000 chance of falling on you and killing you, you’re not going to die the first night, but in the course of 10 years, the odds are that you are going to die from sleeping under dead trees. If you’re going to do something repeatedly that each time has a very low chance of bringing disaster, it will eventually catch up with you.
That incident affected me more than anything else, because I realized that in life, we encounter risks that each time the risk is very slight. But if you’re going to do it repeatedly, it will catch up with you.
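Diamond’s back-of-the-envelope arithmetic checks out. Assuming each night is independent, with his one-in-1,000 figure, the cumulative risk over a thousand nights is:

```python
# Cumulative probability of at least one fatal tree fall, assuming
# independent nights at Diamond's figure of p = 1/1000 per night.
p_night = 1.0 / 1000
nights = 100 * 10          # 100 nights a year for 10 years
p_disaster = 1 - (1 - p_night) ** nights
print(f"risk over {nights} nights: {p_disaster:.0%}")  # about 63%
```

So a risk that is negligible on any given night becomes better-than-even odds over a decade, which is exactly his point about repeated exposure.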
He has a new book out, The World Until Yesterday: What Can We Learn from Traditional Societies? which, given his prior work, I will probably read at some point.
“Any software that can help profile people while keeping their identities anonymous is fantastic,” said Uché Okonkwo, executive director of consultant Luxe Corp. It “could really enhance the shopping experience, the product assortment, and help brands better understand their customers.”
While some stores deploy similar technology to watch shoppers from overhead security cameras, the EyeSee provides better data because it stands at eye level and invites customer attention, Almax contends.
A few interesting peripheral observations about the concern that customers are being profiled, and whether that constitutes an invasion of privacy. I think it’s similar to the resistance to photo-radar and red-light cameras I’ve seen here in the US: for some reason, when a person does it, it’s acceptable, but when a camera is involved, it becomes objectionable. People can observe you in stores, and it’s not like this information is private; anyone can estimate your age group, determine your gender (unless you’re Pat), and make a guess as to your racial makeup. (Though if you knew the greeter at the store was doing that and recording it, you’d probably find it creepy.) So I wonder if there will be any formal objections, or if it will fall under the rubric of “irksome technology” mentioned at the end of the article.