It’s all in how you see the data
There’s another view, too.
U.S. State Science Standards Are “Mediocre to Awful”
“A majority of the states’ standards remain mediocre to awful,” write the authors of the report. Only one state, California, plus the District of Columbia, earned straight A’s. Indiana, Massachusetts, South Carolina and Virginia each scored an A-, and a band of states in and around the northwest, including Oregon, Idaho, Montana and Nebraska, scored F’s. (For any New Yorkers reading this, our standards earned a respectable B+, plus the honor of having “some of the most elegant writing of any science standards document”).
[W]e live in a world where it’s de rigueur to know your Shakespeare, Molière or Goethe, but quite all right to be proudly ignorant of Faraday, Pasteur or Einstein. It hasn’t always been that way, and it doesn’t have to be that way. But right now, there’s a trend in society towards scientific apathy, and even antagonism. This is dangerous for us all and it’s incumbent on the scientific community to address the issue.
I think it’s de rigueur to know your Shakespeare, Molière or Goethe if you want to claim to be an intellectual (which, as I have said, I do not). But I think one must note that literacy is a term we associate with a minimum level of capability. One who is literate can read, but that does not mean that said person will be able to appreciate the works of Shakespeare (or Molière or Goethe). That next level is where we find interactional expertise, and we need to be clear whether we expect this, or simply literacy. But anyone claiming to be an intellectual cannot legitimately exclude math and science from their arsenal.
Somewhat related to this topic, I have to say that Jennifer is right! in Meet Me Halfway
It’s frustrating. That frustration is often expressed in a renewed cracking of the whip, insisting that scientists just need to do better in communicating via public outreach. While I agree that the scientific community should (and is) working to improve in that area — heck, I do this for a living and still am constantly striving to improve! — what Hasson’s research clearly shows is that genuine communication is a two-way street. Scientists — a.k.a., the speakers — are only half of the equation, and thus they are only half of the problem.
The other half of the equation is the listeners; any type of communication will fail if it doesn’t have a receptive audience. And I’d go one step further. We tend to think of listening as a passive act, but it actually requires some effort in order to achieve that elusive connection. Particularly when it comes to bridging a gap, as with scientists and the general public, the listeners need to be more actively engaged, more invested in having a true conversation.
This is a view I’ve held for a long time. There are concepts that do require years of college to get a handle on, and reading a pop-sci book is not a substitute. You have to go out and expend some effort to gain interactional expertise if you want to be part of the conversation.
All of which ties in to a session I attended on scientific literacy (Is encouraging scientific literacy more than telling people what they need to know?) at ScienceOnline 2012. We agreed that it’s important, because science appears in many places and people need to be able to make informed decisions, but in light of Jennifer’s post, I think one must add that people need to be motivated to want to make informed decisions, and to take steps toward that end.
There was an interesting exercise in which the (Canadian) moderators gave a short dialogue about a curling result, and used the collective sports (il)literacy as an analogy for science (though it’s not the first time one might have thought of this). Since I lived in Canada for 2.5 years and am familiar with curling, though, I didn’t get the full benefit of the exercise.
The FTL neutrino experiment came up as well, and I wish I had better notes, because I don’t recall exactly what the objection was. It was something about conflicting information being presented, but that happens because most physicists are not neutrino experts, and there’s a difference between literacy and expertise. I pointed out that in some ways the issue actually raised scientific literacy, because it was a demonstration of the scientific process.
There was a very interesting example given by one of the moderators (Catherine Anderson, with whom I talked at length about this after the session) about some science-camp exercises that were modeled on a CSI investigation. Clues were given and the students had to gather evidence and make their case, but one of the driving lessons of the exercise was that there was no right answer, just as in any part of “real” science: you do your experiment and then have to interpret the results. There’s no “right answer” to compare against, which is one of the tougher concepts I had to convey in introductory physics labs back when I was doing that sort of thing. The students arrive with the idea that experimental error is the difference between what they got and what the textbook said they should get.

I occasionally try to think of ways one could run a lab where the “right” answer isn’t available, so the students would have the chance to do something comparable to a real scientific investigation. Understanding the process of science, and how uncertainty and error fit in, is a big part of scientific literacy.
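To make the distinction concrete, here is a minimal sketch (my own illustration, not from the session) of how a result is actually reported when there is no textbook answer: the “error” is the estimated uncertainty of the measurement itself, not a deviation from some known value. The scenario and numbers are invented.

```python
import random
import statistics

# Hypothetical lab scenario: students time a pendulum swing many times.
# There is no "textbook answer" to compare against; the deliverable is
# the measured value together with its estimated uncertainty.

random.seed(42)
true_period = 1.87        # unknown to the students (seconds)
reaction_scatter = 0.05   # human timing scatter (seconds)

trials = [random.gauss(true_period, reaction_scatter) for _ in range(20)]

mean = statistics.mean(trials)
# Standard uncertainty of the mean: sample stdev / sqrt(N)
uncertainty = statistics.stdev(trials) / len(trials) ** 0.5

print(f"period = {mean:.3f} +/- {uncertainty:.3f} s")
```

The point of the sketch is that the answer is a value plus an uncertainty, and the analysis never references the “right” number at all.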
On the heels of my recent discussion of the value of information trading (as opposed to deception) to research, I read about a push for even more access to this information: Scientists, Share Secrets or Lose Funding: Stodden and Arbesman
Many people assume that scientists the world over freely exchange not only the results of their experiments but also the detailed data, statistical tools and computer instructions they employed to arrive at those results. This is the kind of information that other scientists need in order to replicate the studies. The truth is, open exchange of such information is not common, making verification of published findings all but impossible and creating a credibility crisis in computational science.
…
Inadequate sharing is common to all scientific domains that use computers in their research today (most of science), and it hampers transparency.
By making the underlying data and computer code conveniently available, scientists could open a new era of innovation and growth. In October, the White House released a memorandum titled “Accelerating Technology Transfer and Commercialization of Federal Research in Support of High-Growth Businesses,” which outlines ways for federal funding agencies to improve the rate of technology transfer from government-financed laboratories to the private business sector.
I’m not a fan of this proposal.
The problem is, as I previously discussed, that the data and analysis techniques have value. They represent an investment in time a lab has made, and forcing that information to be given away means that any other lab can catch up in research, extract information from the data or apply the analysis tools to other data, all without a similar amount of investment.
The lab that did the work should get the credit for discovery, not only for the recognition and prestige but also to help in their competition for funding. Without overhauling the funding system, this proposal would be asking labs to handicap themselves in their quest for future funding.
Technology transfer is not the same thing as forced sharing of data and analysis tools, so I’m not sure what connection the authors were trying to make. The technology transfer they mention is from federal agencies to the private sector — this would apply to me, for example, if I helped discover or build something but our lab was not in a position to exploit the work, e.g. commercial development of a product, which is something that’s not part of our mission. But the government gets something back from that — it’s not just us giving it away to a business.
I recently had an interesting discussion with someone who is interested in science, but without training or experience as a scientist. The question was, basically: Why don’t we (scientists) all just lie to each other? i.e. what compels scientists to truthfully share their research results? It’s a fair question — we’re human and competitive to some degree, and at first blush there would seem to be a lot to gain from keeping competitors off-balance by feeding them false clues.
I will draw a distinction here between non-cooperation, i.e. secrecy, and deception. Certainly there are endeavors where information sharing is limited — corporate and military research have their secrets, and I suppose that some endeavors might actually try to mislead the competition. This secrecy is (in my experience) rare in the more open environment of academia. Why is this? There are a couple of factors.
Science is big — really big — and there is a built-in symbiosis that has arisen. Nobody can possibly study every area of science, so we have to be able to trust that information we get elsewhere is valid. The so-called scientific method has developed ways to do this — we like to confirm experimental results by replicating the work, doing a similar experiment, or doing a more advanced experiment that uses the results. This gives a trust-but-verify attitude, and knowing that results are going to be checked is a motivation to be truthful. Reputations are at stake, and even if there were no peer review you wouldn’t want to be known as an untrustworthy researcher — if you are consistently wrong, whether by sloppiness or deceit, nobody will pay attention to you. So part of the answer is peer pressure.
Science is big in terms of career length as well — decades, vs the time scale of experiments, or more importantly, publications and talks, which is measured in months to a few years. If a scientist is going to engage in deception in order to complete an experiment first, the benefit has to last beyond that one experiment. And it doesn’t. Some level of cooperation serves any scientist’s interests much more than subterfuge.
In science, information is a valuable commodity. You publish and present results at conferences with the expectation that everyone else will do the same, and beyond this, you have discussions in the hallways or at meals, in which you discuss details that never make it into the papers or talks. Just reading a paper does not give you all of the information necessary to replicate the work — one might describe a certain layout for an experiment, but what isn’t discussed is how to get the experiment to work. The nice graph of data that has been published, for example, is the end result. There is no information about what knobs you have to tweak to get from an ugly, noisy signal to the nice one, and there is usually no discussion of the false paths one went down in pursuing the result. That’s what you talk about when you are trading information, and the value is that it’s a huge time-saver. It’s one justification for the graduate school system, where acquiring experience is given some value (overvalued, perhaps, by those paying salaries). Knowing what not to do is a benefit in terms of both time and money.
Bucking the system, then, threatens to cut you off from that flow of information. Someone more familiar with math could no doubt go into details, but I’m sure that it’s some application of game theory. While you might be smarter than everyone else, you are not smarter than everyone else put together. Even if you started with a lead, isolation will eventually leave you far behind the pack, as you have to try all of the dead ends while the cooperative ones share that information and save time. Taking the middle ground of simply not sharing is no better. People will stop talking to you once they realize that the relationship is asymmetric.
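The game-theory intuition can be sketched with a toy model (entirely my own construction, with made-up payoff numbers): a researcher either shares tips with peers, at a small cost per round, or hoards and free-rides until the peers catch on and cut them off.

```python
# Toy model of the sharing game. All parameters are invented for
# illustration; only the qualitative outcome matters.

def career_payoff(strategy, n_peers=10, rounds=50,
                  tip_value=3.0, sharing_cost=1.0):
    """Cumulative benefit over many 'experiments' (rounds)."""
    total = 0.0
    trusted = True  # peers share with you until they notice the asymmetry
    for r in range(rounds):
        if strategy == "share":
            total -= sharing_cost            # effort of passing on details
            total += n_peers * tip_value     # tips received from the group
        else:  # "hoard"
            if trusted:
                total += n_peers * tip_value  # free-ride early on
            if r >= 2:
                trusted = False  # after a few rounds, you're cut off
    return total

print(career_payoff("share"), career_payoff("hoard"))
```

The hoarder wins a few early rounds for free, but over a career-length horizon the sharer comes out far ahead, which is the asymmetry argument in miniature.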
Since I work for the military, I run into the conflict between sharing and keeping secrets, which is pretty much divided along the lines of scientists vs military staff. You have to make your case that not sharing has a price — this information flow asymmetry would quickly shut one off from juicy details, and cost your program time and money to get the desired results. That usually works.
There’s another consequence of the value of information sharing, and I think it contributes to the scientific community’s attitude toward fraud. There are some self-policing communities that seem to have an “I have your back if you have mine” attitude and so misdeeds are sometimes not punished severely. But in science, fraud is a death knell. Part of this is because one of the parties to research can’t be convinced to go along: nature itself. If I falsify data, there will come a time when this impacts someone else, and I don’t have any control over the results of someone else’s experiment. And because, as I said, human nature might tempt us, the punishment has to be severe, such as revoking your degree.
What is it like to have an understanding of very advanced mathematics?
You can answer many seemingly difficult questions quickly. But you are not very impressed by what can look like magic, because you know the trick. The trick is that your brain can quickly decide if a question is answerable by one of a small number of powerful general purpose “machines” (e.g. continuity arguments, combinatorial arguments, correspondence between geometric and algebraic objects, linear algebra, compactness arguments that reduce the infinite to the finite, dynamical systems, etc.).
One of a long list.
It’s hard to convince those who don’t “speak” math of how necessary it is; the alternative is being forced to explain things in a much less precise language (be it English or something else) that the audience understands.
via @seanmcarroll
Are you scientifically literate? Take our quiz
Took this the other day but am only now getting around to posting (holiday distraction). And with the taking of any scientific literacy quiz, there is the obligatory comment on what scientific literacy is or is not.
It’s not a bad quiz, other than the slide-show implementation of it and reloading the page to grade each question, but it’s not great, either, and it suffers from the problem that any multiple-choice quiz is going to have: it becomes a test of facts rather than concepts, and science literacy is more than memorization of facts.
I won’t hazard a guess where the literacy cutoff is, because some of the questions lean toward trivia, and in my view knowing trivia is not really synonymous with literacy. Knowing why Pluto was demoted from planet status shows more literacy than knowing the name of the orbiting body whose discovery led to that act, for example. Knowing what Mendel discovered is more important than remembering what plant he used to discover it, unless you can take it up a notch and know why he was lucky in his choice of plant (it had a simple genetic structure that facilitated the discovery — roughly one gene per feature). Knowing which two planets don’t have moons is not as important as being able to use some physics to reason why this might be, and a test like this doesn’t distinguish between the two approaches.
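As one illustration of the “use some physics to reason” approach (my own sketch, not the quiz’s reasoning): a moon must orbit well inside a planet’s Hill sphere, the region where the planet’s gravity dominates the Sun’s. Mercury and Venus, being close to the Sun and (in Mercury’s case) small, have the smallest Hill spheres, and solar tides tend to destabilize or decay any satellite orbit there over time. A rough comparison:

```python
# Hill sphere radius, r ~ a * (m / 3M_sun)^(1/3), where a is the orbital
# radius and m the planet's mass. Values are approximate.

M_SUN = 1.989e30  # kg

def hill_radius(a_m, mass_kg):
    """Approximate Hill sphere radius in meters."""
    return a_m * (mass_kg / (3 * M_SUN)) ** (1 / 3)

planets = {
    "Mercury": (5.79e10, 3.30e23),  # orbital radius (m), mass (kg)
    "Venus":   (1.08e11, 4.87e24),
    "Earth":   (1.50e11, 5.97e24),
}

for name, (a, m) in planets.items():
    print(f"{name}: Hill radius ~ {hill_radius(a, m) / 1e9:.2f} million km")
```

Mercury’s Hill sphere comes out well under a quarter of Earth’s, which is the kind of back-of-the-envelope reasoning a fact-recall quiz can’t probe.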
But you do have to have some facts at your disposal. Knowing the major constituent of air is important, too, as is knowing your way around the periodic table, and other things that show up on the quiz. Scientifically literate people will do well, overall, because they will probably know the trivia, having picked it up in the process of learning the concepts.
Why doesn’t America like science?
The views that Bloomberg considers “mind-boggling” are not outliers, at least not outside coastal areas such as New York, where he resides.
But common or not, the spread of this sentiment is leaving many American scientists alarmed. Last month, New Scientist magazine warned in an editorial that science is now under unprecedented intellectual attack in America. “When candidates for the highest office in the land appear to spurn reason, embrace anecdote over scientific evidence, and even portray scientists as the perpetrators of a massive hoax, there is reason to worry,” it thundered.
Perhaps people think that new products and innovation are the result of the technology fairy rather than the application of the underlying science, and that the technology works by magic.
How to argue with a scientist: A guide
I have created this handy guide to arguing with a scientist precisely for people like you! I’ve collected the most commonly used phrases and translated them into everyday English, so that the next time you argue with a scientist, you’ll not only better understand their arguments, but you might learn how to make yours better, too.
Only applies to people for whom facts actually matter. But if they do, these are the things that make for more solid support of a position.