Just Don't Make It So They Blow

Sucky Schools – How To Repair Our Education System

Lots of good stuff.

Our schools are fact-junkies. We teach students thousands of useless facts that will be forgotten as soon as the next exam is over. Hell, usually they’re forgotten even before that, and then you see students cramming late into the night, only to forget it all within 48 hours. How’s that for effective use of everyone’s time?
[…]
Standardized testing is like a black hole that sucks up and annihilates any learning it gets close to. It bends the very fabric of curriculum and students’ time.

via sciencegeekgirl

Do As I Say, Not As I Do

Are we science-savvy enough to make informed decisions?

Let me guess: no. I mean, really, is this a gimme or what?

Seventy-six percent of Americans say presidential candidates should make improving science education a national priority, according to a national Harris Interactive survey of 1,304 adults in November and December. Results were released this spring.

But only 26% believe that they themselves have a good understanding of science. And 44% couldn’t identify a single scientist, living or dead, whom they’d consider a role model for the nation’s young people.

So at least some of those possessing marginal scientific literacy recognize that science education is important.

It boils down to this — if you can’t make the informed decision yourself, then you’re going to fall for whoever can lie most convincingly. And I think that accomplished liars have an advantage.

There is also a link to a quiz, which looks like the NSF Science and Engineering Indicators quiz. Unfortunately, we are told

10 or 11 right: You are a geek!

Maybe some small part of the problem is that basic science competency is being identified as geeky, though somehow I doubt that USA TODAY is the arbiter of cool amongst today’s teens.

Update: commentary at Physics and Physicists

That's Gonna Leave a Mark

I was once asked, by someone outside of academia, about academic (dis)honesty, and concurred that accusing a researcher of this kind of misconduct is about as serious as you can get. Using data or results without attribution (plagiarism) or worse, outright fabrication of data, are things the scientific community should not (and generally does not) tolerate. Part of the feedback loop keeping things on the straight-and-narrow should be vested self-interest. I can’t imagine researchers wanting to collaborate with one who has plagiarized, and it’s more difficult to do research alone. One who fabricates data is almost certain to be found out, unless it’s in an area of research so obscure that there is no follow-up. (But then that means the research has little value — it’s like counterfeiting a dollar bill. Why bother?)

I’ve never observed any of this, though I’ve been around long enough to see the type of worker who likes to take credit for others’ work in endeavors outside of research. Fortunately, these cases are peripheral to my own career — I’ve mostly worked with people who were quite insistent on making sure that the credit for work was properly attributed. That’s something that boosts your own credibility, of course, because your audience will believe you when you give an account of your own contribution to the work.

There’s now a study that followed up on some cases of scientific misconduct, and an article summarizing it. Does fraud mean career death?

“People who were found guilty of plagiarism [as opposed to expressly fabricating or falsifying data] get less severe of a punishment, so they were more likely to continue to publish,” Redman noted. Ten of the 28 scientists whose employment information they were able to trace continued to hold academic appointments after the ORI ruling. Originally, 23 out of those 28 had worked in academia.

However, Merz and Redman’s data, as well as interviews they conducted with the seven researchers who agreed to speak with them, indicate that recovering from the misconduct ruling was extremely difficult. Unsurprisingly, the group’s average publication rate was significantly lower after the ruling, dropping from 2.1 to 1.0 publications per year. Twelve of the scientists ceased to publish completely. In interviews with Merz and Redman, researchers described extensive personal and financial hardships due to the ruling.

The Importance of Being Earnestly Stupid

The importance of stupidity in scientific research

I recently saw an old friend for the first time in many years. We had been Ph.D. students at the same time, both studying science, although in different areas. She later dropped out of graduate school, went to Harvard Law School and is now a senior lawyer for a major environmental organization. At some point, the conversation turned to why she had left graduate school. To my utter astonishment, she said it was because it made her feel stupid. After a couple of years of feeling stupid every day, she was ready to do something else. I had thought of her as one of the brightest people I knew and her subsequent career supports that view. What she said bothered me. I kept thinking about it; sometime the next day, it hit me. Science makes me feel stupid too. It’s just that I’ve gotten used to it. So used to it, in fact, that I actively seek out new opportunities to feel stupid. I wouldn’t know what to do without that feeling. I even think it’s supposed to be this way.

My immediate reaction was that, technically, ignorance and stupidity were being conflated here; experience and intelligence aren’t the same thing, though it’s not always apparent which is which. But I understand the sentiment: as soon as you figure something out, you move on to something new that you don’t know. Isn’t that one of the draws of doing science? Of learning, in general? I like getting my “fix” of something new, whether it’s a solved problem or some new topic. One of the usual side effects of studying science is an awareness of just how much we do not know. If that makes you feel stupid, well, so be it. It’s also a side effect of working with a lot of smart people, but that’s also a great way of getting that “fix” I like.

Other commentary at Science to Life, Blog Around the Clock, FemaleScienceProfessor, Counter Minds, and probably elsewhere, as I imagine this is making the rounds.

Good Talk, Bad Talk

Thoughts on Conferences at Faraday’s Cage is where you put Schroedinger’s Cat

The second case was a conference where the only requirement for approval was an abstract. I realize that some of the more “cutting edge” conferences proceed this way so that people can present their latest results. I don’t like them, however, because many people seem to have worked up to the last minute on the project and not seem to have give much thought to the actual talk.

There’s another option? I thought all data for talks were obtained in the last few days before the conference.

This was brought on by a list of things not to do while speaking in public (which, if a strict grammarian I know had her way, would include “Not starting a sentence with the word ‘hopefully.’”).

The Truth Stings a Little

Charlie Brooker’s screen burn
Science is like a good friend: sometimes it tells you things you don’t want to hear

The wariness [of scientists] stems from three popular misconceptions:

1) Scientists want to fill our world with chemicals and killer robots;

2) They don’t appreciate the raw beauty of nature, maaan; and

3) They’re always spoiling our fun, pointing out homeopathy doesn’t work or ghosts don’t exist EVEN THOUGH they KNOW we REALLY, REALLY want to believe in them.

And don’t forget, they say they’re working for us, but what they really want to do is rule the world!

Science is Inductive: Film at 11

Dealing with Uncertainty at Backreaction, in the context of “science is never 100% certain” and how this plays out with public perception.

There are times when this seems to be a no-win scenario: if you fail to address the uncertainty and have to make any changes to your conclusions, you lose credibility, but if you point out the uncertainty, someone will run with it, exaggerating it. One need go no further than discussions of global warming to see this in action.

One of my least favorite phrases in this area of discussion is “for all we know.” Statements that sound like “For all we know, the phenomenon could be caused by blargh” should be taken with a huge grain of salt, because one of the things science does is to widen the scope of what “all we know” entails, and correspondingly narrow the possible undiscovered explanations for the phenomenon. We rule things out, and attempt to do so in a quantifiable way — we limit the uncertainty. If you are doing an experiment and see something unusual in your data, you start systematically testing to see what could possibly be causing it. So if someone were to claim, “For all we know, that glitch is caused by a spurious magnetic field,” you can respond with “No, we tested the effect of a magnetic field, and eliminated that as a cause.” You do this all the time in setting up an experiment, and you continue to do it when running the experiment — doing everything you can to confirm that the correlation you see is actually causal. But I don’t think that this gets portrayed very well. There’s always someone out there trying to leverage science not being 100% certain, and instead portray uncertainty as being 0% certain, which is far from the truth.

Bee notes that

As I have previously said (eg in my post Fact or Fiction?) uncertainties are part of science. Especially if reports are about very recent research, uncertainties can be high.

And I recall that Feynman touches on this in Surely You’re Joking, Mr. Feynman!. Someone drew a conclusion based on the last data point in some experiment, and he realized that the last data point isn’t so trustworthy — if you weren’t pushing the limit of the apparatus, you’d have obtained more data. That’s certainly a valid point. And here one starts fighting the tendencies of the media, because if the result isn’t novel, it isn’t newsworthy. What ends up happening is that the least reliable results, the ones most likely to be mistaken, are often the ones making the headlines. The study that challenges a long line of other research (which, being “as expected,” was ignored) gets notice, even though one expects, statistically, the occasional contradictory study. Such is the essence of random noise. This is made worse by the journalistic desire to show both sides of a story, even if there really aren’t two sides, because the “sides” have massively different amounts of evidentiary support. This, too, misleads the general public about what is known, what is unknown, and what level of confidence exists in science.
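The “occasional contradictory study” expected from pure noise is easy to see in a toy simulation (a sketch only; the effect size, noise level, and number of studies below are made-up illustrative numbers, not drawn from any real data):

```python
import random

random.seed(42)

# Toy model: a genuine effect whose true size is 2 standard errors
# is measured independently by 100 groups. Noise alone will
# occasionally push a measurement to the "wrong" side of zero,
# producing a contradictory-looking study.
true_effect = 2.0   # effect size, in units of each study's standard error
n_studies = 100

contradictory = 0
for _ in range(n_studies):
    measurement = random.gauss(true_effect, 1.0)  # signal plus unit noise
    if measurement < 0:  # this study appears to contradict the consensus
        contradictory += 1

print(f"{contradictory} of {n_studies} studies point the wrong way")
```

Even with a real, fairly large effect, a couple of percent of honest measurements will land on the wrong side of zero by chance alone — and those are exactly the outliers most likely to make headlines.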

More Intellectualism

Some good followup to the whole why-are-math-and-science-such-small-portions-on-the-plate-of-intellectualism and all of the tangents (too math-y? juxtaposed topics, perhaps?)

Fear and loathing in the academy and Assorted hypotheses on the science-humanities divide at Adventures in Ethics and Science. A lot to chew on (or gum, if you are so inclined).

The best reason to learn something is that learning it is a fun thing to do with your brain. Learning math and science can make your brain just as happy as learning humanities and arts, so who wouldn’t want to be an intellectual omnivore?

Indeed.