The importance of evidence
Science can be communicated in many ways. But what evidence do we have that any of our communication is effective in improving the understanding of science? The answer is very little. When it comes to engaging students or the public in science, the approach has a scattergun texture: the unstated hope that some of the shot hits the right target.
Sless and Shrensky characterise the situation perfectly: "...the evidence for the effectiveness of (science) communication is about as strong as the evidence linking rainmaking ceremonies to the occurrence of rain."
Some years ago a leading American science communicator, Dr Rick Borchelt, led a team of "blue ribbon" science journalists and communicators to produce a best-practices roadmap for NASA's Marshall Space Flight Center. A key finding of the 2001 report was the surprising insight that science, for all its reliance on data, did not demand evidence of the effectiveness of its own communication.
Why does so little evidence - beyond simple evaluation - exist? One reason is a lack of funding to gather it. While the resources required are far smaller than those needed for scientific research itself, the amount is still significant enough to require dedicated funding.
Another reason is the way that science communication is valued, especially in a university environment. No "research quantum" (monetary value to a university) is assigned to science communication successes because government funding does not include it. For example, I led a team in collaboration with NASA to produce a science-related project, which in 18 months attracted five million hits on the associated wiki. Why is that not worth quantum for the research centre in which it was done?
Science communication is still - more often than not - included almost as an afterthought to science research. I've been involved in a major report on a specific area of research where I fought for more than two years for the education and outreach component to be recognised as more than the required add-on with little or no funding. We are reaching out to tomorrow's scientists: after three decades of declining interest in science in our high schools, goodness knows we need it.
Scientists are frequently called on to do more in communicating their science to the public. But what evidence is there that it does anything to improve the public understanding of science? Borchelt and his team found that scientists were more likely raising the awareness of science rather than increasing the public understanding of science, but in either case there was very little evidence of effectiveness.
Perhaps the most worrying aspect of all is the silo mentality that seems to exist between science education and the public domain. That applies especially, but not exclusively, to Australia. In one silo, educators aim to produce scientifically literate (or, in latest-speak, scientifically capable) citizens. Unless they go on to tertiary science, high school students leave their science education and enter the other silo - the adult world.
The obvious question is: how do we know the science education has effectively prepared the student to be a scientifically literate adult? Plainly, we don't know. While Australia participates in international testing of Australian students in earlier years, there appears to be no metric to measure how that translates into the adult arena.
Governments all over the world worry about the adult silo. In the US, the adult scientific literacy rate is 28%. As low as that is, for almost 50 years until about four years ago the figure was half that, in spite of innovations in science education and outreach. Science communication researcher Jon D. Miller attributes the recent dramatic rise in the scientific literacy rate to a requirement that all university students take one year of science.
In 2005 and 2006 I applied the US adult measure of scientific literacy to 692 students in seven Australian high schools and three Welsh high schools. The scientific literacy rate, using the adult measure, was just 7%. Among the brightest science students the score was around 20%. Among 150 first year university arts students the rate was around 15%.
Using the standard adult question to elicit the scientific literacy of high school students reveals worrying data. The question, asked in adult surveys since 1957, is: "What does it mean to study something scientifically?" Of those able to give at least a minimally acceptable response, only two respondents used the word "predict" or "predictive", and only 4% used "discovery", "explore", or any word or phrase with a similar meaning.
While the above statistics are breathtaking, another statistic eclipses them. It relates to student understanding of the process of science: ALL survey respondents reflected a linear view of the practice of science. In other words, students cite the linear hypothesis-data collection-analysis-interpretation-conclusion model of the scientific method, rather than the non-linear cycles of hypothesis, data collection and analysis that take place in real science before moving on to any conclusions. The Australian Bureau of Statistics has identified this key difference in approach between school science and real science, but it seems no-one has noticed.
Qualitative data from 21 in-depth interviews showed that students generally viewed science as a static, boring subject demanding infinite detail, bearing little relationship to the creative, probabilistic, predictive, empirical and human activity that it actually is.
The survey replies hint at a sharp difference between school science and real science. And we may have more to worry about than the process of science alone. For example, Wong and Hodson (2008) show that science curricula and textbooks stand in stark contrast to the actual practice of science. This hints at fundamental misunderstandings of science that are likely perpetuated in the public domain when the student becomes an adult.
While there is no evidence of cause and effect between school science and scientific literacy among the adult public, there are examples of scientific illiteracy among the Australian public. For instance, a focus group of educated Melbourne adults recently discussed genetically modified foods and came to the vehement conclusion that GM foods have the potential to cause Thalidomide-type deformed babies.
One of the key tenets of good communication, science or otherwise, is to "know your audience". On this rests the most serious issue of all for Australia. Unlike the US, Europe, and elsewhere, we do not test public audiences for their understanding of science, so we have no way of knowing how well our students fare as adults in a science-based world. As far as I know, no country attempts to understand the strength of the bridge between a science education and the application of that education in the adult world, so in this respect it is not just Australia missing this obvious big-picture gap.
How does this impact on public audiences consuming science via the mass media? Journalists have little but intuition on which to base assumptions about reader/listener/viewer understanding of science, let alone any knowledge of how information delivered via the mass media shapes personal world views about science. But as a former science journalist, I know the standard media response: we do not aim to educate, only to inform, based on the evidence that science provides.
Perhaps the Australian word for a type of camping gear - a swag - is actually an acronym for the effectiveness of science communication: Stupid Wild Assumptions and Guesses!
More seriously, how can we understand what we are doing (or achieving) in science education, outreach, and in promoting the public understanding of science, unless we have the evidence of effectiveness in any of those areas?
Significant change needs to happen if we are going to do more than what simply seems intuitively good to do - from high school science education to science communication in the public domain. We need to develop research tools to understand the issues, and to apply them in ways that let us predict the outcomes of curricula and science communication activities. We will then have data to inform a wide range of stakeholders on how to shape science curricula and science communication activities, based on evidence, and to improve the effectiveness of both.