Friday, July 11, 2008

What National Park Science Means to Me

From our first guest contributor!


Hi, I’m Jerry Freilich, Research Coordinator at Olympic National Park. I’m also head of the North Coast & Cascades Research Learning Network, the organization Michael Liang has been working with to improve science communication in Northwest National Parks. Michael invited me to contribute to his blog. I’ve been with the Park Service for 18 years in six National Parks working as a law enforcement ranger, naturalist, field biologist, and now as a science administrator.

This summer, Michael has been artfully helping us turn some of the national parks’ Inventory & Monitoring results into clear, understandable language for non-technical audiences. This is important work. After all, taxpayers pay for our national parks, and they won’t be concerned about the parks unless they’re aware of the problems those parks face. Beginning in the 1990s, national parks began monitoring their resources with the goal of detecting human-induced changes. Michael’s efforts have centered on communicating the results of that monitoring. So I thought I’d offer some comments on how these results are actually produced. What does it take to run a monitoring program? And what drives researchers to the Herculean effort required to make it happen?

When you read that spotted owls are declining or that exotic weeds are increasing, these terse facts may seem trivial or unrelated to day-to-day problems. Yet those terse results conceal the great difficulty of gaining that information. It’s funny, but the public seems to view science with reverent detachment. Perhaps they picture park scientists in white lab coats, walking around spacious labs gently shaking test tubes. The more likely picture is a spotted owl researcher, some rain-drenched soul, clinging to a steep, forested hillslope. Or a drier but equally agonized biologist struggling for the fifth consecutive hour over some ghastly multivariate analysis as the clock ticks relentlessly towards the reporting deadline. Hard data are called that for a reason: they’re hard to get! Being a National Park scientist is no “ivory tower joyride,” and what many don’t realize is that science is actually a form of battle, fought with numbers and peer-reviewed publications.

Theoretically, monitoring plants and animals should be relatively straightforward. The team goes out, counts the birds, comes back to the office, and writes down what they saw. Simple, right? It isn’t. The whole process is fraught with peril from beginning to end. So you want to study aquatic insects? Well, what method do you use? What are the biases of the ten different possible methods? What are the experimental assumptions of each? What statistical method should be used to analyze the data? Is it even possible to gather enough data to produce a properly replicated, statistically valid result? How many samples are needed to detect a change of ±15% with greater than 90% probability of being correct? Is the thing worth doing at all if you’re unlikely to find a significant result until the 5th year of the study? The 20th year? Enormous, contentious arguments, even among well-meaning specialists in a given field, can chew up years of precious time before the monitoring even gets underway.
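The sample-size question above can be sketched with a textbook normal-approximation power calculation. This is a simplified illustration, not any park’s actual monitoring protocol, and every number in it (the mean count, the coefficient of variation) is hypothetical:

```python
import math
from statistics import NormalDist

def samples_per_group(mean, cv, rel_change=0.15, alpha=0.05, power=0.90):
    """Samples per survey period needed to detect a relative change.

    Normal-approximation formula for a two-sample comparison of means:
        n = 2 * sigma^2 * (z_{alpha/2} + z_{power})^2 / delta^2
    """
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)   # two-sided significance threshold
    z_b = z.inv_cdf(power)           # quantile for the desired power
    sigma = cv * mean                # per-sample standard deviation
    delta = rel_change * mean        # absolute change we want to detect
    return math.ceil(2 * sigma**2 * (z_a + z_b)**2 / delta**2)

# Hypothetical numbers: a mean count of 100 per plot with a 40%
# coefficient of variation already demands roughly 150 plots per
# survey period to detect a 15% change with 90% power.
print(samples_per_group(mean=100, cv=0.4))
```

Note that the required sample size grows with the square of the variability, which is why noisy field counts make “just count them” far more expensive than it sounds.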

Just as an example, I worked on threatened desert tortoises at Joshua Tree National Park in California for six years. Everyone seemed to agree that the tortoises were disappearing. But documenting the decline requires a valid, repeatable field method. When I first met with biologists from the other parks that have tortoises, they all said, “well, let’s just use the easiest method, count them, and we’ll report the results.” Alas, for better or worse, my research specialty was determining the sensitivity of the different methods used for counting tortoises. To make a very long story short, after years of work, dozens of meetings, and four publications in peer-reviewed journals, I am here to tell you that there is STILL no known method for accurately counting desert tortoises in areas where they occur at low densities (which accounts for most of the Mojave Desert).


So why do we do this? It’s not just because the data we gather, analyze, and present are things we would like to know. They are things we must know in order to “preserve and protect [the parks] for the enjoyment of future generations,” as our organizing legislation commands. Moreover, scientific information is a tool – a weapon, actually – that can be used against those who would degrade park resources purposely or, more often, out of ignorance. Although hard data are hard to get, knowledge is power. Carefully constructed research holds up very well in boardrooms and in court. Thoughtful use of monitoring data can not only help managers make informed decisions – it can also influence the tone of public discourse, whether in the local press or when combined with data from other parks and passed on to Congress.

When we hear about science news in magazines or on TV the daunting mathematical and technical realities are frequently relayed in a cutesy or sugary way intended to shield folks from scary technical things they would not understand. But it shouldn’t be this way and it doesn’t have to be. The best nature educators know how to explain complex things plainly. And the most important thing is to never forget the reasons we go to the incredible lengths of doing this work.

To me, being a scientist is an expression of the utter passion I have for living things. It is my opportunity to spend hours reading technical papers and researching how nature works. It is an excuse to learn the Latin names of countless plants, birds, and bees. But most importantly it is a tool that ‘speaks truth to power.’ Science is my way of working to protect living things big and small.
