by Ciel Amy Wharton
The accuracy and reliability of data collected in wildlife surveys can be affected by observer skill. Field sign surveys are especially sensitive to observer effects because of the proficiency required to correctly identify tracks and signs. As a result, there is a need for a method to systematically measure the skill of participants in wildlife research. CyberTracker Conservation has created one such tool, the Tracker Evaluation.
I analyzed the utility of this evaluation system both as a mechanism for assessing observer skill and as a training tool. I present a case study of two Tracker Evaluation workshops for 19 Texas Parks and Wildlife Department employees. All participants improved their scores from the first evaluation workshop (mean = 62%) to the second (mean = 79%) three months later. The mean increase in score was 17 percentage points, with some participants increasing their scores by nearly 30 percentage points. In response to an in-house questionnaire, participants stated that the evaluation process measured their tracking skills well (mean = 4 on a 5-point Likert scale). Participants' confidence in correctly identifying animal tracks and sign increased from the first to the second workshop. Overall, participants were very satisfied with the workshops (mean = 5). This case study illustrates that the Tracker Evaluation has the potential to serve as both a local and an international standard for data collectors while simultaneously functioning as an effective training instrument. With broader application of this system to wildlife research and monitoring programs that use field signs, managers could better understand observer reliability and its implications for the interpretation of survey data.