Statistical validity is one of those things that is vitally important in conducting and consuming social science research, but less than riveting to learn about. It doesn’t help that people use the term “validated” very loosely. In a health coaching context, I hear mention of “validated instruments” and “validated outcomes” without a consistent meaning behind the terms.
Most of the data I work with is self-report, provided by a user to a database via a device like a computer or a mobile phone. No live counselor or coach processes that information before it’s crunched in the database and appropriate content selected for the user to read.
There are drawbacks to this method, to be sure. We don’t have the luxury of interpreting non-verbal cues like facial expression or tone of voice that could add nuance to a user’s words. We can’t be as responsive with follow-up questions as we would be in a live conversation, since any follow-ups and their associated skip logic are pre-written. And we don’t give users an opportunity to add color commentary, which leads to occasional frustrated feedback from users who really want to explain the specific circumstances of their health.
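To make "pre-written follow-ups and skip logic" concrete, here is a minimal sketch of one common way such branching is represented: each question maps possible answers to the next question to ask. The question IDs and wording below are entirely hypothetical, invented for illustration; they don't come from any real system.

```python
# Hypothetical skip-logic table: each question names the next question
# to ask depending on the answer. "*" is a catch-all branch; None ends
# the sequence. All IDs and text are illustrative, not from a real tool.
QUESTIONS = {
    "smoke": {
        "text": "Do you currently smoke?",
        "next": {"yes": "smoke_freq", "no": "exercise"},
    },
    "smoke_freq": {
        "text": "About how many cigarettes per day?",
        "next": {"*": "exercise"},
    },
    "exercise": {
        "text": "How many days a week do you exercise?",
        "next": {"*": None},  # end of this branch
    },
}

def next_question(current, answer):
    """Return the ID of the next question given an answer, or None if done."""
    branches = QUESTIONS[current]["next"]
    return branches.get(answer, branches.get("*"))

print(next_question("smoke", "no"))  # a non-smoker skips the frequency question
```

Because every branch is fixed in advance, the system can only ask what was anticipated at design time; that rigidity is exactly the drawback described above.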
Would you change your behavior if you knew your doctor could keep tabs on you?
Would you pause before putting the chocolate or the chips in your grocery cart? Before forking over your credit card for a pack of cigarettes? What if your doctor could tell how often your gym membership card was swiped for entry? Would you go more often?
Following the Boston Marathon on Monday, the New York Times has a thought-provoking analysis of marathon finish times. Theoretically, finish times should be smoothly distributed around a mean. In particular, there’s no real reason why, say, a 3:29 finish should be noticeably more common than a 3:31. But it is.
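The pattern is easy to illustrate with a toy simulation. The sketch below uses entirely synthetic data (no real race results): finish times are drawn from an assumed bell-shaped distribution, and a fraction of runners who would have finished just over 3:30 are nudged under it, a crude stand-in for goal-driven pacing. The mean, spread, and nudge probability are all made-up parameters.

```python
# Synthetic illustration of "bunching" just below a round goal time.
# A smooth distribution would put MORE runners in the 3:30-3:34 bin than
# in 3:25-3:29 (it sits closer to the mean); goal-chasing flips that.
import random

random.seed(42)

times = []
for _ in range(10_000):
    t = random.gauss(240, 40)  # base: mean 4:00, sd 40 min (assumed)
    # Half the runners on pace for just over 3:30 (210 min) push to beat it.
    if 210 < t < 215 and random.random() < 0.5:
        t = random.uniform(205, 210)
    times.append(t)

just_under = sum(1 for t in times if 205 <= t < 210)  # 3:25-3:29 finishes
just_over = sum(1 for t in times if 210 <= t < 215)   # 3:30-3:34 finishes
print(just_under, just_over)
```

Even this crude model reproduces the signature the Times found: a surplus of finishers just under the round number and a deficit just over it, a discontinuity a smooth distribution can't explain.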