Determining Crash Risk: What Works (and What Doesn’t)


VTTI pioneered the use of naturalistic driving studies where real-world driver behaviors are unobtrusively captured for data analysis.

We’ve been working on a series about the evolution of the next wave of transportation innovation: automated driving. You can find Part 1 here; Part 2, on how connected vehicles fit within the overall scheme of automation, is in the works.

But I wanted to detour for this entry to address another immediate research concern.

Recently, VTTI researchers have been making the media rounds, having been asked to respond both to the recent push by the National Highway Traffic Safety Administration (NHTSA) to equip all new light vehicles with connected-vehicle technology and to our own follow-on $3 million connected-vehicle project award. During these national discussions, a sub-dialogue has emerged regarding the crash risk associated with driver distraction.

At VTTI, we perform many different kinds of studies because all have strengths and weaknesses. That is, we consider a range of testing methods that predominantly include real-world driving (in fact, VTTI pioneered the use of naturalistic driving studies) as well as test-track studies, simulator studies, and surveys (e.g., Dingus [In Press]).

Other researchers, however, may generally use study methods that consider only controlled situations (such as a simulator) or self-reported behaviors (normally determined through surveys). These types of studies can provide valuable insight when appropriately designed, conducted, and interpreted. However, it has become popular to try to estimate crash risk from survey or simulator data, even though these methods often omit factors present in the larger context of driving, do not account for inaccuracies in self-reports, do not account for driver adaptation, and/or do not necessarily represent the behavior of all driving populations.

For instance, Dr. Paul Atchley, professor of psychology at the University of Kansas, has used such methods as student surveys (e.g., Atchley et al. [2012]) to claim that the risk of distracted driving (e.g., texting, talking on a cell phone) is up to six times higher than alcohol impairment. Certainly, VTTI agrees that any type of distraction is a problem for driver safety; just take a look at VTTI Impact to see how we help inform local, state, national, and international policies about such distraction-related issues as cell phone use and drowsiness. The problem is not Dr. Atchley’s message itself, but rather, the accuracy of the facts used to compose the message.

Let’s do some back-of-the-envelope calculations based on Dr. Atchley’s assertion, using cell phone conversation as the comparative measure:

According to the National Household Travel Survey, there are about 233 billion driving trips taken annually in the United States.

According to MADD, there are about 116,500,000 trips taken each year where drivers are under the influence, or about 1 out of every 2,000 trips.

The NHTSA Fatality Analysis Reporting System (FARS) lists about 10,000 fatalities a year caused by drunk drivers.

NHTSA believes that about 10 percent of drivers are using a cell phone while driving.

VTTI’s naturalistic data show that about 5 percent of the time (or about 1 out of every 20 trips) that someone is driving, they are engaged in a cell conversation.

Thus, the ratio of cell conversation trips to alcohol-impaired trips is about 100:1.

If a cell conversation carried the same risk as alcohol impairment, one would expect 10,000 (alcohol fatalities) x 100 (the ratio of cell trips to alcohol-impaired trips), or 1,000,000, fatalities per year in the U.S. due to cell conversations.

If Dr. Atchley is correct that the risk of driver distraction is actually up to six times higher than that of alcohol impairment, then one would expect as many as 6 x 1,000,000, or 6,000,000, fatalities per year due to cell conversations, or about 200 times the approximately 30,000 total annual fatalities reported by FARS in recent years.
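The arithmetic above can be sketched in a few lines. All figures are the rounded estimates quoted earlier in this post; nothing here is new data:

```python
# Back-of-the-envelope check of the "six times worse than alcohol" claim,
# using the rounded figures quoted above.
annual_trips = 233e9          # total U.S. driving trips per year (NHTS)
impaired_trips = 116.5e6      # alcohol-impaired trips per year (MADD)
alcohol_fatalities = 10_000   # annual drunk-driving fatalities (FARS)
cell_trip_fraction = 0.05     # share of trips with a cell conversation (VTTI)

cell_trips = annual_trips * cell_trip_fraction
ratio = cell_trips / impaired_trips        # cell trips per impaired trip

expected_equal_risk = alcohol_fatalities * ratio   # if risks were equal
expected_six_times = 6 * expected_equal_risk       # under the "6x" claim

print(round(ratio))                # prints 100
print(round(expected_equal_risk))  # prints 1000000
print(round(expected_six_times))   # prints 6000000
```

Even this crude calculation shows the implied fatality count exceeding the total number of annual U.S. traffic deaths by two orders of magnitude.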

Obviously, I have made some assumptions and estimations to illustrate this point. Even so, it is simply not possible that a cell conversation carries an odds ratio* of 24; the reported results cannot paint a realistic picture.

But how can this be? How can a reputable scientist from a reputable university be off by such a large margin? The answer is simple and goes back to the point made earlier: There is no way to predict crash risk, either directly or indirectly, from the use of laboratory or survey data. Using a simulator or survey method, you can measure detriments in such factors as driver performance when conducting two or more tasks simultaneously, or you can determine self-reported willingness to engage in a risky behavior. But these do not equate to crash risk. Drivers adapt in the real world, where the context of real driving is much more complex.

We see first-hand just how complex it is: VTTI researchers have authored myriad articles (e.g., New England Journal of Medicine) and final reports that use naturalistic driving data to detail crash risk among different driving populations and across secondary tasks (e.g., texting, reaching for a phone, dialing, etc.). Multiple national and international safety agencies and industry leaders use our findings to highlight the risks of distracted driving, including the “23 times” statistic featured in national anti-distraction campaigns ranging from the New York Times to the Ad Council to AT&T. The “23 times” message came from our Driver Distraction in Commercial Vehicle Operations study, which evaluated data from two VTTI naturalistic driving studies and found that texting while driving raises a heavy truck driver’s crash and near-crash risk by—you guessed it—23 times. The crash/near-crash risk included crash-relevant conflicts and unintended lane deviations, what is collectively referred to as “safety-critical events.”** The “23 times” message ultimately helped lead the U.S. Department of Transportation to issue a call to end distracted driving. As a result, 41 states and Washington, D.C., have now banned text messaging for all drivers.

Self-reported surveys alone cannot accurately tell us all there is to know about driver behavior in the context of driving. In an environment such as a simulator, there is no life-threatening stress; simulators simply don’t crash (unless they have a bad operating system, pun intended). There are very good, award-winning simulator studies (e.g., Lee et al. [2002], Ranney et al. [2011]) out there that help us learn a significant amount about dual-task performance under conditions of impairment, distraction, fatigue, etc. But we currently cannot, and perhaps never will be able to, assess or predict crash risk from operator-in-the-loop simulation or surveys alone.

*Odds ratio: The odds of an event are defined as the probability that the event occurs divided by the probability that it does not. An odds ratio compares the odds of an event (e.g., a crash) under one condition (e.g., presence of inattention) with the odds under another condition (e.g., absence of inattention).
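As a toy illustration of the definition (the counts below are hypothetical and chosen only so the result echoes the "23 times" figure; they are not VTTI data):

```python
# Odds = P(event) / P(no event); an odds ratio compares odds across conditions.
def odds(events: int, non_events: int) -> float:
    """Return the odds of an event given counts of occurrences and non-occurrences."""
    return events / non_events

# Hypothetical counts from an imagined 2x2 table of driving epochs.
exposed = odds(events=23, non_events=100)    # e.g., epochs while texting
unexposed = odds(events=1, non_events=100)   # e.g., epochs while attentive

odds_ratio = exposed / unexposed
print(odds_ratio)  # prints 23.0
```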

**VTTI analyzes both crashes and near-crashes when assessing driver distraction, and with good reason: there is a growing body of research (e.g., Guo et al. [2010]) that has found near-crashes are effective and predictive substitutes for crashes. Newer large-scale naturalistic driving studies led by VTTI, such as the Second Strategic Highway Research Program, will allow us to analyze crash-only data in future studies.


Atchley, P., Hadlock, C., & Lane, S. (2012). Stuck in the 70s: The role of social norms in distracted driving. Accident Analysis & Prevention, 48, 279-284.

Dingus, T.A. (In press). Estimates of prevalence and risk associated with inattention and distraction based upon in situ naturalistic data. Annals of Advances in Automotive Medicine.

Guo, F., Klauer, S.G., McGill, M.T., & Dingus, T.A. (2010). Evaluating the relationship between near-crashes and crashes: Can near-crashes serve as a surrogate safety metric for crashes? (Report No. DOT HS 811 382). Washington, D.C.: National Highway Traffic Safety Administration.

Lee, J.D., McGehee, D.V., Brown, T.L., & Reyes, M.L. (2002). Collision warning timing, driver distraction, and driver response to imminent rear-end collisions in a high-fidelity driving simulator. Human Factors.

Ranney, T.A., Baldwin, G.H.S., Parmer, E., Martin, J., & Mazzae, E.N. (2011, August). Distraction effects of manual number and text entry while driving. (Report No. DOT HS 811 510). Washington, D.C.: National Highway Traffic Safety Administration.
