A decade ago, President Barack Obama proposed spending $75 million over three years to help states buy police body cameras to expand their use. The move came in the wake of the killing of teenager Michael Brown, for which there was no body camera footage, and was designed to increase transparency and build trust between police and the people they serve.
Since the first funds were allocated in 2015, tens of millions of traffic stops, accident responses, street stops, arrests and the like have been recorded with these small digital devices, which police attach to their uniforms or jackets. The footage has proved useful as evidence in disputed events, such as those that led to the death of George Floyd in Minneapolis in 2020, and its presence may also deter bad behavior in interactions between the police and the public.
But unless something tragic happens, body camera footage is generally not seen. “We spend so much money collecting and storing this data, but it’s almost never used for anything,” says Benjamin Graham, a political scientist at the University of Southern California.
Graham is among a small number of scientists who are treating this footage as data rather than just evidence. Their work leverages advances in artificial-intelligence-based natural language processing to automate the analysis of transcripts of citizen-police interactions. The findings have enabled police departments to identify policing problems, find ways to solve them and determine whether those fixes improve officers’ behavior.
Only a handful of police agencies have opened their databases to researchers so far. But if this footage were regularly analyzed, it would be “a real game changer,” says Jennifer Eberhardt, a psychologist at Stanford University who pioneered this line of research. “We can see beat-by-beat, moment-by-moment how an interaction unfolds.”
In articles published over the past seven years, Eberhardt and her colleagues have analyzed body camera footage to reveal how police speak differently to Black and white people and what kinds of conversations are likely to gain a person’s trust or predict an undesirable outcome, such as handcuffing or arrest. The findings have informed improvements to police training, and in a study published in PNAS Nexus in September, the researchers showed that the new training changed officers’ behavior.
“By taking these types of learnings and making improvements in your department, it helps build trust in communities that have very low levels of trust,” says LeRonne Armstrong, former chief of the Oakland Police Department in California, which has an ongoing collaboration with the Stanford team.
The approach is catching on gradually. Encouraged by the Stanford findings, the Los Angeles Board of Police Commissioners, which oversees the Los Angeles Police Department (LAPD), asked USC for help in making sense of the department’s footage. A project to analyze 30,000 body camera videos from a year’s worth of traffic stops is now underway. And the Stanford team is partnering with the San Francisco Police Department to use body camera footage to evaluate a program that sends officers to Birmingham, Ala., to learn about the civil rights movement and the principles of nonviolence.
Stanford’s work began in 2014 following a scandal involving the Oakland Police Department. Four Oakland officers known as “The Riders” were accused of roughing up and arresting innocent people and planting drugs on them, among other crimes, in the late 1990s. Of the 119 plaintiffs, 118 were Black. As part of the $10.9-million settlement agreement, the department was required to collect data on vehicle and pedestrian stops and analyze them by race. More than a decade after the deal was reached, the department’s federal monitor asked Eberhardt for help.
The plaintiffs’ attorneys told Eberhardt that what they most wanted to know was what happened after the cruiser lights came on—why officers were stopping people and how the interactions went. The department had been an early adopter of body cameras, having rolled them out five years earlier. “You actually have the footage,” Eberhardt recalls telling them, though no one in the department had thought to use it for that.
Eberhardt enlisted Stanford linguist and computer scientist Dan Jurafsky and his then student Rob Voigt, now a computational linguist at Northwestern University, to develop an automated way to analyze transcripts of nearly 1,000 traffic stops. The researchers set out to measure whether officers spoke less respectfully to Black drivers than to white drivers. First, people rated the respectfulness of passages in the transcripts. The team then built a computational model that linked those ratings to various words or phrases and assigned numerical weights to those phrases. Expressing concern for the driver, for example, was weighted as highly respectful, whereas addressing the driver by first and last name was weighted as less respectful.
The model then assigned a respect score to all of the officers’ language during a month’s worth of traffic stops, and the researchers correlated those scores with the race of the person pulled over, among other variables. They found a slight racial disparity in the respectfulness of officers’ language. When speaking with Black drivers, officers were less likely to state the reason for the stop, offer reassurance or express concern for the driver’s safety, for example. The disparity ran throughout the interaction and did not depend on the race of the officer, the reason for the stop, its location or its outcome.
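The general shape of such a model—human ratings tied to weighted linguistic features that can then score new utterances—can be sketched in a few lines. Everything in this sketch (the feature names, regex patterns and weights) is invented for illustration; the actual study learned its features and weights from annotated transcripts.

```python
# Illustrative sketch of a respect-scoring model: linguistic features
# detected in an utterance each carry a learned weight, and the score
# is the sum of the weights of the features present.
# All patterns and weights below are hypothetical, not from the study.
import re

FEATURE_WEIGHTS = {
    "offers_reassurance": 0.8,   # e.g., "no worries"
    "expresses_concern": 0.7,    # e.g., "drive safe"
    "gives_reason": 0.5,         # e.g., "the reason I stopped you"
    "formal_title": 0.4,         # e.g., "sir", "ma'am"
    "bare_command": -0.6,        # e.g., "hands on the wheel"
}

FEATURE_PATTERNS = {
    "offers_reassurance": r"\b(no worries|don't worry|no problem)\b",
    "expresses_concern": r"\b(drive safe|be careful|safety)\b",
    "gives_reason": r"\b(the reason|because|i stopped you)\b",
    "formal_title": r"\b(sir|ma'am)\b",
    "bare_command": r"^(hands|step out|put your|turn off)\b",
}

def respect_score(utterance: str) -> float:
    """Sum the weights of every feature detected in an utterance."""
    text = utterance.lower()
    return sum(
        FEATURE_WEIGHTS[name]
        for name, pattern in FEATURE_PATTERNS.items()
        if re.search(pattern, text)
    )

print(respect_score("The reason I stopped you, sir, is your taillight."))
print(respect_score("Hands on the wheel."))
```

The first utterance scores positive (it gives a reason and uses a formal title); the second scores negative (it opens with a bare command). A real model would learn these weights by regressing human ratings onto thousands of such features.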
Those initial results, published in 2017, had a big impact on Oakland. “When Stanford released the findings, it was almost like a relief to minority communities,” says Armstrong. “This validated concerns people have always felt, and the department re-examined how we train our officers to communicate with our community.”
The Stanford team used the findings to develop a “respect” module for a procedural justice training program offered by the department. Procedural justice seeks to build fairness into police procedures; in addition to emphasizing respect, it calls for police to explain their actions and to let people give their point of view. As part of this effort, the team used its computational model to pull out real interactions that were especially respectful or disrespectful. “As a training example, that feels a lot more legitimate to someone who’s training” than contrived scenarios, Jurafsky says. “[The officers] know their language.”
After the training went into effect, the researchers conducted another body camera study to determine whether officers used what they had learned. The Stanford team compared key features of officers’ language in 313 stops that occurred up to four weeks before the training with 302 stops that occurred in the four weeks after it. Officers who had gone through the training were more likely to express concern for driver safety, offer reassurance and give explicit reasons for the stop, the researchers reported in the September PNAS Nexus study.
Systematic analysis of body camera footage, Eberhardt says, offers a promising way to understand what types of police training are effective. “A lot of the training they have now is not rigorously evaluated,” she says. “We don’t know if everything they’re learning in those training sessions … actually translates into real interactions with real people on the street.”
In a study published last year, the Stanford researchers analyzed body camera footage to identify language associated with an “escalated outcome” of a traffic stop, such as handcuffing, a search or an arrest. Using footage from 577 stops of Black drivers in an undisclosed city, they found what Eberhardt calls a “linguistic signature” in the first 45 words an officer speaks: giving the driver orders from the start and providing no reason for the stop. “The combination of those two was a good sign that the stop would end with the driver being handcuffed, searched or arrested,” she says.
None of the stops in the study involved the use of force. But the researchers were curious whether the signature they found would be present in footage of the police interaction that led to Floyd’s death. It was. In the first 27 seconds of the encounter (roughly the time it takes officers to speak 45 words during traffic stops), the officer issued only orders and did not tell Floyd why he had been stopped.
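The signature itself—orders without a stated reason in an officer’s first 45 words—can be sketched as a crude rule-based check. The cue lists and substring matching below are purely illustrative; the study worked from annotated transcripts rather than keyword spotting.

```python
# Illustrative check for the "escalation signature": an officer's first
# 45 words contain commands but give no reason for the stop.
# The cue lists are hypothetical, invented for this sketch.
ORDER_CUES = ("turn off", "step out", "get out", "hands", "put your")
REASON_CUES = ("the reason", "because", "pulled you over", "stopped you")

def escalation_signature(officer_speech: str, window: int = 45) -> bool:
    """True if the first `window` words include an order cue and no
    reason cue (crude substring matching, for illustration only)."""
    opening = " ".join(officer_speech.lower().split()[:window])
    gives_order = any(cue in opening for cue in ORDER_CUES)
    gives_reason = any(cue in opening for cue in REASON_CUES)
    return gives_order and not gives_reason

print(escalation_signature("Turn off the car. Hands where I can see them."))
print(escalation_signature("Good evening. The reason I pulled you over is a broken taillight."))
```

The first opening trips the signature; the second does not, because it explains the stop. The 45-word window mirrors the span the researchers examined.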
The USC team recruited a diverse group of annotators, including formerly incarcerated people and retired police officers, to judge interactions captured by LAPD body cameras for politeness, respect and other aspects of procedural justice. The team plans to use advances in AI to capture subtler insights, such as why a statement meant to be funny or respectful might instead be perceived as sarcastic or disrespectful. “The biggest hope is that our work will improve LAPD officer training by providing a data-driven way to update and modify training procedures to better suit the populations they serve,” says USC cognitive scientist Morteza Dehghani, who co-directs the project with Graham.
Politics may dissuade police departments from sharing footage with academics; in some cases, departments may be reluctant to surface systemic problems. In the future, however, departments will be able to analyze the footage themselves. Some private companies, such as TRULEO and Polis Solutions, already offer software for that.
“We’re pushing departments to be able to use these tools and not just make it an academic exercise,” says Nicholas Camp, a social psychologist at the University of Michigan who worked on Eberhardt’s team. But commercial models have not been entirely transparent—users cannot inspect their component modules—so some academics, including Camp and Dehghani, are wary of their products.
USC plans to make the language models the team builds, which will be open for inspection, available to the LAPD and other police departments so officers can continuously monitor their interactions with the public. “We should have much more detailed information about how these day-to-day interactions are going. That’s a big part of democratic government,” Graham says.