[Image caption: Lyria Bennett Moses: The use of powerful AI technology in policing can create challenges]

It is the arrival of AI that has added urgency to discussions about the law and technology.
But Professor Bennett Moses does not subscribe to the old meme that the law continually fails to keep up with technology. "I've always thought that was an unfair criticism," she says, seeing it more as part of tech marketing, as companies try to project a future-looking buzz.
"Law is all about how you regulate, run, manage a society. And it is always written in the present," Prof Bennett Moses says.
"You can only think about how people should behave, what people should be allowed to do or not do in the context of what kinds of activities are possible [in the present].

"You cannot write laws for future technology, no matter how good you think your crystal ball is."
Which brings us to predictive policing.
"Predictive policing is effectively police asking a different kind of question. Instead of saying 'how do we solve this crime that has already happened', it is reorienting the question to say 'can we predict – probabilistically – where crime might take place in future'," Prof Bennett Moses says.
"Is crime more likely to take place in this location in the coming week [for example]? And the answer is yes, you can – up to a point."
Different kinds of predictive policing have already been in use for many years. The tools range from literally using a spreadsheet to track crime at a basic level, all the way up to sophisticated machine learning and applied data techniques.
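At its most basic, the kind of location-level question the professor describes can be sketched as a simple frequency estimate over historical reports. The figures and area names below are entirely invented for illustration; no real department's method or data is implied.

```python
# Toy sketch of location-based predictive policing (illustrative only):
# estimate, from past weekly report counts, the probability that at least
# one incident is reported in each area next week. All data is invented.
from collections import defaultdict

# Hypothetical history: (area, incidents reported that week)
reports = [
    ("A", 3), ("A", 1), ("A", 0), ("A", 2),
    ("B", 0), ("B", 0), ("B", 1), ("B", 0),
]

weeks_with_incident = defaultdict(int)
weeks_observed = defaultdict(int)
for area, count in reports:
    weeks_observed[area] += 1
    weeks_with_incident[area] += 1 if count > 0 else 0

# Naive frequency estimate: P(at least one reported incident next week)
for area in sorted(weeks_observed):
    p = weeks_with_incident[area] / weeks_observed[area]
    print(f"area {area}: estimated probability = {p:.2f}")  # A: 0.75, B: 0.25
```

Note that even this toy version only estimates the probability of *reported* incidents – a limitation that becomes central below.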
Indeed, the notion of predictive policing has been around forever, even if only to allocate resources – like extra patrols – on given days of the week.
And this is where things get complicated with predictive policing, and where the professor has a lot to say in this podcast. The application of powerful AI adds significant juice to existing challenges.
These issues relate to data-driven inferencing, not just bias. The use of existing crime databases is problematic because the data is itself an imperfect representation of reality. Some neighbourhoods report more crime than others, for example, and domestic violence is a notoriously under-reported crime.
And besides, crime databases are a compendium of crimes that have been reported, not crimes that have actually taken place.
And using smart data techniques to send more patrol cars to a particular location can create a feedback loop: the more patrol cars, the more reported crime – and therefore the need for even more patrol cars.
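That feedback loop can be made concrete with a minimal simulation. In the hypothetical scenario below, two areas have exactly the same true crime rate, but the area that starts with more patrols generates more *reported* crime, so a naive "follow the reports" allocation rule drains patrols from the other area entirely. Every number here is invented.

```python
# Minimal simulation of the patrol-allocation feedback loop (all values
# invented). Both areas have the SAME true crime rate, but reported crime
# scales with patrol presence, and patrols chase reported crime.
TRUE_RATE = 10              # actual incidents per week, identical in each area
DETECTION_PER_PATROL = 0.1  # fraction of incidents detected per patrol car

patrols = {"A": 6, "B": 4}  # slightly uneven starting allocation
for week in range(4):
    # Reported crime depends on patrol presence, not just on true crime
    detected = {area: TRUE_RATE * min(1.0, DETECTION_PER_PATROL * n)
                for area, n in patrols.items()}
    # Naive rule: shift one car toward the area with more *reported* crime
    hi = max(detected, key=detected.get)
    lo = min(detected, key=detected.get)
    if patrols[lo] > 0:
        patrols[hi] += 1
        patrols[lo] -= 1

print(patrols)  # prints {'A': 10, 'B': 0}
```

A small initial imbalance, amplified week by week, ends with all patrols in area A – despite both areas being equally crime-prone by construction. That is the feedback loop in miniature.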
So predictive policing is already problematic at the level of locations. But it gets more problematic still when it's applied to profiling individuals and making predictions about the likelihood that a particular individual will commit a crime.