Welcome to the renaissance of surveillance

How AI tools are changing crime fighting and civil rights concerns

A scandal is brewing in the United States over law enforcement agencies’ use of Cybercheck, an AI-driven (machine learning) software tool from the Canadian firm Global Intelligence.

Cybercheck has been used nearly 8,000 times by hundreds of police departments and prosecutors across 40 U.S. states to solve serious crimes such as murder and human trafficking.

But while the police embrace the software and its maker defends it, defense lawyers on the other side of the courtroom are making serious allegations.

Imagine a scenario in which your entire digital life is visible to an algorithm that constantly assesses whether you might be a criminal. Cybercheck can analyze huge amounts of internet data and invade your privacy without a search warrant or so much as a whisper of consent. The potential for abuse is not a paranoid fantasy but an urgent problem.

The software’s reliability, its transparency, and the trustworthiness of Cybercheck founder Adam Mosher are all being questioned.

Cybercheck is designed to help gather evidence by providing data from publicly available sources – what the company calls “open source intelligence.”

These are email addresses, social media accounts, and other pieces of the trail of personal information that people leave behind on the internet. The aim is to locate suspects and provide other data to law enforcement authorities.
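Cybercheck’s own methods are secret, but “open source intelligence” gathering in general is well documented. As a rough illustration only – the sites, URL patterns, and handle below are hypothetical examples, and real platforms rate-limit or block automated checks – here is a minimal Python sketch of one common technique: testing whether a handle resolves to a public profile.

```python
import requests  # third-party HTTP library, assumed to be installed

# Illustrative URL patterns only; real OSINT tools cover hundreds of sites.
SITES = {
    "github": "https://github.com/{}",
    "reddit": "https://www.reddit.com/user/{}",
}

def probe_handle(handle: str) -> dict:
    """Return which sites answer a public profile URL with HTTP 200."""
    results = {}
    for site, pattern in SITES.items():
        try:
            resp = requests.get(pattern.format(handle), timeout=10)
            results[site] = resp.status_code == 200
        except requests.RequestException:
            results[site] = False  # network error: treat as not found
    return results

print(probe_handle("example_handle"))  # hypothetical handle
```

Tools in this category chain thousands of such lookups and cross-reference the results; the dispute is over how reliable that chaining is.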

But the problem, critics say, is that despite its widespread use and its consequences—namely, sending people to prison for serious crimes—Cybercheck largely flies under the radar.

This is not surprising considering that the “open source intelligence” collection software itself is closed source and proprietary. This means that no one other than those behind it knows exactly what methods it uses and how accurate it is.

Furthermore, Mosher himself is accused of lying under oath about his own expertise and the way the tool was used.

This point was formally raised by defense attorneys in a motion filed in Ohio requesting that Cybercheck’s algorithms and code be disclosed for inspection.

Of course, Mosher argued that this was not possible because this technology was protected by copyright. The request is part of a murder case being handled by the Akron Police Department.

In the Akron case, two men who denied committing the crime were charged on the basis of what Mosher calls “purely circumstantial” evidence: shell casings at the crime scene that matched a gun found in the home of one suspect, and a car registered in the other suspect’s name that was seen on surveillance footage near the scene.

But there was also evidence from Cybercheck linking the suspects’ physical location to the scene of the robbery and shooting, produced by algorithms that crawled their social profiles and searched the internet for more information.

Ultimately, the network address of an Internet-connected device, obtained via a surveillance camera’s Wi-Fi, was used as sufficient evidence to prosecute.

And although a camera was involved, there is no footage of the fatal incident. There is also no information on how exactly Cybercheck, to quote an NBC report, “found the camera’s network address or how it verified that the device was at the crime scene.”
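How Cybercheck does this is exactly what nobody outside the company knows. Purely for illustration, here is a hypothetical sketch of one way a network address could, in principle, be tied to a place: matching an access point’s hardware (MAC) address against a local snapshot of publicly collected “wardriving” observations. The file name, columns, coordinates, and MAC address below are all invented for the example.

```python
import csv
from dataclasses import dataclass

@dataclass
class Sighting:
    bssid: str   # access point hardware (MAC) address
    lat: float
    lon: float
    seen: str    # timestamp of the observation

def load_sightings(path: str) -> list:
    """Load a hypothetical local snapshot of Wi-Fi observations."""
    with open(path, newline="") as f:
        return [Sighting(r["bssid"], float(r["lat"]), float(r["lon"]), r["seen"])
                for r in csv.DictReader(f)]

def observed_near(sightings, bssid, lat, lon, radius=0.001):
    """Sightings of one BSSID inside a crude lat/lon bounding box."""
    return [s for s in sightings
            if s.bssid == bssid
            and abs(s.lat - lat) < radius
            and abs(s.lon - lon) < radius]

# Hypothetical usage with invented coordinates and MAC address:
sightings = load_sightings("wifi_observations.csv")
hits = observed_near(sightings, "aa:bb:cc:dd:ee:ff", 41.0814, -81.5190)
print(f"{len(hits)} recorded observation(s) of that BSSID near the scene")
```

Note what even a “hit” in such a lookup would and would not show: that the access point was observed at that spot at some point in time – not that a particular person’s device connected to it during the crime. That gap between a technical match and a human conclusion is precisely what the defense experts say they cannot audit.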

Other details that forensic experts hired by the defense could not figure out include how Cybercheck verified an email address allegedly used by both suspects; the experts were also unable to “locate” a social media account that was mentioned in one of the reports Cybercheck prepared.

It appears that police and prosecutors are not only taking this evidence at face value, but are also taking Mosher at his word when he says that Cybercheck’s accuracy is “98.2%.”
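For context, an accuracy figure is only meaningful relative to a verified test set. The arithmetic itself is trivial – the sketch below uses invented case counts chosen to reproduce the advertised figure:

```python
def accuracy(predictions, ground_truth):
    """Fraction of cases where the tool's output matched verified reality."""
    assert len(predictions) == len(ground_truth)
    correct = sum(p == g for p, g in zip(predictions, ground_truth))
    return correct / len(predictions)

# Invented numbers: 982 correct results out of 1,000 independently verified
# cases would be one way to substantiate a "98.2%" claim. No such verified
# dataset for Cybercheck has been made public.
preds = [1] * 982 + [0] * 18
truth = [1] * 1000
print(f"accuracy = {accuracy(preds, truth):.1%}")  # -> 98.2%
```

Without knowing the test set, the base rates, and who verified the ground truth, the number on its own says nothing.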

However, there is no information about how he arrived at this number. What is more, Cybercheck’s founder has himself stated that the software has never been peer reviewed.

Yet in a third case last fall, also in Akron, Mosher claimed that the University of Saskatchewan had in fact peer-reviewed the software. But the 2019 document he pointed to “appears to be an instruction document for the software and says nothing about who performed the review or what the result was,” NBC reported.

A university spokesman denied that the document had ever been peer-reviewed.

The judge now needs to decide whether to force Mosher to disclose the code of the controversial program.

Civil liberties are not merely threatened; they are at stake. Cybercheck’s ability to analyze data could be weaponized to suppress free expression and monitor political dissidents. It is a short step from using technology to track criminals to using it to monitor people who don’t follow the rules. In the digital echo chamber, such a tool could silence dissenting voices under the guise of public safety.

How Cybercheck works remains a secret. How are the algorithm’s targets determined? Who makes sure it doesn’t overshoot the mark? This lack of transparency fuels legitimate fears of a digital panopticon in which algorithms have the final say over one’s own data. The shadow this casts over how decisions are made and data is interpreted shows an urgent need for open regulation and public oversight.

Meanwhile, the market for mass surveillance tools, such as those used by U.S. law enforcement, is in turmoil. The commotion is in fact just the consolidation of a lucrative market.

And this consolidation seems to be taking place as it often does in a booming industry: through mergers and acquisitions with the aim of concentrating (economic) power in as few hands as possible.

This time it is SoundThinking, which is acquiring some divisions of Geolitica – the company formerly known as PredPol – developer of the predictive policing technology of the same name.

It looks like a classic takeover in the technology industry – a successful company is broken up to buy its most valuable parts: engineers and patents.

“We are in a period of consolidation where the big police tech companies are getting bigger, and this move is part of that process,” American University law professor Andrew Ferguson was quoted as saying in an interview with Wired.

Nothing against Ferguson’s credentials – he wrote a book called The Rise of Big Data Policing – but his conclusion about this latest acquisition is hardly rocket science. In other words, it’s pretty obvious what’s going on here.

But before dwelling on the obvious – the implications for people’s privacy and ultimately, somewhat ironically, for security – let us first get to know the actors.

Even though these companies operate largely in secret, often deliberately so, the main player, SoundThinking, hardly needs an introduction. Not when reports are already calling it “the Google of crime fighting” (“fighting crime” being a generous phrase).

The point of the comparison is that the company is piling up products and services in its industry, much as Google did in its own. And that is simply not a good thing.

It is not for nothing that there has been a lot of rebranding here: these types of companies, even if they appear only marginally in the mainstream media, have managed to build quite a bad reputation for themselves.

And so, what SoundThinking once openly called “predictive policing” – and the main product it sells to help law enforcement agencies “achieve” that goal – is now called “resource management for police departments.”

This player – not least with its cheerful, positive, wholesome name “SoundThinking” – seems to be a savvy operator. And a savvy buyer.

The takeover target this time is called Geolitica. There is little to be found on the company’s website – apparently it is a provider of “Trusted Services for Secure Communities.”

But if you delve a little into the company’s jargon, you’ll get a rough idea of what it’s all about: analyzing daily patrol patterns, managing patrol operations in real time, creating heatmaps for patrols, identifying resource hotspots.
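Geolitica’s actual models are proprietary, but the basic “hotspot” idea is simple enough to sketch. Here is a toy version, assuming nothing more than a list of historical incident coordinates (all values invented):

```python
from collections import Counter

def hotspot_grid(incidents, cell=0.005):
    """Bin incident coordinates into grid cells and count per cell."""
    counts = Counter()
    for lat, lon in incidents:
        counts[(round(lat / cell), round(lon / cell))] += 1
    return counts

# Invented historical incidents as (lat, lon) pairs
incidents = [(41.081, -81.519), (41.082, -81.520), (41.081, -81.518),
             (41.100, -81.500)]
for cell_id, n in hotspot_grid(incidents).most_common(2):
    print(f"grid cell {cell_id}: {n} recorded incident(s) -> patrol hotspot")
```

The counting itself is not the controversial part; the controversy, as we will see, is that “recorded incidents” already encode where police happened to be looking.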

There are two, or perhaps three, key questions here: Does it work? Why does it need to be outsourced? And how did the “Google of mass police surveillance” come to the conclusion that it was worth (partially) acquiring?

SoundThinking CEO Ralph Clark said in August that Geolitica’s patrol management customers would now become SoundThinking customers.

“We have already hired their engineering team; it will facilitate our application of AI and machine learning to public safety,” Clark said at the time in a conference call with the company’s investors.

And yet this is about more than just the “autopsy” of a single company. Some observers are convinced that what is happening here is just one example – a brick in the wall, so to speak – of a profound and comprehensive, not to say controversial, change in the way US governments carry out policing in their communities.

But for all the optimism about the value of a company like Geolitica (formerly called PredPol – and you don’t have to be an Orwell fan to understand the company’s purpose and business), there are also concerns.

And they go back to 2011, when PredPol first came onto the scene, using historical crime data to make current predictions.

“For years, critics and academics have argued that the PredPol algorithm, which is based on historical and unreliable crime data, reproduces and reinforces biased policing patterns,” says a Wired report.
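That feedback-loop critique is easy to demonstrate with a toy simulation. To be clear, this is not Geolitica’s algorithm – it is a deliberately crude model of the mechanism critics describe: patrols are sent where recorded crime is highest, and patrolled places get more of their crime recorded.

```python
import random

def simulate(days=365, seed=42):
    """Two districts with identical true crime rates; records still diverge."""
    random.seed(seed)
    recorded = [1, 1]   # seed data: one historical report per district
    true_rate = 0.3     # identical daily crime probability in both districts
    for _ in range(days):
        # the "prediction": patrol wherever recorded crime is highest
        patrolled = 0 if recorded[0] >= recorded[1] else 1
        for d in (0, 1):
            if random.random() < true_rate:        # a crime actually occurs
                p_recorded = 0.9 if d == patrolled else 0.3
                if random.random() < p_recorded:   # ...but is it recorded?
                    recorded[d] += 1
    return recorded

print(simulate())  # heavily skewed counts despite equal underlying rates
```

Feed those skewed records back in as training data and the prediction confirms itself: the patrolled district keeps “producing” crime, so it keeps being patrolled.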

And yet it seems that in the coming years these kinds of tools and services – perhaps not exactly the same technology, but certainly its successors, since failure here is essentially rewarded from above – will ripple through policing in the USA, where the current government is suspiciously (no pun intended) inclined to outsource the work to private entities.

When all is said, done, played out and examined, there will most likely be endless controversy.

Like various related technologies, tools such as Geolitica are designed to collect, analyze, and map large amounts of geospatial data, creating detailed profiles that can be used by everyone from marketing agencies to government agencies. Such tools promise better analysis, security, and targeted services, but they also pose a major challenge to individual privacy and civil liberties.

To understand the controversy, one should first understand how Geolitica works. Imagine a tool capable of tracking movements, analyzing patterns, and predicting people’s future location based on a variety of data points collected from smartphones, IoT devices, public cameras, and more. By fusing artificial intelligence, big data and geospatial technology, Geolitica creates a digital representation of a person’s life with alarming accuracy.
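How such prediction can work in principle is no mystery. The following first-order Markov sketch is not Geolitica’s actual method – it is a minimal illustration, with an invented trajectory, of how little data is needed to guess someone’s next location:

```python
from collections import defaultdict, Counter

def train_transitions(trajectory):
    """Count observed moves between consecutive locations."""
    moves = defaultdict(Counter)
    for here, there in zip(trajectory, trajectory[1:]):
        moves[here][there] += 1
    return moves

def predict_next(moves, current):
    """Most frequent historical successor of the current location."""
    return moves[current].most_common(1)[0][0] if current in moves else None

# Invented location pings, e.g. as harvested from a phone over several days
trajectory = ["home", "cafe", "office", "gym", "home",
              "cafe", "office", "home", "cafe", "office"]
model = train_transitions(trajectory)
print(predict_next(model, "cafe"))  # -> office
```

Real systems fuse far richer signals, but the privacy question stays the same: the model is built from data its subject never knowingly handed over.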

Another contentious issue is the commercialization of personal data, known as surveillance capitalism. Companies use relentless digital surveillance tools like Geolitica to collect behavioral data – often without explicit consent. This data is used for targeted advertising, to influence purchasing behavior and even to manipulate opinions, eroding individual privacy for commercial purposes. Such practices not only promote a power imbalance between companies and consumers, but also create data monopolies that threaten market competition.

The use of geolocation tools by governments raises the specter of “Big Brother” surveillance. The ability to track citizens’ whereabouts in real time under the guise of national security or public health risks authoritarian behavior. History is replete with examples in which instruments intended for protection were used to oppress citizens. Left unchecked, these technologies could be used to suppress dissent, target vulnerable populations, and unfairly advance certain political goals, fundamentally undermining democratic principles and civil liberties.

Law enforcement agencies are increasingly relying on technological advances to prevent crime and ensure public safety. One such advancement is predictive policing, which uses algorithms and data analytics to predict where crime is likely to occur, using tools like PredPol, which we previously reported on.

Predictive policing doesn’t occur in a vacuum. It is often part of a larger surveillance network that includes video surveillance, license plate readers and facial recognition technologies. While PredPol primarily analyzes patterns in historical crime data, the integration of additional personal data sources can lead to significant privacy intrusions.

For example: in Chicago, police use of a predictive system called the “Heat List” or “Strategic Subject List” to identify people likely to be involved in future crimes caused a stir because it was not clear how people ended up on the list. Those on the list were not notified but were subjected to additional monitoring, which was viewed as a direct violation of their privacy rights.

By focusing law enforcement attention on specific hot spots, predictive policing inadvertently restricts citizens’ freedom of movement. Residents of areas repeatedly identified as hotspots may be subject to frequent interactions with police and unjustified checks, creating a virtual fence that limits freedom of movement and contributes to alienation from the community.

For example: in New York City, a report showed that people living in areas where predictive police stops are frequent feel constantly under surveillance and are less likely to engage in everyday activities – such as taking a walk, visiting friends, or attending community meetings – out of fear of unwarranted police attention.

Aware that certain behaviors or movements may attract the attention of law enforcement, individuals may forgo exercising their freedoms of assembly, speech, and association. This self-censorship, also known as the chilling effect, can harm community cohesion and civic engagement.

 

yogaesoteric
May 21, 2024

 
