The high-tech stocktake New Zealand Police didn’t want you to see

New Zealand policing is playing catch-up with a fast-moving tech world, but privacy campaigners are concerned about the pace of implementation and the lack of transparency involved.

Incredibly, before 2019 New Zealand Police had no dedicated teams scanning the internet for threats; they missed, for example, internet chatter preceding the Christchurch mosque attack. Fast forward two years, and social media and other high-tech monitoring by New Zealand Police, including facial recognition, is advancing apace, but without much oversight.

It’s an issue RNZ’s Phil Pennington has covered closely. Most recently he reported, based on information obtained under the Official Information Act, that New Zealand Police had used three social media search tools since 2018: on cybercrime, on the mosque terror attacks investigation, and to feed into the Royal Commission of Inquiry into those attacks.

But New Zealand Police declined to say exactly which tools they were, for fear of tipping off criminals and prompting them to adopt evasive tactics.

We do know that police have upwards of 20 tech tools in their arsenal, because they disclosed as much in a “high-tech stocktake” of software and other tools used in crime prevention and detection.

Among them were drones that can send live footage to patrols, a super-fast system for spotting suspects in CCTV feeds, and a cellphone-scouring tool with facial recognition capability.

That stocktake also revealed that New Zealand Police had trialled an algorithm that searches social media for face matches from controversial US supplier Clearview AI.

Clearview AI enables law enforcement to take a picture of a person walking down a street, upload it, and see public photos of that person along with links to the sites where those photos appeared. It could, for example, identify law-abiding people at a protest, giving New Zealand Police their names, addresses and lists of contacts.

“The weaponisation possibilities of this are endless,” Eric Goldman, co-director of the High Tech Law Institute at Santa Clara University, told the New York Times in January. (The story ran under the headline “The Secretive Company That Might End Privacy as We Know It”.)

“Imagine a rogue law enforcement officer who wants to stalk potential romantic partners, or a foreign government using this to dig up secrets about people to blackmail them or throw them in jail.”

Police trialled Clearview AI without informing the government, the Privacy Commissioner or the public.

Other new surveillance upgrades, part of a recent $23 million spend, include software from Japanese manufacturer NEC; Cellebrite, a tool that searches lawfully seized cellphones; and BriefCam, which locates faces and tracks vehicle movements in CCTV footage.

While it’s no surprise that police trawl through social media accounts, their lack of transparency about the tools they use is concerning, as is the controversy over police photographing mainly Māori and Pasifika teenagers who had broken no laws.

New Zealand Police did destroy some of the images, but wouldn’t say which databases the retained images were stored in or how the photos were used. Many of the youths were under 17, which means police cannot even interview them without the consent of their caregivers, let alone take photos of them.

Indeed, that high-tech stocktake was a direct result of the public outcry over those incidents last year, which many saw as racial profiling. (New Zealand Police maintained that the images were lawfully acquired under section 214 of the Oranga Tamariki Act 1989.)

Without the excellent reporting by RNZ that uncovered the story in the Wairarapa, it’s likely the public would still be in the dark about the issue.

The controversy highlights that New Zealand, like many other jurisdictions, has a regulatory gap when it comes to advanced surveillance practices, in contrast to long-accepted and regulated identification technologies such as fingerprinting and DNA sampling.

Clearly aware that this was a growing concern, New Zealand Police announced in April that they had appointed two independent facial recognition experts, Dr Nessa Lynch (Associate Professor at Victoria University) and Dr Andrew Chen (Research Fellow at the University of Auckland), to “explore the current and possible future uses of facial recognition technology and what it means for policing in New Zealand communities”.

While facial recognition is used by many forces around the world some are pushing back, citing privacy concerns. 

San Francisco, for example, has banned its own police from using the technology.

Lynch and Chen’s findings are due to be published in the coming months, and the Independent Police Conduct Authority and Privacy Commissioner’s report into the photographing of Māori youths is due in September.

Dr Lynch believes that part of the problem is that the pace of technological change has outstripped that of law and regulation. 

“We welcome the opportunity to provide independent advice to assist New Zealand Police to develop and strengthen their policies for legal and ethical use of this technology.”

She says she and Chen are interested in finding out how New Zealand Police use facial recognition technologies today, and how they might consider the use of new tools in the future.

“There is often a perception of a trade-off between public safety and privacy – we hope to find a path forward that supports both of these values at the same time.”

Calling in independent, expert oversight is a welcome acknowledgement by New Zealand Police that they have some way to go to earn the public’s trust on this issue. But it comes after some big missteps, and it may be too little, too late.