IREX Blog

How to Govern the Ethical Use of AI in Surveillance

2022-12-19 12:44 AI Ethics
Society is being deprived of innovative technologies that could make our communities, schools, and cities safer.

The Western world has a problem. Revolutionary technology such as artificial intelligence is here, but the awareness and knowledge in the public sector are not. There is a scarcity of US engineering talent, an alarming lack of ethical frameworks, near-zero adoption of AI products, and, evidently, an absence of cojones in the public sector. It’s all being driven by the private sector.

So, what’s the answer? Let’s break it down into four segments: the technology that’s readily available; what is currently happening; what (or who) we don’t want to be like; and finally, how to effectively govern the ethical use of AI in surveillance.

The Technology Available Today


In a nutshell, artificial intelligence has advanced to a stage where, when an event occurs or a person or vehicle appears under surveillance, it can trigger an alert to the appropriate authorities or personnel using all existing cameras. Cities should be using all their current infrastructure to locate and recover missing children and human-trafficking victims while keeping communities safe from non-compliant sex offenders.
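
To make that concrete, here is a minimal sketch (in Python, with entirely hypothetical names and no particular vendor’s API implied) of how a detection from an existing camera could be matched against a watchlist and routed as an alert:

```python
# Minimal sketch of an event-triggered alert pipeline.
# All identifiers and watchlist entries are hypothetical.

from dataclasses import dataclass

@dataclass
class Detection:
    camera_id: str   # existing city camera that produced the frame
    kind: str        # "person", "vehicle", or "event"
    label: str       # matched watchlist entry or detected event type
    timestamp: float

# Hypothetical watchlists: missing children, non-compliant offenders,
# wanted vehicles, and safety events such as weapon detection.
WATCHLISTS = {
    "person": {"missing_child_042", "noncompliant_offender_107"},
    "vehicle": {"plate_ABC1234"},
    "event": {"weapon_detected", "dangerous_crowding"},
}

def route_alert(det: Detection) -> None:
    """Notify the appropriate personnel when a detection matches a watchlist."""
    if det.label in WATCHLISTS.get(det.kind, set()):
        # In a real deployment this would page dispatch, school security, etc.
        print(f"ALERT [{det.kind}] {det.label} on camera {det.camera_id}")

route_alert(Detection("cam-17", "event", "weapon_detected", 1671451440.0))
```

The point is that the cameras already exist; only the matching and alerting layer on top of them is new.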

While 99.99% of society would agree with the aforementioned use cases as a foundational starting point, weapon detection at schools and campuses would likely be accepted by the masses as well. There are countless other events that surveillance can flag, such as unauthorized or wanted vehicles, dangerous crowding, and suspicious behavior, but let’s move on to what the decision makers in cities and society are doing with this newly available technology.


What’s Currently Happening in Public Sector


The biggest issue for the public sector is an in-house lack of knowledge and expertise when it comes to artificial intelligence and the ethics surrounding it; it’s all coming from the private sector. So, what’s the public sector doing? Unfortunately, showing a lack of cojones and effectively putting their heads in the sand, passing the issue on to their successors, simply closing their eyes and ears. I mean, would you talk about and/or make a decision on something you don’t know or fully understand?

With that being said, their lack of decision-making is costing our communities. AI adoption is seen as a risk at this moment in time by all those seeking re-election. My message to them: you may be afraid of taking this risk, but you should be even more afraid of staying where we are. Using existing ethical frameworks to guide how the technology is used is key; more on this in the solution below.

What (Who) We Don’t Want to Be Like


We can learn a lot from others in life, whether it be what to implement or what to avoid. A certain large country to the east has implemented AI in its cameras with a George Orwell, 1984 vibe: social credit scores and mass law-enforcement monitoring, with no way for communities and society to hold the watchers accountable. This we cannot tolerate in Western society. We do not want individuals being recognised by AI over outstanding parking fines; we have to purposefully use AI for powerful reasons that are driven by society.

We have to preserve the civil liberties we care about, and we can advance participatory governance, which brings us to the solution.

How to Govern the Ethical Use of AI in Surveillance


We start with transparency by design. It is paramount that every user’s action is logged, meaning each user can be audited: no hidden steps, no hidden searches. This way, our societies and communities can trust those who protect us to use the technology ethically. We trust, but we verify.
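
As a sketch of what that logging could look like (hypothetical Python, not any particular product’s implementation), an append-only, hash-chained log makes any after-the-fact tampering detectable by an outside auditor:

```python
# Sketch of "transparency by design": an append-only, hash-chained audit log,
# so every user action can be verified after the fact. Hypothetical names.

import hashlib
import json
import time

audit_log = []  # in practice this would be durable, write-once storage

def log_action(user: str, action: str, details: dict) -> None:
    """Append a user action; each entry hashes the previous one,
    so any later tampering breaks the chain."""
    prev_hash = audit_log[-1]["hash"] if audit_log else "genesis"
    entry = {"user": user, "action": action, "details": details,
             "time": time.time(), "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    audit_log.append(entry)

def verify_chain() -> bool:
    """What an external auditor runs: trust, but verify."""
    prev = "genesis"
    for e in audit_log:
        body = {k: v for k, v in e.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != recomputed:
            return False
        prev = e["hash"]
    return True
```

With a chain like this, “trust but verify” becomes literal: anyone granted read access can run verify_chain() and confirm that no search was quietly deleted or altered.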

Okay, let’s get into examples. The technology is at a level where, across all connected devices (cameras), you could perform a search by uploading a photo of an individual or vehicle and see, in chronological order, if and when that person or vehicle last appeared under surveillance. This is where the Technology Integrity Council, and bodies like it, give members of the community the ability to audit that those who protect us are using the technology for justifiable reasons.
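
Here is a hedged sketch of what such an auditable search interface could look like (again hypothetical Python; search_by_photo, match_against_cameras, and the field names are all invented for illustration):

```python
# Hypothetical sketch of an auditable photo search: every query must carry a
# case number and justification, the query itself is recorded for community
# audit, and results come back in chronological order.

from dataclasses import dataclass

@dataclass
class Sighting:
    camera_id: str
    timestamp: float  # when the person or vehicle appeared on camera

query_log = []  # reviewed by bodies like the Technology Integrity Council

def match_against_cameras(photo: bytes) -> list[Sighting]:
    """Stand-in for the actual vision backend; returns mocked sightings."""
    return [Sighting("cam-11", 1671443600.0), Sighting("cam-03", 1671440000.0)]

def search_by_photo(photo: bytes, user: str, case_id: str,
                    justification: str) -> list[Sighting]:
    """Refuse undocumented searches, log the query, return results oldest-first."""
    if not case_id.strip() or not justification.strip():
        raise PermissionError("search refused: no documented justification")
    query_log.append({"user": user, "case_id": case_id,
                      "justification": justification})
    return sorted(match_against_cameras(photo), key=lambda s: s.timestamp)
```

The design choice is that justification is not optional metadata but a precondition: a query without a documented reason never runs, and every query that does run is visible to the community auditors.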

What about those who protect us? Law Enforcement, Sheriff Departments, Intelligence Agencies, etc. Undoubtedly, the technology is going to assist in the recovery of human-trafficking victims, locating missing children, and locating non-compliant sex offenders. It will be a revolutionary tool for their operations, but it will also provide clear justification for their decisions, protecting their actions when they stop and question individuals and vehicles.

Right now, the gap between Law Enforcement and society has never seemed bigger. Artificial intelligence can significantly bridge that gap: society now has a tool to ensure Law Enforcement is using its most revolutionary tool ethically, whilst Law Enforcement becomes transparent about its decisions when using the technology.

The technology is here, and if we can reduce the suffering and evil in the world even a little, isn’t it worth it?