Putting Ethics and Privacy at the Heart of Computer Vision

Published on November 28, 2022
Rachel Melisa, Marketing & Brand Manager
Ana Banaynal, Account Manager

Previously published as a guest post on the Cisco Meraki blog. 

With the advent of AI and big data technologies, companies rely more than ever on computer vision for trustworthy insights that inform smart business decisions: maintaining compliance, creating more personalized customer experiences, and improving staff efficiency.

There’s no doubt that computer vision is transforming how companies function and engage. Yet, as computer vision embeds itself firmly into the IT mainstream, concerns are growing over its potential misuse.

Teaching machines to think 

To address the ethical issues that may arise from non-human data analysis and decision-making, it is important to affirm that AI is merely a tool designed to augment human capabilities.

Steve Jobs once famously described the computer as “a bicycle for our minds.”

If the computer is a bicycle for our minds, artificial intelligence is a Harley-Davidson.

That is to say: humans are not as strong as a horse or as fast as a cheetah; we can’t fly like a bird or dive like a fish. Still, we dominate because we build tools that compensate for our physical shortcomings.

If computers were created to compensate for our mental shortcomings, then AI will augment not only our ability to think but also our ability to perceive the world. This creates fear in many people: fear of losing control, of the robots “taking over.”

However, AI is like any tool humans have created before it: whether it does good or harm, it’s the mind behind it that sets the trajectory.

AI learns by mimicry, without understanding why something works and without concern for the consequences. Even if the technology seems neutral, AI is only as equitable as the humans who program it and the data that feeds it.

Building ethical AI models for computer vision

Companies that use computer vision have a responsibility to consider how the AI models that drive it impact all stakeholders: customers, suppliers, employees, and society as a whole.

When building AI models for computer vision, some questions to consider:

  • What data can or may be included or processed?
  • Who can view the data?
  • How can we create algorithms that don't make unethical or biased decisions?

1. Training with synthetic datasets

One way to mitigate ethical concerns is to train computer vision machine learning (ML) models on synthetically created data.

Synthetic data is created manually or artificially, rather than generated by real-world events, and can be anonymized by design. Think: Sims-like 3D environments. This allows developers to produce the millions of anonymized images needed for ML training at relatively low cost, saving organizations the costly and error-prone process of stripping personal information from collected data.

Synthetic data creation also minimizes privacy risks and reduces the likelihood of data bias. 
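
As a rough illustration of the idea, here is a minimal Python sketch (all names are hypothetical): instead of collecting images of real people, a renderer produces randomized scenes with known ground-truth labels. Production pipelines use full 3D engines; simple geometric shapes stand in for rendered scenes here.

```python
# Minimal sketch of synthetic dataset generation: render randomized
# scenes with known labels instead of collecting real-world footage.
# Illustrative only; real pipelines use 3D engines, not flat shapes.
import random
from PIL import Image, ImageDraw

LABELS = ["circle", "square"]  # hypothetical object classes

def render_sample(size=128):
    """Render one synthetic image together with its ground-truth label."""
    label = random.choice(LABELS)
    background = tuple(random.randint(0, 255) for _ in range(3))
    img = Image.new("RGB", (size, size), background)
    draw = ImageDraw.Draw(img)
    # Randomize position, extent, and color so the dataset has variety.
    x0, y0 = random.randint(0, size // 2), random.randint(0, size // 2)
    x1, y1 = x0 + random.randint(16, size // 2), y0 + random.randint(16, size // 2)
    color = tuple(random.randint(0, 255) for _ in range(3))
    if label == "circle":
        draw.ellipse([x0, y0, x1, y1], fill=color)
    else:
        draw.rectangle([x0, y0, x1, y1], fill=color)
    return img, label

# An arbitrarily large training set, fully anonymous by construction.
dataset = [render_sample() for _ in range(1000)]
```

Because every image is generated from scratch, the labels are exact and no de-identification pass is ever needed.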

2. Data anonymization

Even better, when capturing real-life data to generate insights, companies can take the extra step to de-identify individuals. This includes blurring faces on camera feeds, not recording or storing any footage, and removing any personally identifiable information (PII) from datasets.

At meldCX, we decided early in our AI journey not to capture any PII, instead turning each individual into a tokenized anonymous persona: a random number in the system. More detail and depth are then added to the anonymized persona through objects, such as the clothes the person is wearing, and non-face behavior, such as movement and gait.
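
To illustrate the pattern (this is a generic sketch, not meldCX’s production pipeline), here is what on-capture de-identification might look like in Python with OpenCV: faces are irreversibly blurred in each frame, and every detection is assigned a random token rather than any biometric identifier. Function and field names are assumptions.

```python
# Sketch of on-capture de-identification: blur faces, emit only a
# random token per person. Illustrative, not a production pipeline.
import uuid
import cv2

# Haar cascade face detector that ships with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def anonymize_frame(frame):
    """Blur every detected face; return non-PII metadata per person."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    personas = []
    for (x, y, w, h) in faces:
        # Irreversibly blur the face region in place.
        frame[y:y+h, x:x+w] = cv2.GaussianBlur(frame[y:y+h, x:x+w], (51, 51), 0)
        # Tokenized anonymous persona: a random ID, never a biometric.
        personas.append({"persona_id": uuid.uuid4().hex, "bbox": (x, y, w, h)})
    return frame, personas
```

The key design choice is that the token is random: nothing downstream can link it back to a face, so insights can be aggregated without any PII ever being stored.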

3. Segmenting user roles 

As a tool for communication and collaboration, computer vision analytics are at their best when all areas of a business can fully participate and glean value from them. 

To maintain the security of data, computer vision platforms should have flexible and customizable security permissions that allow for an appropriate balance of collaboration and control. 

For instance, permissions can be set so that no one except the Security Lead can view video, while the Marketing team is granted access only to non-video data output from the platform dashboard.
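
A minimal sketch of how such role segmentation might look in code, using the hypothetical roles from the example above (names and structure are illustrative, not any specific platform’s API):

```python
# Sketch of deny-by-default, role-segmented access to computer vision
# outputs. Roles and resources below are hypothetical examples.
from enum import Enum, auto

class Resource(Enum):
    VIDEO_FEED = auto()
    DASHBOARD_METRICS = auto()

# Role-to-permission mapping; real platforms make this configurable.
PERMISSIONS = {
    "security_lead": {Resource.VIDEO_FEED, Resource.DASHBOARD_METRICS},
    "marketing": {Resource.DASHBOARD_METRICS},
}

def can_view(role: str, resource: Resource) -> bool:
    """Deny by default: a role sees only what it is explicitly granted."""
    return resource in PERMISSIONS.get(role, set())

assert can_view("security_lead", Resource.VIDEO_FEED)
assert not can_view("marketing", Resource.VIDEO_FEED)
```

The deny-by-default check means a newly added role sees nothing until it is explicitly granted access, which keeps the balance of collaboration and control auditable.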

4. Regulatory bodies promoting ethical AI

Globally, the industry is heading toward ethical AI regulation across the board, not just for computer vision. 

All 193 member states of the United Nations Educational, Scientific and Cultural Organization (UNESCO) unanimously adopted a series of recommendations on ethical AI. These recommendations aim to realize the advantages of the technology while reducing the human rights risks associated with its use.

Additionally, companies such as TrustArc provide independent third-party assessments and certifications to verify that technology providers adhere to privacy regulations such as GDPR and standards such as ISO/IEC 27001.

Businesses can leverage these tools and resources to ensure their computer vision systems meet the highest standards of ethics and to get ahead of compliance before regulations go into effect. 

A collective responsibility

In this information age, data is power, and with that comes great responsibility.

Computer vision is a powerful tool, and it’s up to everyone to address tough ethical questions to establish best practices that uphold human dignity. 

All teams, from research and data science to the executive level, are equally responsible for keeping ethical and privacy standards top of mind. This process begins at ideation and continues throughout the entire product lifecycle.
