Testing the transparency gap in AI and data protection
ABSTRACT
As part of an ALT Special Project on artificial intelligence, the researchers made information requests in terms of South Africa's data protection law, the Protection of Personal Information Act 4 of 2013, asking a range of prominent companies and government agencies how, if at all, they used artificial intelligence to process personal information. The responses yielded little detail about actual uses of AI, but demonstrated significant gaps in the implementation of data protection rights in South Africa, and in the legal framework for upholding data subjects' rights in relation to automated processing.
KEYWORDS
Artificial intelligence – data protection – surveillance capitalism – automation – South Africa – transparency
Citation: T Davis & M Hunter, Testing the transparency gap in AI and data protection, ALT Advisory Insights 2022 (5) (20 September 2022).
+-*#*-+
Artificial intelligence (AI) has moved from the realm of science fiction into our pockets. And while we are nowhere close to engaging with AI as sophisticated as 'Data' from Star Trek, the forms of artificial narrow intelligence that we do have inform hundreds of everyday decisions, often as subtle as what products you see when you open a shopping app or the order in which content appears on your social media feed.
Examples abound of the real and potential benefits of AI, like health tech that remotely analyses patients’ vital signs to alert medical staff in the event of an emergency, or initiatives to identify vulnerable people eligible for direct cash transfers.
But the promises and the success stories are all we see. And though there is a growing global awareness that AI can also be used in ways that are biased, discriminatory, and unaccountable, we know very little about how AI is used to make decisions about us. The use of AI to profile people based on their personal information – essentially, for businesses or government agencies to subtly analyse us to predict our potential as consumers, citizens, or credit risks – is a central feature of surveillance capitalism, yet it remains mostly shrouded in secrecy.
As part of a new research series on AI and human rights, we approached 14 leading companies in South Africa's financial services, retail, and e-commerce sectors to ask for details of how they used AI to profile their customers. In this case, the customer was us: we specifically approached companies where at least one member of the research team was a customer or client. We also approached two government bodies, the Department of Home Affairs and the National Department of Health, with the same query.
WHY AI TRANSPARENCY MATTERS FOR PRIVACY
The research was prompted by what we don't see. The lack of transparency makes it difficult to exercise the rights provided for in terms of South Africa's data protection law – the Protection of Personal Information Act 4 of 2013 (POPIA). Section 71 of the Act provides a right not to be subject to a decision based solely on the automated processing of your personal information intended to profile you.
The exact wording of the section is a bit of a mouthful and couched in caveats, but the overall purpose of the right is an important one: it ensures that consequential decisions – such as whether someone qualifies for a loan – cannot be made by automated processing alone, without any human intervention.
But there are limits to this protection. Beyond the right’s conditional application, one limitation is that the law doesn’t require you to be notified when AI is used in this way. This makes it impossible to know whether such a decision was made, and therefore whether the right was undermined.
WHAT WE FOUND
Our research used the access to information mechanisms provided for in POPIA and its cousin, the Promotion of Access to Information Act 2 of 2000 (PAIA), to try to understand how these South African companies and public agencies were processing our information, and how, if at all, they used AI for data profiling. (In policy jargon, this sort of query is called a 'data subject request'.)
The results shed little light on how companies actually use AI. The responses – where we received any – were often maddeningly vague, or even a bit confused. Instead, the exercise showed just how much work needs to be done to achieve meaningful transparency and accountability in the space of AI and data profiling.
Notably, nearly a third of the companies we approached did not respond at all, and only half provided any substantive response to our queries about their use of AI for data profiling. This reveals an ongoing challenge in the basic implementation of the law.
Among those companies that are widely understood to use AI for data profiling – especially those in financial services – the responses generally did confirm that they used automated processing, but were otherwise so vague that they did not tell us anything meaningful about how AI had been used on our information.
Meanwhile, many of the other responses we received suggested a worrying lack of engagement with basic legal and technical questions relating to AI and data protection. One major bank directed our query to its fraud department. At another bank, our request was briefly routed to someone in the internal HR department (who was, it should be said, as surprised by this as we were). In other words, the humans answering our questions did not always seem to have a good grip on what the law says and how it relates to what their organisations were doing.
Things were even more dire in the case of the government agencies. Eight years after POPIA became law, the Department of Home Affairs has yet to update its PAIA manual – a public guide on access to information procedures which is required by law – to include data protection procedures. The National Department of Health had not published a PAIA manual at all.
Perhaps all this should not be so shocking. In 2021, when an industry inquiry found evidence of racial bias in South African medical aid reimbursements to doctors, the lack of AI transparency was given its own section in the findings. The inquiry, led by Adv Thembeka Ngcukaitobi, concluded in its interim findings that a lack of algorithmic transparency made it impossible to say whether AI played any role in the racial bias it found. Two of the three schemes under investigation could not actually explain how their own algorithms worked, as they simply rented software from an international provider. The AI sat in a 'black box' that even the insurers couldn't open. The inquiry's interim report noted: "In our view it is undesirable for South African companies or schemes to be making use of systems and their algorithms without knowing what informs such systems." Indeed.
WHAT’S TO BE DONE
In sum, our research shows that it remains frustratingly difficult for people to meaningfully exercise their rights concerning the use of AI for data profiling.
We need to bolster our existing legal and policy tools to ensure that the rights guaranteed in law are carried out in reality, under the oversight of our data protection watchdog, the Information Regulator, and other regulatory bodies.
The companies and agencies that actually use AI need to design systems and processes (and internal staffing) that make it possible to lift the lid on the black box of algorithmic decision-making.
Yet these processes are unlikely to fall into place by chance. To get there, we need a serious conversation about new policies and tools that will ensure the transparent and accountable use of artificial intelligence. (Importantly, our other research shows that African countries are generally far behind in developing AI-related policy and regulation.)
Unfortunately, in the interim, it falls to ordinary people, whose rights are at stake in a time of mass data profiteering, to guard against the unchecked processing of our personal information, whether by humans, robots, or, as is usually the case, a combination of the two. As our research shows, that is an inordinately difficult task.
+-*#*-+
* Tara is a Senior Associate at ALT Advisory.
^ Murray is a Senior Associate at ALT Advisory.
+ ALT AI, a research series focused on safeguarding fundamental rights in relation to artificial intelligence, is available at ai.altadvisory.africa. This work was carried out in the context of the Africa Digital Rights Fund with support from the Collaboration on International ICT Policy for East and Southern Africa.
ENDS.