E-tika podcast: How to regulate digital technologies
The second series of the E-tika podcast brought the social and ethical dimensions of digital technologies into the spotlight. We delved into many intriguing topics with interesting guests from various fields of expertise. We are now turning these conversations into articles; please enjoy the English summary of the fourth episode below.
In the E-tika podcast we often discuss the possible benefits and new opportunities that intelligent technologies can provide to our society. At the same time, technoscientific developments can pose new challenges and unforeseen risks. There are different approaches to tackling these risks. As we have seen in previous episodes, important values such as fairness and transparency can be directly embedded into the design of technologies. Furthermore, digital technologies need to be assessed and audited, but also regulated.
This episode deals with the regulation of disruptive technologies and the current regulatory proposals by the European Union, such as the Digital Services Act (DSA), the Artificial Intelligence Act (AIA) and the Data Governance Act (DGA), from multiple perspectives. The hosts Juraj Podroužek and Matúš Mesarčík have invited Tomáš Jucha. Tomáš works at the Department of Digital Policies and International Cooperation at the Ministry of Investments, Regional Development and Informatization of the Slovak Republic. Part of his work is to set out regulatory frameworks for disruptive technologies such as artificial intelligence and to align them with EU legislation.
The decision of when to regulate, or co-regulate, depends to some extent on the particular technology, its complexity and its use cases, and may also depend on how much we rely on it. Additionally, the potential damage or risks the technology poses to our society, human rights, individual autonomy and democratic principles should play a decisive role in the decision whether to regulate. This, however, carries additional problems.
Regulatory law, and law as such, is traditionally rather conservative and slow compared to the rapidly evolving nature of digital technologies. There is a certain paradox here, captured by the Collingridge Dilemma: when a technology is new, its negative consequences are hard to foresee, but by the time those consequences become apparent, the technology is often so entrenched that it is too late to regulate it effectively.
Additionally, there might be discrepancies between the intended and the actual purpose of certain technologies. It is often hard to foresee the scope and magnitude of their impact on society. This poses a rather tricky challenge to regulators, who are expected to propose regulations that will be future-proof.
One of the ways that the European Union is trying to tackle this challenge is by using “regulatory sandboxes”. Regulatory sandboxes allow technologies to be tested in controlled environments, without necessarily complying with existing regulations and laws.
The fact that government-backed regulatory law is often slow and reactive may lead us to think that regulation as such is worthless, and that it would be better to let the market regulate itself. To some extent, this is already happening, as some companies proactively go beyond what the law would require.
On the other hand, there are voices saying that it is precisely this over-reliance on self-regulation that has allowed certain companies to exploit, and profit from, rather questionable practices. This motivates current initiatives (e.g. by the European Commission) to regulate digital platforms in order to make them more transparent and safer.
Social media and digital platforms such as Facebook are often mentioned in the current debates on regulation. To some extent, this is due to the fact that they generate profit through rather opaque, attention-based models. Most of us are aware of scandals such as Cambridge Analytica, where user data were misused for political goals. Recent developments, with whistleblowers coming forward, have also shed light on the fact that these platforms were well aware of the negative impact their models might cause.
Among others, these were the crucial aspects that prompted the European Commission to bring forward major proposals on the regulation of digital technologies. The Digital Services Act (DSA) aims to create a safer digital space where the fundamental rights of individual users are protected. It promises to bring a higher level of transparency, e.g. by increasing control over content and data on digital platforms, by implementing a unified mechanism for flagging problematic content, and by making visible the paying entities behind advertisements. Additionally, it addresses attention-based recommendation systems and lets users themselves decide whether to use personalized recommender systems on these platforms.
On the other hand, there is the Data Governance Act (DGA), which aims to increase Europe's competitiveness in the data economy by promoting a unified data market. This would in turn promote innovation and research in areas such as personalized medicine, transport infrastructure and artificial intelligence systems. At the same time, it is of utmost importance that current standards of data protection are upheld.
In the E-tika podcast we discuss the various social and ethical implications of intelligent technologies. From the perspective of regulating artificial intelligence systems, it is necessary to mention the recent Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence, also referred to as the Artificial Intelligence Act (AIA).
It is a complex legislative framework that proposes a risk-based approach to AI systems, calling for stricter oversight of certain risky uses while prohibiting some uses entirely. This proposal can open new avenues for innovation, but it can also bring new problems and uncertainties. It may be rather challenging to implement certain values into technologies such as artificial intelligence systems, which are often, as we have previously discussed, highly context- and use-dependent. If you wish to know more about the AIA, KInIT has published its own stance on the AIA proposal.
Regulation as such is clearly only one part of the picture. Last but not least, education seems to be essential. In order to develop robust, fair and trustworthy technologies, it is important to promote digital skills and to facilitate this by increasing cooperation between private companies, academia and the public sector.
At KInIT, we realize that the future of digital technologies does not lie solely in the hands of computer scientists. It has to be shaped by actors from various fields. One example of what such cooperation could look like is the recently established Commission on Ethics and Regulation of Artificial Intelligence (CERAI) at the Ministry of Investments, Regional Development and Informatization of the Slovak Republic.