E-tika podcast: Parenthood in the digital age
In our podcast, we often discuss the benefits, but also the risks, of digital technologies and artificial intelligence. These technologies have the potential to bring many benefits and to help us make faster, better-informed decisions. On the other hand, they can expose our weaknesses and negatively affect our ability to make autonomous decisions.
This is all the more true when it comes to vulnerable groups such as children and young people. In this blog post, I summarize the sixth episode of the E-Tika podcast, in which hosts Juraj Podroužek and Tomáš Gál invited Andrea Cox.
Andrea is an educator in the field of online safety who has worked with institutions such as the Ministry of Interior of the Slovak Republic, Iuventa – Slovak Youth Institute, and Google Slovakia. She also leads the civic association Digital Intelligence – digiQ, which aims to educate people, most importantly young people, and sensitize them to using digital technologies in a beneficial and respectful manner.
As Andrea explained in the episode, their goal is to communicate about digital technologies and social networks without unnecessary moral panic: not telling kids what not to do, but offering them alternatives and teaching them what to do if they find themselves in a problematic situation online.
They facilitate and participate in a number of programmes and initiatives. One example is digiPEERS, which creates opportunities for young people to discuss their vision of an ideal internet directly with representatives of various social networks.
Another example is the hackathon series “Code Against Hate”, where young people cooperate with NGOs that focus on hate speech and extremism online. The idea was to design and develop an open-source tool that would help tackle problematic online content. The resulting tool, called Modera, analyzes text on online forums and automatically warns the admins if hate speech is detected.
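The episode does not go into Modera's implementation, but the basic pattern of flagging forum posts and alerting admins can be illustrated with a short sketch. Everything below, the toy keyword classifier and the notification hook, is a hypothetical stand-in, not the actual Modera code:

```python
# Hypothetical sketch of an admin-alerting moderation hook.
# A real tool like Modera would use a trained language model or an
# external moderation service instead of this toy keyword check.

HATEFUL_TERMS = {"exampleslur", "anotherslur"}  # illustrative placeholders

def is_hate_speech(text: str) -> bool:
    """Toy classifier: flags text containing any blocklisted term."""
    words = text.lower().split()
    return any(term in words for term in HATEFUL_TERMS)

def notify_admins(message: str) -> None:
    """Stand-in for a real channel such as e-mail or a webhook."""
    print(f"[ADMIN ALERT] {message}")

def on_new_forum_post(post_id: int, post_text: str) -> None:
    """Hook the forum would call whenever a post is published."""
    if is_hate_speech(post_text):
        notify_admins(f"Post {post_id} flagged as possible hate speech.")

on_new_forum_post(42, "a perfectly normal comment")      # no alert
on_new_forum_post(43, "this contains exampleslur here")  # triggers alert
```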
Another tool, a preventive one this time, analyzes the text before the author submits it to the forum. If it contains hate speech, the author is warned and offered an explanation of why the text may be considered hateful. From behavioral psychology we know that such additional steps, like having to click through a few extra pop-ups, can discourage the user from posting the content in the end.
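The friction-based, preventive variant could look like the following sketch, which reuses the toy is_hate_speech() check from above. Again, this only illustrates the pop-up pattern described in the episode, not the actual tool:

```python
# Hypothetical pre-submission check that adds friction before posting.
# Reuses the toy is_hate_speech() classifier from the previous sketch.

def submit_with_friction(text: str) -> bool:
    """Return True if the post should be published."""
    if not is_hate_speech(text):
        return True  # no friction for unproblematic content
    # Explain why the text was flagged...
    print("Your post was flagged because it matches terms commonly "
          "used in hate speech.")
    # ...and require an explicit extra confirmation step.
    answer = input("Do you still want to publish it? (yes/no) ")
    return answer.strip().lower() == "yes"
```

Even this small extra step, reading a warning and confirming once more, is exactly the kind of behavioral nudge the episode mentions.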
Today, we know that such technical solutions alone are not enough and that we need to consider the bigger picture. The scope of the problem is huge, ranging from the lack of a common, legally binding, Europe-wide definition of hateful content to the general opacity and black-box nature of social media algorithms.
This becomes even more problematic in relation to vulnerable groups such as children. In this podcast, we have extensively discussed issues related to Cambridge Analytica and the problematic practices of social platforms such as Facebook.
We also know, based on the recent Facebook Files leak by whistleblower Frances Haugen, that these platforms are aware of the negative impact they have on teenagers. Interestingly, the companies themselves often publicize their efforts to fight problematic content such as misinformation and hate speech: they sign various ethical codes of conduct and even offer grants to initiatives that aim to combat it. While appreciating these efforts, Andrea remains cautious and calls for a more sustainable business model that would make the needs of the most vulnerable users a priority.
Andrea also shared some findings from the EU Kids Online survey, which found that, in general, young people and children have rather positive experiences on the internet. At the same time, she warns that they often underestimate and trivialize the risks and dangers present online. At the end of the day, though, this is quite normal and expected.
From this point of view, it is important to call for legislation that leaves no space for the unethical practices discussed above. The recent Digital Services Act (DSA), also discussed in the previous episode, is a stepping stone towards ensuring that our online activities are not subject to constant surveillance that is then turned into profit. There are discussions about going even further: proposals to strengthen people's right to be forgotten (a kind of "digital data reset").
It is worth asking whether technological fixes and legislation are enough. The troubling practices discussed above, coupled with the fact that young people are often more prone to believe various types of disinformation, bring two other important aspects into the picture: education and critical thinking.
When it comes to critical thinking, Andrea calls for a preventive and holistic approach. This includes creating a safe educational environment that nurtures opinions and discussions supported by evidence and facts. Children should be taught to be more aware of the processes through which knowledge is generated and assessed. A good start would be to explain how a textbook is made, by whom, and what purposes it should serve in the educational process. Sadly, such discussions and reflections are often lacking in schools.
Artificial intelligence is a topic that runs through all episodes of the E-tika podcast, so it makes sense to also discuss the potential benefits AI can bring, especially when it comes to children and education.
The benefits seem endless, from AI-powered personalized learning and educational tools to cognitive support such as AI personal assistants that might help reduce overall stress. As always, these come with risks.
As we discussed in the previous episode with Martin Tamajka, explainability and transparency are crucial aspects of trustworthy AI. How can we make sure that children are aware they are interacting not with real humans, but with AI?
Take deepfakes, artificially manipulated visual or audio content, as an example. These are important aspects to consider when it comes to children: is simply informing them that the content has been manipulated enough, or should we demand more?
Andrea shared her experience discussing these problems with young people and confirmed that they are mostly unaware of such manipulation. She argued that this boils down to our shrinking attention spans, driven by algorithms that serve up new content quickly, leaving too little time to analyze it critically.
Labeling manipulated content is a good start. Norway, for example, now bans the use of retouched photographs and filters in advertisements unless the audience is warned about it, and requires social media influencers to label all photos in which the size, shape, or color of their bodies was modified.
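Expressed as a simple publish-time rule, the Norwegian requirement amounts to: retouched imagery may only run in an advertisement if a disclosure label is attached. The data structures below are hypothetical, chosen just to make the rule concrete:

```python
# Hypothetical publish-time rule in the spirit of the Norwegian law.
from dataclasses import dataclass

@dataclass
class AdImage:
    url: str
    retouched: bool       # body size, shape, or color was modified
    has_disclosure: bool  # a "retouched" label is shown to the audience

def may_publish(ad: AdImage) -> bool:
    """Unretouched images are always fine; retouched ones need a label."""
    return (not ad.retouched) or ad.has_disclosure

print(may_publish(AdImage("https://example.com/ad.jpg", True, False)))  # False
print(may_publish(AdImage("https://example.com/ad.jpg", True, True)))   # True
```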
What should a parent make of all this? At the end of the day, when it comes to children's use of digital technologies, parents also share part of the responsibility.
There are various opinions on when children should be allowed to start engaging with digital technologies or consuming online content, and on what the parents' role should be in this process. There are even WHO recommendations on how much screen time is appropriate for children of different ages.
However, Andrea remains cautious and calls for a more individualized approach that takes into account the overall wellbeing of the child. Her association digiQ has even prepared a "digital family contract" that helps parents and children agree on how, and on what terms, they will use digital technologies.