IJCAI 2022: Highlights from the conference and its doctoral consortium
Branislav Pecher is one of our PhD students. He is a member of the Web & User Data Processing team where he focuses on machine learning models that involve only a limited number of annotated samples, particularly meta-learning. His domain is misinformation detection and related phenomena. In July 2022, Branislav attended the IJCAI 2022 conference in Vienna. Read on to find out more about his experience.
At the end of July, I had the opportunity to attend the 31st International Joint Conference on Artificial Intelligence (IJCAI), co-organized with the 25th European Conference on Artificial Intelligence (ECAI). I traveled to Vienna with several of my colleagues to take part in this prestigious conference. IJCAI is the premier international gathering of AI researchers and the longest-running major AI conference (it was launched in 1969). It encompasses all areas of artificial intelligence, ranging from reinforcement learning, causality, and Bayesian statistics to topics such as AI ethics, trust and fairness.
I attended the conference in person, both as a student volunteer and as an author, since I took part in the conference's doctoral consortium. I am going to share some insights from the conference with you.
Workshops and tutorials – networking with people working on the same niche topics
The conference started with three days of workshops and tutorials. Each of the 32 workshops focused on a narrow research topic, such as the evaluation of artificial intelligence, the adverse impacts of AI technologies, or the explainability of AI. The complete list of workshops is available at the conference website. Thanks to this narrow focus, attending the workshops was a great opportunity to network and discuss our work with people pursuing the same research direction, potentially leading to future collaborations.
I attended a workshop that dealt with the evaluation of artificial intelligence. The central idea throughout the workshop was that simplistic evaluation leads to models that perform well on the task we want, but do so merely by memorizing the provided data, without any understanding of the task at hand. Such models generalize poorly and fail when we move on to different data.
An example that really stuck with me went like this: “If we want to recognize cats and dogs, we can do it by showing examples to the model. But if we then want to recognize cats with top hats, the model we have is unusable, even if it previously performed without error – simply because it has never seen cats with top hats, and it is infeasible to show the model every possible variation of cats and dogs.” The intention of the workshop was to move beyond such simple evaluations by assessing the “general intelligence” of models using (among other things) concepts from psychology and the evaluation of animal intelligence.
Another interesting workshop dealt with the adverse impacts of AI technologies. It focused on the ethical side of artificial intelligence, with many speakers coming from non-technical fields. This workshop also included papers and presentations from other colleagues from KInIT. Ján Čegiň presented his paper on crowdsourcing adversarial examples using a game to improve false information detection. Adrián Gavorník gave a presentation on improving assessments of the trustworthiness of AI.
Besides these two workshops, we also attended the workshop on the explainability of AI. It included a presentation from our researchers Martin Tamajka and Marcel Veselý about finding faithful and understandable explanations for a given combination of model, task and data. As the workshop's name suggests, the focus was on explanations of AI decisions in all shapes and forms.
Last but not least, Peter Pavlík attended the Workshop on Complex Data Challenges in Earth Observation. He presented his paper titled Radar-Based Volumetric Precipitation Nowcasting: A 3D Convolutional Neural Network With U-Net Architecture. He showed how using volumetric data to train the nowcasting model can improve predictive accuracy in this domain. You can watch his presentation here.
Doctoral consortium and poster session – a great opportunity for networking, discussion and gathering feedback
During the last day of workshops, I attended the Doctoral Consortium at the IJCAI conference. It consisted of two parts. In the first part, each of the attending PhD students, myself included, gave a short spotlight presentation on their PhD research direction.
Afterwards, we discussed our current and future research directions and received feedback from other students and selected experts in the field. This first part served as a great networking opportunity between students: I obtained a lot of valuable feedback and discussed my ideas about where I envision my research going.
The most significant piece of advice was to focus on a narrower scope. It was repeated by multiple participants, as many students struggle with a scope that is too broad. The goal I had envisioned was too ambitious to complete thoroughly in the time allocated for my doctoral studies. A narrower scope generally leads to deeper, better-supported contributions.
I also discovered new and interesting possible directions for my research, and met students who work on similar topics.
In the second part, we had a really interesting talk on a topic most AI researchers struggle with – how to communicate our research to broad, non-technical audiences. The talk included a lot of tips and tricks on what to do, what to focus on, and what to avoid so that the audience understands our research. One tip that stood out to me was to create analogies with common things everyone knows. For example, to remember the year when the Magna Carta was signed (1215), we can simply remember that it was signed at “lunch time” (12:15).
After the talk, we had a career panel with Andrea Rendl, Barry O’Sullivan and Peter Wurman. We could ask any questions related to doctoral studies. It was really great to hear how these experts in the field view doctoral studies and how they think it should be approached. Some of the ideas and opinions that caught my attention were:
- the student owns the thesis, so it is okay to occasionally disagree with your supervisor about the direction of your dissertation
- the dissertation topic does not define the whole research career
- the ideal PhD student should show leadership and initiative
- getting a paper accepted is not enough; it is just as important to actively talk about the paper and the research with other researchers.
In addition to the short presentations at the Doctoral Consortium, each participant presented a poster at the main conference poster session the following day. The poster session served as a networking opportunity with all the participants of the conference. For me, it meant two hours of non-stop networking: discussing my work and its direction, and gathering further valuable feedback – on how it can help, how it can be improved, and even how to present it better to other people.
I think the combination of the doctoral consortium discussion and the poster session gave me really valuable feedback. It helped me to further flesh out my dissertation topic, to significantly improve it, and to uncover future directions that will (hopefully) enhance the potential impact of my thesis.
Keynotes and the main conference
The main part of the conference started with, in my opinion, the most interesting keynote, given by Gerhard Widmer. Its main topic was the role of artificial intelligence in music – studying music, but also playing and even teaching it. Mimicking human musical performance with AI is problematic: even though AI can already play any piece we want, it lacks the human touch. Each performer interprets a piece in their own way, leaving something of themselves in the music, so each performance sounds a bit different. An AI performance, by contrast, sounds monotonous, as the model cannot easily mimic this expressive behavior – mainly due to a lack of data and problems with evaluation. The highlight of the keynote was a live AI-human duet, in which the AI model mimicked the human player’s speed, tone and volume, producing a human-like performance.
Another two keynotes dealt with the explainability of AI. The first was a critique of current explanation methods, which do not really take the non-expert human perspective into account. This makes them unusable in practice: they were designed by AI experts, who are not well placed to design explainability tools for non-experts. The speaker made an analogy to the inmates running the asylum and argued that we should turn to the social sciences to mitigate this.
The second keynote presented a new set of explanation methods for uncovering the governing equations underlying medical data, enabling scientists to make new discoveries and non-experts to easily make use of these explanations.
All the other keynotes were just as good, dealing with the social aspects of AI, using AI to help teach coding practices, or discussing where data science is heading and why more focus on causal inference is needed. An abstract of each keynote is available here.
In addition to the keynotes, many interesting papers were presented, on topics including natural language processing, reinforcement learning, agent systems, ethics and social aspects of AI, and many more. One of the highlights was the AI for Good track, which included papers on how AI is being used to improve the world – for example, vaccine allocation, dealing with the pandemic and other health-related issues, or tackling climate change.
As part of the main conference, our researcher Róbert Móro presented our work on a black-box audit of YouTube. His presentation was part of the Sister Conferences Best Papers track, which features best papers from other AI conferences to enable better dissemination of these topics. The work was also included in the poster session, which yielded many interesting conversations about the problem.
Volunteering – an unexpected opportunity for networking
I attended the conference as a student volunteer, helping to organize different sessions during two days of the conference. Even though volunteering can be viewed as a distraction from the conference, it was instead a great opportunity for further networking, both with other volunteers and with all the participants.
The most interesting discussion I had took place during my volunteering duty on the last day of the conference, with the chair of one of the conference tracks. He works on a topic similar to mine – improving the evaluation of models beyond simple metrics and working with machine learning models trained on only a few labeled samples, although in the context of teacher models rather than a lack of labeled data. Talking to him was really eye-opening, as I got to see all the possibilities of my research topic and came away with multiple ideas for where to take my work next.