Martin Tamajka
Research areas: natural language processing, computer vision, deep neural networks, explainable AI, medical image analysis
Position: Research Engineer
Martin is a researcher focusing on novel methods of artificial intelligence, mostly deep learning and computer vision, as well as on increasing the transparency and reliability of neural networks through explainability and interpretability methods. His past research also includes the analysis of multidimensional medical images and image data in general.
Besides research, Martin is a passionate teacher. He has supervised and co-supervised 20 Master’s and Bachelor’s theses and has led multiple successful student teams in the team project competition. His teaching activities are not limited to academia. As an invited speaker, he has presented at multiple industry conferences and tech meetups, including Data Science Club, Tech Summit, and Life science innovation day. He regularly leads hands-on workshops on deep learning and other cutting-edge technologies at events such as Openslava, CESCG, and BaseCamp, and he has delivered a series of eight machine learning courses for an international company.
Selected Projects
Other Notable Projects
Modeling of Human Visual Attention Using Automatic Visual Recognition of Scenes and Objects
Visual recognition of object classes in video sequences by linking semantic segmentation at the local level and global segmentation of saliency
Selected Publications
Selected Student Supervision
Master
- Zaťko Timotej – Application of interpretability and explainability of neural networks in the evaluation of medical images. Defended 2021.
- Kolibášová Martina – Estimating the reliability of neural network decisions in evaluation of medical images. Defended 2021.
- Sebechlebský Šimon – Application of interpretability and explainability in detection of false predictions in the evaluation of medical images. Ongoing.
- Mikuš Matej – Prediction of Alzheimer’s disease progression based on deep learning. Defended 2020.
- Pavlík Peter – Interpretable diagnosis of Alzheimer’s disease based on deep learning. Defended 2020.
- Grivalský Štefan – Segmentation of brain tumours in volumetric medical data using deep learning. Defended 2019.
- Mňačko Tomáš – Nerve segmentation in ultrasound images using deep learning. Defended 2019.
Bachelor
- Háberová Ivana – Explainability and interpretability of neural networks in classification of medical images. Defended 2021.
- Sandanus Michal – Classification of volumetric medical images and other data using methods of machine learning. Defended 2021.
- Veselý Marcel – Explainability and interpretability of neural networks in classification of medical images. Defended 2021. (Research Intern at KInIT)
- Králik Timotej – Video analysis of sport matches using methods of computer vision and artificial intelligence. Defended 2021.
- Mikuš Matej – Segmentation of anomalies in volumetric medical data. Defended 2018.