
AI Lund WS: AI & ML Technologies

This AI Lund fika-to-fika workshop focuses on the development of the technologies that form the basis of Artificial Intelligence and Machine Learning. Possible topics include the research front for different types of AI as well as different techniques for machine learning.

When: 30 August at 9.30 - 15.30  

Where: MA:6 Annexet*, Sölvegatan 20, LTH, Lund University, Lund, Sweden

Programme

9.30 Fika and mingle

10.15 Introduction and update regarding the AIML@LU network 

10.30 Ongoing projects

Martin Karlsson, Lund University: Robot Programming by Demonstration Based on Machine Learning 

Abstract: Whereas humans would prefer to program on a high level of abstraction, for instance through natural language, robots require very detailed instructions, for instance time series of desired joint torques. In this research, we aim to meet the robots halfway by enabling programming by demonstration.

Marcus Klang, Lund University: Finding Things in Strings

Abstract: Things such as organizations, persons, or locations are all around us, particularly in the news, forum posts, Facebook updates, and tweets. With named things, we can introduce background in news articles, summarize articles, build question-answering systems, and much more. However, it is challenging to find and link them, as they are often ambiguous. In this work, we aim to enrich the knowledge graph Wikidata with new relations and things found only in the articles of multilingual Wikipedia. The long-term goal is the development of a multilingual system that can answer any natural question and improve how we find new relevant information.
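As a toy illustration of why linking is hard, the sketch below disambiguates the ambiguous surface form "Paris" between two candidate entities by word overlap with a short description. The knowledge-base entries and descriptions are invented for illustration (only "Q90" is the real Wikidata identifier for the city); real systems work with Wikidata-scale candidate sets and learned context models rather than bag-of-words overlap.

```python
# Toy entity linker: two candidate entities share the surface form "Paris";
# we pick the one whose (invented) description best overlaps the sentence.
KB = {
    "Paris": [
        ("Q90", "Paris capital city France Seine"),
        ("Q-PERSON", "Paris Hilton American media personality"),
    ],
}

def link(mention, sentence):
    words = set(sentence.lower().split())
    # Score each candidate by description/sentence word overlap.
    return max(KB[mention],
               key=lambda c: len(words & set(c[1].lower().split())))[0]

print(link("Paris", "Paris is the capital of France"))  # prints Q90
```

With the context "capital of France", the city candidate overlaps on three words while the person candidate overlaps only on "paris", so the city wins.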

Najmeh Abiri, Lund University: Variational Autoencoders 

Joakim Johnander, Linköping University: Deep Recurrent Neural Networks for Video Object Segmentation 

Abstract: Given a video with a target or object marked in the first frame, we aim to track and segment the target throughout the video. A fundamental challenge is to find an effective representation of the target and background appearance. In this work, we propose to tackle this challenge by integrating a probabilistic model as a differentiable and end-to-end trainable deep neural network module.

12.00 Lunch and mingle

13.00 Future trends and interesting examples

Michael Green, Desupervised: Bayesian Deep Probabilistic Programming: Are we there yet?

Abstract: Few would argue against the Bayesian paradigm being the most useful one for modeling problems where parameter estimates are inherently uncertain. Unfortunately, most interesting models, especially the ones we know from deep learning, have been very hard to fit in any reasonable amount of time. When dealing with 10+ million parameters and 100,000+ data points, Markov chain Monte Carlo just isn't a viable option. This is why almost every practitioner in deep learning defaults to maximum likelihood estimates obtained by optimization via stochastic gradient descent: it is much faster. In this talk we'll explore a promising way of doing full Bayesian inference on large-scale models via stochastic black-box variational inference.
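The talk's theme can be illustrated with a minimal sketch of variational inference using the reparameterization trick: fit a Gaussian approximation q(theta) = N(mu, sigma^2) to the posterior of a toy conjugate model (a Gaussian mean with known noise) by stochastic gradient ascent on the ELBO. The model, step size, and sample count are all invented for illustration; deep models at the scale discussed in the talk would use a framework such as Stan, Pyro, or TensorFlow Probability rather than hand-coded gradients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: x_i ~ N(theta, 1) with prior theta ~ N(0, 1).
x = rng.normal(2.0, 1.0, size=100)
n = len(x)

# The exact posterior is N(sum(x)/(n+1), 1/(n+1)) -- used only as a check.
post_mean = x.sum() / (n + 1)
post_std = (1.0 / (n + 1)) ** 0.5

# Variational family q(theta) = N(mu, sigma^2), sigma = exp(log_sigma).
mu, log_sigma = 0.0, 0.0
lr = 5e-4
for step in range(5000):
    eps = rng.normal(size=8)              # Monte Carlo noise samples
    sigma = np.exp(log_sigma)
    theta = mu + sigma * eps              # reparameterization trick
    # Gradient of the log joint w.r.t. theta: sum_i(x_i - theta) - theta.
    g = x.sum() - (n + 1) * theta
    # Stochastic ELBO gradients (the +1 comes from the Gaussian entropy).
    mu += lr * g.mean()
    log_sigma += lr * ((g * sigma * eps).mean() + 1.0)

print(f"VI estimate:     mean={mu:.3f}, std={np.exp(log_sigma):.3f}")
print(f"Exact posterior: mean={post_mean:.3f}, std={post_std:.3f}")
```

Because the model is conjugate, the fitted mu and sigma can be checked directly against the closed-form posterior; for deep networks no such check exists, which is exactly why scalable black-box methods matter.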

Erik Gärtner, Lund University: Intrinsic Motivation - Curiosity and learning for the sake of learning  

Abstract: Humans, as well as other animals, are curious beings that develop cognitive skills on their own without the need for external goals or supervision. Inspired by this, how can we encourage AIs to learn and solve tasks by themselves? This talk presents the fascinating area of intrinsic reward in the context of reinforcement learning by showcasing recent articles and results.
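As a toy illustration of the idea, the sketch below runs tabular Q-learning in a small corridor environment with no extrinsic reward at all; the only learning signal is a count-based novelty bonus, one common form of intrinsic reward. The environment, sizes, and hyperparameters are invented for illustration and are not from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# A 10-state corridor with NO extrinsic reward: the only learning signal
# is an intrinsic, count-based novelty bonus r = 1/sqrt(visit count).
N_STATES, N_ACTIONS = 10, 2                 # actions: 0 = left, 1 = right
Q = np.zeros((N_STATES, N_ACTIONS))
visits = np.zeros(N_STATES)
alpha, gamma, eps = 0.5, 0.9, 0.2

for episode in range(500):
    s = 0
    for t in range(40):
        a = rng.integers(N_ACTIONS) if rng.random() < eps else int(Q[s].argmax())
        s_next = min(N_STATES - 1, max(0, s + (1 if a == 1 else -1)))
        visits[s_next] += 1
        r = 1.0 / np.sqrt(visits[s_next])   # intrinsic reward: novelty only
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print("states explored:", int((visits > 0).sum()), "of", N_STATES)
```

Because the bonus decays for familiar states, the agent is steadily pushed toward the unvisited end of the corridor even though no state ever pays an external reward, which is the core intuition behind curiosity-driven exploration.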

14.30 Summary and conclusions

15.00 Fika and mingle