Invited speakers

Professor Carolina Wählby, Uppsala University
Carolina Wählby obtained her PhD in Digital Image Analysis at the Department of Information Technology at Uppsala University, and carried out postdoctoral research at the Department of Genetics and Pathology at the same university, focusing on image analysis as a quantitative tool for novel molecular markers in microscopy. She established her own research group at the Centre for Image Analysis at Uppsala University in 2007, and was recruited to the Imaging Platform at the Broad Institute of Harvard and MIT in 2009. Her research at the Broad focuses on the development of algorithms for high-throughput screening assays using C. elegans as a model organism, and she is part of the team developing the free and open-source CellProfiler software (www.cellprofiler.org). She joined SciLifeLab Sweden as Associate Professor in 2011 and became full Professor in 2014, heading a research group on Quantitative Microscopy. The lab develops digital image processing methods for extracting information from medical image data, with applications ranging from cancer diagnosis to high-throughput screening for the discovery of novel treatments of infectious disease. In 2010, Wählby received NIH funding for her research on methods for C. elegans screening; she was appointed a ‘strategic recruitment’ for SciLifeLab Sweden in 2011, and was awarded a ‘young investigator’ grant from the Swedish Research Council in 2012. The research field as such, known as bioimage informatics, is growing rapidly at the intersection of computer science and biomedicine, as microscopy is becoming a measurement tool and not only a means of visualization.

Title: Image-based drug discovery – challenges and opportunities at the microscopy scale.

Abstract
High-throughput screening (HTS) is a technique for testing large libraries of chemical or genetic perturbants on biomedical model systems to better understand disease pathways or find new treatments. Often, such disease model systems are cultured cells, and automated imaging, cell segmentation, feature extraction, and classification quantify the response to treatment. I will present one such study, in which we capture millions of images to characterize human cancer stem cells (CSCs) with the aim of elucidating patient-specific mechanisms of action and targeting disease subgroups. Compared to one-dimensional readouts of cell viability or metabolic activity, image data has the power not only to tell us whether cells survive treatment, but also to give us clues on how they die, based on careful measurements of morphology. Algorithms for automated cell segmentation and measurements of morphology have been around for decades. Still, there is much left to explore in this rich type of data, and one of the major challenges in the quantification of cell morphology is overcoming the bias introduced by cell density.
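To make the pipeline above concrete, here is a minimal sketch of the segmentation and feature-extraction steps of such an image-based screen. It is an illustration under my own assumptions (scikit-image as the toolkit, a hypothetical file name, Otsu thresholding as the segmentation method), not the CellProfiler implementation:

    # Minimal sketch: segment nuclei and measure per-cell morphology.
    # Assumes scikit-image; the file name is a hypothetical placeholder.
    import numpy as np
    from scipy import ndimage as ndi
    from skimage import io, filters, measure, morphology

    img = io.imread("well_A01_dapi.tif")  # hypothetical nuclear-stain image

    # Segmentation: Otsu threshold, remove debris, label connected components.
    mask = img > filters.threshold_otsu(img)
    mask = morphology.remove_small_objects(mask, min_size=50)
    labels, n_cells = ndi.label(mask)

    # Feature extraction: simple morphology descriptors per cell; a classifier
    # comparing treated vs. untreated wells would be trained on such features.
    feats = np.array([(r.area, r.eccentricity, r.solidity)
                      for r in measure.regionprops(labels)])
    print(n_cells, "cells; mean area:", feats[:, 0].mean())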
Modeling disease by culturing cells allows for efficient analysis and exploration. However, many diseases and biological pathways can be better studied in whole animals – particularly diseases that involve organ systems and multicellular interactions, such as metabolism, infection and embryo development. The worm Caenorhabditis elegans and the zebrafish are well-established and effective model organisms used by thousands of researchers worldwide to study complex biological processes. I will show how novel image-analysis algorithms are capable of scoring phenotypic changes in 2D and 3D high-throughput assays using these systems.

———————————————————————————————————————————————————————————————————

Associate Professor Robert Jenssen, Department of Physics and Technology (DoPT), University of Tromsø
Jenssen has published extensively on machine learning and pattern recognition methodology, focusing especially on information theoretic learning, kernel methods, spectral dimensionality reduction, clustering, and classification. Jenssen’s machine learning group (site.uit.no/ml) focuses mainly on methodology and algorithms, and on health data analytics applications (mining electronic health records). Robert Jenssen is also Senior Researcher at the Norwegian Centre for Integrated Care and Telemedicine, University Hospital of North Norway.

Title: Theory and Applications of Information Theoretic Machine Learning

Abstract
Machine learning research has had tremendous impact on data analytics and applications, including image analysis.
This talk will present a successful direction in machine learning that utilizes information theoretic optimization criteria to capture the higher-order statistical properties of the data in the parameters of the learning machine. This line of research is known as information theoretic learning (ITL), and has in part been pioneered at the Machine Learning @ UiT Lab at the University of Tromsø (UiT), Norway. Interestingly, ITL shares many properties with so-called kernel-based and graph spectral approaches, and together these perspectives shed new light on the field. The talk will focus in particular on the recent method known as kernel entropy component analysis, for dimensionality reduction and spectral clustering, which synergistically fuses aspects of information theory, kernels, and graphs.
Applications of kernel entropy component analysis, including image clustering, will be presented, as well as further theoretical developments and applications of the method by other groups, across various domains of research.
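As a concrete illustration, here is a small sketch of kernel entropy component analysis following its published formulation: the kernel matrix is eigendecomposed, and the embedding keeps the eigenpairs that contribute most to the Rényi quadratic entropy estimate V = (1/N²) Σᵢ λᵢ(1ᵀeᵢ)², rather than those with the largest eigenvalues as in kernel PCA. The code and its parameter choices are my own, not the lab’s reference implementation:

    # Sketch of kernel entropy component analysis (kernel ECA).
    import numpy as np
    from scipy.spatial.distance import cdist

    def kernel_eca(X, k=2, sigma=1.0):
        # Gaussian kernel matrix and its eigendecomposition K = E diag(lam) E^T.
        K = np.exp(-cdist(X, X, "sqeuclidean") / (2 * sigma ** 2))
        lam, E = np.linalg.eigh(K)
        lam = np.clip(lam, 0.0, None)        # guard against numerical negatives
        # Entropy contribution of each eigenpair: lam_i * (1^T e_i)^2.
        contrib = lam * (E.sum(axis=0) ** 2)
        top = np.argsort(contrib)[::-1][:k]  # keep the k most entropy-preserving
        return E[:, top] * np.sqrt(lam[top]) # kernel-PCA-style embedding

    Y = kernel_eca(np.random.randn(100, 5), k=2, sigma=2.0)

Unlike kernel PCA, the selected components need not be the leading ones, which is what gives the method its distinct behavior for spectral clustering.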
———————————————————————————————————————————————————————————————————

Professor Thomas B. Moeslund, Aalborg University (AAU)
Thomas B. Moeslund is head of the Visual Analysis of People Lab as well as the Media Technology Section – both at Aalborg University, Denmark. His research covers all aspects of computer vision, with a special interest in detecting, tracking and, more generally, understanding humans via cameras. He has been involved in 19 national and international research projects and co-chaired 17 international conferences/workshops/tutorials. Awards include a most-cited-paper award in 2009, a teacher-of-the-year award in 2010, the Northern Jutland University Foundation Innovation Award in 2013, and best paper awards in 2010, 2012 and 2014. For further details: www.create.aau.dk/tbm

Title: Analyzing People in Thermal Imagery

Abstract
Automatic detection, tracking and analysis of people is of great interest for many applications, and doing it remotely using computer vision has obvious benefits. The past decade’s research in this field has, however, shown that methods are significantly challenged when applied in real-life scenarios over extended periods of time. The rise of thermal cameras has paved the way for more robust solutions to these problems, due to their robustness towards changing environmental conditions, i.e. the weather and the illumination. Thermal images, however, lack textural details compared to RGB images and are therefore not the solution to all problems. In this talk we will look at what thermal cameras are and their pros and cons when it comes to visual analysis of people. The talk will include practical examples as well as research examples where thermal cameras – alone or in combination with other sensors – are applied to robustly analyze people.
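As a toy illustration of why thermal imagery simplifies parts of this problem, the sketch below detects warm, person-sized blobs with nothing more than automatic thresholding and connected components. This is a deliberately simple baseline under my own assumptions (people warmer than the background, an 8-bit single-channel image, a hypothetical file name and area bounds), not the methods of the talk:

    # Toy baseline: warm-blob detection in a thermal frame (OpenCV).
    import cv2

    frame = cv2.imread("thermal_frame.png", cv2.IMREAD_GRAYSCALE)

    # Otsu's threshold separates warm foreground from the cooler background.
    _, mask = cv2.threshold(frame, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    # Keep person-sized blobs; sensible area bounds depend on camera geometry.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    people = [cv2.boundingRect(c) for c in contours
              if 200 < cv2.contourArea(c) < 20000]
    print(len(people), "candidate detections:", people)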

———————————————————————————————————————————————————————————————————

Professor Kalle Åström, Lund University
Kalle Åström received his B.Sc. in Mathematics in 1990, M.Sc. degree in Engineering Physics in 1991 and Ph.D. in Mathematics in 1996 from Lund University, Sweden. His thesis was awarded Best Nordic Ph.D. Thesis in pattern recognition and image analysis 1995–1996 at the Scandinavian Conference on Image Analysis, 1997. He has been a post-doctoral research fellow and associate professor, and is now professor at the Centre for Mathematical Sciences, Lund University. He teaches undergraduate and graduate courses in mathematics, image processing and computer vision. His current research interests include stochastic analysis of low-level vision, computer vision for autonomous guided vehicles, geometry and algebra of multiple views of points, curves and surfaces, structure from sound, radio and WiFi, cognitive vision, handwriting recognition, medical image analysis and bioinformatics. He is the co-author of 10 patent applications and 140 refereed conference and journal publications, and a co-founder of three spin-off companies: Decuma, Cognimatics and Spiideo.

Title: Small-scale and large-scale 3D reconstruction and localization using vision, sound and radio.

Abstract
Automatic and robust methods for determining structure and motion can be constructed from systems that rely on several components, such as feature detection (Harris/Foerstner, SIFT, SURF, etc.), feature extraction, matching heuristics, minimal solvers, RANSAC, non-linear optimization, convex optimization and 3D modelling. In this talk I will present some of our latest research on such components and systems. Examples will primarily be given using images, but the presentation will also contain examples of how such techniques from computer vision can be applied to structure-from-motion systems using other sensors as input, e.g. sound, ultrasound, radio time-of-flight and radio signal strength.
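As an illustration of how these components fit together, here is a minimal two-view sketch using OpenCV: SIFT feature detection, ratio-test matching, a RANSAC-wrapped five-point minimal solver for the essential matrix, and relative pose recovery. The image names and camera intrinsics are hypothetical placeholders:

    # Minimal two-view structure-and-motion sketch (OpenCV).
    import cv2
    import numpy as np

    img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical images
    img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)
    K = np.array([[800., 0., 320.],
                  [0., 800., 240.],
                  [0., 0., 1.]])                          # assumed intrinsics

    # Feature detection and description.
    sift = cv2.SIFT_create()
    kp1, d1 = sift.detectAndCompute(img1, None)
    kp2, d2 = sift.detectAndCompute(img2, None)

    # Matching heuristic: Lowe's ratio test.
    pairs = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    good = [m for m, n in pairs if m.distance < 0.75 * n.distance]
    p1 = np.float32([kp1[m.queryIdx].pt for m in good])
    p2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # RANSAC around the five-point minimal solver, then relative pose.
    E, inliers = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=inliers)
    print("inliers:", int(inliers.sum()), "t:", t.ravel())

Bundle adjustment (the non-linear optimization stage) would then refine the poses and triangulated points over all views.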

———————————————————————————————————————————————————————————————————

Professor Christoph Busch, Gjøvik University College
Christoph Busch is a member of the Faculty of Computer Science and Media Technology at Gjøvik University College (GUC), Norway. He holds a joint appointment with the computer science faculty at Hochschule Darmstadt (Darmstadt, Germany), and has lectured on biometric systems at the Technical University of Denmark (DTU) since 2007.
He received his PhD in the field of computer graphics in 1997. In the same year he joined the Fraunhofer Institute for Computer Graphics (Fraunhofer IGD) as head of the Security Technology department, and he has since been responsible for the acquisition, management and control of various applied research and development projects.
Christoph Busch has published more than 200 technical papers and has been a speaker at numerous conferences. He has served on various program committees (NIST IBPC, IBC, BSI-Congress, GI-Congress, DACH, WEDELMUSIC, EUROGRAPHICS) and as a reviewer for several conferences, journals and magazines (ACM SIGGRAPH, IEEE CG&A, IEEE Transactions on Signal Processing, Elsevier Computers & Security, etc.). He is also an appointed member of the editorial board of the IET Biometrics journal.
Christoph Busch has been a member of the steering committee of the BIOSIG Special Interest Group on Biometrics within the Gesellschaft für Informatik (GI) since 2002, and was elected chair of the group in 2008.
Furthermore, on behalf of Fraunhofer, he chairs the biometrics working group of the TeleTrusT association as well as the German standardization body on Biometrics (DIN-NIA37). He is active in the international standardization committee on Biometrics ISO/IEC JTC1 SC27 WG5, and is convenor of ISO/IEC JTC1 SC37 WG3 on biometric data interchange formats.

Title: Sample quality metrics for biometric systems

Abstract
Image quality measurement plays a vital role in improving biometric system accuracy and efficiency: during the capture process, in database maintenance, and in the invocation of quality-directed processing of samples in multimodal systems. If quality can be improved, either by sensor design, by user interface design, or by standards compliance, better performance can be realized. For all of these aspects, the ability to analyze the quality of a reference or probe sample is needed.

Biometric quality analysis remains a technical challenge, as the quality measures should reflect the performance sensitivities of one or more target biometric comparison subsystems. The talk will present a standardized framework for quality metrics in biometrics. Moreover, it will analyze fingerprint recognition in more detail and discuss methods that predict biometric performance. Also of relevance are those fingerprint image quality metrics that can provide actionable feedback to the data subject or the operating personnel during the capture process, and that could potentially initiate a re-capture of the fingerprint image. The talk will present results of the ongoing NFIQ 2.0 project, a cooperation between the US National Institute of Standards and Technology (NIST), the German Federal Office for Information Security (BSI), and their academic partners.
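To give a flavor of what such quality metrics compute, the sketch below implements one local feature of the kind NFIQ aggregates: the orientation certainty level, which measures how coherent the ridge orientation is in each image block via the eigenvalues of the local gradient covariance. The block size and implementation details are my own choices, not the NFIQ 2.0 reference code:

    # Sketch of a local fingerprint quality feature: orientation certainty.
    import numpy as np

    def orientation_certainty(img, block=16):
        gy, gx = np.gradient(img.astype(float))
        rows, cols = img.shape[0] // block, img.shape[1] // block
        ocl = np.zeros((rows, cols))
        for i in range(rows):
            for j in range(cols):
                bx = gx[i*block:(i+1)*block, j*block:(j+1)*block].ravel()
                by = gy[i*block:(i+1)*block, j*block:(j+1)*block].ravel()
                # Eigenvalues (ascending) of the 2x2 gradient covariance.
                lam = np.linalg.eigvalsh(np.cov(np.vstack([bx, by])))
                s = lam.sum()
                # 1 for perfectly coherent ridges, 0 for isotropic noise.
                ocl[i, j] = (lam[1] - lam[0]) / s if s > 1e-12 else 0.0
        return ocl

Low-certainty blocks flag smudged or dry regions, exactly the kind of actionable feedback that could trigger a re-capture.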