Funding

Our lab is currently funded by several agencies. For past funding, please see below.

Past Projects

The following projects have been completed:

ERC Starting Grant

Visual Learning and Inference in Joint Scene Models (VISLIM)

Our research lab was supported in significant part by an ERC Starting Grant.

This ERC-funded project was concerned with the joint estimation of several scene attributes from one or more images, with the aim of leveraging their dependencies. The project covered aspects of modeling, learning, and inference in such models.

Funding duration: 2013 – 2018

One of the principal difficulties in processing, analyzing, and interpreting digital images is that many attributes of visual scenes are related in complex ways. Despite this, the vast majority of today's top-performing computer vision approaches estimate a particular attribute (e.g., motion, scene segmentation, restored image, object presence) in isolation; other pertinent attributes are either ignored or crudely pre-computed without regard to any mutual relation. But since estimating a single attribute of a visual scene from images is often highly ambiguous, there is substantial potential benefit in estimating several attributes jointly.
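
The contrast can be phrased probabilistically. The following display is only an illustrative sketch with generic attribute variables a_1, …, a_K and an image I; it is not the formulation used in the project.

```latex
% Illustrative sketch only: generic scene attributes a_1, ..., a_K and
% image I; this is not the project's actual formulation.
% Estimating each attribute in isolation discards the dependencies:
\hat{a}_k = \operatorname*{arg\,max}_{a_k} \; p(a_k \mid I), \qquad k = 1, \dots, K
% Joint estimation keeps them, so evidence about one attribute
% (e.g., motion) can disambiguate another (e.g., segmentation):
(\hat{a}_1, \dots, \hat{a}_K) = \operatorname*{arg\,max}_{a_1, \dots, a_K} \; p(a_1, \dots, a_K \mid I)
```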

The goal of this project is to develop the foundations of modeling, learning, and inference in rich, joint representations of visual scenes that naturally encompass several of the pertinent scene attributes. Importantly, this goes beyond combining multiple cues: the aim is to model and infer multiple scene attributes jointly, taking advantage of their interplay and mutual reinforcement and ultimately working toward a fuller understanding of visual scenes. While the basic idea of using joint representations of visual scenes has a long history, it has only rarely come to fruition. VISLIM aims to push the state of the art significantly by developing a more general and versatile toolbox for joint scene modeling that addresses heterogeneous visual representations (discrete and continuous, dense and sparse) as well as a wide range of levels of abstraction (from the pixel level to high-level abstractions). This is expected to take joint scene models beyond conceptual appeal toward practical impact and top-level application performance. No other endeavor in computer vision has attempted to develop a similarly broad foundation for joint scene modeling. In doing so, we aim to move closer to image understanding, with significant potential impact in other disciplines of science, technology, and the humanities.

Smiths Detection

Collaborative research

Funding duration: 2016 – 2018

Intel

Faculty Support Program

Funding duration: 2015 – 2017

Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.

EU FP7 STREP

Harvesting Dynamic 3D Worlds from Commodity Sensor Clouds (Harvest4D)

This EU-funded STREP is concerned with harvesting data from commodity sensor clouds, such as cell phones and inexpensive range sensors, for capturing 3D models of our dynamic world.

Project partners: TU Vienna, TU Darmstadt, CNR, U Bonn, TelecomParisTech, TU Delft
Funding duration: 2013 – 2016

Project page

The current acquisition pipeline for visual models of 3D worlds follows the paradigm of planning a goal-oriented acquisition, sampling on site, and post-processing. The digital model of an artifact (an object, a building, up to an entire city) is produced by planning a specific scanning campaign, carefully selecting the (often costly) acquisition devices, performing the on-site acquisition at the required resolution, and then post-processing the acquired data to produce a beautified triangulated and textured model. In the future, however, we will be faced with the ubiquitous availability of sensing devices, such as smartphones, commodity stereo cameras, and cheap aerial data acquisition devices, that deliver different data streams which need to be processed and displayed in new ways.

We therefore propose a radical paradigm change in acquisition and processing technology: instead of a goal-driven acquisition that determines the devices and sensors, we let the sensors and the resulting available data determine the acquisition process. Data acquisition might become incidental to other tasks that the devices, or the people to whom the sensors are attached, carry out. A variety of challenging problems need to be solved to exploit this huge amount of data, including: dealing with continuous streams of time-dependent data, finding means of integrating data from different sensors and modalities, detecting changes in data sets to create 4D models, harvesting data to go beyond simple 3D geometry, and researching new paradigms for interactive inspection of 4D data sets. In this project, we envision solutions to these challenges, paving the way for affordable and innovative uses of information technology in an evolving world sampled by ubiquitous visual sensors.

Our approach is high-risk and an enabling factor for future visual applications. The focus is clearly on basic research questions that lay the foundation for the new paradigm of incidental 4D data capture.

DFG Research Training Group

Cooperative, Adaptive and Responsive Monitoring in Mixed Mode Environments (GRK 1362)

This DFG-funded Research Training Group addresses fundamental scientific and technological challenges arising from the collaboration of networked autonomous entities that accomplish a common task by actively monitoring the environment and responding as required, via a variety of stationary and mobile sensors and actuators. The sensing/actuating entities (and the collaborative system they form) monitor, acquire, manage, and disseminate data with the goal of deriving higher-level (context/event) information upon which the system can respond appropriately.

Funding duration: 2006 – 2015

Microsoft Research Scholarship Program

Microsoft Research provided a PhD scholarship to Uwe Schmidt.

Project partner: Microsoft Research Cambridge
Funding duration: 2011 – 2013

German Ministry of Education & Research (BMBF)

Sicherheits-Untersuchungen mittels Röntgenbild-Analyse (SICURA; security screening via X-ray image analysis)

This project was concerned with the automatic detection of objects in multi-view X-ray images.

Project partners: Smiths Detection, U Kaiserslautern, U of Applied Sciences Rhein Main, U Frankfurt
Funding duration: 2010 – 2013

German Research Foundation (DFG)

Heinz Maier-Leibnitz Prize

Funding duration: 2012

Adolf Messer Foundation

Adolf Messer Prize

Funding duration: 2011