On Multi-modal and Structured Visual Intelligence

Leonid Sigal
DFP Classroom: FSC 2300

In this talk, I will discuss recent work from my group focusing on multi-modal learning, in the form of visual question dialoging. The benefits of the proposed model include a new form of associative memory that helps with visual reference resolution, which is key to any dialog system. Time permitting, I will also talk about slightly older work on perceptual control of garment simulation, where we used very simple perceptual studies and machine learning techniques to re-parametrise simulators for ease of use by animators.

Event directions

Once you are inside the Forestry Science building, walk to the rear (south-east) of the building by passing through the large open study area and up the stairs to the 2nd-level student ("treetop") lounge area. Turn left, pass through the double doors, and room 2300 will be immediately to your right.
