In this project, we explore the use of expected value of information (EVI) to control the acquisition and analysis of data from the multiple perceptual sensors used in the SEER system for identifying office activities. SEER uses layered hidden Markov models (LHMMs) operating at different temporal granularities to diagnose situations in offices from real-time streams of evidence (video, audio, and computer interactions). We review the overall architecture of the legacy SEER system, describe how we integrated the EVI analyses, and show how EVI computations endow SEER's descendant, Selective SEER (S-SEER), with the ability to balance the computation required for perceptual analysis against the discriminatory power of the sensors. Finally, we report on several experiments that probe the value of using EVI in the system.
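As a rough illustration of the kind of one-step EVI computation involved in selective perception, the sketch below shows a greedy choice of which perceptual feature to evaluate next. It is an assumption-laden toy example: the activity labels, observation models, utility matrix, and sensing costs are invented, and S-SEER's actual inference runs over layered HMMs with richer models rather than a single static belief.

```python
import numpy as np

# Hypothetical one-step EVI calculation for selective perception.
# All names, models, and costs here are invented for illustration.

states = ["phone conversation", "face-to-face conversation", "presentation"]
utility = np.eye(len(states))  # U(a, s): 1 if chosen label a matches true state s

def expected_utility(belief):
    """Expected utility of the best action (label) under the current belief."""
    return max(utility[a] @ belief for a in range(len(states)))

def evi(belief, likelihood, cost):
    """Expected value of information for one candidate feature.

    likelihood[e, s] = p(e | s) over a discretized evidence variable e;
    cost is the computational cost of evaluating the feature, in utility units.
    """
    value = 0.0
    for e in range(likelihood.shape[0]):
        p_e = likelihood[e] @ belief                  # predictive probability of e
        if p_e > 0:
            posterior = likelihood[e] * belief / p_e  # Bayes update of the belief
            value += p_e * expected_utility(posterior)
    return value - expected_utility(belief) - cost

def select_feature(belief, features):
    """Greedily pick the feature with the highest positive EVI, if any."""
    scores = {name: evi(belief, lik, cost) for name, (lik, cost) in features.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

# Example: decide whether to run a (costly) audio analysis or act now.
belief = np.array([0.5, 0.3, 0.2])
features = {
    "audio": (np.array([[0.8, 0.1, 0.2],
                        [0.2, 0.9, 0.8]]), 0.05),  # two discretized outcomes
}
print(select_feature(belief, features))
```

Under a greedy policy of this kind, a sensor is evaluated only when its expected improvement in decision quality exceeds its computational cost; when no feature has positive EVI, the system acts on its current belief.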
'Selective Perception Policies for Guiding Sensing and Computation in Multimodal Systems: A Comparative Analysis', Nuria Oliver & Eric Horvitz. Submitted to CVIU Journal.
'Layered Representations for Learning and Inferring Office Activity from Multiple Sensory Channels', Nuria Oliver, Ashutosh Garg & Eric Horvitz. To appear in CVIU Journal.
'Selective Perception Policies for Guiding Sensing and Computation in Multimodal Systems: A Comparative Analysis', Nuria Oliver & Eric Horvitz. Paper presented at ICMI 2003 (Vancouver, BC, Canada, November 2003).
'Layered Representations for Human Activity Recognition', Nuria Oliver, Eric Horvitz & Ashutosh Garg. Paper presented at ICMI 2002 (Pittsburgh, October 2002).
Nuria Oliver, Eric Horvitz & Ashutosh Garg. Paper presented at the Cues in Communication Workshop at CVPR 2001.
Video showing S-SEER in action as of June 2004
Live demonstration during Bill Gates's invited speech at IJCAI 2001