Situated, Individualized and Personalized Man-Machine-Interaction
|Project Duration||06.2002 - 06.2005|
|Contact||Michael Kellner|
As a partner in the recently founded Bavarian Research Cooperation for Situated, Individualized and Personalized Man-Machine-Interaction (FORSIP), FORWISS Passau is responsible for the subproject SIKOWO. The name reflects the objective: the development of Situated and Personalized Communication with Convenience Control Systems for Domiciles (SIKOWO). The project aims to enable both the passive, presence-based acquisition of the current situation in residential accommodations and the active, demand-led control of heating and ventilation, as well as of the sunblinds and lighting, in several rooms, in order to increase living comfort and minimize energy consumption.
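The presence-based control idea can be illustrated with a minimal sketch. This is not the project's actual controller; the function name, setpoints and daylight threshold are hypothetical assumptions chosen only to show how detected presence could drive heating, lighting and sunblinds per room.

```python
# Hypothetical sketch: presence-driven setpoints for one room.
# All names, temperatures and the 300 lux threshold are illustrative
# assumptions, not values from the SIKOWO project.

def control_room(occupied: bool, daylight_lux: float) -> dict:
    """Return actuator setpoints for one room based on detected presence."""
    if occupied:
        return {
            "heating_setpoint_c": 21.0,           # comfort temperature
            "lights_on": daylight_lux < 300.0,    # switch on only when dark
            "blinds_open": daylight_lux >= 300.0, # prefer daylight when available
        }
    # Unoccupied room: fall back to an energy-saving state.
    return {"heating_setpoint_c": 17.0, "lights_on": False, "blinds_open": False}
```

An occupied, dimly lit room would thus get the comfort temperature with the lights switched on, while an empty room drops to the economy setpoint with everything off.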
The main task of SIKOWO is the presence-based recognition of the current situation in the respective rooms by means of a multi-camera image-processing system. In the first phase, the system will attempt to detect moving objects and determine their position. In the next step these objects will be classified, with the goal of distinguishing whether a pet, a child or an adult is in the room. In addition, stationary items must be detected so that partial occlusions of moving objects can be handled.
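The first two steps above can be sketched in simplified form: detect moving pixels by frame differencing and classify the resulting blob by its apparent height. A real multi-camera system would use calibrated background models and trained classifiers; the functions, thresholds and the size-based pet/child/adult rule below are toy assumptions for illustration only.

```python
# Toy sketch of motion detection and blob classification. Frames are plain
# lists of lists of grey values; thresholds and size classes are assumptions.

def moving_pixels(prev, curr, thresh=30):
    """Return (row, col) coordinates whose grey value changed by more than thresh."""
    return [(r, c)
            for r, row in enumerate(curr)
            for c, v in enumerate(row)
            if abs(v - prev[r][c]) > thresh]

def classify_blob(pixels, cm_per_pixel=2.0):
    """Very rough size-based classification of a detected blob."""
    if not pixels:
        return "empty"
    rows = [r for r, _ in pixels]
    height_cm = (max(rows) - min(rows) + 1) * cm_per_pixel
    if height_cm < 60:
        return "pet"
    if height_cm < 140:
        return "child"
    return "adult"
```

For example, a blob spanning 90 image rows at an assumed 2 cm per pixel corresponds to 180 cm and would be labelled "adult".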
Activity classification represents the third step of the project. The system should be able to distinguish between a person who is, for example, exercising, reading or sleeping. The last and most difficult phase of the project is to recognize a specific person's presence, so that his or her individual needs can be taken into account. All of these data are transmitted to the intelligent convenience control system, which then activates its subsystems accordingly.
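One simple way to approach the activity-classification step is to look at how much motion occurs over a window of frames: sleeping produces almost none, reading produces small occasional movements, and sports produce sustained large ones. The sketch below encodes that idea; the thresholds and labels are illustrative assumptions, not values from the project.

```python
# Hedged sketch of activity classification from motion energy
# (e.g. the number of changed pixels per frame over a sliding window).
# The thresholds 5 and 50 are arbitrary illustrative assumptions.

def classify_activity(motion_energy_per_frame):
    """Map mean motion energy over a frame window to an activity label."""
    if not motion_energy_per_frame:
        return "unknown"
    mean = sum(motion_energy_per_frame) / len(motion_energy_per_frame)
    if mean < 5:
        return "sleeping"   # almost no movement
    if mean < 50:
        return "reading"    # small, occasional movements
    return "sports"         # sustained large movements
```

The resulting label, together with the detected identity, is the kind of datum that would be forwarded to the convenience control system described above.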
The project partners at FAU Erlangen are developing, among other things, a dialogue management system including speech recognition for SIKOWO (subproject SIPaDIM), while the partners at the Technical University of Munich (subproject SIPBILD) concentrate on the acquisition and analysis of gestures and facial expressions.