Yoshinari Kameda, Professor
Our research group focuses on sophisticated collaboration through video media, connecting people and data with the help of rich computational resources. One major topic is data acquisition and recognition from images and video. The other is mixed reality technology that aligns useful but invisible information with the real world in front of our eyes.
About our group
Computational media are advanced information media in which high sensing functionality and vast computing resources over computer networks are smartly unified. With computational media, we can deliver appropriate information to anyone, wherever and whenever it is needed.
Computational media rest on advanced, intelligent visual information processing technologies. Our research includes developing new technologies, e.g., mechanisms for finding appropriate sensory devices, mechanisms for locating and identifying information receivers, mechanisms for converting information to fit the properties of each receiver, and mechanisms for preventing information eavesdropping, alteration, and unauthorized copying.
Many surveillance cameras have been installed in public spaces, and although they play an important role in keeping our daily lives safe and secure, some people feel uncomfortable with them. Computational media will give cameras a new role through which people can enjoy the advantages of IT life. For example, users can effectively utilize video information through our methods of “see-through vision” and “visual support for car drivers in intelligent transportation systems”.
- ・3D Live Video of Soccer Games: Free-viewpoint video delivered online to multiple viewers over a computer network. Viewers can freely fly through the soccer field and watch the game from an arbitrary viewpoint.
- ・See-Through Vision: A framework that enables people to see through objects by processing video from surveillance cameras. It also covers privacy-safe visualization for cases where other people appear in the view.
- ・Collaborative Mixed Reality: An augmented communication system that fuses real and virtual visual information.
- ・Massive Sensing: Intelligent video processing and human behavior understanding by networked sensors.
- ・Visual Support in Intelligent Transportation Systems: Enhanced vision support for vehicle drivers based on augmented reality technology. It utilizes online video from road surveillance cameras.
- ・Video Surveillance by Mutual Utilization of Fixed and Mobile Cameras: Extending the surveillance area by integrating the advantages of fixed surveillance cameras and mobile cameras.
- ・Hostile Intent Analysis: Advanced face analysis using non-conventional image sensing data, with intelligent media processing inspired by psychological analysis.
Fig.1 Sensing web
We have joined a nationally funded project named “Sensing Web: content engineering for social use of sensing information,” which ran from 2007 to 2009 and aimed to promote the safe use of ubiquitous sensor networks spreading widely through our community.
In the project we proposed a new see-through vision scheme and succeeded in building a preliminary system that intelligently processes video data flowing over the sensor network, from environmental cameras to a mobile visualization device in a user’s hand, so that the user can see through objects.
We demonstrated the system in a popular commercial shopping mall in Kyoto (Shinpukan, Nakagyo-ku, Kyoto, Japan). The first figure is a snapshot of a user on the second floor seeing through a parasol in the courtyard at ground level. The second figure shows the processing flow of see-through vision: an image is taken from an environmental camera, the region below the parasol is segmented, and a see-through view of the parasol (bottom right) is shown on the screen of the mobile device. Over 200 ordinary people visited the Sensing Web exhibition, experienced our system, and posted affirmative comments on our see-through vision.
Fig.2 See-Through Vision Mechanism
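The final compositing step of the flow above can be sketched in a few lines. This is a minimal illustration, not our deployed system: it assumes the environmental-camera view has already been warped (registered) into the user's viewpoint, and the function and parameter names are hypothetical.

```python
import numpy as np

def see_through_composite(user_view, env_view, mask, alpha=0.6):
    """Blend occluded-region pixels from the environmental camera into
    the user's view, making the occluder appear semi-transparent.

    user_view, env_view: H x W x 3 float arrays in [0, 1]
    mask: H x W boolean array, True where the occluder should turn transparent
    alpha: transparency of the occluder (1.0 = fully see-through)
    """
    m = mask[..., None]  # broadcast the mask over the color channels
    blended = (1.0 - alpha) * user_view + alpha * env_view
    return np.where(m, blended, user_view)

# Toy example: a 4x4 black "user view" with a 2x2 occluded block,
# revealing a white "environmental view" behind it.
user = np.zeros((4, 4, 3))
env = np.ones((4, 4, 3))
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
result = see_through_composite(user, env, mask, alpha=0.5)
```

In practice the mask comes from segmenting the occluder (the parasol) in the user's view, and the registration between the two cameras would be estimated, e.g., with a planar homography.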