Periodic Reporting for period 2 - SCENEUNDERLIGHT (Time-lapse understanding of the static and human scene and its lighting)
Reporting period: 2017-10-01 to 2019-09-30
SCENEUNDERLIGHT has demonstrated the full automation of light measurement, the online estimation of human light perception, and the integration of both aspects into a smart Light Management System (LMS). Light estimation leverages an RGBD camera and a radiosity model for light propagation, and it distinguishes the 3D scene structure, the object reflectances and the luminaire positions. For human light perception, people were localized in the environment and their visual frustum of attention (VFOA) was estimated, together with the light incident onto the VFOA as a function of their position and gaze. Finally, SCENEUNDERLIGHT has introduced a new end-to-end system architecture and implemented the autonomous system that we call the “invisible light switch” (ILS). ILS encompasses an RGBD camera, a processor, a light controller, a communication bus and the luminaires. Since ILS estimates how much light each person receives, it may switch off or dim those luminaires which are not visible to anyone, e.g. on the other side of large open spaces or behind cubicle panels. This removes the need for manual switches and boosts energy efficiency, saving up to 66% of lighting energy, as demonstrated, without compromising light quality.
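For illustration, the light-propagation component can be grounded in the classical discrete radiosity equation B = E + diag(rho) F B. The following is a minimal sketch, assuming a pre-computed form-factor matrix; all names and values are illustrative and not the project's implementation:

```python
import numpy as np

def solve_radiosity(emission, reflectance, form_factors):
    """Solve the discrete radiosity equation B = E + diag(rho) F B.

    emission     : (n,) emitted radiosity per patch (nonzero for luminaires)
    reflectance  : (n,) diffuse reflectance per patch, in [0, 1)
    form_factors : (n, n) form-factor matrix F, with F[i, j] the fraction
                   of light leaving patch i that reaches patch j
    Returns the (n,) equilibrium radiosity per patch.
    """
    n = emission.shape[0]
    system = np.eye(n) - np.diag(reflectance) @ form_factors
    return np.linalg.solve(system, emission)

# Toy example: 3 patches, patch 0 is a luminaire.
E = np.array([100.0, 0.0, 0.0])
rho = np.array([0.0, 0.5, 0.7])
F = np.array([[0.0, 0.2, 0.1],
              [0.2, 0.0, 0.3],
              [0.1, 0.3, 0.0]])
print(solve_radiosity(E, rho, F))
```

In this view, calibrating the installation amounts to estimating E, rho and F from the RGBD observations of the scene.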
SCENEUNDERLIGHT trained two early-stage researchers into experts in computer vision and machine learning, as well as in innovation and technology transfer in smart lighting. The research and innovation mission of the project was carried out within an international doctoral programme. The two early-stage researchers (ESRs) were enrolled in the University of Verona (UNIVR) PhD programme in Computer Science (2015-2018). Throughout the project, the Istituto Italiano di Tecnologia (IIT) provided expert knowledge on the specific challenge of studying the scene structure and material properties, and hosted ESR1 for six months to pursue fundamental research on those aspects. The industrial partner OSRAM hosted both ESRs for half of the project, provided scientific expertise and supervision in computer vision and machine learning, and offered essential means and support to realize the project demonstrators. Both ESRs proposed and participated in workshops, seminars, courses and schools, to disseminate their research, discuss with world-renowned scientists and peers, and keep abreast of the state of the art in their field. The training took place in academic as well as industrial environments, exposing them to diverse perspectives in science.
Concerning research and innovation, ESR1 contributed key advancements in the study of scene structure properties, under the co-supervision of Dr. Alessio Del Bue (IIT) and Dr. Fabio Galasso (OSRAM). His work includes a system to calibrate the room lighting (the positions of the luminaires), to estimate the current illumination, and to compute new parameters to re-light the room to some desired lighting pattern. The proposed method achieves state-of-the-art performance and provides better accuracy than commercial CAD modelling solutions.
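To give a flavour of the re-lighting step: if the illumination responds linearly to luminaire powers, as in a radiosity model, the dimming levels that best reproduce a desired lighting pattern can be recovered with non-negative least squares. This is a hedged sketch; the transfer matrix T and all names are illustrative assumptions, not the published method:

```python
import numpy as np
from scipy.optimize import nnls

def relight(T, target):
    """Find non-negative luminaire powers p minimizing ||T p - target||.

    T      : (m, k) transfer matrix; T[i, j] = illuminance at sample
             point i per unit power of luminaire j (from calibration)
    target : (m,) desired illuminance at the sample points
    """
    powers, residual = nnls(T, target)
    return powers

# Toy example: 4 sample points, 2 luminaires.
T = np.array([[1.0, 0.2],
              [0.8, 0.4],
              [0.3, 0.9],
              [0.1, 1.0]])
target = np.array([300.0, 300.0, 150.0, 100.0])
print(relight(T, target))
```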
ESR2 focused on the detection and recognition of people and their activities, under the supervision of Prof. Marco Cristani and Dr. Fabio Galasso. He developed models to detect people, estimate their VFOA, and track and forecast their motion. His model, motivated by social theories and experimental findings in the psychology of attention, currently defines the state of the art on the challenging recent task of people trajectory forecasting.
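For illustration only, below is a minimal PyTorch-style sketch of jointly encoding trajectory and head-pose streams with LSTMs to regress future positions. It is a hypothetical simplification, not the project's published architecture:

```python
import torch
import torch.nn as nn

class TrajHeadPoseLSTM(nn.Module):
    """Toy joint model: encode past (x, y) positions and head-pose angles
    with separate LSTMs, fuse the hidden states, regress future positions."""

    def __init__(self, hidden=64, horizon=12):
        super().__init__()
        self.traj_enc = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.pose_enc = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.decoder = nn.Linear(2 * hidden, horizon * 2)
        self.horizon = horizon

    def forward(self, traj, pose):
        # traj: (B, T, 2) past positions; pose: (B, T, 1) past head pan angles
        _, (h_traj, _) = self.traj_enc(traj)
        _, (h_pose, _) = self.pose_enc(pose)
        fused = torch.cat([h_traj[-1], h_pose[-1]], dim=-1)  # (B, 2*hidden)
        out = self.decoder(fused)                            # (B, horizon*2)
        return out.view(-1, self.horizon, 2)                 # future (x, y)

model = TrajHeadPoseLSTM()
future = model(torch.randn(8, 8, 2), torch.randn(8, 8, 1))
print(future.shape)  # torch.Size([8, 12, 2])
```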
The excellence of research in SCENEUNDERLIGHT is proven by project publications at top venues such as CVPR, ICCV, ICIP and WACV. The new disruptive technology, represented by the innovative ILS smart lighting system, has been protected by four patent filings. We expect the research and innovation to make an impact in the academic community, where it has been acknowledged as original, and in the industrial sector, where its performance beyond current commercial systems and its large energy savings make it innovative and sustainable.
Project website: http://profs.scienze.univr.it/~cristanm/sceneunderlight/
Project coordinator: Dr Fabio Galasso (fabio.galasso@gmail.com)
The second goal regarded human activities. We have proposed the use of the visual frustum of attention (VFOA) for scene understanding, activity recognition and activity forecasting, and we have implemented models to extract it from images. Furthermore, we have proposed and developed a novel LSTM-based model that learns how the motion and head pose streams relate, and uses them jointly to forecast people's trajectories and motion.
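As a toy illustration of VFOA extraction, one may model the frustum as a cone anchored at the head and oriented along the gaze; the half-angle, range and all names below are illustrative assumptions:

```python
import numpy as np

def in_vfoa(head_pos, gaze_dir, point, half_angle_deg=30.0, max_range=6.0):
    """Return True if `point` falls inside a conical visual frustum of
    attention anchored at `head_pos` and oriented along `gaze_dir`."""
    to_point = np.asarray(point, dtype=float) - np.asarray(head_pos, dtype=float)
    dist = np.linalg.norm(to_point)
    if dist == 0.0 or dist > max_range:
        return False
    gaze = np.asarray(gaze_dir, dtype=float)
    gaze = gaze / np.linalg.norm(gaze)
    cos_angle = to_point @ gaze / dist
    return cos_angle >= np.cos(np.radians(half_angle_deg))

# Person at the origin looking along +x; a desk at (2, 0.5, 0) is attended.
print(in_vfoa([0, 0, 0], [1, 0, 0], [2, 0.5, 0]))  # True
```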
The two aspects are tightly intertwined, since the structure of the scene supports and constrains human activities, while at the same time human activities influence the scene structure. We have integrated both aspects into a smart lighting management system around the key idea of the "Invisible Light Switch" (ILS): giving the user the feeling of "all lit" while the scene is only partially lit, thus providing comfortable illumination while saving energy where it cannot be seen. ILS is a new end-to-end system architecture. In office tests, we have managed to save up to 65% of lighting energy without compromising light quality.
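A minimal sketch of the ILS control idea follows, assuming a calibrated matrix of per-luminaire light contributions to each occupant's VFOA; all names and thresholds are illustrative, not the deployed controller:

```python
import numpy as np

def ils_dimming(contribution, occupied, threshold=0.05):
    """Decide per-luminaire power levels for the Invisible Light Switch.

    contribution : (n_people, n_lum) light each luminaire delivers into
                   each person's VFOA (from calibration/estimation)
    occupied     : (n_people,) boolean mask of currently present people
    Returns (n_lum,) power levels in [0, 1]: full power for luminaires
    visible to someone present, off otherwise.
    """
    if not occupied.any():
        return np.zeros(contribution.shape[1])
    visible = contribution[occupied].max(axis=0) > threshold
    return visible.astype(float)

C = np.array([[0.9, 0.01, 0.0],    # person 0 sees luminaire 0
              [0.0, 0.02, 0.8]])   # person 1 sees luminaire 2
print(ils_dimming(C, occupied=np.array([True, False])))  # [1. 0. 0.]
```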
This research has integrated smart lighting and computer vision for the first time, and it has addressed the factors limiting application in real scenarios, such as the illumination characteristics of real luminaires.
Towards human-centric scene understanding, our new Long Short-Term Memory (LSTM)-based approach was the first to include head pose in trajectory forecasting, and it achieves state-of-the-art results on the most recent challenging benchmarks. Additionally, we have been the first to forecast the future attention of people.
The resulting Invisible Light Switch (ILS) is an end-to-end system, which demonstrates how computer vision may shape the future of smart lighting. The novelty of the work is attested by conference and journal papers at top venues, including TPAMI, CVPR, WACV and ICIP, and by four filed and granted patents.