In computer vision, matching human identities from images and/or video has been an active research topic for more than two decades, and its popularity continues to grow with increasing computing power. State-of-the-art techniques rely on face images or on gait recognition from long video sequences. In many real applications, however, only a few static images of the subject may be available, and facial information may be missing (e.g. in posterior views). These scenarios have received little attention from the research community because they are difficult to handle. In this action, we propose a method for matching identities from a set of 2D images of a person without any facial information. The method consists of two steps: first, the human body is modelled by a 3D articulated model whose pose is estimated from its 2D projections onto the images; then, biometric features are computed by fitting 3D deformable models to the image data, capturing the shape and size of the main parts of the anatomy. The overall method operates within a probabilistic framework, with a learning step that encodes pose and anatomy variations across the set of individuals to be identified.
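The first step above, estimating the pose of a 3D body model from its 2D image projections, can be sketched in miniature. The toy below is purely illustrative and not the project's actual method: it uses a handful of hypothetical 3D anatomical landmarks, a single rotation angle as the "pose", an orthographic camera, and a brute-force search that minimizes 2D reprojection error.

```python
import numpy as np

# Toy 3D body model: a few illustrative anatomical landmarks (head, shoulders,
# hips). Coordinates and landmark choice are assumptions for this sketch.
model = np.array([
    [0.00, 1.8,  0.05],   # head
    [-0.20, 1.5, 0.00],   # left shoulder
    [0.20, 1.5,  0.00],   # right shoulder
    [-0.15, 1.0, -0.05],  # left hip
    [0.15, 1.0,  0.05],   # right hip
])

def project(points, theta):
    """Rotate the model about the vertical axis by theta, then project
    orthographically to 2D (drop the depth coordinate)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, 0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s, 0.0, c]])
    return (points @ R.T)[:, :2]

# Synthetic "observed" 2D keypoints rendered from an unknown pose.
true_theta = 0.7
observed = project(model, true_theta)

# Pose estimation in miniature: search for the angle whose 2D projection
# best matches the observed keypoints (minimum reprojection error).
candidates = np.linspace(-np.pi, np.pi, 2001)
errors = [np.sum((project(model, t) - observed) ** 2) for t in candidates]
theta_hat = candidates[int(np.argmin(errors))]
print(f"estimated pose angle: {theta_hat:.3f} rad (true: {true_theta})")
```

A realistic system would optimize many joint angles of an articulated skeleton under a perspective camera, typically with gradient-based or probabilistic inference rather than grid search, but the objective, minimizing the discrepancy between the model's 2D projections and the image evidence, is the same.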
Fields of science
- natural sciences › computer and information sciences › artificial intelligence › pattern recognition
- engineering and technology › electrical engineering, electronic engineering, information engineering › electronic engineering › sensors › optical sensors
- natural sciences › computer and information sciences › artificial intelligence › computer vision
- natural sciences › computer and information sciences › artificial intelligence › machine learning › deep learning