We live in a world where we are surrounded by ever-increasing numbers of images. More often than not, these images carry very little metadata by which they can be indexed and searched. To avoid information overload, techniques need to be developed that enable these image collections to be searched by their content. Much of the previous work on image retrieval has used global features, such as colour and texture, to describe the content of an image. However, global features are insufficient to describe the content accurately when different parts of the image have different characteristics. This talk will initially discuss how this problem can be circumvented by using salient interest regions to select the most interesting areas of the image and generating local descriptors that describe the image characteristics within each region.
The talk will then demonstrate how salient regions can be used for image retrieval through a number of techniques, most importantly two inspired by the field of textual information retrieval. Building on these robust retrieval techniques, a new paradigm in image retrieval is discussed, whereby retrieval takes place on a mobile device using a query image captured by its built-in camera. This paradigm is demonstrated in the context of an art gallery, where the device can be used to find more information about particular images.
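The analogy with textual information retrieval can be made concrete with a small sketch: if each image is reduced to a bag of quantised local descriptors ("visual words"), then standard tf-idf weighting and cosine ranking from text retrieval apply directly. The visual-word IDs and images below are invented purely for illustration and are not taken from the talk:

```python
from collections import Counter
import math

# Hypothetical "visual word" histograms: each image is represented as a bag of
# quantised local-descriptor IDs, the image-retrieval analogue of terms in a
# text document. All data here is illustrative only.
images = {
    "img_a": [3, 3, 7, 12, 12, 12],
    "img_b": [3, 7, 7, 9],
    "img_c": [1, 9, 9, 12],
}

def tfidf_vector(words, df, n_docs):
    """Weight each visual word by term frequency * inverse document frequency."""
    tf = Counter(words)
    return {w: (c / len(words)) * math.log(n_docs / df[w]) for w, c in tf.items()}

# Document frequency of each visual word across the collection.
df = Counter(w for words in images.values() for w in set(words))
n = len(images)
vectors = {name: tfidf_vector(words, df, n) for name, words in images.items()}

def cosine(u, v):
    dot = sum(u[w] * v.get(w, 0.0) for w in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Rank images against a query image's visual words, exactly as a text engine
# would rank documents against query terms.
query = tfidf_vector([3, 12, 12], df, n)
ranking = sorted(vectors, key=lambda name: cosine(query, vectors[name]), reverse=True)
print(ranking)
```

In the mobile-gallery scenario, the query histogram would come from the photo taken by the device's camera, and the ranked list would identify the artwork being viewed.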
Finally, the talk will discuss some approaches to bridging the semantic gap in image retrieval, exploring ways in which unannotated image collections can be searched by keyword. Two techniques are discussed. The first attempts to annotate the unannotated images automatically, so that the generated annotations can be used for searching. The second does not annotate the images explicitly; instead, using linear algebra, it constructs a semantic space in which images and keywords are positioned such that each image lies close to the keywords that represent it.
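The second approach is in the spirit of latent semantic indexing from text retrieval. The sketch below, using an invented image-keyword co-occurrence matrix and a truncated SVD, illustrates only the general idea of a shared semantic space and is not the speaker's actual method:

```python
import numpy as np

# Toy co-occurrence matrix between images (rows) and keywords (columns).
# Entries count how strongly a keyword is associated with an image; all
# data here is invented purely for illustration.
keywords = ["sky", "sea", "grass", "building"]
names = ["img1", "img2", "img3", "img4"]
M = np.array([
    [4.0, 3.0, 0.0, 0.0],   # img1: mostly sky/sea
    [3.0, 4.0, 0.0, 0.0],   # img2: mostly sea/sky
    [0.0, 0.0, 4.0, 1.0],   # img3: mostly grass
    [0.0, 0.0, 1.0, 4.0],   # img4: mostly building
])

# Truncated SVD: keep the k strongest latent dimensions. Images and keywords
# are then both positioned in the same k-dimensional "semantic space".
U, s, Vt = np.linalg.svd(M, full_matrices=False)
k = 2
image_coords = U[:, :k] * s[:k]        # one row per image
keyword_coords = Vt[:k, :].T * s[:k]   # one row per keyword

def nearest_images(keyword, top=2):
    """Rank images by cosine similarity to a keyword in the semantic space."""
    q = keyword_coords[keywords.index(keyword)]
    sims = image_coords @ q / (
        np.linalg.norm(image_coords, axis=1) * np.linalg.norm(q)
    )
    return [names[i] for i in np.argsort(-sims)[:top]]

print(nearest_images("sea"))
```

Because images and keywords live in the same space, a keyword query can retrieve images that were never annotated with that keyword, provided they co-occur with similar ones in the training data.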
Jonathon Hare is a research assistant within the Intelligence, Agents, Multimedia Group at the University of Southampton. Jonathon undertook a PhD within the IAM group and has now submitted his thesis. Currently, he is working on a project funded by the AHRC in collaboration with the University of Brighton, investigating the semantic gap in image retrieval.
His PhD research, in collaboration with Motorola's UK Research Laboratory, was focused on the application of visual saliency to content-based image retrieval tasks.
Jonathon holds a BEng in Aerospace Engineering (Systems) from the University of Southampton. Prior to joining the IAM group, he worked at the Motorola UK Research Laboratory (formerly the Motorola European Research Laboratory), where he participated in a number of projects ranging from DRM technologies for digital video to quality assessment and optimisation of H.263 and MPEG-4 video transmitted over lossy channels.