Overview

Digital images are now ubiquitous and easy to acquire. While humans easily recognize the objects and other semantic content in images, it has been much more difficult to do so automatically. Images of the same object can vary significantly because of lighting, slight differences in orientation, shadows, and camera parameters. Even when the set of objects is limited and reference images are available, direct comparison of images on a pixel-by-pixel basis is unlikely to yield satisfactory recognition. Higher-level semantics are required.

To obtain higher-level semantics, a set of features must be computed from each image, and these features are then compared to the features of the reference image. Finding the right set of features by trial and error can be time-consuming and difficult. Therefore, in this project, we will build a system to learn an appropriate set of features from a training set of images, and then apply them to a test set of images. The methodology used in this project has been used successfully for face recognition.
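As a rough illustration of learning features from a training set, the following sketch uses Principal Component Analysis on flattened images, in the style of the face-recognition ("eigenfaces") work mentioned above. The image sizes, the number of components, and the random data are illustrative assumptions, not part of the project specification.

```python
import numpy as np

# Hypothetical sketch: learn a PCA feature basis from flattened training
# images. Real images would be loaded from the training set; here we use
# random data just to show the shapes involved.
rng = np.random.default_rng(0)
train = rng.random((30, 64 * 64))  # 30 images, each 64x64, flattened

mean = train.mean(axis=0)          # mean image
centered = train - mean
# SVD of the centered data; rows of vt are the principal components
_, _, vt = np.linalg.svd(centered, full_matrices=False)
k = 10                             # keep the top-k components as features
basis = vt[:k]                     # (k, 4096) learned feature basis

# Project one image into the k-dimensional feature space
features = (train[0] - mean) @ basis.T
print(features.shape)              # (10,)
```

The learned basis replaces hand-picked features: every image, training or test, is described by its k projection coefficients.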

The aim of this project is to develop an interpreter for images of American Sign Language letters. The test and training sets will consist of relatively high-contrast images of a hand signing each of the 26 letters, and the goal of the project will be to successfully classify all of the images in the test set.

The learning objectives of the project are:
  • Learning the basics of object recognition in computer vision
  • Becoming familiar with the concepts of feature spaces and classification
  • Gaining familiarity with the techniques of Principal Component Analysis
  • Gaining experience analyzing experimental results

In this project, you will classify images of sign language characters based on the character portrayed (i.e., all A's will be one class, all E's will be another, etc.). Each class will be represented by a feature vector consisting of values for a number of features. When a new image is presented to the system, its feature vector is computed and compared to the predefined feature vector for each class. The unknown image is given the label that corresponds to the closest predefined feature vector.
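The classification step described above can be sketched as a nearest-centroid rule: each class keeps one predefined feature vector, and a new image takes the label of the closest one. The class labels, vector length, and synthetic data below are illustrative assumptions; real feature vectors would come from the learned feature space.

```python
import numpy as np

# Hypothetical sketch of nearest-feature-vector classification.
rng = np.random.default_rng(1)
classes = ["A", "B", "C"]
# One predefined feature vector per class (here: random placeholders)
class_vectors = {c: rng.random(10) for c in classes}

def classify(features, class_vectors):
    # Return the label whose feature vector is closest in Euclidean distance
    return min(class_vectors,
               key=lambda c: np.linalg.norm(features - class_vectors[c]))

# A query vector very near class "B" should be labelled "B"
query = class_vectors["B"] + 0.01
print(classify(query, class_vectors))
```

In the full project the per-class vectors would typically be the mean of the feature vectors of that class's training images.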
The detailed project description is available in the PDF file RASLUPC.pdf. You will need the free Adobe Acrobat Reader to view this file.