Digikam/GSoC2012/FaceRecognition

=== Ideas list ===
* Use SQLite instead of XML files for storing data.
* A different way of merging multiple faces together. Currently, in the interests of saving space, as new faces are added to a particular person they are merged together using eigen projections rather than kept separate. This is good because it saves space, loading time and memory, but in general it tends to lose precision. A different merging method would be a great addition, or perhaps an altogether different approach.
* A new algorithm for face recognition. Currently Eigenfaces is the working algorithm; however, it is colour- and rotation-variant, which has an adverse effect on the quality of recognition. Fisherfaces is much more tolerant of rotation and colour changes. A Hidden Markov Model based approach is another possible way of doing it. There used to be good helper functions in OpenCV, but they reside in legacy headers and sadly a lot of the documentation has been lost, so reverse engineering from the source will be required. Ultimately an entirely new approach would be equally interesting to consider, e.g. recognition based on neural networks. Ideally the project is not to invent a new method, but rather to implement an already existing one from the literature.
* Sub-categorising unknown faces into similar groups, so that when one face is tagged all others in the same group are also tagged.
* Face recognition in videos: add tags based on people scanned from the videos in the library.
* ... (waiting for the end of the exam session)


=== OpenTLD based Approach ===
Libkface is a face detection and recognition library. The current face recognition implementation is the traditional Eigenface-based minimum-distance approach. OpenTLD, a generic object detection and tracking algorithm that has been modified to recognise faces, has been gaining popularity. OpenTLD uses many types of features and concepts, including Haar wavelets, Local Binary Patterns and intensity-normalised patches, which makes it robust against variations in intensity, orientation and aspect ratio. It is not the best possible algorithm, but it is known to work fairly well compared to other open-source algorithms, and it is now the working algorithm in this project.


'''Using SQLite instead of XMLs for storing data'''

The face recognition process requires storing and retrieving data as and when needed, so SQLite is a good choice for storing it. The data used by the OpenTLD-based face recognition method (discussed in subsequent paragraphs) can be subdivided into basic datatypes (int, float). Dynamically generated data is kept in QList containers and serialised to QByteArray and QString for storage in the SQLite database. A table containing an ID (unique for each face model), the person's name and the serialised model data is maintained through QSqlDatabase.
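As an illustration, here is a minimal sketch of how such a table could be created and filled using Qt's SQL module. It assumes a model reducible to a QList<float>; the database file, table and column names are illustrative only and not the actual libkface schema.

<syntaxhighlight lang="cpp">
#include <QSqlDatabase>
#include <QSqlQuery>
#include <QByteArray>
#include <QDataStream>
#include <QIODevice>
#include <QList>
#include <QString>

// Serialise a dynamically generated list of values into a QByteArray.
QByteArray serialiseModel(const QList<float>& modelData)
{
    QByteArray bytes;
    QDataStream stream(&bytes, QIODevice::WriteOnly);
    stream << modelData;
    return bytes;
}

// Store one face model row: unique ID, person name, serialised model data.
bool storeFaceModel(const QString& name, const QList<float>& modelData)
{
    QSqlDatabase db = QSqlDatabase::addDatabase("QSQLITE");
    db.setDatabaseName("facemodels.db");          // illustrative file name
    if (!db.open())
        return false;

    QSqlQuery query(db);
    query.exec("CREATE TABLE IF NOT EXISTS facemodels ("
               "id INTEGER PRIMARY KEY AUTOINCREMENT, "
               "name TEXT, "
               "modeldata BLOB)");

    query.prepare("INSERT INTO facemodels (name, modeldata) VALUES (?, ?)");
    query.addBindValue(name);
    query.addBindValue(serialiseModel(modelData));
    return query.exec();
}
</syntaxhighlight>

Retrieving a model is simply the reverse: read the BLOB back into a QByteArray and deserialise it through QDataStream into the QList.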
 
=== Where should I start? ===
You might be asking yourself: what should I do next? Where should I begin? The ideal student is one who knows what they are trying to do and doesn't need to have their hand held at every step of the way. Asking questions is great and we will try to be as helpful as we can. There are some simple things you can do:
* Open Google Scholar and search for "face recognition"; this will give you some general papers and more specific implementations of face recognition. You should have access to the papers; if not, let me know and I will try to get the paper you want.
* "An embedded HMM-based approach for face detection and recognition" is a good paper to read as an example of Hidden Markov Model based recognition. If you think you can implement it, that would be a good idea. OpenCV has some functions to help with that. There are also elastic graph based approaches.
* Choose the method that you think is most appropriate; having one or two reasons for choosing it would be good.
* Get the HEAD copies of digiKam (git) and libface (svn); libface is much smaller and probably the easiest thing to start playing around with. Try things, see what works and what doesn't. Don't be afraid to break some code; you can always check out HEAD again.
* Ask if you are really stuck and have no idea where to go.
 
 
----
----
 
=== '''Project Update''' ===
 
----
 
'''Eigenface Improvement in Libface:'''  


It was planned that the eigenface improvement would be done using different distance measures instead of the Euclidean distance. The problem is that libface currently uses an OpenCV function for the eigenface calculation, so the distance measure also lives in the OpenCV code. Improving it this way would require either changing the OpenCV code, which is not feasible, or writing our own eigenface calculation code, which would be a distraction from the main purpose of the project.


I will try some other improvements later.
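For reference, here is a minimal sketch of the kind of alternative distance measure the plan referred to, computed over eigenspace projection vectors; it is illustrative only and is not libface or OpenCV code.

<syntaxhighlight lang="cpp">
#include <cmath>
#include <cstddef>
#include <numeric>
#include <vector>

// Euclidean distance between two projection vectors (the measure currently
// used for nearest-neighbour matching in the eigenface path).
double euclideanDistance(const std::vector<double>& a, const std::vector<double>& b)
{
    double sum = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i)
        sum += (a[i] - b[i]) * (a[i] - b[i]);
    return std::sqrt(sum);
}

// Cosine distance: compares the direction of the projection vectors rather
// than their magnitude, one possible replacement for the Euclidean measure.
double cosineDistance(const std::vector<double>& a, const std::vector<double>& b)
{
    double dot = std::inner_product(a.begin(), a.end(), b.begin(), 0.0);
    double na  = std::sqrt(std::inner_product(a.begin(), a.end(), a.begin(), 0.0));
    double nb  = std::sqrt(std::inner_product(b.begin(), b.end(), b.begin(), 0.0));
    return 1.0 - dot / (na * nb);
}
</syntaxhighlight>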
 
 
'''Fisherface Implementation Completion:'''
 
 
 
'''PGM image type support for testing:'''

The support has already been added to the code and committed to svn HEAD. Currently Qt's PGM support is being used. We could also have used OpenCV's PGM support in our code, but that seems unnecessary at this point. If at a later stage we think of providing this support in digiKam, we can use OpenCV's support.
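As a rough sketch of the two options mentioned above, loading a PGM test image through Qt or through OpenCV could look like this (the file path is just an example):

<syntaxhighlight lang="cpp">
#include <QImage>
#include <QString>
#include <opencv2/highgui/highgui.hpp>

// Try both readers on the same PGM file and report whether they succeeded.
bool loadTestImage(const QString& path)
{
    // Option 1: Qt's built-in PGM reader (the route currently used).
    QImage qtImage(path);
    bool qtOk = !qtImage.isNull();

    // Option 2: OpenCV's image reader, which also understands PGM.
    cv::Mat cvImage = cv::imread(path.toStdString(), 0); // 0 = load as grayscale
    bool cvOk = !cvImage.empty();

    return qtOk && cvOk;
}
</syntaxhighlight>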


'''OpenTLD Implementation'''


----
'''Merits of OpenTLD over other algorithms'''

* Eigenface recognition is suitable for, and works on, large biometric images (faces), but it fails under intensity and size variations.
* HMM is supposed to be the best algorithm, but its implementation is difficult and computationally expensive if proper optimisations are not applied, and only a partial implementation has been done.


== '''Design of Algorithm and the System:''' ==

A draft design of the system has already been made in the proposal. I am currently working on some amendments.

OpenTLD, being a realtime object tracking algorithm, is expected to be fast, and its recognition accuracy is good. Moreover, it does not need all the images of a particular person to recognise them; it only needs the stored face model, which is correlated with the current image to obtain the recognition result, so the stored model is necessary and sufficient. If a new face of a person who already has an entry in the database is fed to the system for recognition, it computes a fused tracker and detector hypothesis to determine whether the face matches any entry in the database; if not, the user has to tag it with a name, which is then stored in the database. More about OpenTLD can be found at: http://gnebehay.github.com/OpenTLD/
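A minimal sketch of this flow is given below; the type and function names (FaceModel, fuseHypothesis, recogniseFace) are illustrative placeholders, not the actual libkface/OpenTLD API.

<syntaxhighlight lang="cpp">
#include <QList>
#include <QString>
#include <opencv2/core/core.hpp>

// Illustrative placeholder for one stored face model (one database row).
struct FaceModel
{
    int     id;     // unique per face model
    QString name;   // person name
    // ... the serialised detector/tracker state would live here ...
};

// Placeholder for the fused tracker/detector confidence; the real code would
// correlate the stored model with the current image and combine both hypotheses.
double fuseHypothesis(const FaceModel& /*model*/, const cv::Mat& /*faceImage*/)
{
    return 0.0; // stub
}

// Return the matching name, or an empty string when the face is unknown and
// the user must tag it (the newly built model is then stored in the database).
QString recogniseFace(const cv::Mat& faceImage,
                      const QList<FaceModel>& storedModels,
                      double acceptanceThreshold)
{
    QString bestName;
    double  bestConfidence = 0.0;

    for (int i = 0; i < storedModels.size(); ++i)
    {
        double confidence = fuseHypothesis(storedModels.at(i), faceImage);
        if (confidence > bestConfidence)
        {
            bestConfidence = confidence;
            bestName       = storedModels.at(i).name;
        }
    }

    return (bestConfidence >= acceptanceThreshold) ? bestName : QString();
}
</syntaxhighlight>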


Currently this implementation is under development and testing. The latest code is available in the libkface opentld branch at https://projects.kde.org/projects/extragear/libs/libkface/repository/show?rev=opentld and the working feature in the digiKam libkface branch at https://projects.kde.org/projects/extragear/graphics/digikam/repository/show?rev=libkface . For testing in a limited environment, one can try the code available at https://github.com/maheshmhegade/NewFaceRecognition


----
A demo video has been uploaded at https://www.youtube.com/watch?v=iaFGy0n0R-g .


== '''Implementation of HMM based Face Recognition System:''' ==
Any suggestions for improving the existing algorithm, or proposals for a new algorithm, are appreciated.
