Digikam/GSoC2019/AIFaceRecognition

From KDE Community Wiki
=Introduction=
Hello reader, <br>
This article describes the current state of the face recognition algorithms of digiKam and the desired outcome of the corresponding GSoC project. <br>
It is recommended to read [https://community.kde.org/Digikam/GSoC2019/FacesManagementWorkflowImprovements Faces Management workflow improvements] first, as it describes the entire face management workflow. It thus helps to understand the scope of these algorithms and where clarification is needed about their structure and their interfaces with other code modules.


Currently, four different methods, each with its corresponding algorithm, are implemented and more or less functional. The algorithm to use can be chosen in the Face Scan dialog. <br>
The goal is to automatically recognize faces in images that are not yet tagged, using ''face tags'' previously registered in the face recognition database. The algorithms are complex; they are explained in more detail below.


=currently implemented face recognition algorithms=
<ol>
  <li>Deep Neural Network (DNN) DLib <br>
        This is an experimental implementation of a neural network to perform face recognition. <br>
        This DNN is based on DLib code, a low-level library used by the OpenFace project. The code works, but it is slow and complex to maintain; it is a proof of concept rather than something fit for productive use. <br>
        Moreover, the documentation in the source code is non-existent. The code of DLib is mostly the machine learning core implementation of http://dlib.net/ml.html and https://sourceforge.net/p/dclib/wiki/Known_users.
  </li>
  <li>OpenCV - LBPH <br>
        This is the most complete implementation of a face recognition algorithm, and the oldest implementation of such an algorithm in digiKam. It is not perfect and requires at least six faces already tagged manually by the user before it can identify the same faces in non-tagged images. <br>
        The algorithm records a histogram of the face in the database, which is later used to perform comparisons against new/non-tagged faces. It uses the OpenCV backend; see https://towardsdatascience.com/face-recognition-how-lbph-works-90ec258c3d6b
  </li>
  <li>OpenCV - [https://en.wikipedia.org/wiki/Eigenface Eigen Faces] <br>
        An alternative algorithm that uses the OpenCV backend. It was introduced to provide a different source of results for face recognition, against which the DNN approaches can be evaluated.
  </li>
  <li>OpenCV - [http://www.scholarpedia.org/article/Fisherfaces Fisher Faces] <br>
        Another algorithm that uses the OpenCV backend, introduced for the same purposes as Eigen Faces. <br>
        Reportedly, this one is not finalized; not all methods are said to be implemented.
  </li>
</ol>
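As a rough illustration of the LBPH idea listed above (a pure-Python sketch, not digiKam's actual code; function names are mine): each pixel is compared with its eight neighbours to form a binary code, and the codes are accumulated into the histogram that serves as the stored face signature.

```python
def lbp_histogram(img):
    """Compute a basic Local Binary Pattern histogram for a grayscale
    image given as a list of rows of intensities. Each interior pixel
    is compared with its 8 neighbours, clockwise from the top-left,
    yielding one byte per pixel."""
    h, w = len(img), len(img[0])
    hist = [0] * 256
    # Offsets of the 8 neighbours, clockwise from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = img[y][x]
            code = 0
            for dy, dx in offsets:
                code = (code << 1) | (1 if img[y + dy][x + dx] >= center else 0)
            hist[code] += 1
    return hist

def chi_square(h1, h2):
    """Chi-square distance, a common way to compare LBP histograms."""
    return sum((a - b) ** 2 / (a + b) for a, b in zip(h1, h2) if a + b > 0)
```

In the full algorithm the face is divided into a grid of cells with one histogram per cell, and two faces are compared by summing the per-cell distances; those histograms are what ends up in the database.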
<br>
There is a paper explaining the difference between Fisher and Eigen Faces, see http://disp.ee.ntu.edu.tw/~pujols/Eigenfaces%20and%20Fisherfaces.pdf
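The contrast the paper draws can be sketched for the Eigen Faces half: PCA finds the directions of largest variance among the training faces, and each face is stored as its coefficients along those directions; Fisher Faces additionally applies LDA to maximize between-identity over within-identity scatter. A minimal sketch, assuming faces arrive as flattened pixel rows — illustrative only, not the OpenCV implementation:

```python
import numpy as np

def train_eigenfaces(faces, num_components=2):
    """Eigenfaces (PCA): faces is an (n_samples, n_pixels) array.
    Returns the mean face and the top principal components."""
    faces = np.asarray(faces, dtype=float)
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the centered data yields the principal axes directly.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:num_components]

def project(face, mean, components):
    """Project a face into the low-dimensional eigenface space."""
    return components @ (np.asarray(face, dtype=float) - mean)
```

Recognition then compares the projected coefficient vectors (e.g. by Euclidean distance) rather than raw pixels, which is what makes the stored representation compact.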


==why so many different approaches?==


The reason four different algorithms were implemented is simply to be able to make a comprehensive assessment of the currently available technologies applicable in digiKam and eventually choose the best one. <br>
The student who worked on the DNN project a few years ago concluded that DNN was the best method to recognize faces with as little error as possible. Unfortunately, the training and recognition process took too long and slowed down the application.
Despite that setback, it is agreed that DNN is the best way to go, but not with the current implementation based on DLib.


=previous work=
#  DNN
All the code was introduced by a student, Yingjie Liu <[email protected]>, in a previous GSoC project.


=code=
All the low-level steps to train and recognize faces are done in this class. For the middle-level code, multi-threaded and chained, started by the Face Scan dialog, everything is here: https://cgit.kde.org/digikam.git/tree/core/utilities/facemanagement?h=development/dplugins


{{construction}}
=database=
Which kind of information is stored in the database? <br>
This depends on the recognition algorithm used: histograms, vectors, or binary data, each produced by the respective algorithm's computation, and of course mutually incompatible. Typically, when you change the recognition algorithm in the Face Scan dialog, the database must be cleared as well.
In fact, this kind of database mechanism must be dropped once the DNN algorithm is finalized and retained as the only one to do the job. As said previously, four algorithms are implemented in order to choose the best one. In the end, only one must remain in the digiKam face engine, and all the code must be simplified.
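A rough sketch of what such per-algorithm storage implies (the table layout and names here are invented for illustration, not digiKam's actual schema): each stored signature is tagged with the algorithm that produced it, and switching algorithms invalidates everything produced by the others.

```python
import sqlite3
import struct

# Hypothetical schema: each row carries the id of the algorithm that
# produced the blob, because the formats (LBPH histograms, eigenface
# coefficients, DNN embeddings...) are mutually incompatible.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE face_matrices (
    identity  TEXT,
    algorithm TEXT,
    data      BLOB)""")

def store(identity, algorithm, values):
    """Serialize a signature as a flat array of floats and store it."""
    blob = struct.pack(f"{len(values)}f", *values)
    conn.execute("INSERT INTO face_matrices VALUES (?, ?, ?)",
                 (identity, algorithm, blob))

def switch_algorithm(new_algorithm):
    """Changing the recognizer invalidates everything stored by the
    old one -- hence the 'database must be cleared' behaviour."""
    conn.execute("DELETE FROM face_matrices WHERE algorithm != ?",
                 (new_algorithm,))
```

Once a single DNN-based recognizer remains, the `algorithm` column and the clearing logic in this sketch become unnecessary, which is the simplification argued for above.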
 
=Expected results of this GSoC 2019 project=
 
=requirements on the student(s)=
Typically, the student must review all Bugzilla entries, which will be presented in a separate subsection created by the maintainers. If this page does not provide enough guidance, the student(s) must identify the top-level entries to engage with, with help from the listed mentors.
The student is expected to work autonomously on the technical side, so answers to challenges will not necessarily come through the support of the maintainers. This does not mean that the maintainers cannot be reached by the student: guidance will be given at any time, but it shall be limited to occasional situations so that the maintainers can follow up on their own work. <br>
Regardless of the above-mentioned channel of communication, the maintainers review and validate the code in their development branch before merging it to the master branch.


Besides coding, a technical proposal is required, listing:
* the problem statement,
* the code to patch,
* the coding tasks, the tests, the plan for the summer, and of course the documentation to write (mostly in the code), etc.
During the summer, the student must analyze the code in detail, identify the problems, and start to patch the implementations. During this stage, he will ask questions about coding and about functionality. We must respond to both, and code-related questions will be answered first.


With 3.x versions, OpenCV has introduced a DNN API. <br>
It shall be used instead of the other approaches, as was already done for face detection.
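With a DNN recognizer, the data stored per face would be a fixed-length embedding vector, and matching reduces to a distance test against enrolled vectors. A minimal sketch of that matching step (pure Python; the threshold value is illustrative only, real systems tune it on validation data):

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def best_match(embedding, enrolled, threshold=0.4):
    """Return the enrolled identity closest to the query embedding,
    or None if nothing is close enough (an unknown face)."""
    name, dist = min(((n, cosine_distance(embedding, e))
                      for n, e in enrolled.items()),
                     key=lambda t: t[1])
    return name if dist <= threshold else None
```

The embeddings themselves would come from a forward pass through the network; only this comparison step touches the face database.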





Revision as of 21:36, 26 February 2019
