Digikam/GSoC2019/AIFaceRecognition

=Introduction=
Hello reader, <br>
This article describes the current state of the face detection and recognition algorithms of digiKam and the desired outcome of the corresponding GSoC project. <br>
It is recommended to read [https://community.kde.org/Digikam/GSoC2019/FacesManagementWorkflowImprovements Faces Management workflow improvements] first, as it describes the entire face management workflow. It helps to understand the scope of these algorithms and clarifies their structure and their interfaces with other parties (code modules).


Currently, there are four different methods, each using a corresponding algorithm, which are more or less operational. The algorithm to use can be chosen in the Face Scan dialogue. <br>
The goal is to automatically recognize faces in images that are not yet tagged, using ''face tags'' previously registered in the face recognition database. The algorithms are complex but are explained in more detail below.


=currently implemented face recognition algorithms=
<ol>
   <li>Deep Neural Network (DNN) DLib <br>
        This is an experimental implementation of a neural network to perform face recognition. <br>
        This DNN is based on the DLib code, a low-level library used by the OpenFace project. The code works, but it is slow and complex to maintain. It is more a proof of concept than something intended for productive use. <br>
Moreover, the documentation in the source code is non-existent.
       The Dlib code is essentially the machine-learning core of the [http://dlib.net/ml.html Dlib C++ Library] and is referenced by the projects in [https://sourceforge.net/p/dclib/wiki/Known_users the Dlib users list on SourceForge].
<br> <br>
   </li>
   <li> [https://docs.opencv.org/2.4/modules/contrib/doc/facerec/facerec_tutorial.html#local-binary-patterns-histograms OpenCV] [https://en.wikipedia.org/wiki/Local_binary_patterns Local Binary Patterns Histograms] ([http://www.scholarpedia.org/article/Local_Binary_Patterns LBPH])<br>
This is the most complete implementation of a face recognition algorithm in digiKam; moreover, it is the oldest. It is not perfect and requires at least six faces already tagged manually by the user to identify the same faces in non-tagged images. <br>
This algorithm records a histogram of the face in the database, which is used later to perform the comparisons against new/non-tagged faces.
It uses the OpenCV backend; the algorithm is explained in [https://towardsdatascience.com/face-recognition-how-lbph-works-90ec258c3d6b Face Recognition: How LBPH Works (Towards Data Science)]. A minimal usage sketch of the OpenCV backends is given below this list.
<br> <br>
</li>
<br> <br>
<li>[https://docs.opencv.org/2.4/modules/contrib/doc/facerec/facerec_tutorial.html#eigenfaces OpenCV] - [https://en.wikipedia.org/wiki/Eigenface Eigen Faces] <br>
An alternative algorithm that uses the OpenCV backend. It was introduced to have a different source of recognition results, making it possible to cross-check the DNN approaches.
<br> <br>
</li>
   <li> [https://docs.opencv.org/2.4/modules/contrib/doc/facerec/facerec_tutorial.html#fisherfaces OpenCV] - [http://www.scholarpedia.org/article/Fisherfaces Fisher Face] <br>
Another algorithm that uses the OpenCV backend. It was introduced for the same purposes as Eigen Faces. <br>
According to rumours, this one is not finalized; it is said that not all methods are implemented.
</li>
</ol>
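The three OpenCV-based methods above share the same backend interface. The following is a minimal, illustrative sketch (not digiKam code) of how that backend is typically driven, assuming the opencv_contrib ''face'' module is available; the random images only stand in for real tagged face crops.
<syntaxhighlight lang="cpp">
// Minimal sketch (not digiKam code): driving the OpenCV backends behind
// LBPH / Eigen Faces / Fisher Faces via the opencv_contrib "face" module.
#include <opencv2/core.hpp>
#include <opencv2/face.hpp>      // from opencv_contrib
#include <iostream>
#include <vector>

int main()
{
    // Synthetic stand-in data so the sketch runs: random 100x100 grayscale
    // "faces" for two persons (labels 0 and 1). Real code would use the face
    // crops the user has already tagged manually.
    std::vector<cv::Mat> faces;
    std::vector<int>     labels;

    for (int person = 0; person < 2; ++person)
    {
        for (int i = 0; i < 6; ++i)           // LBPH needs several samples per person
        {
            cv::Mat img(100, 100, CV_8UC1);
            cv::randu(img, 0, 255);
            faces.push_back(img);
            labels.push_back(person);
        }
    }

    // Pick one of the three backends; training and prediction calls are identical.
    cv::Ptr<cv::face::FaceRecognizer> model = cv::face::LBPHFaceRecognizer::create();
    // cv::Ptr<cv::face::FaceRecognizer> model = cv::face::EigenFaceRecognizer::create();
    // cv::Ptr<cv::face::FaceRecognizer> model = cv::face::FisherFaceRecognizer::create();

    model->train(faces, labels);              // builds the histograms / eigenvectors

    cv::Mat unknown(100, 100, CV_8UC1);       // a new, untagged face crop
    cv::randu(unknown, 0, 255);

    int    predictedLabel = -1;
    double distance       = 0.0;              // smaller distance = better match
    model->predict(unknown, predictedLabel, distance);

    std::cout << "best match: person " << predictedLabel
              << " (distance " << distance << ")" << std::endl;
    return 0;
}
</syntaxhighlight>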
<br>
There is a paper explaining the difference between Fisher and Eigen Faces: see [http://disp.ee.ntu.edu.tw/~pujols/Eigenfaces%20and%20Fisherfaces.pdf Eigenfaces and Fisherfaces (presenter: Harry Chao, Multimedia Analysis and Indexing course, 2010)].
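In short (a summary of the standard theory, see the paper for the full derivation): Eigen Faces is based on Principal Component Analysis, which picks projection directions <math>w</math> that maximize the total scatter <math>w^T S_T w</math> of all face images, regardless of which person is shown. Fisher Face is based on Fisher's Linear Discriminant Analysis, which instead maximizes the ratio of between-class to within-class scatter,
<math>J(w) = \frac{w^T S_B w}{w^T S_W w},</math>
so that images of the same person stay close together while images of different persons are pushed apart.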


==why so many different approaches?==


: The reason why four different algorithms were implemented is simply to be able to make a comprehensive assessment of the currently available technologies applicable to digiKam and eventually choose the best one.  <br>
: The student who worked on the DNN project a few years ago concluded that DNN was the best method to recognize faces with as low an error rate as possible. Unfortunately, the training and recognition process took too long and slowed down the application.
: Despite that setback, it is agreed that DNN is the best way to go, but the current implementation based on DLib shall not be used.


=previous work=
#DNN
#Eigen Faces
#Fisher Face
: All above-mentioned algorithms were introduced by the student [mailto:[email protected] Yingjie Liu] during the [https://community.kde.org/GSoC/2017/Ideas GSoC 2017]. <br>
: More information is given in Liu's [https://community.kde.org/GSoC/2017/StatusReports/YingjieLiu GSoC 2017 status reports] and his papers:
:# [https://docs.google.com/document/d/123p766jocGVT9aX2O9OL7FivXfYXd73ieMAET-BJN7M/edit?usp=sharing Face Management improvements], covering Eigen Faces and Fisher Face
:# [https://docs.google.com/document/d/1A7ocCm90RNRUlbde_ywWYvuukY-VEFyqnuMMbPod8Xc/edit?usp=sharing Work Report]
:# [https://docs.google.com/document/d/1OE6w6D8Zr26VV7AzTRpZtgzyz0T9tjWBxOuHYXwDdS4/edit?usp=sharing Added the possibility to manually sort the digiKam icon view], although that was done in the [https://community.kde.org/GSoC/2018/Ideas#digiKam GSoC 2018]
<ol start="4">
<li>LBPH <br>
tba</li>
</ol>


=code=
All the low-level steps that initiate the entire workflow, that is, detecting and recognizing faces and training the algorithms, are implemented in the class in [https://cgit.kde.org/digikam.git/tree/core/libs/facesengine/recognitiondatabase.cpp?h=development/dplugins root/core/libs/facesengine/recognitiondatabase.cpp].

All the middle-level code, covering the subsequent actions, is multi-threaded and chained, started by the Face Scan dialogue. It is gathered [https://cgit.kde.org/digikam.git/tree/core/utilities/facemanagement?h=development/dplugins in the directory root/core/utilities/facemanagement] for better visibility. In the past, this code was mainly written by [mailto:[email protected] Marcel Wiesweg].
 
=database operation=
Which kind of info is stored in the database? <br>
This depends on the recognition algorithm used: histograms, vectors or binary data, each required for the respective algorithm's computation, and of course not compatible with each other. Thus it is necessary to clear the database when you change the recognition algorithm in the Face Scan dialogue. In fact, this kind of database mechanism must be dropped once the OpenCV DNN algorithm is finalized and remains the only one to do the job.
 
During the scan process, the following will be done (as described in [https://cgit.kde.org/digikam.git/tree/core/utilities/facemanagement/README.FACE?h=development/dplugins root/core/utilities/facemanagement/README.FACE]):
<ol>
<li> DETECTION - MARK IMAGE AS SCANNED, "IMAGE SCANNED"<br>
Assign a tag to images, indicating whether they have been scanned or not. <br>
In this case, the scanned images are tagged with the "/Scanned/Scanned for Faces" tag. <br>
This is the simplest approach, as it avoids coding a new database table. <br>
Other jobs which need to "mark" images like this can create their own "/Scanned/<Name of job>" tag.
<br><br>
<li> DETECTION - MARK THAT A FACE IS FOUND IN THE IMAGE, ''"/PEOPLE/UNKNOWN"''-TAG <br>
Initially, when any face scan is run, the ''People'' tag is added to that image, plus the subtag for ''Unknown People'', resulting in the database entry "/People/Unknown".
<br><br>
<li> DETECTION - ADD FACE LOCATION TO DATABASE <br>
Subsequently, the position of the detected face is added as a property (with the key "faceRegion") to the corresponding core database entry of the image.
The value of the property is the "region-rectangle", complying with the [https://developer.mozilla.org/en-US/docs/Web/SVG/Tutorial/Basic_Shapes rectangle shape formalism] of [https://developer.mozilla.org/en-US/docs/Web/SVG/Tutorial/Basic_Shapes Scalable Vector Graphics (SVG)]; see the sketch after this list.
<br><br>
<li> RECOGNITION - LINK FACE TO RECOGNITION DATABASE ENTRY <br>
Each image tagged with "/People/Unknown" is scanned, but only partially, as only the regions defined by a "region-rectangle" without a "face tag" are scanned. <br>
If a face is recognized in a region, an ID is written to the "region-rectangle" property, corresponding to the ID of the face in the recognition database (e.g. the corresponding histogram when LBPH is selected).
<br><br>
<li> RECOGNITION - MARK THAT A FACE IS RECOGNIZED, ''"/PEOPLE/UNCONFIRMED"''-TAG <br>
As the algorithm does not assign the names fully automatically, the "region-rectangle" is tagged with "/People/Unconfirmed". <br>
Moreover, the "region-rectangle" is unlinked from "/People/Unknown".
<br><br>
<li> RECOGNITION - CONFIRM FACES, ''"/PEOPLE/<PERSON NAME>"''-TAG <br>
When the face is later identified by the user, the new tag "/People/<Person Name>" is assigned to the "region-rectangle".
Moreover, the "region-rectangle" is unlinked from "/People/Unknown". <br>
In addition, the <Person Name> is added as a keyword to the metadata of the image, complying with the schema "/People/<Person Name>", in order to make it findable/filterable by means of metadata tags.
</ol>
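To make step 3 more concrete, the following is an illustrative sketch (not digiKam's actual region-handling code; the attribute layout is an assumption for illustration) of how a face rectangle could be serialized to and parsed back from an SVG-style "faceRegion" value using Qt:
<syntaxhighlight lang="cpp">
// Illustrative sketch only (not digiKam code): storing a detected face
// position as an SVG-style rectangle string, as described in step 3 above.
#include <QRect>
#include <QString>
#include <QXmlStreamReader>
#include <QDebug>

// Serialize a face rectangle to an SVG-like value for the "faceRegion" property.
static QString regionToSvgRect(const QRect& r)
{
    return QString::fromLatin1("<rect x=\"%1\" y=\"%2\" width=\"%3\" height=\"%4\"/>")
           .arg(r.x()).arg(r.y()).arg(r.width()).arg(r.height());
}

// Parse the stored value back into a QRect when the region is needed again.
static QRect svgRectToRegion(const QString& value)
{
    QXmlStreamReader xml(value);
    if (xml.readNextStartElement() && xml.name() == QLatin1String("rect"))
    {
        const QXmlStreamAttributes a = xml.attributes();
        return QRect(a.value(QLatin1String("x")).toInt(),
                     a.value(QLatin1String("y")).toInt(),
                     a.value(QLatin1String("width")).toInt(),
                     a.value(QLatin1String("height")).toInt());
    }
    return QRect();
}

int main()
{
    const QRect face(120, 80, 256, 256);              // hypothetical detection result
    const QString property = regionToSvgRect(face);   // value stored under the key "faceRegion"
    qDebug() << property << svgRectToRegion(property);
    return 0;
}
</syntaxhighlight>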
 


Since the metadata shall not be flooded by this process, anything in "/Scanned/..." is not shown in digiKam's GUI and is ignored in any metadata-related process. <br>
Furthermore, only confirmed faces are taken into account in metadata-related processes.


=Expected results of this GSoC 2019 project=
DigiKam core already depends on the OpenCV library to perform complex image processing. Moreover, the OpenCV >= 3.3 releases provide a new [https://docs.opencv.org/3.4.3/d2/d58/tutorial_table_of_content_dnn.html OpenCV DNN (Deep Neural Network) module].

The goal now is to port the current digiKam core face recognition DNN extension to the new OpenCV API and to write all unit tests to validate the algorithm's usability, efficiency, and performance while learning and recognizing faces automatically.
The outcome shall be used instead of the other face recognition, and possibly face detection, approaches mentioned further above.
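As an illustration of the direction (not the project's implementation), a recognition pass with the OpenCV DNN module typically maps each face crop to a fixed-length embedding with a pre-trained network and compares embeddings by distance; the model file and image paths below are placeholders.
<syntaxhighlight lang="cpp">
// Minimal sketch (not the project's implementation): face recognition with the
// OpenCV >= 3.3 DNN module. It assumes a pre-trained OpenFace-style Torch model
// ("nn4.small2.v1.t7") that maps a 96x96 face crop to a 128-D embedding.
#include <opencv2/core.hpp>
#include <opencv2/dnn.hpp>
#include <opencv2/imgcodecs.hpp>
#include <iostream>

// Compute an embedding vector for an already-cropped face image.
static cv::Mat faceEmbedding(cv::dnn::Net& net, const cv::Mat& faceCrop)
{
    // Scale pixels to [0,1], resize to the 96x96 network input, swap BGR -> RGB.
    const cv::Mat blob = cv::dnn::blobFromImage(faceCrop, 1.0 / 255.0,
                                                cv::Size(96, 96),
                                                cv::Scalar(), true, false);
    net.setInput(blob);
    return net.forward().clone();                  // 1 x 128 row vector
}

int main()
{
    cv::dnn::Net net = cv::dnn::readNetFromTorch("nn4.small2.v1.t7");

    const cv::Mat knownFace   = cv::imread("tagged_face.png");    // face already tagged by the user
    const cv::Mat unknownFace = cv::imread("unknown_face.png");   // face found in a new image

    const cv::Mat a = faceEmbedding(net, knownFace);
    const cv::Mat b = faceEmbedding(net, unknownFace);

    // L2 distance between the embeddings: the smaller, the more likely both
    // crops show the same person. A suitable threshold has to be tuned on
    // test data; that tuning is part of the unit tests mentioned above.
    const double dist = cv::norm(a, b, cv::NORM_L2);
    std::cout << "embedding distance: " << dist << std::endl;
    return 0;
}
</syntaxhighlight>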
 
'''Update:''' As OpenCV provides an integrated workflow of face detection and recognition, the scope is extended to face detection as well.
There is an excellent GSoC proposal made by Thanh-Trung Dinh; it will be published as soon as the proposal submission period closes.


=Project tasks=
All relevant bug reports can be found in:
<ul>
<li>  [https://bugs.kde.org/buglist.cgi?bug_status=__open__&component=Faces-Recognition&list_id=1583144&product=digikam digikam Bug List - Component: Faces-Recognition Status: REPORTED, CONFIRMED, ASSIGNED, REOPENED]  <br>
but the workflow entries shall also be presented to the student:
<li> [https://bugs.kde.org/buglist.cgi?bug_status=UNCONFIRMED&bug_status=CONFIRMED&bug_status=ASSIGNED&bug_status=REOPENED&component=Faces-Workflow&list_id=1595307&product=digikam  digikam Bug List - Component: Faces-Workflow Status: REPORTED, CONFIRMED, ASSIGNED, REOPENED]
</ul>


=requirements on the student(s)=
This is a break-down of the description of how to [https://community.kde.org/GSoC participate in the Summer of Code program with KDE]. <br>
Typically, the student must review all related Bugzilla entries given in the corresponding Bugzilla section of the project. If this project page or Bugzilla does not provide enough guidance, the student(s) must identify the top-level entries to engage, with help from the listed mentors.
The student is expected to work autonomously on the technical side, so answers to challenges will not necessarily come from the maintainers. This does not mean that the maintainers cannot be reached by the student. Guidance will be given at any time in any case, but it shall be limited to occasional situations to allow the maintainers to follow up on their own work. <br>
Regardless of the above-mentioned channel of communication, the maintainers review and validate the code in their development branch before merging it to the master branch.


Besides coding, it is required to submit a technical proposal, which is to list:
* the problem statement,
* the code outline, to be merged into the master branch,
* the tests,
* the overall project plan for this summer,
* documentation to write (mostly in code), etc.
