This article describes the current state of the face recognition algorithms in digiKam and the desired outcome of the corresponding GSoC project.
It is recommended to read Faces Management workflow improvements first, as that document describes the entire face management workflow. It helps to understand the scope of these algorithms, where their structure needs clarification, and how they interface with other parties (code modules).
Currently, there are four different methods, each based on a corresponding algorithm, which are more or less functional. The goal is to automatically recognize faces in untagged images, using face tags previously registered in the face recognition database. The algorithms are complex and are explained in more detail below.
- Deep Neural Network (DLib)
digiKam already has an experimental Deep Neural Network (DNN) implementation to perform face recognition. This DNN is based on DLib code, a low-level library used by the OpenFace project. The code works, but it is slow and complex to maintain. It is more or less a proof of concept, and the documentation in the source code is nonexistent.
The code from DLib is mostly the machine learning core implementation: http://dlib.net/ml.html https://sourceforge.net/p/dclib/wiki/Known_users/ This DNN code was introduced by a student during a previous GSoC project: Yingjie Liu <[email protected]>.
- LBPH (Local Binary Patterns Histograms)
This is the most complete implementation and the oldest one in digiKam. It is not perfect and requires at least six faces already tagged manually by the end user in order to identify the same person. This algorithm records a histogram of the face in the database, which is used later to perform the comparisons. It uses the OpenCV backend. https://towardsdatascience.com/face-recognition-how-lbph-works-90ec258c3d6b
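To make the LBPH idea concrete, here is a minimal, self-contained sketch (not digiKam's actual code) of how a Local Binary Pattern histogram is built from a tiny grayscale image, and how two such histograms can be compared. The image representation and the chi-square comparison are illustrative assumptions.

```python
# Illustrative sketch of the LBPH descriptor (NOT digiKam's implementation).
# Assumes a grayscale image given as a list of lists of pixel values.

def lbp_histogram(img):
    """Compute an 8-bit Local Binary Pattern histogram over interior pixels."""
    h, w = len(img), len(img[0])
    hist = [0] * 256
    # Clockwise offsets of the 8 neighbors around the center pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = img[y][x]
            code = 0
            # Each neighbor >= center contributes one bit to the pattern.
            for bit, (dy, dx) in enumerate(offsets):
                if img[y + dy][x + dx] >= center:
                    code |= 1 << bit
            hist[code] += 1
    return hist

def chi_square(h1, h2):
    """Chi-square distance: 0 for identical histograms, larger = less similar."""
    return sum((a - b) ** 2 / (a + b) for a, b in zip(h1, h2) if a + b > 0)
```

In a real recognizer, the face is split into a grid of regions, one histogram is computed per region, and the concatenated histograms are what gets stored in the database and compared at recognition time.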
- Eigenfaces
Another algorithm that uses the OpenCV backend. https://en.wikipedia.org/wiki/Eigenface It was used to compare against the DNN approaches.
- Fisherfaces
Another algorithm that uses the OpenCV backend, also used to compare against the DNN approaches. According to rumors, this one is not finalized; as I remember, some methods are not implemented. This paper explains well the difference between Fisherfaces and Eigenfaces: http://disp.ee.ntu.edu.tw/~pujols/Eigenfaces%20and%20Fisherfaces.pdf
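The core of the Eigenfaces approach is PCA: faces are flattened into vectors, the dominant directions of variation are extracted, and recognition happens in that low-dimensional projection (Fisherfaces differs by using LDA to separate classes instead). The following is a minimal sketch under the assumption that faces are already flattened into small feature vectors; it finds only the single dominant component via power iteration, not a full eigenbasis.

```python
# Illustrative PCA sketch for the Eigenfaces idea (NOT digiKam's code).
# Faces are assumed to be pre-flattened feature vectors (lists of floats).

def mean_vec(rows):
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def top_component(rows, iters=200):
    """Find the dominant principal component with power iteration."""
    mu = mean_vec(rows)
    centered = [[v - m for v, m in zip(r, mu)] for r in rows]
    d = len(mu)
    vec = [1.0] * d
    for _ in range(iters):
        # Multiply vec by the implicit covariance matrix: X^T (X vec).
        proj = [sum(c * v for c, v in zip(row, vec)) for row in centered]
        vec = [sum(p * row[i] for p, row in zip(proj, centered))
               for i in range(d)]
        norm = sum(v * v for v in vec) ** 0.5
        vec = [v / norm for v in vec]
    return mu, vec

def project(face, mu, vec):
    """Coordinate of a face along the dominant component."""
    return sum((f - m) * v for f, m, v in zip(face, mu, vec))
```

Recognition then reduces to a nearest-neighbor search among the projected coordinates of the already-tagged faces.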
All this code was introduced by a student during a previous GSoC project: Yingjie Liu <[email protected]>. The four kinds of recognizer algorithms are instantiated, and the right one is used depending on the user's choice in the Face Scan dialog. All the low-level steps to train and recognize faces are done in this class. The middle-level code, multi-threaded and chained, started by the Face Scan dialog, is all here: https://cgit.kde.org/digikam.git/tree/core/utilities/facemanagement?h=development/dplugins

Why four kinds of recognition algorithms? To compare them and choose the best one. The student who worked on the DNN project a few years ago concluded that DNN was the best method to recognize faces with as few errors as possible. But the training and recognition process takes ages and slows down the application. It is agreed that DNN is the best way to go, but not using the current implementation based on DLib. With the 3.x versions, OpenCV introduced a DNN API. It should be used instead of the other approaches, as was already done for face detection.
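Whatever backend produces them, DNN-based recognizers generally reduce to the same final step: the network maps each face to a fixed-size embedding vector, and two faces are considered the same person when their embeddings are close enough. Here is a minimal sketch of that comparison step; the cosine-similarity metric and the 0.6 threshold are illustrative assumptions, not values from digiKam or OpenCV.

```python
# Illustrative sketch of DNN-embedding comparison (NOT a digiKam API).
# Embeddings are assumed to be fixed-size lists of floats produced by a
# face recognition network (e.g. a 128-dimensional vector).

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def same_person(emb1, emb2, threshold=0.6):
    """Hypothetical decision rule: similar enough embeddings = same identity."""
    return cosine_similarity(emb1, emb2) >= threshold
```

The appeal of this design is that the expensive part (running the network) happens once per face; the stored embeddings are cheap to compare, which is exactly what a database-backed recognizer needs.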
Which kind of information is stored in the database? This depends on the recognition algorithm used: histograms, vectors, or binary data, each tied to its algorithm's computation, and of course all mutually incompatible. Typically, when you change the recognition algorithm in the Face Scan dialog, the database must be cleared as well. In fact, this kind of database mechanism should be dropped once the DNN algorithm is finalized and only that one is retained to do the job. As said previously, the four algorithms are implemented in order to choose the best one. In the end, only one should remain in the digiKam face engine, and all the code should be simplified.
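To illustrate why the stored data is incompatible across algorithms: each backend serializes its own representation into an opaque blob, so blobs written by one algorithm are meaningless to another, which is why switching algorithms forces a database clear. The sketch below uses a hypothetical table and column names (not digiKam's real schema) with a DNN-style float-vector payload.

```python
# Illustrative sketch with a HYPOTHETICAL schema (not digiKam's database).
# Each recognizer stores its own binary representation, tagged by method.
import sqlite3
import struct

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE FaceData (identity INTEGER, method TEXT, data BLOB)")

def store_embedding(identity, vector):
    # A DNN backend would serialize a float vector into an opaque blob.
    blob = struct.pack(f"{len(vector)}f", *vector)
    db.execute("INSERT INTO FaceData VALUES (?, ?, ?)",
               (identity, "dnn", blob))

def load_embeddings(identity):
    # Only rows written by the same method can be meaningfully decoded.
    rows = db.execute(
        "SELECT data FROM FaceData WHERE identity = ? AND method = ?",
        (identity, "dnn")).fetchall()
    return [list(struct.unpack(f"{len(blob) // 4}f", blob))
            for (blob,) in rows]
```

An LBPH backend would instead pack histogram bins into the blob, and trying to decode one method's blob with another method's format would produce garbage, hence the current need to clear the table on an algorithm switch.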