GSoC/2020/Ideas

Konqi is giving a lesson!

See also: GSoC Instructions, last year's ideas

Guidelines

Information for Students

These ideas were contributed by our developers and users. They are sometimes vague or incomplete. If you wish to submit a proposal based on these ideas, you are urged to contact the developers and find out more about the particular suggestion you're looking at.

Becoming accepted as a Google Summer of Code student is quite competitive. Accepted students typically have thoroughly researched the technologies of their proposed project and have been in frequent contact with potential mentors. Simply copying and pasting an idea here will not work. On the other hand, creating a completely new idea without first consulting potential mentors rarely works.

When writing your proposal or asking for help from the general KDE community, don't assume people are familiar with the ideas here. KDE is really big!

If no specific contact is given, you can ask questions on the general KDE development list, [email protected]. See the KDE mailing lists page for information on available mailing lists and how to subscribe.

Note

These are all proposals! We are open to new ideas you might have!! Do you have an awesome idea you want to work on with KDE but that is not among the ideas below? That's cool. We love that! But please do us a favor: Get in touch with a mentor early on and make sure your project is realistic and within the scope of KDE.


Adding a Proposal

Note

Follow the template of other proposals!


Project:

If appropriate, screenshot or another image

Brief explanation:

Expected results:

Knowledge Prerequisite:

Mentor:

When adding an idea to this section, please try to include the following data:

  • if the application is not widely known, a description of what it does and where its code lives
  • a brief explanation
  • the expected results
  • pre-requisites for working on your project
  • if applicable, links to more information or discussions
  • mailing list or IRC channel for your application/library/module
  • your name and email address for contact (if you're willing to be a mentor)
    • Ideas without a listed mentor and their contact info will be removed

If you are not a developer but have a good idea for a proposal, get in contact with relevant developers first.

Ideas

Your Own Idea

Project: Something that you're totally excited about

Brief explanation: Do you have an awesome idea you want to work on with KDE but that is not among the ideas below? That's cool. We love that! But please do us a favor: Get in touch with a mentor early on and make sure your project is realistic and within the scope of KDE. That will spare you and us a lot of frustration.

Expected results: Something you and KDE love

Knowledge Prerequisite: Probably C++ and Qt but depends on your project

Mentor: Try to see who in KDE is interested in what you want to work on and approach them. If you are unsure you can always ask in #kde-soc on Freenode IRC.


Krita

Krita: digital painting for artists. It supports creating images from scratch, from beginning to end. Krita is a complex application, and developers need a fair amount of experience to be able to contribute.

Krita is a widely used digital painting application for professional artists. Last year, Krita gained the ability to create hand-drawn 2D animations, among other new features. For this year, projects that the Krita team would be interested in include the following ideas.

Note that we're always open to ideas you bring in yourself: if you're passionate about something you've come up with yourself, that you want for Krita, that's a big plus for us.

We also expect prospective students to submit at least three patches for bugs, wishes, or small features. We want to know how good you are! See https://phabricator.kde.org/T7724 for some smaller non-bug tasks that you could work on.

Talk to the team on IRC (freenode) in #krita or via the mailing list: https://mail.kde.org/mailman/listinfo/kimageshop

Project: Integrating the MyPaint Brush Engine

Brief Explanation: The MyPaint brush engine has been separated from the MyPaint application and has been completely rewritten. Artists still like the MyPaint brush engine a lot, and it would be great to have it integrated into Krita as a new brush engine. Libmypaint can be found here: https://github.com/mypaint/libmypaint and the brush set here: https://github.com/mypaint/mypaint-brushes . The first goal is to integrate libmypaint into a Krita brush engine and make it load the MyPaint brushes. The second goal is to expose the MyPaint brush options in Krita's brush editor and allow the modification and creation of MyPaint brushes in Krita. GIMP is an example of an application that has already integrated the MyPaint brush engine.
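
To give a feel for the first goal, here is a hedged sketch against the libmypaint 1.x C API: load a .myb brush (which is JSON) and draw a short stroke onto libmypaint's own fixed tiled surface. The header paths, coordinates, and surface are placeholders; the real integration would implement a MyPaintSurface backed by a Krita paint device.

    // Sketch only (libmypaint 1.x): load a MyPaint brush and stroke onto a
    // fixed tiled surface. Headers come from pkg-config's libmypaint include dir.
    #include <mypaint-brush.h>
    #include <mypaint-fixed-tiled-surface.h>
    #include <QByteArray>
    #include <QFile>

    void strokeWithMyPaintBrush(const QString &brushFile)
    {
        QFile f(brushFile);                       // a .myb file, e.g. from mypaint-brushes
        if (!f.open(QIODevice::ReadOnly)) return;
        const QByteArray json = f.readAll();      // .myb files are JSON brush settings

        MyPaintBrush *brush = mypaint_brush_new();
        mypaint_brush_from_string(brush, json.constData());

        MyPaintFixedTiledSurface *surface = mypaint_fixed_tiled_surface_new(512, 512);

        mypaint_brush_new_stroke(brush);
        mypaint_surface_begin_atomic((MyPaintSurface *)surface);
        // Arguments after the surface: x, y, pressure, xtilt, ytilt, dtime.
        mypaint_brush_stroke_to(brush, (MyPaintSurface *)surface, 100, 100, 1.0f, 0.0f, 0.0f, 0.1);
        mypaint_brush_stroke_to(brush, (MyPaintSurface *)surface, 300, 200, 1.0f, 0.0f, 0.0f, 0.1);
        MyPaintRectangle roi;
        mypaint_surface_end_atomic((MyPaintSurface *)surface, &roi);

        mypaint_surface_unref((MyPaintSurface *)surface);
        mypaint_brush_unref(brush);
    }

In Krita itself, the interesting work is the MyPaintSurface implementation that writes into a Krita paint device and the brush-editor UI, not the stroke loop above.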

Expected Results:

Artists should be able to effectively paint with MyPaint brushes in Krita.

Knowledge Prerequisite:

  • C, C++, Qt, Krita

Mentor: Boudewijn Rempt (IRC: boud)

Project: Supporting Vertical Text and SVG2 Text in the Text Shape

Brief Explanation: Krita's Text Shape was rewritten for Krita 4.0. It is now SVG based, instead of ODF. There are many things lacking, though. The original goal was to support SVG2; currently the text shape only supports SVG1. There is no automatic word wrap, and vertical text (e.g. Chinese and Japanese) is not supported either. The goal of this project is to support word wrap and vertical text layout. Other improvements to the text shape can be proposed as well. The level of this project is advanced.

Expected Results:

Artists should be able to create and edit vertical text. Text shapes should be able to automatically wrap text to the bounding box.

Knowledge Prerequisite:

  • C, C++, Qt, Krita, SVG, Typography, Text Layout

Level Advanced

Mentor: Boudewijn Rempt (IRC: boud)


Project: Add New Fill Layer Types

Brief Explanation: Fill layers are layers that automatically generate content. Krita currently has two types of fill layers: Color and Pattern. There used to be another type that generated content dynamically using the OpenShiva scripting language. However, that language hasn't been maintained for a long time. The goal of this project is to add new dynamic fill layer types that can fill an area with different effects such as Perlin and other kinds of noise, clouds, hatching, or fractals.
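
As a rough illustration of the kind of generator such a fill layer wraps, here is a tiny self-contained value-noise sketch; it is not Krita API (a real fill layer would be a generator plugin writing into a paint device), just the procedural core:

    // Sketch only: smoothly interpolated lattice ("value") noise, the simplest
    // relative of Perlin noise, filling a grayscale tile procedurally.
    #include <cmath>
    #include <cstdint>
    #include <vector>

    static float hashToUnit(int x, int y)
    {
        // Cheap integer hash mapped to [0, 1); good enough for illustration.
        uint32_t h = uint32_t(x) * 374761393u + uint32_t(y) * 668265263u;
        h = (h ^ (h >> 13)) * 1274126177u;
        return (h & 0xFFFFFF) / float(0x1000000);
    }

    static float valueNoise(float x, float y, float scale)
    {
        const float fx = x / scale, fy = y / scale;
        const int x0 = int(std::floor(fx)), y0 = int(std::floor(fy));
        const float tx = fx - x0, ty = fy - y0;
        const float sx = tx * tx * (3 - 2 * tx);   // smoothstep weights
        const float sy = ty * ty * (3 - 2 * ty);
        const float a = hashToUnit(x0, y0),     b = hashToUnit(x0 + 1, y0);
        const float c = hashToUnit(x0, y0 + 1), d = hashToUnit(x0 + 1, y0 + 1);
        const float top    = a + (b - a) * sx;
        const float bottom = c + (d - c) * sx;
        return top + (bottom - top) * sy;
    }

    std::vector<float> generateNoiseTile(int w, int h, float scale = 32.0f)
    {
        std::vector<float> pixels(size_t(w) * h);
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x)
                pixels[size_t(y) * w + x] = valueNoise(float(x), float(y), scale);
        return pixels;
    }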

Expected Results:

Several new fill layer types that allow the user to add dynamically generated content as a layer in the layer stack

Knowledge Prerequisite:

  • C, C++, Qt, Krita

Level Medium

Mentor: Boudewijn Rempt (IRC: boud)

Project: Improve Krita for Touch Systems

Brief Explanation: Krita Gemini and Krita Sketch were versions of Krita based on QtQuick 1 that provided a decent touch-only experience. Because of the technical limitations of QtQuick 2, the approach used in Gemini and Sketch is no longer viable. Since Krita 4, there is a QtQuick 2 based touch docker that mimics the button bar found on some Wacom devices. This is not configurable, and quite limited. This project involves working with Krita's UX designers and users to define a new approach to supporting touch devices, then implementing that support.

Expected Results:

Artists should be able to work with Krita on a touch-only device such as a Surface Pro or Wacom Mobile Studio without wanting to chop their devices in two.

Knowledge Prerequisite:

  • C, C++, Qt, Krita

Level Medium

Mentor: Boudewijn Rempt (IRC: boud)

Project: SVG Mesh Gradients

Brief Explanation: Even though Mesh Gradients are not officially part of the truncated SVG2 specification anymore, having a second implementation next to Inkscape would help improve the standard. Plus, mesh gradients are very useful for artists. This project entails implementing a new gradient type. Whether this should be based on QGradient or not is up for discussion. The gradients should render exactly the same as in Inkscape. See https://svgwg.org/svg-next/pservers.html#MeshGradientElement.
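
For a first feel of the rendering math: the colour inside one mesh patch is, ignoring the Bézier-curved edges of real Coons patches, a bilinear blend of the four corner colours. A hedged sketch of that simplified case:

    // Sketch only: bilinear blend of the four corner colours of a patch.
    // Real SVG mesh gradients use Coons patches whose edges are cubic Béziers,
    // so this is the simplified, straight-edged case.
    #include <QColor>

    QColor bilinearPatchColor(const QColor &c00, const QColor &c10,
                              const QColor &c01, const QColor &c11,
                              qreal u, qreal v)          // u, v in [0, 1]
    {
        const auto mix = [](qreal a, qreal b, qreal t) { return a + (b - a) * t; };
        const qreal red   = mix(mix(c00.redF(),   c10.redF(),   u), mix(c01.redF(),   c11.redF(),   u), v);
        const qreal green = mix(mix(c00.greenF(), c10.greenF(), u), mix(c01.greenF(), c11.greenF(), u), v);
        const qreal blue  = mix(mix(c00.blueF(),  c10.blueF(),  u), mix(c01.blueF(),  c11.blueF(),  u), v);
        return QColor::fromRgbF(red, green, blue);
    }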

Expected Results:

A new gradient type, with UI to create, edit, and apply these gradients. Gradients should work on vector objects as well as on paint layers.

Knowledge Prerequisite:

  • C, C++, Qt, Krita, SVG, Inkscape

Level Advanced

Mentor: Boudewijn Rempt (IRC: boud)

Project: Extending Animation Support for curves

Brief Explanation: In Krita, you can already add curves to animate certain properties of a layer, such as opacity. We want to extend the animation support by allowing users to place masks (filter masks, transformation masks, transparency masks) on the timeline and animate their properties using curves. Every property of a layer or mask placed on the timeline should be animatable.

Expected results:

  • Implementation of a gui for applying the curve settings to one or more properties of a mask or layer
  • Implementation of the actual rendering of the properties in the frames
  • Saving of these settings

Knowledge Prerequisite:

  • C++ and Qt

Level Advanced

Mentor: Jouni Pentikainen (tyyppi on IRC)


Project: Adding support for high-channel depth brush tips

Brief Explanation: Currently, brush tips are 8 bits and based on QImage objects. With the advent of 16 bit/channel and 32 bit/channel support in QImage, we can start supporting higher bit depth brush tips. The 16 bit/channel GBR format from Cinepaint is not so relevant: we should support EXR and PNG for predefined brush tips and extend the autogenerated brush tips to support higher channel depths as well.
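
On the loading side, a minimal sketch of keeping the extra precision, assuming Qt 5.12 or later where QImage gained 16-bit-per-channel formats; the conversion into Krita's own brush-tip representation is not shown:

    // Sketch only: load a 16-bit PNG brush tip without truncating it to 8 bits.
    // QImage::Format_RGBA64 requires Qt >= 5.12.
    #include <QImage>
    #include <QString>

    QImage loadHighDepthBrushTip(const QString &fileName)
    {
        QImage tip(fileName);                            // the PNG plugin keeps 16-bit data in Qt >= 5.12
        if (tip.isNull() || tip.format() == QImage::Format_RGBA64)
            return tip;
        // Force a 16-bit-per-channel representation so later processing keeps the precision.
        return tip.convertToFormat(QImage::Format_RGBA64);
    }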

Expected results:

  • A gui to select the channel depth when creating brush tips
  • Loading of high-channel depth brush tips
  • Support for high-channel depth brush tips when painting

Knowledge Prerequisite:

  • C++ and Qt

Level Advanced

Mentor: Jouni Pentikainen (tyyppi on IRC)

Project: Extend Arrange Docker to support alignment and distribution of Layers

Brief Explanation: Currently, the Arrange docker only supports aligning and distributing vector objects within a single vector layer. This project aims to extend the Arrange docker to support layers as well.

Expected results:

  • All operations currently available in the Arrange docker can also be applied to layers.

Knowledge Prerequisite:

  • C++ and Qt

Level Easy

Mentor: TBD

digiKam

digiKam is an advanced open-source digital photo management application that runs on Linux, Windows, and MacOS. The application provides a comprehensive set of tools for importing, managing, editing, and sharing photos and raw files.

Project: DNN based Faces Recognition Improvements

Brief Explanation: During GSoC 2019, we proposed a project to implement an AI extension to the digiKam core face recognition. The project used the C++ OpenCV Deep Learning Module to detect and recognize faces with success. There are still many places in the recognition mechanism to be improved, in order to obtain more accurate results while avoiding re-recognition of the same faces, especially when more than one face is detected in the same photo. In addition, clustering analysis of unknown faces should be studied more carefully during this project, as it could be very useful for improving the UX of the digiKam face engine.

In parallel, the Faces Detection and Recognition code from digiKam core should be restructured into a plugin architecture, to facilitate future contributions extending this feature to other kinds of detection and recognition (e.g. animals, monuments, plants, etc.).
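
A hedged sketch of the clustering idea mentioned above: compute one embedding per detected face with OpenCV's DNN module and greedily group faces whose embeddings are close. The OpenFace-style model file, the 96x96 input size, and the 0.6 threshold are illustrative placeholders, not necessarily what digiKam's face engine uses.

    // Sketch only: embed face crops with a DNN and group them by L2 distance.
    #include <opencv2/core.hpp>
    #include <opencv2/dnn.hpp>
    #include <string>
    #include <vector>

    std::vector<int> clusterFaces(const std::vector<cv::Mat> &faceCrops,
                                  const std::string &modelPath,   // e.g. an OpenFace .t7 model (placeholder)
                                  double threshold = 0.6)         // placeholder distance threshold
    {
        cv::dnn::Net net = cv::dnn::readNetFromTorch(modelPath);

        std::vector<cv::Mat> embeddings;
        for (const cv::Mat &face : faceCrops) {
            cv::Mat blob = cv::dnn::blobFromImage(face, 1.0 / 255, cv::Size(96, 96),
                                                  cv::Scalar(), true, false);
            net.setInput(blob);
            embeddings.push_back(net.forward().clone());          // one feature vector per face
        }

        // Greedy clustering: a face joins the first cluster whose representative
        // embedding is within the threshold, otherwise it starts a new cluster.
        std::vector<int> labels(faceCrops.size(), -1);
        std::vector<cv::Mat> representatives;
        for (size_t i = 0; i < embeddings.size(); ++i) {
            for (size_t c = 0; c < representatives.size(); ++c) {
                if (cv::norm(embeddings[i], representatives[c], cv::NORM_L2) < threshold) {
                    labels[i] = int(c);
                    break;
                }
            }
            if (labels[i] == -1) {
                labels[i] = int(representatives.size());
                representatives.push_back(embeddings[i]);
            }
        }
        return labels;                                            // cluster id per face crop
    }

A production implementation would of course use a proper algorithm (e.g. DBSCAN or hierarchical clustering) and store the results in the digiKam database rather than returning plain labels.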

Expected Results:

Improve the Face Recognition workflow using clustering and open the recognition architecture to plugins for future extensions. Implement unit tests and code documentation.

Knowledge Prerequisite:

  • C++, Qt, OpenCV, Neural Network

Level Advanced

Mentors: Maik Qualmann ([email protected]), Thanh Trung Dinh ([email protected]), and Gilles Caulier ([email protected])

Project: Faces Management workflow improvements

Brief Explanation: digiKam provides a face detection algorithm that works in roughly 80% of use cases. It detects face positions in images automatically and registers this information in the database. Even though a lot of tasks can be done in the background by digiKam, end users need to adjust, re-organize, rename, and delete face tags in the database through the user interface.

Over many years, a lot of improvements have been identified by the digiKam user community to improve the face tags management workflow in the graphical user interface. See this list of Bugzilla entries for details.

Note: Face Recognition is another part of Faces management, but that part concerns the algorithms used during recognition and is not the focus of this project.

Expected Results:

Provide a better face tags management workflow in the user interface, with unit tests and documentation.

Knowledge Prerequisite:

  • C, C++, Qt, User interface, digiKam

Mentors: Maik Qualmann ([email protected]) and Gilles Caulier ([email protected])

Project: Factoring all Export Tools with new Export API and port to QtNetworkAuth

Brief Explanation: For GSoC 2018, we proposed a project to implement a huge factorization of, and improvements to, all digiKam export-to-web-service plugins. Our student fixed plenty of code using OAuth 2 authentication through the libo2 library, simplified classes, and started to write a new API to factorize all these tools, including a common Wizard dialog. Even though the export tools implementation is now better, the tools do not use the new API and still run as stand-alone sessions in digiKam core. Because of this, the web service tools are not yet usable in the digiKam Batch Queue Manager as a single runnable step at the end of queue processing. The section of code for the factored export tools API is therefore currently disabled in digiKam core. This year, the project will be to fix that and to migrate the libo2 dependency to the new QtNetworkAuth framework.
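
A minimal sketch of what the libo2-to-QtNetworkAuth migration could look like for one service, using QOAuth2AuthorizationCodeFlow; the endpoint URLs, client id, scope, and callback port below are placeholders, not the values of any particular web service.

    // Sketch only: OAuth2 authorization-code flow with QtNetworkAuth.
    #include <QtNetworkAuth/QOAuth2AuthorizationCodeFlow>
    #include <QtNetworkAuth/QOAuthHttpServerReplyHandler>
    #include <QDesktopServices>

    void authenticateWebService(QObject *parent)
    {
        auto *flow = new QOAuth2AuthorizationCodeFlow(parent);
        flow->setAuthorizationUrl(QUrl(QStringLiteral("https://example.com/oauth2/auth")));   // placeholder
        flow->setAccessTokenUrl(QUrl(QStringLiteral("https://example.com/oauth2/token")));    // placeholder
        flow->setClientIdentifier(QStringLiteral("digikam-client-id-placeholder"));
        flow->setScope(QStringLiteral("upload"));                                             // placeholder

        // Local loopback server that receives the redirect carrying the auth code.
        auto *replyHandler = new QOAuthHttpServerReplyHandler(8000, flow);
        flow->setReplyHandler(replyHandler);

        // Open the provider's consent page in the user's browser.
        QObject::connect(flow, &QOAuth2AuthorizationCodeFlow::authorizeWithBrowser,
                         &QDesktopServices::openUrl);
        QObject::connect(flow, &QOAuth2AuthorizationCodeFlow::granted, parent, [flow]() {
            // flow->token() can now be attached to the service's upload requests.
        });
        flow->grant();
    }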

Expected Results:

Start to use the new export tools API everywhere, use the new Wizard dialog, factor code everywhere, port to QtNetworkAuth, and introduce all export tools to the BQM. Write unit tests and documentation.

Knowledge Prerequisite:

  • C++, Qt, Oauth2

Mentors: Mohamed Anwer ([email protected]), Maik Qualmann ([email protected]) and Gilles Caulier ([email protected])

Kirogi

Kirogi is a Ground Control Station (GCS) application for drones with a modern mindset and codebase philosophy. The project is one of the newest under the KDE organization, and we would like your help to make it the best open source GCS around!

We are also open to new ideas, so feel free to send your own plan for Kirogi.

For more information, take a look in our website: https://kirogi.org/

Be in touch with:


Project: Improve MAVLink integration

Brief Explanation: MAVLink is one of the most popular communication protocols between ground control stations and quadcopters, planes, submarines, and other vehicles. Kirogi has an initial MAVLink integration, but some features are still missing, including link configuration, parameter configuration, flight modes, and others.
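
For orientation, a hedged sketch of the lowest layer this builds on: the generated MAVLink C headers plus a Qt socket, packing a HEARTBEAT message and parsing whatever comes back. Kirogi's own vehicle/plugin classes are not shown, and the include path and connection details are placeholders.

    // Sketch only: raw MAVLink (common dialect) over an already-connected QTcpSocket.
    #include <mavlink/common/mavlink.h>   // path depends on how the generated headers are installed
    #include <QTcpSocket>

    void sendHeartbeatAndListen(QTcpSocket *socket)   // e.g. connected to a SITL vehicle on tcp:127.0.0.1:5760
    {
        // Announce ourselves as a ground control station.
        mavlink_message_t msg;
        mavlink_msg_heartbeat_pack(255 /*system id*/, 0 /*component id*/, &msg,
                                   MAV_TYPE_GCS, MAV_AUTOPILOT_INVALID,
                                   0 /*base mode*/, 0 /*custom mode*/, MAV_STATE_ACTIVE);
        uint8_t buffer[MAVLINK_MAX_PACKET_LEN];
        const uint16_t len = mavlink_msg_to_send_buffer(buffer, &msg);
        socket->write(reinterpret_cast<const char *>(buffer), len);

        // Feed incoming bytes to the parser; it yields one complete message at a time.
        QObject::connect(socket, &QTcpSocket::readyRead, socket, [socket]() {
            const QByteArray data = socket->readAll();
            mavlink_message_t incoming;
            mavlink_status_t status;
            for (char byte : data) {
                if (mavlink_parse_char(MAVLINK_COMM_0, uint8_t(byte), &incoming, &status)) {
                    // incoming.msgid identifies the message (HEARTBEAT, ATTITUDE, PARAM_VALUE, ...).
                }
            }
        });
    }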

Expected Results:

  • Allow MAVLink connection via serial connection and TCP
  • Support and identify different vehicles
  • Control and change flight modes

Knowledge Prerequisite:

  • C++, QML, JS, CMake

Mentors: Patrick José Pereira ([email protected])

Co-mentor: Eike Hein ([email protected])

Project: Mission planner widget

Brief Explanation: An unmanned vehicle needs mission plans to do its job. To allow end users to create them, Kirogi needs an intuitive graphical user interface to set waypoints, control the camera along the flight path, and configure survey patterns. This user interface then needs to make the user-created mission plan data available for further processing by the vehicle- or protocol-specific backends.
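
To make the "abstract interface" part concrete, here is a hedged sketch of one possible hand-off shape between the planner UI and the backend plugins; every type and member name below is hypothetical, not existing Kirogi API.

    // Sketch only: hypothetical mission data plus the interface a backend plugin
    // could implement to receive a plan built in the mission planner UI.
    #include <QGeoCoordinate>
    #include <QVector>

    struct MissionWaypoint {
        QGeoCoordinate position;        // latitude, longitude, altitude
        float holdTimeSeconds = 0.0f;   // loiter before continuing
        bool triggerCamera = false;     // take a photo at this waypoint
    };

    struct MissionPlan {
        QVector<MissionWaypoint> waypoints;
        float cruiseSpeedMs = 5.0f;     // metres per second
    };

    class MissionBackend {              // implemented by vehicle/protocol-specific backends
    public:
        virtual ~MissionBackend() = default;
        virtual bool uploadMission(const MissionPlan &plan) = 0;
    };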

Expected Results:

  • A friendly user interface to allow users to plan missions for unmanned vehicles
  • A good abstract interface to provide mission information from the GUI to the vehicle backend plugins

Knowledge Prerequisite:

  • C++, QML, JS, CMake

Mentors: Patrick José Pereira ([email protected])

Co-mentor: Eike Hein ([email protected])

Okular

Okular is a universal document viewer developed by KDE. Okular works on multiple platforms, including but not limited to Linux, Windows, Mac OS X, *BSD. Contact the Okular developers.

Project: Improve custom stamp annotation handling

Brief explanation: Okular does display stamp annotations, but the support is somewhat incomplete. This particularly shows when trying to use stamp annotations with a custom image. For example, such annotations can be added in Okular, but they cannot be saved to the pdf file in a way that any other pdf viewer can read. Also, they will not appear on print-outs.

The underlying reason for this is that Okular renders these stamps itself, rather than relying on the poppler library, which does all other pdf rendering. The goal of this project is therefore to teach poppler how to render stamp annotations, and then make Okular use that new functionality. More details can be found in the bug report [0].

[0] https://bugs.kde.org/show_bug.cgi?id=383651

Expected results: Poppler should render stamp annotations. Annotations should be printable from Okular. Custom stamps inserted via the Okular GUI should be visible in other pdf readers.

Knowledge prerequisite: C++, and a bit about the pdf format.

Mentor: Albert Astals Cid [email protected]

KtoBlzCheck

KtoBLZCheck is a library for checking account numbers and bank codes of German banks. The basic data used by the library is also used by other applications for the administration of finances such as KMyMoney and AqBanking.

Project: Provide the bank data needed for financial applications in SQLite format

Brief Explanation: To avoid duplicate data and to support multiple countries, the query and generation of SQLite databases will be integrated into ktoblzcheck.

Expected Results: The data format (text file) used so far should be replaced by SQLite databases, and the available command line tools should be changed to use the SQLite databases. Furthermore, an API for querying the SQLite databases is required to integrate these databases into other applications. For KDE applications, for example, support for KServiceTypeTrader::query() is required.
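
A hedged sketch of what a lookup against such an SQLite database could look like with the sqlite3 C API; the database file name and the "banks" table schema are hypothetical and would really be defined by the ktoblzcheck generation tools.

    // Sketch only: look up a bank name by bank code in an SQLite file.
    #include <sqlite3.h>
    #include <string>

    std::string lookupBankName(const std::string &dbFile, const std::string &bankCode)
    {
        sqlite3 *db = nullptr;
        if (sqlite3_open(dbFile.c_str(), &db) != SQLITE_OK)
            return {};

        std::string name;
        sqlite3_stmt *stmt = nullptr;
        const char *sql = "SELECT name FROM banks WHERE bank_code = ?1";   // hypothetical schema
        if (sqlite3_prepare_v2(db, sql, -1, &stmt, nullptr) == SQLITE_OK) {
            sqlite3_bind_text(stmt, 1, bankCode.c_str(), -1, SQLITE_TRANSIENT);
            if (sqlite3_step(stmt) == SQLITE_ROW)
                name = reinterpret_cast<const char *>(sqlite3_column_text(stmt, 0));
        }
        sqlite3_finalize(stmt);
        sqlite3_close(db);
        return name;
    }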

Knowledge requirement: C++, technical English (speaking and writing)

Level: intermediate level

Mentor: contact Rhabacker

KStars

KStars is free, open source, cross-platform astronomy software. It provides an accurate graphical simulation of the night sky, from anywhere on Earth, at any date and time. It includes up to 100 million stars, 13,000 deep sky objects, all 8 planets, Sun and Moon, and thousands of comets, asteroids, supernovae and satellites.

Project: Support of virtual reality

Short explanation: KStars should support virtual reality devices

Brief Explanation: Valve announced the release of their VR flagship https://twitter.com/valvesoftware/status/1196566870360387584, which is likely to increase the acceptance of VR devices and is a good opportunity to follow this trend. KStars with virtual reality support could be published, like Krita, in the Steam store and thus gain a wider distribution.

Expected Results: A version of KStars that can be used with an HTC Vive or another OpenVR-compatible virtual reality device [1]. This includes support for spatial representation, an extension for stereoscopic representation, connecting the rendering to OpenVR, and connecting virtual reality controllers for interaction and navigation in the spatial scene, which could be based on the OpenVR viewer for OSG.

Knowledge requirement: C++, OpenGL, 3D programming, KStars, OpenVR, technical English (speaking and writing)

Requirement:


Level: intermediate level

Mentor: in progress (contact Contact)

Project: Spatial representation for KStars

Short explanation: KStars currently uses a 2D drawing interface to display graphical objects. In preparation for virtual reality support, an extension is required to display a spatial scene.

Brief explanation: It should be possible to extend the SkyGLPainter class to use the corresponding functions of a 3D API instead of the 2D OpenGL API.

Expected Results: A version of KStars that uses a spatial scene for rendering.

Knowledge requirement: C++, openGL, 3D programming, KStars

Notes: This project is optional, because the current 2D scene could probably be mapped onto a sphere for VR glasses.

Level: intermediate level

Mentor: in progress (contact Contact)


Project: Stereoscope display for KStars

Short explanation: Virtual reality support requires the representation of the scene for each eye, which must be added to KStars

Brief explanation: An implementation would have to add an additional instance for the second eye, and the control of this second view, to the existing SkyMapGLDraw instance. Activation of this view mode should be added via a menu item in the settings menu, and the documentation should be adapted accordingly.

Expected results: A version of KStars that supports stereoscopic viewing

Knowledge requirement: C++, openGL, 3D programming, KStars

Level: intermediate level

Mentor: in progress (contact Contact)


Project: Connect stereoscopic rendering in KStars to the VR API

Short explanation: To support a virtual reality device based on the OpenVR API, the rendering must be connected to that API.

Expected results: A version of KStars that allows rendering to a display on an HTC Vive or similar device.

Knowledge requirement: C++, openGL, 3D programming, KStars

Notes: The submission of images to OpenVR relies on DirectX, OpenGL, Metal and/or Vulkan. With the port to KF5, the OpenGL backend of KStars was removed and must be added back [2].
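
For reference, a hedged sketch of the OpenVR side of that connection: initialise the runtime, ask for the per-eye render target size, then submit one OpenGL texture per eye to the compositor each frame. renderEyeToTexture() is a hypothetical stand-in for the KStars OpenGL rendering that would have to be added back.

    // Sketch only: minimal OpenVR scene-application loop with OpenGL textures.
    #include <openvr.h>
    #include <cstdint>
    #include <initializer_list>

    extern uint32_t renderEyeToTexture(vr::EVREye eye, uint32_t width, uint32_t height); // hypothetical

    void runVrFrameLoop()
    {
        vr::EVRInitError initError = vr::VRInitError_None;
        vr::IVRSystem *vrSystem = vr::VR_Init(&initError, vr::VRApplication_Scene);
        if (initError != vr::VRInitError_None)
            return;

        uint32_t width = 0, height = 0;
        vrSystem->GetRecommendedRenderTargetSize(&width, &height);

        vr::TrackedDevicePose_t poses[vr::k_unMaxTrackedDeviceCount];
        bool running = true;                     // would be cleared when the application quits
        while (running) {
            // Blocks until the compositor is ready for the next frame and returns device poses.
            vr::VRCompositor()->WaitGetPoses(poses, vr::k_unMaxTrackedDeviceCount, nullptr, 0);

            for (vr::EVREye eye : { vr::Eye_Left, vr::Eye_Right }) {
                const uint32_t glTextureId = renderEyeToTexture(eye, width, height);
                vr::Texture_t tex = { reinterpret_cast<void *>(uintptr_t(glTextureId)),
                                      vr::TextureType_OpenGL, vr::ColorSpace_Gamma };
                vr::VRCompositor()->Submit(eye, &tex);
            }
        }
        vr::VR_Shutdown();
    }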

Requirement: Access to a virtual reality device supported by openvr api like HTC Vive or Valve Index

Level: intermediate level

Mentor: in progress (contact Contact)

Project: Adding Virtual Reality Controller support to KStars

Short explanation: For interaction with the application and navigation in the spatial scene, support for the use of virtual reality controllers is required.

Expected Results: A version of KStars that supports navigation in the spatial scene by using a virtual reality device.

Knowledge requirement: C++, openGL, 3D programming, KStars

Requirement: Access to a virtual reality device supported by openvr api like HTC Vive

Level: intermediate level

Mentor: in progress (contact Contact)


Project: Linux port of an openVR driver for navigating in a SteamVR environment

Long Explanation: For the development of virtual reality applications with SteamVR, a real Head Mounted Display (HMD) is not always required; often a corresponding display on the screen and rudimentary control options are sufficient. SteamVR provides a so-called "null" driver [3] for the display, which emulates the availability of a Head Mounted Display (HMD). What is missing in this driver is an easy way to navigate the scene, e.g. moving the HMD or the hand controllers and their buttons. This has already been realized for Windows [4]; what is still missing is a port to Linux. The biggest challenge when using Qt or SDL is getting hold of the keyboard input for the current VR application so it can be processed. Under Windows, a system API function is used for this, and something similar is probably also required under Linux.

Requirements: SteamVR account and SteamVR installation

Knowledge requirement: C++, ???

Level: Intermediate

Mentor: Contact


Akonadi

The Akonadi framework is responsible for providing applications with a centralized database to store, index and retrieve the user's personal information. This includes the user's emails, contacts, calendars, events, journals, alarms, notes, etc.

Project: EteSync sync backend for Akonadi

Brief explanation: EteSync is a secure, end-to-end encrypted and FLOSS sync solution for your contacts, calendars and tasks. There are clients for Android, iOS, the desktop (Cal/CardDAV bridge) and the web, and a Thunderbird plugin is in the works. The idea is to implement a KDE PIM backend to enable KDE users to use EteSync to easily end-to-end encrypt and sync their contacts, calendars and tasks.
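
To give a feel for the Akonadi half of the work, here is a hedged skeleton of a resource built on Akonadi::ResourceBase; the class name and include paths are illustrative (KF5-era headers assumed), and everything EteSync-specific is reduced to comments.

    // Sketch only: the rough shape of an Akonadi resource. Where the comments say
    // "EteSync", the real implementation would call the EteSync client library.
    #include <AkonadiAgentBase/ResourceBase>
    #include <AkonadiCore/Collection>
    #include <AkonadiCore/Item>

    class EteSyncResource : public Akonadi::ResourceBase
    {
        Q_OBJECT
    public:
        explicit EteSyncResource(const QString &id)
            : Akonadi::ResourceBase(id) {}

    protected:
        void retrieveCollections() override
        {
            // Fetch the user's EteSync journals and map each to an
            // Akonadi::Collection (address book, calendar, task list).
            collectionsRetrieved(Akonadi::Collection::List());
        }

        void retrieveItems(const Akonadi::Collection &collection) override
        {
            // Download and decrypt this journal's entries, convert them to
            // vCard/iCal payloads and hand them to Akonadi.
            Q_UNUSED(collection)
            itemsRetrieved(Akonadi::Item::List());
        }

        bool retrieveItem(const Akonadi::Item &item, const QSet<QByteArray> &parts) override
        {
            Q_UNUSED(parts)
            itemRetrieved(item);   // the full payload would be fetched and decrypted here
            return true;
        }
    };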

Expected results: KDE users will be able to end-to-end encrypt and sync their PIM information using the EteSync protocol.

Knowledge Prerequisite: C++ and basic familiarity with Qt

Level: Medium

Mentor: Daniel Vrátil (dvratil on IRC) for the Akonadi part and Tom Hacohen (TAsn on IRC, [email protected] by email) for EteSync