MPRIS

From KDE Community Wiki

Latest revision as of 15:52, 23 September 2020

Specification: https://www.freedesktop.org/wiki/Specifications/mpris-spec/

Controllers

KDE Software

  • Plasma media controller applet: plasma-workspace/applets/mediacontroller
  • Plasma mediakey handler: plasma-workspace/dataengines/mpris2/multiplexedservice.cpp
  • Plasma taskmanager applet (tooltip & context menu): plasma-desktop/applets/taskmanager
  • KDE Connect mediaplayer

Players

KDE Software

  • Amarok
  • Vvave
  • Elisa
  • JuK
  • Plasma Media Center
  • Gwenview
  • Plasma Browser Integration
  • IDEA: Okular (Presentation)

Other

  • VLC
  • Spotify (implementation is broken, no support for volume and seeking)

Features for MPRIS 3.0

Dumping ground of use cases with sketched solutions (Properties, Actions). Nothing official, used for brainstorming for now.

General

  • Some players (e.g. YouTube) do not have a Stop action; Plasma-Browser-Integration lacks it as well (because the browser API does not expose it?).
  • Some players have elaborate media picker dialogs going beyond what can be implemented using the MPRIS properties "supported schemes" and "supported mimetypes". It might be nice to be able to trigger the native media picker dialog instead.
  • There are players for media without sound, and players not exposing control over the sound volume (see https://lists.freedesktop.org/archives/mpris/2018q1/000070.html).
  • Some players provide thumbnails for the whole track, and others even individual thumbnails for parts of the track.
  • The MIME type for the current track (xesam:mimeType) could be carried in the Metadata property.

Properties:

  • CanStop
  • CanControlVolume
  • HasMediaPicker
  • PlayerType ("music player", "video player", "image player", "presentation player")
  • SupportedThumbnailSizes
  • MetaDataRanges
  • Supported resolutions/quality levels

Actions:

  • OpenMediaPicker/OpenMediaManager
  • GetThumbnail (formats?)
  • Pick resolution/quality level
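
As a sketch of how a controller might consume the proposed properties above, assuming they arrive as a plain dictionary (standing in for a D-Bus GetAll() result). None of these property names exist in the released MPRIS 2 spec; they are the proposals from this page:

```python
# Hypothetical sketch: decide which controls a controller UI should show,
# based on the proposed CanStop/CanControlVolume/HasMediaPicker properties.
# The dict stands in for a D-Bus properties GetAll() result.

def visible_controls(props: dict) -> set:
    """Return the set of UI controls a controller should display."""
    controls = {"play", "pause"}  # assumed always available, for brevity
    if props.get("CanStop", True):           # default True: MPRIS 2 players have Stop
        controls.add("stop")
    if props.get("CanControlVolume", True):  # proposed: hide slider for silent media
        controls.add("volume")
    if props.get("HasMediaPicker", False):   # proposed: native picker dialog
        controls.add("open-media-picker")
    return controls

# A player without Stop or volume control, but with a native media picker:
print(sorted(visible_controls({"CanStop": False, "CanControlVolume": False,
                               "HasMediaPicker": True})))
# → ['open-media-picker', 'pause', 'play']
```

The point of the defaults is backward compatibility: a controller talking to a player that does not implement the new properties should behave exactly as it does with MPRIS 2 today.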

Streaming

Stream players can have the option to buffer while playback is on hold, enabling instant replay. The buffering could be done on the data-provider/source side or on the player side. The length of the buffered track can depend on policies (fixed) or on available storage (variable). Players would allow seeking in the buffer and playing the stream with an offset.

Properties:

  • CanBuffer
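
A minimal sketch of how seeking with an offset into a stream buffer could behave. The (start, end) window shape is an assumption of this sketch, as is CanBuffer itself; positions are in microseconds, matching MPRIS conventions:

```python
# Hypothetical sketch: clamp a requested playback position into the buffered
# window of a live stream, so a seek never leaves the buffered range.

def clamp_seek(requested_us: int, buffer_start_us: int, buffer_end_us: int) -> int:
    """Return the nearest position inside the buffered window."""
    return max(buffer_start_us, min(requested_us, buffer_end_us))

# Requesting a replay further back than the buffer reaches lands at the
# oldest buffered position instead:
pos = clamp_seek(requested_us=10_000_000,
                 buffer_start_us=30_000_000, buffer_end_us=90_000_000)
print(pos)  # → 30000000
```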

Advertisement handling

Players for commercial media often embed advertisements, e.g. embedded clips or overlaid information. Some allow skipping or hiding the advertisement, some only after a timeout.

Properties:

  • CanSkipAd
  • state "Showing Ads"
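
A controller consuming the proposed ad-handling properties might look like this sketch. Both CanSkipAd and a "Showing Ads" playback state are the proposals above, not existing MPRIS API, and the mandatory watch-time logic is an illustrative assumption:

```python
# Hypothetical sketch: decide whether a controller should offer a "Skip ad"
# button. "ShowingAds" as a playback state and CanSkipAd are proposals only.

def offer_skip(playback_state: str, can_skip_ad: bool,
               elapsed_s: float, skip_after_s: float = 5.0) -> bool:
    """Offer skipping only while an ad plays, the player allows it,
    and any mandatory watch time has elapsed."""
    return (playback_state == "ShowingAds"
            and can_skip_ad
            and elapsed_s >= skip_after_s)

print(offer_skip("ShowingAds", True, elapsed_s=6.0))  # → True
print(offer_skip("ShowingAds", True, elapsed_s=2.0))  # → False
```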

Presentation

Presentation players support blanking the screen and showing a pointer or marker.

Properties:

  • CanBlankScreen
  • CanPointer
  • CanMarker
  • BlankScreenColors (black, white, RGB)
  • Marker

Actions:

  • BlankScreen(color)
  • UnblankScreen
  • AddPointer
  • MovePointer
  • RemovePointer
  • AddMarker
  • MoveMarker
  • RemoveMarker
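
The proposed presentation actions can be sketched as a tiny state machine; the method names mirror the action list above and are proposals only, and the normalized (x, y) pointer coordinates are an assumption of this sketch:

```python
# Hypothetical sketch of the proposed presentation actions. A player would
# expose these as D-Bus methods; here they just mutate local state.

class PresentationScreen:
    def __init__(self):
        self.blank_color = None   # None = normal slide content shown
        self.pointer = None       # (x, y) in [0, 1] when a pointer is shown

    def blank_screen(self, color="black"):
        self.blank_color = color

    def unblank_screen(self):
        self.blank_color = None

    def add_pointer(self, x, y):
        self.pointer = (x, y)

    def move_pointer(self, x, y):
        if self.pointer is not None:  # moving only makes sense while shown
            self.pointer = (x, y)

    def remove_pointer(self):
        self.pointer = None

screen = PresentationScreen()
screen.blank_screen("white")
screen.add_pointer(0.5, 0.5)
screen.move_pointer(0.6, 0.4)
print(screen.blank_color, screen.pointer)  # → white (0.6, 0.4)
```

Markers would follow the same Add/Move/Remove pattern, so they are omitted here.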

Braindump

One can consider a "track" to be a media object which can have multiple parallel subtracks of various types, typically sound and image frames (not widespread, but possible, would be physical object control, like puppets moving, fountains shooting, pipes of a street organ blowing, light spots glowing, or, heh, odor spraying ;) ). The player then goes and "renders" the data from the tracks. The data themselves would either allow random access, because they come as a fixed object from some storage like the filesystem or a full database, or from some deterministic data generator; or the data would not allow random access, because they are generated non-deterministically, e.g. from sensors in the physical world (like a microphone or camera), without any buffering.

So with that abstract thinking, a simple static slide with some timeout is the same as a short video showing only the same image. And a simple static slide with no timeout is the same as a video livestream showing only the same image. And thus "Stop" and "Pause", with their different concepts, should at least by design apply the same way.

Thinking further, slides in a presentation show can also have a let's-call-it sub-slideshow, where a slide can reach several states by items appearing, changing or disappearing (and that is just considering linearly organized shows :) ). Once we get there and try to create model concepts for the needed new MPRIS interfaces, perhaps the mapping proposed right now has to be rethought indeed. But for what I drafted some time ago, mapping a single (main) slide, which is usually shown for some minutes, to a track, which is usually some 3-minute pop song, should for now work out with what exists in MPRIS.

A multi-hierarchy track notation might also be interesting for non-3-minute-pop-song tracks. Think of movies separated into story chapters, or classical Western music (operas, symphonies) being composed of units of units. So the same structuring as known e.g. from books might be useful to have, to allow navigation using the same interaction patterns where sane.
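
The nesting of units of units discussed above can be sketched as a recursive data structure. Nothing like this exists in MPRIS 2; the class name, fields, and example durations are all illustrative assumptions:

```python
# Hypothetical sketch of a multi-hierarchy track structure: a unit is either
# a leaf track with its own length, or is composed of sub-units
# (opera -> acts -> scenes, movie -> chapters, book -> chapters -> sections).

from dataclasses import dataclass, field

@dataclass
class Unit:
    title: str
    length_us: int = 0                       # leaf length in microseconds
    children: list = field(default_factory=list)

    def total_length_us(self) -> int:
        """Length of a leaf, or the sum over all sub-units."""
        if not self.children:
            return self.length_us
        return sum(child.total_length_us() for child in self.children)

opera = Unit("Opera", children=[
    Unit("Act I", children=[Unit("Scene 1", 600_000_000),
                            Unit("Scene 2", 900_000_000)]),
    Unit("Act II", children=[Unit("Scene 1", 1_200_000_000)]),
])
print(opera.total_length_us())  # → 2700000000
```

Navigation actions like Next/Previous could then take a hierarchy level as a parameter, reusing the same interaction pattern at every depth.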