''This is the developer wiki and this page is outdated. The manual is here: https://userbase.kde.org/Krita/Manual''
Generator layers are layers that generate channel data from parameters, instead of raw painted data. They are similar to filter layers, except that they do not use any layer/channel input (though they can be blended like normal layers). They cannot be painted on directly.
Filters, conversely, either perform operations on the input pixels that basic blending modes cannot express, or are "convolution-like": the output for a particular pixel depends on input pixels without a one-to-one correspondence (usually a many-to-one relation, which is effectively the same as many-to-many, though some, such as pixelize, may be one-to-many).
Produces a solid color. Ideally, should be able to specify the color in any color space.
No parameters other than the color; use blending modes and layer opacity as needed.
Produces a gradient, which may include alpha as well as color values. Ideal would be to specify type (linear, radial, conic, etc.) and relevant control points.
(Maybe this should be implemented with Flake shapes instead?)
Generates purely random data in a single channel (probably i16). Use blending modes and layer opacity to control how noise interacts with other layers.
Configuration, beyond the random seed, would consist of choosing the noise-generating algorithm (e.g. uniform vs. Gaussian; the latter tends to produce "higher contrast" noise). However, we should not add any algorithm whose output does not differ significantly from an existing generator plus a simple filter (e.g. brightness/contrast curves).
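The uniform-vs-Gaussian choice above can be sketched as follows. This is an illustrative Python sketch, not Krita code; the function name, parameters, and the choice of a 16-bit value range are assumptions for the example.

```python
import random

def generate_noise(width, height, seed=0, gaussian=False):
    """Fill a single channel with random values in [0, 65535] (i16-style).

    'gaussian' switches from uniform to normally distributed noise, which
    clusters around mid-gray and reads as higher contrast once stretched.
    The mean/deviation values are arbitrary choices for this sketch.
    """
    rng = random.Random(seed)
    if gaussian:
        # Clamp a normal distribution centred on mid-range.
        sample = lambda: min(65535, max(0, int(rng.gauss(32768, 8192))))
    else:
        sample = lambda: rng.randint(0, 65535)
    return [[sample() for _ in range(width)] for _ in range(height)]
```

Blending mode and layer opacity then control how the raw channel interacts with the layers below, as described above.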
Generates Perlin noise, according to requested frequency and octaves. From talking with Cyrille, it seems it would be nice for the filter from 1.6 to reappear here (i.e. as a generator, and not as a filter), as it doesn't need to be a filter (remembering that a "filter" does something beyond simple layer compositing with the layer below).
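The frequency/octaves idea can be illustrated with a small sketch. Note this uses value noise (interpolated random lattice values) rather than true Perlin gradient noise, purely to keep the example short; all names here are illustrative, not Krita API.

```python
import math
import random

def lattice_value(x, y, seed=0):
    # Deterministic pseudo-random value in [0, 1) at an integer lattice point.
    rng = random.Random((x * 73856093) ^ (y * 19349663) ^ seed)
    return rng.random()

def smooth_noise(x, y, seed=0):
    # Bilinear interpolation between lattice values with smoothstep easing.
    x0, y0 = math.floor(x), math.floor(y)
    fx, fy = x - x0, y - y0
    fx = fx * fx * (3 - 2 * fx)
    fy = fy * fy * (3 - 2 * fy)
    v00 = lattice_value(x0, y0, seed)
    v10 = lattice_value(x0 + 1, y0, seed)
    v01 = lattice_value(x0, y0 + 1, seed)
    v11 = lattice_value(x0 + 1, y0 + 1, seed)
    return (v00 * (1 - fx) + v10 * fx) * (1 - fy) + \
           (v01 * (1 - fx) + v11 * fx) * fy

def fractal_noise(x, y, frequency=1.0, octaves=4, seed=0):
    # Sum octaves, each at double the frequency and half the amplitude,
    # then normalize back into [0, 1].
    total, amplitude, norm = 0.0, 1.0, 0.0
    for o in range(octaves):
        scale = frequency * 2 ** o
        total += amplitude * smooth_noise(x * scale, y * scale, seed + o)
        norm += amplitude
        amplitude *= 0.5
    return total / norm
```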
Generates cloud-like noise somewhat similar to Perlin noise, using frequency and three parameters that control the "look":
Basically, this works as follows: an initial grid (the first octave) is laid down. Successive midpoints are then interpolated, adding random jitter in an amount equal to the "octave strength", which is the jitter times the damping raised to the power of (the octave number minus two). The initial value for each midpoint (before jitter is added) is a weighted average of the values of the two points being interpolated, modified by the phase.
Low values for damping result in smoother output, while high values give more effect to later octaves. (An interesting effect is to use a low jitter and damping > 1, which gives something resembling halftoned clouds.) High values for phase (especially around 1.5 and above) tend to give a "stucco", or "quantized" look to the result.
Mwoehlke has code for this as a stand-alone Qt sample program. All that is needed is to port this to a Krita framework.
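A one-dimensional sketch of the midpoint scheme described above (the Qt sample program itself is not available here). The octave-strength formula follows the text; exactly how "phase" modifies the weighted average is not specified, so this version's phase-powered interpolation weight is an assumption for illustration.

```python
import random

def plasma_line(jitter=0.5, damping=0.5, phase=1.0, octaves=8, seed=0):
    """Midpoint interpolation with jitter: each pass (octave) inserts a
    midpoint between every pair of existing points. The base value is a
    phase-weighted average of the two neighbours, plus random jitter
    scaled by jitter * damping**(octave - 2).
    """
    rng = random.Random(seed)
    points = [rng.random(), rng.random()]  # the first octave: a 2-point grid
    for octave in range(2, octaves + 1):
        strength = jitter * damping ** (octave - 2)
        refined = []
        for a, b in zip(points, points[1:]):
            w = 0.5 ** phase  # phase-modified interpolation weight (assumed)
            mid = a * w + b * (1 - w) + rng.uniform(-strength, strength)
            refined.extend([a, mid])
        refined.append(points[-1])
        points = refined
    return points
```

With damping > 1 the later octaves dominate (the jitter grows each pass), which matches the "halftoned clouds" observation above.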
Basically, a random walk, with density-based color mapping similar to the flames. Parameters could be initial seed, duration, stride length, stride length bias function, number of starting points, gravity (linear, towards-center) and edge behavior (none, wrap, reflect). The result is vaguely similar to Plasma.
Mwoehlke has code for this as a stand-alone Qt sample program.
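The density-accumulation core of such a walk can be sketched as below. This simplifies the parameter list above: one starting point, fixed one-pixel stride, no gravity, and "wrap" edge behavior; all names are illustrative.

```python
import random

def brownian_density(width, height, steps=10000, seed=0):
    """Accumulate visit counts from a random walk. A gradient map (or the
    flame-style color mapping) would later colorize this density buffer."""
    rng = random.Random(seed)
    density = [[0] * width for _ in range(height)]
    x, y = width // 2, height // 2  # single starting point at the centre
    for _ in range(steps):
        density[y][x] += 1
        dx, dy = rng.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
        x = (x + dx) % width   # 'wrap' edge behaviour
        y = (y + dy) % height
    return density
```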
A basic wave-reflection generator. Parameters include wavelength, number of initial sources, and minimum/maximum age.
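A minimal sketch of the interference part, assuming each source radiates a sinusoidal ripple and contributions simply sum; the age limits and reflection behavior mentioned above are omitted, and all names are illustrative.

```python
import math
import random

def wave_field(width, height, wavelength=8.0, sources=3, seed=0):
    """Sum sinusoidal ripples radiating from randomly placed sources,
    normalized so the output stays in [-1, 1]."""
    rng = random.Random(seed)
    pts = [(rng.uniform(0, width), rng.uniform(0, height))
           for _ in range(sources)]
    field = []
    for y in range(height):
        row = []
        for x in range(width):
            # Phase at each pixel depends on its distance to each source.
            v = sum(math.sin(2 * math.pi *
                             math.hypot(x - sx, y - sy) / wavelength)
                    for sx, sy in pts)
            row.append(v / sources)
        field.append(row)
    return field
```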
Similar to Brownian, but without history. The seed is one or more points of super-density. Each iteration performs a number of Brownian iterations proportional to density for all pixels with a density above a certain threshold. In other words, this simulates diffusion of a liquid/gas inside of another liquid/gas.
All flames have parameters controlling the camera (i.e. zoom, pan, rotation).
A basic flame using Peter de Jong's map, as implemented in fyre:
x' = sin(a * y) - cos(b * x)
y' = sin(c * x) - cos(d * y)
Parameters are of course the constants a, b, c, and d. See below for color mapping. There are also options for the transient case (initial point, shape to map, number of iterations to run). Smoothing is achieved by a process that essentially accumulates "antialiased points", which gives a result similar to oversampling with less time and memory penalty.
Mwoehlke has code for this as a stand-alone Qt sample program for the non-transient case. It has been written without any consultation of the fyre code (just observation and knowledge of the above formula).
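The non-transient case amounts to iterating the map above and histogramming visits, as in this sketch (a plain Python illustration, not the sample program; buffer size, initial point, and the [-2, 2] bound, which follows from sin - cos, are choices made for the example):

```python
import math

def de_jong_density(a, b, c, d, width=64, height=64, iterations=20000):
    """Iterate the Peter de Jong map and histogram point visits; the
    density buffer is what the color-mapping stage then consumes."""
    density = [[0] * width for _ in range(height)]
    x = y = 0.1  # arbitrary start; the attractor is reached almost immediately
    for _ in range(iterations):
        x, y = (math.sin(a * y) - math.cos(b * x),
                math.sin(c * x) - math.cos(d * y))
        # Both coordinates lie in [-2, 2]; map them onto the pixel grid.
        px = int((x + 2.0) / 4.0 * (width - 1))
        py = int((y + 2.0) / 4.0 * (height - 1))
        density[py][px] += 1
    return density
```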
Another flame similar to de Jong. (In fact, I don't know which algorithm it uses; it might even be the same as de Jong's.) It may be desirable to combine this with de Jong and include other algorithms.
The "traditional" flame as in Apophysis and GIMP, and as demonstrated in the X screensaver by the same name. Ideally we should be able to import saved settings from some popular IFS programs.
Produces a Julia set for a given point. Depending on how interesting that is, possibly also for the points in a Brownian walk.
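The per-pixel core is the standard escape-time iteration of z -> z² + c, where c is the generator's parameter point; the iteration limit here is an arbitrary choice for the sketch.

```python
def julia_escape(zx, zy, cx, cy, max_iter=64):
    """Escape-time count for z -> z^2 + c starting at (zx, zy), with the
    generator's parameter point c = (cx, cy). Points that never exceed
    |z| = 2 within max_iter iterations are treated as inside the set."""
    for i in range(max_iter):
        if zx * zx + zy * zy > 4.0:
            return i
        zx, zy = zx * zx - zy * zy + cx, 2 * zx * zy + cy
    return max_iter
```

The returned count would be normalized and fed through the gradient map filter discussed below (or mapped directly to a channel).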
There are probably a number of filters (especially if we start borrowing heavily from GIMP) that could be generators instead. For example, lens flare, GIMP's "supernova". We should also investigate if it is necessary for noise to be a filter, or if we can assign a sane default blending mode (overlay?) that would obviate the need for it as a filter. There are UI considerations here, though (and backward compatibility might be an issue).
To do useful things with some of the generators, we definitely want a gradient map filter. This would take a specified input channel (Gray, Luma, Red, Hue, etc) and map the values (normalizing if necessary) onto a gradient. For example, mapping gray to the gradient from red to green to blue, black would become red, medium gray would become green, and white would become blue.
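The red-green-blue example above can be sketched as a simple stop-based interpolation (the stop representation and function name are choices made for this illustration, not a proposed API):

```python
def gradient_map(value, stops):
    """Map a normalized channel value in [0, 1] onto a gradient given as a
    list of (position, (r, g, b)) stops, by linear interpolation between
    the two surrounding stops."""
    stops = sorted(stops)
    if value <= stops[0][0]:
        return stops[0][1]
    for (p0, c0), (p1, c1) in zip(stops, stops[1:]):
        if value <= p1:
            t = (value - p0) / (p1 - p0)
            return tuple(round(a + (b - a) * t) for a, b in zip(c0, c1))
    return stops[-1][1]

# The example from the text: black -> red, medium gray -> green, white -> blue.
rgb_stops = [(0.0, (255, 0, 0)), (0.5, (0, 255, 0)), (1.0, (0, 0, 255))]
```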
Noise, fluids, and non-transient attractors are colorized independently of the rendering process (i.e. the part that takes longest). They can be colored by gradient mapping, or even by using the solid generator with appropriate blending modes. It would be best to handle exposure by adding a brightness/contrast curves filter into the stack, rather than handling it within the generator (less code duplication, better separation of function). Since the colorizing process depends only on the generator's output, these layers can then be recolored in near real-time, without re-rendering the generated layer.
Coming Eventually! (Possibly on its own page...)
GA output (gray + alpha) might work, but mapping becomes "interesting" in that we need to map alpha to exposure, with support for overexposure, while mapping gray to a color via a normal gradient map. A gradient map that accepts two input channels to map onto a two-dimensional mapping might not be a bad idea, especially as there may be other uses for 2d gradients.