We now always emit BeginClip/EndClip if the tile is solid and the blend
mode is not the default.
Also fix missing entry in pipeline layout (affects Vulkan but not Metal).
This patch switches to a variable size encoding of draw objects.
In addition to the CPU-side scene encoding, it changes the representation of intermediate per-draw-object state from the `Annotated` struct to a variable-size "info" encoding. In addition, the bounding boxes are moved to a separate array (for a more "structure of arrays" approach). Data that's unchanged from the scene encoding is not copied. Rather, downstream stages can access the data from the scene buffer (reducing allocation and copying).
Prefix sums, computed in `DrawMonoid`, track the offsets of both scene and intermediate data. The tags for the CPU-side encoding have been split into their own stream (again a change from AoS to SoA style).
This is not necessarily the final form. There's some stuff (including at least one piet-gpu-derive type) that can be deleted. In addition, the linewidth field should probably move from the info to path-specific. Also, the 1:1 correspondence between draw object and path has not yet been broken.
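Roughly, the monoid looks like this (a CPU-side sketch; field names are assumptions, not the exact piet-gpu definitions):

    // Per-object counts/sizes; an exclusive prefix sum of these gives
    // each draw object its offsets into the scene and info streams.
    #[derive(Clone, Copy, Default)]
    struct DrawMonoid {
        path_ix: u32,      // path objects preceding this one
        clip_ix: u32,      // clips preceding this one
        scene_offset: u32, // bytes of preceding scene data
        info_offset: u32,  // bytes of preceding info data
    }

    impl DrawMonoid {
        fn combine(self, other: DrawMonoid) -> DrawMonoid {
            DrawMonoid {
                path_ix: self.path_ix + other.path_ix,
                clip_ix: self.clip_ix + other.clip_ix,
                scene_offset: self.scene_offset + other.scene_offset,
                info_offset: self.info_offset + other.info_offset,
            }
        }
    }

    // Exclusive scan over the per-tag monoids (the GPU does this in
    // parallel; this is the sequential reference).
    fn draw_prefix_sum(monoids: &[DrawMonoid]) -> Vec<DrawMonoid> {
        let mut acc = DrawMonoid::default();
        monoids
            .iter()
            .map(|m| {
                let out = acc;
                acc = acc.combine(*m);
                out
            })
            .collect()
    }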
Closes #152
This just runs ninja in piet-gpu/shaders on a Windows machine, so the
translated shaders match the existing pipeline.
At some point, we'll rework this to reduce friction.
* Add blend and composition mode enums to API (sketched below)
* Mirror these in the shaders
* Add new public blend function to PietGpuRenderContext that mirrors clip
* Plumb the modes through the pipeline from scene to kernel4
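A sketch of the shape of these enums (variants and values are illustrative; whatever values are chosen, the shader-side constants must mirror them exactly):

    #[derive(Clone, Copy, PartialEq, Eq)]
    #[repr(u32)]
    pub enum BlendMode {
        Normal = 0,
        Multiply = 1,
        Screen = 2,
        // ... remaining separable and non-separable modes
    }

    #[derive(Clone, Copy, PartialEq, Eq)]
    #[repr(u32)]
    pub enum CompositionMode {
        SrcOver = 0,
        SrcAtop = 1,
        // ... remaining Porter-Duff operators
    }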
This PR reworks the clip implementation. The highlight is that clip bounding box accounting is now done on GPU rather than CPU. The clip mask is also rasterized on EndClip rather than BeginClip, which decreases memory traffic needed for the clip stack.
This is a pretty good working state, but not all cleanup has been applied. An important next step is to remove the CPU clip accounting (it is computed and encoded, but that result is not used). Another step is to remove the Annotated structure entirely.
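For intuition, the accounting being moved is the usual clip-stack bookkeeping; a minimal CPU-side sketch (names assumed):

    // The active clip bbox is the intersection of all open clips.
    #[derive(Clone, Copy)]
    struct BBox { x0: f32, y0: f32, x1: f32, y1: f32 }

    impl BBox {
        fn intersect(self, o: BBox) -> BBox {
            // May produce an empty box (x1 <= x0), meaning fully clipped.
            BBox {
                x0: self.x0.max(o.x0),
                y0: self.y0.max(o.y0),
                x1: self.x1.min(o.x1),
                y1: self.y1.min(o.y1),
            }
        }
    }

    struct ClipStack(Vec<BBox>);

    impl ClipStack {
        fn begin_clip(&mut self, bbox: BBox) {
            let top = self.0.last().copied().unwrap_or(BBox {
                x0: f32::NEG_INFINITY,
                y0: f32::NEG_INFINITY,
                x1: f32::INFINITY,
                y1: f32::INFINITY,
            });
            self.0.push(top.intersect(bbox));
        }

        fn end_clip(&mut self) {
            self.0.pop();
        }
    }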
Fixes #88. Also relevant to #119.
Also updates comment.
We know the implementation is incomplete and needs refinement, but it
seems useful to commit as a starting point for further work.
This exposes interfaces to render glyphs into a texture atlas. The main changes are:
* Methods to plumb raw Metal GPU resources (device, texture, etc) into piet-gpu-hal objects.
* A new glyph_render API specialized to rendering glyphs. This is basically the same as just painting to a canvas, but will allow better caching (and has more direct access to fonts, bypassing the Piet font type which is underdeveloped).
* Ability to render to A8 target in addition to RGBA.
WIP; there are some rough edges, not least of which is that the image format changes are only on Mac and cause compile errors elsewhere.
Make max workgroup size 256 and respect LG_WG_FACTOR.
Because the monoid scans only support a height of 2, this will reduce
the maximum scene complexity we can render. But it also increases
compatibility. Supporting larger scans is a TODO.
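For a rough sense of the limit (assuming one element per invocation and LG_WG_FACTOR = 0): a height-2 scan produces one partial per workgroup at the first level, then scans those partials in a single workgroup at the second, so capacity is quadratic in workgroup size.

    const WG_SIZE: usize = 256;
    const MAX_SCAN_ELEMENTS: usize = WG_SIZE * WG_SIZE; // 65,536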
Fix incorrect workgroup sizes, and change strategy for assigning binding
numbers; ultimately we should get correct values for those from shader
compilation, but this works for now.
This is one of the stages in the new element pipeline. It's a simple
one, just a prefix sum of a couple of counts, and some of it will
probably get merged with a downstream stage, but we'll do it separately
for now for convenience.
This patch also contains an update to Vulkan tools 1.2.198, which
accounts for the large diff of translated shaders.
This patch contains the core of the path stream processing, though some
integration bits are missing. The core logic is tested, though
combinations of path types, transforms, and line widths are not (yet).
Progress towards #119
There's a bit of reorganizing as well. Shader stages are made available
from piet-gpu to the test rig, and config is now a proper structure
(marshaled with bytemuck).
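A sketch of what a bytemuck-marshaled config looks like (fields here are illustrative, not the exact Config):

    use bytemuck::{Pod, Zeroable};

    // Plain-old-data layout, so the struct can be viewed as bytes for
    // GPU upload without any hand-written serialization.
    #[repr(C)]
    #[derive(Clone, Copy, Pod, Zeroable)]
    struct Config {
        n_elements: u32,
        width_in_tiles: u32,
        height_in_tiles: u32,
        tile_alloc_offset: u32,
    }

    fn config_bytes(config: &Config) -> &[u8] {
        bytemuck::bytes_of(config)
    }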
This commit just has the transform stage, which is a simple monoid scan
of affine transforms.
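The monoid operation is affine composition, with the identity transform as the unit; a reference sketch:

    // Affine transform as [a, b, c, d, e, f], mapping (x, y) to
    // (a*x + c*y + e, b*x + d*y + f).
    #[derive(Clone, Copy)]
    struct Affine([f32; 6]);

    impl Affine {
        const IDENTITY: Affine = Affine([1.0, 0.0, 0.0, 1.0, 0.0, 0.0]);

        // self.combine(other): apply `other`, then `self`.
        fn combine(self, other: Affine) -> Affine {
            let a = self.0;
            let b = other.0;
            Affine([
                a[0] * b[0] + a[2] * b[1],
                a[1] * b[0] + a[3] * b[1],
                a[0] * b[2] + a[2] * b[3],
                a[1] * b[2] + a[3] * b[3],
                a[0] * b[4] + a[2] * b[5] + a[4],
                a[1] * b[4] + a[3] * b[5] + a[5],
            ])
        }
    }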
Progress toward #119
This gets it working on Mac. Also delete the old implementation.
There's also an update to winit 0.25 in here, because it was easier to
roll forward than fix inconsistent Cargo.lock. At some point, we should
systematically update all deps.
Use an array of bind types rather than the previous situation, which was
a choice between buffer counts or a heavier builder pattern.
The main thing this unlocks is distinguishing between readonly and
read/write buffers, which is important for DX12.
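A sketch of the shape of the new API (variant names are close to, but not guaranteed to match, the actual ones):

    // Each binding slot declares its type up front, letting the DX12
    // backend distinguish readonly from read/write storage buffers.
    #[derive(Clone, Copy, PartialEq, Eq)]
    pub enum BindType {
        Buffer,      // read/write storage buffer
        BufReadOnly, // readonly storage buffer
        Uniform,     // uniform/constant buffer
        Image,       // storage image
    }

    // A pipeline is then described by an ordered slice, e.g.
    // &[BindType::BufReadOnly, BindType::Buffer], instead of counts.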
This is WIP, the Metal part hasn't been done, and the old stuff not
deleted.
Part of #125
This was motivated by experiments with the Vulkan memory model. To use
that, we actually need to explicitly enable the relevant feature on
device creation time. That's a lot easier to do now that push_next works
on the structs in that chain. This PR doesn't do that though, it only
upgrades the dependency and cleans up deprecations.
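For reference, enabling the memory model feature would look roughly like this with the upgraded builder API (a sketch, not code in this PR):

    use ash::vk;

    // Chain the feature struct into DeviceCreateInfo via push_next.
    fn device_create_info<'a>(
        queue_infos: &'a [vk::DeviceQueueCreateInfo],
        memory_model: &'a mut vk::PhysicalDeviceVulkanMemoryModelFeatures,
    ) -> vk::DeviceCreateInfoBuilder<'a> {
        vk::DeviceCreateInfo::builder()
            .queue_create_infos(queue_infos)
            .push_next(memory_model)
    }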
The flag read needs acquire semantics. There are a number of ways that
could be expressed, but a generally portable way is to have a barrier
after. However, in the translation to Metal, that barrier needs to be in
uniform control flow. This patch does some workarounds to ensure that.
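In CPU terms, the requirement is an acquire load; a Rust analogy (the shader version is a relaxed read followed by a buffer memory barrier, which is why the barrier placement matters):

    use std::sync::atomic::{AtomicU32, Ordering};

    // Data written before the flag was set must be visible after the
    // flag read, hence acquire semantics.
    fn flag_is_set(flag: &AtomicU32) -> bool {
        flag.load(Ordering::Acquire) != 0
    }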
Reuse submitted command buffers rather than continually allocating them.
This patch also improves the story across the different backends. On
DX12 it was reusing allocators without resetting them, which could be a
leak. And on Metal the reset "fails," so there's always a new alloc.
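The strategy amounts to a per-session free list; a minimal sketch (assuming the caller only releases command buffers whose work has completed):

    // Recycle completed command buffers instead of reallocating each
    // submission. Reset semantics differ per backend (see above), so a
    // recycled buffer must be reset (or discarded) before reuse.
    struct CmdBufPool<C> {
        free: Vec<C>,
    }

    impl<C> CmdBufPool<C> {
        fn acquire(&mut self, alloc: impl FnOnce() -> C) -> C {
            self.free.pop().unwrap_or_else(alloc)
        }

        fn release(&mut self, cmd_buf: C) {
            self.free.push(cmd_buf);
        }
    }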
This patch gets rid of warnings and runs cargo fmt.
A lot of the warnings were unused items (especially in DX12 land). At
some point we might want to bring some of that back, at which point it
might be useful to refer to what was deleted in this commit.
Pipeline the CPU and GPU work so that two frames can be in flight at
once.
This dramatically improves performance, especially on Android. Note
that I've also changed the default configuration to be 3 frames in
flight and FIFO mode.
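Schematically, the render loop waits on the fence from N frames ago rather than the one just submitted (a sketch; the fence plumbing is abstracted into closures):

    // N frames in flight: a slot's resources are reused only after the
    // fence from that slot's previous submission has signaled, so CPU
    // encoding of newer frames overlaps GPU execution of older ones.
    const FRAMES_IN_FLIGHT: usize = 3;

    fn render_loop(
        mut wait_fence: impl FnMut(usize),
        mut record_and_submit: impl FnMut(usize),
    ) {
        for frame in 0.. {
            let slot = frame % FRAMES_IN_FLIGHT;
            if frame >= FRAMES_IN_FLIGHT {
                wait_fence(slot);
            }
            record_and_submit(slot);
        }
    }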
If there is a command buffer in flight on exit from the winit app, wait
on it so that the resources get destroyed cleanly.
There may be a more aggressive strategy to quick-exit, but this is
probably the most reliable approach and I see it in other code bases.
Make the scene dependent on timing.
This commit patches the HAL to reuse command buffers; this works well on
Vulkan and prevents a leak, but breaks the other back-ends. That will
require a solution, possibly including plumbing up the resource lifetime
responsibilities to the client.
Other things might be hacky as well.
memoryBarrierBuffer is mapped to the threadgroup_barrier function in
Metal, which is a control barrier that must be executed by all threads
(or none). This change establishes that property for the two memory
barriers we have.
While here, remove ENABLE_IMAGE_INDICES completely; it was disabled in
an earlier change.
Signed-off-by: Elias Naur <mail@eliasnaur.com>
Separate out render context upload from renderer creation. Upload ramps
to GPU buffer. Encode gradients to scene description. Fix a number of
bugs in uploading and processing.
This renders gradients in a test image, but has some shortcomings. For
one, staging buffers need to be used for a couple of things (they're
just host mapped for now). Also, the interaction between sRGB and
premultiplied alpha isn't quite right. The size of the gradient ramp
buffer is fixed and should be dynamic.
And of course there's always more optimization to be done, including
making the upload of gradient ramps more incremental, and probably
hashing of the stops instead of the processed ramps.
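The ramp baking itself is straightforward resampling of the stops; a sketch (assumes stops are sorted by offset in [0, 1] and n >= 2):

    struct Stop {
        offset: f32,
        rgba: [f32; 4],
    }

    // Bake gradient stops into a fixed-size ramp for GPU upload.
    fn make_ramp(stops: &[Stop], n: usize) -> Vec<[f32; 4]> {
        (0..n)
            .map(|i| {
                let t = i as f32 / (n - 1) as f32;
                match stops.iter().position(|s| s.offset >= t) {
                    None => stops.last().unwrap().rgba,
                    Some(0) => stops[0].rgba,
                    Some(j) => {
                        let (a, b) = (&stops[j - 1], &stops[j]);
                        let u = (t - a.offset) / (b.offset - a.offset).max(1e-9);
                        let mut c = [0.0f32; 4];
                        for k in 0..4 {
                            c[k] = a.rgba[k] + u * (b.rgba[k] - a.rgba[k]);
                        }
                        c
                    }
                }
            })
            .collect()
    }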
Don't recompute the parameters from quadratic subdivision, but rather
retain them across the two phases (summing the subdivision estimate, and
generating the subdivisions). The motivation for this is that the values
were subtly different (differing by 1 or 2 least significant bits) across
the two phases. It *might* also be faster depending on ALU/memory
relative performance.
Fixes #107
WIP. Most of the GPU-side work should be done (though it's not tested
end-to-end and it's certainly possible I missed something), but still
needs work on the encoding side.
Move types into the toplevel and hide implementation details. Remove
deref of hub CmdBuf to mux. Restrict public visibility of internals.
Most items have some docs, though improvements are still possible. In
particular, there should be detailed safety info.
Add workgroup size to dispatch call (needed by Metal). Change all fence
references to mutable for consistency.
Move backend traits to a separate file (move them out of the toplevel
namespace in preparation for the hub types going there, to make the
public API nicer).
Add a method and macro for automatically choosing shader code, and
change collatz example to generate all 3 kinds on build.
Make the hub abstraction connect to the mux, rather than directly to the
Vulkan back-end.
As of this commit, both command line and winit examples work (on
Vulkan). In theory it should be possible to get them working on DX12 as
well by translating the shader code, but there's a lot that can go
wrong.
This commit also contains a bunch of changes to mux to make conditional
compilation of match arms work, and new methods to support swapchain.
Adds a new "mux" module which can have multiple backends. As of this
commit, it's not wired up at all, but the functionality should be
reasonably complete.
Minor tweaks to the backend trait to accommodate this, mostly changing
Fence and Semaphore to references so they don't need to be Copy.
Part of the work toward #95
Add a method to create a buffer with initial content, which requires
staging buffers under the hood.
This patch also changes the lower-level (Vulkan) interface to be closer
to the raw Vulkan call.
Test whether the GPU supports subgroups (including size control) and
memory model.
This patch does all the ceremony needed for runtime query, including
testing the Vulkan version and only probing the extensions when
available. Thus, it should work fine on older devices (not yet tested).
The reporting of capabilities follows Vulkan concepts, but is not
particularly Vulkan-specific.
The compute shaders have a check for the successful completion of their
preceding stage. However, consider a shader execution path like the
following:
    void main() {
        if (mem_error != NO_ERROR) {
            return;
        }
        ...
        malloc(...);
        ...
        barrier();
        ...
    }
and a shader execution that fails to allocate memory, thereby setting
mem_error to ERR_MALLOC_FAILED in malloc before reaching the barrier. If
another shader execution then begins, its mem_error check will make it
return early and never reach the barrier.
All GPU APIs require (dynamically) uniform control flow for barriers,
and the above case may lead to GPU hangs in practice.
Fix this issue by replacing the early exits with careful checks that
don't interrupt barrier control flow.
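The shape of the fix, modeled in Rust (invocations predicate their work on the error check instead of returning early, so every invocation reaches the barrier):

    fn shader_body(had_mem_error: bool, barrier: impl Fn()) {
        let ok = !had_mem_error;
        if ok {
            // ... allocate and write ...
        }
        barrier(); // reached by every invocation: uniform control flow
        if ok {
            // ... work on the far side of the barrier ...
        }
    }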
Unfortunately, it's harder to prove the soundness of the new checks, so
this change also clears dynamic memory ranges in MEM_DEBUG mode when
memory is exhausted. The result is that accessing memory after
exhaustion triggers an error.
Signed-off-by: Elias Naur <mail@eliasnaur.com>
Don't run extensions unless they're available. This includes querying
for descriptor indexing, and running one of two versions of kernel4
depending on whether it's enabled.
Part of the support needed for #78
The BeginClip and EndClip bounding boxes are absolute and must pairwise
match. I mistakenly modified the BeginClip bounding box for stroked
clips.
Signed-off-by: Elias Naur <mail@eliasnaur.com>
Adds an example binary that can be run with `cargo apk`.
One thing that will still need manual tuning (for now) is the size of
the canvas. A good follow-up is to sense that from the window size.
coarse.comp knows the maximum stack depth, and can pre-allocate scratch
space for kernel4.comp. Kernel4 no longer contains allocations nor
control barriers.
The invocation local blend stack is gone as well; it didn't seem to make
any difference in performance to always use global memory for pushing
and popping.
Signed-off-by: Elias Naur <mail@eliasnaur.com>
See http://ssp.impulsetrain.com/gamma-premult.html for a description of
the format.
Pre-multiplied alpha only matters for translucent objects; draw a few
such shapes in the test render.
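As I read the format, multiplication by alpha happens in linear space while storage stays gamma-encoded; a sketch of the conversion:

    fn srgb_to_linear(c: f32) -> f32 {
        if c <= 0.04045 { c / 12.92 } else { ((c + 0.055) / 1.055).powf(2.4) }
    }

    fn linear_to_srgb(c: f32) -> f32 {
        if c <= 0.0031308 { 12.92 * c } else { 1.055 * c.powf(1.0 / 2.4) - 0.055 }
    }

    // Premultiply in linear space, then re-encode the result.
    fn premultiply(rgba: [f32; 4]) -> [f32; 4] {
        let a = rgba[3];
        let mut out = [0.0f32; 4];
        for i in 0..3 {
            out[i] = linear_to_srgb(srgb_to_linear(rgba[i]) * a);
        }
        out[3] = a;
        out
    }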
Signed-off-by: Elias Naur <mail@eliasnaur.com>
Reclaims the space wasted by splitting fill mode commands from fill
commands.
For example, a CmdStroke + CmdColor pair uses an extra tag word compared
to the former combined CmdStroke. This change shaves off that one word.
In the future, we can pack several command tags into one tag word,
saving even more space.
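The packing is the usual bitfield trick; for example, with 4-bit tags, eight tags share a word (illustrative widths, not a committed layout):

    // Pack up to eight 4-bit command tags into one 32-bit tag word.
    fn pack_tag_word(tags: &[u8]) -> u32 {
        tags.iter().take(8).enumerate().fold(0u32, |word, (i, &tag)| {
            debug_assert!(tag < 16, "tag must fit in 4 bits");
            word | ((tag as u32) << (4 * i))
        })
    }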
Fixes #66
Signed-off-by: Elias Naur <mail@eliasnaur.com>
This change completes general support for stroked fills for clips and
images.
Annotated_size increases from 28 to 32 because of the linewidth field
added to AnnoImage. Stroked image fills are presumably rare, and if
memory pressure turns out to be a bottleneck, we could replace the
linewidth field with separate AnnoLinewidth elements.
Updates #70
Signed-off-by: Elias Naur <mail@eliasnaur.com>
Before this change, every command (FillColor, FillImage, BeginClip)
had (or would need) stroke, (non-zero) fill, and solid variants.
This change adds a command for each fill mode and its parameters,
reducing code duplication and adding support for stroked FillImage and
BeginClip as a side effect.
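Schematically, the ptcl stream goes from per-variant draw commands to a fill-mode command followed by the draw command (a sketch of the idea, not the exact command set):

    // Fill mode is set by its own command and applies to the draw
    // command(s) that follow, instead of every draw command needing
    // stroke/fill/solid variants.
    enum Cmd {
        // fill mode commands (with their parameters)
        Fill,                       // non-zero fill
        Stroke { line_width: f32 },
        Solid,
        // draw commands, valid under any fill mode
        Color { rgba: u32 },
        Image { index: u32 },
        BeginClip,
        EndClip,
    }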
The rest of the pipeline doesn't yet support stroked FillImage and
BeginClip. That's a follow-up change.
Since each command includes a tag, this change adds an extra word for
each fill and stroke. That waste is also addressed in a follow-up.
Updates #70
Signed-off-by: Elias Naur <mail@eliasnaur.com>