Merge pull request #136 from linebender/path_element

Path stream processing
Raph Levien 2021-12-02 13:36:34 -08:00 committed by GitHub
commit 50153a7797
48 changed files with 3436 additions and 183 deletions

doc/pathseg.md Normal file
View file

@ -0,0 +1,65 @@
# Path segment encoding
The new (November 2021) element processing pipeline takes a particularly clever approach to path segment encoding; this document explains it.
By way of motivation, in the old scene encoding, all elements take a fixed amount of space, currently 36 bytes, but that's at risk of expanding if a new element type requires even more space. The new design is based on stream compaction. The input is separated into multiple streams, so in particular path segment data gets its own stream. Further, that stream can be packed.
As explained in [#119], the path stream is separated into one stream for tag bytes, and another stream for the path segment data.
## Prefix sum for unpacking
The key to this encoding is a prefix sum over the size of each element's payload. The payload size can be readily derived from the tag byte itself (see below for details on this), then an exclusive prefix sum gives the start offset of the packed encoding for each element. The combination of the tag byte and that offset gives you everything needed to unpack a segment.
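To make this concrete, here is a minimal CPU-side sketch in Rust of the offset computation (the function and names are illustrative, not code from this repo):

```rust
/// Illustrative only: compute packed-stream start offsets from payload sizes.
/// `sizes[i]` is element i's payload size in u32 units, derived from its tag byte.
fn unpack_offsets(sizes: &[u32]) -> Vec<u32> {
    let mut offsets = Vec::with_capacity(sizes.len());
    let mut total = 0u32;
    for &size in sizes {
        offsets.push(total); // exclusive: element i's offset excludes its own size
        total += size;
    }
    offsets
}
// e.g. sizes [1, 2, 3] (lineto, quadto, curveto in the 16 bit encoding)
// give offsets [0, 1, 3]; the offset plus the tag byte suffice to unpack a segment.
```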
## Tag byte encoding
Bits 0-1 indicate the type of path segment: 1 is line, 2 is quadratic bezier, 3 is cubic bezier.
Bit 2 indicates whether this is the last segment in a subpath; see below.
Bit 3 indicates whether the coordinates are i16 or f32.
Thus, values of 1-7 indicate the following combinations. In the 16 bit encoding, each point occupies a single u32 (a pair of i16 coordinates), so `size` counts both points and u32 units:
```
value op size
1 lineto 1
2 quadto 2
3 curveto 3
5 lineto + end 2
6 quadto + end 3
7 curveto + end 4
```
Values of 9-15 are the same but with a 32 bit encoding, so double `size` to compute the size in u32 units.
A value of 0 indicates no path segment present; it may be a nop, for example padding at the end of the stream to make it an integral number of workgroups, or other bits in the tag byte might indicate a transform, end path, or line width marker (with one bit left for future expansion). Values of 4, 8, and 12 are unused.
In addition to path segments, bits 4-6 are "one hot" encodings of other element types. Bit 4 set (0x10) is a path (encoded after all path segments). Bit 5 set (0x20) is a transform. Bit 6 set (0x40) is a line width setting. Transforms and line widths have their own streams in the encoded scene buffer, so prefix sums of the counts serve as indices into those streams.
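Putting the byte layout together, a minimal Rust sketch of decoding a single tag byte (the `TagInfo` struct and all names here are hypothetical, for illustration only):

```rust
/// Illustrative decoder for one tag byte; struct and names are hypothetical.
struct TagInfo {
    seg_type: u8,       // bits 0-1: 0 = none, 1 = lineto, 2 = quadto, 3 = curveto
    subpath_end: bool,  // bit 2
    is_f32: bool,       // bit 3: false = i16 coordinates, true = f32
    is_path: bool,      // bit 4 (0x10)
    is_transform: bool, // bit 5 (0x20)
    is_linewidth: bool, // bit 6 (0x40)
    payload_words: u32, // payload size in u32 units
}

fn decode_tag(tag: u8) -> TagInfo {
    // `size` from the table above: point count, including the end bit
    let points = u32::from(tag & 3) + u32::from((tag >> 2) & 1);
    TagInfo {
        seg_type: tag & 3,
        subpath_end: tag & 4 != 0,
        is_f32: tag & 8 != 0,
        is_path: tag & 0x10 != 0,
        is_transform: tag & 0x20 != 0,
        is_linewidth: tag & 0x40 != 0,
        // one u32 per point for i16 coordinates, two per point for f32
        payload_words: points << ((tag >> 3) & 1),
    }
}
```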
### End subpath handling
In the previous encoding, every path segment was encoded independently; the segments could be shuffled within a path without affecting the results. However, that encoding failed to take advantage of the fact that subpaths are continuous, meaning that the start point of each segment is equal to the end point of the previous segment. Thus, there was redundancy in the encoding, and more CPU-side work for the encoder.
This encoding fixes that. Bit 2 of the tag byte indicates whether the segment is the last one in the subpath. If it is set, then the size encompasses all the points in the segment. If not, then the size is short by one, which leaves the offset for the next segment pointing at the last point of this one.
There is a relatively straightforward state machine, sketched below, to convert the usual moveto/lineto representation into this one. In short, the point of a moveto is encoded, a moveto or closepath sets the end bit on the previously encoded segment (if any), and the end bit is also set for the last segment in the path. Certain cases, such as a lone moveto, must be avoided.
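Below is a rough Rust sketch of such a state machine, assuming f32 coordinates throughout and treating closepath as only terminating the subpath (types and names are illustrative; this is not the actual encoder):

```rust
// Illustrative encoder state machine; not the actual piet-gpu encoder.
enum PathEl { MoveTo([f32; 2]), LineTo([f32; 2]), Close }

fn encode(els: &[PathEl], tags: &mut Vec<u8>, points: &mut Vec<[f32; 2]>) {
    let mut seg_pending = false; // a segment tag awaiting a possible end bit
    for el in els {
        match el {
            PathEl::MoveTo(p) => {
                if seg_pending {
                    *tags.last_mut().unwrap() |= 4; // end the previous subpath
                }
                // Caution: a lone moveto with no following segment would leave
                // a stray point; the real encoder must avoid that case.
                points.push(*p);
                seg_pending = false;
            }
            PathEl::LineTo(p) => {
                tags.push(1 | 8); // lineto, f32 coordinates
                points.push(*p);
                seg_pending = true;
            }
            PathEl::Close => {
                if seg_pending {
                    *tags.last_mut().unwrap() |= 4;
                }
                seg_pending = false;
            }
        }
    }
    if seg_pending {
        *tags.last_mut().unwrap() |= 4; // end bit for the last segment in the path
    }
}
```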
### Bit magic
The encoding is carefully designed for fast calculation based on bits, in particular to quickly compute a sum of counts based on all four tag bytes in a u32.
To count whether a path segment is present, compute `(tag | (tag >> 1)) & 1`. Thus, the number of path segments in a 4-byte word is `bitCount((tag | (tag >> 1)) & 0x1010101)`. Also note: `((tag & 3) * 7) & 4` counts the same number of bits and might save one instruction given that `tag & 3` can be reused below.
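For example, the same computation in Rust, where `u32::count_ones` plays the role of GLSL's `bitCount` (illustrative sketch):

```rust
// Number of path segments among four tag bytes packed into a u32.
fn segment_count(tag_word: u32) -> u32 {
    ((tag_word | (tag_word >> 1)) & 0x1010101).count_ones()
}
// e.g. tag bytes [1, 0, 7, 2] packed little-endian -> 3 segments
```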
The number of points (i.e. the `size` value in the table above) is `(tag & 3) + ((tag >> 2) & 1)`. The value `(tag >> 3) & 1` is 0 for 16 bit encodings and 1 for 32 bit encodings. Thus, `points + (points & (((tag >> 3) & 1) * 7))` is the number of u32 words (the code below masks with 15 rather than 7; either works, as the per-byte point count is at most 4). All these operations can be performed in parallel on the 4 bytes in a word, justifying the following code:
```glsl
// Per-byte point counts: (tag & 3) plus the end bit (bit 2).
uint point_count = (tag & 0x3030303) + ((tag >> 2) & 0x1010101);
// Double the count in bytes where bit 3 (f32 coordinates) is set.
uint word_count = point_count + (point_count & (((tag >> 3) & 0x1010101) * 15));
// Horizontal sum of the four byte lanes; the total lands in the low byte.
word_count += word_count >> 8;
word_count += word_count >> 16;
word_count &= 0xff;
```
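As a sanity check on the SWAR arithmetic, here is a Rust transliteration of the block above, compared against a straightforward per-byte computation (illustrative code, not from the repo):

```rust
// SWAR version: all four tag bytes processed in parallel within one u32.
fn word_count_swar(tag: u32) -> u32 {
    let point_count = (tag & 0x3030303) + ((tag >> 2) & 0x1010101);
    let mut wc = point_count + (point_count & (((tag >> 3) & 0x1010101) * 15));
    wc += wc >> 8; // horizontal sum of the four byte lanes...
    wc += wc >> 16;
    wc & 0xff // ...leaving the total in the low byte
}

// Scalar reference: decode each byte independently and sum.
fn word_count_scalar(tag: u32) -> u32 {
    tag.to_le_bytes()
        .iter()
        .map(|&b| {
            let points = u32::from(b & 3) + u32::from((b >> 2) & 1);
            points << ((b >> 3) & 1) // double for f32 coordinates (bit 3 set)
        })
        .sum()
}

fn main() {
    // lineto (i16), quadto + end (i16), curveto (f32), nop
    let tag = u32::from_le_bytes([1, 6, 11, 0]);
    assert_eq!(word_count_swar(tag), word_count_scalar(tag)); // both 10
}
```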
One possible optimization to explore is packing multiple tags into a byte by OR'ing together the flags. This would add a small amount of complexity to the interpretation (mostly in pathseg) and increase utilization a bit.
[#119]: https://github.com/linebender/piet-gpu/issues/119

View file

@ -813,8 +813,8 @@ impl Buffer {
     ) -> Result<BufReadGuard<'a>, Error> {
         let offset = match range.start_bound() {
             Bound::Unbounded => 0,
-            Bound::Excluded(&s) => s.try_into()?,
-            Bound::Included(_) => unreachable!(),
+            Bound::Excluded(_) => unreachable!(),
+            Bound::Included(&s) => s.try_into()?,
         };
         let end = match range.end_bound() {
             Bound::Unbounded => self.size(),

Binary file not shown.

Binary file not shown.

View file

@ -0,0 +1,29 @@
// SPDX-License-Identifier: Apache-2.0 OR MIT OR Unlicense
// Clear path bbox to prepare for atomic min/max.
#version 450
#extension GL_GOOGLE_include_directive : enable
#include "mem.h"
#include "setup.h"
#define LG_WG_SIZE 9
#define WG_SIZE (1 << LG_WG_SIZE)
layout(local_size_x = WG_SIZE, local_size_y = 1) in;
layout(binding = 1) readonly buffer ConfigBuf {
Config conf;
};
void main() {
uint ix = gl_GlobalInvocationID.x;
if (ix < conf.n_elements) {
uint out_ix = (conf.bbox_alloc.offset >> 2) + 4 * ix;
memory[out_ix] = 0xffff;
memory[out_ix + 1] = 0xffff;
memory[out_ix + 2] = 0;
memory[out_ix + 3] = 0;
}
}

Binary file not shown.

View file

@ -57,3 +57,24 @@ build gen/transform_leaf.spv: glsl transform_leaf.comp | scene.h tile.h setup.h
build gen/transform_leaf.hlsl: hlsl gen/transform_leaf.spv
build gen/transform_leaf.dxil: dxil gen/transform_leaf.hlsl
build gen/transform_leaf.msl: msl gen/transform_leaf.spv
build gen/pathtag_reduce.spv: glsl pathtag_reduce.comp | pathtag.h setup.h mem.h
build gen/pathtag_reduce.hlsl: hlsl gen/pathtag_reduce.spv
build gen/pathtag_reduce.dxil: dxil gen/pathtag_reduce.hlsl
build gen/pathtag_reduce.msl: msl gen/pathtag_reduce.spv
build gen/pathtag_root.spv: glsl pathtag_scan.comp | pathtag.h
flags = -DROOT
build gen/pathtag_root.hlsl: hlsl gen/pathtag_root.spv
build gen/pathtag_root.dxil: dxil gen/pathtag_root.hlsl
build gen/pathtag_root.msl: msl gen/pathtag_root.spv
build gen/bbox_clear.spv: glsl bbox_clear.comp | setup.h mem.h
build gen/bbox_clear.hlsl: hlsl gen/bbox_clear.spv
build gen/bbox_clear.dxil: dxil gen/bbox_clear.hlsl
build gen/bbox_clear.msl: msl gen/bbox_clear.spv
build gen/pathseg.spv: glsl pathseg.comp | tile.h pathseg.h pathtag.h setup.h mem.h
build gen/pathseg.hlsl: hlsl gen/pathseg.spv
build gen/pathseg.dxil: dxil gen/pathseg.hlsl
build gen/pathseg.msl: msl gen/pathseg.spv

Binary file not shown.

Binary file not shown.

View file

@ -0,0 +1,55 @@
struct Alloc
{
uint offset;
};
struct Config
{
uint n_elements;
uint n_pathseg;
uint width_in_tiles;
uint height_in_tiles;
Alloc tile_alloc;
Alloc bin_alloc;
Alloc ptcl_alloc;
Alloc pathseg_alloc;
Alloc anno_alloc;
Alloc trans_alloc;
Alloc bbox_alloc;
uint n_trans;
uint trans_offset;
uint pathtag_offset;
uint linewidth_offset;
uint pathseg_offset;
};
static const uint3 gl_WorkGroupSize = uint3(512u, 1u, 1u);
ByteAddressBuffer _21 : register(t1);
RWByteAddressBuffer _44 : register(u0);
static uint3 gl_GlobalInvocationID;
struct SPIRV_Cross_Input
{
uint3 gl_GlobalInvocationID : SV_DispatchThreadID;
};
void comp_main()
{
uint ix = gl_GlobalInvocationID.x;
if (ix < _21.Load(0))
{
uint out_ix = (_21.Load(40) >> uint(2)) + (4u * ix);
_44.Store(out_ix * 4 + 8, 65535u);
_44.Store((out_ix + 1u) * 4 + 8, 65535u);
_44.Store((out_ix + 2u) * 4 + 8, 0u);
_44.Store((out_ix + 3u) * 4 + 8, 0u);
}
}
[numthreads(512, 1, 1)]
void main(SPIRV_Cross_Input stage_input)
{
gl_GlobalInvocationID = stage_input.gl_GlobalInvocationID;
comp_main();
}

View file

@ -0,0 +1,57 @@
#include <metal_stdlib>
#include <simd/simd.h>
using namespace metal;
struct Alloc
{
uint offset;
};
struct Config
{
uint n_elements;
uint n_pathseg;
uint width_in_tiles;
uint height_in_tiles;
Alloc tile_alloc;
Alloc bin_alloc;
Alloc ptcl_alloc;
Alloc pathseg_alloc;
Alloc anno_alloc;
Alloc trans_alloc;
Alloc bbox_alloc;
uint n_trans;
uint trans_offset;
uint pathtag_offset;
uint linewidth_offset;
uint pathseg_offset;
};
struct ConfigBuf
{
Config conf;
};
struct Memory
{
uint mem_offset;
uint mem_error;
uint memory[1];
};
constant uint3 gl_WorkGroupSize [[maybe_unused]] = uint3(512u, 1u, 1u);
kernel void main0(device Memory& _44 [[buffer(0)]], const device ConfigBuf& _21 [[buffer(1)]], uint3 gl_GlobalInvocationID [[thread_position_in_grid]])
{
uint ix = gl_GlobalInvocationID.x;
if (ix < _21.conf.n_elements)
{
uint out_ix = (_21.conf.bbox_alloc.offset >> uint(2)) + (4u * ix);
_44.memory[out_ix] = 65535u;
_44.memory[out_ix + 1u] = 65535u;
_44.memory[out_ix + 2u] = 0u;
_44.memory[out_ix + 3u] = 0u;
}
}

Binary file not shown.

Binary file not shown.

View file

@ -0,0 +1,643 @@
struct Alloc
{
uint offset;
};
struct TagMonoid
{
uint trans_ix;
uint linewidth_ix;
uint pathseg_ix;
uint path_ix;
uint pathseg_offset;
};
struct TransformSegRef
{
uint offset;
};
struct TransformSeg
{
float4 mat;
float2 translate;
};
struct PathCubicRef
{
uint offset;
};
struct PathCubic
{
float2 p0;
float2 p1;
float2 p2;
float2 p3;
uint path_ix;
uint trans_ix;
float2 stroke;
};
struct PathSegRef
{
uint offset;
};
struct Monoid
{
float4 bbox;
uint flags;
};
struct Config
{
uint n_elements;
uint n_pathseg;
uint width_in_tiles;
uint height_in_tiles;
Alloc tile_alloc;
Alloc bin_alloc;
Alloc ptcl_alloc;
Alloc pathseg_alloc;
Alloc anno_alloc;
Alloc trans_alloc;
Alloc bbox_alloc;
uint n_trans;
uint trans_offset;
uint pathtag_offset;
uint linewidth_offset;
uint pathseg_offset;
};
static const uint3 gl_WorkGroupSize = uint3(512u, 1u, 1u);
static const TagMonoid _135 = { 0u, 0u, 0u, 0u, 0u };
static const Monoid _567 = { 0.0f.xxxx, 0u };
RWByteAddressBuffer _111 : register(u0);
ByteAddressBuffer _574 : register(t2);
ByteAddressBuffer _639 : register(t1);
ByteAddressBuffer _710 : register(t3);
static uint3 gl_WorkGroupID;
static uint3 gl_LocalInvocationID;
static uint3 gl_GlobalInvocationID;
struct SPIRV_Cross_Input
{
uint3 gl_WorkGroupID : SV_GroupID;
uint3 gl_LocalInvocationID : SV_GroupThreadID;
uint3 gl_GlobalInvocationID : SV_DispatchThreadID;
};
groupshared TagMonoid sh_tag[512];
groupshared Monoid sh_scratch[512];
TagMonoid reduce_tag(uint tag_word)
{
uint point_count = tag_word & 50529027u;
TagMonoid c;
c.pathseg_ix = uint(int(countbits((point_count * 7u) & 67372036u)));
c.linewidth_ix = uint(int(countbits(tag_word & 1077952576u)));
c.path_ix = uint(int(countbits(tag_word & 269488144u)));
c.trans_ix = uint(int(countbits(tag_word & 538976288u)));
uint n_points = point_count + ((tag_word >> uint(2)) & 16843009u);
uint a = n_points + (n_points & (((tag_word >> uint(3)) & 16843009u) * 15u));
a += (a >> uint(8));
a += (a >> uint(16));
c.pathseg_offset = a & 255u;
return c;
}
TagMonoid combine_tag_monoid(TagMonoid a, TagMonoid b)
{
TagMonoid c;
c.trans_ix = a.trans_ix + b.trans_ix;
c.linewidth_ix = a.linewidth_ix + b.linewidth_ix;
c.pathseg_ix = a.pathseg_ix + b.pathseg_ix;
c.path_ix = a.path_ix + b.path_ix;
c.pathseg_offset = a.pathseg_offset + b.pathseg_offset;
return c;
}
TagMonoid tag_monoid_identity()
{
return _135;
}
float2 read_f32_point(uint ix)
{
float x = asfloat(_574.Load(ix * 4 + 0));
float y = asfloat(_574.Load((ix + 1u) * 4 + 0));
return float2(x, y);
}
float2 read_i16_point(uint ix)
{
uint raw = _574.Load(ix * 4 + 0);
float x = float(int(raw << uint(16)) >> 16);
float y = float(int(raw) >> 16);
return float2(x, y);
}
bool touch_mem(Alloc alloc, uint offset)
{
return true;
}
uint read_mem(Alloc alloc, uint offset)
{
Alloc param = alloc;
uint param_1 = offset;
if (!touch_mem(param, param_1))
{
return 0u;
}
uint v = _111.Load(offset * 4 + 8);
return v;
}
TransformSeg TransformSeg_read(Alloc a, TransformSegRef ref)
{
uint ix = ref.offset >> uint(2);
Alloc param = a;
uint param_1 = ix + 0u;
uint raw0 = read_mem(param, param_1);
Alloc param_2 = a;
uint param_3 = ix + 1u;
uint raw1 = read_mem(param_2, param_3);
Alloc param_4 = a;
uint param_5 = ix + 2u;
uint raw2 = read_mem(param_4, param_5);
Alloc param_6 = a;
uint param_7 = ix + 3u;
uint raw3 = read_mem(param_6, param_7);
Alloc param_8 = a;
uint param_9 = ix + 4u;
uint raw4 = read_mem(param_8, param_9);
Alloc param_10 = a;
uint param_11 = ix + 5u;
uint raw5 = read_mem(param_10, param_11);
TransformSeg s;
s.mat = float4(asfloat(raw0), asfloat(raw1), asfloat(raw2), asfloat(raw3));
s.translate = float2(asfloat(raw4), asfloat(raw5));
return s;
}
void write_mem(Alloc alloc, uint offset, uint val)
{
Alloc param = alloc;
uint param_1 = offset;
if (!touch_mem(param, param_1))
{
return;
}
_111.Store(offset * 4 + 8, val);
}
void PathCubic_write(Alloc a, PathCubicRef ref, PathCubic s)
{
uint ix = ref.offset >> uint(2);
Alloc param = a;
uint param_1 = ix + 0u;
uint param_2 = asuint(s.p0.x);
write_mem(param, param_1, param_2);
Alloc param_3 = a;
uint param_4 = ix + 1u;
uint param_5 = asuint(s.p0.y);
write_mem(param_3, param_4, param_5);
Alloc param_6 = a;
uint param_7 = ix + 2u;
uint param_8 = asuint(s.p1.x);
write_mem(param_6, param_7, param_8);
Alloc param_9 = a;
uint param_10 = ix + 3u;
uint param_11 = asuint(s.p1.y);
write_mem(param_9, param_10, param_11);
Alloc param_12 = a;
uint param_13 = ix + 4u;
uint param_14 = asuint(s.p2.x);
write_mem(param_12, param_13, param_14);
Alloc param_15 = a;
uint param_16 = ix + 5u;
uint param_17 = asuint(s.p2.y);
write_mem(param_15, param_16, param_17);
Alloc param_18 = a;
uint param_19 = ix + 6u;
uint param_20 = asuint(s.p3.x);
write_mem(param_18, param_19, param_20);
Alloc param_21 = a;
uint param_22 = ix + 7u;
uint param_23 = asuint(s.p3.y);
write_mem(param_21, param_22, param_23);
Alloc param_24 = a;
uint param_25 = ix + 8u;
uint param_26 = s.path_ix;
write_mem(param_24, param_25, param_26);
Alloc param_27 = a;
uint param_28 = ix + 9u;
uint param_29 = s.trans_ix;
write_mem(param_27, param_28, param_29);
Alloc param_30 = a;
uint param_31 = ix + 10u;
uint param_32 = asuint(s.stroke.x);
write_mem(param_30, param_31, param_32);
Alloc param_33 = a;
uint param_34 = ix + 11u;
uint param_35 = asuint(s.stroke.y);
write_mem(param_33, param_34, param_35);
}
void PathSeg_Cubic_write(Alloc a, PathSegRef ref, uint flags, PathCubic s)
{
Alloc param = a;
uint param_1 = ref.offset >> uint(2);
uint param_2 = (flags << uint(16)) | 1u;
write_mem(param, param_1, param_2);
PathCubicRef _458 = { ref.offset + 4u };
Alloc param_3 = a;
PathCubicRef param_4 = _458;
PathCubic param_5 = s;
PathCubic_write(param_3, param_4, param_5);
}
Monoid combine_monoid(Monoid a, Monoid b)
{
Monoid c;
c.bbox = b.bbox;
bool _472 = (a.flags & 1u) == 0u;
bool _480;
if (_472)
{
_480 = b.bbox.z <= b.bbox.x;
}
else
{
_480 = _472;
}
bool _488;
if (_480)
{
_488 = b.bbox.w <= b.bbox.y;
}
else
{
_488 = _480;
}
if (_488)
{
c.bbox = a.bbox;
}
else
{
bool _498 = (a.flags & 1u) == 0u;
bool _505;
if (_498)
{
_505 = (b.flags & 2u) == 0u;
}
else
{
_505 = _498;
}
bool _522;
if (_505)
{
bool _512 = a.bbox.z > a.bbox.x;
bool _521;
if (!_512)
{
_521 = a.bbox.w > a.bbox.y;
}
else
{
_521 = _512;
}
_522 = _521;
}
else
{
_522 = _505;
}
if (_522)
{
float4 _529 = c.bbox;
float2 _531 = min(a.bbox.xy, _529.xy);
c.bbox.x = _531.x;
c.bbox.y = _531.y;
float4 _540 = c.bbox;
float2 _542 = max(a.bbox.zw, _540.zw);
c.bbox.z = _542.x;
c.bbox.w = _542.y;
}
}
c.flags = (a.flags & 2u) | b.flags;
c.flags |= ((a.flags & 1u) << uint(1));
return c;
}
Monoid monoid_identity()
{
return _567;
}
uint round_down(float x)
{
return uint(max(0.0f, floor(x) + 32768.0f));
}
uint round_up(float x)
{
return uint(min(65535.0f, ceil(x) + 32768.0f));
}
void comp_main()
{
uint ix = gl_GlobalInvocationID.x * 4u;
uint tag_word = _574.Load(((_639.Load(52) >> uint(2)) + (ix >> uint(2))) * 4 + 0);
uint param = tag_word;
TagMonoid local_tm = reduce_tag(param);
sh_tag[gl_LocalInvocationID.x] = local_tm;
for (uint i = 0u; i < 9u; i++)
{
GroupMemoryBarrierWithGroupSync();
if (gl_LocalInvocationID.x >= (1u << i))
{
TagMonoid other = sh_tag[gl_LocalInvocationID.x - (1u << i)];
TagMonoid param_1 = other;
TagMonoid param_2 = local_tm;
local_tm = combine_tag_monoid(param_1, param_2);
}
GroupMemoryBarrierWithGroupSync();
sh_tag[gl_LocalInvocationID.x] = local_tm;
}
GroupMemoryBarrierWithGroupSync();
TagMonoid tm = tag_monoid_identity();
if (gl_WorkGroupID.x > 0u)
{
TagMonoid _716;
_716.trans_ix = _710.Load((gl_WorkGroupID.x - 1u) * 20 + 0);
_716.linewidth_ix = _710.Load((gl_WorkGroupID.x - 1u) * 20 + 4);
_716.pathseg_ix = _710.Load((gl_WorkGroupID.x - 1u) * 20 + 8);
_716.path_ix = _710.Load((gl_WorkGroupID.x - 1u) * 20 + 12);
_716.pathseg_offset = _710.Load((gl_WorkGroupID.x - 1u) * 20 + 16);
tm.trans_ix = _716.trans_ix;
tm.linewidth_ix = _716.linewidth_ix;
tm.pathseg_ix = _716.pathseg_ix;
tm.path_ix = _716.path_ix;
tm.pathseg_offset = _716.pathseg_offset;
}
if (gl_LocalInvocationID.x > 0u)
{
TagMonoid param_3 = tm;
TagMonoid param_4 = sh_tag[gl_LocalInvocationID.x - 1u];
tm = combine_tag_monoid(param_3, param_4);
}
uint ps_ix = (_639.Load(60) >> uint(2)) + tm.pathseg_offset;
uint lw_ix = (_639.Load(56) >> uint(2)) + tm.linewidth_ix;
uint save_path_ix = tm.path_ix;
TransformSegRef _769 = { _639.Load(36) + (tm.trans_ix * 24u) };
TransformSegRef trans_ref = _769;
PathSegRef _779 = { _639.Load(28) + (tm.pathseg_ix * 52u) };
PathSegRef ps_ref = _779;
float2 p0;
float2 p1;
float2 p2;
float2 p3;
Alloc param_13;
Monoid local[4];
PathCubic cubic;
Alloc param_15;
for (uint i_1 = 0u; i_1 < 4u; i_1++)
{
uint tag_byte = tag_word >> (i_1 * 8u);
uint seg_type = tag_byte & 3u;
if (seg_type != 0u)
{
if ((tag_byte & 8u) != 0u)
{
uint param_5 = ps_ix;
p0 = read_f32_point(param_5);
uint param_6 = ps_ix + 2u;
p1 = read_f32_point(param_6);
if (seg_type >= 2u)
{
uint param_7 = ps_ix + 4u;
p2 = read_f32_point(param_7);
if (seg_type == 3u)
{
uint param_8 = ps_ix + 6u;
p3 = read_f32_point(param_8);
}
}
}
else
{
uint param_9 = ps_ix;
p0 = read_i16_point(param_9);
uint param_10 = ps_ix + 1u;
p1 = read_i16_point(param_10);
if (seg_type >= 2u)
{
uint param_11 = ps_ix + 2u;
p2 = read_i16_point(param_11);
if (seg_type == 3u)
{
uint param_12 = ps_ix + 3u;
p3 = read_i16_point(param_12);
}
}
}
float linewidth = asfloat(_574.Load(lw_ix * 4 + 0));
Alloc _865;
_865.offset = _639.Load(36);
param_13.offset = _865.offset;
TransformSegRef param_14 = trans_ref;
TransformSeg transform = TransformSeg_read(param_13, param_14);
p0 = ((transform.mat.xy * p0.x) + (transform.mat.zw * p0.y)) + transform.translate;
p1 = ((transform.mat.xy * p1.x) + (transform.mat.zw * p1.y)) + transform.translate;
float4 bbox = float4(min(p0, p1), max(p0, p1));
if (seg_type >= 2u)
{
p2 = ((transform.mat.xy * p2.x) + (transform.mat.zw * p2.y)) + transform.translate;
float4 _935 = bbox;
float2 _938 = min(_935.xy, p2);
bbox.x = _938.x;
bbox.y = _938.y;
float4 _943 = bbox;
float2 _946 = max(_943.zw, p2);
bbox.z = _946.x;
bbox.w = _946.y;
if (seg_type == 3u)
{
p3 = ((transform.mat.xy * p3.x) + (transform.mat.zw * p3.y)) + transform.translate;
float4 _971 = bbox;
float2 _974 = min(_971.xy, p3);
bbox.x = _974.x;
bbox.y = _974.y;
float4 _979 = bbox;
float2 _982 = max(_979.zw, p3);
bbox.z = _982.x;
bbox.w = _982.y;
}
else
{
p3 = p2;
p2 = lerp(p1, p2, 0.3333333432674407958984375f.xx);
p1 = lerp(p1, p0, 0.3333333432674407958984375f.xx);
}
}
else
{
p3 = p1;
p2 = lerp(p3, p0, 0.3333333432674407958984375f.xx);
p1 = lerp(p0, p3, 0.3333333432674407958984375f.xx);
}
float2 stroke = 0.0f.xx;
if (linewidth >= 0.0f)
{
stroke = float2(length(transform.mat.xz), length(transform.mat.yw)) * (0.5f * linewidth);
bbox += float4(-stroke, stroke);
}
local[i_1].bbox = bbox;
local[i_1].flags = 0u;
cubic.p0 = p0;
cubic.p1 = p1;
cubic.p2 = p2;
cubic.p3 = p3;
cubic.path_ix = tm.path_ix;
cubic.trans_ix = (gl_GlobalInvocationID.x * 4u) + i_1;
cubic.stroke = stroke;
uint fill_mode = uint(linewidth >= 0.0f);
Alloc _1071;
_1071.offset = _639.Load(28);
param_15.offset = _1071.offset;
PathSegRef param_16 = ps_ref;
uint param_17 = fill_mode;
PathCubic param_18 = cubic;
PathSeg_Cubic_write(param_15, param_16, param_17, param_18);
ps_ref.offset += 52u;
uint n_points = (tag_byte & 3u) + ((tag_byte >> uint(2)) & 1u);
uint n_words = n_points + (n_points & (((tag_byte >> uint(3)) & 1u) * 15u));
ps_ix += n_words;
}
else
{
local[i_1].bbox = 0.0f.xxxx;
uint is_path = (tag_byte >> uint(4)) & 1u;
local[i_1].flags = is_path;
tm.path_ix += is_path;
trans_ref.offset += (((tag_byte >> uint(5)) & 1u) * 24u);
lw_ix += ((tag_byte >> uint(6)) & 1u);
}
}
Monoid agg = local[0];
for (uint i_2 = 1u; i_2 < 4u; i_2++)
{
Monoid param_19 = agg;
Monoid param_20 = local[i_2];
agg = combine_monoid(param_19, param_20);
local[i_2] = agg;
}
sh_scratch[gl_LocalInvocationID.x] = agg;
for (uint i_3 = 0u; i_3 < 9u; i_3++)
{
GroupMemoryBarrierWithGroupSync();
if (gl_LocalInvocationID.x >= (1u << i_3))
{
Monoid other_1 = sh_scratch[gl_LocalInvocationID.x - (1u << i_3)];
Monoid param_21 = other_1;
Monoid param_22 = agg;
agg = combine_monoid(param_21, param_22);
}
GroupMemoryBarrierWithGroupSync();
sh_scratch[gl_LocalInvocationID.x] = agg;
}
GroupMemoryBarrierWithGroupSync();
uint path_ix = save_path_ix;
uint bbox_out_ix = (_639.Load(40) >> uint(2)) + (path_ix * 4u);
Monoid row = monoid_identity();
if (gl_LocalInvocationID.x > 0u)
{
row = sh_scratch[gl_LocalInvocationID.x - 1u];
}
for (uint i_4 = 0u; i_4 < 4u; i_4++)
{
Monoid param_23 = row;
Monoid param_24 = local[i_4];
Monoid m = combine_monoid(param_23, param_24);
bool do_atomic = false;
bool _1241 = i_4 == 3u;
bool _1248;
if (_1241)
{
_1248 = gl_LocalInvocationID.x == 511u;
}
else
{
_1248 = _1241;
}
if (_1248)
{
do_atomic = true;
}
if ((m.flags & 1u) != 0u)
{
if ((m.flags & 2u) == 0u)
{
do_atomic = true;
}
else
{
float param_25 = m.bbox.x;
_111.Store(bbox_out_ix * 4 + 8, round_down(param_25));
float param_26 = m.bbox.y;
_111.Store((bbox_out_ix + 1u) * 4 + 8, round_down(param_26));
float param_27 = m.bbox.z;
_111.Store((bbox_out_ix + 2u) * 4 + 8, round_up(param_27));
float param_28 = m.bbox.w;
_111.Store((bbox_out_ix + 3u) * 4 + 8, round_up(param_28));
bbox_out_ix += 4u;
do_atomic = false;
}
}
if (do_atomic)
{
bool _1300 = m.bbox.z > m.bbox.x;
bool _1309;
if (!_1300)
{
_1309 = m.bbox.w > m.bbox.y;
}
else
{
_1309 = _1300;
}
if (_1309)
{
float param_29 = m.bbox.x;
uint _1318;
_111.InterlockedMin(bbox_out_ix * 4 + 8, round_down(param_29), _1318);
float param_30 = m.bbox.y;
uint _1326;
_111.InterlockedMin((bbox_out_ix + 1u) * 4 + 8, round_down(param_30), _1326);
float param_31 = m.bbox.z;
uint _1334;
_111.InterlockedMax((bbox_out_ix + 2u) * 4 + 8, round_up(param_31), _1334);
float param_32 = m.bbox.w;
uint _1342;
_111.InterlockedMax((bbox_out_ix + 3u) * 4 + 8, round_up(param_32), _1342);
}
bbox_out_ix += 4u;
}
}
}
[numthreads(512, 1, 1)]
void main(SPIRV_Cross_Input stage_input)
{
gl_WorkGroupID = stage_input.gl_WorkGroupID;
gl_LocalInvocationID = stage_input.gl_LocalInvocationID;
gl_GlobalInvocationID = stage_input.gl_GlobalInvocationID;
comp_main();
}

View file

@ -0,0 +1,699 @@
#pragma clang diagnostic ignored "-Wmissing-prototypes"
#pragma clang diagnostic ignored "-Wmissing-braces"
#pragma clang diagnostic ignored "-Wunused-variable"
#include <metal_stdlib>
#include <simd/simd.h>
#include <metal_atomic>
using namespace metal;
template<typename T, size_t Num>
struct spvUnsafeArray
{
T elements[Num ? Num : 1];
thread T& operator [] (size_t pos) thread
{
return elements[pos];
}
constexpr const thread T& operator [] (size_t pos) const thread
{
return elements[pos];
}
device T& operator [] (size_t pos) device
{
return elements[pos];
}
constexpr const device T& operator [] (size_t pos) const device
{
return elements[pos];
}
constexpr const constant T& operator [] (size_t pos) const constant
{
return elements[pos];
}
threadgroup T& operator [] (size_t pos) threadgroup
{
return elements[pos];
}
constexpr const threadgroup T& operator [] (size_t pos) const threadgroup
{
return elements[pos];
}
};
struct Alloc
{
uint offset;
};
struct TagMonoid
{
uint trans_ix;
uint linewidth_ix;
uint pathseg_ix;
uint path_ix;
uint pathseg_offset;
};
struct TransformSegRef
{
uint offset;
};
struct TransformSeg
{
float4 mat;
float2 translate;
};
struct PathCubicRef
{
uint offset;
};
struct PathCubic
{
float2 p0;
float2 p1;
float2 p2;
float2 p3;
uint path_ix;
uint trans_ix;
float2 stroke;
};
struct PathSegRef
{
uint offset;
};
struct Monoid
{
float4 bbox;
uint flags;
};
struct Memory
{
uint mem_offset;
uint mem_error;
uint memory[1];
};
struct SceneBuf
{
uint scene[1];
};
struct Alloc_1
{
uint offset;
};
struct Config
{
uint n_elements;
uint n_pathseg;
uint width_in_tiles;
uint height_in_tiles;
Alloc_1 tile_alloc;
Alloc_1 bin_alloc;
Alloc_1 ptcl_alloc;
Alloc_1 pathseg_alloc;
Alloc_1 anno_alloc;
Alloc_1 trans_alloc;
Alloc_1 bbox_alloc;
uint n_trans;
uint trans_offset;
uint pathtag_offset;
uint linewidth_offset;
uint pathseg_offset;
};
struct ConfigBuf
{
Config conf;
};
struct TagMonoid_1
{
uint trans_ix;
uint linewidth_ix;
uint pathseg_ix;
uint path_ix;
uint pathseg_offset;
};
struct ParentBuf
{
TagMonoid_1 parent[1];
};
constant uint3 gl_WorkGroupSize [[maybe_unused]] = uint3(512u, 1u, 1u);
static inline __attribute__((always_inline))
TagMonoid reduce_tag(thread const uint& tag_word)
{
uint point_count = tag_word & 50529027u;
TagMonoid c;
c.pathseg_ix = uint(int(popcount((point_count * 7u) & 67372036u)));
c.linewidth_ix = uint(int(popcount(tag_word & 1077952576u)));
c.path_ix = uint(int(popcount(tag_word & 269488144u)));
c.trans_ix = uint(int(popcount(tag_word & 538976288u)));
uint n_points = point_count + ((tag_word >> uint(2)) & 16843009u);
uint a = n_points + (n_points & (((tag_word >> uint(3)) & 16843009u) * 15u));
a += (a >> uint(8));
a += (a >> uint(16));
c.pathseg_offset = a & 255u;
return c;
}
static inline __attribute__((always_inline))
TagMonoid combine_tag_monoid(thread const TagMonoid& a, thread const TagMonoid& b)
{
TagMonoid c;
c.trans_ix = a.trans_ix + b.trans_ix;
c.linewidth_ix = a.linewidth_ix + b.linewidth_ix;
c.pathseg_ix = a.pathseg_ix + b.pathseg_ix;
c.path_ix = a.path_ix + b.path_ix;
c.pathseg_offset = a.pathseg_offset + b.pathseg_offset;
return c;
}
static inline __attribute__((always_inline))
TagMonoid tag_monoid_identity()
{
return TagMonoid{ 0u, 0u, 0u, 0u, 0u };
}
static inline __attribute__((always_inline))
float2 read_f32_point(thread const uint& ix, const device SceneBuf& v_574)
{
float x = as_type<float>(v_574.scene[ix]);
float y = as_type<float>(v_574.scene[ix + 1u]);
return float2(x, y);
}
static inline __attribute__((always_inline))
float2 read_i16_point(thread const uint& ix, const device SceneBuf& v_574)
{
uint raw = v_574.scene[ix];
float x = float(int(raw << uint(16)) >> 16);
float y = float(int(raw) >> 16);
return float2(x, y);
}
static inline __attribute__((always_inline))
bool touch_mem(thread const Alloc& alloc, thread const uint& offset)
{
return true;
}
static inline __attribute__((always_inline))
uint read_mem(thread const Alloc& alloc, thread const uint& offset, device Memory& v_111)
{
Alloc param = alloc;
uint param_1 = offset;
if (!touch_mem(param, param_1))
{
return 0u;
}
uint v = v_111.memory[offset];
return v;
}
static inline __attribute__((always_inline))
TransformSeg TransformSeg_read(thread const Alloc& a, thread const TransformSegRef& ref, device Memory& v_111)
{
uint ix = ref.offset >> uint(2);
Alloc param = a;
uint param_1 = ix + 0u;
uint raw0 = read_mem(param, param_1, v_111);
Alloc param_2 = a;
uint param_3 = ix + 1u;
uint raw1 = read_mem(param_2, param_3, v_111);
Alloc param_4 = a;
uint param_5 = ix + 2u;
uint raw2 = read_mem(param_4, param_5, v_111);
Alloc param_6 = a;
uint param_7 = ix + 3u;
uint raw3 = read_mem(param_6, param_7, v_111);
Alloc param_8 = a;
uint param_9 = ix + 4u;
uint raw4 = read_mem(param_8, param_9, v_111);
Alloc param_10 = a;
uint param_11 = ix + 5u;
uint raw5 = read_mem(param_10, param_11, v_111);
TransformSeg s;
s.mat = float4(as_type<float>(raw0), as_type<float>(raw1), as_type<float>(raw2), as_type<float>(raw3));
s.translate = float2(as_type<float>(raw4), as_type<float>(raw5));
return s;
}
static inline __attribute__((always_inline))
void write_mem(thread const Alloc& alloc, thread const uint& offset, thread const uint& val, device Memory& v_111)
{
Alloc param = alloc;
uint param_1 = offset;
if (!touch_mem(param, param_1))
{
return;
}
v_111.memory[offset] = val;
}
static inline __attribute__((always_inline))
void PathCubic_write(thread const Alloc& a, thread const PathCubicRef& ref, thread const PathCubic& s, device Memory& v_111)
{
uint ix = ref.offset >> uint(2);
Alloc param = a;
uint param_1 = ix + 0u;
uint param_2 = as_type<uint>(s.p0.x);
write_mem(param, param_1, param_2, v_111);
Alloc param_3 = a;
uint param_4 = ix + 1u;
uint param_5 = as_type<uint>(s.p0.y);
write_mem(param_3, param_4, param_5, v_111);
Alloc param_6 = a;
uint param_7 = ix + 2u;
uint param_8 = as_type<uint>(s.p1.x);
write_mem(param_6, param_7, param_8, v_111);
Alloc param_9 = a;
uint param_10 = ix + 3u;
uint param_11 = as_type<uint>(s.p1.y);
write_mem(param_9, param_10, param_11, v_111);
Alloc param_12 = a;
uint param_13 = ix + 4u;
uint param_14 = as_type<uint>(s.p2.x);
write_mem(param_12, param_13, param_14, v_111);
Alloc param_15 = a;
uint param_16 = ix + 5u;
uint param_17 = as_type<uint>(s.p2.y);
write_mem(param_15, param_16, param_17, v_111);
Alloc param_18 = a;
uint param_19 = ix + 6u;
uint param_20 = as_type<uint>(s.p3.x);
write_mem(param_18, param_19, param_20, v_111);
Alloc param_21 = a;
uint param_22 = ix + 7u;
uint param_23 = as_type<uint>(s.p3.y);
write_mem(param_21, param_22, param_23, v_111);
Alloc param_24 = a;
uint param_25 = ix + 8u;
uint param_26 = s.path_ix;
write_mem(param_24, param_25, param_26, v_111);
Alloc param_27 = a;
uint param_28 = ix + 9u;
uint param_29 = s.trans_ix;
write_mem(param_27, param_28, param_29, v_111);
Alloc param_30 = a;
uint param_31 = ix + 10u;
uint param_32 = as_type<uint>(s.stroke.x);
write_mem(param_30, param_31, param_32, v_111);
Alloc param_33 = a;
uint param_34 = ix + 11u;
uint param_35 = as_type<uint>(s.stroke.y);
write_mem(param_33, param_34, param_35, v_111);
}
static inline __attribute__((always_inline))
void PathSeg_Cubic_write(thread const Alloc& a, thread const PathSegRef& ref, thread const uint& flags, thread const PathCubic& s, device Memory& v_111)
{
Alloc param = a;
uint param_1 = ref.offset >> uint(2);
uint param_2 = (flags << uint(16)) | 1u;
write_mem(param, param_1, param_2, v_111);
Alloc param_3 = a;
PathCubicRef param_4 = PathCubicRef{ ref.offset + 4u };
PathCubic param_5 = s;
PathCubic_write(param_3, param_4, param_5, v_111);
}
static inline __attribute__((always_inline))
Monoid combine_monoid(thread const Monoid& a, thread const Monoid& b)
{
Monoid c;
c.bbox = b.bbox;
bool _472 = (a.flags & 1u) == 0u;
bool _480;
if (_472)
{
_480 = b.bbox.z <= b.bbox.x;
}
else
{
_480 = _472;
}
bool _488;
if (_480)
{
_488 = b.bbox.w <= b.bbox.y;
}
else
{
_488 = _480;
}
if (_488)
{
c.bbox = a.bbox;
}
else
{
bool _498 = (a.flags & 1u) == 0u;
bool _505;
if (_498)
{
_505 = (b.flags & 2u) == 0u;
}
else
{
_505 = _498;
}
bool _522;
if (_505)
{
bool _512 = a.bbox.z > a.bbox.x;
bool _521;
if (!_512)
{
_521 = a.bbox.w > a.bbox.y;
}
else
{
_521 = _512;
}
_522 = _521;
}
else
{
_522 = _505;
}
if (_522)
{
float4 _529 = c.bbox;
float2 _531 = fast::min(a.bbox.xy, _529.xy);
c.bbox.x = _531.x;
c.bbox.y = _531.y;
float4 _540 = c.bbox;
float2 _542 = fast::max(a.bbox.zw, _540.zw);
c.bbox.z = _542.x;
c.bbox.w = _542.y;
}
}
c.flags = (a.flags & 2u) | b.flags;
c.flags |= ((a.flags & 1u) << uint(1));
return c;
}
static inline __attribute__((always_inline))
Monoid monoid_identity()
{
return Monoid{ float4(0.0), 0u };
}
static inline __attribute__((always_inline))
uint round_down(thread const float& x)
{
return uint(fast::max(0.0, floor(x) + 32768.0));
}
static inline __attribute__((always_inline))
uint round_up(thread const float& x)
{
return uint(fast::min(65535.0, ceil(x) + 32768.0));
}
kernel void main0(device Memory& v_111 [[buffer(0)]], const device ConfigBuf& _639 [[buffer(1)]], const device SceneBuf& v_574 [[buffer(2)]], const device ParentBuf& _710 [[buffer(3)]], uint3 gl_GlobalInvocationID [[thread_position_in_grid]], uint3 gl_LocalInvocationID [[thread_position_in_threadgroup]], uint3 gl_WorkGroupID [[threadgroup_position_in_grid]])
{
threadgroup TagMonoid sh_tag[512];
threadgroup Monoid sh_scratch[512];
uint ix = gl_GlobalInvocationID.x * 4u;
uint tag_word = v_574.scene[(_639.conf.pathtag_offset >> uint(2)) + (ix >> uint(2))];
uint param = tag_word;
TagMonoid local_tm = reduce_tag(param);
sh_tag[gl_LocalInvocationID.x] = local_tm;
for (uint i = 0u; i < 9u; i++)
{
threadgroup_barrier(mem_flags::mem_threadgroup);
if (gl_LocalInvocationID.x >= (1u << i))
{
TagMonoid other = sh_tag[gl_LocalInvocationID.x - (1u << i)];
TagMonoid param_1 = other;
TagMonoid param_2 = local_tm;
local_tm = combine_tag_monoid(param_1, param_2);
}
threadgroup_barrier(mem_flags::mem_threadgroup);
sh_tag[gl_LocalInvocationID.x] = local_tm;
}
threadgroup_barrier(mem_flags::mem_threadgroup);
TagMonoid tm = tag_monoid_identity();
if (gl_WorkGroupID.x > 0u)
{
uint _713 = gl_WorkGroupID.x - 1u;
tm.trans_ix = _710.parent[_713].trans_ix;
tm.linewidth_ix = _710.parent[_713].linewidth_ix;
tm.pathseg_ix = _710.parent[_713].pathseg_ix;
tm.path_ix = _710.parent[_713].path_ix;
tm.pathseg_offset = _710.parent[_713].pathseg_offset;
}
if (gl_LocalInvocationID.x > 0u)
{
TagMonoid param_3 = tm;
TagMonoid param_4 = sh_tag[gl_LocalInvocationID.x - 1u];
tm = combine_tag_monoid(param_3, param_4);
}
uint ps_ix = (_639.conf.pathseg_offset >> uint(2)) + tm.pathseg_offset;
uint lw_ix = (_639.conf.linewidth_offset >> uint(2)) + tm.linewidth_ix;
uint save_path_ix = tm.path_ix;
TransformSegRef trans_ref = TransformSegRef{ _639.conf.trans_alloc.offset + (tm.trans_ix * 24u) };
PathSegRef ps_ref = PathSegRef{ _639.conf.pathseg_alloc.offset + (tm.pathseg_ix * 52u) };
float2 p0;
float2 p1;
float2 p2;
float2 p3;
Alloc param_13;
spvUnsafeArray<Monoid, 4> local;
PathCubic cubic;
Alloc param_15;
for (uint i_1 = 0u; i_1 < 4u; i_1++)
{
uint tag_byte = tag_word >> (i_1 * 8u);
uint seg_type = tag_byte & 3u;
if (seg_type != 0u)
{
if ((tag_byte & 8u) != 0u)
{
uint param_5 = ps_ix;
p0 = read_f32_point(param_5, v_574);
uint param_6 = ps_ix + 2u;
p1 = read_f32_point(param_6, v_574);
if (seg_type >= 2u)
{
uint param_7 = ps_ix + 4u;
p2 = read_f32_point(param_7, v_574);
if (seg_type == 3u)
{
uint param_8 = ps_ix + 6u;
p3 = read_f32_point(param_8, v_574);
}
}
}
else
{
uint param_9 = ps_ix;
p0 = read_i16_point(param_9, v_574);
uint param_10 = ps_ix + 1u;
p1 = read_i16_point(param_10, v_574);
if (seg_type >= 2u)
{
uint param_11 = ps_ix + 2u;
p2 = read_i16_point(param_11, v_574);
if (seg_type == 3u)
{
uint param_12 = ps_ix + 3u;
p3 = read_i16_point(param_12, v_574);
}
}
}
float linewidth = as_type<float>(v_574.scene[lw_ix]);
param_13.offset = _639.conf.trans_alloc.offset;
TransformSegRef param_14 = trans_ref;
TransformSeg transform = TransformSeg_read(param_13, param_14, v_111);
p0 = ((transform.mat.xy * p0.x) + (transform.mat.zw * p0.y)) + transform.translate;
p1 = ((transform.mat.xy * p1.x) + (transform.mat.zw * p1.y)) + transform.translate;
float4 bbox = float4(fast::min(p0, p1), fast::max(p0, p1));
if (seg_type >= 2u)
{
p2 = ((transform.mat.xy * p2.x) + (transform.mat.zw * p2.y)) + transform.translate;
float4 _935 = bbox;
float2 _938 = fast::min(_935.xy, p2);
bbox.x = _938.x;
bbox.y = _938.y;
float4 _943 = bbox;
float2 _946 = fast::max(_943.zw, p2);
bbox.z = _946.x;
bbox.w = _946.y;
if (seg_type == 3u)
{
p3 = ((transform.mat.xy * p3.x) + (transform.mat.zw * p3.y)) + transform.translate;
float4 _971 = bbox;
float2 _974 = fast::min(_971.xy, p3);
bbox.x = _974.x;
bbox.y = _974.y;
float4 _979 = bbox;
float2 _982 = fast::max(_979.zw, p3);
bbox.z = _982.x;
bbox.w = _982.y;
}
else
{
p3 = p2;
p2 = mix(p1, p2, float2(0.3333333432674407958984375));
p1 = mix(p1, p0, float2(0.3333333432674407958984375));
}
}
else
{
p3 = p1;
p2 = mix(p3, p0, float2(0.3333333432674407958984375));
p1 = mix(p0, p3, float2(0.3333333432674407958984375));
}
float2 stroke = float2(0.0);
if (linewidth >= 0.0)
{
stroke = float2(length(transform.mat.xz), length(transform.mat.yw)) * (0.5 * linewidth);
bbox += float4(-stroke, stroke);
}
local[i_1].bbox = bbox;
local[i_1].flags = 0u;
cubic.p0 = p0;
cubic.p1 = p1;
cubic.p2 = p2;
cubic.p3 = p3;
cubic.path_ix = tm.path_ix;
cubic.trans_ix = (gl_GlobalInvocationID.x * 4u) + i_1;
cubic.stroke = stroke;
uint fill_mode = uint(linewidth >= 0.0);
param_15.offset = _639.conf.pathseg_alloc.offset;
PathSegRef param_16 = ps_ref;
uint param_17 = fill_mode;
PathCubic param_18 = cubic;
PathSeg_Cubic_write(param_15, param_16, param_17, param_18, v_111);
ps_ref.offset += 52u;
uint n_points = (tag_byte & 3u) + ((tag_byte >> uint(2)) & 1u);
uint n_words = n_points + (n_points & (((tag_byte >> uint(3)) & 1u) * 15u));
ps_ix += n_words;
}
else
{
local[i_1].bbox = float4(0.0);
uint is_path = (tag_byte >> uint(4)) & 1u;
local[i_1].flags = is_path;
tm.path_ix += is_path;
trans_ref.offset += (((tag_byte >> uint(5)) & 1u) * 24u);
lw_ix += ((tag_byte >> uint(6)) & 1u);
}
}
Monoid agg = local[0];
for (uint i_2 = 1u; i_2 < 4u; i_2++)
{
Monoid param_19 = agg;
Monoid param_20 = local[i_2];
agg = combine_monoid(param_19, param_20);
local[i_2] = agg;
}
sh_scratch[gl_LocalInvocationID.x] = agg;
for (uint i_3 = 0u; i_3 < 9u; i_3++)
{
threadgroup_barrier(mem_flags::mem_threadgroup);
if (gl_LocalInvocationID.x >= (1u << i_3))
{
Monoid other_1 = sh_scratch[gl_LocalInvocationID.x - (1u << i_3)];
Monoid param_21 = other_1;
Monoid param_22 = agg;
agg = combine_monoid(param_21, param_22);
}
threadgroup_barrier(mem_flags::mem_threadgroup);
sh_scratch[gl_LocalInvocationID.x] = agg;
}
threadgroup_barrier(mem_flags::mem_threadgroup);
uint path_ix = save_path_ix;
uint bbox_out_ix = (_639.conf.bbox_alloc.offset >> uint(2)) + (path_ix * 4u);
Monoid row = monoid_identity();
if (gl_LocalInvocationID.x > 0u)
{
row = sh_scratch[gl_LocalInvocationID.x - 1u];
}
for (uint i_4 = 0u; i_4 < 4u; i_4++)
{
Monoid param_23 = row;
Monoid param_24 = local[i_4];
Monoid m = combine_monoid(param_23, param_24);
bool do_atomic = false;
bool _1241 = i_4 == 3u;
bool _1248;
if (_1241)
{
_1248 = gl_LocalInvocationID.x == 511u;
}
else
{
_1248 = _1241;
}
if (_1248)
{
do_atomic = true;
}
if ((m.flags & 1u) != 0u)
{
if ((m.flags & 2u) == 0u)
{
do_atomic = true;
}
else
{
float param_25 = m.bbox.x;
v_111.memory[bbox_out_ix] = round_down(param_25);
float param_26 = m.bbox.y;
v_111.memory[bbox_out_ix + 1u] = round_down(param_26);
float param_27 = m.bbox.z;
v_111.memory[bbox_out_ix + 2u] = round_up(param_27);
float param_28 = m.bbox.w;
v_111.memory[bbox_out_ix + 3u] = round_up(param_28);
bbox_out_ix += 4u;
do_atomic = false;
}
}
if (do_atomic)
{
bool _1300 = m.bbox.z > m.bbox.x;
bool _1309;
if (!_1300)
{
_1309 = m.bbox.w > m.bbox.y;
}
else
{
_1309 = _1300;
}
if (_1309)
{
float param_29 = m.bbox.x;
uint _1318 = atomic_fetch_min_explicit((device atomic_uint*)&v_111.memory[bbox_out_ix], round_down(param_29), memory_order_relaxed);
float param_30 = m.bbox.y;
uint _1326 = atomic_fetch_min_explicit((device atomic_uint*)&v_111.memory[bbox_out_ix + 1u], round_down(param_30), memory_order_relaxed);
float param_31 = m.bbox.z;
uint _1334 = atomic_fetch_max_explicit((device atomic_uint*)&v_111.memory[bbox_out_ix + 2u], round_up(param_31), memory_order_relaxed);
float param_32 = m.bbox.w;
uint _1342 = atomic_fetch_max_explicit((device atomic_uint*)&v_111.memory[bbox_out_ix + 3u], round_up(param_32), memory_order_relaxed);
}
bbox_out_ix += 4u;
}
}
}

Binary file not shown.

Binary file not shown.

View file

@ -0,0 +1,127 @@
struct TagMonoid
{
uint trans_ix;
uint linewidth_ix;
uint pathseg_ix;
uint path_ix;
uint pathseg_offset;
};
struct Alloc
{
uint offset;
};
struct Config
{
uint n_elements;
uint n_pathseg;
uint width_in_tiles;
uint height_in_tiles;
Alloc tile_alloc;
Alloc bin_alloc;
Alloc ptcl_alloc;
Alloc pathseg_alloc;
Alloc anno_alloc;
Alloc trans_alloc;
Alloc bbox_alloc;
uint n_trans;
uint trans_offset;
uint pathtag_offset;
uint linewidth_offset;
uint pathseg_offset;
};
static const uint3 gl_WorkGroupSize = uint3(128u, 1u, 1u);
ByteAddressBuffer _139 : register(t1);
ByteAddressBuffer _151 : register(t2);
RWByteAddressBuffer _239 : register(u3);
RWByteAddressBuffer _259 : register(u0);
static uint3 gl_WorkGroupID;
static uint3 gl_LocalInvocationID;
static uint3 gl_GlobalInvocationID;
struct SPIRV_Cross_Input
{
uint3 gl_WorkGroupID : SV_GroupID;
uint3 gl_LocalInvocationID : SV_GroupThreadID;
uint3 gl_GlobalInvocationID : SV_DispatchThreadID;
};
groupshared TagMonoid sh_scratch[128];
TagMonoid reduce_tag(uint tag_word)
{
uint point_count = tag_word & 50529027u;
TagMonoid c;
c.pathseg_ix = uint(int(countbits((point_count * 7u) & 67372036u)));
c.linewidth_ix = uint(int(countbits(tag_word & 1077952576u)));
c.path_ix = uint(int(countbits(tag_word & 269488144u)));
c.trans_ix = uint(int(countbits(tag_word & 538976288u)));
uint n_points = point_count + ((tag_word >> uint(2)) & 16843009u);
uint a = n_points + (n_points & (((tag_word >> uint(3)) & 16843009u) * 15u));
a += (a >> uint(8));
a += (a >> uint(16));
c.pathseg_offset = a & 255u;
return c;
}
TagMonoid combine_tag_monoid(TagMonoid a, TagMonoid b)
{
TagMonoid c;
c.trans_ix = a.trans_ix + b.trans_ix;
c.linewidth_ix = a.linewidth_ix + b.linewidth_ix;
c.pathseg_ix = a.pathseg_ix + b.pathseg_ix;
c.path_ix = a.path_ix + b.path_ix;
c.pathseg_offset = a.pathseg_offset + b.pathseg_offset;
return c;
}
void comp_main()
{
uint ix = gl_GlobalInvocationID.x * 4u;
uint scene_ix = (_139.Load(52) >> uint(2)) + ix;
uint tag_word = _151.Load(scene_ix * 4 + 0);
uint param = tag_word;
TagMonoid agg = reduce_tag(param);
for (uint i = 1u; i < 4u; i++)
{
tag_word = _151.Load((scene_ix + i) * 4 + 0);
uint param_1 = tag_word;
TagMonoid param_2 = agg;
TagMonoid param_3 = reduce_tag(param_1);
agg = combine_tag_monoid(param_2, param_3);
}
sh_scratch[gl_LocalInvocationID.x] = agg;
for (uint i_1 = 0u; i_1 < 7u; i_1++)
{
GroupMemoryBarrierWithGroupSync();
if ((gl_LocalInvocationID.x + (1u << i_1)) < 128u)
{
TagMonoid other = sh_scratch[gl_LocalInvocationID.x + (1u << i_1)];
TagMonoid param_4 = agg;
TagMonoid param_5 = other;
agg = combine_tag_monoid(param_4, param_5);
}
GroupMemoryBarrierWithGroupSync();
sh_scratch[gl_LocalInvocationID.x] = agg;
}
if (gl_LocalInvocationID.x == 0u)
{
_239.Store(gl_WorkGroupID.x * 20 + 0, agg.trans_ix);
_239.Store(gl_WorkGroupID.x * 20 + 4, agg.linewidth_ix);
_239.Store(gl_WorkGroupID.x * 20 + 8, agg.pathseg_ix);
_239.Store(gl_WorkGroupID.x * 20 + 12, agg.path_ix);
_239.Store(gl_WorkGroupID.x * 20 + 16, agg.pathseg_offset);
}
}
[numthreads(128, 1, 1)]
void main(SPIRV_Cross_Input stage_input)
{
gl_WorkGroupID = stage_input.gl_WorkGroupID;
gl_LocalInvocationID = stage_input.gl_LocalInvocationID;
gl_GlobalInvocationID = stage_input.gl_GlobalInvocationID;
comp_main();
}

View file

@ -0,0 +1,143 @@
#pragma clang diagnostic ignored "-Wmissing-prototypes"
#include <metal_stdlib>
#include <simd/simd.h>
using namespace metal;
struct TagMonoid
{
uint trans_ix;
uint linewidth_ix;
uint pathseg_ix;
uint path_ix;
uint pathseg_offset;
};
struct Alloc
{
uint offset;
};
struct Config
{
uint n_elements;
uint n_pathseg;
uint width_in_tiles;
uint height_in_tiles;
Alloc tile_alloc;
Alloc bin_alloc;
Alloc ptcl_alloc;
Alloc pathseg_alloc;
Alloc anno_alloc;
Alloc trans_alloc;
Alloc bbox_alloc;
uint n_trans;
uint trans_offset;
uint pathtag_offset;
uint linewidth_offset;
uint pathseg_offset;
};
struct ConfigBuf
{
Config conf;
};
struct SceneBuf
{
uint scene[1];
};
struct TagMonoid_1
{
uint trans_ix;
uint linewidth_ix;
uint pathseg_ix;
uint path_ix;
uint pathseg_offset;
};
struct OutBuf
{
TagMonoid_1 outbuf[1];
};
struct Memory
{
uint mem_offset;
uint mem_error;
uint memory[1];
};
constant uint3 gl_WorkGroupSize [[maybe_unused]] = uint3(128u, 1u, 1u);
static inline __attribute__((always_inline))
TagMonoid reduce_tag(thread const uint& tag_word)
{
uint point_count = tag_word & 50529027u;
TagMonoid c;
c.pathseg_ix = uint(int(popcount((point_count * 7u) & 67372036u)));
c.linewidth_ix = uint(int(popcount(tag_word & 1077952576u)));
c.path_ix = uint(int(popcount(tag_word & 269488144u)));
c.trans_ix = uint(int(popcount(tag_word & 538976288u)));
uint n_points = point_count + ((tag_word >> uint(2)) & 16843009u);
uint a = n_points + (n_points & (((tag_word >> uint(3)) & 16843009u) * 15u));
a += (a >> uint(8));
a += (a >> uint(16));
c.pathseg_offset = a & 255u;
return c;
}
static inline __attribute__((always_inline))
TagMonoid combine_tag_monoid(thread const TagMonoid& a, thread const TagMonoid& b)
{
TagMonoid c;
c.trans_ix = a.trans_ix + b.trans_ix;
c.linewidth_ix = a.linewidth_ix + b.linewidth_ix;
c.pathseg_ix = a.pathseg_ix + b.pathseg_ix;
c.path_ix = a.path_ix + b.path_ix;
c.pathseg_offset = a.pathseg_offset + b.pathseg_offset;
return c;
}
kernel void main0(const device ConfigBuf& _139 [[buffer(1)]], const device SceneBuf& _151 [[buffer(2)]], device OutBuf& _239 [[buffer(3)]], uint3 gl_GlobalInvocationID [[thread_position_in_grid]], uint3 gl_LocalInvocationID [[thread_position_in_threadgroup]], uint3 gl_WorkGroupID [[threadgroup_position_in_grid]])
{
threadgroup TagMonoid sh_scratch[128];
uint ix = gl_GlobalInvocationID.x * 4u;
uint scene_ix = (_139.conf.pathtag_offset >> uint(2)) + ix;
uint tag_word = _151.scene[scene_ix];
uint param = tag_word;
TagMonoid agg = reduce_tag(param);
for (uint i = 1u; i < 4u; i++)
{
tag_word = _151.scene[scene_ix + i];
uint param_1 = tag_word;
TagMonoid param_2 = agg;
TagMonoid param_3 = reduce_tag(param_1);
agg = combine_tag_monoid(param_2, param_3);
}
sh_scratch[gl_LocalInvocationID.x] = agg;
for (uint i_1 = 0u; i_1 < 7u; i_1++)
{
threadgroup_barrier(mem_flags::mem_threadgroup);
if ((gl_LocalInvocationID.x + (1u << i_1)) < 128u)
{
TagMonoid other = sh_scratch[gl_LocalInvocationID.x + (1u << i_1)];
TagMonoid param_4 = agg;
TagMonoid param_5 = other;
agg = combine_tag_monoid(param_4, param_5);
}
threadgroup_barrier(mem_flags::mem_threadgroup);
sh_scratch[gl_LocalInvocationID.x] = agg;
}
if (gl_LocalInvocationID.x == 0u)
{
_239.outbuf[gl_WorkGroupID.x].trans_ix = agg.trans_ix;
_239.outbuf[gl_WorkGroupID.x].linewidth_ix = agg.linewidth_ix;
_239.outbuf[gl_WorkGroupID.x].pathseg_ix = agg.pathseg_ix;
_239.outbuf[gl_WorkGroupID.x].path_ix = agg.path_ix;
_239.outbuf[gl_WorkGroupID.x].pathseg_offset = agg.pathseg_offset;
}
}

Binary file not shown.

Binary file not shown.

View file

@ -0,0 +1,115 @@
struct TagMonoid
{
uint trans_ix;
uint linewidth_ix;
uint pathseg_ix;
uint path_ix;
uint pathseg_offset;
};
static const uint3 gl_WorkGroupSize = uint3(512u, 1u, 1u);
static const TagMonoid _18 = { 0u, 0u, 0u, 0u, 0u };
RWByteAddressBuffer _78 : register(u0);
static uint3 gl_LocalInvocationID;
static uint3 gl_GlobalInvocationID;
struct SPIRV_Cross_Input
{
uint3 gl_LocalInvocationID : SV_GroupThreadID;
uint3 gl_GlobalInvocationID : SV_DispatchThreadID;
};
groupshared TagMonoid sh_scratch[512];
TagMonoid combine_tag_monoid(TagMonoid a, TagMonoid b)
{
TagMonoid c;
c.trans_ix = a.trans_ix + b.trans_ix;
c.linewidth_ix = a.linewidth_ix + b.linewidth_ix;
c.pathseg_ix = a.pathseg_ix + b.pathseg_ix;
c.path_ix = a.path_ix + b.path_ix;
c.pathseg_offset = a.pathseg_offset + b.pathseg_offset;
return c;
}
TagMonoid tag_monoid_identity()
{
return _18;
}
void comp_main()
{
uint ix = gl_GlobalInvocationID.x * 8u;
TagMonoid _82;
_82.trans_ix = _78.Load(ix * 20 + 0);
_82.linewidth_ix = _78.Load(ix * 20 + 4);
_82.pathseg_ix = _78.Load(ix * 20 + 8);
_82.path_ix = _78.Load(ix * 20 + 12);
_82.pathseg_offset = _78.Load(ix * 20 + 16);
TagMonoid local[8];
local[0].trans_ix = _82.trans_ix;
local[0].linewidth_ix = _82.linewidth_ix;
local[0].pathseg_ix = _82.pathseg_ix;
local[0].path_ix = _82.path_ix;
local[0].pathseg_offset = _82.pathseg_offset;
TagMonoid param_1;
for (uint i = 1u; i < 8u; i++)
{
TagMonoid param = local[i - 1u];
TagMonoid _115;
_115.trans_ix = _78.Load((ix + i) * 20 + 0);
_115.linewidth_ix = _78.Load((ix + i) * 20 + 4);
_115.pathseg_ix = _78.Load((ix + i) * 20 + 8);
_115.path_ix = _78.Load((ix + i) * 20 + 12);
_115.pathseg_offset = _78.Load((ix + i) * 20 + 16);
param_1.trans_ix = _115.trans_ix;
param_1.linewidth_ix = _115.linewidth_ix;
param_1.pathseg_ix = _115.pathseg_ix;
param_1.path_ix = _115.path_ix;
param_1.pathseg_offset = _115.pathseg_offset;
local[i] = combine_tag_monoid(param, param_1);
}
TagMonoid agg = local[7];
sh_scratch[gl_LocalInvocationID.x] = agg;
for (uint i_1 = 0u; i_1 < 9u; i_1++)
{
GroupMemoryBarrierWithGroupSync();
if (gl_LocalInvocationID.x >= (1u << i_1))
{
TagMonoid other = sh_scratch[gl_LocalInvocationID.x - (1u << i_1)];
TagMonoid param_2 = other;
TagMonoid param_3 = agg;
agg = combine_tag_monoid(param_2, param_3);
}
GroupMemoryBarrierWithGroupSync();
sh_scratch[gl_LocalInvocationID.x] = agg;
}
GroupMemoryBarrierWithGroupSync();
TagMonoid row = tag_monoid_identity();
if (gl_LocalInvocationID.x > 0u)
{
row = sh_scratch[gl_LocalInvocationID.x - 1u];
}
for (uint i_2 = 0u; i_2 < 8u; i_2++)
{
TagMonoid param_4 = row;
TagMonoid param_5 = local[i_2];
TagMonoid m = combine_tag_monoid(param_4, param_5);
uint _211 = ix + i_2;
_78.Store(_211 * 20 + 0, m.trans_ix);
_78.Store(_211 * 20 + 4, m.linewidth_ix);
_78.Store(_211 * 20 + 8, m.pathseg_ix);
_78.Store(_211 * 20 + 12, m.path_ix);
_78.Store(_211 * 20 + 16, m.pathseg_offset);
}
}
[numthreads(512, 1, 1)]
void main(SPIRV_Cross_Input stage_input)
{
gl_LocalInvocationID = stage_input.gl_LocalInvocationID;
gl_GlobalInvocationID = stage_input.gl_GlobalInvocationID;
comp_main();
}

View file

@ -0,0 +1,146 @@
#pragma clang diagnostic ignored "-Wmissing-prototypes"
#pragma clang diagnostic ignored "-Wmissing-braces"
#include <metal_stdlib>
#include <simd/simd.h>
using namespace metal;
template<typename T, size_t Num>
struct spvUnsafeArray
{
T elements[Num ? Num : 1];
thread T& operator [] (size_t pos) thread
{
return elements[pos];
}
constexpr const thread T& operator [] (size_t pos) const thread
{
return elements[pos];
}
device T& operator [] (size_t pos) device
{
return elements[pos];
}
constexpr const device T& operator [] (size_t pos) const device
{
return elements[pos];
}
constexpr const constant T& operator [] (size_t pos) const constant
{
return elements[pos];
}
threadgroup T& operator [] (size_t pos) threadgroup
{
return elements[pos];
}
constexpr const threadgroup T& operator [] (size_t pos) const threadgroup
{
return elements[pos];
}
};
struct TagMonoid
{
uint trans_ix;
uint linewidth_ix;
uint pathseg_ix;
uint path_ix;
uint pathseg_offset;
};
struct TagMonoid_1
{
uint trans_ix;
uint linewidth_ix;
uint pathseg_ix;
uint path_ix;
uint pathseg_offset;
};
struct DataBuf
{
TagMonoid_1 data[1];
};
constant uint3 gl_WorkGroupSize [[maybe_unused]] = uint3(512u, 1u, 1u);
static inline __attribute__((always_inline))
TagMonoid combine_tag_monoid(thread const TagMonoid& a, thread const TagMonoid& b)
{
TagMonoid c;
c.trans_ix = a.trans_ix + b.trans_ix;
c.linewidth_ix = a.linewidth_ix + b.linewidth_ix;
c.pathseg_ix = a.pathseg_ix + b.pathseg_ix;
c.path_ix = a.path_ix + b.path_ix;
c.pathseg_offset = a.pathseg_offset + b.pathseg_offset;
return c;
}
static inline __attribute__((always_inline))
TagMonoid tag_monoid_identity()
{
return TagMonoid{ 0u, 0u, 0u, 0u, 0u };
}
kernel void main0(device DataBuf& _78 [[buffer(0)]], uint3 gl_GlobalInvocationID [[thread_position_in_grid]], uint3 gl_LocalInvocationID [[thread_position_in_threadgroup]])
{
threadgroup TagMonoid sh_scratch[512];
uint ix = gl_GlobalInvocationID.x * 8u;
spvUnsafeArray<TagMonoid, 8> local;
local[0].trans_ix = _78.data[ix].trans_ix;
local[0].linewidth_ix = _78.data[ix].linewidth_ix;
local[0].pathseg_ix = _78.data[ix].pathseg_ix;
local[0].path_ix = _78.data[ix].path_ix;
local[0].pathseg_offset = _78.data[ix].pathseg_offset;
TagMonoid param_1;
for (uint i = 1u; i < 8u; i++)
{
uint _109 = ix + i;
TagMonoid param = local[i - 1u];
param_1.trans_ix = _78.data[_109].trans_ix;
param_1.linewidth_ix = _78.data[_109].linewidth_ix;
param_1.pathseg_ix = _78.data[_109].pathseg_ix;
param_1.path_ix = _78.data[_109].path_ix;
param_1.pathseg_offset = _78.data[_109].pathseg_offset;
local[i] = combine_tag_monoid(param, param_1);
}
TagMonoid agg = local[7];
sh_scratch[gl_LocalInvocationID.x] = agg;
for (uint i_1 = 0u; i_1 < 9u; i_1++)
{
threadgroup_barrier(mem_flags::mem_threadgroup);
if (gl_LocalInvocationID.x >= (1u << i_1))
{
TagMonoid other = sh_scratch[gl_LocalInvocationID.x - (1u << i_1)];
TagMonoid param_2 = other;
TagMonoid param_3 = agg;
agg = combine_tag_monoid(param_2, param_3);
}
threadgroup_barrier(mem_flags::mem_threadgroup);
sh_scratch[gl_LocalInvocationID.x] = agg;
}
threadgroup_barrier(mem_flags::mem_threadgroup);
TagMonoid row = tag_monoid_identity();
if (gl_LocalInvocationID.x > 0u)
{
row = sh_scratch[gl_LocalInvocationID.x - 1u];
}
for (uint i_2 = 0u; i_2 < 8u; i_2++)
{
TagMonoid param_4 = row;
TagMonoid param_5 = local[i_2];
TagMonoid m = combine_tag_monoid(param_4, param_5);
uint _211 = ix + i_2;
_78.data[_211].trans_ix = m.trans_ix;
_78.data[_211].linewidth_ix = m.linewidth_ix;
_78.data[_211].pathseg_ix = m.pathseg_ix;
_78.data[_211].path_ix = m.path_ix;
_78.data[_211].pathseg_offset = m.pathseg_offset;
}
}

Binary file not shown.

View file

@ -37,8 +37,12 @@ struct Config
     Alloc pathseg_alloc;
     Alloc anno_alloc;
     Alloc trans_alloc;
+    Alloc bbox_alloc;
     uint n_trans;
     uint trans_offset;
+    uint pathtag_offset;
+    uint linewidth_offset;
+    uint pathseg_offset;
 };
 static const uint3 gl_WorkGroupSize = uint3(512u, 1u, 1u);
@ -144,7 +148,7 @@ void TransformSeg_write(Alloc a, TransformSegRef ref, TransformSeg s)
 void comp_main()
 {
     uint ix = gl_GlobalInvocationID.x * 8u;
-    TransformRef _285 = { _278.Load(44) + (ix * 24u) };
+    TransformRef _285 = { _278.Load(48) + (ix * 24u) };
     TransformRef ref = _285;
     TransformRef param = ref;
     Transform agg = Transform_read(param);

View file

@ -100,8 +100,12 @@ struct Config
     Alloc_1 pathseg_alloc;
     Alloc_1 anno_alloc;
     Alloc_1 trans_alloc;
+    Alloc_1 bbox_alloc;
     uint n_trans;
     uint trans_offset;
+    uint pathtag_offset;
+    uint linewidth_offset;
+    uint pathseg_offset;
 };
 struct ConfigBuf

View file

@ -26,8 +26,12 @@ struct Config
     Alloc pathseg_alloc;
     Alloc anno_alloc;
     Alloc trans_alloc;
+    Alloc bbox_alloc;
     uint n_trans;
     uint trans_offset;
+    uint pathtag_offset;
+    uint linewidth_offset;
+    uint pathseg_offset;
 };
 static const uint3 gl_WorkGroupSize = uint3(512u, 1u, 1u);
@ -81,7 +85,7 @@ Transform combine_monoid(Transform a, Transform b)
 void comp_main()
 {
     uint ix = gl_GlobalInvocationID.x * 8u;
-    TransformRef _168 = { _161.Load(44) + (ix * 24u) };
+    TransformRef _168 = { _161.Load(48) + (ix * 24u) };
     TransformRef ref = _168;
     TransformRef param = ref;
     Transform agg = Transform_read(param);

View file

@ -38,8 +38,12 @@ struct Config
     Alloc pathseg_alloc;
     Alloc anno_alloc;
     Alloc trans_alloc;
+    Alloc bbox_alloc;
     uint n_trans;
     uint trans_offset;
+    uint pathtag_offset;
+    uint linewidth_offset;
+    uint pathseg_offset;
 };
 struct ConfigBuf

Binary file not shown.

Binary file not shown.


@@ -0,0 +1,284 @@
// SPDX-License-Identifier: Apache-2.0 OR MIT OR Unlicense
// Processing of the path stream, after the tag scan.
#version 450
#extension GL_GOOGLE_include_directive : enable
#include "mem.h"
#include "setup.h"
#include "pathtag.h"
#define N_SEQ 4
#define LG_WG_SIZE 9
#define WG_SIZE (1 << LG_WG_SIZE)
#define PARTITION_SIZE (WG_SIZE * N_SEQ)
layout(local_size_x = WG_SIZE, local_size_y = 1) in;
layout(binding = 1) readonly buffer ConfigBuf {
Config conf;
};
layout(binding = 2) readonly buffer SceneBuf {
uint[] scene;
};
#include "tile.h"
#include "pathseg.h"
layout(binding = 3) readonly buffer ParentBuf {
TagMonoid[] parent;
};
struct Monoid {
vec4 bbox;
uint flags;
};
#define FLAG_RESET_BBOX 1
#define FLAG_SET_BBOX 2
Monoid combine_monoid(Monoid a, Monoid b) {
Monoid c;
c.bbox = b.bbox;
// TODO: I think this should be gated on b & SET_BBOX == false also.
if ((a.flags & FLAG_RESET_BBOX) == 0 && b.bbox.z <= b.bbox.x && b.bbox.w <= b.bbox.y) {
c.bbox = a.bbox;
} else if ((a.flags & FLAG_RESET_BBOX) == 0 && (b.flags & FLAG_SET_BBOX) == 0 &&
(a.bbox.z > a.bbox.x || a.bbox.w > a.bbox.y))
{
c.bbox.xy = min(a.bbox.xy, c.bbox.xy);
c.bbox.zw = max(a.bbox.zw, c.bbox.zw);
}
c.flags = (a.flags & FLAG_SET_BBOX) | b.flags;
c.flags |= ((a.flags & FLAG_RESET_BBOX) << 1);
return c;
}
Monoid monoid_identity() {
return Monoid(vec4(0.0, 0.0, 0.0, 0.0), 0);
}
// These are not both live at the same time. A very smart shader compiler
// would be able to figure that out, but I suspect many won't.
shared TagMonoid sh_tag[WG_SIZE];
shared Monoid sh_scratch[WG_SIZE];
vec2 read_f32_point(uint ix) {
float x = uintBitsToFloat(scene[ix]);
float y = uintBitsToFloat(scene[ix + 1]);
return vec2(x, y);
}
vec2 read_i16_point(uint ix) {
uint raw = scene[ix];
float x = float(int(raw << 16) >> 16);
float y = float(int(raw) >> 16);
return vec2(x, y);
}
// Note: these are 16 bit, which is adequate, but we could use 32 bits.
// Round down and saturate to minimum integer; add bias
uint round_down(float x) {
return uint(max(0.0, floor(x) + 32768.0));
}
// Round up and saturate to maximum integer; add bias
uint round_up(float x) {
return uint(min(65535.0, ceil(x) + 32768.0));
}
void main() {
Monoid local[N_SEQ];
uint ix = gl_GlobalInvocationID.x * N_SEQ;
uint tag_word = scene[(conf.pathtag_offset >> 2) + (ix >> 2)];
// Scan the tag monoid
TagMonoid local_tm = reduce_tag(tag_word);
sh_tag[gl_LocalInvocationID.x] = local_tm;
for (uint i = 0; i < LG_WG_SIZE; i++) {
barrier();
if (gl_LocalInvocationID.x >= (1u << i)) {
TagMonoid other = sh_tag[gl_LocalInvocationID.x - (1u << i)];
local_tm = combine_tag_monoid(other, local_tm);
}
barrier();
sh_tag[gl_LocalInvocationID.x] = local_tm;
}
barrier();
// sh_tag is now the partition-wide inclusive scan of the tag monoid.
TagMonoid tm = tag_monoid_identity();
if (gl_WorkGroupID.x > 0) {
tm = parent[gl_WorkGroupID.x - 1];
}
if (gl_LocalInvocationID.x > 0) {
tm = combine_tag_monoid(tm, sh_tag[gl_LocalInvocationID.x - 1]);
}
// tm is now the full exclusive scan of the tag monoid.
// Indices to scene buffer in u32 units.
uint ps_ix = (conf.pathseg_offset >> 2) + tm.pathseg_offset;
uint lw_ix = (conf.linewidth_offset >> 2) + tm.linewidth_ix;
uint save_path_ix = tm.path_ix;
TransformSegRef trans_ref = TransformSegRef(conf.trans_alloc.offset + tm.trans_ix * TransformSeg_size);
PathSegRef ps_ref = PathSegRef(conf.pathseg_alloc.offset + tm.pathseg_ix * PathSeg_size);
for (uint i = 0; i < N_SEQ; i++) {
// if N_SEQ > 4, need to load tag_word from local if i % 4 == 0
uint tag_byte = tag_word >> (i * 8);
uint seg_type = tag_byte & 3;
if (seg_type != 0) {
// 1 = line, 2 = quad, 3 = cubic
// Unpack path segment from input
vec2 p0;
vec2 p1;
vec2 p2;
vec2 p3;
if ((tag_byte & 8) != 0) {
// 32 bit encoding
p0 = read_f32_point(ps_ix);
p1 = read_f32_point(ps_ix + 2);
if (seg_type >= 2) {
p2 = read_f32_point(ps_ix + 4);
if (seg_type == 3) {
p3 = read_f32_point(ps_ix + 6);
}
}
} else {
// 16 bit encoding
p0 = read_i16_point(ps_ix);
p1 = read_i16_point(ps_ix + 1);
if (seg_type >= 2) {
p2 = read_i16_point(ps_ix + 2);
if (seg_type == 3) {
p3 = read_i16_point(ps_ix + 3);
}
}
}
float linewidth = uintBitsToFloat(scene[lw_ix]);
TransformSeg transform = TransformSeg_read(conf.trans_alloc, trans_ref);
p0 = transform.mat.xy * p0.x + transform.mat.zw * p0.y + transform.translate;
p1 = transform.mat.xy * p1.x + transform.mat.zw * p1.y + transform.translate;
vec4 bbox = vec4(min(p0, p1), max(p0, p1));
// Degree-raise and compute bbox
if (seg_type >= 2) {
p2 = transform.mat.xy * p2.x + transform.mat.zw * p2.y + transform.translate;
bbox.xy = min(bbox.xy, p2);
bbox.zw = max(bbox.zw, p2);
if (seg_type == 3) {
p3 = transform.mat.xy * p3.x + transform.mat.zw * p3.y + transform.translate;
bbox.xy = min(bbox.xy, p3);
bbox.zw = max(bbox.zw, p3);
} else {
p3 = p2;
p2 = mix(p1, p2, 1.0 / 3.0);
p1 = mix(p1, p0, 1.0 / 3.0);
}
} else {
p3 = p1;
p2 = mix(p3, p0, 1.0 / 3.0);
p1 = mix(p0, p3, 1.0 / 3.0);
}
vec2 stroke = vec2(0.0, 0.0);
if (linewidth >= 0.0) {
// See https://www.iquilezles.org/www/articles/ellipses/ellipses.htm
stroke = 0.5 * linewidth * vec2(length(transform.mat.xz), length(transform.mat.yw));
bbox += vec4(-stroke, stroke);
}
local[i].bbox = bbox;
local[i].flags = 0;
// Write path segment to output
PathCubic cubic;
cubic.p0 = p0;
cubic.p1 = p1;
cubic.p2 = p2;
cubic.p3 = p3;
cubic.path_ix = tm.path_ix;
// Not needed, TODO remove from struct
cubic.trans_ix = gl_GlobalInvocationID.x * 4 + i;
cubic.stroke = stroke;
uint fill_mode = uint(linewidth >= 0.0);
PathSeg_Cubic_write(conf.pathseg_alloc, ps_ref, fill_mode, cubic);
ps_ref.offset += PathSeg_size;
uint n_points = (tag_byte & 3) + ((tag_byte >> 2) & 1);
uint n_words = n_points + (n_points & (((tag_byte >> 3) & 1) * 15));
ps_ix += n_words;
} else {
local[i].bbox = vec4(0.0, 0.0, 0.0, 0.0);
// These shifts need to be kept in sync with setup.h
uint is_path = (tag_byte >> 4) & 1;
// Relies on the fact that RESET_BBOX == 1
local[i].flags = is_path;
tm.path_ix += is_path;
trans_ref.offset += ((tag_byte >> 5) & 1) * TransformSeg_size;
lw_ix += (tag_byte >> 6) & 1;
}
}
// Partition-wide monoid scan for bbox monoid
Monoid agg = local[0];
for (uint i = 1; i < N_SEQ; i++) {
// Note: this could be fused with the map above, but probably
// a thin performance gain not worth the complexity.
agg = combine_monoid(agg, local[i]);
local[i] = agg;
}
// local is N_SEQ sub-partition inclusive scan of bbox monoid.
sh_scratch[gl_LocalInvocationID.x] = agg;
for (uint i = 0; i < LG_WG_SIZE; i++) {
barrier();
if (gl_LocalInvocationID.x >= (1u << i)) {
Monoid other = sh_scratch[gl_LocalInvocationID.x - (1u << i)];
agg = combine_monoid(other, agg);
}
barrier();
sh_scratch[gl_LocalInvocationID.x] = agg;
}
// sh_scratch is the partition-wide inclusive scan of the bbox monoid,
// sampled at the end of the N_SEQ sub-partition.
barrier();
uint path_ix = save_path_ix;
uint bbox_out_ix = (conf.bbox_alloc.offset >> 2) + path_ix * 4;
// Write bboxes to paths; do atomic min/max if partial
Monoid row = monoid_identity();
if (gl_LocalInvocationID.x > 0) {
row = sh_scratch[gl_LocalInvocationID.x - 1];
}
for (uint i = 0; i < N_SEQ; i++) {
Monoid m = combine_monoid(row, local[i]);
// m is partition-wide inclusive scan of bbox monoid.
bool do_atomic = false;
if (i == N_SEQ - 1 && gl_LocalInvocationID.x == WG_SIZE - 1) {
// last element
do_atomic = true;
}
if ((m.flags & FLAG_RESET_BBOX) != 0) {
if ((m.flags & FLAG_SET_BBOX) == 0) {
do_atomic = true;
} else {
memory[bbox_out_ix] = round_down(m.bbox.x);
memory[bbox_out_ix + 1] = round_down(m.bbox.y);
memory[bbox_out_ix + 2] = round_up(m.bbox.z);
memory[bbox_out_ix + 3] = round_up(m.bbox.w);
bbox_out_ix += 4;
do_atomic = false;
}
}
if (do_atomic) {
if (m.bbox.z > m.bbox.x || m.bbox.w > m.bbox.y) {
// atomic min/max
atomicMin(memory[bbox_out_ix], round_down(m.bbox.x));
atomicMin(memory[bbox_out_ix + 1], round_down(m.bbox.y));
atomicMax(memory[bbox_out_ix + 2], round_up(m.bbox.z));
atomicMax(memory[bbox_out_ix + 3], round_up(m.bbox.w));
}
bbox_out_ix += 4;
}
}
}
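As a cross-check on the size arithmetic at the end of the segment branch above (`n_points`/`n_words`), here is a hypothetical CPU mirror in Rust; the function name and test values are illustrative, not part of the commit:

```
// Illustrative CPU mirror of the tag-byte advance in pathseg.comp.
fn tag_byte_advance(tag_byte: u8) -> (u32, u32) {
    let tag = tag_byte as u32;
    // Points for this segment: the low 2 bits give the segment degree,
    // and bit 2 (subpath end) contributes the extra end point.
    let n_points = (tag & 3) + ((tag >> 2) & 1);
    // Bit 3 selects the f32 encoding; each point is then two u32 words
    // instead of one, so the word count doubles. Since n_points <= 4,
    // masking with 15 either keeps it unchanged or zeroes it.
    let n_words = n_points + (n_points & (((tag >> 3) & 1) * 15));
    (n_points, n_words)
}

fn main() {
    // 0x0b = cubic, not subpath end, f32 encoding: 3 points, 6 words.
    assert_eq!(tag_byte_advance(0x0b), (3, 6));
    // 0x05 = line + subpath end, i16 encoding: 2 points, 2 words.
    assert_eq!(tag_byte_advance(0x05), (2, 2));
}
```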

49
piet-gpu/shader/pathtag.h Normal file

@@ -0,0 +1,49 @@
// SPDX-License-Identifier: Apache-2.0 OR MIT OR Unlicense
// Common data structures and functions for the path tag stream.
// This is the layout for tag bytes in the path stream. See
// doc/pathseg.md for an explanation.
#define PATH_TAG_PATHSEG_BITS 0xf
#define PATH_TAG_PATH 0x10
#define PATH_TAG_TRANSFORM 0x20
#define PATH_TAG_LINEWIDTH 0x40
struct TagMonoid {
uint trans_ix;
uint linewidth_ix;
uint pathseg_ix;
uint path_ix;
uint pathseg_offset;
};
TagMonoid tag_monoid_identity() {
return TagMonoid(0, 0, 0, 0, 0);
}
TagMonoid combine_tag_monoid(TagMonoid a, TagMonoid b) {
TagMonoid c;
c.trans_ix = a.trans_ix + b.trans_ix;
c.linewidth_ix = a.linewidth_ix + b.linewidth_ix;
c.pathseg_ix = a.pathseg_ix + b.pathseg_ix;
c.path_ix = a.path_ix + b.path_ix;
c.pathseg_offset = a.pathseg_offset + b.pathseg_offset;
return c;
}
TagMonoid reduce_tag(uint tag_word) {
TagMonoid c;
// Some fun bit magic here, see doc/pathseg.md for explanation.
uint point_count = tag_word & 0x3030303;
c.pathseg_ix = bitCount((point_count * 7) & 0x4040404);
c.linewidth_ix = bitCount(tag_word & (PATH_TAG_LINEWIDTH * 0x1010101));
c.path_ix = bitCount(tag_word & (PATH_TAG_PATH * 0x1010101));
c.trans_ix = bitCount(tag_word & (PATH_TAG_TRANSFORM * 0x1010101));
uint n_points = point_count + ((tag_word >> 2) & 0x1010101);
uint a = n_points + (n_points & (((tag_word >> 3) & 0x1010101) * 15));
a += a >> 8;
a += a >> 16;
c.pathseg_offset = a & 0xff;
return c;
}
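The bit magic in `reduce_tag` is easier to verify on the CPU. Below is a hypothetical Rust mirror of its two trickiest fields, `pathseg_ix` and `pathseg_offset`; the function name and test word are illustrative, not part of the commit:

```
// Illustrative CPU mirror of the SWAR tricks in reduce_tag.
fn reduce_tag_offsets(tag_word: u32) -> (u32, u32) {
    // Low 2 bits of each tag byte: the point count implied by the
    // segment type (1 = line, 2 = quad, 3 = cubic).
    let point_count = tag_word & 0x0303_0303;
    // Multiplying each byte (0..=3) by 7 sets bit 2 of that byte iff
    // the byte is nonzero, so the popcount counts the segments.
    let pathseg_ix = (point_count.wrapping_mul(7) & 0x0404_0404).count_ones();
    // Add the subpath-end bit to get points per byte, then double the
    // bytes whose f32 flag (bit 3) is set to get u32 words per byte.
    let n_points = point_count + ((tag_word >> 2) & 0x0101_0101);
    let mut a = n_points + (n_points & (((tag_word >> 3) & 0x0101_0101).wrapping_mul(15)));
    // Horizontal sum of the four bytes into the low byte (max 32, no carry).
    a += a >> 8;
    a += a >> 16;
    (pathseg_ix, a & 0xff)
}

fn main() {
    // Tag bytes, low byte first: 0x0d = line + end (f32), 0x0b = cubic
    // (f32), 0x02 = quad (i16), 0x00 = nop.
    let (segs, words) = reduce_tag_offsets(0x0002_0b0d);
    assert_eq!((segs, words), (3, 12)); // 4 + 6 + 2 words
}
```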


@@ -0,0 +1,61 @@
// SPDX-License-Identifier: Apache-2.0 OR MIT OR Unlicense
// The reduction phase for path tag scan implemented as a tree reduction.
#version 450
#extension GL_GOOGLE_include_directive : enable
#include "mem.h"
#include "setup.h"
#include "pathtag.h"
// Note: the partition size is smaller than pathseg by a factor
// of 4, as there are 4 tag bytes to a tag word.
#define N_ROWS 4
#define LG_WG_SIZE 7
#define WG_SIZE (1 << LG_WG_SIZE)
#define PARTITION_SIZE (WG_SIZE * N_ROWS)
layout(local_size_x = WG_SIZE, local_size_y = 1) in;
layout(binding = 1) readonly buffer ConfigBuf {
Config conf;
};
layout(binding = 2) readonly buffer SceneBuf {
uint[] scene;
};
#define Monoid TagMonoid
layout(set = 0, binding = 3) buffer OutBuf {
Monoid[] outbuf;
};
shared Monoid sh_scratch[WG_SIZE];
void main() {
uint ix = gl_GlobalInvocationID.x * N_ROWS;
uint scene_ix = (conf.pathtag_offset >> 2) + ix;
uint tag_word = scene[scene_ix];
Monoid agg = reduce_tag(tag_word);
for (uint i = 1; i < N_ROWS; i++) {
tag_word = scene[scene_ix + i];
agg = combine_tag_monoid(agg, reduce_tag(tag_word));
}
sh_scratch[gl_LocalInvocationID.x] = agg;
for (uint i = 0; i < LG_WG_SIZE; i++) {
barrier();
// We could make this predicate tighter, but would it help?
if (gl_LocalInvocationID.x + (1u << i) < WG_SIZE) {
Monoid other = sh_scratch[gl_LocalInvocationID.x + (1u << i)];
agg = combine_tag_monoid(agg, other);
}
barrier();
sh_scratch[gl_LocalInvocationID.x] = agg;
}
if (gl_LocalInvocationID.x == 0) {
outbuf[gl_WorkGroupID.x] = agg;
}
}


@@ -0,0 +1,74 @@
// SPDX-License-Identifier: Apache-2.0 OR MIT OR Unlicense
// The scan phase for the path tag scan, implemented as a tree reduction.
#version 450
#extension GL_GOOGLE_include_directive : enable
#include "pathtag.h"
#define N_ROWS 8
#define LG_WG_SIZE 9
#define WG_SIZE (1 << LG_WG_SIZE)
#define PARTITION_SIZE (WG_SIZE * N_ROWS)
layout(local_size_x = WG_SIZE, local_size_y = 1) in;
#define Monoid TagMonoid
#define combine_monoid combine_tag_monoid
#define monoid_identity tag_monoid_identity
layout(binding = 0) buffer DataBuf {
Monoid[] data;
};
#ifndef ROOT
layout(binding = 1) readonly buffer ParentBuf {
Monoid[] parent;
};
#endif
shared Monoid sh_scratch[WG_SIZE];
void main() {
Monoid local[N_ROWS];
uint ix = gl_GlobalInvocationID.x * N_ROWS;
local[0] = data[ix];
for (uint i = 1; i < N_ROWS; i++) {
local[i] = combine_monoid(local[i - 1], data[ix + i]);
}
Monoid agg = local[N_ROWS - 1];
sh_scratch[gl_LocalInvocationID.x] = agg;
for (uint i = 0; i < LG_WG_SIZE; i++) {
barrier();
if (gl_LocalInvocationID.x >= (1u << i)) {
Monoid other = sh_scratch[gl_LocalInvocationID.x - (1u << i)];
agg = combine_monoid(other, agg);
}
barrier();
sh_scratch[gl_LocalInvocationID.x] = agg;
}
barrier();
// This could be a semigroup instead of a monoid if we reworked the
// conditional logic, but that might impact performance.
Monoid row = monoid_identity();
#ifdef ROOT
if (gl_LocalInvocationID.x > 0) {
row = sh_scratch[gl_LocalInvocationID.x - 1];
}
#else
if (gl_WorkGroupID.x > 0) {
row = parent[gl_WorkGroupID.x - 1];
}
if (gl_LocalInvocationID.x > 0) {
row = combine_monoid(row, sh_scratch[gl_LocalInvocationID.x - 1]);
}
#endif
for (uint i = 0; i < N_ROWS; i++) {
Monoid m = combine_monoid(row, local[i]);
data[ix + i] = m;
}
}
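Together with pathtag_reduce.comp above, this forms the usual two-level scan: reduce each partition, scan the partition aggregates at the root, then re-scan within partitions seeded by the parent prefix (for path tags, that last pass lives in pathseg.comp). A minimal CPU sketch of the same structure, using u32 addition as a stand-in for the tag monoid; this is illustrative, not code from the commit:

```
// Minimal CPU sketch of the reduce/root/leaf scan structure.
fn two_level_exclusive_scan(data: &[u32], part_size: usize) -> Vec<u32> {
    // Reduce: one aggregate per partition (cf. pathtag_reduce).
    let partials: Vec<u32> = data.chunks(part_size).map(|c| c.iter().sum()).collect();
    // Root: exclusive scan over the partition aggregates (cf. pathtag_root).
    let mut parents = vec![0u32; partials.len()];
    for i in 1..partials.len() {
        parents[i] = parents[i - 1] + partials[i - 1];
    }
    // Leaf: rescan each partition, seeded by its parent prefix.
    let mut out = Vec::with_capacity(data.len());
    for (chunk, &seed) in data.chunks(part_size).zip(&parents) {
        let mut acc = seed;
        for &x in chunk {
            out.push(acc); // exclusive: prefix before this element
            acc += x;
        }
    }
    out
}

fn main() {
    let data = [1u32, 2, 3, 4, 5, 6];
    assert_eq!(two_level_exclusive_scan(&data, 4), vec![0, 1, 3, 6, 10, 15]);
}
```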


@@ -40,10 +40,20 @@ struct Config {
     Alloc trans_alloc;
     // new element pipeline stuff follows
+    // Bounding boxes of paths, stored as int (so atomics work)
+    Alloc bbox_alloc;
     // Number of transforms in scene
+    // This is probably not needed.
     uint n_trans;
     // Offset (in bytes) of transform stream in scene buffer
     uint trans_offset;
+    // Offset (in bytes) of path tag stream in scene
+    uint pathtag_offset;
+    // Offset (in bytes) of linewidth stream in scene
+    uint linewidth_offset;
+    // Offset (in bytes) of path segment stream in scene
+    uint pathseg_offset;
 };

 // Fill modes.

Binary file not shown.


@@ -48,7 +48,6 @@ void main() {
     uint ix = gl_GlobalInvocationID.x * N_ROWS;
-    // TODO: gate buffer read
     local[0] = data[ix];
     for (uint i = 1; i < N_ROWS; i++) {
         local[i] = combine_monoid(local[i - 1], data[ix + i]);


@@ -298,7 +298,6 @@ impl Renderer {
         alloc += (n_paths * ANNO_SIZE + 3) & !3;
         let trans_base = alloc;
         alloc += (n_trans * TRANS_SIZE + 3) & !3;
-        let trans_offset = 0; // For new element pipeline, not yet used
         let config = Config {
             n_elements: n_paths as u32,
             n_pathseg: n_pathseg as u32,
@@ -311,7 +310,8 @@ impl Renderer {
             anno_alloc: anno_base as u32,
             trans_alloc: trans_base as u32,
             n_trans: n_trans as u32,
-            trans_offset: trans_offset as u32,
+            // We'll fill the rest of the fields in when we hook up the new element pipeline.
+            ..Default::default()
         };
         unsafe {
             let scene = render_ctx.get_scene_buf();


@@ -16,12 +16,13 @@
 //! Stages for new element pipeline, exposed for testing.

+mod path;
+mod transform;
+
 use bytemuck::{Pod, Zeroable};
-use piet::kurbo::Affine;
-use piet_gpu_hal::{
-    include_shader, BindType, Buffer, BufferUsage, CmdBuf, DescriptorSet, Pipeline, Session,
-};
+pub use path::{PathBinding, PathCode, PathEncoder, PathStage};
+pub use transform::{Transform, TransformBinding, TransformCode, TransformStage};

 /// The configuration block passed to piet-gpu shaders.
 ///
@@ -39,171 +40,10 @@ pub struct Config {
     pub pathseg_alloc: u32,
     pub anno_alloc: u32,
     pub trans_alloc: u32,
+    pub bbox_alloc: u32,
     pub n_trans: u32,
     pub trans_offset: u32,
+    pub pathtag_offset: u32,
+    pub linewidth_offset: u32,
+    pub pathseg_offset: u32,
 }

// The individual stages will probably be separate files but for now, all in one.
// This is equivalent to the version in piet-gpu-types, but the bytemuck
// representation will likely be faster.
#[repr(C)]
#[derive(Clone, Copy, Debug, Default, Zeroable, Pod)]
pub struct Transform {
pub mat: [f32; 4],
pub translate: [f32; 2],
}
const TRANSFORM_WG: u64 = 512;
const TRANSFORM_N_ROWS: u64 = 8;
const TRANSFORM_PART_SIZE: u64 = TRANSFORM_WG * TRANSFORM_N_ROWS;
pub struct TransformCode {
reduce_pipeline: Pipeline,
root_pipeline: Pipeline,
leaf_pipeline: Pipeline,
}
pub struct TransformStage {
// Right now we're limited to partition^2 (~16M) elements. This can be
// expanded but is tedious.
root_buf: Buffer,
root_ds: DescriptorSet,
}
pub struct TransformBinding {
reduce_ds: DescriptorSet,
leaf_ds: DescriptorSet,
}
impl TransformCode {
pub unsafe fn new(session: &Session) -> TransformCode {
let reduce_code = include_shader!(session, "../shader/gen/transform_reduce");
let reduce_pipeline = session
.create_compute_pipeline(
reduce_code,
&[
BindType::Buffer,
BindType::BufReadOnly,
BindType::BufReadOnly,
BindType::Buffer,
],
)
.unwrap();
let root_code = include_shader!(session, "../shader/gen/transform_root");
let root_pipeline = session
.create_compute_pipeline(root_code, &[BindType::Buffer])
.unwrap();
let leaf_code = include_shader!(session, "../shader/gen/transform_leaf");
let leaf_pipeline = session
.create_compute_pipeline(
leaf_code,
&[
BindType::Buffer,
BindType::BufReadOnly,
BindType::BufReadOnly,
BindType::BufReadOnly,
],
)
.unwrap();
TransformCode {
reduce_pipeline,
root_pipeline,
leaf_pipeline,
}
}
}
impl TransformStage {
pub unsafe fn new(session: &Session, code: &TransformCode) -> TransformStage {
// We're limited to TRANSFORM_PART_SIZE^2
// Also note: size here allows padding
let root_buf_size = TRANSFORM_PART_SIZE * 32;
let root_buf = session
.create_buffer(root_buf_size, BufferUsage::STORAGE)
.unwrap();
let root_ds = session
.create_simple_descriptor_set(&code.root_pipeline, &[&root_buf])
.unwrap();
TransformStage { root_buf, root_ds }
}
pub unsafe fn bind(
&self,
session: &Session,
code: &TransformCode,
config_buf: &Buffer,
scene_buf: &Buffer,
memory_buf: &Buffer,
) -> TransformBinding {
let reduce_ds = session
.create_simple_descriptor_set(
&code.reduce_pipeline,
&[memory_buf, config_buf, scene_buf, &self.root_buf],
)
.unwrap();
let leaf_ds = session
.create_simple_descriptor_set(
&code.leaf_pipeline,
&[memory_buf, config_buf, scene_buf, &self.root_buf],
)
.unwrap();
TransformBinding { reduce_ds, leaf_ds }
}
pub unsafe fn record(
&self,
cmd_buf: &mut CmdBuf,
code: &TransformCode,
binding: &TransformBinding,
size: u64,
) {
if size > TRANSFORM_PART_SIZE.pow(2) {
panic!("very large scan not yet implemented");
}
let n_workgroups = (size + TRANSFORM_PART_SIZE - 1) / TRANSFORM_PART_SIZE;
if n_workgroups > 1 {
cmd_buf.dispatch(
&code.reduce_pipeline,
&binding.reduce_ds,
(n_workgroups as u32, 1, 1),
(TRANSFORM_WG as u32, 1, 1),
);
cmd_buf.memory_barrier();
cmd_buf.dispatch(
&code.root_pipeline,
&self.root_ds,
(1, 1, 1),
(TRANSFORM_WG as u32, 1, 1),
);
cmd_buf.memory_barrier();
}
cmd_buf.dispatch(
&code.leaf_pipeline,
&binding.leaf_ds,
(n_workgroups as u32, 1, 1),
(TRANSFORM_WG as u32, 1, 1),
);
}
}
impl Transform {
pub fn from_kurbo(a: Affine) -> Transform {
let c = a.as_coeffs();
Transform {
mat: [c[0] as f32, c[1] as f32, c[2] as f32, c[3] as f32],
translate: [c[4] as f32, c[5] as f32],
}
}
pub fn to_kurbo(self) -> Affine {
Affine::new([
self.mat[0] as f64,
self.mat[1] as f64,
self.mat[2] as f64,
self.mat[3] as f64,
self.translate[0] as f64,
self.translate[1] as f64,
])
}
}

339
piet-gpu/src/stages/path.rs Normal file

@@ -0,0 +1,339 @@
// Copyright 2021 The piet-gpu authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// https://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
// Also licensed under MIT license, at your choice.
//! The path stage (includes substages).
use piet_gpu_hal::{
include_shader, BindType, Buffer, BufferUsage, CmdBuf, DescriptorSet, Pipeline, Session,
};
pub struct PathCode {
reduce_pipeline: Pipeline,
tag_root_pipeline: Pipeline,
clear_pipeline: Pipeline,
pathseg_pipeline: Pipeline,
}
pub struct PathStage {
tag_root_buf: Buffer,
tag_root_ds: DescriptorSet,
}
pub struct PathBinding {
reduce_ds: DescriptorSet,
clear_ds: DescriptorSet,
path_ds: DescriptorSet,
}
const REDUCE_WG: u32 = 128;
const REDUCE_N_ROWS: u32 = 4;
const REDUCE_PART_SIZE: u32 = REDUCE_WG * REDUCE_N_ROWS;
const ROOT_WG: u32 = 512;
const ROOT_N_ROWS: u32 = 8;
const ROOT_PART_SIZE: u32 = ROOT_WG * ROOT_N_ROWS;
const SCAN_WG: u32 = 512;
const SCAN_N_ROWS: u32 = 4;
const SCAN_PART_SIZE: u32 = SCAN_WG * SCAN_N_ROWS;
const CLEAR_WG: u32 = 512;
impl PathCode {
pub unsafe fn new(session: &Session) -> PathCode {
// TODO: add cross-compilation
let reduce_code = include_shader!(session, "../../shader/gen/pathtag_reduce");
let reduce_pipeline = session
.create_compute_pipeline(
reduce_code,
&[
BindType::Buffer,
BindType::BufReadOnly,
BindType::BufReadOnly,
BindType::Buffer,
],
)
.unwrap();
let tag_root_code = include_shader!(session, "../../shader/gen/pathtag_root");
let tag_root_pipeline = session
.create_compute_pipeline(tag_root_code, &[BindType::Buffer])
.unwrap();
let clear_code = include_shader!(session, "../../shader/gen/bbox_clear");
let clear_pipeline = session
.create_compute_pipeline(clear_code, &[BindType::Buffer, BindType::BufReadOnly])
.unwrap();
let pathseg_code = include_shader!(session, "../../shader/gen/pathseg");
let pathseg_pipeline = session
.create_compute_pipeline(
pathseg_code,
&[
BindType::Buffer,
BindType::BufReadOnly,
BindType::BufReadOnly,
BindType::BufReadOnly,
],
)
.unwrap();
PathCode {
reduce_pipeline,
tag_root_pipeline,
clear_pipeline,
pathseg_pipeline,
}
}
}
impl PathStage {
pub unsafe fn new(session: &Session, code: &PathCode) -> PathStage {
let tag_root_buf_size = (ROOT_PART_SIZE * 20) as u64;
let tag_root_buf = session
.create_buffer(tag_root_buf_size, BufferUsage::STORAGE)
.unwrap();
let tag_root_ds = session
.create_simple_descriptor_set(&code.tag_root_pipeline, &[&tag_root_buf])
.unwrap();
PathStage {
tag_root_buf,
tag_root_ds,
}
}
pub unsafe fn bind(
&self,
session: &Session,
code: &PathCode,
config_buf: &Buffer,
scene_buf: &Buffer,
memory_buf: &Buffer,
) -> PathBinding {
let reduce_ds = session
.create_simple_descriptor_set(
&code.reduce_pipeline,
&[memory_buf, config_buf, scene_buf, &self.tag_root_buf],
)
.unwrap();
let clear_ds = session
.create_simple_descriptor_set(&code.clear_pipeline, &[memory_buf, config_buf])
.unwrap();
let path_ds = session
.create_simple_descriptor_set(
&code.pathseg_pipeline,
&[memory_buf, config_buf, scene_buf, &self.tag_root_buf],
)
.unwrap();
PathBinding {
reduce_ds,
clear_ds,
path_ds,
}
}
/// Record the path stage.
///
/// Note: no barrier is needed for transform output, we have a barrier before
/// those are consumed. Result is written without barrier.
pub unsafe fn record(
&self,
cmd_buf: &mut CmdBuf,
code: &PathCode,
binding: &PathBinding,
n_paths: u32,
n_tags: u32,
) {
if n_tags > ROOT_PART_SIZE * SCAN_PART_SIZE {
println!(
"number of pathsegs exceeded {} > {}",
n_tags,
ROOT_PART_SIZE * SCAN_PART_SIZE
);
}
// Number of tags consumed in a tag reduce workgroup
let reduce_part_tags = REDUCE_PART_SIZE * 4;
let n_wg_tag_reduce = (n_tags + reduce_part_tags - 1) / reduce_part_tags;
if n_wg_tag_reduce > 1 {
cmd_buf.dispatch(
&code.reduce_pipeline,
&binding.reduce_ds,
(n_wg_tag_reduce, 1, 1),
(REDUCE_WG, 1, 1),
);
// I think we can skip root if n_wg_tag_reduce == 2
cmd_buf.memory_barrier();
cmd_buf.dispatch(
&code.tag_root_pipeline,
&self.tag_root_ds,
(1, 1, 1),
(ROOT_WG, 1, 1),
);
// No barrier needed here; clear doesn't depend on path tags
}
let n_wg_clear = (n_paths + CLEAR_WG - 1) / CLEAR_WG;
cmd_buf.dispatch(
&code.clear_pipeline,
&binding.clear_ds,
(n_wg_clear, 1, 1),
(CLEAR_WG, 1, 1),
);
cmd_buf.memory_barrier();
let n_wg_pathseg = (n_tags + SCAN_PART_SIZE - 1) / SCAN_PART_SIZE;
cmd_buf.dispatch(
&code.pathseg_pipeline,
&binding.path_ds,
(n_wg_pathseg, 1, 1),
(SCAN_WG, 1, 1),
);
}
}
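To make the dispatch sizing in `record` concrete, here is a small worked example using the constants defined above in this file; the tag count and snippet itself are illustrative, not part of the commit:

```
// Worked example of the dispatch sizing in PathStage::record.
fn main() {
    const REDUCE_WG: u32 = 128;
    const REDUCE_N_ROWS: u32 = 4;
    const REDUCE_PART_SIZE: u32 = REDUCE_WG * REDUCE_N_ROWS; // 512 tag words
    const ROOT_WG: u32 = 512;
    const ROOT_N_ROWS: u32 = 8;
    const ROOT_PART_SIZE: u32 = ROOT_WG * ROOT_N_ROWS; // 4096 partials
    const SCAN_WG: u32 = 512;
    const SCAN_N_ROWS: u32 = 4;
    const SCAN_PART_SIZE: u32 = SCAN_WG * SCAN_N_ROWS; // 2048 tag bytes

    let n_tags: u32 = 1 << 16; // 65536 tag bytes in the scene stream
    // One root pass bounds the total, as checked at the top of record().
    assert!(n_tags <= ROOT_PART_SIZE * SCAN_PART_SIZE); // ~8.4M tags
    // Each reduce workgroup consumes 4 tag bytes per tag word.
    let reduce_part_tags = REDUCE_PART_SIZE * 4; // 2048 tag bytes
    let n_wg_tag_reduce = (n_tags + reduce_part_tags - 1) / reduce_part_tags;
    let n_wg_pathseg = (n_tags + SCAN_PART_SIZE - 1) / SCAN_PART_SIZE;
    assert_eq!((n_wg_tag_reduce, n_wg_pathseg), (32, 32));
}
```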
pub struct PathEncoder<'a> {
tag_stream: &'a mut Vec<u8>,
// If we're never going to use the i16 encoding, it might be
// slightly faster to store this as Vec<u32>; we'd get aligned
// stores on ARM etc.
pathseg_stream: &'a mut Vec<u8>,
first_pt: [f32; 2],
state: State,
n_pathseg: u32,
}
#[derive(PartialEq)]
enum State {
Start,
MoveTo,
NonemptySubpath,
}
impl<'a> PathEncoder<'a> {
pub fn new(tags: &'a mut Vec<u8>, pathsegs: &'a mut Vec<u8>) -> PathEncoder<'a> {
PathEncoder {
tag_stream: tags,
pathseg_stream: pathsegs,
first_pt: [0.0, 0.0],
state: State::Start,
n_pathseg: 0,
}
}
pub fn move_to(&mut self, x: f32, y: f32) {
let buf = [x, y];
let bytes = bytemuck::bytes_of(&buf);
self.first_pt = buf;
if self.state == State::MoveTo {
let new_len = self.pathseg_stream.len() - 8;
self.pathseg_stream.truncate(new_len);
}
if self.state == State::NonemptySubpath {
if let Some(tag) = self.tag_stream.last_mut() {
*tag |= 4;
}
}
self.pathseg_stream.extend_from_slice(bytes);
self.state = State::MoveTo;
}
pub fn line_to(&mut self, x: f32, y: f32) {
if self.state == State::Start {
// should warn or error
return;
}
let buf = [x, y];
let bytes = bytemuck::bytes_of(&buf);
self.pathseg_stream.extend_from_slice(bytes);
self.tag_stream.push(9);
self.state = State::NonemptySubpath;
self.n_pathseg += 1;
}
pub fn quad_to(&mut self, x0: f32, y0: f32, x1: f32, y1: f32) {
if self.state == State::Start {
return;
}
let buf = [x0, y0, x1, y1];
let bytes = bytemuck::bytes_of(&buf);
self.pathseg_stream.extend_from_slice(bytes);
self.tag_stream.push(10);
self.state = State::NonemptySubpath;
self.n_pathseg += 1;
}
pub fn cubic_to(&mut self, x0: f32, y0: f32, x1: f32, y1: f32, x2: f32, y2: f32) {
if self.state == State::Start {
return;
}
let buf = [x0, y0, x1, y1, x2, y2];
let bytes = bytemuck::bytes_of(&buf);
self.pathseg_stream.extend_from_slice(bytes);
self.tag_stream.push(11);
self.state = State::NonemptySubpath;
self.n_pathseg += 1;
}
pub fn close_path(&mut self) {
match self.state {
State::Start => return,
State::MoveTo => {
let new_len = self.pathseg_stream.len() - 8;
self.pathseg_stream.truncate(new_len);
return;
}
State::NonemptySubpath => (),
}
let len = self.pathseg_stream.len();
if len < 8 {
// can't happen
return;
}
let first_bytes = bytemuck::bytes_of(&self.first_pt);
if &self.pathseg_stream[len - 8..len] != first_bytes {
self.pathseg_stream.extend_from_slice(first_bytes);
self.tag_stream.push(13);
self.n_pathseg += 1;
} else {
if let Some(tag) = self.tag_stream.last_mut() {
*tag |= 4;
}
}
self.state = State::Start;
}
fn finish(&mut self) {
if self.state == State::MoveTo {
let new_len = self.pathseg_stream.len() - 8;
self.pathseg_stream.truncate(new_len);
}
if let Some(tag) = self.tag_stream.last_mut() {
*tag |= 4;
}
}
/// Finish encoding a path.
///
/// Encode this after encoding path segments.
pub fn path(&mut self) {
self.finish();
// maybe don't encode if path is empty? might throw off sync though
self.tag_stream.push(0x10);
}
/// Get the number of path segments.
///
/// This is the number of path segments that will be written by the
/// path stage; use this for allocating the output buffer.
pub fn n_pathseg(&self) -> u32 {
self.n_pathseg
}
}
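For orientation, a hypothetical usage of `PathEncoder` as defined above, encoding a triangle as one closed subpath; the snippet assumes `PathEncoder` is in scope, and the expected tag values follow from the `line_to`, `close_path`, and `path` implementations:

```
// Illustrative PathEncoder usage: one closed triangle, then the path marker.
fn main() {
    let mut tags: Vec<u8> = Vec::new();
    let mut pathsegs: Vec<u8> = Vec::new();
    let mut encoder = PathEncoder::new(&mut tags, &mut pathsegs);
    encoder.move_to(0.0, 0.0);
    encoder.line_to(100.0, 0.0);
    encoder.line_to(0.0, 100.0);
    encoder.close_path(); // emits the closing line back to (0, 0)
    encoder.path();
    assert_eq!(encoder.n_pathseg(), 3);
    // Three f32 lineto tags (9), end bit (4) on the last segment,
    // followed by the path tag (0x10).
    assert_eq!(tags, vec![9, 9, 9 | 4, 0x10]);
}
```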


@@ -0,0 +1,188 @@
// Copyright 2021 The piet-gpu authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// https://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
// Also licensed under MIT license, at your choice.
//! The transform stage of the element processing pipeline.
use bytemuck::{Pod, Zeroable};
use piet::kurbo::Affine;
use piet_gpu_hal::{
include_shader, BindType, Buffer, BufferUsage, CmdBuf, DescriptorSet, Pipeline, Session,
};
/// An affine transform.
// This is equivalent to the version in piet-gpu-types, but the bytemuck
// representation will likely be faster.
#[repr(C)]
#[derive(Clone, Copy, Debug, Default, Zeroable, Pod)]
pub struct Transform {
pub mat: [f32; 4],
pub translate: [f32; 2],
}
const TRANSFORM_WG: u64 = 512;
const TRANSFORM_N_ROWS: u64 = 8;
const TRANSFORM_PART_SIZE: u64 = TRANSFORM_WG * TRANSFORM_N_ROWS;
pub struct TransformCode {
reduce_pipeline: Pipeline,
root_pipeline: Pipeline,
leaf_pipeline: Pipeline,
}
pub struct TransformStage {
// Right now we're limited to partition^2 (~16M) elements. This can be
// expanded but is tedious.
root_buf: Buffer,
root_ds: DescriptorSet,
}
pub struct TransformBinding {
reduce_ds: DescriptorSet,
leaf_ds: DescriptorSet,
}
impl TransformCode {
pub unsafe fn new(session: &Session) -> TransformCode {
let reduce_code = include_shader!(session, "../../shader/gen/transform_reduce");
let reduce_pipeline = session
.create_compute_pipeline(
reduce_code,
&[
BindType::Buffer,
BindType::BufReadOnly,
BindType::BufReadOnly,
BindType::Buffer,
],
)
.unwrap();
let root_code = include_shader!(session, "../../shader/gen/transform_root");
let root_pipeline = session
.create_compute_pipeline(root_code, &[BindType::Buffer])
.unwrap();
let leaf_code = include_shader!(session, "../../shader/gen/transform_leaf");
let leaf_pipeline = session
.create_compute_pipeline(
leaf_code,
&[
BindType::Buffer,
BindType::BufReadOnly,
BindType::BufReadOnly,
BindType::BufReadOnly,
],
)
.unwrap();
TransformCode {
reduce_pipeline,
root_pipeline,
leaf_pipeline,
}
}
}
impl TransformStage {
pub unsafe fn new(session: &Session, code: &TransformCode) -> TransformStage {
// We're limited to TRANSFORM_PART_SIZE^2
// Also note: size here allows padding
let root_buf_size = TRANSFORM_PART_SIZE * 32;
let root_buf = session
.create_buffer(root_buf_size, BufferUsage::STORAGE)
.unwrap();
let root_ds = session
.create_simple_descriptor_set(&code.root_pipeline, &[&root_buf])
.unwrap();
TransformStage { root_buf, root_ds }
}
pub unsafe fn bind(
&self,
session: &Session,
code: &TransformCode,
config_buf: &Buffer,
scene_buf: &Buffer,
memory_buf: &Buffer,
) -> TransformBinding {
let reduce_ds = session
.create_simple_descriptor_set(
&code.reduce_pipeline,
&[memory_buf, config_buf, scene_buf, &self.root_buf],
)
.unwrap();
let leaf_ds = session
.create_simple_descriptor_set(
&code.leaf_pipeline,
&[memory_buf, config_buf, scene_buf, &self.root_buf],
)
.unwrap();
TransformBinding { reduce_ds, leaf_ds }
}
pub unsafe fn record(
&self,
cmd_buf: &mut CmdBuf,
code: &TransformCode,
binding: &TransformBinding,
size: u64,
) {
if size > TRANSFORM_PART_SIZE.pow(2) {
panic!("very large scan not yet implemented");
}
let n_workgroups = (size + TRANSFORM_PART_SIZE - 1) / TRANSFORM_PART_SIZE;
if n_workgroups > 1 {
cmd_buf.dispatch(
&code.reduce_pipeline,
&binding.reduce_ds,
(n_workgroups as u32, 1, 1),
(TRANSFORM_WG as u32, 1, 1),
);
cmd_buf.memory_barrier();
cmd_buf.dispatch(
&code.root_pipeline,
&self.root_ds,
(1, 1, 1),
(TRANSFORM_WG as u32, 1, 1),
);
cmd_buf.memory_barrier();
}
cmd_buf.dispatch(
&code.leaf_pipeline,
&binding.leaf_ds,
(n_workgroups as u32, 1, 1),
(TRANSFORM_WG as u32, 1, 1),
);
}
}
impl Transform {
pub fn from_kurbo(a: Affine) -> Transform {
let c = a.as_coeffs();
Transform {
mat: [c[0] as f32, c[1] as f32, c[2] as f32, c[3] as f32],
translate: [c[4] as f32, c[5] as f32],
}
}
pub fn to_kurbo(self) -> Affine {
Affine::new([
self.mat[0] as f64,
self.mat[1] as f64,
self.mat[2] as f64,
self.mat[3] as f64,
self.translate[0] as f64,
self.translate[1] as f64,
])
}
}
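A small round-trip sketch for the kurbo conversions above; illustrative, not part of the commit, and it assumes `Transform` is in scope with the `piet` dependency available:

```
// Illustrative round trip through from_kurbo / to_kurbo.
fn main() {
    use piet::kurbo::Affine;
    let t = Transform::from_kurbo(Affine::translate((10.0, 20.0)));
    assert_eq!(t.mat, [1.0, 0.0, 0.0, 1.0]);
    assert_eq!(t.translate, [10.0, 20.0]);
    // Converting back recovers the original coefficients.
    assert_eq!(t.to_kurbo().as_coeffs(), [1.0, 0.0, 0.0, 1.0, 10.0, 20.0]);
}
```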


@@ -25,6 +25,8 @@ mod prefix_tree;
 mod runner;
 mod test_result;

+#[cfg(feature = "piet-gpu")]
+mod path;
 #[cfg(feature = "piet-gpu")]
 mod transform;

@@ -134,6 +136,7 @@ fn main() {
     #[cfg(feature = "piet-gpu")]
     if config.groups.matches("piet") {
         report(&transform::transform_test(&mut runner, &config));
+        report(&path::path_test(&mut runner, &config));
     }
 }
 }

293
tests/src/path.rs Normal file

@@ -0,0 +1,293 @@
// Copyright 2021 The piet-gpu authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// https://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
// Also licensed under MIT license, at your choice.
//! Tests for the piet-gpu path stage.
use crate::{Config, Runner, TestResult};
use bytemuck::{Pod, Zeroable};
use piet_gpu::stages::{self, PathCode, PathEncoder, PathStage};
use piet_gpu_hal::{BufWrite, BufferUsage};
use rand::{prelude::ThreadRng, Rng};
struct PathData {
n_trans: u32,
n_linewidth: u32,
n_path: u32,
n_pathseg: u32,
tags: Vec<u8>,
pathsegs: Vec<u8>,
bbox: Vec<(f32, f32, f32, f32)>,
lines: Vec<([f32; 2], [f32; 2])>,
}
// This is designed to match pathseg.h
#[repr(C)]
#[derive(Clone, Copy, Debug, Default, Zeroable, Pod)]
struct PathSeg {
tag: u32,
p0: [f32; 2],
p1: [f32; 2],
p2: [f32; 2],
p3: [f32; 2],
path_ix: u32,
trans_ix: u32,
stroke: [f32; 2],
}
#[repr(C)]
#[derive(Clone, Copy, Debug, Default, PartialEq, Zeroable, Pod)]
struct Bbox {
left: u32,
top: u32,
right: u32,
bottom: u32,
}
pub unsafe fn path_test(runner: &mut Runner, config: &Config) -> TestResult {
let mut result = TestResult::new("path");
let n_path: u64 = config.size.choose(1 << 12, 1 << 16, 1 << 18);
let path_data = PathData::new(n_path as u32);
let stage_config = path_data.get_config();
let config_buf = runner
.session
.create_buffer_init(std::slice::from_ref(&stage_config), BufferUsage::STORAGE)
.unwrap();
let scene_size = n_path * 256;
let scene_buf = runner
.session
.create_buffer_with(
scene_size,
|b| path_data.fill_scene(b),
BufferUsage::STORAGE,
)
.unwrap();
let memory_init = runner
.session
.create_buffer_with(
path_data.memory_init_size(),
|b| path_data.fill_memory(b),
BufferUsage::COPY_SRC,
)
.unwrap();
let memory = runner.buf_down(path_data.memory_full_size(), BufferUsage::empty());
let code = PathCode::new(&runner.session);
let stage = PathStage::new(&runner.session, &code);
let binding = stage.bind(
&runner.session,
&code,
&config_buf,
&scene_buf,
&memory.dev_buf,
);
let mut total_elapsed = 0.0;
let n_iter = config.n_iter;
for i in 0..n_iter {
let mut commands = runner.commands();
commands.cmd_buf.copy_buffer(&memory_init, &memory.dev_buf);
commands.cmd_buf.memory_barrier();
commands.write_timestamp(0);
stage.record(
&mut commands.cmd_buf,
&code,
&binding,
path_data.n_path,
path_data.tags.len() as u32,
);
commands.write_timestamp(1);
if i == 0 || config.verify_all {
commands.cmd_buf.memory_barrier();
commands.download(&memory);
}
total_elapsed += runner.submit(commands);
if i == 0 || config.verify_all {
let dst = memory.map_read(..);
if let Some(failure) = path_data.verify(&dst) {
result.fail(failure);
}
}
}
let n_elements = path_data.n_pathseg as u64;
result.timing(total_elapsed, n_elements * n_iter);
result
}
fn rand_point(rng: &mut ThreadRng) -> (f32, f32) {
let x = rng.gen_range(0.0, 100.0);
let y = rng.gen_range(0.0, 100.0);
(x, y)
}
// Must match shader/pathseg.h
const PATHSEG_SIZE: u32 = 52;
impl PathData {
fn new(n_path: u32) -> PathData {
let mut rng = rand::thread_rng();
let n_trans = 1;
let n_linewidth = 1;
let segments_per_path = 8;
let mut tags = Vec::new();
let mut pathsegs = Vec::new();
let mut bbox = Vec::new();
let mut lines = Vec::new();
let mut encoder = PathEncoder::new(&mut tags, &mut pathsegs);
for _ in 0..n_path {
let (x, y) = rand_point(&mut rng);
let mut min_x = x;
let mut max_x = x;
let mut min_y = y;
let mut max_y = y;
let first_pt = [x, y];
let mut last_pt = [x, y];
encoder.move_to(x, y);
for _ in 0..segments_per_path {
let (x, y) = rand_point(&mut rng);
lines.push((last_pt, [x, y]));
last_pt = [x, y];
encoder.line_to(x, y);
min_x = min_x.min(x);
max_x = max_x.max(x);
min_y = min_y.min(y);
max_y = max_y.max(y);
}
bbox.push((min_x, min_y, max_x, max_y));
encoder.close_path();
// With very low probability last_pt and first_pt might be equal, which
// would cause a test failure - might want to seed RNG.
lines.push((last_pt, first_pt));
encoder.path();
}
let n_pathseg = encoder.n_pathseg();
//println!("tags: {:x?}", &tags[0..8]);
//println!("path: {:?}", bytemuck::cast_slice::<u8, f32>(&pathsegs[0..64]));
PathData {
n_trans,
n_linewidth,
n_path,
n_pathseg,
tags,
pathsegs,
bbox,
lines,
}
}
fn get_config(&self) -> stages::Config {
let n_trans = self.n_trans;
// Layout of scene buffer
let linewidth_offset = 0;
let pathtag_offset = linewidth_offset + self.n_linewidth * 4;
let n_tagbytes = self.tags.len() as u32;
// Depends on workgroup size, maybe get from stages?
let padded_n_tagbytes = (n_tagbytes + 2047) & !2047;
let pathseg_offset = pathtag_offset + padded_n_tagbytes;
// Layout of memory
let trans_alloc = 0;
let pathseg_alloc = trans_alloc + n_trans * 24;
let bbox_alloc = pathseg_alloc + self.n_pathseg * PATHSEG_SIZE;
let stage_config = stages::Config {
n_elements: self.n_path,
pathseg_alloc,
trans_alloc,
bbox_alloc,
n_trans,
pathtag_offset,
linewidth_offset,
pathseg_offset,
..Default::default()
};
stage_config
}
fn fill_scene(&self, buf: &mut BufWrite) {
let linewidth = -1.0f32;
buf.push(linewidth);
buf.extend_slice(&self.tags);
buf.fill_zero(self.tags.len().wrapping_neg() & 2047);
buf.extend_slice(&self.pathsegs);
}
fn memory_init_size(&self) -> u64 {
let mut size = 8; // offset and error
size += self.n_trans * 24;
size as u64
}
fn memory_full_size(&self) -> u64 {
let mut size = self.memory_init_size();
size += (self.n_pathseg * PATHSEG_SIZE) as u64;
size += (self.n_path * 16) as u64;
size
}
fn fill_memory(&self, buf: &mut BufWrite) {
// This stage is not dynamically allocating memory
let mem_offset = 0u32;
let mem_error = 0u32;
let mem_init = [mem_offset, mem_error];
buf.push(mem_init);
let trans = [1.0f32, 0.0, 0.0, 1.0, 0.0, 0.0];
buf.push(trans);
}
fn verify(&self, memory: &[u8]) -> Option<String> {
fn round_down(x: f32) -> u32 {
(x.floor() + 32768.0) as u32
}
fn round_up(x: f32) -> u32 {
(x.ceil() + 32768.0) as u32
}
let begin_pathseg = 32;
for i in 0..self.n_pathseg {
let offset = (begin_pathseg + PATHSEG_SIZE * i) as usize;
let actual =
bytemuck::from_bytes::<PathSeg>(&memory[offset..offset + PATHSEG_SIZE as usize]);
let expected = self.lines[i as usize];
const EPSILON: f32 = 1e-9;
if (expected.0[0] - actual.p0[0]).abs() > EPSILON
|| (expected.0[1] - actual.p0[1]).abs() > EPSILON
|| (expected.1[0] - actual.p3[0]).abs() > EPSILON
|| (expected.1[1] - actual.p3[1]).abs() > EPSILON
{
println!("{}: {:.1?} {:.1?}", i, actual, expected);
}
}
let begin_bbox = 32 + PATHSEG_SIZE * self.n_pathseg;
for i in 0..self.n_path {
let offset = (begin_bbox + 16 * i) as usize;
let actual = bytemuck::from_bytes::<Bbox>(&memory[offset..offset + 16]);
let expected_f32 = self.bbox[i as usize];
let expected = Bbox {
left: round_down(expected_f32.0),
top: round_down(expected_f32.1),
right: round_up(expected_f32.2),
bottom: round_up(expected_f32.3),
};
if expected != *actual {
println!("{}: {:?} {:?}", i, actual, expected);
return Some(format!("bbox mismatch at {}", i));
}
}
None
}
}


@@ -14,7 +14,7 @@
 //
 // Also licensed under MIT license, at your choice.

-//! Tests for piet-gpu shaders.
+//! Tests for the piet-gpu transform stage.

 use crate::{Config, Runner, TestResult};

@@ -37,11 +37,9 @@ pub unsafe fn transform_test(runner: &mut Runner, config: &Config) -> TestResult
         .session
         .create_buffer_init(&data.input_data, BufferUsage::STORAGE)
         .unwrap();
-    let memory = runner.buf_down(data_buf.size() + 24, BufferUsage::empty());
+    let memory = runner.buf_down(data_buf.size() + 8, BufferUsage::empty());
     let stage_config = stages::Config {
         n_trans: n_elements as u32,
-        // This is a hack to get elements aligned.
-        trans_alloc: 16,
         ..Default::default()
     };
     let config_buf = runner
@@ -71,9 +69,8 @@ pub unsafe fn transform_test(runner: &mut Runner, config: &Config) -> TestResult
         }
         total_elapsed += runner.submit(commands);
         if i == 0 || config.verify_all {
-            let mut dst: Vec<Transform> = Default::default();
-            memory.read(&mut dst);
-            if let Some(failure) = data.verify(&dst[1..]) {
+            let dst = memory.map_read(8..);
+            if let Some(failure) = data.verify(dst.cast_slice()) {
                 result.fail(failure);
             }
         }