Age | Commit message | Author |
|
|
This new API allows buffer implementations to know when a user is
actively accessing the buffer's underlying storage. This is
important for the upcoming client-backed wlr_buffer implementation.
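Roughly, a caller brackets CPU access like the sketch below. The function names (wlr_buffer_begin_data_ptr_access/wlr_buffer_end_data_ptr_access), the header path and the exact parameter list are assumptions here, not a verbatim copy of the API.

    #define WLR_USE_UNSTABLE
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <wlr/types/wlr_buffer.h>

    /* Copy pixels out of a buffer while telling the implementation that
     * its underlying storage is being accessed. */
    static bool read_buffer(struct wlr_buffer *buffer) {
        void *data;
        uint32_t format;
        size_t stride;
        /* Assumed entry point: the impl can map or lock its storage here. */
        if (!wlr_buffer_begin_data_ptr_access(buffer, &data, &format, &stride)) {
            return false;
        }
        /* ... read from data ... */
        /* Assumed exit point: the impl can unmap or flush here. */
        wlr_buffer_end_data_ptr_access(buffer);
        return true;
    }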
|
|
This allows compositors to choose a wlr_buffer to render to. This
is a less awkward interface than having to call bind_buffer() before
and after begin() and end().
Closes: https://github.com/swaywm/wlroots/issues/2618
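A minimal sketch of the intended call pattern, assuming wlr_renderer_begin_with_buffer is the entry point (treat the exact signature as an assumption):

    #define WLR_USE_UNSTABLE
    #include <stdbool.h>
    #include <wlr/render/wlr_renderer.h>
    #include <wlr/types/wlr_buffer.h>

    /* Render one frame directly into a caller-chosen buffer, instead of
     * wrapping begin()/end() with bind_buffer() calls. */
    static bool render_frame(struct wlr_renderer *renderer,
            struct wlr_buffer *buffer) {
        if (!wlr_renderer_begin_with_buffer(renderer, buffer)) {
            return false;
        }
        float color[4] = {0.2f, 0.2f, 0.2f, 1.0f};
        wlr_renderer_clear(renderer, color);
        wlr_renderer_end(renderer);
        return true;
    }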
|
|
This probably already fell apart, but let's make it explicit that
this is not allowed.
|
|
This allows users to know the capabilities of the buffers that
will be allocated. The buffer capability is important to
know when negotiating buffer formats.
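A sketch of how a compositor might consult such a capability field when negotiating formats; the buffer_caps field, the header path and the WLR_BUFFER_CAP_* values are assumptions here:

    #define WLR_USE_UNSTABLE
    #include <stdbool.h>
    #include <wlr/render/allocator.h>
    #include <wlr/types/wlr_buffer.h>

    /* Only negotiate DMA-BUF formats when the allocator actually produces
     * DMA-BUF-capable buffers. */
    static bool allocator_supports_dmabuf(struct wlr_allocator *alloc) {
        return (alloc->buffer_caps & WLR_BUFFER_CAP_DMABUF) != 0;
    }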
|
|
If we aren't the DRM master, allocating dumb buffers will fail with
EPERM.
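One way to detect this up front is libdrm's drmIsMaster(); whether the dumb-buffer allocator does exactly this is an assumption, the sketch only illustrates the check:

    #include <stdbool.h>
    #include <xf86drm.h>

    /* Dumb-buffer creation requires DRM master; without it the
     * CREATE_DUMB ioctl fails with EPERM, so skip this allocator. */
    static bool can_use_dumb_allocator(int drm_fd) {
        return drmIsMaster(drm_fd);
    }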
|
|
When importing a DMA-BUF wlr_buffer as a wlr_texture, the GLES2
renderer caches the result, in case the buffer is used for texturing
again in the future. When the wlr_texture is destroyed by the caller,
the wlr_buffer is unref'ed, but the wlr_gles2_texture is kept around.
This is fine because wlr_gles2_texture listens for wlr_buffer's destroy
event to avoid any use-after-free.
However, with this logic wlr_texture_destroy doesn't "really" destroy
the wlr_gles2_texture. It just decrements the wlr_buffer ref'count.
Each wlr_texture_destroy call must have a matching prior
wlr_texture_create_from_buffer call or the ref'counting will go south.
When destroying the renderer, we don't want to decrement any wlr_buffer
ref'count. Instead, we want to go through any cached wlr_gles2_texture
and destroy our GL state. So instead of calling wlr_texture_destroy, we
need to call our internal gles2_texture_destroy function.
Closes: https://github.com/swaywm/wlroots/issues/2941
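A sketch of the resulting split between the two destroy paths; the struct fields and the gles2_texture_unref name are assumptions based on the description above, only gles2_texture_destroy is named in the text:

    #define WLR_USE_UNSTABLE
    #include <stdlib.h>
    #include <GLES2/gl2.h>
    #include <wayland-server-core.h>
    #include <wlr/types/wlr_buffer.h>

    /* Public destroy path: the caller is done with its reference, so only
     * drop the wlr_buffer ref; the cached GL object stays alive. */
    static void gles2_texture_unref(struct wlr_gles2_texture *texture) {
        wlr_buffer_unlock(texture->buffer);
    }

    /* Internal destroy path: actually release the GL state. Used when the
     * wlr_buffer is destroyed, or when the renderer itself is torn down. */
    static void gles2_texture_destroy(struct wlr_gles2_texture *texture) {
        wl_list_remove(&texture->buffer_destroy.link);
        glDeleteTextures(1, &texture->tex);
        free(texture);
    }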
|
|
Some formats have a bytes-per-pixel value lower than 1. Let's not encode
an arbitrary limitation into the wlr_renderer API.
|
|
When the matrix doesn't have a rotation, we can avoid a sqrt() call.
Tested with Sway's tabbed containers.
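The optimization, sketched on 3x3 row-major 2D transform matrices; this is an illustration of the idea, not the exact wlroots code:

    #include <math.h>

    /* Length of the transformed x-axis basis vector:
     * sqrt(m[0]^2 + m[3]^2). Without a rotation component the
     * off-diagonal term is zero, so this collapses to |m[0]| and
     * the sqrt() can be skipped. */
    static float matrix_scale_x(const float m[9]) {
        if (m[3] == 0.0f) {
            return fabsf(m[0]); /* no rotation: cheap path */
        }
        return sqrtf(m[0] * m[0] + m[3] * m[3]); /* general case */
    }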
|
|
See [1]. This allows us to remove the workaround for GBM API
limitations.
[1]: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/5442
|
|
Make it so wlr_gles2_texture is ref'counted (via wlr_buffer). This
is similar to how wlr_gles2_buffer and wlr_drm_fb work.
When creating a wlr_texture from a wlr_buffer, first check if we
already have a texture for the buffer. If so, increase the
wlr_buffer ref'count and make sure any changes made by an external
process are made visible (by invalidating the texture).
When destroying a wlr_texture created from a wlr_buffer, decrease
the ref'count, but keep the wlr_texture around in case the caller
uses it again. When the wlr_buffer is destroyed, cleanup the
wlr_texture.
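A sketch of the lookup-or-create logic this describes; the buffer_textures list and the gles2_texture_invalidate/gles2_texture_import helpers are made-up names used for illustration:

    #define WLR_USE_UNSTABLE
    #include <wayland-util.h>
    #include <wlr/types/wlr_buffer.h>

    /* Look up an existing texture for this buffer before importing again. */
    static struct wlr_gles2_texture *get_or_create_texture(
            struct wlr_gles2_renderer *renderer, struct wlr_buffer *buffer) {
        struct wlr_gles2_texture *texture;
        wl_list_for_each(texture, &renderer->buffer_textures, link) {
            if (texture->buffer == buffer) {
                wlr_buffer_lock(buffer);           /* take a new reference */
                gles2_texture_invalidate(texture); /* pick up external changes */
                return texture;
            }
        }
        texture = gles2_texture_import(renderer, buffer); /* first import */
        if (texture != NULL) {
            wl_list_insert(&renderer->buffer_textures, &texture->link);
        }
        return texture;
    }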
|
|
This adds a function to create a wlr_texture from a wlr_buffer.
The main motivation for this is to allow the renderer to create a
single wlr_texture per wlr_buffer. This can avoid needless imports
by re-using existing textures.
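Caller-side, the new entry point might be used like this; wlr_texture_from_buffer is assumed to be the name, the exact signature is not guaranteed here:

    #define WLR_USE_UNSTABLE
    #include <wlr/render/wlr_texture.h>

    /* Importing the same wlr_buffer twice can hand back the same
     * underlying texture instead of re-importing it. */
    static void use_buffer(struct wlr_renderer *renderer,
            struct wlr_buffer *buf) {
        struct wlr_texture *tex = wlr_texture_from_buffer(renderer, buf);
        if (tex == NULL) {
            return;
        }
        /* ... render with tex ... */
        wlr_texture_destroy(tex);
    }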
|
|
This centralizes the wlr_texture initialization.
In future commits, more fields will need to get initialized.
|
|
We require the extension in the renderer init function.
|
|
Users can just access the width/height fields directly.
|
|
GL_RENDERER typically displays a human-readable string for the name
of the GPU, and EGL_VENDOR typically displays a human-readable string
for the GPU manufacturer. EGL_DRIVER_NAME_EXT should give the name of
the driver in use.
References: https://github.com/KhronosGroup/EGL-Registry/commit/e8baa0bf39120803505c6e360e1e33af0d9b9745
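Querying it looks roughly like the sketch below, assuming an eglext.h recent enough to define EGL_DRIVER_NAME_EXT; the caller is assumed to have checked for EGL_EXT_device_persistent_id and resolved the query function pointer already:

    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    /* Return the driver name for an EGL device. */
    static const char *egl_driver_name(EGLDeviceEXT device,
            PFNEGLQUERYDEVICESTRINGEXTPROC query_device_string) {
        return query_device_string(device, EGL_DRIVER_NAME_EXT);
    }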
|
|
Rendering a wlr_texture with a different wlr_renderer is invalid.
Add an assert to make sure this doesn't happen.
|
|
Same as wlr_allocator_autocreate, but allows the caller to force a
DRM FD.
Similar to renderer_autocreate_with_drm_fd.
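As a rough guess at the shape of the new helper; both the exact name and the parameter list are assumptions based on the description above:

    struct wlr_allocator;
    struct wlr_backend;
    struct wlr_renderer;

    /* Like wlr_allocator_autocreate(), but the caller supplies the DRM FD
     * instead of the allocator obtaining one from the backend. */
    struct wlr_allocator *wlr_allocator_autocreate_with_drm_fd(
        struct wlr_backend *backend, struct wlr_renderer *renderer, int drm_fd);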
|
|
This function is only required because the DRM backend still needs
to perform multi-GPU magic under-the-hood. Remove the wlr_ prefix
to make it clear it's not a candidate for being made public.
|
|
Makes it easier to figure out why a backend/renderer is picked.
|
|
Allow wlr_buffer_impl.get_data_ptr to return a format.
This allows the Pixman renderer to not care about get_dmabuf/get_shm,
and only care about get_data_ptr. This will also help with [1], because
client wl_shm buffers can't implement get_shm.
[1]: https://github.com/swaywm/wlroots/pull/2892
References: https://github.com/swaywm/wlroots/issues/2864
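An illustrative sketch of the hook's shape after this change; this is not a copy of the real wlr_buffer_impl definition, and the parameter order is an assumption:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    struct wlr_buffer;

    /* One call yields a CPU pointer plus the DRM format and stride,
     * which is all a software renderer needs, however the buffer
     * happens to be backed. */
    struct example_buffer_impl {
        bool (*get_data_ptr)(struct wlr_buffer *buffer, void **data,
            uint32_t *format, size_t *stride);
        /* ... other hooks elided ... */
    };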
|
|
Prior to this commit, WLR_RENDERER was only looked up when the
backend returned a DRM FD.
Make it so WLR_RENDERER is always looked up, so that running wlroots
on a system without a GPU and with WLR_RENDERER=gles2 fails as
expected instead of falling back to the Pixman renderer.
|
|
The NULL check already exists in wlr_texture_destroy, so there is no need
to duplicate it in each impl.
|
|
We were leaking wlr_pixman_buffers and a wlr_drm_format_set.
|
|
Make it clear GLES2 is being used. Before this commit, various
GL-related information was printed, but not an easy-to-find line
about which renderer is being picked up.
|
|
This env var forces the creation of a specific renderer. If no renderer
is specified, the function will try to create all of the renderers one
by one until one is created successfully.
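The selection logic amounts to something like the following sketch; the create_gles2_renderer/create_pixman_renderer helpers are made up for illustration and do not reflect the real internal structure:

    #include <stdlib.h>
    #include <string.h>

    struct wlr_renderer;

    /* Hypothetical constructors, for illustration only. */
    struct wlr_renderer *create_gles2_renderer(int drm_fd);
    struct wlr_renderer *create_pixman_renderer(void);

    /* Honor WLR_RENDERER if set, otherwise try each renderer in turn. */
    static struct wlr_renderer *autocreate_renderer(int drm_fd) {
        const char *name = getenv("WLR_RENDERER");
        if (name != NULL) {
            if (strcmp(name, "gles2") == 0) {
                return create_gles2_renderer(drm_fd);
            }
            if (strcmp(name, "pixman") == 0) {
                return create_pixman_renderer();
            }
            return NULL; /* unknown renderer requested: fail, don't fall back */
        }
        struct wlr_renderer *r = create_gles2_renderer(drm_fd);
        if (r == NULL) {
            r = create_pixman_renderer();
        }
        return r;
    }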
|
|
Allow selecting whether the GLES2 renderer gets enabled.
Co-authored-by: Simon Ser <contact@emersion.fr>
|
|
It allocates in local main memory via shm_open, and provides an FD
to allow sharing with other processes.
This is suitable for software rendering under the Wayland and X11
backends.
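The underlying allocation pattern is plain POSIX shared memory; a generic sketch, not wlroots' exact code (a real implementation would also randomize the name to avoid collisions):

    #define _POSIX_C_SOURCE 200112L
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Create an anonymous shared-memory file of the given size and return
     * an FD that can be mapped locally and sent to another process. */
    static int create_shm_file(off_t size) {
        char name[64];
        snprintf(name, sizeof(name), "/wlroots-shm-%d", getpid());
        int fd = shm_open(name, O_RDWR | O_CREAT | O_EXCL, 0600);
        if (fd < 0) {
            return -1;
        }
        shm_unlink(name); /* keep the FD, drop the name immediately */
        if (ftruncate(fd, size) < 0) {
            close(fd);
            return -1;
        }
        return fd;
    }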
|
|
The compositor shouldn't write to client buffers if the client
attaches a DMA-BUF to a wl_surface, then attaches a shm buffer.
Make gles2_texture_write_pixels return an error to prevent this
from happening.
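A sketch of the kind of guard this implies; the struct fields are assumptions, and the real function takes stride/region arguments that are elided here:

    #include <stdbool.h>
    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <GLES2/gl2.h>

    /* write_pixels is only valid for textures whose storage the renderer
     * allocated itself (shm uploads); an imported client buffer such as a
     * DMA-BUF must never be written to by the compositor. */
    static bool gles2_texture_write_pixels(struct wlr_gles2_texture *texture,
            const void *data) {
        if (texture->image != EGL_NO_IMAGE_KHR) {
            return false; /* imported buffer: refuse to write */
        }
        /* ... glTexSubImage2D() upload path elided ... */
        (void)data;
        return true;
    }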
|
|
The get_drm_fd function was made available in an internal header in a53ab146f. Move it
to the public header now so that consumers opting in to the unstable interfaces can
make use of it.
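Consumer-side, that looks like the sketch below, assuming the usual unstable-interface opt-in define:

    #define WLR_USE_UNSTABLE
    #include <wlr/render/wlr_renderer.h>

    /* A compositor can now retrieve the renderer's DRM FD directly,
     * e.g. to create its own allocator for the same device. */
    static int renderer_drm_fd(struct wlr_renderer *renderer) {
        return wlr_renderer_get_drm_fd(renderer); /* may be -1 if unavailable */
    }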
|
|
PRIME support for buffer sharing has become mandatory since the renderer
rewrite. Make sure we check for the appropriate capabilities in the backend,
allocator and renderer.
See also #2819.
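The check boils down to querying DRM_CAP_PRIME; a sketch with libdrm (whether wlroots factors it exactly this way is an assumption):

    #include <stdbool.h>
    #include <stdint.h>
    #include <xf86drm.h>

    /* PRIME import/export of DMA-BUFs is required for buffer sharing
     * between the allocator, renderer and backend. */
    static bool drm_has_prime(int drm_fd) {
        uint64_t cap = 0;
        if (drmGetCap(drm_fd, DRM_CAP_PRIME, &cap) != 0) {
            return false;
        }
        return (cap & DRM_PRIME_CAP_IMPORT) && (cap & DRM_PRIME_CAP_EXPORT);
    }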
|