30 Screenshots Per Second

scalvi
Contributor I

Hello,

Using an i.MX8M quad-core board, I need to take 30 screenshots per second.

So far I have looked into the grim and slurp code, but my Yocto build is Zeus and the Weston 9.0 compositor does not allow their screenshot requests to be processed. I then looked into the screenshooter.c code, and by removing the file write I am able to take up to 8 screenshots per second (only the raw screenshot is needed in my case). I am very new to working with the Wayland protocol; before, I used to do the same with an X11 XGetImage call (slightly modified to not re-allocate the memory every time), but that is not doable with Wayland.
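For reference, the X11 version looked roughly like this (a simplified sketch, not the exact code: the point is that the XImage is allocated once with XGetImage and then refilled with XGetSubImage, so nothing is reallocated between frames; the 1920x1080 geometry is just an example):

#include <X11/Xlib.h>
#include <X11/Xutil.h>

Display *dpy = XOpenDisplay(NULL);
Window root = DefaultRootWindow(dpy);
int w = 1920, h = 1080;  /* example geometry */

/* Allocate the XImage once... */
XImage *img = XGetImage(dpy, root, 0, 0, w, h, AllPlanes, ZPixmap);

/* ...then refill the same buffer for every subsequent frame. */
XGetSubImage(dpy, root, 0, 0, w, h, AllPlanes, ZPixmap, img, 0, 0);
/* img->data now holds the raw frame, with no new allocation */

XDestroyImage(img);
XCloseDisplay(dpy);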

I am ready to try a solution with Qt's QScreen::grabWindow, but I'm afraid it is going to hit the same problem...

I'm thinking it would be great to be able to address the compositor directly, but my understanding of Wayland is that it is not designed for that kind of interaction.

Please help,

Any suggestion welcome!!!


Zhiming_Liu
NXP TechSupport

Hi @scalvi 

The screenshot client takes 0.17 s to grab a frame. Qt uses Wayland as its backend, so I think it would hit the same issue. But you should still try Qt, as the Weston compositor can't capture that fast.

Regards

Zhiming


scalvi
Contributor I

Hi Zhiming,

I am going to try Qt.

Also, I came across libweston/backend-fbdev/fbdev.c; is that a Weston frame-buffer backend? If so, how do I use it? Sorry, but Wayland is totally new to me. Using a frame buffer might be the way to do this kind of quick screenshotting; a rough sketch of what I mean is below.
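From what I have read so far, the legacy framebuffer approach would look something like this (a rough sketch assuming a /dev/fb0 device node exists, which may well not be the case on this board):

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/fb.h>

int fd = open("/dev/fb0", O_RDONLY);

struct fb_var_screeninfo vinfo;
struct fb_fix_screeninfo finfo;
ioctl(fd, FBIOGET_VSCREENINFO, &vinfo);  /* resolution and bpp */
ioctl(fd, FBIOGET_FSCREENINFO, &finfo);  /* line length and mapped size */

/* Map the frame buffer; a "screenshot" is then just a memcpy from it. */
void *fb = mmap(NULL, finfo.smem_len, PROT_READ, MAP_SHARED, fd, 0);

/* ... copy vinfo.yres * finfo.line_length bytes per frame ... */

munmap(fb, finfo.smem_len);
close(fd);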

Thank you,

Sebastien.


Zhiming_Liu
NXP TechSupport

Hi @scalvi 

The i.MX8M series does not support the framebuffer backend. You can also try GNOME Shell on an Ubuntu desktop.

https://www.nxp.com/webapp/sps/download/preDownload.jsp?render=true


scalvi
Contributor I

Hi Zhiming,

To take the screenshot with Qt5, I've made a small app. Here is the function implementation that I connect to a ButtonReleased signal:

#include <QGuiApplication>
#include <QPixmap>
#include <QScreen>

// Grab the full primary screen and save it as a PNG
QScreen *screen = QGuiApplication::primaryScreen();
const QRect geom = screen->geometry();
QPixmap screenGrab = screen->grabWindow(0, geom.x(), geom.y(), geom.width(), geom.height());
screenGrab.save("screenshot.png");
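Since grabWindow() returns a null QPixmap when the platform plugin cannot capture the screen, I also check for that case explicitly (the warning text is just mine):

if (screenGrab.isNull())
    qWarning("grabWindow() returned a null pixmap");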

I compile and run the code on both platforms, my dev laptop and the i.MX8M board. On my laptop I can see a PNG file being created, all good. With the exact same code on the i.MX board I get the same window with my button, but no PNG file is created: "screenGrab" is null!

What am I missing? Isn't this the right way to get the "screen"? What's the difference? My dev laptop has x11 for XDG_SESSION_TYPE, while on the board echo $XDG_SESSION_TYPE returns "tty" (weird).

Please let me know.

 

Thank you,

Sebastien.


Zhiming_Liu
NXP TechSupport

Hi @scalvi 

Please refer to this guide to debug your Qt program: https://community.nxp.com/t5/i-MX-Processors-Knowledge-Base/Remote-debug-QT5-User-Guide/ta-p/1391161

$XDG_SESSION_TYPE returning "tty" may be caused by missing environment variables in your session.

BR

scalvi
Contributor I

Hi,

 

My previous remark/question was about Qt...

I am now trying to perform my screenshots with libdrm, and when trying to use a dumb buffer I get this:

#include <sys/ioctl.h>
#include <drm/drm.h>
#include <drm/drm_mode.h>

/* All fields start zeroed; the kernel fills in handle, pitch and size. */
struct drm_mode_create_dumb dreq = {0};
dreq.height = 1080;
dreq.width  = 1920;
dreq.bpp    = 32;

int ret = ioctl(fd, DRM_IOCTL_MODE_CREATE_DUMB, &dreq);

This returns:

Call to DRM_IOCTL_MODE_CREATE_DUMB failed: Function not implemented

Yet this seems to be quite a common way of using dumb buffers.

Is that a dead end for me trying to do screenshots with libdrm?
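One diagnostic I have found is to ask the node whether it supports dumb buffers at all before trying to create one; "Function not implemented" (ENOSYS) suggests it does not. A minimal sketch, assuming libdrm is available and that /dev/dri/card0 is the right node (the path may differ):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <xf86drm.h>

int fd = open("/dev/dri/card0", O_RDWR);  /* node path is a guess */

uint64_t has_dumb = 0;
if (drmGetCap(fd, DRM_CAP_DUMB_BUFFER, &has_dumb) < 0 || !has_dumb)
    fprintf(stderr, "this DRM node does not support dumb buffers\n");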


Zhiming_Liu
NXP TechSupport

A dumb buffer is used to create GPU memory that userspace can write into. It is not used to dump the current display buffer.

Weston uses OpenGL to read pixels (libweston/renderer-gl/gl-renderer.c):

static int
gl_renderer_read_pixels(struct weston_output *output,
			pixman_format_code_t format, void *pixels,
			uint32_t x, uint32_t y,
			uint32_t width, uint32_t height)
{
	GLenum gl_format;
	struct gl_output_state *go = get_output_state(output);

	x += go->borders[GL_RENDERER_BORDER_LEFT].width;
	y += go->borders[GL_RENDERER_BORDER_BOTTOM].height;

	switch (format) {
	case PIXMAN_a8r8g8b8:
		gl_format = GL_BGRA_EXT;
		break;
	case PIXMAN_a8b8g8r8:
		gl_format = GL_RGBA;
		break;
	default:
		return -1;
	}

	if (use_output(output) < 0)
		return -1;

	glPixelStorei(GL_PACK_ALIGNMENT, 1);
	glReadPixels(x, y, width, height, gl_format,
		     GL_UNSIGNED_BYTE, pixels);

	return 0;
}

 


scalvi
Contributor I

Hi Zhiming,

That part of the code is from the compositor: OpenGL needs a context to be current before glReadPixels is called, otherwise the call just returns an all-black buffer. And each time the underlying limitation is the same: no matter what, client A cannot access/read the buffer of client B, and that is the Wayland protocol specification anyway.

I still do not know how to take these 30 screenshots per second under Wayland.

Thank you,

Sebastien.
