The VideoCapture.read call takes about 190 ms per frame.
Can someone tell me which part I have missed?
Hey,
This is similar to the problem I am facing right now, and it is reported HERE.
@gnar_fang I created an OpenCV application that uses a GStreamer pipeline to play an MP4 file, but the CPU consumption is high.
@Zhiming_Liu, you mentioned that hardware acceleration is not integrated into OpenCV. Based on similar performance issues, I assume it is not integrated into the Qt framework either. Please share the steps to enable hardware acceleration for OpenCV and Qt.
I am planning to create an application using the Qt framework on i.MX8M Plus EVK.
As per the Yocto documentation, Qt6 is built and provided with the full image, but Qt6 does not support passing a GStreamer pipeline explicitly to the QMediaPlayer class.
So all I can do is rely on the internal pre-defined pipeline it constructs at run-time, and I am not sure whether it uses hardware components.
Looking forward to your response.
Currently, G2D is not integrated into OpenCV.
If you use the i.MX8M Plus, you can try G2D and OpenCL with a GStreamer pipeline to
get video frames. In my demo, the CPU load can be reduced to less than 10%.
https://github.com/fangxiaoying/opencl_study/blob/main/gst_case/open_camera_v3.cc
This is research from a long time ago, and there are some questions that can be discussed.
1. Build Yocto and use the .wic file from the build for the SD card. Here are the steps:
sudo apt-get install gawk wget git-core diffstat unzip texinfo gcc-multilib build-essential chrpath socat cpio python python3 python3-pip python3-pexpect xz-utils debianutils iputils-ping python3-git python3-jinja2 libegl1-mesa libsdl1.2-dev pylint3 xterm
mkdir ~/bin
curl https://storage.googleapis.com/git-repo-downloads/repo > ~/bin/repo
chmod a+x ~/bin/repo
export PATH=~/bin:$PATH
sudo apt-get install repo
git config --global user.name xxx
git config --global user.email xxx@gmail.com
git config --list
mkdir imx-yocto-bsp
cd imx-yocto-bsp
repo init -u https://source.codeaurora.org/external/imx/imx-manifest -b imx-linux-zeus -m imx-5.4.70-2.3.0.xml
repo sync
DISTRO=fsl-imx-xwayland MACHINE=imx8mpevk source imx-setup-release.sh -b build-yocto
bitbake imx-image-full
Run this inside the i.MX8 (weston --tty=1 --device=/dev/fb0,/dev/fb2 --use-g2d=1 &)
2. Use the aarch64-poky toolchain to build a simple OpenCV program that reads frames with VideoCapture and displays them with imshow,
then scp it onto the i.MX and run ./test test.mp4
Can you share your steps?
1. Build Yocto
sudo apt-get install gawk wget git-core diffstat unzip texinfo gcc-multilib build-essential chrpath socat cpio python python3 python3-pip python3-pexpect xz-utils debianutils iputils-ping python3-git python3-jinja2 libegl1-mesa libsdl1.2-dev pylint3 xterm
mkdir ~/bin
curl https://storage.googleapis.com/git-repo-downloads/repo > ~/bin/repo
chmod a+x ~/bin/repo
export PATH=~/bin:$PATH
sudo apt-get install repo
git config --global user.name davison
git config --global user.email davison12071994@gmail.com
git config --list
mkdir imx-yocto-bsp
cd imx-yocto-bsp
repo init -u https://source.codeaurora.org/external/imx/imx-manifest -b imx-linux-zeus -m imx-5.4.70-2.3.0.xml
repo sync
DISTRO=fsl-imx-xwayland MACHINE=imx8mpevk source imx-setup-release.sh -b build-yocto
bitbake imx-image-full
2. SD card
Use the .wic file from the Yocto build for the SD card.
3. Run on i.MX8
Run this on the i.MX8 (weston --tty=1 --device=/dev/fb0,/dev/fb2 --use-g2d=1 &)
Build a simple program with the aarch64-poky toolchain, scp it onto the i.MX8, then run ./test
Here's the code:
#include "opencv2/opencv.hpp"
#include <iostream>

using namespace std;
using namespace cv;

int main() {
    // Create a VideoCapture object
    VideoCapture cap("test.mp4");

    // Check if the video opened successfully
    if (!cap.isOpened()) {
        cout << "Error opening video stream" << endl;
        return -1;
    }

    while (1) {
        Mat frame;
        // Capture frame-by-frame
        cap >> frame;

        // If the frame is empty, break immediately
        if (frame.empty())
            break;

        // Display the resulting frame
        imshow("Frame", frame);

        // Press ESC on keyboard to exit
        char c = (char)waitKey(1);
        if (c == 27)
            break;
    }

    // When everything is done, release the video capture
    cap.release();

    // Close all the frames
    destroyAllWindows();
    return 0;
}
My code looks about 95% like the snippet above.
I am not sure which camera you used; it may need a GStreamer plugin to accelerate image decoding.
For example, to open an OV5640,
check whether you can open the camera with this command:
$ gst-launch-1.0 v4l2src device=/dev/video1 ! video/x-raw,framerate=30/1,width=1920,height=1080 ! waylandsink
For OpenCV, you can use this code to open the camera:
const std::string pipeline = "v4l2src device=/dev/video3 ! video/x-raw,framerate=30/1,width=1920,height=1080 ! appsink";
cam = cv::VideoCapture(pipeline, cv::CAP_GSTREAMER);
For MJPG or other encoded formats, you need to add the VPU / G2D plugin to the pipeline.
OK, I will test it and give you feedback.
Hi there, any solution? Or is there a way to use GStreamer in code to replace VideoCapture?
You can try to use GStreamer; we don't integrate hardware acceleration into OpenCV.