Converting Camera NV21 Video to RGB with OpenGL


About the author: Jiang Xuewei, technical partner at an IT company, senior IT lecturer, CSDN community expert, guest editor, and best-selling author. Published books include《手把手教你架构3D游戏引擎》(Publishing House of Electronics Industry) and《Unity3D实战核心技术详解》(Publishing House of Electronics Industry), among others.

CSDN video courses: http://edu.csdn.net/lecturer/144

Our company has recently been working on face detection and tracking. The video coming off the camera is in YUV420 format, more precisely NV21, while the display panel expects RGB, so the frames have to be converted. Before getting to the conversion, let's first look at the YUV420 format itself. There is plenty of material about it online, but much of it is hard to follow; this post tries to explain it in plain language to help readers understand it better.

Video data is just a sequence of images, and each NV21 image occupies width * height * 3 / 2 bytes. The image has two parts. The Y plane is width * height bytes long. The UV plane is (width / 2) x (height / 2) x 2 = width * height / 2 bytes long; each pair of consecutive bytes holds the V and U chroma bytes (in that order, per the NV21 specification) for a 2 x 2 = 4 block of original pixels. In other words, the UV plane is (width / 2) x (height / 2) pixels in size, downsampled by a factor of 2 in each dimension, and the V and U chroma bytes are interleaved.
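As a concrete example, for a 1280x720 NV21 frame the plane sizes and offsets work out as follows (a minimal sketch; the variable names are illustrative):

int width = 1280, height = 720;
int ySize  = width * height;             //921600 luma bytes, one per pixel
int uvSize = width * height / 2;         //460800 chroma bytes: one interleaved V,U pair per 2x2 block
byte[] frame = new byte[ySize + uvSize]; //total width*height*3/2 = 1382400 bytes

//In NV21 the Y plane occupies frame[0 .. ySize-1] and the interleaved
//VU plane occupies frame[ySize .. ySize+uvSize-1]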

The figure below illustrates how YUV NV12 and NV21 frames are stored in memory:


Now let's look at how to convert it to RGB.

As noted above, this conversion takes too long to be usable if it is done in plain Android (Java) code. Fortunately, it can be done in a GL shader running on the GPU, which makes it very fast.
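For comparison, the CPU-side approach being ruled out would look roughly like the sketch below, touching every pixel in Java on every frame (illustrative only; nv21ToArgb is a hypothetical helper, and the constants mirror the shader used later):

//For comparison only: a plain-Java NV21 -> ARGB conversion of the kind we want to avoid
static void nv21ToArgb(byte[] nv21, int[] argb, int width, int height) {
    int frameSize = width * height;
    for (int j = 0; j < height; j++) {
        for (int i = 0; i < width; i++) {
            int y = nv21[j * width + i] & 0xFF;
            int uvIndex = frameSize + (j >> 1) * width + (i & ~1);
            int v = (nv21[uvIndex] & 0xFF) - 128;     //V comes first in NV21
            int u = (nv21[uvIndex + 1] & 0xFF) - 128;

            int r = (int) (y + 1.13983f * v);
            int g = (int) (y - 0.39465f * u - 0.58060f * v);
            int b = (int) (y + 2.03211f * u);

            r = Math.max(0, Math.min(255, r));
            g = Math.max(0, Math.min(255, g));
            b = Math.max(0, Math.min(255, b));

            argb[j * width + i] = 0xFF000000 | (r << 16) | (g << 8) | b;
        }
    }
}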

The general idea is to pass the planes of our image to the shader as textures and render them in a way that performs the RGB conversion. To do that, we first have to copy the planes in our image into buffers that can be passed to the textures:

byte[] image;
ByteBuffer yBuffer, uvBuffer;

...

yBuffer.put(image, 0, width*height);
yBuffer.position(0);
uvBuffer.put(image, width*height, width*height/2);
uvBuffer.position(0);
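Here, image is the NV21 byte array delivered by the camera preview callback, and the two buffers are assumed to have been allocated once up front as direct buffers, as the full listing further below also does:

//Allocate the channel buffers once, on native memory rather than inside the JVM heap
ByteBuffer yBuffer  = ByteBuffer.allocateDirect(width*height);
ByteBuffer uvBuffer = ByteBuffer.allocateDirect(width*height/2); //(width/2*height/2) pixels, 2 bytes each
yBuffer.order(ByteOrder.nativeOrder());
uvBuffer.order(ByteOrder.nativeOrder());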
Then we pass these buffers to actual GL textures:

/*
 * Prepare the Y channel texture
 */

//Set texture slot 0 as active and bind our texture object to it
Gdx.gl.glActiveTexture(GL20.GL_TEXTURE0);
yTexture.bind();

//Y texture is (width*height) in size and each pixel is one byte;
//by setting GL_LUMINANCE, OpenGL puts this byte into R,G and B
//components of the texture
Gdx.gl.glTexImage2D(GL20.GL_TEXTURE_2D, 0, GL20.GL_LUMINANCE,
    width, height, 0, GL20.GL_LUMINANCE, GL20.GL_UNSIGNED_BYTE, yBuffer);

//Use linear interpolation when magnifying/minifying the texture to
//areas larger/smaller than the texture size
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
    GL20.GL_TEXTURE_MIN_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
    GL20.GL_TEXTURE_MAG_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
    GL20.GL_TEXTURE_WRAP_S, GL20.GL_CLAMP_TO_EDGE);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
    GL20.GL_TEXTURE_WRAP_T, GL20.GL_CLAMP_TO_EDGE);

/*
 * Prepare the UV channel texture
 */

//Set texture slot 1 as active and bind our texture object to it
Gdx.gl.glActiveTexture(GL20.GL_TEXTURE1);
uvTexture.bind();

//UV texture is (width/2*height/2) in size (downsampled by 2 in
//both dimensions, each pixel corresponds to 4 pixels of the Y channel)
//and each pixel is two bytes. By setting GL_LUMINANCE_ALPHA, OpenGL
//puts the first byte (V) into the R,G and B components of the texture
//and the second byte (U) into the A component of the texture. That's
//why we find U and V at A and R respectively in the fragment shader code.
//Note that we could have also found V at G or B as well.
Gdx.gl.glTexImage2D(GL20.GL_TEXTURE_2D, 0, GL20.GL_LUMINANCE_ALPHA,
    width/2, height/2, 0, GL20.GL_LUMINANCE_ALPHA, GL20.GL_UNSIGNED_BYTE,
    uvBuffer);

//Use linear interpolation when magnifying/minifying the texture to
//areas larger/smaller than the texture size
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
    GL20.GL_TEXTURE_MIN_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
    GL20.GL_TEXTURE_MAG_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
    GL20.GL_TEXTURE_WRAP_S, GL20.GL_CLAMP_TO_EDGE);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
    GL20.GL_TEXTURE_WRAP_T, GL20.GL_CLAMP_TO_EDGE);
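As a side note, GL_LUMINANCE and GL_LUMINANCE_ALPHA are legacy formats. If your context is OpenGL ES 3.0 or later, an untested sketch of the equivalent uploads would use single- and two-channel formats instead (the fragment shader would then read V from .r and U from .g of the UV texture); the GL30 constant names are assumed from libgdx's GL30 interface:

//ES 3.0 variant (assumption, untested): GL_R8 for the Y plane, GL_RG8 for the interleaved VU plane
Gdx.gl.glTexImage2D(GL20.GL_TEXTURE_2D, 0, GL30.GL_R8,
    width, height, 0, GL30.GL_RED, GL20.GL_UNSIGNED_BYTE, yBuffer);
...
Gdx.gl.glTexImage2D(GL20.GL_TEXTURE_2D, 0, GL30.GL_RG8,
    width/2, height/2, 0, GL30.GL_RG, GL20.GL_UNSIGNED_BYTE, uvBuffer);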

Next, we render the mesh we prepared earlier (one that covers the entire screen); the shader takes care of rendering the bound textures onto the mesh:

shader.begin();

//Set the uniform y_texture object to the texture at slot 0
shader.setUniformi("y_texture", 0);

//Set the uniform uv_texture object to the texture at slot 1
shader.setUniformi("uv_texture", 1);

mesh.render(shader, GL20.GL_TRIANGLES);
shader.end();
Finally, the shader takes over the task of rendering our textures onto the mesh. The fragment shader that performs the actual conversion looks like this:
String fragmentShader =
    "#ifdef GL_ES\n" +
    "precision highp float;\n" +
    "#endif\n" +

    "varying vec2 v_texCoord;\n" +
    "uniform sampler2D y_texture;\n" +
    "uniform sampler2D uv_texture;\n" +

    "void main (void){\n" +
    "   float r, g, b, y, u, v;\n" +

    //We had put the Y values of each pixel to the R,G,B components by
    //GL_LUMINANCE, that's why we're pulling it from the R component,
    //we could also use G or B
    "   y = texture2D(y_texture, v_texCoord).r;\n" +

    //We had put the U and V values of each pixel to the A and R,G,B
    //components of the texture respectively using GL_LUMINANCE_ALPHA.
    //Since U,V bytes are interspread in the texture, this is probably
    //the fastest way to use them in the shader
    "   u = texture2D(uv_texture, v_texCoord).a - 0.5;\n" +
    "   v = texture2D(uv_texture, v_texCoord).r - 0.5;\n" +

    //The numbers are just YUV to RGB conversion constants
    "   r = y + 1.13983*v;\n" +
    "   g = y - 0.39465*u - 0.58060*v;\n" +
    "   b = y + 2.03211*u;\n" +

    //We finally set the RGB color of our pixel
    "   gl_FragColor = vec4(r, g, b, 1.0);\n" +
    "}\n";
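For reference, the same per-pixel arithmetic can be written in matrix form. The sketch below is an equivalent body for the shader above using the same constants; it is an illustrative alternative, not part of the original code (note that GLSL's mat3 constructor is column-major):

//Equivalent matrix form of the three conversion lines above (illustrative alternative);
//each group of three arguments below is one column of the column-major mat3
String matrixFormBody =
    "   vec3 yuv = vec3(y, u, v);                       \n" +
    "   mat3 m = mat3(1.0,      1.0,      1.0,          \n" +   //column 0: multiplies y
    "                 0.0,     -0.39465,  2.03211,      \n" +   //column 1: multiplies u
    "                 1.13983, -0.58060,  0.0);         \n" +   //column 2: multiplies v
    "   gl_FragColor = vec4(m * yuv, 1.0);              \n";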
Note that we access both the Y and UV textures with the same coordinate variable v_texCoord. This works because v_texCoord holds normalized coordinates between 0.0 and 1.0 that run from one edge of the texture to the other, rather than actual texel coordinates, so the same coordinate samples corresponding positions in textures of different sizes. This is one of the nicest features of shaders.

Finally, to make it easier to study, here is the complete code:

Since libgdx is cross-platform, we need an object that each platform can extend differently to handle the device camera and the rendering. For example, you might want to bypass the YUV-to-RGB shader conversion entirely if the hardware can hand you RGB images directly. For that reason, we need a device camera controller interface that each platform will implement:

public interface PlatformDependentCameraController {

    void init();

    void renderBackground();

    void destroy();
}
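For reference, a platform with no camera (for example a desktop build used only for UI work) could plug in a no-op implementation; this stub is an illustrative sketch, not part of the original project:

public class NullPlatformCameraController implements PlatformDependentCameraController {

    @Override
    public void init() { }             //no camera to open

    @Override
    public void renderBackground() { } //nothing to draw behind the scene

    @Override
    public void destroy() { }          //nothing to release
}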
The Android implementation of this interface is as follows (the live camera image is assumed to be 1280x720 pixels):

public class AndroidDependentCameraController implements PlatformDependentCameraController, Camera.PreviewCallback {

    private static byte[] image; //The image buffer that will hold the camera image when preview callback arrives

    private Camera camera; //The camera object

    //The Y and UV buffers that will pass our image channel data to the textures
    private ByteBuffer yBuffer;
    private ByteBuffer uvBuffer;

    ShaderProgram shader; //Our shader
    Texture yTexture; //Our Y texture
    Texture uvTexture; //Our UV texture
    Mesh mesh; //Our mesh that we will draw the texture on

    public AndroidDependentCameraController(){

        //Our YUV image is 12 bits per pixel
        image = new byte[1280*720/8*12];
    }

    @Override
    public void init(){

        /*
         * Initialize the OpenGL/libgdx stuff
         */

        //Do not enforce power of two texture sizes
        Texture.setEnforcePotImages(false);

        //Allocate textures
        yTexture = new Texture(1280,720,Format.Intensity); //A 8-bit per pixel format
        uvTexture = new Texture(1280/2,720/2,Format.LuminanceAlpha); //A 16-bit per pixel format

        //Allocate buffers on the native memory space, not inside the JVM heap
        yBuffer = ByteBuffer.allocateDirect(1280*720);
        uvBuffer = ByteBuffer.allocateDirect(1280*720/2); //We have (width/2*height/2) pixels, each pixel is 2 bytes
        yBuffer.order(ByteOrder.nativeOrder());
        uvBuffer.order(ByteOrder.nativeOrder());

        //Our vertex shader code; nothing special
        String vertexShader =
                "attribute vec4 a_position;                         \n" +
                "attribute vec2 a_texCoord;                         \n" +
                "varying vec2 v_texCoord;                           \n" +

                "void main(){                                       \n" +
                "   gl_Position = a_position;                       \n" +
                "   v_texCoord = a_texCoord;                        \n" +
                "}                                                  \n";

        //Our fragment shader code; takes Y,U,V values for each pixel and calculates R,G,B colors,
        //effectively making the YUV to RGB conversion
        String fragmentShader =
                "#ifdef GL_ES                                       \n" +
                "precision highp float;                             \n" +
                "#endif                                             \n" +

                "varying vec2 v_texCoord;                           \n" +
                "uniform sampler2D y_texture;                       \n" +
                "uniform sampler2D uv_texture;                      \n" +

                "void main (void){                                  \n" +
                "   float r, g, b, y, u, v;                         \n" +

                //We had put the Y values of each pixel to the R,G,B components by GL_LUMINANCE,
                //that's why we're pulling it from the R component, we could also use G or B
                "   y = texture2D(y_texture, v_texCoord).r;         \n" +

                //We had put the U and V values of each pixel to the A and R,G,B components of the
                //texture respectively using GL_LUMINANCE_ALPHA. Since U,V bytes are interspread
                //in the texture, this is probably the fastest way to use them in the shader
                "   u = texture2D(uv_texture, v_texCoord).a - 0.5;  \n" +
                "   v = texture2D(uv_texture, v_texCoord).r - 0.5;  \n" +

                //The numbers are just YUV to RGB conversion constants
                "   r = y + 1.13983*v;                              \n" +
                "   g = y - 0.39465*u - 0.58060*v;                  \n" +
                "   b = y + 2.03211*u;                              \n" +

                //We finally set the RGB color of our pixel
                "   gl_FragColor = vec4(r, g, b, 1.0);              \n" +
                "}                                                  \n";

        //Create and compile our shader
        shader = new ShaderProgram(vertexShader, fragmentShader);

        //Create our mesh that we will draw on, it has 4 vertices corresponding to the 4 corners of the screen
        mesh = new Mesh(true, 4, 6,
                new VertexAttribute(Usage.Position, 2, "a_position"),
                new VertexAttribute(Usage.TextureCoordinates, 2, "a_texCoord"));

        //The vertices include the screen coordinates (between -1.0 and 1.0) and texture coordinates (between 0.0 and 1.0)
        float[] vertices = {
                -1.0f,  1.0f,   // Position 0
                0.0f,   0.0f,   // TexCoord 0
                -1.0f,  -1.0f,  // Position 1
                0.0f,   1.0f,   // TexCoord 1
                1.0f,   -1.0f,  // Position 2
                1.0f,   1.0f,   // TexCoord 2
                1.0f,   1.0f,   // Position 3
                1.0f,   0.0f    // TexCoord 3
        };

        //The indices come in trios of vertex indices that describe the triangles of our mesh
        short[] indices = {0, 1, 2, 0, 2, 3};

        //Set vertices and indices to our mesh
        mesh.setVertices(vertices);
        mesh.setIndices(indices);

        /*
         * Initialize the Android camera
         */
        camera = Camera.open(0);

        //We set the buffer ourselves that will be used to hold the preview image
        camera.setPreviewCallbackWithBuffer(this);

        //Set the camera parameters
        Camera.Parameters params = camera.getParameters();
        params.setFocusMode(Camera.Parameters.FOCUS_MODE_CONTINUOUS_VIDEO);
        params.setPreviewSize(1280,720);
        camera.setParameters(params);

        //Start the preview
        camera.startPreview();

        //Set the first buffer, the preview doesn't start unless we set the buffers
        camera.addCallbackBuffer(image);
    }

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {

        //Send the buffer reference to the next preview so that a new buffer is not allocated and we use the same space
        camera.addCallbackBuffer(image);
    }

    @Override
    public void renderBackground() {

        /*
         * Because of Java's limitations, we can't reference the middle of an array and
         * we must copy the channels in our byte array into buffers before setting them to textures
         */

        //Copy the Y channel of the image into its buffer, the first (width*height) bytes are the Y channel
        yBuffer.put(image, 0, 1280*720);
        yBuffer.position(0);

        //Copy the UV channels of the image into their buffer, the following (width*height/2) bytes are the UV channel; the U and V bytes are interspread
        uvBuffer.put(image, 1280*720, 1280*720/2);
        uvBuffer.position(0);

        /*
         * Prepare the Y channel texture
         */

        //Set texture slot 0 as active and bind our texture object to it
        Gdx.gl.glActiveTexture(GL20.GL_TEXTURE0);
        yTexture.bind();

        //Y texture is (width*height) in size and each pixel is one byte; by setting GL_LUMINANCE, OpenGL puts this byte into R,G and B components of the texture
        Gdx.gl.glTexImage2D(GL20.GL_TEXTURE_2D, 0, GL20.GL_LUMINANCE, 1280, 720, 0, GL20.GL_LUMINANCE, GL20.GL_UNSIGNED_BYTE, yBuffer);

        //Use linear interpolation when magnifying/minifying the texture to areas larger/smaller than the texture size
        Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_MIN_FILTER, GL20.GL_LINEAR);
        Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_MAG_FILTER, GL20.GL_LINEAR);
        Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_WRAP_S, GL20.GL_CLAMP_TO_EDGE);
        Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_WRAP_T, GL20.GL_CLAMP_TO_EDGE);

        /*
         * Prepare the UV channel texture
         */

        //Set texture slot 1 as active and bind our texture object to it
        Gdx.gl.glActiveTexture(GL20.GL_TEXTURE1);
        uvTexture.bind();

        //UV texture is (width/2*height/2) in size (downsampled by 2 in both dimensions, each pixel corresponds to 4 pixels of the Y channel)
        //and each pixel is two bytes. By setting GL_LUMINANCE_ALPHA, OpenGL puts the first byte (V) into the R,G and B components of the texture
        //and the second byte (U) into the A component of the texture. That's why we find U and V at A and R respectively in the fragment shader code.
        //Note that we could have also found V at G or B as well.
        Gdx.gl.glTexImage2D(GL20.GL_TEXTURE_2D, 0, GL20.GL_LUMINANCE_ALPHA, 1280/2, 720/2, 0, GL20.GL_LUMINANCE_ALPHA, GL20.GL_UNSIGNED_BYTE, uvBuffer);

        //Use linear interpolation when magnifying/minifying the texture to areas larger/smaller than the texture size
        Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_MIN_FILTER, GL20.GL_LINEAR);
        Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_MAG_FILTER, GL20.GL_LINEAR);
        Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_WRAP_S, GL20.GL_CLAMP_TO_EDGE);
        Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_WRAP_T, GL20.GL_CLAMP_TO_EDGE);

        /*
         * Draw the textures onto a mesh using our shader
         */
        shader.begin();

        //Set the uniform y_texture object to the texture at slot 0
        shader.setUniformi("y_texture", 0);

        //Set the uniform uv_texture object to the texture at slot 1
        shader.setUniformi("uv_texture", 1);

        //Render our mesh using the shader, which in turn will use our textures to render their content on the mesh
        mesh.render(shader, GL20.GL_TRIANGLES);
        shader.end();
    }

    @Override
    public void destroy() {
        camera.stopPreview();
        camera.setPreviewCallbackWithBuffer(null);
        camera.release();
    }
}
The main application part just makes sure init() is called once at the beginning, renderBackground() is called every frame in the render loop, and destroy() is called once at the end:
public class YourApplication implements ApplicationListener {

    private final PlatformDependentCameraController deviceCameraControl;

    public YourApplication(PlatformDependentCameraController cameraControl) {
        this.deviceCameraControl = cameraControl;
    }

    @Override
    public void create() {
        deviceCameraControl.init();
    }

    @Override
    public void render() {
        Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);

        //Render the background that is the live camera image
        deviceCameraControl.renderBackground();

        /*
         * Render anything here (sprites/models etc.) that you want to go on top of the camera image
         */
    }

    @Override
    public void dispose() {
        deviceCameraControl.destroy();
    }

    @Override
    public void resize(int width, int height) {
    }

    @Override
    public void pause() {
    }

    @Override
    public void resume() {
    }
}
The only other Android-specific part is the very short main Android code below: you just create a new Android-specific device camera handler and pass it to the main libgdx object:

public class MainActivity extends AndroidApplication {

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        AndroidApplicationConfiguration cfg = new AndroidApplicationConfiguration();
        cfg.useGL20 = true; //This line is obsolete in the newest libgdx version
        cfg.a = 8;
        cfg.b = 8;
        cfg.g = 8;
        cfg.r = 8;

        PlatformDependentCameraController cameraControl = new AndroidDependentCameraController();
        initialize(new YourApplication(cameraControl), cfg);
        graphics.getView().setKeepScreenOn(true);
    }
}
It runs very fast. Test results were as follows:

Samsung Galaxy Note II LTE (GT-N7105): has an ARM Mali-400 MP4 GPU.
Rendering one frame takes around 5-6 ms, with occasional jumps to around 15 ms every few seconds
The actual render line (mesh.render(shader, GL20.GL_TRIANGLES);) consistently takes 0-1 ms
Creating and binding both textures takes 1-3 ms in total
The ByteBuffer copies generally take 1-3 ms in total but occasionally jump to around 7 ms, probably because the image buffer gets moved around in the JVM heap.


Samsung Galaxy Note 10.1 2014 Edition (SM-P600): has an ARM Mali-T628 GPU.
Rendering one frame takes around 2-4 ms, with rare jumps to around 6-10 ms
The actual render line (mesh.render(shader, GL20.GL_TRIANGLES);) consistently takes 0-1 ms
Creating and binding both textures takes 1-3 ms in total but jumps to around 6-9 ms every couple of seconds
The ByteBuffer copies generally take 0-2 ms in total but rarely jump to around 6 ms

In addition, here are the vertex and fragment shaders written out as standalone GLSL:

attribute vec4 position;
attribute vec2 inputTextureCoordinate;
varying vec2 v_texCoord;

void main(){
   gl_Position = position;
   v_texCoord = inputTextureCoordinate;
}


precision mediump float;

varying vec2 v_texCoord;
uniform sampler2D yTexture;
uniform sampler2D uvTexture;

const mat3 yuv2rgb = mat3(
                        1, 0, 1.2802,
                        1, -0.214821, -0.380589,
                        1, 2.127982, 0
                        );

void main() {
    vec3 yuv = vec3(
            1.1643 * (texture2D(yTexture, v_texCoord).r - 0.0627),
            texture2D(uvTexture, v_texCoord).a - 0.5,
            texture2D(uvTexture, v_texCoord).r - 0.5
            );
    vec3 rgb = yuv * yuv2rgb;
    gl_FragColor = vec4(rgb, 1.0);
}


