1. YUV to RGB
The formulas for converting between the two (RGB to YUV and YUV to RGB) are as follows.
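A commonly used formulation is full-range BT.601, shown below; treat the exact coefficients as an assumption, since they depend on which standard (BT.601 vs. BT.709) and which range (full vs. limited) your data actually uses.

```
RGB to YUV:
  Y =  0.299*R + 0.587*G + 0.114*B
  U = -0.169*R - 0.331*G + 0.500*B + 128
  V =  0.500*R - 0.419*G - 0.081*B + 128

YUV to RGB:
  R = Y + 1.402*(V - 128)
  G = Y - 0.344*(U - 128) - 0.714*(V - 128)
  B = Y + 1.772*(U - 128)
```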
Writing the conversion code yourself is not difficult. But YUV comes in a variety of sampling formats and storage layouts, so the byte[] alone is not enough: you also need to know exactly which YUV layout it is. So it is recommended that you find a way to obtain RGB pixel data directly.
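For example, assuming the byte[] is NV21 (the default Android camera preview format, which is an assumption about your data), a minimal sketch of the conversion in Java would be:

```java
// Minimal sketch: convert an NV21 byte[] (full-size Y plane followed by an
// interleaved V/U plane at quarter resolution) into packed ARGB ints.
// Other YUV layouts (I420, NV12, ...) need different indexing.
public static int[] nv21ToArgb(byte[] nv21, int width, int height) {
    int frameSize = width * height;
    int[] argb = new int[frameSize];

    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int yIndex = y * width + x;
            // Each 2x2 block of pixels shares one V and one U sample.
            int uvIndex = frameSize + (y / 2) * width + (x & ~1);

            int Y = nv21[yIndex] & 0xFF;
            int V = nv21[uvIndex] & 0xFF;
            int U = nv21[uvIndex + 1] & 0xFF;

            // Full-range BT.601 YUV -> RGB (see the formulas above).
            int r = clamp((int) (Y + 1.402f * (V - 128)));
            int g = clamp((int) (Y - 0.344f * (U - 128) - 0.714f * (V - 128)));
            int b = clamp((int) (Y + 1.772f * (U - 128)));

            argb[yIndex] = 0xFF000000 | (r << 16) | (g << 8) | b;
        }
    }
    return argb;
}

private static int clamp(int v) {
    return v < 0 ? 0 : (v > 255 ? 255 : v);
}
```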
2. RGB to Bitmap
This RGB has the same problem: it may be RGB565/RGB555/RGB24/RGB32 and so on, and you still have to know which layout it is. General image libraries have Bitmap-related operations that can set pixel data directly, for example QBitmap::fromData in Qt, though I am not familiar with the details here.
If you want to do it yourself, you have to write the file header yourself. For relevant information, please refer to http://blog.csdn.net/o_sun_o/...
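On Android (assuming the goal is an in-memory android.graphics.Bitmap rather than a .bmp file on disk), a Bitmap can be filled directly from ARGB pixel data, so no file header is needed. A minimal sketch:

```java
import android.graphics.Bitmap;

// Minimal sketch: build an ARGB_8888 Bitmap from packed ARGB ints,
// e.g. the output of nv21ToArgb() above. RGB565/RGB24 data would
// first have to be expanded to packed ARGB ints.
public static Bitmap argbToBitmap(int[] argb, int width, int height) {
    return Bitmap.createBitmap(argb, width, height, Bitmap.Config.ARGB_8888);
}
```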
3. Bitmap to BGRA
If you do this yourself, just read the pixels out of the Bitmap and repack them yourself, filling the added Alpha channel with 0. I don’t know whether there is a better way on Android. If you are processing this in C++, it is recommended to find a library to do it, such as ffmpeg.
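On Android, a minimal sketch (assuming an ARGB_8888 Bitmap and a B, G, R, A byte order on the output side) would read the pixels back and repack them:

```java
import android.graphics.Bitmap;

// Minimal sketch: read ARGB pixels out of a Bitmap and repack them as
// BGRA bytes, with the added Alpha byte set to 0 as described above.
public static byte[] bitmapToBgra(Bitmap bitmap) {
    int width = bitmap.getWidth();
    int height = bitmap.getHeight();
    int[] argb = new int[width * height];
    bitmap.getPixels(argb, 0, width, 0, 0, width, height);

    byte[] bgra = new byte[width * height * 4];
    for (int i = 0; i < argb.length; i++) {
        int pixel = argb[i];
        bgra[i * 4]     = (byte) (pixel & 0xFF);          // B
        bgra[i * 4 + 1] = (byte) ((pixel >> 8) & 0xFF);   // G
        bgra[i * 4 + 2] = (byte) ((pixel >> 16) & 0xFF);  // R
        bgra[i * 4 + 3] = 0;                              // A = 0, per the text above
    }
    return bgra;
}
```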