After Meipai released this feature, our team immediately studied it and built (copied) a small app called snapshow. The Android version was also ready, but due to strategic issues it was never launched; only the iOS version shipped, and the company later took it off the shelves.
The general idea: get each video/photo frame, feed it into the OpenGL render chain at the right time, apply different transformations depending on the point in time, then either display the resulting image or write it into the video file. That's all there is to it.
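To make the "render chain with time-dependent transformations" idea concrete, here is a minimal sketch. The chain is just an ordered list of filters, each a function of (frame, timestamp). The frame is a list of grayscale pixel values standing in for what would really be an OpenGL texture, and all the filter names are illustrative, not any real API:

```python
# A "render chain" modeled as an ordered list of filters. Each filter is a
# function (frame, t) -> frame, where t is the timestamp in seconds.
# "frame" here is just a list of grayscale pixel values; in the real app it
# would be an OpenGL texture processed by shaders.

def brighten(frame, t):
    # Brightness ramps up over the first second, then holds.
    gain = 1.0 + 0.5 * min(t, 1.0)
    return [min(255, int(p * gain)) for p in frame]

def fade_out(frame, t):
    # Fade to black between t = 2s and t = 3s.
    k = 1.0 - max(0.0, min(1.0, t - 2.0))
    return [int(p * k) for p in frame]

def render(frame, t, chain):
    # Run the frame through every filter in order.
    for f in chain:
        frame = f(frame, t)
    return frame

chain = [brighten, fade_out]
print(render([100, 200], 0.0, chain))  # → [100, 200] (no effect yet)
print(render([100, 200], 3.0, chain))  # → [0, 0] (fully faded out)
```

The point is that the chain itself never changes; only the timestamp fed into each filter does, which is what makes effects like timed fades and transitions fall out naturally.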
Research the specifics for your own platform. On Android it depends on the lowest system version you need to support; if it is high enough, I recommend wrapping decoding with MediaExtractor (plus MediaCodec) — it is hardware-decoded underneath and very efficient. Then build the render layer yourself and display it on a SurfaceView. We tested this and found it highly efficient and completely lag-free, much better than Meipai's Android version. iOS uses the AVFoundation framework directly, again with a custom render layer displayed on a CAEAGLLayer — also efficient and lag-free.
That covers the display side.
For export, Android can use ffmpeg to write the video frame by frame, while iOS can export directly with AVFoundation.
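The frame-by-frame export path is structurally simple: step through the timeline at a fixed frame rate, render each frame, and hand the raw bytes to the encoder. A minimal sketch of that loop, with a hypothetical render(t) and an in-memory buffer standing in for the ffmpeg pipe (in practice you would feed these RGB bytes to ffmpeg via its C API or a rawvideo pipe):

```python
import io

WIDTH, HEIGHT, FPS = 4, 2, 30   # tiny toy values
DURATION = 1.0                  # seconds

def render(t):
    # Hypothetical renderer: shade every pixel by the timestamp.
    shade = int(255 * t / DURATION)
    return bytes([shade] * (WIDTH * HEIGHT * 3))  # RGB24, 3 bytes/pixel

out = io.BytesIO()  # stands in for the encoder's input pipe
frame_count = int(DURATION * FPS)
for i in range(frame_count):
    out.write(render(i / FPS))

# 30 frames * (4*2) pixels * 3 bytes each = 720 bytes
print(len(out.getvalue()))  # → 720
```

Because export reuses the same render chain as preview, the only real difference is the destination: a screen surface during playback, an encoder during export.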
You need a solid grasp of OpenGL; encoding and decoding come second.
Can you share the demo as open source?
There should be open-source frameworks that let you add filters and so on. I've only used ffmpeg's open-source video processing before; that one is simpler.
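For a sense of what such a filter does, here is a per-pixel grayscale conversion, the kind of operation filter frameworks (GPUImage is a well-known open-source example) would run as a fragment shader on the GPU; the Python below just illustrates the math:

```python
def grayscale(pixels):
    # ITU-R BT.601 luma weights, the standard grayscale conversion.
    return [round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in pixels]

# Pure red and pure green map to different gray levels
# because the eye is more sensitive to green.
print(grayscale([(255, 0, 0), (0, 255, 0)]))  # → [76, 150]
```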