Recently, I was working with CameraX to record video with the front camera only, but I ran into an issue where the video is mirrored after saving.

Currently, I am using a library (Mp4Composer-android) to flip the video horizontally after recording, which adds processing time. I noticed that Snapchat and Instagram produce their output without this extra processing.

After that, I also noticed that the native camera application provides an option to choose whether or not the video should be mirrored.

This is the configuration I have added to CameraX:

videoCapture = VideoCapture.Builder()
    .apply {
        setBitRate(2000000)      // ~2 Mbps
        setVideoFrameRate(24)    // 24 fps
    }
    .build()

How can I make my camera not mirror the video?


Solution 1:

Temporary Solution:

I used this library as a temporary solution. The issue with it was that I had to process the video after recording it, which took a considerable amount of time. This is the code I used:

Add this to your Gradle dependencies:

//Video Composer
implementation 'com.github.MasayukiSuda:Mp4Composer-android:v0.4.1'

Code for flipping:

// Mp4Composer(srcPath, destPath): reads the recorded file at videoPath and
// writes the horizontally flipped result to `video`.
Mp4Composer(videoPath, video)
    .flipHorizontal(true)
    .listener(
        object : Mp4Composer.Listener {
            override fun onProgress(progress: Double) { /* useful for a progress UI */ }
            override fun onCurrentWrittenVideoTime(timeUs: Long) { }
            override fun onCompleted() { /* the flipped file is ready */ }
            override fun onCanceled() { }
            override fun onFailed(exception: Exception?) { }
        }
    )
    .start()

Note: This will also compress your video. Check the library documentation for more details.

An answer that was given to me by a senior developer who has worked on video with the NDK for a long time:

Think of the frames given out by the camera as going through a dedicated highway. There is a way to capture all the frames going through that highway:

  1. Capture the frames coming through that highway

  2. Flip the pixels of each frame

  3. Send the frames back out through that same highway

He didn't specify how to capture and release the frames; a rough sketch of what the first two steps could look like is below.
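Purely as an illustration (this is my own sketch, not his implementation), steps 1 and 2 could be prototyped with CameraX's ImageAnalysis use case, bound alongside the preview and video-capture use cases. ImageAnalysis only taps the frame stream, so step 3 (feeding the flipped pixels back into the encoder) is exactly the part left open here:

import androidx.camera.core.ImageAnalysis
import androidx.camera.core.ImageProxy
import java.util.concurrent.Executors

val analysisExecutor = Executors.newSingleThreadExecutor()

val imageAnalysis = ImageAnalysis.Builder()
    // Drop frames if the analyzer falls behind instead of queueing them.
    .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
    .build()
    .also { analysis ->
        analysis.setAnalyzer(analysisExecutor) { frame: ImageProxy ->
            // Step 1: every camera frame arrives here as YUV_420_888.
            val yPlane = frame.planes[0]
            val buffer = yPlane.buffer
            val rowStride = yPlane.rowStride
            val width = frame.width
            val height = frame.height

            // Step 2: build a horizontally flipped copy of the luma (Y) plane.
            // (Assumes pixelStride == 1 for the Y plane; the U/V planes would
            // need the same treatment before re-encoding.)
            val flippedY = ByteArray(width * height)
            val row = ByteArray(width)
            for (y in 0 until height) {
                buffer.position(y * rowStride)
                buffer.get(row, 0, width)
                row.reverse()                                  // mirror the row
                row.copyInto(flippedY, destinationOffset = y * width)
            }

            // Step 3 would hand flippedY (plus the flipped chroma) to the encoder --
            // the part that needs real NDK/OpenGL work to keep up in real time.
            frame.close()
        }
    }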

Why I didn't use that solution (the issue):

If we have to perform this flip in real time, we have to do it very efficiently. Depending on the quality of the camera, we have to capture anywhere from 24 to 120 frames per second, and process and dispatch every one of those frames.
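To make that budget concrete (using only the 24-120 fps range above as input), here is the time available per frame:

// Rough per-frame time budget: at N fps there are 1000 / N milliseconds to
// grab, flip, and hand back a frame before the next one arrives.
fun perFrameBudgetMs(fps: Int): Double = 1000.0 / fps

fun main() {
    listOf(24, 30, 60, 120).forEach { fps ->
        println("%3d fps -> %4.1f ms per frame".format(fps, perFrameBudgetMs(fps)))
    }
    // 24 fps -> 41.7 ms per frame ... 120 fps -> 8.3 ms per frame
}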

Doing that requires NDK developers and a lot of engineering effort, which most startups can't afford.