How can I pipe data losslessly to and from FFmpeg?

I piped a stream from one instance of FFmpeg to another, but because compression was used on the intermediate stream the final result was ugly. I need a lossless pipe to prevent this, and I want it to contain both audio and video.

I suspect that there is more than one answer to this problem, so bonus points go to anyone who provides an exhaustive list of solutions (compatible containers and codecs that can be piped). Bonus points also go to anyone who accounts for other data, like subtitles.

EDIT: I'm looking for suitable codec/container combinations. I don't know why people were having difficulty figuring that out, since I said I already used a pipe and now I need it to be lossless.

I don't know how to explain this without sounding conceited, but this is an FAQ site. Asking questions that require extremely specific answers will not help the millions of users who reach this site by entering their own problems into search engines. My question was designed to help anyone else who needs to losslessly pipe data between FFmpeg instances without distracting everyone with a wall of narrative and code explaining what I was doing, why it didn't work, and why this is the only option.


How to losslessly pipe video and audio from ffmpeg to ffmpeg

Requirements by the question asker:

  • losslessly pipe from one instance of ffmpeg to another
  • it doesn't matter if it's going to and from /dev/null

Example:

ffmpeg -s 1280x720 -f rawvideo -i /dev/zero -ar 48000 -ac 2 -f s16le -i \
/dev/zero -c copy -f nut pipe:1 | ffmpeg -y -i pipe:0 -c copy -f nut /dev/null

I see no reason why anyone would do this. Also, there are very few reasons to pipe from ffmpeg to ffmpeg when you can most likely just use one ffmpeg process to do whatever you want.

What the options do:

  • -s 1280x720 -f rawvideo – Options describing the video input. /dev/zero carries no header for ffmpeg to probe, so the frame size and raw format must be declared explicitly.

  • -i /dev/zero – The video input. It is being used in this example to generate a video stream out of "nothing". This was used in the example because the question asker refused to provide any information about the inputs being used.

  • -ar 48000 -ac 2 -f s16le – Options describing the audio input (sample rate, channel count, and raw sample format), again required because /dev/zero is not a typical audio format.

  • -i /dev/zero – The audio input. It is being used in this example to generate an audio stream out of "nothing".

  • -c copy – Stream copy, or re-mux, the inputs to the output. No re-encoding is being performed so the process is lossless. It is unknown if stream copying is acceptable to the question asker or not. Maybe re-encoding is desired instead?

  • -f nut – You need to tell ffmpeg what format to use for the pipe. Nut is a container format. See ffmpeg -formats for a complete list. Another flexible format is -f matroska, but it is impossible to suggest an appropriate or specific output container format to use without more info from the question asker.

  • pipe:1 – Use the pipe protocol to output to stdout. Alternatively, the number can be omitted (just pipe:) and by default the stdout file descriptor will be used for writing and stdin will be used for reading.
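A quick way to convince yourself the pipe really is lossless is to hash the stream on both sides. The sketch below is my addition, not part of the original answer: it reuses the synthetic /dev/zero input from the example above and compares the output of ffmpeg's md5 muxer before and after the nut round trip. The md5 muxer hashes packet data only, so matching files mean no data was altered in transit.

```shell
# Hash the raw stream directly...
ffmpeg -y -s 1280x720 -f rawvideo -t 1 -i /dev/zero -c copy -f md5 before.md5
# ...then hash it again after a trip through a nut-over-pipe round trip.
ffmpeg -s 1280x720 -f rawvideo -t 1 -i /dev/zero -c copy -f nut pipe:1 \
  | ffmpeg -y -i pipe:0 -c copy -f md5 after.md5
cmp before.md5 after.md5
```

If `cmp` prints nothing and exits 0, the two hashes are identical and the pipe was bit-exact.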


The way I learned to do this (from parts of previous answers) is to use the rawvideo codec for the video, the pcm_s16le audio codec, and FFmpeg's nut container to mux the stream. nut is not supported by major programs outside of FFmpeg, but it's the only container I currently know of that can support the uncompressed formats needed to efficiently pipe data between processes.

The arguments for this encoding might look like this:

... -c:v rawvideo -c:a pcm_s16le -f nut - ...

Some audio is stored with 24-bit or larger samples, and for these you should instead use pcm_s24le or another matching format. The full list of uncompressed audio formats can be found by running ffmpeg -codecs and searching the output for pcm_. If you don't know what the sample size of your audio is, pcm_s16le should cause no noticeable loss in quality for most sources.
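If you're unsure of the sample size, ffprobe can tell you. A hedged sketch follows; the test.wav file is generated from /dev/zero only to make the example self-contained, and with a real file you would point ffprobe at it directly and skip the first command.

```shell
# Make a known 16-bit test file, then query its sample format.
ffmpeg -y -f s16le -ar 48000 -ac 2 -t 1 -i /dev/zero test.wav
ffprobe -v error -select_streams a:0 -show_entries stream=sample_fmt \
  -of default=noprint_wrappers=1:nokey=1 test.wav
# prints s16 for this file; s16 -> pcm_s16le, s32 -> pcm_s32le, flt -> pcm_f32le
```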

On the receiving end of the pipe, set the input to standard input and ffmpeg will detect the format and decode the stream.

... ffmpeg -i - ...

The ellipses (...) in this answer are not part of the code; they are where your code goes. The lone hyphens (-) tell FFmpeg to use either standard input or standard output, depending on where they appear.
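Putting the fragments together, a complete pipeline might look like the sketch below. To keep it runnable without any real media, it feeds one second of synthetic video and audio from /dev/zero into the lossless leg, and the second instance re-encodes with ffmpeg's built-in mpeg4 encoder; with real files you would replace the synthetic inputs with -i yourfile and pick your own output codecs.

```shell
# First instance: mux raw video + PCM audio into nut on stdout.
# Second instance: read stdin and re-encode to an ordinary AVI.
ffmpeg -y -s 320x240 -f rawvideo -t 1 -i /dev/zero \
       -f s16le -ar 48000 -ac 2 -t 1 -i /dev/zero \
       -c:v rawvideo -c:a pcm_s16le -f nut - \
  | ffmpeg -y -i - -c:v mpeg4 -c:a pcm_s16le output.avi
```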

UPDATE:

I tried a simple experiment to improve this, and it seems that a better container is AVI, because other programs will understand it (at least VLC will).

... -c:v rawvideo -c:a pcm_s16le -f avi - ...

This will work exactly like the old version, with the added bonus of compatibility.

In hindsight, I regret that my question wasn't helpful in as many situations as I claimed questions should be; this update makes the answer more broadly useful.


A note on the codec names: the audio encoder is pcm_s16le; s16le by itself names the raw sample format, not a codec. The other answer also includes a lot of redundant parameters.

I would use pcm instead of flac in the pipe, because it takes far less time to process (PCM is raw audio, FLAC takes lots of time to encode).
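To see the speed difference yourself, encode the same synthetic audio both ways and compare how long each run takes (prefix each command with time(1) if your shell supports it; absolute numbers will vary by machine). This is my own sketch, using /dev/urandom so that FLAC actually has compression work to do.

```shell
# Same 30 seconds of noise through the raw PCM "encoder" and through FLAC.
ffmpeg -y -f s16le -ar 48000 -ac 2 -t 30 -i /dev/urandom -c:a pcm_s16le pcm.wav
ffmpeg -y -f s16le -ar 48000 -ac 2 -t 30 -i /dev/urandom -c:a flac flac.flac
```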

Anyway, here's how I would do it.

ffmpeg -i <input video file/stream> -vcodec rawvideo -acodec pcm_s16le -f nut pipe:1 | ffmpeg -i - -vcodec <video output codec> -acodec <audio output codec> -vb <video bitrate if applicable> -ab <audio bitrate if applicable> <final-output-filename>

This worked for me when I last tried, but my goal was to pipe ffmpeg into ffplay, which is a slightly different process.

example:

This pipes a video from ffmpeg to another instance as raw video and 16-bit little-endian PCM (both lossless; if your source is 24-bit PCM, substitute pcm_s24le). It then converts them to H.264 in the second instance, using the Fraunhofer AAC library from the Android project (libfaac is more commonly included in ffmpeg builds; you can use it instead).

ffmpeg -i montypythonsflyingcircus-s1e1.avi -vcodec rawvideo -acodec pcm_s16le -f nut pipe:1 | ffmpeg -i - -vcodec libx264 -acodec libfdk_aac -vb 1200k -ab 96k mpfc-s1e01-output.mkv

If this doesn't pipe the subtitles, you can always rip them to SRT files and then mux them back in later, or add them to the pipes above easily.
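A sketch of that SRT round trip (my addition, not from the original answer): the example builds its own tiny subtitled file from /dev/zero so it can run standalone; with a real file you would start at the "rip" step.

```shell
# Build a one-second test file that contains a subtitle track.
printf '1\n00:00:00,000 --> 00:00:01,000\nhello\n' > subs_in.srt
ffmpeg -y -s 320x240 -f rawvideo -t 1 -i /dev/zero -i subs_in.srt \
       -c:v mpeg4 -c:s copy with_subs.mkv
# Rip the first subtitle track out to an SRT file...
ffmpeg -y -i with_subs.mkv -map 0:s:0 ripped.srt
# ...and mux it back in alongside the video, stream-copying everything.
ffmpeg -y -i with_subs.mkv -i ripped.srt -map 0:v -map 1 -c copy remuxed.mkv
```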