01-20-2023, 11:31 PM
(This post was last modified: 01-20-2023, 11:58 PM by Kevin Kofler. Edit Reason: Added a paragraph.)
Thanks for sharing your edited version! So I see you changed 3 to 4 in two (apparently unrelated) places: the IIO device for the accelerometer, and the camera devices. I will now check whether I also need these changes on my PinePhone, and if so, update the script in my SVN.
As for the media-ctl and ffmpeg options, those are basically based on other scripts I found. I can explain what the commands do:
- The first media-ctl command disables the selfie camera (gc2145) and enables the main/back camera (ov5640). There are two links there, each going from "camera name":0 to 1:0, where the latter is the multiplexed output. The gc2145 link is set to [0] = disabled, the ov5640 link to [1] = enabled. (Instead of the camera name in quotes, you can also use a number, but I have found the number to change more often than the name does, so I changed it back to a name when I wrote the script. Since it is in a script, I do not have to type it all the time anyway, so the long form does not hurt.) See the first sketch after this list.
- The second media-ctl command sets the output format (fmt:…) of the camera to something FFmpeg can process. The camera only supports a few formats; there is a list of them somewhere. In any case, UYVY8_2X8 matches what FFmpeg's yuv420p can process, 1280x720 is the resolution, and 1/30 is the frame interval, i.e., 30 fps. (Also covered in the first sketch below.)
- The first FFmpeg command sets the video and audio "file formats" (video4linux2 and pulse, respectively, which are not really file formats) and the devices (a kernel device file for video4linux2, a PulseAudio device name for pulse). It also sets the input format and resolution to match the above, the thread queue sizes to something that works well with the inputs (I do not remember exactly where those numbers came from), and the output codecs to lossless codecs that can be encoded in real time. That last part is the main trick in the script: encoding directly to VP9 is too slow to work in real time. See the second sketch below.
- The second FFmpeg command re-encodes the above lossless file to VP9 video and Opus audio in a WebM container, also applying the video filters for rotation ($VF) autodetected by the first part of the script. See the third sketch below.
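To make the first two points more concrete, here is a rough sketch of the two media-ctl calls. This is not copied from my script: the media device node (/dev/media1) and the entity names with their I2C addresses ("gc2145 4-003c", "ov5640 4-004c") are assumptions that vary between kernel versions and devices, so check the output of "media-ctl -d /dev/media1 -p" on your own phone first.

    # Disable the selfie camera link, enable the main camera link
    # ([0] = disabled, [1] = enabled; 1:0 is the multiplexed output pad):
    media-ctl -d /dev/media1 --links '"gc2145 4-003c":0->1:0[0]'
    media-ctl -d /dev/media1 --links '"ov5640 4-004c":0->1:0[1]'

    # Set the sensor output format: UYVY8_2X8 bus format, 1280x720,
    # frame interval 1/30 s (i.e., 30 fps):
    media-ctl -d /dev/media1 --set-v4l2 '"ov5640 4-004c":0[fmt:UYVY8_2X8/1280x720@1/30]'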
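And a sketch of the capture step. The video device node (/dev/video2), the "default" PulseAudio source, the thread queue sizes, and the particular lossless codecs (ffvhuff for video, flac for audio) are all placeholder assumptions here, not necessarily what my script uses; the point is only the structure: capture both inputs losslessly in real time first, do the slow encoding afterwards.

    # Capture v4l2 video and PulseAudio audio to a lossless intermediate file:
    ffmpeg -f video4linux2 -thread_queue_size 512 \
           -pixel_format yuv420p -video_size 1280x720 -framerate 30 \
           -i /dev/video2 \
           -f pulse -thread_queue_size 1024 -i default \
           -c:v ffvhuff -c:a flac /tmp/capture.mkv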
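Finally, a sketch of the re-encoding step, assuming the lossless intermediate file from above and a $VF value such as "transpose=1" produced by the rotation autodetection in the first part of the script:

    # Re-encode the lossless capture to VP9 video and Opus audio in WebM,
    # applying the autodetected rotation filters:
    ffmpeg -i /tmp/capture.mkv -vf "$VF" \
           -c:v libvpx-vp9 -c:a libopus output.webm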