Replies: 1 comment
Hello @obecny, if you put aside hls.js and playlists, recording files can be served through the playback server. Opus is fully supported, and all you need is a `<video>` tag in which you can put the URL of the playback server. I used to have HLS-based recording systems; I replaced them with `<video>`-based systems since they are much simpler and more responsive.

If you still want to use hls.js, the Opus codec seems to be supported, judging from its source files. If you still have compatibility issues, you can configure the server to serve 3 paths for each camera:
therefore:

```yml
paths:
  original:
    source: rtsp://my-camera
  opus:
    runOnInit: ffmpeg -rtsp_transport tcp -i rtsp://localhost:8554/original -c:v copy -c:a libopus -f rtsp rtsp://127.0.0.1:$RTSP_PORT/opus
    runOnInitRestart: yes
  aac:
    runOnInit: ffmpeg -rtsp_transport tcp -i rtsp://localhost:8554/original -c:v copy -c:a libfdk_aac -b:a 128k -f rtsp rtsp://127.0.0.1:$RTSP_PORT/aac
    runOnInitRestart: yes
```
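For completeness, here is a minimal sketch of the `<video>`-based playback mentioned above. The host, the port 9996 (the playback server's default) and the query parameters are assumptions based on the playback server's API; check the documentation for your mediamtx version:

```html
<!-- Sketch only: host, port and query parameters are assumptions,
     not taken from this thread. -->
<video controls
  src="http://my-server:9996/get?path=aac&start=2024-01-01T12:00:00Z&duration=60&format=mp4">
</video>
```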
Question
Hi,
I have an RTSP stream that goes into mediamtx, and a web app that plays the WebRTC stream. Additionally, I'm using the record feature with the "MPEG-TS" format and an 8-second segment duration. I also have a system where those recordings can be played back. That system is separate from mediamtx, so I only have the recordings in the mentioned format, and I create an HLS manifest on the fly depending on what time range the user chooses.

Everything plays fine except for one thing: I want to have audio both in WebRTC and in the recordings. If I choose "aac", WebRTC simply does not play, so I have to choose either Opus or no audio. But if I then want to play the recordings, the Opus codec cannot be decoded by the hls.js player, so the video plays but without audio.

What I would like to achieve is a separate codec for WebRTC when playing in the browser, and a separate codec for the recordings when playing them, also in the browser, using the hls.js player.
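For illustration, such an on-the-fly manifest for fixed 8-second MPEG-TS segments might look like this (the segment file names are hypothetical, not from my actual system):

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:8
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:8.000,
2024-01-01_12-00-00.ts
#EXTINF:8.000,
2024-01-01_12-00-08.ts
#EXT-X-ENDLIST
```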
When I add an RTSP stream into mediamtx, I'm adding the path configuration via the API with these settings
I have also tried using an ffmpeg command to create a video fragment with 2 audio tracks (AAC and Opus), but then I'm having some problems with WebRTC. It seems WebRTC does not like it when the stream has even one audio track with a codec other than Opus.
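For reference, the dual-audio attempt looked roughly like this (the URLs and stream mapping are illustrative, not my exact command):

```
# Illustrative sketch: duplicate the single input audio stream and encode
# one copy as Opus and one as AAC; URLs are placeholders.
ffmpeg -rtsp_transport tcp -i rtsp://localhost:8554/original \
  -map 0:v:0 -map 0:a:0 -map 0:a:0 \
  -c:v copy -c:a:0 libopus -c:a:1 aac \
  -f rtsp rtsp://127.0.0.1:8554/dual
```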
So is there a way to choose a command for the recordings? That way I would be able to run ffmpeg with "-c:a aac", for example. I would somehow need two ffmpeg commands, one with Opus and a second with AAC; if so, how do I pass them to "runOnInit"?

Otherwise, what would be the best approach to achieve that?
thx a lot for the help