avpipe

Introduction

The avpipe library is a C/Go library on top of FFmpeg that exposes simple transcoding APIs. It helps third-party developers build server programs or apps that transcode audio/video in C or Go. The avpipe library can transcode audio/video input files and produce MP4 or M4S (for HLS/DASH playout) segments on the fly. The source of the input can be a file on disk, a network connection, or an object in the cloud. Depending on where the source lives, the input of the avpipe library is wired up via handlers (callback functions) in either C or Go. Similarly, the output of the avpipe library is set via callback functions that send the result to a file on disk, a network connection, or an object in the cloud.

Media segments are usually either fragmented MP4 or TS segments. So far, the work on avpipe has focused on generating fragmented MP4 segments (the TS format might be added later).

Build

Prerequisites

The following libraries must be installed:

  • libx264
  • libx265
  • CUDA 10.2 or CUDA 11 (if you want to build with CUDA/NVIDIA support)

Clone FFmpeg and avpipe

The following repositories can be checked out in any directory, but for better organization it is recommended to clone them under the same parent directory, for example src (the workspace containing FFmpeg and avpipe).

  • eluv-io/FFmpeg:
    • git clone git@github.com:eluv-io/FFmpeg.git
  • eluv-io/avpipe:
    • git clone git@github.com:eluv-io/avpipe.git

Build FFmpeg and avpipe

  • Build eluv-io/FFmpeg: in the FFmpeg directory <ffmpeg-path> run
    • ./build.sh
  • Build SRT: check out the SRT code from https://github.com/Haivision/srt and build it
  • Set environment variables: in avpipe directory run
    • source init-env.sh <ffmpeg-path> <srt_path>
  • Build avpipe: in avpipe directory run
    • make
  • This installs the binaries under <avpipe-path>/bin
  • Note: avpipe has to be built and linked with eluv-io/FFmpeg to be functional.

Test avpipe

Design

  • The avpipe library is built on top of several FFmpeg libraries, the most important ones being libx264, libx265, libavcodec, libavformat, libavfilter, and libswresample. In order to achieve all of its features and capabilities, some parts of FFmpeg have been changed. The avpipe library is capable of transcoding or probing an input source (e.g. a media file or a UDP/RTMP stream) and producing output media or probe results. In order to start a transcoding job, the transcoding parameters have to be set.
  • This section describes some of the capabilities and features of the avpipe library and how to set the parameters to utilize them.
Parameters
typedef struct xcparams_t {
    char    *url;                       // URL of the input for transcoding
    int     bypass_transcoding;         // 0 means do transcoding, otherwise bypass transcoding
    char    *format;                    // Output format [Required, Values: dash, hls, mp4, fmp4]
    int64_t start_time_ts;              // Transcode the source starting from this time
    int64_t start_pts;                  // Starting PTS for output
    int64_t duration_ts;                // Transcode time period from start_time_ts (-1 for entire source)
    char    *start_segment_str;         // Specify index of the first segment  TODO: change type to int
    int     video_bitrate;
    int     audio_bitrate;
    int     sample_rate;                // Audio sampling rate
    int     channel_layout;             // Audio channel layout for output
    char    *crf_str;
    char    *preset;                    // Sets encoding speed to compression ratio
    int     rc_max_rate;                // Maximum encoding bit rate, used in conjunction with rc_buffer_size
    int     rc_buffer_size;             // Determines the interval used to limit bit rate [Default: 0]
    int64_t     audio_seg_duration_ts;  // For transcoding and producing audio ABR/mez segments
    int64_t     video_seg_duration_ts;  // For transcoding and producing video ABR/mez segments
    char    *seg_duration;              // In sec units, can be used instead of ts units
    int     seg_duration_fr;
    int     start_fragment_index;
    int     force_keyint;               // Force a key (IDR) frame at this interval
    int     force_equal_fduration;      // Force all frames to have equal frame duration
    char    *ecodec;                    // Video encoder
    char    *ecodec2;                   // Audio encoder when xc_type & xc_audio
    char    *dcodec;                    // Video decoder
    char    *dcodec2;                   // Audio decoder when xc_type & xc_audio
    int     gpu_index;                  // GPU index for transcoding, must be >= 0
    int     enc_height;
    int     enc_width;
    char    *crypt_iv;                  // 16-byte AES IV in hex (Optional, Default: Generated)
    char    *crypt_key;                 // 16-byte AES key in hex (Optional, Default: Generated)
    char    *crypt_kid;                 // 16-byte UUID in hex (Optional, required for CENC)
    char    *crypt_key_url;             // Specify a key URL in the manifest (Optional, Default: key.bin)
    int     skip_decoding;              // If set, then skip the packets until start_time_ts without decoding

    crypt_scheme_t  crypt_scheme;       // Content protection / DRM / encryption (Default: crypt_none)
    xc_type_t       xc_type;            // Default: 0 means transcode 'everything'

    int         seekable;               // Default: 0 means not seekable. A non-seekable stream with the moov box
                                        // at the end causes a lot of reads up to the moov atom.
    int         listen;                     // Default is 1, listen mode for RTMP
    char        *watermark_text;            // Default: NULL or empty text means no watermark
    char        *watermark_xloc;            // Default 0
    char        *watermark_yloc;            // Default 0
    float       watermark_relative_sz;      // Default 0
    char        *watermark_font_color;      // Default: black
    int         watermark_shadow;           // Default 1, means a shadow exists
    char        *overlay_filename;          // Overlay file name
    char        *watermark_overlay;         // Overlay image buffer, default is NULL
    image_type  watermark_overlay_type;     // Overlay image type, default is png
    int         watermark_overlay_len;      // Length of watermark_overlay if there is any
    char        *watermark_shadow_color;    // Watermark shadow color
    char        *watermark_timecode;        // Watermark timecode string (i.e 00\:00\:00\:00)
    float       watermark_timecode_rate;    // Watermark timecode frame rate
    int         audio_index[MAX_AUDIO_MUX]; // Audio index(s) for mez making
    int         n_audio;                    // Number of entries in audio_index
    int         audio_fill_gap;             // Audio only, fills the gap if there is a jump in PTS
    int         sync_audio_to_stream_id;    // mpegts only, default is 0
    int         bitdepth;                   // Can be 8, 10, 12
    char        *max_cll;                   // Maximum Content Light Level (HDR only)
    char        *master_display;            // Master display (HDR only)
    int         stream_id;                  // Stream id to transcode, should be >= 0
    char        *filter_descriptor;         // Filter descriptor, required when xc_type is xc_audio_pan or xc_audio_merge
    char        *mux_spec;
    int64_t     extract_image_interval_ts;  // Write frames at this interval. Default: -1 
    int64_t     *extract_images_ts;         // Write frames at these timestamps. 
    int         extract_images_sz;          // Size of the array extract_images_ts

    int         video_time_base;            // New video encoder time_base (1/video_time_base)
    int         video_frame_duration_ts;    // Frame duration of the output video in time base

    int         debug_frame_level;
    int         connection_timeout;         // Connection timeout in sec for RTMP or MPEGTS protocols
} xcparams_t;

  • Determining input: the url parameter uniquely identifies the input source to be transcoded. It can be a file name, a network URL that identifies a stream (e.g. udp://localhost:22001), or another source that contains the input audio/video (see the Go sketch after this list).

  • Determining output format: the avpipe library can produce several output formats: DASH/HLS adaptive bitrate (ABR) segments, fragmented MP4 segments, fragmented MP4 (one file), and image files. Set the format field to “dash”, “hls”, “fmp4-segment”, or “image2” to select the corresponding output format.

  • Specifying input streams: this may require setting different params as follows:

    • If xc_type=xc_audio and audio_index is set to an audio stream index, then only the specified audio stream is transcoded.
    • If xc_type=xc_video, then the avpipe library automatically picks the first detected input video stream for transcoding.
    • If xc_type=xc_audio_join, then the avpipe library creates an audio join filter graph and joins the selected input audio streams to produce one joint audio stream.
    • If xc_type=xc_audio_pan, then the avpipe library creates an audio pan filter graph that pans multiple channels of one input stream into one output stereo stream.
  • Specifying decoder/encoder: the ecodec/dcodec params set the video encoder/decoder, and the ecodec2/dcodec2 params set the audio encoder/decoder. For video, the decoder can be one of "h264", "h264_cuvid", "jpeg2000", "hevc", and the encoder can be "libx264", "libx265", "h264_nvenc", "h264_videotoolbox", or "mjpeg". For audio, the decoder can be “aac” or “ac3”, and the encoder can be "aac", "ac3", "mp2", or "mp3".

  • Transcoding multiple audio streams: the avpipe library can transcode one or multiple audio streams at the same time. The audio_index array holds the indexes of the audio streams to be transcoded, and the n_audio parameter gives the number of entries in audio_index.

  • Using a GPU: the avpipe library can utilize NVIDIA cards for transcoding. To use an NVIDIA GPU, set gpu_index (the default is GPU index 0); the nvidia-smi command lists the GPU indexes available on a machine. In addition, set the decoder and encoder to "h264_cuvid" and "h264_nvenc" respectively. Finally, so that the GPU index is interpreted correctly, set the environment variable CUDA_DEVICE_ORDER=PCI_BUS_ID before running the program.

  • Text watermarking: set watermark_text, watermark_xloc, watermark_yloc, watermark_relative_sz, and watermark_font_color while transcoding video (xc_type=xc_video); the specified watermark text then appears at the specified location.

  • Image watermarking: set watermark_overlay (the buffer containing the overlay image), watermark_overlay_len, watermark_xloc, and watermark_yloc while transcoding video (xc_type=xc_video).

  • Live streaming with UDP/HLS/RTMP: the avpipe library can transcode an input live stream and generate MP4 or ABR segments. The parameter setting is similar to transcoding any other input file, but setting up the input/output handlers is different (see the live and IO handler sections below).

  • Extracting images: the avpipe library can extract images either at a fixed time interval (extract_image_interval_ts) or at specific timestamps (extract_images_ts).

  • HDR support: the avpipe library can create HDR output while transcoding with the H.265 encoder. To make HDR content, the two parameters max_cll and master_display have to be set.

  • Bypass feature: setting bypass_transcoding to 1 skips transcoding and copies the input packets to the output. This saves a lot of CPU and time when the input already matches the desired output and transcoding is unnecessary.

  • Muxing audio/video ABR segments and creating MP4 files: this feature creates MP4 files from transcoded audio/video segments. A muxing spec tells avpipe which ABR segments should be stitched together to produce the final MP4. To use this feature, set xc_type to xc_mux and point the mux_spec param at a buffer containing the muxing spec.

  • Transcoding from a specific timebase offset: the start_time_ts parameter can be used to skip some input and start transcoding from the specified TS. This is useful for starting from a certain point rather than the beginning of the file/stream.

  • Audio join/pan/merge filters:

    • setting xc_type = xc_audio_join joins 2 or more audio inputs and creates a new audio output (for example, joining two mono streams into one stereo stream).
    • setting xc_type = xc_audio_pan picks different audio channels from the input and creates a new audio stream (for example, picking channels from a 5.1 channel layout and producing a two-channel stereo stream).
    • setting xc_type = xc_audio_merge merges different input audio streams and produces a new multi-channel output stream (for example, merging several mono input streams into a new 5.1 stream).
  • Setting video timebase: video_time_base sets the timebase of the generated video to 1/video_time_base (the value has to be bigger than 10000).

  • Video frame duration: the video_frame_duration_ts parameter sets the duration of each output video frame in the output timebase. Together with video_time_base it can be used to normalize the video frames and their durations. For example, for a 60 fps stream with video_frame_duration_ts equal to 256, video_time_base would be 15360. As another example, for 59.94 fps, video_frame_duration_ts can be 1001 with video_time_base 60000; in this case a segment of 1800 frames is 1801800 timebase units long.

  • Debugging with frames: if the debug_frame_level parameter is on, the logs also include very low-level debug messages that trace the reading and writing of every piece of data.

  • Connection timeout: this parameter is useful when recording/transcoding RTMP or MPEGTS streams. If avpipe is listening for an RTMP stream, connection_timeout determines the time in seconds to listen for an incoming RTMP connection. If avpipe is listening for incoming UDP MPEGTS packets, connection_timeout determines the time in seconds to wait for the first incoming UDP packet (if no packet is received within connection_timeout, a timeout occurs and an error is generated).
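
A minimal Go sketch of setting a few of these parameters, assuming the github.com/eluv-io/avpipe import path; the input file name and the chosen values are illustrative only (the Go XcParams struct is listed under Documentation below):

package main

import (
  "log"

  "github.com/eluv-io/avpipe"
)

func main() {
  p := avpipe.NewXcParams()  // fills in unset/default values
  p.Url = "media/input.mov"  // illustrative input to transcode
  p.Format = "fmp4-segment"  // produce fragmented MP4 segments
  p.XcType = avpipe.XcAll    // transcode both audio and video
  p.Ecodec = "libx264"       // video encoder
  p.Ecodec2 = "aac"          // audio encoder
  p.SegDuration = "30.03"    // segment duration in seconds
  p.ForceKeyInt = 48         // force a key (IDR) frame every 48 frames
  p.Seekable = true          // a file on disk, not a live stream

  // IO handlers must be registered first; see "Setting up Go IO handlers".
  if err := avpipe.Xc(p); err != nil {
    log.Fatal(err)
  }
}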

C/Go interaction architecture

The avpipe library has two main layers (components): the avpipe C library and the avpipe C/Go library.

Avpipe C library:

The first layer is the C code, mainly in the libavpipe directory, built on top of several FFmpeg libraries such as libx264, libx265, libavcodec, libavformat, libavfilter, and libswresample. This is the main transcoding engine of avpipe and defines the library's low-level C transcoding API. Avpipe uses the callbacks in avio_alloc_context() to read, write, or seek into the media files.

These callbacks are used by the FFmpeg libraries to read from and seek into the input file (or write to and seek into the output file). The avpipe C library provides the avpipe_io_handler_t struct to client applications for transcoding (at the moment exc and the Go layer are the users of avpipe_io_handler_t). Notice that three of these functions correspond exactly to the read, write, and seek callbacks of avio_alloc_context(), while three more are added to open, close, and stat the input or output:

typedef struct avpipe_io_handler_t {
  avpipe_opener_f avpipe_opener;
  avpipe_closer_f avpipe_closer;
  avpipe_reader_f avpipe_reader;
  avpipe_writer_f avpipe_writer;
  avpipe_seeker_f avpipe_seeker;
  avpipe_stater_f avpipe_stater;
} avpipe_io_handler_t;
  • The avpipe_opener() callback function opens the media file (makes a transcoding session) and initializes an ioctx_t structure.
  • The avpipe_closer() callback function closes the corresponding resources that were allocated for the media file (releases the resources that were allocated with the opener).
  • The avpipe_reader() callback function reads the packets/frames of the opened media file for transcoding. This callback function automatically gets called by ffmpeg during transcoding.
  • The avpipe_writer() callback function writes the transcoded packets/frames to the output media file. This callback function automatically gets called by ffmpeg during transcoding.
  • The avpipe_seeker() callback function seeks to a specific offset and gets called automatically by ffmpeg during transcoding.
  • The avpipe_stater() callback function publishes different statistics about transcoding and gets called by the avpipe library itself.

In order to start a transcoding session with avpipe C library, the following APIs are provided:

  • avpipe_init(xctx_t **xctx, avpipe_io_handler_t *in_handlers, avpipe_io_handler_t *out_handlers, xcparams_t *p): initializes a transcoding context with the provided input handlers, output handlers, and transcoding parameters.
  • avpipe_fini(xctx_t **xctx): releases all the resources associated with an already initialized transcoding context.
  • avpipe_xc(xctx_t *xctx, int do_instrument): starts the transcoding corresponding to a transcoding context that was initialized by avpipe_init(). If do_instrument is set, it also performs some instrumentation while transcoding.
  • avpipe_probe(avpipe_io_handler_t *in_handlers, xcparams_t *p, xcprobe_t **xcprobe, int *n_streams): probes an input media that is accessed via the in_handlers callback functions. It is recommended to set the seekable parameter to make searching for metadata in the input faster when the input is not a live stream; for a live stream, seekable should not be set, since it is not possible to seek back and forth in live input data.
C/Go layer:

The second layer is the C/Go code: mainly avpipe.go, which provides the API for Go programs, and avpipe.c, which glues the Go layer to the avpipe C library. The handlers in avpipe.c are the callback functions that get called by ffmpeg; they in turn call the Go layer's callback functions.

The avpipe C/Go layer provides APIs that can be used by Go programs. This layer is implemented in two files, avpipe.c and avpipe.go. The transcoding APIs exposed to Go client programs fall into two categories:

1) APIs with no handle: these APIs are very simple to use and are useful when the client application deals with short transcodings (i.e. transcodings that take 20-30 seconds) and doesn't need to cancel them.

2) Handle-based APIs: these work with a handle, and the corresponding transcoding can be cancelled using that handle. They are useful, for example, when dealing with live streams and the application wants to cancel a live stream recording.

The APIs in the C/Go library can be categorized as follows:

No handle based transcoding APIs are:
  • Xc(params *XcParams): initializes a transcoding context in avpipe and starts running the corresponding transcoding job.
  • Mux(params *XcParams): initializes a transcoding context in avpipe and starts running the corresponding muxing job.
  • Probe(params *XcParams): starts probing the specified input in the url parameter. In order to make probing faster, it is better to set seekable in params to true when probing non-live inputs.
Handle based transcoding APIs (see the sketch after these API lists) are:
  • XcInit(params *XcParams): initializes a transcoding context in avpipe and returns its corresponding 32-bit handle to the client code. This handle can be used to start or cancel the transcoding job.
  • XcRun(handle int32): starts the transcoding job that corresponds to the obtained handle by XcInit().
  • XcCancel(handle int32): cancels or stops the transcoding job corresponding to the handle.
IO handler APIs are:
  • InitIOHandler(inputOpener InputOpener, outputOpener OutputOpener): This is used to set global input/output opener for avpipe transcoding. If there is no specific input or output opener for a URL the global input/output opener will be used.
  • InitUrlIOHandler(url string, inputOpener InputOpener, outputOpener OutputOpener): This is used to set input/output opener specific to a URL when transcoding. The input or output opener set by this function is only valid for the specified url and will be unset after Xc() or Probe() is complete.
  • InitMuxIOHandler(inputOpener InputOpener, outputOpener OutputOpener): Sets the global handler for muxing (similar to InitIOHandler for transcoding).
  • InitUrlMuxIOHandler(url string, inputOpener InputOpener, outputOpener OutputOpener): This is used to set input/output opener specific to a URL when muxing (similar to InitUrlIOHandler for transcoding).
Miscellaneous APIs are:
  • H264GuessProfile(bitdepth, width, height int): returns an estimated H.264 profile for the given bit depth and dimensions.
  • H264GuessLevel(profile int, bitrate int64, framerate, width, height int): returns an estimated H.264 level for the given profile, bitrate, frame rate, and dimensions.
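
A hedged sketch of the handle-based flow described above: initialize, run in a goroutine, and cancel on demand. The UDP URL and the timeout are illustrative, and error handling is minimal:

package main

import (
  "log"
  "time"

  "github.com/eluv-io/avpipe"
)

func main() {
  p := avpipe.NewXcParams()
  p.Url = "udp://localhost:22001" // illustrative live input
  p.Format = "fmp4-segment"
  p.XcType = avpipe.XcAll

  handle, err := avpipe.XcInit(p) // obtain a cancellable handle
  if err != nil {
    log.Fatal(err)
  }

  done := make(chan error, 1)
  go func() { done <- avpipe.XcRun(handle) }() // blocks until finished or cancelled

  select {
  case err := <-done:
    log.Println("transcoding finished:", err)
  case <-time.After(10 * time.Minute):
    // Stop a long-running recording via the handle.
    if err := avpipe.XcCancel(handle); err != nil {
      log.Println("cancel failed:", err)
    }
    log.Println("cancelled:", <-done) // XcRun returns EAV_CANCELLED after a successful cancel
  }
}
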
Setting up Go IO handlers

As mentioned before, the source of transcoding in the avpipe library can be a file on disk, a TCP connection like RTMP, UDP datagrams like an MPEGTS stream, or even an object in the cloud. Similarly, the output of transcoding can be a file on disk, a memory cache, or another object in the cloud. This flexibility in avpipe is achieved through two interfaces: InputOpener and OutputOpener.

InputOpener interface:

This interface has only one method, Open(), which must be implemented. It is called just before transcoding, probing, or muxing starts. In the avpipe library every input is identified by a url and a unique fd (the same way a file is identified by an fd in your operating system); when Open() is called, the fd and the url are passed to it. Open() then returns an implementation of the InputHandler interface. In InputHandler, the Read() method reads the input, Seek() seeks to a specific position in the input, Close() closes the input, Size() reports the size of the input, and Stat() reports some input statistics.

type InputOpener interface {
  // fd uniquely identifies the input being opened.
  // url is the input source for transcoding.
  Open(fd int64, url string) (InputHandler, error)
}

type InputHandler interface {
  // Reads from input stream into buf.
  // Returns (0, nil) to indicate EOF.
  Read(buf []byte) (int, error)

  // Seeks to a specific offset of the input.
  Seek(offset int64, whence int) (int64, error)

  // Closes the input.
  Close() error

  // Returns the size of input, if the size is not known returns 0 or -1.
  Size() int64

  // Reports some stats
  Stat(statType AVStatType, statArgs interface{}) error
}
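
Below is a minimal file-backed sketch of these interfaces, assuming the github.com/eluv-io/avpipe import. The README and the generated docs below differ slightly on the exact Stat() signature; this sketch follows the interface shown above and leaves Stat() as a no-op:

package handlers

import (
  "io"
  "os"

  "github.com/eluv-io/avpipe"
)

type fileInputOpener struct{}

func (o *fileInputOpener) Open(fd int64, url string) (avpipe.InputHandler, error) {
  f, err := os.Open(url) // here the url is simply a local path
  if err != nil {
    return nil, err
  }
  return &fileInput{f: f}, nil
}

type fileInput struct{ f *os.File }

func (i *fileInput) Read(buf []byte) (int, error) {
  n, err := i.f.Read(buf)
  if err == io.EOF {
    return 0, nil // avpipe expects (0, nil) to signal EOF
  }
  return n, err
}

func (i *fileInput) Seek(offset int64, whence int) (int64, error) {
  return i.f.Seek(offset, whence)
}

func (i *fileInput) Close() error { return i.f.Close() }

func (i *fileInput) Size() int64 {
  fi, err := i.f.Stat()
  if err != nil {
    return -1 // size unknown
  }
  return fi.Size()
}

func (i *fileInput) Stat(statType avpipe.AVStatType, statArgs interface{}) error {
  return nil // stats ignored in this sketch
}
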
OutputOpener interface:

Similar to InputOpener, this interface has only one method, Open(), which must be implemented. Open() is called before a new transcoding segment is generated. The segments generated by avpipe can be HLS/DASH segments (m4s files) or fragmented MP4 files; in either case Open() is called before the segment is generated. Open() has to return an implementation of the OutputHandler interface, which is used to write, seek, close, and stat output segments. More specifically, in OutputHandler the Write() method writes to the output segment, Seek() seeks within the output segment, Close() closes the output segment, and Stat() reports some output statistics.

type OutputOpener interface {
  // h uniquely identifies the transcoding session.
  // fd uniquely identifies the output being opened.
  Open(h, fd int64, stream_index, seg_index int, pts int64, out_type AVType) (OutputHandler, error)
}

type OutputHandler interface {
  // Writes encoded stream to the output.
  Write(buf []byte) (int, error)

  // Seeks to specific offset of the output.
  Seek(offset int64, whence int) (int64, error)

  // Closes the output.
  Close() error

  // Reports some stats
  Stat(avType AVType, statType AVStatType, statArgs interface{}) error
}

Note that the methods in the InputHandler and OutputHandler interfaces are called indirectly by ffmpeg. For example implementations of these interfaces, refer to avpipe_test.go or the elvxc directory.
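
For illustration, a matching file-backed OutputOpener sketch; the segment naming scheme is an assumption, and the Stat() signature follows the interface shown above:

package handlers

import (
  "fmt"
  "os"

  "github.com/eluv-io/avpipe"
)

type fileOutputOpener struct{ dir string }

func (o *fileOutputOpener) Open(h, fd int64, streamIndex, segIndex int,
  pts int64, outType avpipe.AVType) (avpipe.OutputHandler, error) {

  // One file per generated segment; the naming scheme is up to the application.
  name := fmt.Sprintf("%s/stream%d-seg%d.mp4", o.dir, streamIndex, segIndex)
  f, err := os.Create(name)
  if err != nil {
    return nil, err
  }
  return &fileOutput{f: f}, nil
}

type fileOutput struct{ f *os.File }

func (o *fileOutput) Write(buf []byte) (int, error) { return o.f.Write(buf) }

func (o *fileOutput) Seek(offset int64, whence int) (int64, error) {
  return o.f.Seek(offset, whence)
}

func (o *fileOutput) Close() error { return o.f.Close() }

func (o *fileOutput) Stat(avType avpipe.AVType, statType avpipe.AVStatType, statArgs interface{}) error {
  return nil // stats ignored in this sketch
}

The openers are then registered before transcoding or probing, e.g. avpipe.InitIOHandler(&fileInputOpener{}, &fileOutputOpener{dir: "out"}).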

Transcoding Audio/Video

Avpipe library has the following transcoding options to transcode audio/video:

  • xc_all: in this mode both audio and video are decoded and encoded according to the transcoding params. Usually there is no need to specify the decoder; it is detected automatically.
  • xc_video: in this mode video will be decoded and encoded according to transcoding params.
  • xc_audio: in this mode audio will be decoded and encoded according to encoder param (by default it is AAC).
  • xc_audio_pan: in this mode audio pan filter will be used before injecting the audio frames into the encoder.
  • xc_audio_merge: in this mode audio merge filter will be used before injecting the audio frames into the encoder.
  • xc_mux: in this mode avpipe muxes some audio and video ABR segments and produces an MP4 output. In this case a mux_spec that points to the ABR segments to be muxed must be provided.
  • xc_extract_images: in this mode avpipe extracts specific images/frames at specific times from a video (see the sketch below).
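
For example, a hedged sketch of extracting frames at specific timestamps; it assumes the github.com/eluv-io/avpipe import, registered IO handlers, and illustrative PTS values:

func extractImages() error {
  p := avpipe.NewXcParams()
  p.Url = "media/input.mov" // illustrative input
  p.Format = "image2"
  p.XcType = avpipe.XcExtractImages
  p.ExtractImagesTs = []int64{0, 512000, 1024000} // PTS values in the input timebase
  return avpipe.Xc(p)
}
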
Audio specific params
  • channel_layout: in all the above cases the channel_layout parameter can be set to specify the output channel layout. If the channel_layout param is not set, the input channel layout carries over to the output.
  • audio_index: The audio_index param can be used to pick the specified audio (using stream index) for transcoding.
  • audio_seg_duration_ts: This param determines the duration of the generated audio segment in TS.
  • audio_bitrate: This param sets the audio bitrate in the output.
  • filter_descriptor: the filter_descriptor param must be set when the transcoding type is xc_audio_pan/xc_audio_merge (see the sketch below).
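
A hedged sketch of an audio-pan transcode; the filter descriptor uses FFmpeg's pan filter syntax, and the exact string, stream index, and file name below are illustrative assumptions:

func audioPan() error {
  p := avpipe.NewXcParams()
  p.Url = "media/multichannel.mov" // illustrative input
  p.Format = "fmp4-segment"
  p.XcType = avpipe.XcAudioPan
  p.Ecodec2 = "aac"   // audio encoder
  p.AudioIndex[0] = 1 // pick input audio stream 1 (illustrative)
  p.NumAudio = 1
  // FFmpeg pan filter syntax; the exact mapping depends on the input channel layout.
  p.FilterDescriptor = "pan=stereo|c0=c0|c1=c1"
  return avpipe.Xc(p)
}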

Avpipe stat reports

  • The avpipe library reports some input and output stats via events.
  • Input stats include the following events:
    • in_stat_bytes_read: input bytes read (reported for all inputs except RTMP).
    • in_stat_audio_frame_read: input audio frames read so far.
    • in_stat_video_frame_read: input video frames read so far.
    • in_stat_decoding_audio_start_pts: input stream start pts for audio.
    • in_stat_decoding_video_start_pts: input stream start pts for video.
  • Input stats are reported via the input handler's avpipe_stater() callback function.
  • A Go client of the avpipe library must implement the InputHandler.Stat() method (see the sketch below).
  • Output stats include the following events:
    • out_stat_bytes_written: bytes written to the current segment so far. Audio and video each have their own output segment.
    • out_stat_frame_written: includes the total frames written and the frames written to the current segment.
    • out_stat_encoding_end_pts: end pts of the generated segment. This event is generated when an output segment is complete and is being closed.
  • Output stats are reported via the output handler's avpipe_stater() callback function.
  • A Go client of the avpipe library must implement the OutputHandler.Stat() method.
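
For example, the no-op Stat() from the earlier input sketch could be replaced with a variant that logs a few of these events (assuming the log import; the concrete type of statArgs depends on the event, so it is printed generically here):

func (i *fileInput) Stat(statType avpipe.AVStatType, statArgs interface{}) error {
  switch statType {
  case avpipe.AV_IN_STAT_BYTES_READ:
    log.Printf("input: bytes read: %v", statArgs)
  case avpipe.AV_IN_STAT_DECODING_VIDEO_START_PTS:
    log.Printf("input: video start pts: %v", statArgs)
  default:
    // other events ignored in this sketch
  }
  return nil
}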

Setting up live

  • Avpipe can handle HLS, UDP TS, and RTMP live streams. In each case the live stream parameters need to be set properly.
  • If the parameters are set correctly, the avpipe recorder reads the live data and generates live audio/video mezzanine files.
  • Using the xc_all transcoding feature, which was added recently, avpipe can transcode both the audio and video of a live stream and produce mezzanine files (see the sketch below).
  • In order to produce good quality output, the live audio and video have to be synced.
  • If the input has multiple audio streams, avpipe can sync the selected audio with one of the elementary video streams, specified by sync_audio_to_stream_id, based on the first key frame in the video stream. In this case, sync_audio_to_stream_id is set to the stream id of the video elementary stream.
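
A hedged sketch of recording an RTMP live stream with the handle-based APIs, so the recording can be cancelled later; the listen URL and timeout values are illustrative:

func recordLive() (int32, error) {
  p := avpipe.NewXcParams()
  p.Url = "rtmp://localhost:1935/live/stream" // illustrative listen URL
  p.Format = "fmp4-segment"
  p.XcType = avpipe.XcAll
  p.Listen = true          // wait for an incoming RTMP connection
  p.ConnectionTimeout = 30 // fail if nothing connects within 30 seconds

  handle, err := avpipe.XcInit(p)
  if err != nil {
    return 0, err
  }
  go func() { _ = avpipe.XcRun(handle) }() // cancel later with avpipe.XcCancel(handle)
  return handle, nil
}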

Transcoding with preset

  • Avpipe can apply the preset parameter when encoding with the H264 encoder.
  • Experiments show that using preset faster instead of medium generates almost the same output size/bandwidth while keeping the picture quality high.
  • The other advantage of preset faster over medium is that it consumes less CPU and encodes faster.
  • For comparison, the following commands were used to generate the mezzanines for creed_5_min.mov:
./bin/etx -f ../media/MGM/creed_5_min.mov -tx-type all -force-keyint 48 -seg-duration 30.03 -seekable 1 -format fmp4-segment -> preset medium
./bin/etx -f ../media/MGM/creed_5_min.mov -tx-type all -force-keyint 48 -seg-duration 30.03 -seekable 1 -format fmp4-segment -preset faster -> preset faster

+--------+-----------------+------------------+---------------------------------------------------------------------------------------------------+
|        |                 |                  |                                   Generated segments (bytes)                                     |
| Preset | Input file      | Transcoding time |  Seg 1  |  Seg 2  |  Seg 3   |  Seg 4   |  Seg 5   |  Seg 6   |  Seg 7   |  Seg 8   |  Seg 9   |  Seg 10  |
+--------+-----------------+------------------+---------+---------+----------+----------+----------+----------+----------+----------+----------+----------+
| Medium | creed_5_min.mov | 3m56.218s        | 9832287 | 5068374 | 15237931 | 14886633 | 10418712 | 15105825 | 14009736 | 14090010 | 13788974 | 13206589 |
| Faster | creed_5_min.mov | 3m3.294s         | 10004519| 4570883 | 14698976 | 14470983 | 10156648 | 13879914 | 13820847 | 13222401 | 13172066 | 12314343 |
+--------+-----------------+------------------+---------+---------+----------+----------+----------+----------+----------+----------+----------+----------+

  • Experimenting with other input files and some RTMP streams showed the same results.

h.265 Implementation Notes

  • h.265 levels for ingests follow the same behaviour as ffmpeg. Below are examples from our testing:
+-------------------+---------+--------------------+--------------------+
| Source Resolution | Profile | Source h.265 Level | Fabric h.265 Level |
+-------------------+---------+--------------------+--------------------+
| 1080p             | Main    | 4.0                | 4.0                |
| 1080p             | Main 10 | 4.0                | 4.0                |
| 1080p             | Main    | 4.1                | 4.0                |
| 1080p             | Main 10 | 4.1                | 4.0                |
| 1080p             | Main    | 5.0                | 4.0                |
| 1080p             | Main 10 | 5.0                | 4.0                |
| 1080p             | Main    | 5.1                | 4.0                |
| 4k                | Main 10 | 5.1                | 5.0                |
| 4k                | Main 10 | 5.2                | 5.0                |
+-------------------+---------+--------------------+--------------------+

Documentation

Overview

Package avpipe has four main interfaces that have to be implemented by the client code:

  1. InputOpener: is the input factory interface that needs an implementation to generate an InputHandler.

  2. InputHandler: is the input handler with Read/Seek/Size/Close methods. An implementation of this interface is needed by ffmpeg to process input streams properly.

  3. OutputOpener: is the output factory interface that needs an implementation to generate an OutputHandler.

  4. OutputHandler: is the output handler with Write/Seek/Close methods. An implementation of this interface is needed by ffmpeg to write encoded streams properly.

The package also defines various audio/video transcoding errors.

Index

Constants

const (
	XcNone             XcType = iota
	XcVideo                   = 1
	XcAudio                   = 2
	XcAll                     = 3  // XcAudio | XcVideo
	XcAudioMerge              = 6  // XcAudio | 0x04
	XcAudioJoin               = 10 // XcAudio | 0x08
	XcAudioPan                = 18 // XcAudio | 0x10
	XcMux                     = 32
	XcExtractImages           = 65  // XcVideo | 2^6
	XcExtractAllImages        = 129 // XcVideo | 2^7
	Xcprobe                   = 256
)
const (
	XcProfileNone         XcProfile = iota
	XcProfileH264BaseLine           = C.FF_PROFILE_H264_BASELINE // 66
	XcProfileH264Heigh              = C.FF_PROFILE_H264_HIGH     // 100
	XcProfileH264Heigh10            = C.FF_PROFILE_H264_HIGH_10  // 110
)
const (
	UnknownImage = iota
	PngImage
	JpgImage
	GifImage
)
const (
	AVMEDIA_TYPE_UNKNOWN    = -1
	AVMEDIA_TYPE_VIDEO      = 0
	AVMEDIA_TYPE_AUDIO      = 1
	AVMEDIA_TYPE_DATA       = 2 ///< Opaque data information usually continuous
	AVMEDIA_TYPE_SUBTITLE   = 3
	AVMEDIA_TYPE_ATTACHMENT = 4 ///< Opaque data information usually sparse
	AVMEDIA_TYPE_NB         = 5
)
const (
	AV_FIELD_UNKNOWN     = 0
	AV_FIELD_PROGRESSIVE = 1
	AV_FIELD_TT          = 2 //< Top coded_first, top displayed first
	AV_FIELD_BB          = 3 //< Bottom coded first, bottom displayed first
	AV_FIELD_TB          = 4 //< Top coded first, bottom displayed first
	AV_FIELD_BT          = 5 //< Bottom coded first, top displayed first
)
const (
	AV_IN_STAT_BYTES_READ               = 1
	AV_IN_STAT_AUDIO_FRAME_READ         = 2
	AV_IN_STAT_VIDEO_FRAME_READ         = 4
	AV_IN_STAT_DECODING_AUDIO_START_PTS = 8
	AV_IN_STAT_DECODING_VIDEO_START_PTS = 16
	AV_OUT_STAT_BYTES_WRITTEN           = 32
	AV_OUT_STAT_FRAME_WRITTEN           = 64
	AV_IN_STAT_FIRST_KEYFRAME_PTS       = 128
	AV_OUT_STAT_ENCODING_END_PTS        = 256
	AV_IN_STAT_DATA_SCTE35              = 512
)
const AvNoPtsValue = uint64(C.uint64_t(0x8000000000000000))

This corresponds to AV_NOPTS_VALUE.

const MaxAudioMux = C.MAX_STREAMS

Variables

var AVFieldOrderNames = map[AVFieldOrder]string{
	AV_FIELD_UNKNOWN:     "",
	AV_FIELD_PROGRESSIVE: "progressive",
	AV_FIELD_TT:          "tt",
	AV_FIELD_BB:          "bb",
	AV_FIELD_TB:          "tb",
	AV_FIELD_BT:          "bt",
}
var AVMediaTypeNames = map[AVMediaType]string{
	AVMEDIA_TYPE_UNKNOWN:    "unknown",
	AVMEDIA_TYPE_VIDEO:      "video",
	AVMEDIA_TYPE_AUDIO:      "audio",
	AVMEDIA_TYPE_DATA:       "data",
	AVMEDIA_TYPE_SUBTITLE:   "subtitle",
	AVMEDIA_TYPE_ATTACHMENT: "attachment",
	AVMEDIA_TYPE_NB:         "nb",
}
var EAV_AUDIO_SAMPLE = errors.New("EAV_AUDIO_SAMPLE")

EAV_AUDIO_SAMPLE is the error returned when avpipe fails to convert audio samples.

var EAV_BAD_HANDLE = errors.New("EAV_BAD_HANDLE")

EAV_BAD_HANDLE is the error returned when the transcoding session handle is not valid

var EAV_CANCELLED = errors.New("EAV_CANCELLED")

EAV_CANCELLED is the error returned when a transcoding session is cancelled. This error is returned from Xc() or XcRun() if the transcoding session is cancelled by a successful XcCancel() call.

var EAV_CANCEL_FAILED = errors.New("EAV_CANCEL_FAILED")

EAV_CANCEL_FAILED is the error returned when cancelling a transcoding session fails. This error is returned from XcCancel() if cancelling a transcoding fails.

var EAV_CODEC_CONTEXT = errors.New("EAV_CODEC_CONTEXT")

EAV_CODEC_CONTEXT is the error returned when a codec context doesn't exist for a stream index, when avpipe fails to find the codec context for a stream, or when avpipe fails to allocate a codec format context.

var EAV_CODEC_PARAM = errors.New("EAV_CODEC_PARAM")

EAV_CODEC_PARAM is the error returned when avpipe fails to copy codec params to the codec context for an output stream, or when the codec parameters are not correct.

var EAV_FILTER_INIT = errors.New("EAV_FILTER_INIT")

EAV_FILTER_INIT is the error returned when avpipe fails to initialize a filter.

var EAV_FILTER_STRING_INIT = errors.New("EAV_FILTER_STRING_INIT")

EAV_FILTER_STRING_INIT is the error returned when avpipe fails to obtain filter string.

var EAV_IO_TIMEOUT = errors.New("EAV_IO_TIMEOUT")

EAV_IO_TIMEOUT is the error returned when there is a timeout in network/disk io

var EAV_MEM_ALLOC = errors.New("EAV_MEM_ALLOC")

EAV_MEM_ALLOC is the error returned when avpipe fails to allocate memory for a frame, or a packet, or a context (i.e format context).

var EAV_NUM_STREAMS = errors.New("EAV_NUM_STREAMS")

EAV_NUM_STREAMS is the error returned when the number of streams is wrong while avpipe does transcoding.

var EAV_OPEN_CODEC = errors.New("EAV_OPEN_CODEC")

EAV_OPEN_CODEC is the error returned when avpipe fails to open the codec.

var EAV_OPEN_INPUT = errors.New("EAV_OPEN_INPUT")

EAV_OPEN_INPUT is the error returned when avpipe fails to open the input or stream container due to invalid/corrupted data

var EAV_PARAM = errors.New("EAV_PARAM")

EAV_PARAM is the error returned when one of the transcoding params is not correct.

var EAV_PTS_WRAPPED = errors.New("EAV_PTS_WRAPPED")

EAV_PTS_WRAPPED is the error returned when PTS is wrapped in the source/input.

var EAV_READ_INPUT = errors.New("EAV_READ_INPUT")

EAV_READ_INPUT is the error returned when avpipe fails to read input packets.

var EAV_RECEIVE_FILTER_FRAME = errors.New("EAV_RECEIVE_FILTER_FRAME")

EAV_RECEIVE_FILTER_FRAME is the error returned when avpipe fails to receive frame from filter.

var EAV_RECEIVE_FRAME = errors.New("EAV_RECEIVE_FRAME")

EAV_RECEIVE_FRAME is the error returned when avpipe fails to receive frame from decoder or audio fifo.

var EAV_RECEIVE_PACKET = errors.New("EAV_RECEIVE_PACKET")

EAV_RECEIVE_PACKET is the error returned when avpipe fails to receive packets from encoder.

var EAV_SEEK = errors.New("EAV_SEEK")

EAV_SEEK is the error returned when avpipe fails to seek.

var EAV_SEND_PACKET = errors.New("EAV_SEND_PACKET")

EAV_SEND_PACKET is the error returned when avpipe fails to send packet to decoder. This error means the packet is invalid.

var EAV_STREAM_INDEX = errors.New("EAV_STREAM_INDEX")

EAV_STREAM_INDEX is the error returned when stream index is not correct.

var EAV_STREAM_INFO = errors.New("EAV_STREAM_INFO")

EAV_STREAM_INFO is the error returned when avpipe fails to obtain stream info.

var EAV_TIMEBASE = errors.New("EAV_TIMEBASE")

EAV_TIMEBASE is the error returned when the timebase is not correct. This can happen if the calculated codec context timebase doesn't match with output stream timebase.

var EAV_UNKNOWN = errors.New("EAV_UNKNOWN")

EAV_UNKNOWN is the error returned when the error code doesn't exist in the avpipeErrors table.

var EAV_WRITE_FRAME = errors.New("EAV_WRITE_FRAME")

EAV_WRITE_FRAME is the error returned when avpipe fails to write the frame into output stream or audio fifo.

var EAV_WRITE_HEADER = errors.New("EAV_WRITE_HEADER")

EAV_WRITE_HEADER is the error returned when avpipe fails to write the output stream header.

var EAV_XC_TABLE = errors.New("EAV_XC_TABLE")

EAV_XC_TABLE is the error returned when avpipe fails to find xc context in the xc_table.

Functions

func AVPipeCloseInput

func AVPipeCloseInput(fd C.int64_t) C.int

func AVPipeCloseMuxOutput

func AVPipeCloseMuxOutput(fd C.int64_t) C.int

func AVPipeCloseOutput

func AVPipeCloseOutput(handler C.int64_t, fd C.int64_t) C.int

func AVPipeOpenInput

func AVPipeOpenInput(url *C.char, size *C.int64_t) C.int64_t

func AVPipeOpenMuxInput

func AVPipeOpenMuxInput(out_url, url *C.char, size *C.int64_t) C.int64_t

func AVPipeOpenMuxOutput

func AVPipeOpenMuxOutput(url *C.char, stream_type C.int) C.int64_t

func AVPipeOpenOutput

func AVPipeOpenOutput(handler C.int64_t, stream_index, seg_index C.int, pts C.int64_t, stream_type C.int) C.int64_t

func AVPipeReadInput

func AVPipeReadInput(fd C.int64_t, buf *C.uint8_t, sz C.int) C.int

func AVPipeSeekInput

func AVPipeSeekInput(fd C.int64_t, offset C.int64_t, whence C.int) C.int64_t

func AVPipeSeekMuxOutput

func AVPipeSeekMuxOutput(fd C.int64_t, offset C.int64_t, whence C.int) C.int

func AVPipeSeekOutput

func AVPipeSeekOutput(handler C.int64_t, fd C.int64_t, offset C.int64_t, whence C.int) C.int

func AVPipeStatInput

func AVPipeStatInput(fd C.int64_t, stream_index C.int, avp_stat C.avp_stat_t, stat_args unsafe.Pointer) C.int

func AVPipeStatMuxOutput

func AVPipeStatMuxOutput(fd C.int64_t, stream_index C.int, avp_stat C.avp_stat_t, stat_args unsafe.Pointer) C.int

func AVPipeStatOutput

func AVPipeStatOutput(handler C.int64_t,
	fd C.int64_t,
	stream_index C.int,
	buf_type C.avpipe_buftype_t,
	avp_stat C.avp_stat_t,
	stat_args unsafe.Pointer) C.int

func AVPipeWriteMuxOutput

func AVPipeWriteMuxOutput(fd C.int64_t, buf *C.uint8_t, sz C.int) C.int

func AVPipeWriteOutput

func AVPipeWriteOutput(handler C.int64_t, fd C.int64_t, buf *C.uint8_t, sz C.int) C.int

func CDebug

func CDebug(msg *C.char) C.int

func CError

func CError(msg *C.char) C.int

func CInfo

func CInfo(msg *C.char) C.int

func CLog

func CLog(msg *C.char) C.int

func CWarn

func CWarn(msg *C.char) C.int

func ChannelLayout

func ChannelLayout(name string) int

func ChannelLayoutName

func ChannelLayoutName(nbChannels, channelLayout int) string

func GetPixelFormatName

func GetPixelFormatName(pixFmt int) string

func GetProfileName

func GetProfileName(codecId int, profile int) string

func H264GuessLevel

func H264GuessLevel(profile int, bitrate int64, framerate, width, height int) int

func H264GuessProfile

func H264GuessProfile(bitdepth, width, height int) int

func InitIOHandler

func InitIOHandler(inputOpener InputOpener, outputOpener OutputOpener)

This is used to set the global input/output opener for avpipe. If there is no specific input/output opener for a URL, the global input/output opener will be used.

func InitMuxIOHandler

func InitMuxIOHandler(inputOpener InputOpener, muxOutputOpener MuxOutputOpener)

Sets the global handlers for muxing (similar to InitIOHandler for transcoding)

func InitUrlIOHandler

func InitUrlIOHandler(url string, inputOpener InputOpener, outputOpener OutputOpener)

This is used to set an input/output opener specific to a URL. The input/output opener set by this function is only valid for that URL and will be unset after Xc() or Probe() is complete.

func InitUrlMuxIOHandler

func InitUrlMuxIOHandler(url string, inputOpener InputOpener, muxOutputOpener MuxOutputOpener)

Sets specific IO handler for muxing a url/file (similar to InitUrlIOHandler)

func Mux

func Mux(params *XcParams) error

func SetCLoggers

func SetCLoggers()

func Version

func Version() string

Version ...

func Xc

func Xc(params *XcParams) error

params: transcoding parameters

func XcCancel

func XcCancel(handle int32) error

func XcInit

func XcInit(params *XcParams) (int32, error)

Returns a handle and an error (if any). In case of error, the handle is zero.

func XcRun

func XcRun(handle int32) error

Types

type AVFieldOrder

type AVFieldOrder int

type AVMediaType

type AVMediaType int

type AVStatType

type AVStatType int

type AVType

type AVType int

AVType ...

const (
	// Unknown 0
	Unknown AVType = iota
	// DASHManifest 1
	DASHManifest
	// DASHVideoInit 2
	DASHVideoInit
	// DASHVideoSegment 3
	DASHVideoSegment
	// DASHAudioInit 4
	DASHAudioInit
	// DASHAudioSegment 5
	DASHAudioSegment
	// HLSMasterM3U 6
	HLSMasterM3U
	// HLSVideoM3U 7
	HLSVideoM3U
	// HLSAudioM3U 8
	HLSAudioM3U
	// AES128Key 9
	AES128Key
	// MP4Stream 10
	MP4Stream
	// FMP4Stream 11 (Fragmented MP4)
	FMP4Stream
	// MP4Segment 12
	MP4Segment
	// FMP4VideoSegment 13
	FMP4VideoSegment
	// FMP4AudioSegment 14
	FMP4AudioSegment
	// MuxSegment 15
	MuxSegment
	// FrameImage 16
	FrameImage
)

type ContainerInfo

type ContainerInfo struct {
	Duration   float64 `json:"duration"`
	FormatName string  `json:"format_name"`
}

type CryptScheme

type CryptScheme int

CryptScheme is the content encryption scheme

const (
	// CryptNone - clear
	CryptNone CryptScheme = iota
	// CryptAES128 - AES-128
	CryptAES128
	// CryptCENC - CENC AES-CTR
	CryptCENC
	// CryptCBC1 - CENC AES-CBC
	CryptCBC1
	// CryptCENS - CENC AES-CTR Pattern
	CryptCENS
	// CryptCBCS - CENC AES-CBC Pattern
	CryptCBCS
)

type EncodingFrameStats

type EncodingFrameStats struct {
	TotalFramesWritten int64 `json:"total_frames_written"`   // Total number of frames encoded in xc session
	FramesWritten      int64 `json:"segment_frames_written"` // Number of frames encoded in current segment
}

type IOHandler

type IOHandler interface {
	InReader(buf []byte) (int, error)
	InSeeker(offset C.int64_t, whence C.int) error
	InCloser() error
	InStat(stream_index C.int, avp_stat C.avp_stat_t, stat_args *C.void) error
	OutWriter(fd C.int, buf []byte) (int, error)
	OutSeeker(fd C.int, offset C.int64_t, whence C.int) (int64, error)
	OutCloser(fd C.int) error
	OutStat(stream_index C.int, avp_stat C.avp_stat_t, stat_args *C.void) error
}

IOHandler defines handlers that will be called from the C interface functions

type ImageType

type ImageType int

type InputHandler

type InputHandler interface {
	// Reads from input stream into buf.
	// Returns (0, nil) to indicate EOF.
	Read(buf []byte) (int, error)

	// Seeks to specific offset of the input.
	Seek(offset int64, whence int) (int64, error)

	// Closes the input.
	Close() error

	// Returns the size of input, if the size is not known returns 0 or -1.
	Size() int64

	// Reports some stats
	Stat(streamIndex int, statType AVStatType, statArgs interface{}) error
}

type InputOpener

type InputOpener interface {
	// fd uniquely identifies the input being opened.
	// url is the input source for transcoding.
	Open(fd int64, url string) (InputHandler, error)
}

type MuxOutputOpener

type MuxOutputOpener interface {
	// url and fd uniquely identify the output being opened.
	Open(url string, fd int64, out_type AVType) (OutputHandler, error)
}

type OutputHandler

type OutputHandler interface {
	// Writes encoded stream to the output.
	Write(buf []byte) (int, error)

	// Seeks to specific offset of the output.
	Seek(offset int64, whence int) (int64, error)

	// Closes the output.
	Close() error

	// Reports some stats
	Stat(streamIndex int, avType AVType, statType AVStatType, statArgs interface{}) error
}

type OutputOpener

type OutputOpener interface {
	// h uniquely identifies the transcoding session.
	// fd uniquely identifies the output being opened.
	Open(h, fd int64, stream_index, seg_index int, pts int64, out_type AVType) (OutputHandler, error)
}

type ProbeInfo

type ProbeInfo struct {
	ContainerInfo ContainerInfo `json:"format"`
	StreamInfo    []StreamInfo  `json:"streams"`
}

PENDING: use legacy_imf_dash_extract/media.Probe?

func Probe

func Probe(params *XcParams) (*ProbeInfo, error)

type StreamInfo

type StreamInfo struct {
	StreamIndex        int      `json:"stream_index"`
	StreamId           int32    `json:"stream_id"`
	CodecType          string   `json:"codec_type"`
	CodecID            int      `json:"codec_id,omitempty"`
	CodecName          string   `json:"codec_name,omitempty"`
	DurationTs         int64    `json:"duration_ts,omitempty"`
	TimeBase           *big.Rat `json:"time_base,omitempty"`
	NBFrames           int64    `json:"nb_frames,omitempty"`
	StartTime          int64    `json:"start_time"` // in TS unit
	AvgFrameRate       *big.Rat `json:"avg_frame_rate,omitempty"`
	FrameRate          *big.Rat `json:"frame_rate,omitempty"`
	SampleRate         int      `json:"sample_rate,omitempty"`
	Channels           int      `json:"channels,omitempty"`
	ChannelLayout      int      `json:"channel_layout,omitempty"`
	TicksPerFrame      int      `json:"ticks_per_frame,omitempty"`
	BitRate            int64    `json:"bit_rate,omitempty"`
	Has_B_Frames       bool     `json:"has_b_frame"`
	Width              int      `json:"width,omitempty"`  // Video only
	Height             int      `json:"height,omitempty"` // Video only
	PixFmt             int      `json:"pix_fmt"`          // Video only, it matches with enum AVPixelFormat in FFmpeg
	SampleAspectRatio  *big.Rat `json:"sample_aspect_ratio,omitempty"`
	DisplayAspectRatio *big.Rat `json:"display_aspect_ratio,omitempty"`
	FieldOrder         string   `json:"field_order,omitempty"`
	Profile            int      `json:"profile,omitempty"`
	Level              int      `json:"level,omitempty"`
}

func StreamInfoAsArray

func StreamInfoAsArray(s []StreamInfo) []StreamInfo

StreamInfoAsArray builds an array where each stream is at its corresponding index, filling non-existing index positions with codec type "unknown"

type XcParams

type XcParams struct {
	Url                    string             `json:"url"`
	BypassTranscoding      bool               `json:"bypass,omitempty"`
	Format                 string             `json:"format,omitempty"`
	StartTimeTs            int64              `json:"start_time_ts,omitempty"`
	StartPts               int64              `json:"start_pts,omitempty"` // Start PTS for output
	DurationTs             int64              `json:"duration_ts,omitempty"`
	StartSegmentStr        string             `json:"start_segment_str,omitempty"`
	VideoBitrate           int32              `json:"video_bitrate,omitempty"`
	AudioBitrate           int32              `json:"audio_bitrate,omitempty"`
	SampleRate             int32              `json:"sample_rate,omitempty"` // Audio sampling rate
	RcMaxRate              int32              `json:"rc_max_rate,omitempty"`
	RcBufferSize           int32              `json:"rc_buffer_size,omitempty"`
	CrfStr                 string             `json:"crf_str,omitempty"`
	Preset                 string             `json:"preset,omitempty"`
	AudioSegDurationTs     int64              `json:"audio_seg_duration_ts,omitempty"`
	VideoSegDurationTs     int64              `json:"video_seg_duration_ts,omitempty"`
	SegDuration            string             `json:"seg_duration,omitempty"`
	StartFragmentIndex     int32              `json:"start_fragment_index,omitempty"`
	ForceKeyInt            int32              `json:"force_keyint,omitempty"`
	Ecodec                 string             `json:"ecodec,omitempty"`    // Video encoder
	Ecodec2                string             `json:"ecodec2,omitempty"`   // Audio encoder
	Dcodec                 string             `json:"dcodec,omitempty"`    // Video decoder
	Dcodec2                string             `json:"dcodec2,omitempty"`   // Audio decoder
	GPUIndex               int32              `json:"gpu_index,omitempty"` // GPU index if encoder/decoder is GPU (nvidia)
	EncHeight              int32              `json:"enc_height,omitempty"`
	EncWidth               int32              `json:"enc_width,omitempty"`
	CryptIV                string             `json:"crypt_iv,omitempty"`
	CryptKey               string             `json:"crypt_key,omitempty"`
	CryptKID               string             `json:"crypt_kid,omitempty"`
	CryptKeyURL            string             `json:"crypt_key_url,omitempty"`
	CryptScheme            CryptScheme        `json:"crypt_scheme,omitempty"`
	XcType                 XcType             `json:"xc_type,omitempty"`
	Seekable               bool               `json:"seekable,omitempty"`
	WatermarkText          string             `json:"watermark_text,omitempty"`
	WatermarkTimecode      string             `json:"watermark_timecode,omitempty"`
	WatermarkTimecodeRate  float32            `json:"watermark_timecode_rate,omitempty"`
	WatermarkXLoc          string             `json:"watermark_xloc,omitempty"`
	WatermarkYLoc          string             `json:"watermark_yloc,omitempty"`
	WatermarkRelativeSize  float32            `json:"watermark_relative_size,omitempty"`
	WatermarkFontColor     string             `json:"watermark_font_color,omitempty"`
	WatermarkShadow        bool               `json:"watermark_shadow,omitempty"`
	WatermarkShadowColor   string             `json:"watermark_shadow_color,omitempty"`
	WatermarkOverlay       string             `json:"watermark_overlay,omitempty"`      // Buffer containing overlay image
	WatermarkOverlayLen    int                `json:"watermark_overlay_len,omitempty"`  // Length of overlay image
	WatermarkOverlayType   ImageType          `json:"watermark_overlay_type,omitempty"` // Type of overlay image (i.e PngImage, ...)
	StreamId               int32              `json:"stream_id"`                        // Specify stream by ID (instead of index)
	AudioIndex             [MaxAudioMux]int32 `json:"audio_index,omitempty"`
	NumAudio               int32              `json:"n_audio"`
	ChannelLayout          int                `json:"channel_layout"` // Audio channel layout
	MaxCLL                 string             `json:"max_cll,omitempty"`
	MasterDisplay          string             `json:"master_display,omitempty"`
	BitDepth               int32              `json:"bitdepth,omitempty"`
	SyncAudioToStreamId    int                `json:"sync_audio_to_stream_id"`
	ForceEqualFDuration    bool               `json:"force_equal_frame_duration,omitempty"`
	MuxingSpec             string             `json:"muxing_spec,omitempty"`
	Listen                 bool               `json:"listen"`
	ConnectionTimeout      int                `json:"connection_timeout"`
	FilterDescriptor       string             `json:"filter_descriptor"`
	SkipDecoding           bool               `json:"skip_decoding"`
	DebugFrameLevel        bool               `json:"debug_frame_level"`
	ExtractImageIntervalTs int64              `json:"extract_image_interval_ts,omitempty"`
	ExtractImagesTs        []int64            `json:"extract_images_ts,omitempty"`
	VideoTimeBase          int                `json:"video_time_base"`
	VideoFrameDurationTs   int                `json:"video_frame_duration_ts"`
}

XcParams should match xcparams_t in avpipe_xc.h

func NewXcParams

func NewXcParams() *XcParams

NewXcParams initializes an XcParams struct with unset/default values

type XcProfile

type XcProfile int

type XcType

type XcType int

func XcTypeFromString

func XcTypeFromString(xcTypeStr string) XcType

Directories

Path Synopsis
cmd
This module is designed to generate stress load on one/multiple avpipe server persistent cache.
ts
