Video/audio flow in OpenMAX AL

Hi,

I am trying to understand the flow of audio/video in OpenMAX AL. I have the following questions.

According to my understanding, the audio/video flow goes like this:

From the Android application, the MPEG-2 stream goes to OpenMAX AL, which hands it to the SoC, where the stream is fed to a demuxer. The video stream then goes through decoding and preprocessing and is rendered to a Surface/SurfaceTexture for display. The audio stream, after demuxing and decoding, goes to the audio output device.

1. What kind of effects/processing does OpenMAX AL apply to the MPEG-2 transport stream?
2. How is OpenSL ES different from OpenMAX AL audio?
3. Does it render the stream to a demuxer and then through the audio sink/video sink (I am a little confused here), or does it give the stream directly to the audio sink/video sink, where it is demuxed and decoded?
4. What is the input module in OpenMAX AL, i.e. where does it receive the MPEG-2 TS and hand it off for processing?

Regards
Mayank

Hi Mayank,

OpenMAX AL is a stream control layer. Its purpose is to say when the stream should flow and when it should stop. OpenMAX AL does not dictate how or when the underlying implementation sends that stream through its various components; the underlying implementation manages that. Perhaps this presentation can shed some light on it: https://www.khronos.org/assets/uploads/developers/library/2012-siggraph-asia/Noreke-OpenSL-OpenMAX_SIGGRAPH-Asia-Dec12.pdf

OpenMAX AL is designed for easy application access to the multimedia system, without the application developer having to know exactly how things happen underneath. This also allows the underlying system to decide how to manage the stream flow through its components, depending on the platform it is running on.
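To make that concrete, here is a minimal sketch of what "stream control" looks like at the API level, using the OpenMAX AL C API (header path as shipped in, e.g., the Android NDK). It assumes playerObject is a media player that has already been created and realized; creation is sketched further down in this reply.

    #include <OMXAL/OpenMAXAL.h>

    /* Sketch only: playerObject is assumed to be an already created and
     * realized media player object. */
    void stream_control(XAObjectItf playerObject)
    {
        XAPlayItf playItf;

        /* All the application asks for is the play interface... */
        (*playerObject)->GetInterface(playerObject, XA_IID_PLAY, &playItf);

        /* ...and then it only says when the stream should flow or stop.
         * How data moves between demuxer, decoders and renderers in the
         * meantime is entirely the implementation's business. */
        (*playItf)->SetPlayState(playItf, XA_PLAYSTATE_PLAYING);
        /* ... later ... */
        (*playItf)->SetPlayState(playItf, XA_PLAYSTATE_PAUSED);
        (*playItf)->SetPlayState(playItf, XA_PLAYSTATE_STOPPED);
    }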

1. What kind of effects/processing does OpenMAX AL apply to the MPEG-2 transport stream?
That is implementation-specific; the OpenMAX AL specification does not mandate any particular effects or processing.

2. How is OpenSL ES different from OpenMAX AL audio?
For the basic audio playback use case, the two are virtually identical. OpenSL ES provides advanced audio controls, while OpenMAX AL provides basic audio controls plus video controls. In fact, they were designed together so that an implementer could allow an application to create an OpenMAX AL player and cast it as an OpenSL ES player.
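The family resemblance shows directly in the two C APIs: for basic playback, the calls mirror each other almost name-for-name. An illustrative sketch, with error checking omitted:

    #include <SLES/OpenSLES.h>    /* OpenSL ES: audio only, deeper audio control */
    #include <OMXAL/OpenMAXAL.h>  /* OpenMAX AL: basic audio plus video          */

    void create_both_engines(void)
    {
        SLObjectItf slEngine;
        XAObjectItf xaEngine;

        /* The entry points and the object/interface model run in parallel:
         * the sl/SL_ prefixes on one side become xa/XA_ on the other. */
        slCreateEngine(&slEngine, 0, NULL, 0, NULL, NULL);
        (*slEngine)->Realize(slEngine, SL_BOOLEAN_FALSE);

        xaCreateEngine(&xaEngine, 0, NULL, 0, NULL, NULL);
        (*xaEngine)->Realize(xaEngine, XA_BOOLEAN_FALSE);

        /* Playback control is just as parallel: SLPlayItf with
         * SL_PLAYSTATE_PLAYING versus XAPlayItf with XA_PLAYSTATE_PLAYING. */
    }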

3. Does it render the stream to a demuxer and then through the audio sink/video sink, or does it give the stream directly to the audio sink/video sink, where it is demuxed and decoded?
You are thinking of stream management, which is the domain of the underlying implementation, such as OpenMAX IL. OpenMAX AL takes a source and manages when the audio should be rendered and where it should go. OpenMAX IL (when used for the implementation) manages the demuxers, decoders and renderers.
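Put differently: all the application describes to OpenMAX AL is a source and the sinks. A sketch, assuming the engine, output mix and native window were set up elsewhere (the helper name route_streams is made up for illustration):

    #include <OMXAL/OpenMAXAL.h>

    void route_streams(XAEngineItf engineItf, XAObjectItf outputMixObject,
                       XANativeHandle nativeWindow, XADataSource *dataSrc)
    {
        XAObjectItf playerObject;

        /* Audio elementary stream -> the output mix (the audio device). */
        XADataLocator_OutputMix loc_omix = {XA_DATALOCATOR_OUTPUTMIX,
                                            outputMixObject};
        XADataSink audioSnk = {&loc_omix, NULL};

        /* Video elementary stream -> a native display surface. */
        XADataLocator_NativeDisplay loc_nd = {XA_DATALOCATOR_NATIVEDISPLAY,
                                              nativeWindow, NULL};
        XADataSink videoSnk = {&loc_nd, NULL};

        /* Note what is absent: no demuxer, decoder or renderer objects.
         * Building that graph from this description is the job of the
         * underlying implementation, e.g. OpenMAX IL components. */
        (*engineItf)->CreateMediaPlayer(engineItf, &playerObject, dataSrc,
                                        NULL /*bank source*/,
                                        &audioSnk, &videoSnk,
                                        NULL /*vibra*/, NULL /*LED array*/,
                                        0, NULL, NULL);
        (*playerObject)->Realize(playerObject, XA_BOOLEAN_FALSE);
    }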

4. What is the input module in OpenMAX AL, i.e. where does it receive the MPEG-2 TS and hand it off for processing?
Your data source would be your input object. You can specify the source as a URI, or you can feed the data to the implementation manually through buffer queues.
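On Android, for example, the MPEG-2 TS use case goes through the Android buffer queue extension as the data source, so the application pushes TS packets into the implementation itself. A sketch along the lines of the NDK native-media sample (buffer count and function name are illustrative):

    #include <OMXAL/OpenMAXAL.h>
    #include <OMXAL/OpenMAXAL_Android.h>

    #define NB_BUFFERS 8  /* illustrative queue depth */

    /* This XADataSource is passed to CreateMediaPlayer instead of a URI:
     * "I will push MPEG-2 TS data into a buffer queue myself". */
    XADataLocator_AndroidBufferQueue loc_abq =
            {XA_DATALOCATOR_ANDROIDBUFFERQUEUE, NB_BUFFERS};
    XADataFormat_MIME format_mime =
            {XA_DATAFORMAT_MIME, XA_ANDROID_MIME_MP2TS, XA_CONTAINERTYPE_MPEG_TS};
    XADataSource dataSrc = {&loc_abq, &format_mime};

    /* Once the player is realized, the application feeds it TS packets. */
    void feed_ts(XAObjectItf playerObject, void *tsPackets, XAuint32 nbytes)
    {
        XAAndroidBufferQueueItf abqItf;

        (*playerObject)->GetInterface(playerObject,
                                      XA_IID_ANDROIDBUFFERQUEUESOURCE, &abqItf);
        (*abqItf)->Enqueue(abqItf, NULL /*buffer context*/, tsPackets, nbytes,
                           NULL /*items*/, 0);
    }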

Hope this clarifies things a bit.

Erik