ML Queue model
07-23-2004, 04:41 PM
In the ML specification, when we send a message from our application to the logical device, it is copied into the payload area. So when we call mlBeginTransfer(), what exactly happens: are the messages transferred, or the data? By 'data' I mean the video or audio samples received from an external device, and by 'messages' I mean the messages sent by the application. Also, what happens when we receive a message?
07-25-2004, 10:58 PM
I'm looking forward to hearing your suggestions; the answer to this question is very important to me...
07-27-2004, 04:16 AM
When you send a message, the actual message (the MLpv list) is copied. This is a *shallow* copy, however: buffers referenced by the MLpv are *not* copied. You must refrain from modifying buffers you send until you receive a message indicating the buffer has been processed.
You can see for yourself what happens, since the code for the ML library is part of the SDK -- it is in oss/lib/ml/common/src
08-04-2004, 05:29 AM
OK. I want to understand the queue model with an example: audiotomem.dsw.
1) Looking at the source code, you create two buffers. Here is my first question: why two buffers? Is it true that one buffer is the 'source buffer' and the other is the 'destination buffer'?
2) In the first call to mlSendBuffers, what exactly happens? I know that a shallow copy is created in the payload area, but it seems that after mlBeginTransfer two buffers are also inserted into the queue model (one for the source and one for the destination)?
3) After the call to mlBeginTransfer, we want to place the audio data in memory. It seems that the audio data *enters* the source buffer, is processed according to the messages taken from the queue, and then goes to the destination buffer. Is that right? Please advise.
[ August 05, 2004: Message edited by: ehsan_kamrani ]
08-10-2004, 04:39 PM
We create two buffers as a form of "double-buffering" (similar to double-buffered graphics in OpenGL). The idea is that while the application is working on one buffer, the system (ML) is working on the other. In the case of 'audiotomem', the operation is a capture, i.e. data is transferred from ML to the application. So both buffers are "sources".
In some cases, you may wish to 'multi-buffer' -- to provide more than 2 buffers. This gives the entire system more flexibility, and more ability to withstand scheduling fluctuations. For instance, with only two buffers, if your application is temporarily stalled by a higher-priority task, and ML finishes working on its buffer before the app finishes its buffer, you will get a buffer under-run: ML will have no place to store incoming samples, and they will be dropped. With multiple buffers, you can ensure that there will always be a fresh buffer in the queue for ML to use.
(Obviously, if your application simply can't keep up with the incoming samples, you will eventually drop samples, no matter how many buffers you have provided. Multi-buffering is only useful if on average, you can maintain the proper processing rate).
2) In the first call to mlSendBuffers, the two (blank) buffers are sent to ML, making them available for receiving incoming samples. When a buffer is full of samples, ML sends it back -- the application receives it (that's the main loop), uses the samples (processes them, copies them, saves them to disk, etc.), and then sends the (now-empty) buffer *back* to ML so it can be used again. That's what the mlSendBuffers() call inside the loop does.
3) in this particular example, we do nothing with the samples -- the data is simply ignored. (Not a very useful program -- except to demonstrate ML concepts). But where you see the comment "Here we could do something with the result of the transfer" is where you read the samples from the buffer. For this, you would simply de-reference the buffer pointer -- and do whatever you like with the data. (But remember that you can not store data long-term in the buffer, since it will be sent back to ML for the next batch of samples).
08-12-2004, 02:43 AM
When we send a message to the logical device, I don't know whether the messages enter the logical jack or the data enters the jack. Is my interpretation correct: when we begin the transfer with mlBeginTransfer(), the messages start to drive these actions: the external audio and video data enter the jack, path, and buffers, and each message is processed at the appropriate time. Thus ML is the engine that collects the audio or video data, and the messages control these actions.
08-12-2004, 09:51 AM
I'm not certain I've understood the question... Yes, you can see 'ml' as the engine that captures incoming audio and video samples. (More precisely, however, it is the mlmodule that actually handles the capture). The messages you send are of 2 main types -- control and buffer. Control messages control the behaviour of the capture hardware and module, and buffer messages provide space for the module to place the incoming samples.
10-02-2004, 06:27 PM
A side effect of opening a path is that space is allocated for the send and receive header queues for messages between the application and the path. After that, we can queue messages in the payload area. On the other hand, a side effect of opening a transcoder is that it creates any required source and destination pipes. But as I understand it, we need to queue messages between the application and ML. So I think that before we open a transcoder, we need to open a path, to ensure that space is allocated for the send and receive header queues. Is my interpretation correct? Please advise.
[ October 04, 2004: Message edited by: ehsan_kamrani ]
10-13-2004, 09:48 AM
Well in fact, you don't need to open the transcoder itself, you simply want to open a path that goes through the transcoder.