[vlc-devel] Re: Multicast udp streaming from playlist - problems

Chris Douty Chris_Douty at ampex.com
Tue Oct 19 00:22:41 CEST 2004


On Friday, October 15, 2004, at 06:02 AM, Dermot McGahon wrote:

> On Thu, 14 Oct 2004 14:24:57 +0200, Derk-Jan Hartman 
> <hartman at videolan.org> wrote:
>> 13:33 < gibalou> hmmm DstreamThread as a VLC_THREAD_PRIORITY_INPUT 
>> priority so this looks alright
>> 13:34 < thedj> i don't really remember.. did this work for him on 
>> mplayer?
[snip]
>> 13:39 < gibalou> thedj: would be worth asking Dermot to increase the 
>> max size of the buffer in demux.c line 325
>> 13:40 < gibalou> something like * 10
>>
>> So could you try to do that..? increase the buffer at that line? 
>> (that's for 0.8.0 btw)
>> It kinda looks like the server is sending the whole file at once or 
>> something.. a bit weird. It shouldn't do that...
>
> Increased the buffer size in stream_DemuxSend from 5000000 to 50000000 (*10)
> and there is no change in behaviour. The buffer still fills too quickly,
> just to a higher maximum now, and of course it takes longer to fill. There
> are still DStreamRead's of 188 bytes (TS packet size) interspersed.
> Sometimes a video window (black screen) is displayed, more often not even
> that. I don't understand what calls DStreamRead (the decoder?). Would
> there be a way to raise the decoder's priority?

This is the same problem I ran into trying to implement my IRIG 106 
demux.  After two weeks of poking about in the darkness I ended up 
hacking together a single-threaded solution, fusing my IRIG demux and 
the MPEG TS demux into a single function.  (Well, a series of functions 
and data structures with my module's Demux() function as the entry 
point.)

Initially I tried to use the stream_Demux* functions to create a slave 
MPEG TS demux, using the code in livedotcom.cpp as an example.  I didn't 
have an RTSP server to check whether that code actually works, and I did 
not look back as far as 0.7.2.  The problem is that my input source (a 
local file initially) was not rate controlled, and I see no provision 
for rate feedback.  The DStreamThread function calls your stage 2 
Demux() function as fast as it can AFAICT, while the input thread calls 
the stage 1 Demux() as fast as *it* can.  This led to the downstream 
demux's buffer overflowing, data being dropped on the floor, and 
resource starvation for the TS demux thread.
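The failure mode above can be sketched in a few lines of C.  This is not 
actual VLC code; the names (dstream_buf_t and so on) and the 
drop-on-overflow policy are illustrative assumptions, with the 
5000000-byte cap taken from the demux.c figure quoted earlier:

```c
/* Hedged sketch (not VLC code): an unthrottled producer pushing into a
 * bounded buffer with no rate feedback.  Excess bytes are simply dropped,
 * as described above.  All names here are invented for illustration. */
#include <stddef.h>

#define BUF_MAX 5000000   /* the 0.8.0 demux.c buffer cap quoted earlier */

typedef struct {
    size_t used;      /* bytes currently buffered */
    size_t dropped;   /* bytes lost to overflow */
} dstream_buf_t;

/* Producer side: stage 1 demux pushes as fast as the input allows. */
void buf_push(dstream_buf_t *b, size_t n)
{
    if (b->used + n > BUF_MAX) {
        b->dropped += b->used + n - BUF_MAX;  /* data lands on the floor */
        b->used = BUF_MAX;
    } else {
        b->used += n;
    }
}

/* Consumer side: stage 2 TS demux drains one 188-byte packet at a time. */
void buf_pop_packet(dstream_buf_t *b)
{
    if (b->used >= 188)
        b->used -= 188;
}
```

With a non-rate-controlled source the pushes dwarf the 188-byte pops, so 
`dropped` grows without bound no matter how large `BUF_MAX` is made.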

I then wrote a private version of the stream_Demux* suite in which the 
internal buffer had a condition variable to signal when it was full, so 
that my stage 1 demux would block.  Even after adding hysteresis so that 
stage 1 would not unblock (i.e. the condition was not signaled) until 
the fifo was nearly empty, I got behavior similar to case #1 above.  I 
even tried a second condition variable so that the stage 2 TS demux 
thread would not spin when the buffer was empty.  No dice.
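The hysteresis scheme can be sketched with POSIX threads.  This is a toy 
version, not my actual patch: the fifo is a flat buffer rather than a 
ring, and all names (hyst_fifo_t, LOW_WATERMARK, ...) are invented for 
illustration:

```c
/* Hedged sketch: a bounded byte FIFO where the producer blocks when full
 * and is only woken once the consumer has drained the buffer below a
 * low-water mark -- the hysteresis scheme described above. */
#include <pthread.h>
#include <string.h>

#define FIFO_SIZE      (64 * 1024)
#define LOW_WATERMARK  (FIFO_SIZE / 4)   /* wake the producer only below this */

typedef struct {
    unsigned char   buf[FIFO_SIZE];
    size_t          used;
    pthread_mutex_t lock;
    pthread_cond_t  can_write;   /* signaled when used drops below LOW_WATERMARK */
    pthread_cond_t  can_read;    /* signaled when data arrives */
} hyst_fifo_t;

void fifo_init(hyst_fifo_t *f)
{
    f->used = 0;
    pthread_mutex_init(&f->lock, NULL);
    pthread_cond_init(&f->can_write, NULL);
    pthread_cond_init(&f->can_read, NULL);
}

/* Stage 1 demux calls this: blocks while the fifo is full. */
void fifo_write(hyst_fifo_t *f, const unsigned char *p, size_t n)
{
    pthread_mutex_lock(&f->lock);
    while (f->used + n > FIFO_SIZE)          /* full: block the producer */
        pthread_cond_wait(&f->can_write, &f->lock);
    memcpy(f->buf + f->used, p, n);          /* simplification: no ring buffer */
    f->used += n;
    pthread_cond_signal(&f->can_read);
    pthread_mutex_unlock(&f->lock);
}

/* Stage 2 (TS) demux calls this: blocks while the fifo is empty. */
size_t fifo_read(hyst_fifo_t *f, unsigned char *p, size_t n)
{
    pthread_mutex_lock(&f->lock);
    while (f->used == 0)                     /* empty: block the consumer */
        pthread_cond_wait(&f->can_read, &f->lock);
    if (n > f->used) n = f->used;
    memcpy(p, f->buf, n);
    memmove(f->buf, f->buf + n, f->used - n);
    f->used -= n;
    if (f->used < LOW_WATERMARK)             /* hysteresis: wake the producer late */
        pthread_cond_signal(&f->can_write);
    pthread_mutex_unlock(&f->lock);
    return n;
}
```

Note this only throttles the producer once the fifo fills; it does 
nothing about the consumer thread being starved of CPU time, which is the 
behavior I actually observed.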

Under time constraints for a demo I switched to the 
all-demuxing-in-one-thread solution.  With a couple of weeks' 
familiarity with the code it was surprisingly easy and worked the first 
time, at least well enough for my demo.
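For what it's worth, the shape of the fused approach looks roughly like 
this.  Again a sketch with invented names, not the real VLC 0.8.x module 
API: the point is only that the TS parsing runs as a plain function call 
inside the one Demux() entry point, so no inter-thread buffer or rate 
feedback is needed:

```c
/* Hedged sketch of the single-threaded fusion: the module's Demux()
 * entry point strips the outer (IRIG-style) framing and hands each
 * 188-byte TS packet straight to a TS-parsing function in the same
 * thread.  Names are illustrative, not the actual VLC API. */
#include <stddef.h>

#define TS_PACKET_SIZE 188

/* Stage 2 logic, called directly instead of running in its own thread. */
static void ts_parse_packet(const unsigned char *pkt, size_t *packet_count)
{
    if (pkt[0] == 0x47)        /* MPEG TS sync byte */
        (*packet_count)++;
}

/* Stage 1 entry point: one call consumes one outer frame's payload. */
static size_t fused_demux(const unsigned char *payload, size_t len,
                          size_t *packets_out)
{
    size_t off = 0;
    while (len - off >= TS_PACKET_SIZE) {
        ts_parse_packet(payload + off, packets_out);
        off += TS_PACKET_SIZE;
    }
    return off;   /* bytes consumed; the remainder waits for the next call */
}
```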

Since my target platform was Windows I was working under MinGW, which 
does not seem to profile or debug threaded programs adequately.  I 
don't have a fast enough x86 Linux workstation to pursue the threading 
issues, which are going to behave differently on Windows anyway.

I did gain some valuable insight building all of this stuff on my Mac 
OS X dual-cpu G4, though.  (Blessings upon the persons who came up with 
extras/contrib/Makefile.)  The thread viewer showed that all of my time 
was spent in the stage 1 demux thread.  It was always active, even when 
it should have been blocked on vlc_condition_wait().  What I don't know 
is why.  I suspect that adding some sort of semaphore or condition 
variable to the stream_Demux* stuff is the right solution, but then 
every potential downstream demux module would need to know when to 
signal that the upstream could go.

Hmm.  It also occurs to me that the stage 2 demux function just 
isn't being called often enough to keep up with the input.  The whole 
semantics seem broken, from my limited understanding of how the timing 
holds together.  Calling Demux() on the stage 1 demux module does not 
yield a "frame" as expected.  The stage 2 demux does, but it seems 
totally starved out.

Afraid that I'm just muddying the waters,
	Chris

-- 
Christopher Douty <Chris_Douty at ampexdata.com> +1-650-367-3129
Senior Engineer, Software & Systems  - AMPEX Data Systems Corp.

-- 
This is the vlc-devel mailing-list, see http://www.videolan.org/vlc/
To unsubscribe, please read http://developers.videolan.org/lists.html


