[vlc-devel] Re: Re: [PATCH] core: network: recover from reading on invalid sockets
Xie Zhigang
zighouse at hotmail.com
Wed Jan 16 11:42:14 CET 2019
Yes, these patches work in my case.
The essential idea in these patches is to keep the opened events,
but the static array and its 4095-entry limit make the fix less than elegant.
Is there a way to extend struct pollfd with a member that keeps the opened
event, and to do so only on Windows?
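Something like the following sketch is what I have in mind (the names
vlc_pollfd, vlc_pollfd_open and vlc_pollfd_close are hypothetical, not
existing VLC code; the point is only to keep the WSAEVENT alive for the
lifetime of the socket instead of re-creating it on every poll() call, so a
pending FD_CLOSE is not lost):

#ifdef _WIN32
# include <winsock2.h>   /* struct pollfd, WSAEVENT, WSAEventSelect() */
#else
# include <poll.h>
#endif

struct vlc_pollfd
{
    struct pollfd pfd;       /* fd, events, revents as usual */
#ifdef _WIN32
    WSAEVENT      event;     /* kept open for the lifetime of the socket */
#endif
};

#ifdef _WIN32
/* Associate the socket with one long-lived event object, once. */
static int vlc_pollfd_open(struct vlc_pollfd *vfd, SOCKET s, short events)
{
    vfd->pfd.fd = s;
    vfd->pfd.events = events;
    vfd->pfd.revents = 0;

    vfd->event = WSACreateEvent();
    if (vfd->event == WSA_INVALID_EVENT)
        return -1;

    long mask = FD_CLOSE;            /* always watch for remote FIN/RST */
    if (events & POLLIN)
        mask |= FD_READ | FD_ACCEPT;
    if (events & POLLOUT)
        mask |= FD_WRITE | FD_CONNECT;

    if (WSAEventSelect(s, vfd->event, mask) != 0)
    {
        WSACloseEvent(vfd->event);
        return -1;
    }
    return 0;
}

static void vlc_pollfd_close(struct vlc_pollfd *vfd)
{
    WSAEventSelect(vfd->pfd.fd, NULL, 0); /* cancel the association */
    WSACloseEvent(vfd->event);
}
#endif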
Regards,
Zhigang.
________________________________
From: vlc-devel <vlc-devel-bounces at videolan.org> on behalf of Steve Lhomme <robux4 at ycbcr.xyz>
Sent: Wednesday, January 16, 2019 16:13
To: vlc-devel at videolan.org
Subject: Re: [vlc-devel] Re: [PATCH] core: network: recover from reading on invalid sockets
Do these patches fix your issue?
https://patches.videolan.org/patch/13959/
https://patches.videolan.org/patch/13960/
https://patches.videolan.org/patch/13961/
On 16/01/2019 07:19, Xie Zhigang wrote:
> Hi,
>
> Exactly, yes: it is related to https://trac.videolan.org/vlc/ticket/21153 .
>
> WSAEventSelect() does not receive the FD_CLOSE event, and poll() ends up in a dysfunctional state
> if the remote server resets (TCP RST) the connection. I have dumped the traffic captured while resuming playback:
>
> #28362 30.011007 TCP 68 38912 → 80 [ACK] Seq=350 Ack=21995213 Win=768 Len=0 TSval=4568331 TSecr=555407
> #28392 30.236684 TCP 836 [TCP Window Full] 80 → 38912 [PSH, ACK] Seq=21995213 Ack=350 Win=30080 Len=768 TSval=555474 TSecr=4568331 [TCP segment of a reassembled PDU]
> #28393 30.236717 TCP 68 [TCP ZeroWindow] 38912 → 80 [ACK] Seq=350 Ack=21995981 Win=0 Len=0 TSval=4568387 TSecr=555474
> #28394 30.460288 TCP 68 [TCP Keep-Alive] 80 → 38912 [ACK] Seq=21995980 Ack=350 Win=30080 Len=0 TSval=555530 TSecr=4568387
> #28395 30.460300 TCP 68 [TCP ZeroWindow] 38912 → 80 [ACK] Seq=350 Ack=21995981 Win=0 Len=0 TSval=4568443 TSecr=555474
> #28396 30.908290 TCP 68 [TCP Keep-Alive] 80 → 38912 [ACK] Seq=21995980 Ack=350 Win=30080 Len=0 TSval=555642 TSecr=4568443
> #28397 31.804538 TCP 68 [TCP Keep-Alive] 80 → 38912 [ACK] Seq=21995980 Ack=350 Win=30080 Len=0 TSval=555866 TSecr=4568443
> #28398 31.804564 TCP 68 [TCP ZeroWindow] 38912 → 80 [ACK] Seq=350 Ack=21995981 Win=0 Len=0 TSval=4568779 TSecr=555474
> #28399 33.600277 TCP 68 [TCP Keep-Alive] 80 → 38912 [ACK] Seq=21995980 Ack=350 Win=30080 Len=0 TSval=556315 TSecr=4568779
> #28400 33.600290 TCP 68 [TCP ZeroWindow] 38912 → 80 [ACK] Seq=350 Ack=21995981 Win=0 Len=0 TSval=4569228 TSecr=555474
> #28401 37.188314 TCP 68 [TCP Keep-Alive] 80 → 38912 [ACK] Seq=21995980 Ack=350 Win=30080 Len=0 TSval=557212 TSecr=4569228
> #28402 37.188347 TCP 68 [TCP ZeroWindow] 38912 → 80 [ACK] Seq=350 Ack=21995981 Win=0 Len=0 TSval=4570125 TSecr=555474
> #28435 44.372520 TCP 68 [TCP Keep-Alive] 80 → 38912 [ACK] Seq=21995980 Ack=350 Win=30080 Len=0 TSval=559008 TSecr=4570125
> #28436 44.372531 TCP 68 [TCP ZeroWindow] 38912 → 80 [ACK] Seq=350 Ack=21995981 Win=0 Len=0 TSval=4571921 TSecr=555474
> #28469 58.744435 TCP 68 [TCP Keep-Alive] 80 → 38912 [ACK] Seq=21995980 Ack=350 Win=30080 Len=0 TSval=562601 TSecr=4571921
> #28470 58.744451 TCP 68 [TCP ZeroWindow] 38912 → 80 [ACK] Seq=350 Ack=21995981 Win=0 Len=0 TSval=4575514 TSecr=555474
> #28561 87.508600 TCP 68 [TCP Keep-Alive] 80 → 38912 [ACK] Seq=21995980 Ack=350 Win=30080 Len=0 TSval=569792 TSecr=4575514
> #28562 87.508611 TCP 68 [TCP ZeroWindow] 38912 → 80 [ACK] Seq=350 Ack=21995981 Win=0 Len=0 TSval=4582705 TSecr=555474
> #28739 144.980525 TCP 68 [TCP Keep-Alive] 80 → 38912 [ACK] Seq=21995980 Ack=350 Win=30080 Len=0 TSval=584160 TSecr=4582705
> #28740 144.980549 TCP 68 [TCP ZeroWindow] 38912 → 80 [ACK] Seq=350 Ack=21995981 Win=0 Len=0 TSval=4597073 TSecr=555474
> #29080 259.797508 TCP 68 [TCP Keep-Alive] 80 → 38912 [ACK] Seq=21995980 Ack=350 Win=30080 Len=0 TSval=612864 TSecr=4597073
> #29081 259.797533 TCP 68 [TCP ZeroWindow] 38912 → 80 [ACK] Seq=350 Ack=21995981 Win=0 Len=0 TSval=4625777 TSecr=555474
> #34207 2022.822738 TCP 68 [TCP Window Update] 38912 → 80 [ACK] Seq=350 Ack=21995981 Win=75136 Len=0 TSval=5066533 TSecr=555474
> #34208 2022.823030 TCP 62 80 → 38912 [RST] Seq=21995981 Win=0 Len=0
>
> We expect an FD_CLOSE event for the TCP RST, but poll() remains unaware of it.
> This is reproducible on Windows. On Linux, poll() is a different implementation and works correctly.
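> For reference, here is a rough illustration (not the actual compat/poll.c code)
> of how a WSAEventSelect()-based poll() emulation is expected to map FD_CLOSE,
> which Winsock records on a remote FIN or RST, to POLLHUP:
>
> #include <winsock2.h>
>
> /* check_close() is a hypothetical helper, shown only for illustration. */
> static short check_close(SOCKET s, WSAEVENT ev)
> {
>     WSANETWORKEVENTS ne;
>     short revents = 0;
>
>     /* Retrieves and clears the network events recorded for this
>      * socket/event pair since WSAEventSelect() was called. */
>     if (WSAEnumNetworkEvents(s, ev, &ne) != 0)
>         return POLLERR;
>
>     if (ne.lNetworkEvents & FD_READ)
>         revents |= POLLIN;
>     if (ne.lNetworkEvents & FD_CLOSE)   /* remote FIN or RST */
>         revents |= POLLHUP | POLLIN;    /* hang-up; remaining data/EOF is readable */
>     return revents;
> }
>
> In my trace the FD_CLOSE for this RST is never recorded, so poll() never
> reports POLLHUP and vlc_tls_Read() keeps polling.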
>
> I found another possible patch, in poll()'s Windows implementation:
> reset errno at the beginning of the function.
> It seems to let WSAEventSelect() report FD_CLOSE,
> but it is even harder to consider a good fix than my first patch:
>
> ---
> compat/poll.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/compat/poll.c b/compat/poll.c
> index 8020f7d..5ff43ea 100644
> --- a/compat/poll.c
> +++ b/compat/poll.c
> @@ -113,6 +113,8 @@ static int poll_compat(struct pollfd *fds, unsigned nfds, int timeout)
>  {
>      DWORD to = (timeout >= 0) ? (DWORD)timeout : INFINITE;
> 
> +    /* Clear errno so that WSAEventSelect() can detect the FD_CLOSE event. */
> +    errno = 0;
>      if (nfds == 0)
>      {   /* WSAWaitForMultipleEvents() does not allow zero events */
>          if (SleepEx(to, TRUE))
> --
> 2.7.4
>
>
> From: vlc-devel <vlc-devel-bounces at videolan.org> on behalf of jeremy.vignelles at dev3i.fr <jeremy.vignelles at dev3i.fr>
> Sent: Tuesday, January 15, 2019 23:37
> To: 'Mailing list for VLC media player developers'
> Subject: Re: [vlc-devel] [PATCH] core: network: recover from reading on invalid sockets
>
> Hi,
> Is this patch related to https://trac.videolan.org/vlc/ticket/21153 ?
>
> This does not look like a proper fix to me either.
>
> Regards,
> Jérémy VIGNELLES
>
> From: vlc-devel <vlc-devel-bounces at videolan.org> on behalf of Rémi Denis-Courmont
> Sent: Tuesday, January 15, 2019 15:39
> To: Mailing list for VLC media player developers <vlc-devel at videolan.org>
> Subject: Re: [vlc-devel] [PATCH] core: network: recover from reading on invalid sockets
>
> Hi,
>
> This statistical approach looks like it can lead to spurious failures, on all platforms. No way.
>
> The bug must be somewhere else.
> On January 15, 2019 13:52:12 GMT+02:00, Xie Zhigang <mailto:zighouse at hotmail.com> wrote:
> Fix a bug on Windows: if online VOD playback is paused for a long time
> (5 minutes or so) and the remote VOD server drops the TCP connection,
> vlc_tls_Read() ends up in an endless retry loop with errno set to EAGAIN.
> Use a maximum retry count to recover VLC from the endless reading loop.
> ________________________________________
> src/network/stream.c | 7 +++++++
> 1 file changed, 7 insertions(+)
>
> diff --git a/src/network/stream.c b/src/network/stream.c
> index 6e015d5..7324f78 100644
> --- a/src/network/stream.c
> +++ b/src/network/stream.c
> @@ -55,6 +55,7 @@ ssize_t vlc_tls_Read(vlc_tls_t *session, void *buf, size_t len, bool waitall)
>  {
>      struct pollfd ufd;
>      struct iovec iov;
> +    int turn = 0;
> 
>      ufd.events = POLLIN;
>      ufd.fd = vlc_tls_GetPollFD(session, &ufd.events);
> @@ -89,6 +90,12 @@ ssize_t vlc_tls_Read(vlc_tls_t *session, void *buf, size_t len, bool waitall)
>          }
> 
>          vlc_poll_i11e(&ufd, 1, -1);
> +        /*
> +         * Use an arbitrary maximum retry count (10) to recover from an
> +         * endless loop on an invalid connection.
> +         */
> +        if (++turn > 10)
> +            return rcvd ? (ssize_t)rcvd : -1;
>      }
>  }
>
>
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.
>
_______________________________________________
vlc-devel mailing list
To unsubscribe or modify your subscription options:
https://mailman.videolan.org/listinfo/vlc-devel