[Solved] Segfault when retrieving big chunked http message
Any idea how I can reproduce this issue? And since this is an http library bug, can I say that the plua_gc_unreg bug is fixed?
I haven't seen the plua_gc_unreg segfault anymore, so I assume it is fixed.

Reproducing the bug isn't easy. I could give you my wunderful (extended wunderground) protocol, because it does an http request returning a big json result every 5 minutes. Wunderground sometimes takes more than 3 seconds to respond (especially during the night, it seems). But I am not sure this will lead to the segfault within a reasonable time.

A different approach could be to create a "website" that deliberately returns a big chunked response with a delay, in order to force http to time out half way. 

But I would suggest waiting until I get the additional debug logging. If my assumption is correct we can make a fix and try again.
No, I'm currently using my test device for the development of an mqtt hardware module. The issue with http is on my live system. It seems related to using http with big responses (over 48k of json in my case) when they time out.

Until now I was not able to catch the segfault, so I could not check where in the http library we are when it occurs.

I did find one other thing: chunked responses that have been received completely always seem to end with a 408 code, even when there has not been a real timeout. This is caused by the code in http_client_close() shown below. "request->has_chunked" never seems to be set back to 0 once it has been set to 1, so the second if statement is always false for a chunked response.

    if(request->reading == 1) {
        if(request->has_length == 0 && request->has_chunked == 0) {
            if(request->callback != NULL && request->called == 0) {
                request->called = 1;
                request->callback(request->status_code, request->content, strlen(request->content), request->mimetype, request->userdata);
            }
        } else {
            /*
             * Callback when we were receiving data
             * that was disrupted early.
             */
            if(request->callback != NULL && request->called == 0) {
                request->called = 1;
                request->callback(408, NULL, 0, NULL, request->userdata);
            }
        }
    }
I would still love to see that unit test that helps me trigger the issue, or a precise description of a test case. Can you try making one on your own webserver?
I will try, but you have to be a bit patient.
I really want to release a new version, but not with known bugs.
Any progression?
Not much really, I'm afraid.

I returned from a week's holiday yesterday. Before I left, I added a few debug statements in order to find the cause of the cpu load going high and finally ending in the segfault. To my surprise, however, it has not happened since. This makes me think it must be some timing issue, where writing one or more of the debug lines is just enough to prevent it from happening. I will remove those debug lines and see if the errors reappear.

As suggested by you, I made a simulator on my website just now and I will try to reproduce the error(s) with it.

As soon as I have found anything that can explain the cause of the issues, I will get back to you.
Well, I found the explanation why a chunked response from weather underground always ends with a 408 code.

The reason is that weather underground is sending its chunk size as 8 characters followed by one empty line. So there are always several "0" characters preceding the actual size. E.g. a size of 6000 (hex) is sent as "00006000\r\n". That also means that the terminating empty chunk size is being sent as "00000000\r\n\r\n". Http.c however is expecting a line starting with "0\r\n\r\n", so it never sees that the chunked transfer has ended normally.

I don't know whether what weather underground is doing violates the standard. So far I have only seen that a chunked transfer is considered to have ended when "0\r\n\r\n" has been received; I didn't see anywhere that this sequence must start at the first position of the line.

Whether this has anything to do with the segfault issue, I still have to find out.


I changed the webpage so it now simulates the behaviour of wunderground as described above. If you call it in an http action, you will see that the resulting code is always 408. The url you can test with is:


I checked with Chrome and it doesn't complain at all when I call it. So I assume that the way wunderground responds is not a violation of the standard.
