<html xmlns="http://www.w3.org/1999/xhtml"><head> <title></title> <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no"> </head> <body style="font-family:Helvetica;color:#000000;font-size:13px;"><div id="CanaryBody"> <div> I am checking the content type, yes. But in my case, I’m just switching between body parsing methodologies depending on the body type. I do actually have a few scenarios where I have to evaluate body data in multipart submissions, but I only actually need the parts of the multipart form that do not include attachments (basic key/value string data). I can’t say for sure that this is the common use case for POST body parsing, but when I’ve read the Lua GitHub issues and various examples and discussions in the past, it has always seemed to me like the only common use cases were for url-encoded POST bodies OR for the plain key/value string data (not attachments) in multipart bodies. I can’t say that I’ve seen many discussions asking for access to the file attachment data in POST bodies - just for what it is worth.</div><div><br></div><div>I did notice that the boundary had an effect on memory. It seems like memory is sort of contained as long as we’re talking about an actual multipart boundary. When it’s a single char or something otherwise smaller, the memory use is extreme, but that’s partly inherent to even splitting data that way.</div><div><br></div><div>For now, I think that I’m just going to have to use a workaround that limits when POST body parsing triggers. There’s just no way to do it at all under certain conditions right now.</div><div><br></div><div>Thank you for all of your feedback and work on NJS, and for filing the POST body provision in NJS as a feature request. 
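To make the above concrete, the multipart handling I’m describing amounts to something like this rough sketch in plain JavaScript (the function name and regex are mine and purely illustrative, not an njs API, and a production parser would need to be stricter about part headers):

```javascript
// Illustrative sketch: keep only the non-file fields of a multipart body.
// `body` is the full raw body as a string; `boundary` is the boundary token
// from the Content-Type header. Both names are assumptions for this example.
function multipartFields(body, boundary) {
    var fields = {};
    var parts = body.split('--' + boundary);
    for (var i = 0; i < parts.length; i++) {
        var part = parts[i];
        // Any part that declares a filename is an attachment -- skip it.
        if (part.indexOf('filename=') !== -1) {
            continue;
        }
        // Capture the field name and the text after the blank header line.
        var m = part.match(/name="([^"]+)"\r?\n\r?\n([\s\S]*?)\r?\n?$/);
        if (m) {
            fields[m[1]] = m[2].trim();
        }
    }
    return fields;
}
```

That keeps the simple key/value data and never retains attachment bytes past the initial split, which is exactly where the memory cost shows up.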
</div> <div><br></div> </div> <div id="CanarySig"> <div> <div style="font-family:Helvetica;">—<br><div>Lance Dockins</div><div><br></div></div> </div> </div> <div id="CanaryDropbox"> </div> <blockquote id="CanaryBlockquote"> <div> <div>On Thursday, Sep 21, 2023 at 7:47 PM, Dmitry Volyntsev <<a href="mailto:xeioex@nginx.com">xeioex@nginx.com</a>> wrote:<br></div> <div><br>On 9/21/23 4:41 PM, Lance Dockins wrote: <br><blockquote type="cite">That’s good info. Thank you. <br> <br>I have been doing some additional testing since my email last night <br>and I have seen enough evidence to believe that file I/O in NJS is <br>basically the source of the memory issues. I did some testing with <br>very basic commands like readFileSync and Buffer + readSync and in all <br>cases, the memory footprint when doing file handling in NJS is massive. <br> <br>Just doing this: <br> <br>let content = fs.readFileSync('/path/to/file'); <br>let parts = content.split(boundary); <br> <br>resulted in memory use that was close to a minimum of 4-8x the size of <br>the file during my testing. We do have an upper bound on files that <br>can be uploaded and that does contain this somewhat, but it’s not hard <br>for a larger request that is 99% file attachment to use exorbitant <br>amounts of memory. <br> <br> <br> <br></blockquote>Regarding the task at hand, do you check the Content-Type of the POST <br>body? That way you can exclude anything except probably <br>application/x-www-form-urlencoded. At least what I see in lua: the <br>handler is only looking for application/x-www-form-urlencoded and not <br>multipart/form-data. <br> <br>https://github.com/openresty/lua-nginx-module/blob/c89469e920713d17d703a5f3736c9335edac22bf/src/ngx_http_lua_args.c#L171 <br> <br> <br><blockquote type="cite"> I actually tried doing a Buffer + readSync variation on the same <br>thing and the memory footprint was actually FAR FAR worse when I did that. 
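The Content-Type gate suggested above is tiny in code terms. An illustrative JavaScript version (the function name is mine, and in a real handler the header value would come from the request object):

```javascript
// Illustrative: only run the body parser for media types we expect.
// `contentType` stands in for the request's Content-Type header value.
function shouldParseBody(contentType) {
    if (!contentType) {
        return false;
    }
    // Drop parameters such as "; charset=utf-8" or "; boundary=...".
    var mediaType = contentType.split(';')[0].trim().toLowerCase();
    return mediaType === 'application/x-www-form-urlencoded';
}
```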
<br> <br> <br> <br></blockquote>As of now, the resulting memory consumption will depend heavily on the <br>boundary. <br> <br>In the worst case, for a 1mb file that is split into a 1-character <br>array, you will get ~16x the memory consumed, because every 1-byte character <br>will be put into an njs_value_t structure. <br> <br>With larger chunks the situation will be less extreme. Right now we are <br>implementing a way to deduplicate identical strings; this may help in <br>some situations. <br> <br><blockquote type="cite">The 4-8x minimum memory commit seems like a problem to me just <br>generally. But the fact that readSync doesn’t seem to be any better <br>on memory (much worse, actually) basically means that NJS is only safe <br>to use for processing smaller files (or POST bodies) right now. <br> There’s just no good way to keep data that you don’t care about in a <br>file from occupying excessive amounts of memory that can’t be <br>reclaimed. If there is no way to improve the memory footprint when <br>handling files (or big strings), no memory-conservative way to stream <br>a file through some sort of buffer, and no first-party utility for <br>providing parsed POST bodies right now, <br>then it might be worth the time to put some notes in the NJS docs that <br>the fs module may not be appropriate for larger files (e.g. files over <br>1mb). <br> <br>For what it’s worth, I’d also love to see some examples of how to <br>properly use fs.readSync in the NJS examples docs. There really <br>wasn’t much out there for that for NJS (or even in a lot of the Node <br>docs), so I can’t say that my specific test implementation for that was <br>ideal. But that’s just above and beyond the basic problems that I’m <br>seeing with memory use with any form of file I/O at all (since the <br>memory problems seem to be persistent whether doing reads or even log <br>writes). 
<br> <br>— <br>Lance Dockins <br> <br> <br>On Thursday, Sep 21, 2023 at 5:01 PM, Dmitry Volyntsev <br><xeioex@nginx.com> wrote: <br> <br>On 9/21/23 6:50 AM, Lance Dockins wrote: <br> <br>Hi Lance, <br> <br>See my comments below. <br> <br><blockquote type="cite">Thank you, Dmitry. <br> <br>One question before I describe what we are doing with NJS. I did <br>read <br>about the VM handling process before switching from Lua to NJS <br>and it <br>sounded very practical, but my current understanding is that there <br>could be multiple VMs instantiated for a single request. A js_set, <br>js_content, and js_header_filter directive that applies to a single <br>request, for example, would instantiate 3 VMs. And were you to need <br>to set multiple variables with js_set, that would keep adding to that <br>number of VMs. <br> <br> <br></blockquote>This is not correct. For js_set, js_content and js_header_filter <br>there <br>is only a single VM. <br>The internalRedirect() is the exception, because a VM does not <br>survive <br>it, but the previous VMs will not be freed until the current request is <br>finished. BTW, a VM instance itself is pretty small in size (~2kb), <br>so it <br>should not be a problem if you have a reasonable number of redirects. <br> <br> <br><blockquote type="cite"> <br>My original understanding of that was that those VMs would be <br>destroyed once they exited, so even if you had multiple VMs <br>instantiated per request, the memory impact would not be <br>cumulative in <br>a single request. Is that understanding correct? Or are you saying <br>that each VM accumulates more and more memory until the entire <br>request <br>completes? <br> <br>As far as how we’re using NJS, we’re mostly using it for header <br>filters, internal redirection, and access control. 
So there really <br>shouldn’t be a threat to memory in most instances unless we’re not <br>just dealing with a single-request memory leak inside of a VM but <br>also <br>a memory leak that involves every VM that NJS instantiates just <br>accumulating memory until the request completes. <br> <br>Right now, my working theory about what is most likely to be <br>creating <br>the memory spikes has to do with POST body analysis. Unfortunately, <br>some of the requests that I have to deal with are POSTs that have to <br>either be denied access or routed differently depending on the <br>contents of the POST body. Unfortunately, these same routes can <br>vary <br>in the size of the POST body, and I have no control over how any of <br>that works because the way it works is controlled by third parties. <br> One of those third parties has significant market share on the <br>internet, so we can’t really avoid dealing with it. <br> <br>In any case, before we switched to NJS, we were using Lua to do the <br>same things, and that gave us the advantage of doing both memory <br>cleanup if needed and also doing easy analysis of POST body args. I <br>was able to do this sort of thing with Lua before: <br>local post_args, post_err = ngx.req.get_post_args() <br>if post_args.arg_name == something then <br> <br>But in NJS, there’s no such POST body utility, so I had to write my <br>own. The code that I use to parse out the POST body works for both <br>URL-encoded POST bodies and multipart POST bodies, but it has to <br>read <br>the entire POST into a variable before I can use it. For small <br>POSTs, <br>that’s not a problem. For larger POSTs that contain a big <br>attachment, <br>it would be. Ultimately, I only care about the string key/value <br>pairs <br>for my purposes (not file attachments), so I was hoping to discard <br>attachment data while parsing the body. <br> <br> <br> <br></blockquote>Thank you for the feedback; I will add it to a future feature <br>list. 
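In the meantime, for the url-encoded case a small userland helper is enough. An illustrative version (not an njs built-in; njs’s own qs.parse covers the same ground):

```javascript
// Illustrative equivalent of lua's ngx.req.get_post_args() for
// url-encoded bodies; `body` is the raw request body as a string.
function parseUrlEncoded(body) {
    var args = {};
    if (!body) {
        return args;
    }
    var pairs = body.split('&');
    for (var i = 0; i < pairs.length; i++) {
        var eq = pairs[i].indexOf('=');
        var key = eq === -1 ? pairs[i] : pairs[i].slice(0, eq);
        var val = eq === -1 ? '' : pairs[i].slice(eq + 1);
        // '+' encodes a space in form submissions.
        args[decodeURIComponent(key.replace(/\+/g, ' '))] =
            decodeURIComponent(val.replace(/\+/g, ' '));
    }
    return args;
}
```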
<br> <br><blockquote type="cite"> I think that that is actually how Lua’s version of this works too. <br> So my next thought was that I could use a Buffer and fs.readSync to <br>read the POST body in buffer frames to keep memory minimal so that I <br>could discard any file attachments from the POST body and <br>just evaluate the key/value data that uses simple strings. But from <br>what you’re saying, it sounds like there’s basically no difference <br>between fs.readSync w/ a Buffer and fs.readFileSync in terms of <br>actual <br>memory use. So either way, with a large POST body, you’d be <br>steamrolling the memory use in a single Nginx worker thread. When I <br>had to deal with stuff like this in Lua, I’d just run <br>collectgarbage() <br>to clean up memory and it seemed to work fine. But then I also <br>wasn’t <br>having to parse out the POST body myself in Lua either. <br> <br>It’s possible that something else is going on other than that. <br> qs.parse seems like it could get us into some trouble if the <br>query_string that was passed was unusually long too, from what you’re <br>saying about how memory is handled. <br> <br> <br></blockquote>For qs.parse() there is a limit on the number of arguments, which <br>you can <br>specify. <br> <br><blockquote type="cite"> <br>None of the situations that I’m handling are for long-running <br>requests. They’re all designed for very fast requests that come <br>into <br>the servers that I manage on a constant basis. <br> <br>If you can shed some light on the way that VMs and their memory are <br>handled per my question above and any insights into what to do about <br>this type of situation, that would help a lot. 
I don’t know if <br>there <br>are any plans to offer a POST body parsing feature in NJS for those <br>that need to evaluate POST body data like how Lua did it, but if <br>there <br>were some way to be able to do that at the Nginx layer instead of at <br>the NJS layer, it seems like that could be a lot more sensitive to <br>memory use. Right now, if my understanding is correct, the only <br>option that I’d even have would be to just stop doing POST body <br>handling if the POST body is above a certain total size. I guess if <br>there were some way to forcibly free memory, that would help too. <br> But <br>I don’t think that that is as common of a problem as having to deal <br>with very large query strings that some third party appends to a URL <br>(probably maliciously) and/or a very large file upload attached to a <br>multipart POST. So the only concern that I’d have about memory in a <br>situation where I don’t have to worry about memory when parsing a <br>larger file would be if multiple js_sets and such would just keep <br>spawning VMs and accumulating memory during a single request. <br> <br>Any thoughts? <br> <br>— <br>Lance Dockins <br> <br> <br>On Thursday, Sep 21, 2023 at 1:45 AM, Dmitry Volyntsev <br><xeioex@nginx.com> wrote: <br> <br>On 20.09.2023 20:37, Lance Dockins wrote: <br><blockquote type="cite">So I guess my question at the moment is whether endless memory use <br>growth being reported by njs.memoryStats.size after file writes is <br>some sort of false positive tied to quirks in how memory use is <br>being <br>reported or whether this is indicative of a memory leak? Any <br>insight <br>would be appreciated. <br></blockquote> <br>Hi Lance, <br>The reason njs.memoryStats.size keeps growing is because NJS uses an <br>arena <br>memory allocator linked to the current request, and a new object <br>representing the memoryStats structure is returned every time <br>njs.memoryStats is accessed. 
Currently NJS does not free most of the <br>internal objects and structures until the current request is <br>destroyed, <br>because it is not intended for long-running code. <br> <br>Regarding the sudden memory spikes, please share some details <br>about the JS <br>code you are using. <br>One place to look is to analyze the amount of traffic that goes to <br>NJS <br>locations and what exactly those locations do. <br> <br></blockquote></blockquote></div> </div> </blockquote> </body></html>