Running a node.js app in a low-memory environment requires some additional work to ensure that the v8 garbage collector is aware of the memory ceiling. This post outlines an approach to achieve this.
Out of the box, a 64-bit installation of node.js assumes a memory ceiling of 1.5GB per node process. If you are running your node app in a memory-constrained environment, e.g. a low-cost VPS or PaaS instance, it's necessary to inform the v8 runtime that you have a reduced memory ceiling.
In order to achieve this, we must first understand the basics of v8’s memory allocation and garbage collector.
Memory Allocation Basics
In v8, the running application is held in the Resident Set. The total amount of memory that the application is consuming is known as the Resident Set Size, or RSS for short.
The Resident Set comprises three areas:
- The application code
- The stack: which contains primitive types (e.g. numbers, booleans) and references to objects in the heap
- The heap: which contains reference types such as objects, strings, functions and closures.
During the lifetime of your application, it is the heap which will likely consume the most memory, since this is the place where your largest data types are held. It’s therefore necessary to concentrate on the heap when targeting memory usage.
The heap contains two main areas:
- New Space: all newly allocated objects are created here first. The new space is often small (typically 1-8 MB), and it is fast to collect garbage here.
- Old Space: any objects which are not garbage collected from New Space eventually end up here. The vast majority of your heap will be consumed by Old Space. Garbage collection is slower here, as the size of Old Space is much larger than New Space, and a different mechanism is employed to actually perform the collection. For this reason, garbage collection is only performed when there is not much room left in Old Space.
You can therefore see that it makes sense to concentrate on the heap’s Old Space when targeting memory usage.
v8 collects garbage when an object is no longer reachable from a root. Roots are the global variables, plus the local variables of any currently active functions.
For example, in the following sketch (illustrative, not the original listing), the object held in data becomes a candidate for garbage collection as soon as it is no longer reachable:

```javascript
function processData() {
  // `data` is only reachable through this active local variable
  const data = { items: new Array(1000).fill('item') };
  return data.items.length;
}

const count = processData();
// Once processData() has returned, nothing references `data` any more,
// so the object it pointed to is a candidate for garbage collection.
console.log(count); // 1000
```
Garbage collection in v8 is an expensive process, as it is performed via a stop-the-world mechanism: execution of your application is paused whilst the collector runs. For this reason, v8 tries not to run garbage collection unless it is running out of space.
If this has piqued your interest, you can read more about v8's memory management process here.
Armed with this knowledge, we can now begin to play with v8's CLI flags in order to tune memory allocation, and thus alter the limits at which the garbage collector will attempt to free memory. The particular flag we'll be looking at is max_old_space_size, which controls the size of the Old Space in the heap, and therefore controls when the garbage collector should kick in to free up memory for the vast majority of the application.
Without further ado, here is a startup script (startup.sh) which I use to bootstrap my node apps. A minimal version looks like this (the 4/5 headroom ratio and the server.js entry point are illustrative):

```sh
#!/bin/sh

# startup.sh — bootstraps a node app with a tuned memory ceiling.
# WEB_MEMORY is the per-process memory available in MB; default to 512.
WEB_MEMORY=${WEB_MEMORY:-512}

# Leave some headroom for the stack, application code and New Space,
# and hand the rest to the heap's Old Space.
OLD_SPACE=$((WEB_MEMORY * 4 / 5))

exec node --max_old_space_size="$OLD_SPACE" server.js
```
So, if we were running in an environment with 512MB of RAM available, we would run the script as follows:
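Assuming the script reads WEB_MEMORY from the environment, the invocation would be:

```shell
WEB_MEMORY=512 ./startup.sh
```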
The script above also allows us to support running a node app with cluster. You simply adjust the WEB_MEMORY parameter according to the number of clustered processes you expect.
Say for example, you want to run 4 processes in a cluster on your 512MB instance. Run your script with:
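For example (each of the 4 workers gets 512 / 4 = 128MB):

```shell
WEB_MEMORY=128 ./startup.sh
```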
Each cluster process will use ¼ of the system RAM available.
The variable name WEB_MEMORY was chosen because it is set automatically for us when running on Heroku, which is my preferred choice for running node apps in production. Heroku calculates WEB_MEMORY from:
- The memory available for the instance (dyno)
- The value of the WEB_CONCURRENCY env var (defaults to 1)
We can therefore support clustering by setting the WEB_CONCURRENCY variable to a number higher than 1 (e.g. 4). WEB_MEMORY will then automatically report the correct per-process memory ceiling (e.g. 128 for a WEB_CONCURRENCY of 4 on a 512MB dyno), and our script will take care of tuning v8 to this new ceiling.
Then, in your app's entry point, fork one worker per unit of WEB_CONCURRENCY. A sketch (the ./app module name is illustrative):

```javascript
const cluster = require('cluster');

const concurrency = parseInt(process.env.WEB_CONCURRENCY || '1', 10);

if (cluster.isMaster) {
  // Fork one worker per unit of WEB_CONCURRENCY; each worker inherits
  // the v8 flags set by startup.sh.
  for (let i = 0; i < concurrency; i++) {
    cluster.fork();
  }
  // Replace any worker that dies unexpectedly.
  cluster.on('exit', () => cluster.fork());
} else {
  // Each worker runs the actual server.
  require('./app');
}
```
The process outlined above merely informs v8 of your memory ceiling; it does not guarantee that your application can actually run within that footprint. If you use this technique, be aware that if the garbage collector cannot free up any memory when your application reaches the ceiling, the process will crash with an Out of Memory error. In this case, you need to evaluate whether you have a memory leak, or whether you simply need more memory to run your application.
For more information on hunting down memory leaks, check out this article.
In summary, the max_old_space_size v8 flag is a good way of tuning the memory ceiling for your node.js apps. The script above will automatically calculate a suitable value based on the WEB_MEMORY environment variable, which is generated for you on Heroku.