
When building network libraries for Node.js, as we do at work, one quite quickly comes across Node's Buffer type. A Buffer gives access to a memory region in a quite raw form, allowing one to handle raw data and to interpret binary streams. The Buffer interface predates ES6 TypedArrays and has some optimizations.

For one, the slice() method does not copy data, but returns a view on the underlying data. This makes it quite efficient to work on a window of the data, but one has to be careful when writing. Simple example:

console.log(buffer.toString('utf8')) // will print 'hallo'

The second one is that allocating a small buffer won't actually go to the operating system and allocate a memory area; instead, Node.js has a memory region from which small buffers can be derived quickly. This actually works by using the slicing logic: both buffers end up using the same memory region, with an alignment of 8 bytes.

So how does this work? Underlying the Buffer, in modern versions of Node, is an ArrayBuffer. We can ask the Buffer to provide us with the underlying ArrayBuffer using the buffer property. One thing one has to be careful about is that for a slice the returned ArrayBuffer will be the full buffer, not only the sliced part. A raw ArrayBuffer doesn't provide many things, but we can create a new Buffer on top of it. This won't copy the data, but use the same memory as above. Given the two Buffers from above, we can see this:

const buffer3 = Buffer.from(buffer2.buffer)
console.log(buffer3.toString('utf8')) // hello

We can see that the block pre-allocated by Node (in the version I'm using for this test) apparently is 8192 bytes. 8k is a common size used for multiple buffers. A factor in such a choice is that many filesystems use 512-byte blocks, and 8k is a handleable multiple of that. Additionally, CPU caches are often multiples of 8k.
