Node js interview questions Flashcards
What is npm used for?
npm stands for Node Package Manager. npm provides the following two main functionalities:
1. An online repository for Node.js packages/modules, which is searchable on npmjs.com
2. A command-line utility to install packages and perform version management and dependency management of Node.js packages.
Another important use of npm is dependency management. When you have a Node.js project with a package.json file, you can run npm install from the project root and npm will install all the dependencies listed in the package.json.
Explain the difference between local and global npm package installation
The main difference between local and global packages is this:
1. local packages are installed in the directory where you run npm install <package-name>, and they are put in the node_modules folder under this directory
2. global packages are all put in a single place in your system (exactly where depends on your setup), regardless of where you run npm install -g <package-name>
In general, all packages should be installed locally.
This makes sure you can have dozens of applications in your computer, all running a different version of each package if needed.
Updating a global package would make all your projects use the new release, and as you can imagine this might cause nightmares in terms of maintenance, as some packages might break compatibility with other dependencies, and so on.
What is a Callback?
A callback is a function called at the completion of a given task; this prevents any blocking, and allows other code to be run in the meantime. Callbacks are the foundation of Node.js. Callbacks give you an interface with which to say, “and when you’re done doing that, do all this.”
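A minimal sketch of the pattern (the downloadFile name and the timer standing in for real I/O are illustrative):

```javascript
let result; // filled in by the callback below

// A function that performs a task and invokes the callback when done
function downloadFile(url, callback) {
  // simulate asynchronous I/O with a timer
  setTimeout(() => callback(null, 'contents of ' + url), 10);
}

downloadFile('example.txt', (err, data) => {
  if (err) throw err;
  result = data;
  console.log(data); // runs later, once the "download" completes
});

console.log('this line runs first'); // not blocked by the "download"
```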
What are the key features of Node.js?
Let’s look at some of the key features of Node.js.
1. Asynchronous, event-driven I/O helps concurrent request handling – All APIs of Node.js are asynchronous. This means that if Node.js receives a request for some Input/Output operation, it will execute that operation in the background and continue processing other requests, rather than waiting for the response to the previous request.
2. Fast code execution – Node.js uses the V8 JavaScript engine, the same one used by Google Chrome. V8 compiles JavaScript directly to machine code, which makes the processing of requests within Node.js very fast.
3. Single-threaded but highly scalable – Node.js uses a single-threaded model with event looping. The response from these events may or may not reach the server immediately; however, this does not block other operations, which makes Node.js highly scalable. Traditional servers create a limited number of threads to handle requests, while Node.js uses a single thread to service a much larger number of requests.
4. The Node.js library uses JavaScript – This is another important aspect of Node.js from the developer’s point of view. The majority of developers are already well-versed in JavaScript, so development in Node.js becomes easier for a developer who knows JavaScript.
5. There is an active and vibrant community for Node.js – The active community keeps the platform up to date with the latest trends in web development.
6. No buffering – Node.js applications never buffer any data. They simply output the data in chunks.
Why does Node.js prefer Error-First Callback?
The usual pattern is that the callback is invoked as callback(err, result), where only one of err and result is non-null, depending on whether the operation succeeded or failed. Without this convention, developers would have to maintain different signatures and APIs, without knowing where to place the error in the arguments array.
```
fs.readFile(filePath, function (err, data) {
  if (err) {
    // handle the error
    return;
  }
  // use the data object
});
```
What is Callback Hell and what is the main cause of it?
Asynchronous JavaScript, or JavaScript that uses callbacks, is hard to get right intuitively. A lot of code ends up looking like this:
```
fs.readdir(source, function (err, files) {
  if (err) {
    console.log('Error finding files: ' + err)
  } else {
    files.forEach(function (filename, fileIndex) {
      console.log(filename)
      gm(source + filename).size(function (err, values) {
        if (err) {
          console.log('Error identifying file size: ' + err)
        } else {
          console.log(filename + ' : ' + values)
          aspect = (values.width / values.height)
          widths.forEach(function (width, widthIndex) {
            height = Math.round(width / aspect)
            console.log('resizing ' + filename + ' to ' + height + 'x' + height)
            this.resize(width, height).write(dest + 'w' + width + '_' + filename, function (err) {
              if (err) console.log('Error writing file: ' + err)
            })
          }.bind(this))
        }
      })
    })
  }
})
```
See the pyramid shape and all the }) at the end? This is affectionately known as callback hell.
The cause of callback hell is when people try to write JavaScript in a way where execution happens visually from top to bottom. Lots of people make this mistake! In other languages like C, Ruby or Python there is the expectation that whatever happens on line 1 will finish before the code on line 2 starts running and so on down the file.
What do you mean by Asynchronous API?
All APIs of the Node.js library are asynchronous, that is, non-blocking. It essentially means a Node.js based server never waits for an API to return data. The server moves to the next API after calling it, and a notification mechanism based on Node.js events helps the server get the response from the previous API call.
What is the difference between returning a callback and just calling a callback?
Functionally, nothing different is passed to the callback: the return is used purely for control flow. Writing return callback(err) guarantees that the function stops executing as soon as the callback is invoked; the returned value itself is almost always ignored by the caller. Just calling callback(err) without the return lets execution continue into the code below it, which is a common source of “callback called twice” bugs.
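A sketch of the control-flow difference (the validate function is made up for illustration):

```javascript
function validate(input, callback) {
  if (typeof input !== 'number') {
    // Without this `return`, execution would fall through and the
    // callback below would fire a second time
    return callback(new Error('not a number'));
  }
  callback(null, input * 2);
}

let calls = 0;
validate('oops', (err, value) => {
  calls++;
  console.log(err ? err.message : value);
});
console.log(calls); // 1 — the callback ran exactly once
```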
What is libuv?
libuv is a C library that is used to abstract non-blocking I/O operations to a consistent interface across all supported platforms. It provides mechanisms to handle file system, DNS, network, child processes, pipes, signal handling, polling and streaming. It also includes a thread pool for offloading work for some things that can’t be done asynchronously at the operating system level.
What is V8?
The V8 library provides Node.js with a JavaScript engine (a program that converts Javascript code into lower level or machine code that microprocessors can understand), which Node.js controls via the V8 C++ API. V8 is maintained by Google, for use in Chrome.
The Chrome V8 engine :
1. The V8 engine is written in C++ and used in Chrome and Nodejs.
2. It implements ECMAScript as specified in ECMA-262.
3. The V8 engine can run standalone, or it can be embedded into our own C++ program.
What is the file package.json?
All npm packages contain a file, usually in the project root, called package.json - this file holds various metadata relevant to the project. This file is used to give information to npm that allows it to identify the project as well as handle the project’s dependencies. It can also contain other metadata such as a project description, the version of the project in a particular distribution, license information, even configuration data - all of which can be vital to both npm and to the end users of the package. The package.json file is normally located at the root directory of a Node.js project.
Here is a minimal package.json:
```
{
  "name": "barebones",
  "version": "0.0.0"
}
```
Name some Built-in Globals in Node.js
Node.js has a number of built-in global identifiers that every Node.js developer should have some familiarity with. Some of these are true globals, being visible everywhere; others exist at the module level, but are inherent to every module, thus being pseudo-globals.
The list of true globals:
1. global - The global namespace. Setting a property to this namespace makes it globally visible within the running process.
2. process - The Node.js built-in process module, which provides interaction with the current Node.js process.
3. console - The Node.js built-in console module, which wraps various STDIO functionality in a browser-like way.
4. setTimeout(), clearTimeout(), setInterval(), clearInterval() - The built-in timer functions are globals.
The pseudo-globals included at the module level in every module:
- module, module.exports, exports - These objects all pertain to the Node.js module system.
- __filename - Contains the absolute path of the currently executing file. Note that this is not defined while running the Node.js REPL.
- __dirname - Like __filename, but contains the absolute path of the directory containing the currently executing file. Also not present in the Node.js REPL.
- require() - The require() function is a built-in function, exposed per-module, that allows other valid modules to be included.
What does Promisifying technique mean in Node.js?
This technique is a way to take a classic JavaScript function that accepts a callback, and have it return a promise:
For example:
```
const fs = require('fs')

const getFile = (fileName) => {
  return new Promise((resolve, reject) => {
    fs.readFile(fileName, (err, data) => {
      if (err) {
        reject(err)
        return
      }
      resolve(data)
    })
  })
}

getFile('/etc/passwd')
  .then(data => console.log(data))
  .catch(err => console.log(err))
```
What’s the difference between process.cwd() and __dirname?
cwd is a method of the global process object; it returns a string value which is the current working directory of the Node.js process.
__dirname is the directory name of the current script as a string value. __dirname is not actually global but rather local to each module.
Consider the project structure:
Project
├── main.js
└──lib
└── script.js
Suppose we have a file script.js inside a subdirectory of the project, i.e. C:/Project/lib/script.js, and we run node main.js, which requires script.js.
main.js
```
require('./lib/script.js')
console.log(process.cwd())               // C:\Project
console.log(__dirname)                   // C:\Project
console.log(__dirname === process.cwd()) // true
```
script.js
```
console.log(process.cwd())               // C:\Project
console.log(__dirname)                   // C:\Project\lib
console.log(__dirname === process.cwd()) // false
```
Why do we always require modules at the top of a file? Can we require modules inside of functions?
Yes, we can, but it is generally a bad idea.
Node.js always runs require synchronously. If you require an external module from within a function, the module will be loaded synchronously when that function runs, and this can cause two problems:
- If that module is only needed in one route handler function it might take some time for the module to load synchronously. As a result, several users would be unable to get any access to your server and requests will queue up.
- If the module you require causes an error and crashes the server you may not know about the error.
What is the preferred method of resolving unhandled exceptions in Node.js for synchronous code?
For synchronous code, if an error happens, return the error:
```
// Define divideSync as a synchronous function
var divideSync = function (x, y) {
  // error condition?
  if (y === 0) {
    // "throw" the error safely by returning it
    return new Error("Can't divide by zero")
  } else {
    // no error occurred, continue on
    return x / y
  }
}

// Divide 4/2
var result = divideSync(4, 2)
// did an error occur?
if (result instanceof Error) {
  // handle the error safely
  console.log('4/2=err', result)
} else {
  // no error occurred, continue on
  console.log('4/2=' + result)
}

// Divide 4/0
result = divideSync(4, 0)
// did an error occur?
if (result instanceof Error) {
  // handle the error safely
  console.log('4/0=err', result)
} else {
  // no error occurred, continue on
  console.log('4/0=' + result)
}
```
Explain how Node.js works
Node.js is an open-source backend JavaScript runtime environment. It is used as a backend service where JavaScript works on the server side of the application, so JavaScript can be used on both the frontend and the backend. Node.js runs on Chrome’s V8 engine, which compiles JavaScript code into machine code; it is highly scalable, lightweight, fast, and well suited to data-intensive applications.
How Node.js works: Node.js accepts requests from clients and sends back responses, and it handles all of these requests with a single thread. A thread is a sequence of instructions that the server needs to perform. Node.js is an event-driven, single-threaded runtime: it can handle concurrent requests with a single thread without blocking it for any one request.
Node.js basically works on two concepts:
- Asynchronous execution
- Non-blocking I/O
Non-blocking I/O: Non-blocking I/O means serving multiple requests without blocking the thread for any single request. I/O here means interaction with external systems such as files and databases. Node.js is not a good fit for CPU-intensive work (heavy calculations, video processing), because the single thread cannot be tied up with such work.
Asynchronous: Asynchronous execution means registering a callback function that runs later. The moment we get the response from the other server or database, the callback is executed. Callbacks are called as soon as some piece of work is finished, because Node.js uses an event-driven architecture: the single thread does not perform the I/O itself; instead, it hands the request off to another system that resolves it, leaving the thread free for other requests.
To implement this model for handling requests, Node.js uses libuv.
libuv is an open-source library written in C with a strong focus on asynchronous I/O; it gives Node.js access to the underlying operating system, file system, and networking.
libuv implements two extremely important features of Node.js:
- The event loop
- The thread pool
Event loop: The event loop runs on a single thread and is responsible for handling lightweight tasks such as executing callbacks and network I/O. When the program initializes, all the top-level code (code that is not inside a callback function) is executed; all the application code inside callback functions then runs in the event loop. The event loop is the heart of Node.js: it starts running as soon as the application starts, and most of the work is done in it.
Node.js uses an event-driven architecture:
- Events are emitted.
- The event loop picks them up.
- The associated callbacks are called.
Event queue: As soon as a request arrives, the thread places it into a queue known as the event queue. Processes such as the app receiving an HTTP request, a server, or a timer emit an event as soon as they are done with their work; the event loop picks up these events, calls the callback functions associated with each one, and the response is sent to the client.
The event loop is an indefinite loop that continuously receives requests and processes them; it checks the queue and waits for incoming requests indefinitely.
Thread pool: Though Node.js is single-threaded, it internally maintains a thread pool. Non-blocking requests are processed in the event loop, but when a blocking request is accepted, Node.js checks for an available thread in the thread pool and assigns it to the client’s request; the result is then handed back to the event loop, and the response is sent to the respective client.
The thread pool size can be changed:
```
process.env.UV_THREADPOOL_SIZE = 1;
```
What is Stream Chaining in Node.js?
Chaining streams: Chaining is a mechanism for composing multiple stream operations by connecting the output of one stream to the input of another. It is normally used with piping operations. For example, we can use piping and chaining to first compress a file and then decompress it.
What are Event Emitters?
If you worked with JavaScript in the browser, you know how much of the interaction of the user is handled through events: mouse clicks, keyboard button presses, reacting to mouse movements, and so on.
On the backend side, Node.js offers us the option to build a similar system using the events module.
This module, in particular, offers the EventEmitter class, which we’ll use to handle our events.
You initialize it like this:
```
import EventEmitter from 'node:events';
const eventEmitter = new EventEmitter();
```
This object exposes, among many others, the on and emit methods.
1. emit() is used to trigger an event
2. on() is used to add a callback function that will be executed when the event is triggered
For example, let’s create a start event, and as a matter of providing a sample, we react to it by just logging to the console:
```
eventEmitter.on('start', () => {
  console.log('started');
});
```
When we run
```
eventEmitter.emit('start');
```
the event handler function is triggered, and we get the console log.
What are Buffers and why use them in Node.js?
Simply put, a Buffer is a way to store and manipulate binary data in Node.js. Binary data refers to data that consists of binary values, as opposed to text data, which consists of characters and symbols. Examples of binary data include images, audio and video files, and raw data from a network.
Why is this important? The reason is that when you work with binary data, you often need to manipulate it in-memory, which can be difficult and inefficient using JavaScript’s standard data structures. For example, you might need to concatenate two binary data streams, slice a large binary file into smaller pieces, or encode and decode binary data into different character encodings. This is where Buffers come in: they provide a fast and efficient way to store and manipulate binary data in Node.js.
So, how do you use Buffers in Node.js? First, you need to create a Buffer object using the class methods Buffer.alloc() or Buffer.from() (the old new Buffer() constructor is deprecated). For example, you might create a Buffer with a fixed size like this:
const myBuffer = Buffer.alloc(10);
Or you might create a Buffer from an existing binary data stream:
const myBuffer = Buffer.from('Hello, world!');
Once you have a Buffer, you can use its various methods to manipulate the binary data it contains. For example, you might use the “slice” method to extract a portion of the binary data:
```
const slice = myBuffer.slice(0, 5);
console.log(slice.toString()); // Output: "Hello"
```
You can also use the “concat” method to concatenate two or more Buffers:
```
const firstBuffer = Buffer.from('Hello, ');
const secondBuffer = Buffer.from('world!');
const combinedBuffer = Buffer.concat([firstBuffer, secondBuffer]);
console.log(combinedBuffer.toString()); // Output: "Hello, world!"
```
As you can see, Buffers provide a flexible and efficient way to store and manipulate binary data in Node.js. Whether you’re working with images, audio, video, or raw data, you’ll find that Buffers are a powerful tool that can help you build high-performance and scalable applications.
What is a Blocking Code in Node.js?
Blocking is when the execution of additional JavaScript in the Node.js process must wait until a non-JavaScript operation completes. This happens because the event loop is unable to continue running JavaScript while a blocking operation is occurring.
In Node.js, JavaScript that exhibits poor performance due to being CPU intensive rather than waiting on a non-JavaScript operation, such as I/O, isn’t typically referred to as blocking. Synchronous methods in the Node.js standard library that use libuv are the most commonly used blocking operations. Native modules may also have blocking methods.
All of the I/O methods in the Node.js standard library provide asynchronous versions, which are non-blocking, and accept callback functions. Some methods also have blocking counterparts, which have names that end with Sync.
How does concurrency work in Node.js?
JavaScript execution in Node.js is single threaded, so concurrency refers to the event loop’s capacity to execute JavaScript callback functions after completing other work. Any code that is expected to run in a concurrent manner must allow the event loop to continue running as non-JavaScript operations, like I/O, are occurring.
As an example, let’s consider a case where each request to a web server takes 50ms to complete and 45ms of that 50ms is database I/O that can be done asynchronously. Choosing non-blocking asynchronous operations frees up that 45ms per request to handle other requests. This is a significant difference in capacity just by choosing to use non-blocking methods instead of blocking methods.
The event loop is different than models in many other languages where additional threads may be created to handle concurrent work.
How does Node.js handle Child Threads?
Node.js runs application JavaScript on a single thread and does not expose thread management for ordinary code. Internally, libuv uses a pool of worker threads for some asynchronous operations (file I/O, DNS lookups, some crypto), but these are hidden from your JavaScript. When you do need parallelism, Node.js provides explicit mechanisms: the worker_threads module for running JavaScript on additional threads, the child_process module for spawning separate processes, and the cluster module for forking multiple processes that share server ports.
When should we use Node.js?
Node.js is especially well suited for applications where you’d like to maintain a persistent connection from the browser back to the server. Using a technique known as “long polling”, you can write an application that sends updates to the user in real time. Doing long polling in many traditional web frameworks, like Ruby on Rails or Django, would create immense load on the server, because each active client eats up one server process. When you use something like Node.js, the server has no need to maintain separate threads for each open connection.
This means you can create a browser-based chat application in Node.js that takes almost no system resources to serve a great many clients. Any time you want to do this sort of long-polling, Node.js is a great option.
It’s worth mentioning that Ruby and Python both have tools to do this sort of thing (eventmachine and twisted, respectively), but that Node.js does it exceptionally well, and from the ground up. JavaScript is exceptionally well situated to a callback-based concurrency model, and it excels here. Also, being able to serialize and deserialize with JSON native to both the client and the server is pretty nifty.
It’s worth pointing out that Node.js is also great for situations in which you’ll be reusing a lot of code across the client/server gap. The Meteor framework makes this really easy, and a lot of folks are suggesting this might be the future of web development. I can say from experience that it’s a whole lot of fun to write code in Meteor, and a big part of this is spending less time thinking about how you’re going to restructure your data, so the code that runs in the browser can easily manipulate it and pass it back.
What is the difference between setTimeout(fn, 0) vs setImmediate(fn)?
setTimeout(fn, 0) queues the function so that it runs after all currently executing code and already-queued callbacks have finished; the function is not executed immediately, and no guarantee can be made about how long that will take.
setImmediate(fn) is similar, except that its callbacks run in a dedicated phase of the event loop: they are executed immediately after the poll (I/O) phase completes, somewhat like process.nextTick but later. Inside an I/O callback, setImmediate will therefore always fire before a setTimeout(fn, 0) scheduled at the same time; at the top level of a script, the order between the two is not deterministic.
What’s the Event Loop?
The event loop got its name because of how it’s usually implemented, which usually resembles:
```
while (queue.waitForMessage()) {
  queue.processNextMessage();
}
```
queue.waitForMessage() waits synchronously for a message to arrive (if one is not already available and waiting to be handled).
“Run-to-completion”
Each message is processed completely before any other message is processed.
This offers some nice properties when reasoning about your program, including the fact that whenever a function runs, it cannot be preempted and will run entirely before any other code runs (and can modify data the function manipulates). This differs from C, for instance, where if a function runs in a thread, it may be stopped at any point by the runtime system to run some other code in another thread.
A downside of this model is that if a message takes too long to complete, the web application is unable to process user interactions like click or scroll. The browser mitigates this with the “a script is taking too long to run” dialog. A good practice to follow is to make message processing short and if possible cut down one message into several messages.
Adding messages
In web browsers, messages are added anytime an event occurs and there is an event listener attached to it. If there is no listener, the event is lost. So a click on an element with a click event handler will add a message — likewise with any other event.
The first two arguments to the function setTimeout are a message to add to the queue and a time value (optional; defaults to 0). The time value represents the (minimum) delay after which the message will be pushed into the queue. If there is no other message in the queue, and the stack is empty, the message is processed right after the delay. However, if there are messages, the setTimeout message will have to wait for other messages to be processed. For this reason, the second argument indicates a minimum time — not a guaranteed time.
When should I use EventEmitter ?
Whenever it makes sense for code to SUBSCRIBE to something rather than get a callback from something. The typical use case would be that there’s multiple blocks of code in your application that may need to do something when an event happens.
For example, let’s say you are creating a ticketing system. The common way to handle things might be like this:
```
function addTicket(ticket, callback) {
  insertTicketIntoDatabase(ticket, function (err) {
    if (err) return handleError(err);
    callback();
  });
}
```
But now, someone has decided that when a ticket is inserted into the database, you should email the user to let them know. That’s fine, you can add it to the callback:
```
function addTicket(ticket, callback) {
  insertTicketIntoDatabase(ticket, function (err) {
    if (err) return handleError(err);
    emailUser(ticket, callback);
  });
}
```
But now, someone wants to also notify another system that the ticket has been inserted. Over time, there could be any number of things that should happen when a ticket is inserted. So let’s change it around a bit:
```
function addTicket(ticket, callback) {
  insertTicketIntoDatabase(ticket, function (err) {
    if (err) return handleError(err);
    TicketEvent.emit('inserted', ticket);
    callback();
  });
}
```
We no longer need to wait on all these functions to complete before we notify the user interface. And elsewhere in your code, you can add these functions easily:
```
TicketEvent.on('inserted', function (ticket) {
  emailUser(ticket);
});
TicketEvent.on('inserted', function (ticket) {
  notifySlack(ticket);
});
```
What is the difference between the synchronous and asynchronous methods of the fs module?
Synchronous methods: Synchronous functions block the execution of the program until the file operation is completed; they are also called blocking functions. Most asynchronous fs methods have a synchronous counterpart whose name simply has “Sync” appended. The low-level synchronous calls (such as fs.readSync and fs.writeSync) operate on a File Descriptor, a numeric reference to an open file returned by fs.openSync(). Some of the synchronous methods of the fs module in Node.js are:
fs.readFileSync()
fs.renameSync()
fs.writeSync()
fs.writeFileSync()
fs.fsyncSync()
fs.appendFileSync()
fs.statSync()
fs.readdirSync()
fs.existsSync()
Asynchronous methods:
Asynchronous functions do not block the execution of the program: the next command can run even if the previous one has not yet computed its result. The previous command runs in the background and delivers its result once it has finished processing. Thus, these functions are called non-blocking functions. They take a callback function as the last parameter. Asynchronous functions are generally preferred over synchronous functions, as they do not block the execution of the program. Some of the asynchronous methods of the fs module in Node.js are:
fs.readFile()
fs.rename()
fs.write()
fs.writeFile()
fs.fsync()
fs.appendFile()
fs.stat()
fs.readdir()
fs.exists() (deprecated)
Heavy operations which consume time for processing such as querying huge data from a database should be done asynchronously as other operations can still be executed and thus, reducing the time of execution of the program.
How to avoid Callback Hell in Node.js?
- Split Functions into Smaller Functions
- Using Promises
- Using async/await
What is the preferred method of resolving unhandled exceptions in Node.js for asynchronous code?
For callback-based (i.e. asynchronous) code, the first argument of the callback is err: if an error happens, err is the error; if an error doesn’t happen, err is null. Any other arguments follow the err argument:
```
var divide = function (x, y, next) {
  // error condition?
  if (y === 0) {
    // "throw" the error safely by calling the completion callback
    // with the first argument being the error
    next(new Error("Can't divide by zero"))
  } else {
    // no error occurred, continue on
    next(null, x / y)
  }
}

divide(4, 2, function (err, result) {
  // did an error occur?
  if (err) {
    // handle the error safely
    console.log('4/2=err', err)
  } else {
    // no error occurred, continue on
    console.log('4/2=' + result)
  }
})

divide(4, 0, function (err, result) {
  // did an error occur?
  if (err) {
    // handle the error safely
    console.log('4/0=err', err)
  } else {
    // no error occurred, continue on
    console.log('4/0=' + result)
  }
})
```
What is a Stream?
A stream is a way of handling data that lets us read input from or write output to a source sequentially (files, network communications, and any kind of end-to-end information exchange). That is, streams let you read data from a source, write it to a destination, or perform any other specific task continuously, chunk by chunk. Streams are not a concept unique to Node.js; they have been part of Unix for a long time, where the pipe operator lets programs interact with each other by passing streams. Hence, the Node.js stream module is used as the basis for all of its streaming APIs.
Example: When you stream from YouTube, Netflix, or Spotify, instead of the whole content downloading at once, it downloads in small chunks while you keep watching. Another example is chatting on Facebook or WhatsApp, where data continuously flows between two people. Instead of reading all the data into memory at once, a stream processes it in smaller pieces, making large files manageable. This is useful because some files are larger than the free memory available on your device; streams make such files readable anyway.
What are the Advantages of Stream?
- Memory efficiency: Stream is memory (spatial) efficient because they enable you to download files in smaller chunks instead of a whole in the memory before you can process it thus, saving space.
- Time efficiency: Stream is time-efficient because you start processing the data in smaller chunks so the procedure starts earlier compared to the general way, where you have to download the whole data to be able to process it. Hence, this early processing saves a lot of time.
- Composability: Because streams can be piped, they can be connected to one another no matter how large the data is, so the output of one stage flows straight into the input of the next.
What is a Readable Stream?
It is the stream from where you can receive and read the data in an ordered fashion. However, you are not allowed to send anything. For example fs.createReadStream() lets us read the contents of a file.
What is a Writable Stream?
Writable stream: It is the stream where you can send data in an ordered fashion but you are not allowed to receive it back. For example fs.createWriteStream() lets us write data to a file.
What is a Duplex Stream?
Duplex stream: It is the stream that is both readable and writable. Thus you can send in and receive data together. For example net.Socket is a TCP socket.
What is a Transform Stream?
Transform stream: It is the stream that is used to modify the data or transform it as it is read. The transform stream is basically a duplex in nature. For example, zlib.createGzip stream is used to compress the data using gzip.
Are you familiar with differences between Node.js modules and ES6 modules?
Yes. CommonJS modules (the Node.js default) are loaded with require() and exported via module.exports; require() is synchronous and can be called anywhere in the code, even conditionally. ES6 modules use import/export syntax; imports are static and resolved at parse time, which lets tools analyze the dependency graph before the code runs. Node.js treats a file as an ES module when it has the .mjs extension or when the nearest package.json contains "type": "module"; otherwise a .js file is CommonJS, and using the import statement there is a syntax error (historically it required an experimental flag). A CommonJS module can still load an ES module with the asynchronous dynamic import() function, and ES modules can import CommonJS modules, but ES modules do not have require, __dirname, or __filename defined by default.