Node.js Interview Questions Flashcards

1
Q

What is npm used for?

A

npm stands for Node Package Manager. npm provides the following two main functionalities:
1. An online repository for Node.js packages/modules, searchable at npmjs.com
2. A command-line utility to install packages and manage versions and dependencies of Node.js packages.

Dependency management is an especially important use of npm. When you have a Node project with a package.json file, you can run npm install from the project root and npm will install all the dependencies listed in the package.json.

2
Q

Explain the difference between local and global npm package installation

A

The main difference between local and global packages is this:
1. Local packages are installed in the directory where you run npm install <package-name>, and they are put in the node_modules folder under this directory.
2. Global packages are all put in a single place on your system (exactly where depends on your setup), regardless of where you run npm install -g <package-name>.

In general, all packages should be installed locally.
This makes sure you can have dozens of applications on your computer, each running a different version of a package if needed.

Updating a global package would make all your projects use the new release, and as you can imagine this can cause maintenance nightmares, as some packages might break compatibility with their other dependencies, and so on.

3
Q

What is a Callback?

A

A callback is a function called at the completion of a given task; this prevents any blocking, and allows other code to be run in the meantime. Callbacks are the foundation of Node.js. Callbacks give you an interface with which to say, “and when you’re done doing that, do all this.”
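A minimal sketch of the pattern (doSomethingAsync and the one-second delay are purely illustrative):

// A hypothetical async operation that reports its result via a callback.
function doSomethingAsync(callback) {
  setTimeout(() => {
    callback('task finished'); // invoked once the work is done
  }, 1000);
}

// Other code keeps running while the task is pending.
doSomethingAsync((result) => console.log(result));
console.log('this logs first');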

4
Q

What are the key features of Node.js?

A

Let’s look at some of the key features of Node.js.
1. Asynchronous, event-driven I/O helps concurrent request handling – All APIs of Node.js are asynchronous. If a Node.js server receives a request for some input/output operation, it executes that operation in the background and continues processing other requests, rather than waiting for the response to the previous request.
2. Fast code execution – Node.js uses the V8 JavaScript engine, the same engine used by Google Chrome. Node.js provides a wrapper over this engine, and processing of requests within Node.js is correspondingly fast.
3. Single-threaded but highly scalable – Node.js uses a single-threaded event-loop model. The response to an event may or may not arrive immediately, but this does not block other operations, which makes Node.js highly scalable. Traditional servers create a limited number of threads to handle requests, while Node.js uses a single thread to serve a much larger number of requests.
4. The Node.js library uses JavaScript – This is another important aspect of Node.js from the developer’s point of view. The majority of developers are already well-versed in JavaScript, so development in Node.js is easier for a developer who knows JavaScript.
5. There is an active and vibrant community around Node.js – The community keeps the framework updated with the latest trends in web development.
6. No buffering – Node.js applications never buffer data; they simply output the data in chunks.

5
Q

Why does Node.js prefer Error-First Callback?

A

The usual pattern is that the callback is invoked as callback(err, result), where only one of err and result is non-null, depending on whether the operation succeeded or failed. Without this convention, developers would have to maintain different signatures and APIs, without knowing where to place the error in the arguments array.

fs.readFile(filePath, function(err, data) {
  if (err) {
    //handle the error
  }
  // use the data object
});
6
Q

What is Callback Hell and what is the main cause of it?

A

Asynchronous JavaScript, or JavaScript that uses callbacks, is hard to get right intuitively. A lot of code ends up looking like this:

fs.readdir(source, function (err, files) {
  if (err) {
    console.log('Error finding files: ' + err)
  } else {
    files.forEach(function (filename, fileIndex) {
      console.log(filename)
      gm(source + filename).size(function (err, values) {
        if (err) {
          console.log('Error identifying file size: ' + err)
        } else {
          console.log(filename + ' : ' + values)
          aspect = (values.width / values.height)
          widths.forEach(function (width, widthIndex) {
            height = Math.round(width / aspect)
            console.log('resizing ' + filename + ' to ' + height + 'x' + height)
            this.resize(width, height).write(dest + 'w' + width + '_' + filename, function(err) {
              if (err) console.log('Error writing file: ' + err)
            })
          }.bind(this))
        }
      })
    })
  }
})

See the pyramid shape and all the }) at the end? This is affectionately known as callback hell.
The cause of callback hell is when people try to write JavaScript in a way where execution happens visually from top to bottom. Lots of people make this mistake! In other languages like C, Ruby or Python there is the expectation that whatever happens on line 1 will finish before the code on line 2 starts running and so on down the file.

6
Q

What do you mean by Asynchronous API?

A

All APIs of the Node.js library are asynchronous, that is, non-blocking. It essentially means a Node.js based server never waits for an API to return data. The server moves to the next API after calling it, and a notification mechanism (Node.js events) helps the server get the response from the previous API call.

6
Q

What is the difference between returning a callback and just calling a callback?

A

Calling return callback(...) and just calling callback(...) both invoke the callback with the same arguments; the difference is that the return also exits the enclosing function immediately. Without the return, execution continues past the callback call, which can lead to code running that you did not intend, or to the callback accidentally being invoked a second time later in the function. The returned value itself is usually ignored by the caller; return is used purely for control flow.
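A small illustrative sketch (validateAndRun and its check are hypothetical):

function validateAndRun(input, callback) {
  if (!input) {
    // Without `return`, execution would fall through and call callback again below.
    return callback(new Error('input is required'));
  }
  callback(null, input.toUpperCase());
}

validateAndRun('', (err, result) => {
  if (err) return console.error(err.message);
  console.log(result);
});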

7
Q

What is libuv?

A

libuv is a C library that is used to abstract non-blocking I/O operations to a consistent interface across all supported platforms. It provides mechanisms to handle file system, DNS, network, child processes, pipes, signal handling, polling and streaming. It also includes a thread pool for offloading work for some things that can’t be done asynchronously at the operating system level.

8
Q

What is V8?

A

The V8 library provides Node.js with a JavaScript engine (a program that converts Javascript code into lower level or machine code that microprocessors can understand), which Node.js controls via the V8 C++ API. V8 is maintained by Google, for use in Chrome.
The Chrome V8 engine :
1. The V8 engine is written in C++ and used in Chrome and Nodejs.
2. It implements ECMAScript as specified in ECMA-262.
3. The V8 engine can run standalone, or it can be embedded into our own C++ program.

9
Q

What is the file package.json?

A

All npm packages contain a file, usually in the project root, called package.json - this file holds various metadata relevant to the project. This file is used to give information to npm that allows it to identify the project as well as handle the project’s dependencies. It can also contain other metadata such as a project description, the version of the project in a particular distribution, license information, even configuration data - all of which can be vital to both npm and to the end users of the package. The package.json file is normally located at the root directory of a Node.js project.

Here is a minimal package.json:
{
  "name": "barebones",
  "version": "0.0.0"
}

10
Q

Name some Built-in Globals in Node.js

A

Node.js has a number of built-in global identifiers that every Node.js developer should have some familiarity with. Some of these are true globals, being visible everywhere; others exist at the module level, but are inherent to every module, thus being pseudo-globals.

The list of true globals:
1. global - The global namespace. Setting a property to this namespace makes it globally visible within the running process.
2. process - The Node.js built-in process module, which provides interaction with the current Node.js process.
3. console - The Node.js built-in console module, which wraps various STDIO functionality in a browser-like way.
4. setTimeout(), clearTimeout(), setInterval(), clearInterval() - The built-in timer functions are globals.

The pseudo-globals included at the module level in every module:
module, module.exports, exports - These objects all pertain to the Node.js module system.

  1. __filename - The __filename keyword contains the path of the currently executing file. Note that this is not defined while running the Node.js REPL.
  2. __dirname - Like __filename, the __dirname keyword contains the directory of the currently executing script. Also not present in the Node.js REPL.
  3. require() - The require() function is a built-in function, exposed per-module, that allows other valid modules to be included.
11
Q

What does Promisifying technique mean in Node.js?

A

This technique is a way to be able to use a classic Javascript function that takes a callback, and have it return a promise:
For example:

const fs = require('fs')

const getFile = (fileName) => {
    return new Promise((resolve, reject) => {
        fs.readFile(fileName, (err, data) => {
            if (err) {
                reject(err)
                return
            }
            resolve(data)
        })
    })
}

getFile('/etc/passwd')
.then(data => console.log(data))
.catch(err => console.log(err))
12
Q


What’s the difference between process.cwd() and __dirname?

A

cwd() is a method of the global process object; it returns a string value which is the current working directory of the Node.js process.
__dirname is the directory name of the current script as a string value. __dirname is not actually global but rather local to each module.

Consider the project structure:
Project
├── main.js
└── lib
    └── script.js

Suppose we have a file script.js inside a subdirectory of the project, i.e. C:/Project/lib/script.js, and we run node main.js, which requires script.js.

main.js

require('./lib/script.js')
console.log(process.cwd())
// C:\Project
console.log(__dirname)
// C:\Project
console.log(__dirname === process.cwd())
// true

script.js

console.log(process.cwd())
// C:\Project
console.log(__dirname)
// C:\Project\lib
console.log(__dirname === process.cwd())
// false
13
Q

Why do we always require modules at the top of a file? Can we require modules inside of functions?

A

Yes, we can, but it is generally discouraged.
Node.js always runs require synchronously. If you require an external module from within a function, the module will be loaded synchronously when that function runs, and this can cause two problems:

  1. If that module is only needed in one route handler function, it may take some time for the module to load synchronously. As a result, requests queue up and users are unable to get any access to your server while it loads.
  2. If the module you require causes an error and crashes the server, you may not find out about the error until that function is first called.
14
Q

What is the preferred method of resolving unhandled exceptions in Node.js for synchronous code?

A

For synchronous code, if an error happens, return the error:

~~~
// Define divider as a synchronous function
var divideSync = function(x, y) {
  // if error condition?
  if (y === 0) {
    // "throw" the error safely by returning it
    return new Error("Can't divide by zero")
  } else {
    // no error occurred, continue on
    return x / y
  }
}

// Divide 4/2
var result = divideSync(4, 2)
// did an error occur?
if (result instanceof Error) {
  // handle the error safely
  console.log('4/2=err', result)
} else {
  // no error occurred, continue on
  console.log('4/2=' + result)
}

// Divide 4/0
result = divideSync(4, 0)
// did an error occur?
if (result instanceof Error) {
  // handle the error safely
  console.log('4/0=err', result)
} else {
  // no error occurred, continue on
  console.log('4/0=' + result)
}
~~~

15
Q

Explain how Node.js works.

A

Node.js is an open-source backend JavaScript runtime environment. It is used as a backend service where JavaScript runs on the server side of the application, so JavaScript can be used on both the frontend and the backend. Node.js runs on Chrome's V8 engine, which converts JavaScript code into machine code; it is highly scalable, lightweight, fast, and well suited to data-intensive work.

How Node.js works: Node.js accepts requests from clients and sends back responses, handling them with a single thread. A thread is a sequence of instructions the server needs to perform; threads run in parallel on a server to provide information to multiple clients. Node.js is a single-threaded, event-loop-based runtime: it can handle concurrent requests with a single thread without blocking it for any one request.

Node.js basically works on two concepts:

Asynchronous
Non-blocking I/O

Non-blocking I/O: Non-blocking I/O means working with multiple requests without blocking the thread for a single request. I/O mostly interacts with external systems such as files and databases. Node.js is not suited for CPU-intensive work (heavy calculations, video processing, and so on) because a single thread cannot handle that kind of workload well.

Asynchronous: Asynchrony here means executing callback functions. The moment we get the response from the other server or database, the corresponding callback function is executed. Callbacks are called as soon as some work is finished, because Node.js uses an event-driven architecture. The single thread does not do the I/O work itself; it delegates the request to another system (the operating system or the thread pool), which resolves it while the thread remains available for other requests.

To handle requests this way, Node.js relies on libuv.

libuv is an open-source library written in C. It has a strong focus on asynchronous I/O and gives Node.js access to the underlying operating system, file system, and networking.

libuv implements two extremely important features of Node.js:

Event loop
Thread pool

Event loop: The event loop runs on a single thread and is responsible for handling lightweight tasks like executing callbacks and network I/O. When the program initializes, all the top-level code (code not inside callback functions) is executed; all application code inside callback functions runs in the event loop. The event loop is the heart of Node.js: when we start a Node application, the event loop starts running right away, and most of the work is done in it.

Node.js uses an event-driven architecture:

Events are emitted.
The event loop picks them up.
Callbacks are called.

Event queue: As soon as a request is received, it is placed into a queue known as the event queue. Processes such as the app receiving an HTTP request, a server operation, or a timer emit an event as soon as they are done with their work; the event loop picks up these events, calls the callback functions associated with each event, and the response is sent to the client.

The event loop is an indefinite loop that continuously receives requests and processes them. It checks the queue and waits for incoming requests indefinitely.

Thread pool: Although Node.js is single-threaded, it internally maintains a thread pool. Non-blocking requests are processed in the event loop, but blocking requests are assigned to an available thread in the thread pool; the result is handed back to the event loop, and the response is sent to the respective client.

The thread pool size can be changed:

process.env.UV_THREADPOOL_SIZE = 1;

16
Q

What is Stream Chaining in Node.js?

A

Stream chaining is a mechanism for creating a chain of multiple stream operations by connecting the output of one stream to another stream. It is normally used with piping operations. For example, piping and chaining can be used to first compress a file and then decompress the same, as in the sketch below.
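A minimal sketch using the built-in zlib module (the file names are hypothetical):

const fs = require('fs');
const zlib = require('zlib');

// Compress: readable -> gzip transform -> writable, all chained with pipe().
fs.createReadStream('input.txt')
  .pipe(zlib.createGzip())
  .pipe(fs.createWriteStream('input.txt.gz'));

// Decompress the same file again: readable -> gunzip transform -> writable.
fs.createReadStream('input.txt.gz')
  .pipe(zlib.createGunzip())
  .pipe(fs.createWriteStream('input.copy.txt'));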

17
Q

What are Event Emitters?

A

If you worked with JavaScript in the browser, you know how much of the interaction of the user is handled through events: mouse clicks, keyboard button presses, reacting to mouse movements, and so on.

On the backend side, Node.js offers us the option to build a similar system using the events module.

This module, in particular, offers the EventEmitter class, which we’ll use to handle our events.

You initialize that using

import EventEmitter from 'node:events';
const eventEmitter = new EventEmitter();

This object exposes, among many others, the on and emit methods.
1. emit() is used to trigger an event
2. on() is used to add a callback function that’s going to be executed when the event is triggered

For example, let’s create a start event and, as a matter of providing a sample, react to it by just logging to the console:

eventEmitter.on('start', () => {
  console.log('started');
});

When we run
eventEmitter.emit('start');

the event handler function is triggered, and we get the console log.

18
Q

What are Buffers and why use them in Node.js?

A

Simply put, a Buffer is a way to store and manipulate binary data in Node.js. Binary data refers to data that consists of binary values, as opposed to text data, which consists of characters and symbols. Examples of binary data include images, audio and video files, and raw data from a network.

Why is this important? The reason is that when you work with binary data, you often need to manipulate it in-memory, which can be difficult and inefficient using JavaScript’s standard data structures. For example, you might need to concatenate two binary data streams, slice a large binary file into smaller pieces, or encode and decode binary data into different character encodings. This is where Buffers come in: they provide a fast and efficient way to store and manipulate binary data in Node.js.

So, how do you use Buffers in Node.js? First, you need to create a Buffer object using the Buffer class (via its factory methods, since calling the constructor directly with new Buffer() is deprecated). For example, you might create a Buffer with a fixed size like this:

const myBuffer = Buffer.alloc(10);

Or you might create a Buffer from an existing binary data stream:

const myBuffer = Buffer.from('Hello, world!');

Once you have a Buffer, you can use its various methods to manipulate the binary data it contains. For example, you might use the “slice” method to extract a portion of the binary data:

const slice = myBuffer.slice(0, 5);
console.log(slice.toString()); // Output: "Hello"

You can also use the “concat” method to concatenate two or more Buffers:

const firstBuffer = Buffer.from('Hello, ');
const secondBuffer = Buffer.from('world!');
const combinedBuffer = Buffer.concat([firstBuffer, secondBuffer]);
console.log(combinedBuffer.toString()); // Output: "Hello, world!"

As you can see, Buffers provide a flexible and efficient way to store and manipulate binary data in Node.js. Whether you’re working with images, audio, video, or raw data, you’ll find that Buffers are a powerful tool that can help you build high-performance and scalable applications.

19
Q

What is a Blocking Code in Node.js?

A

Blocking is when the execution of additional JavaScript in the Node.js process must wait until a non-JavaScript operation completes. This happens because the event loop is unable to continue running JavaScript while a blocking operation is occurring.

In Node.js, JavaScript that exhibits poor performance due to being CPU intensive rather than waiting on a non-JavaScript operation, such as I/O, isn’t typically referred to as blocking. Synchronous methods in the Node.js standard library that use libuv are the most commonly used blocking operations. Native modules may also have blocking methods.

All of the I/O methods in the Node.js standard library provide asynchronous versions, which are non-blocking, and accept callback functions. Some methods also have blocking counterparts, which have names that end with Sync.
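A short sketch of the difference, using the fs module (the file name is hypothetical):

const fs = require('fs');

// Blocking: the event loop waits until the whole file has been read.
const data = fs.readFileSync('config.json', 'utf8');
console.log(data);

// Non-blocking: the read is handed off, and the callback runs when it's done.
fs.readFile('config.json', 'utf8', (err, contents) => {
  if (err) throw err;
  console.log(contents);
});
console.log('this line does not wait for the file');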

20
Q

How does concurrency work in Node.js?

A

JavaScript execution in Node.js is single threaded, so concurrency refers to the event loop’s capacity to execute JavaScript callback functions after completing other work. Any code that is expected to run in a concurrent manner must allow the event loop to continue running as non-JavaScript operations, like I/O, are occurring.

As an example, let’s consider a case where each request to a web server takes 50ms to complete and 45ms of that 50ms is database I/O that can be done asynchronously. Choosing non-blocking asynchronous operations frees up that 45ms per request to handle other requests. This is a significant difference in capacity just by choosing to use non-blocking methods instead of blocking methods.

The event loop is different than models in many other languages where additional threads may be created to handle concurrent work.

22
Q

When should we use Node.js?

A

Node.js is especially suited for applications where you’d like to maintain a persistent connection from the browser back to the server. Using a technique known as “long polling”, you can write an application that sends updates to the user in real time. Doing long polling with frameworks like Ruby on Rails or Django would create immense load on the server, because each active client eats up one server process. With Node.js, the server has no need to maintain separate threads for each open connection.

This means you can create a browser-based chat application in Node.js that takes almost no system resources to serve a great many clients. Any time you want to do this sort of long polling, Node.js is a great option.

It’s worth mentioning that Ruby and Python both have tools to do this sort of thing (EventMachine and Twisted, respectively), but Node.js does it exceptionally well, and from the ground up. JavaScript is exceptionally well suited to a callback-based concurrency model, and it excels here. Also, being able to serialize and deserialize with JSON natively on both the client and the server is pretty handy.

It’s worth pointing out that Node.js is also great for situations in which you’ll be reusing a lot of code across the client/server gap. The Meteor framework makes this really easy, and a lot of folks suggest this might be the future of web development; a big part of its appeal is spending less time thinking about how to restructure your data so the code that runs in the browser can easily manipulate it and pass it back.

23
Q

What is the difference between setTimeout(fn, 0) and setImmediate(fn)?

A

setTimeout is simply like calling the function after the delay has finished. Whenever a function is called it is not executed immediately, but queued so that it is executed after all the currently executing and currently queued event handlers finish first. setTimeout(fn, 0) essentially means “execute after all current functions in the present queue get executed”; no guarantee can be made about how long that will take.

setImmediate is similar, except that its callbacks run in a dedicated phase of the event loop (the “check” phase) that follows the processing of I/O event handlers. Inside an I/O callback, a setImmediate callback therefore always fires before a setTimeout(fn, 0) callback scheduled at the same time.
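A small sketch illustrating the ordering inside an I/O callback:

const fs = require('fs');

fs.readFile(__filename, () => {
  setTimeout(() => console.log('timeout'), 0);
  setImmediate(() => console.log('immediate'));
});
// Inside the I/O callback, "immediate" is always logged before "timeout",
// because the check phase (setImmediate) runs right after the poll phase.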

24
Q

What’s the Event Loop?

A

The event loop got its name because of how it’s usually implemented, which usually resembles:

while (queue.waitForMessage()) {
  queue.processNextMessage();
}

queue.waitForMessage() waits synchronously for a message to arrive (if one is not already available and waiting to be handled).

“Run-to-completion”
Each message is processed completely before any other message is processed.

This offers some nice properties when reasoning about your program, including the fact that whenever a function runs, it cannot be preempted and will run entirely before any other code runs (and can modify data the function manipulates). This differs from C, for instance, where if a function runs in a thread, it may be stopped at any point by the runtime system to run some other code in another thread.

A downside of this model is that if a message takes too long to complete, the web application is unable to process user interactions like click or scroll. The browser mitigates this with the “a script is taking too long to run” dialog. A good practice to follow is to make message processing short and if possible cut down one message into several messages.

Adding messages
In web browsers, messages are added anytime an event occurs and there is an event listener attached to it. If there is no listener, the event is lost. So a click on an element with a click event handler will add a message — likewise with any other event.

The first two arguments to the function setTimeout are a message to add to the queue and a time value (optional; defaults to 0). The time value represents the (minimum) delay after which the message will be pushed into the queue. If there is no other message in the queue, and the stack is empty, the message is processed right after the delay. However, if there are messages, the setTimeout message will have to wait for other messages to be processed. For this reason, the second argument indicates a minimum time — not a guaranteed time.

25
Q

When should I use EventEmitter ?

A

Whenever it makes sense for code to SUBSCRIBE to something rather than get a callback from something. The typical use case would be that there’s multiple blocks of code in your application that may need to do something when an event happens.

For example, let’s say you are creating a ticketing system. The common way to handle things might be like this:

function addTicket(ticket, callback) {
    insertTicketIntoDatabase(ticket, function(err) {
        if (err)
            return handleError(err);

        callback();
    });
}

But now, someone has decided that when a ticket is inserted into the database, you should email the user to let them know. That’s fine, you can add it to the callback:

function addTicket(ticket, callback) {
    insertTicketIntoDatabase(ticket, function(err) {
        if (err)
            return handleError(err);

        emailUser(ticket, callback);
    });
}

But now, someone wants to also notify another system that the ticket has been inserted. Over time, there could be any number of things that should happen when a ticket is inserted. So let’s change it around a bit:

function addTicket(ticket, callback) {
    insertTicketIntoDatabase(ticket, function(err) {
        if (err)
            return handleError(err);

        TicketEvent.emit('inserted', ticket);
        callback();
    });
}

We no longer need to wait on all these functions to complete before we notify the user interface. And elsewhere in your code, you can add these functions easily:

TicketEvent.on('inserted', function(ticket) {
    emailUser(ticket);
});

TicketEvent.on('inserted', function(ticket) {
    notifySlack(ticket);
});
26
Q

What is the difference between the synchronous and asynchronous methods of the fs module?

A

Synchronous methods: Synchronous functions block the execution of the program until the file operation has completed; these functions are also called blocking functions. (Some fs methods operate on a file descriptor, which is a number referencing an open file, returned by the fs.open() method of the fs module.) Every asynchronous method has a synchronous counterpart, named by appending “Sync” to the function name. Some of the synchronous methods of the fs module in Node.js are:

fs.readFileSync()
fs.renameSync()
fs.writeSync()
fs.writeFileSync()
fs.fsyncSync()
fs.appendFileSync()
fs.statSync()
fs.readdirSync()
fs.existsSync()

Asynchronous methods:

Asynchronous functions do not block the execution of the program: the next command runs even if the previous command has not yet produced its result. The previous operation runs in the background and delivers its result once it has finished processing. These functions are therefore called non-blocking functions. They take a callback function as the last parameter. Asynchronous functions are generally preferred over synchronous functions because they do not block execution of the program. Some of the asynchronous methods of the fs module in Node.js are:

fs.readFile()
fs.rename()
fs.write()
fs.writeFile()
fs.fsync()
fs.appendFile()
fs.stat()
fs.readdir()
fs.exists()
Heavy, time-consuming operations, such as querying large amounts of data from a database, should be done asynchronously so that other operations can still be executed, reducing the overall execution time of the program.

27
Q

How to avoid Callback Hell in Node.js?

A
  1. Split functions into smaller functions
  2. Use Promises
  3. Use async/await (see the sketch below)
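A small sketch of approach 3, rewriting nested callbacks with async/await using the built-in promise-based fs API (the file names are hypothetical):

const fs = require('fs').promises;

async function copyUppercased() {
  try {
    const text = await fs.readFile('input.txt', 'utf8'); // no nesting required
    await fs.writeFile('output.txt', text.toUpperCase());
    console.log('done');
  } catch (err) {
    console.error(err);
  }
}

copyUppercased();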
28
Q

What is the preferred method of resolving unhandled exceptions in Node.js for asynchronous code?

A

For callback-based (ie. asynchronous) code, the first argument of the callback is err, if an error happens err is the error, if an error doesn’t happen then err is null. Any other arguments follow the err argument:

~~~
var divide = function(x, y, next) {
  // if error condition?
  if (y === 0) {
    // "throw" the error safely by calling the completion callback
    // with the first argument being the error
    next(new Error("Can't divide by zero"))
  } else {
    // no error occurred, continue on
    next(null, x / y)
  }
}

divide(4, 2, function(err, result) {
  // did an error occur?
  if (err) {
    // handle the error safely
    console.log('4/2=err', err)
  } else {
    // no error occurred, continue on
    console.log('4/2=' + result)
  }
})

divide(4, 0, function(err, result) {
  // did an error occur?
  if (err) {
    // handle the error safely
    console.log('4/0=err', err)
  } else {
    // no error occurred, continue on
    console.log('4/0=' + result)
  }
})
~~~

29
Q

what is a stream?

A

A stream is a way of handling data that lets you read from a source or write to a destination (files, network communications, or any kind of end-to-end information exchange) sequentially and continuously. Streams are not unique to Node.js; they have been part of Unix for a long time, where the pipe operator lets programs interact with each other by passing streams. Node.js streams serve as the basis for all of its streaming APIs.

Example: When you stream YouTube, Netflix, or Spotify, instead of the whole content downloading at once, it downloads in small chunks while you keep browsing. Another example is chatting on Facebook or WhatsApp, where data is continuously flowing between two people. Instead of reading all the data into memory at once, a stream processes it in smaller pieces, which makes large files manageable; some files are even larger than the free space available on your device, and streams make such files readable anyway.

30
Q

What are the Advantages of Stream?

A
  1. Memory efficiency: Streams are memory (spatial) efficient because they let you process a file in smaller chunks instead of loading the whole thing into memory first, saving space.
  2. Time efficiency: Streams are time-efficient because you start processing the data in smaller chunks, so processing starts earlier than in the general approach, where you have to download the whole data before you can process it.
  3. Composable data: Data is composable because of the piping ability of streams, which lets them connect to one another no matter how heavy the processing is; the output of one stream keeps getting piped into the input of the next.
31
Q

What is a readable stream?

A

It is the stream from where you can receive and read the data in an ordered fashion. However, you are not allowed to send anything. For example fs.createReadStream() lets us read the contents of a file.
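A tiny sketch (the file name is hypothetical):

const fs = require('fs');

const readable = fs.createReadStream('big-file.txt', { encoding: 'utf8' });
readable.on('data', (chunk) => console.log('got a chunk of', chunk.length, 'characters'));
readable.on('end', () => console.log('no more data'));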

32
Q

What is a writable stream?

A

Writable stream: It is the stream where you can send data in an ordered fashion but you are not allowed to receive it back. For example fs.createWriteStream() lets us write data to a file.

33
Q

What is a duplex stream?

A

Duplex stream: It is the stream that is both readable and writable. Thus you can send in and receive data together. For example net.Socket is a TCP socket.

34
Q

What is a transform stream?

A

Transform stream: It is the stream that is used to modify the data or transform it as it is read. The transform stream is basically a duplex in nature. For example, zlib.createGzip stream is used to compress the data using gzip.

35
Q

Are you familiar with differences between Node.js modules and ES6 modules?

A

Node.js (CommonJS) modules use require() and module.exports and are loaded synchronously; they have historically been the default module system in Node.js. ES6 (ECMAScript) modules use import and export statements and are enabled in Node.js by using the .mjs extension or by setting "type": "module" in package.json (older Node.js versions put ES module support behind an experimental flag). A single file uses one system or the other: an import statement cannot be used inside a CommonJS module, and require() cannot generally be used to load an ES module.
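A minimal sketch of the two syntaxes, shown as four separate files:

// math.cjs (CommonJS style)
module.exports.add = (a, b) => a + b;

// main.cjs (CommonJS consumer)
const { add } = require('./math.cjs');
console.log(add(1, 2));

// math.mjs (ES module style)
export const add = (a, b) => a + b;

// main.mjs (ES module consumer)
import { add } from './math.mjs';
console.log(add(1, 2));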

36
Q

How can we run an external process with Node.js?

A

Node.js is a cross-platform, open-source back-end JavaScript runtime environment that uses the V8 engine to execute JavaScript code outside of a web browser. Node.js allows developers to use JavaScript to create command-line tools and server-side scripting, which involves running scripts on the server before sending the page to the user’s browser.

The child_process module provides us with the capability to run external processes in Node.js. With the help of the child_process module, we may use any system command as a “child process” to access operating system features. The module provides us with four ways to create a child process:

spawn
fork
exec
execFile

import { spawn } from 'child_process'; 
  
const lsProcess = spawn('ls'); 
lsProcess.stdout.on('data', data => { 
    console.log(`stdout:\n${data}`); 
}) 
lsProcess.stderr.on("data", (data) => { 
    console.log(`stderr: ${data}`); 
}); 
lsProcess.on('exit', code => { 
    console.log(`Process ended with ${code}`); 
})
37
Q

How do you debug a Node.js application?

A

Enable Inspector
When started with the --inspect switch, a Node.js process listens for a debugging client. By default, it will listen at host and port 127.0.0.1:9229. Each process is also assigned a unique UUID.

Inspector clients must know and specify host address, port, and UUID to connect. A full URL will look something like ws://127.0.0.1:9229/0f2c936f-b1cd-4ac9-aab3-f63b0f33d55e.

Node.js will also start listening for debugging messages if it receives a SIGUSR1 signal. (SIGUSR1 is not available on Windows.) In Node.js 7 and earlier, this activates the legacy Debugger API. In Node.js 8 and later, it will activate the Inspector API.

Security Implications
Since the debugger has full access to the Node.js execution environment, a malicious actor able to connect to this port may be able to execute arbitrary code on behalf of the Node.js process. It is important to understand the security implications of exposing the debugger port on public and private networks.

Exposing the debug port publicly is unsafe
If the debugger is bound to a public IP address, or to 0.0.0.0, any clients that can reach your IP address will be able to connect to the debugger without any restriction and will be able to run arbitrary code.

By default node --inspect binds to 127.0.0.1. You explicitly need to provide a public IP address or 0.0.0.0, etc., if you intend to allow external connections to the debugger. Doing so may expose you to a potentially significant security threat. We suggest you ensure appropriate firewalls and access controls are in place to prevent a security exposure.

See the section on ‘Enabling remote debugging scenarios’ on some advice on how to safely allow remote debugger clients to connect.

Local applications have full access to the inspector
Even if you bind the inspector port to 127.0.0.1 (the default), any applications running locally on your machine will have unrestricted access. This is by design to allow local debuggers to be able to attach conveniently.

Browsers, WebSockets and same-origin policy
Websites open in a web-browser can make WebSocket and HTTP requests under the browser security model. An initial HTTP connection is necessary to obtain a unique debugger session id. The same-origin-policy prevents websites from being able to make this HTTP connection. For additional security against DNS rebinding attacks, Node.js verifies that the ‘Host’ headers for the connection either specify an IP address or localhost precisely.

These security policies disallow connecting to a remote debug server by specifying the hostname. You can work-around this restriction by specifying either the IP address or by using ssh tunnels as described below.

Inspector Clients
A minimal CLI debugger is available with node inspect myscript.js. Several commercial and open source tools can also connect to the Node.js Inspector.

Chrome DevTools 55+, Microsoft Edge
Option 1: Open chrome://inspect in a Chromium-based browser or edge://inspect in Edge. Click the Configure button and ensure your target host and port are listed.
Option 2: Copy the devtoolsFrontendUrl from the output of /json/list or the --inspect hint text and paste it into Chrome.

38
Q

How does Node.js handle child threads?

A

Node.js, in its essence, is a single thread process. It does not expose child threads and thread management methods to the developer. Technically, Node.js does spawn child threads for certain tasks such as asynchronous I/O, but these run behind the scenes and do not execute any application JavaScript code, nor block the main event loop.

If threading support is desired in a Node.js application, there are tools available to enable it, such as the child_process and cluster modules; Node.js also provides real threads via the worker_threads module.

39
Q

What is the relationship between Node.js and V8?

A

V8 is the Javascript engine inside of node.js that parses and runs your Javascript. The same V8 engine is used inside of Chrome to run javascript in the Chrome browser. Google open-sourced the V8 engine and the builders of node.js used it to run Javascript in node.js.

Can Node.js work without V8?

No. The current node.js binary cannot work without V8. It would have no Javascript engine and thus no ability to run code which would obviously render it non-functional. Node.js was not designed to run with any other Javascript engine and, in fact, all the native code bindings that come with node.js (such as the fs module or the net module) all rely on the specific V8 interface between C++ and Javascript.

There is an effort by Microsoft to allow the Chakra Javascript engine (that’s the engine in Edge) to be used with node.js. They build a V8 shim on top of Chakra so that the node.js binary code that expects to be talking to V8 can continue to do what it was doing, but actually end up talking to the Chakra engine underneath. From what I’ve read this is particularly targeted at Microsoft platforms that already have the Chakra engine and do not have the V8 engine running on them, though presumably you could use it on Windows too.

40
Q

What are the use cases for the Node.js “vm” core module?

A

Below are some use cases of the Node.js “vm” module.

Use case 1: Isolated Environment for Running Code:

One of the primary use cases for the Node.js “vm” module is to provide an isolated environment for running code. The “vm” module provides a sandboxed context in which scripts can be executed without directly touching the state of the host module. Note, however, that the Node.js documentation explicitly warns that the vm module is not a security mechanism and should not be relied on to run untrusted code.

Use case 2: Improving the Performance of Applications:

Another use case for the Node.js “vm” module is to improve the performance of your applications. The “vm” module allows you to run scripts in a separate context, which can help to isolate and optimize the execution of specific tasks. This can be useful for running computationally intensive tasks, such as data processing or machine learning algorithms, without affecting the performance of the rest of your application.

Use case 3: Advanced Features:

The “vm” module also provides a number of advanced features for developers to use. One such feature is the ability to evaluate code dynamically at runtime. This can be useful for creating dynamic user interfaces or for implementing code-generation techniques. Additionally, the “vm” module also provides a way to access the context of the script, which can be useful for debugging or for creating custom context-aware features.

Use case 4: Non-Blocking Execution:

The “vm” module also provides a way to run code in a non-blocking manner. This can be very useful in situations where you need to run a script that takes a long time to execute, but you don’t want to block the execution of other scripts. The “vm” module provides a way to run a script in a separate context so that it can execute independently of other scripts.
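A minimal sketch using the vm API to evaluate code in a separate context:

const vm = require('node:vm');

// Create a sandboxed context with its own globals.
const context = vm.createContext({ count: 2 });

// Run code inside that context; it cannot see this file's variables.
vm.runInContext('count += 40; label = "answer"', context);

console.log(context.count); // 42
console.log(context.label); // 'answer'
console.log(typeof count);  // 'undefined' in the outer scope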

41
Q

Provide some reasons not to use Node.js

A

NPM Dependency Nightmare
When you are working with Node.js and JavaScript, the biggest concern is the fact that your code is completely dependent on the NPM package manager. This means that if there is a problem within one of your dependencies, it can cause problems for your entire application. Furthermore, you need to ensure that your dependencies are up-to-date because if they are not, you will soon find that your application is not working properly. When this situation arises, you need to identify the problem and then update your dependencies so that everything works as expected.

Furthermore, one of the packages you use may itself depend on hundreds of other packages, and if one of those packages breaks, it can directly affect your application. A framework like Gatsby, for example, pulls in a very large dependency tree.

Unstable API
The Node.js API is ever-changing whether with new features or changes to the API. Sometimes these changes are also backward-incompatible, which can create huge problems for your application. So you should be careful when updating your Node.js version. Hence, if you are working with a production codebase, it’s a better decision to use one of the Long-term Support versions (LTS) for your project.

Not Ideal for CPU-intensive tasks
Node.js, at its core, is single-threaded and event-driven (Learn more about Node.js architecture here). Its event-driven nature makes applications highly scalable. But if your application has to run tasks that are CPU-intensive and heavy computing, Node.js might not be the best choice for you. Because when a heavy task is running, it blocks the Node.js event loop from moving forward for a longer period of time. This is completely different than languages like Java or Go, which are multi-threaded and can perform several actions at the same time.

42
Q

What are named exports?

A

Named exports allow you to export multiple values from a module and give each of them a specific name. You can import these values by using their respective names when importing in another module.

Example:

// moduleA.js
export const foo = 'Foo';
export function bar() {
    // function implementation
}

// moduleB.js
import { foo, bar } from './moduleA';

In this example, you export both the foo constant and the bar function from moduleA, and you import them using their respective names in moduleB.

43
Q

what are default exports?

A

Default export is used to export a single value as the default value for a module. This value can be a variable, function, class, or any other JavaScript entity. When importing a default export, you can assign it any name you want in the importing module.

Example:

// moduleC.js
const myDefault = 'Default Value';
export default myDefault;

// moduleD.js
import myAlias from './moduleC';

In this example, the myDefault value is exported as the default export from moduleC. When importing it into moduleD, it’s assigned the name myAlias.

44
Q

Can we mix named and default exports in a single module in JavaScript?

A

It’s important to note that you can mix both named and default exports in the same module:

// moduleE.js
export const namedExport = 'Named Export';
const defaultExport = 'Default Export';
export default defaultExport;

When importing mixed exports, you can choose how to import them:

import myDefault, { namedExport } from './moduleE';

Here, myDefault will receive the default export value, and namedExport will receive the named export value.

45
Q

What will be the output?
~~~
import { EventEmitter } from 'events';
const eventEmitter = new EventEmitter();

eventEmitter.on('myEvent', () => {
  console.log('Listener 1');
});

eventEmitter.emit('myEvent');

eventEmitter.on("myEvent", () => {
  console.log("Listener 2");
});
~~~

A

We get only Listener 1 as output in the console, as Listener 2 was registered after the event was emitted.

46
Q

What will be the output?
~~~
import { EventEmitter } from 'events';

const eventEmitter1 = new EventEmitter();
eventEmitter1.on('myEvent', () => {
  console.log('Listener');
});

const eventEmitter2 = new EventEmitter();
eventEmitter2.emit('myEvent');
~~~

A

**EventEmitter Instance Should Be Singleton for a Single Event Name**
In other words, the on() and the emit() functions must be called on the same EventEmitter instance.

The listeners won’t work if registered on a separate EventEmitter instance.

This code won’t print anything to the console, because two separate instances were used: the listener was registered on one instance, while the event was emitted on the other.

47
Q

How do you maintain a single EventEmitter instance across the application?

A

A Node application generally consists of hundreds of files, which makes it challenging to maintain a single copy of the EventEmitter instance throughout the application.

There is a simple strategy to create and maintain a singleton copy of an EventEmitter instance.

When creating the EventEmitter instance, we can simply store it as an application-level setting using app.set(key, value) in Express.

import { EventEmitter } from "events";
import express from 'express';

const eventEmitter = new EventEmitter();

const app = express();
app.set('eventEmitter', eventEmitter);

// access it from any module of the application
console.log(app.get('eventEmitter'));
48
Q

Is an Event Emitter Synchronous or Asynchronous?

A

Consider the following code snippet:
~~~
import { EventEmitter } from 'events';
const eventEmitter = new EventEmitter();

eventEmitter.on('myEvent', (data) => {
  console.log(data);
});

console.log('Statement A');
eventEmitter.emit('myEvent', 'Statement B');
console.log("Statement C");
~~~

When we execute this code snippet, we get the following output in the console:

> Statement A
> Statement B
> Statement C

The events raised by event emitters are synchronously executed by the listeners in the current event loop’s iteration.

49
Q

what is the Order of Execution of the Listeners while listening to events in node js?

A

The listeners are executed in the order the listeners are created for an event emitter.

Consider the following code snippet to understand this statement:

import { EventEmitter } from 'events';
const eventEmitter = new EventEmitter();

eventEmitter.on('myEvent', (data) => {
    console.log(data, '- FIRST');
});

console.log('Statement A');

eventEmitter.on("myEvent", data => {
    console.log(data, '- SECOND');
});

eventEmitter.emit('myEvent', 'Emitted Statement');

console.log("Statement B");

When executed, the above code gives the output:

> Statement A
> Emitted Statement - FIRST
> Emitted Statement - SECOND
> Statement B

The listener that’s registered earlier is executed earlier.

50
Q

What will be the output?
~~~
import { EventEmitter } from "events";
const eventEmitter = new EventEmitter();

eventEmitter.on("myEvent", data => {
  console.log(data, "- ON");
});

eventEmitter.once("myEvent", data => {
  console.log(data, "- ONCE");
});

eventEmitter.emit("myEvent", "Emitted Statement");
eventEmitter.emit("myEvent", "Emitted Statement");
eventEmitter.emit("myEvent", "Emitted Statement");
~~~

A

With once(), the listener will be discarded after listening for an event. Events listened with once() will be triggered only once.

Running the above code will give the output:

Emitted Statement - ON
Emitted Statement - ONCE
Emitted Statement - ON
Emitted Statement - ON
Notice here the once listener was only called once. After getting called for the first time, it’ll be discarded for further use.

51
Q

What is the meaning of the “at” (@) prefix on npm packages?

A

Basically there are two types of modules on npm. They are:

Globally namespaced (regular) modules - these are modules that follow the naming convention that exists today. You require(‘foo’) and there is much rejoicing. They are owned by one or more people and installed with the npm install XYZ command.

Scoped modules - these are newer modules that are “scoped” under an organization name: the name begins with an @, followed by the organisation’s name, a slash, and finally the package name, e.g. @someOrgScope/packagename. Scopes are a way of grouping related packages together, and they also affect a few things about the way npm treats the package.

A scoped package is installed by referencing it by name, preceded by an @-symbol, in npm install:

npm install @myorg/mypackage

52
Q

What is the highest-level scope of the JavaScript scope hierarchy in the browser?

A

In the browser, we have access to two global objects: window and document.

window is the highest-level scope of the JavaScript scope hierarchy in the browser. If you declare a global variable in JavaScript, it is attached as a property of window. window also contains all the globally accessible browser APIs that the browser initializes by default, e.g. localStorage and console.

Example:

~~~
var a = 'global string'; // this string is attached to window
window.a; // returns 'global string'
~~~

53
Q

what is the document object in the browser?

A

document is an object that points to the highest parent node of your currently visible document object model (DOM). Since it is a globally accessible object created by the browser, it is also a property on window.

Example:

document; // returns an object that contains all nodes in currently visible DOM
window.document; // returns the same object as the line above!

Requesting childNodes from document returns an array with one element: the outermost HTML tag of the currently visible DOM.

54
Q

What is the equivalent of the window object in Node.js?

A

The equivalent objects in a node program are named global and process.

Since there isn’t a browser window in a node program (code can execute anywhere, not just in the browser) the highest-level scope in your node program is called global. It can be interacted with in the same way as the window object, and like window it also contains globally accessible objects and methods for your node program (e.g. setTimeout and console). You may also reference these global properties without explicitly calling them from global.

Example:

var a = 'global string';
global.a // returns 'global string'
global.console.log('hi') //logs 'hi'
console.log('hi') //also logs 'hi'

Similarly, since code is not executing in the browser, there is no DOM available to a node process. There are, however, APIs that node exposes to interact with the running node process. node exposes these APIs to the user in a second global object called process.

The objects and APIs available from process give you access to details about the environment in which the node process is running.

process.env  //returns an object with environment variables set for the execution environment
process.memoryUsage() //returns data about the memory being consumed to run the process

Since Node offers a different execution environment for your JavaScript, it makes sense that the global objects are named differently than the browser equivalents. process.env is commonly used to grab environment variables from the execution environment, but process exposes much more than that (for example exit() or cpuUsage()).

55
Q

What is a Worker Thread in Node.js?

A

Worker threads are a useful feature in Node.js that allows us to run JavaScript code in parallel with the main thread. Before worker threads, Node.js executed JavaScript on a single thread only. Worker threads provide the capability to perform parallel processing by creating separate threads.

Worker threads are useful for performing CPU-intensive JavaScript operations, but they do not help much with I/O-intensive work. They are also able to share memory by transferring ArrayBuffer instances or sharing SharedArrayBuffer instances.

// index.js

const { Worker } = require('worker_threads') 

function runService(workerData) { 
	return new Promise((resolve, reject) => { 
		const worker = new Worker( 
				'./worker.js', { workerData }); 
		worker.on('message', resolve); 
		worker.on('error', reject); 
		worker.on('exit', (code) => { 
			if (code !== 0) 
				reject(new Error( 
`Worker Thread stopped with the exit code: ${code}`)); 
		}) 
	}) 
} 

async function run() { 
	const result = await runService('GeeksForGeeks') 
	console.log(result); 
} 

run().catch(err => console.error(err)) 
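The snippet above loads a './worker.js' file that is not shown; a minimal sketch of what it might contain (the echo logic is purely illustrative):

// worker.js
const { parentPort, workerData } = require('worker_threads');

// Do some (pretend) CPU-bound work with the data passed in from the main thread,
// then send the result back over the message channel.
const result = `Hello from the worker, ${workerData}!`;
parentPort.postMessage(result);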
56
Q

What are clusters in Node.js?

A

Node.js clusters are used to run multiple worker processes of a single-threaded Node application. Using the cluster module, you can distribute workloads among application processes. The cluster module allows us to create child processes that all share server ports. To handle heavy load, we launch a cluster of Node.js processes and thereby make use of multiple cores.

Each child process has its own event loop, memory, and V8 instance, and shares the same server port.

const cluster = require('cluster');
const os = require('os');
const express = require("express");
const app = express();

if (cluster.isMaster) { // cluster.isPrimary in newer Node versions
	// Master process code
	console.log(`Master ${process.pid} is running`);

	// Fork workers equal to the number of CPU cores
	for (let i = 0; i < os.cpus().length; i++) {
		cluster.fork();
	}

	// Listen for worker exit and fork a new one
	cluster.on('exit', (worker, code, signal) => {
		console.log(`Worker ${worker.process.pid} died`);
		cluster.fork();
	});
} else {
	// Worker process code
	console.log(`Worker ${process.pid} started`);

	const port = 8000;
	app.listen(port, () => {
		console.log(`server running at port ${port}`);
	});
}
57
Q

difference between worker thread and cluster in terms of granularity of working in nodejs?

A

Worker threads operate at thread level, providing a way to run JavaScript code in parallel within a single process.

Clusters operate at process level, allowing you to create multiple Node.js processes (workers) to handle incoming network requests.

58
Q

difference between worker thread and cluster in terms of communication in nodejs?

A

Communication between worker threads is typically achieved through message passing using the postMessage API.

Communication between the master process and worker processes is achieved using IPC mechanisms.
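Example (a minimal sketch of both styles as two separate snippets; the ./echo-worker.js filename is only an illustration):

// Worker threads: message passing with postMessage / parentPort
// main.js
const { Worker } = require('worker_threads');
const worker = new Worker('./echo-worker.js');
worker.on('message', (msg) => console.log('from worker thread:', msg));
worker.postMessage('ping');

// echo-worker.js
// const { parentPort } = require('worker_threads');
// parentPort.on('message', (msg) => parentPort.postMessage(msg + ' pong'));

// cluster-ipc.js — a separate, self-contained example
// Cluster: IPC between the master process and a forked worker process
const cluster = require('cluster');
if (cluster.isMaster) {
  const child = cluster.fork();
  child.on('message', (msg) => console.log('from worker process:', msg));
  child.send('hello');
} else {
  process.on('message', (msg) => process.send('got: ' + msg));
}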

59
Q

difference between worker thread and cluster in terms of isolation in nodejs?

A

Worker threads have their own isolated JavaScript context, which means they don’t share variables or memory directly.

Each worker process in a cluster is a separate Node.js process, which means they have their own memory space.

60
Q

difference between worker thread and cluster in terms of I/O operations in nodejs?

A

Worker threads are not built for I/O-intensive operations, as Node.js’s built-in asynchronous I/O mechanisms are often more efficient.

Clusters are built to handle I/O-intensive operations efficiently. Each worker in a cluster can handle incoming requests independently.

61
Q

difference between worker thread and cluster in terms of memory sharing in nodejs?

A

Worker threads can share memory using ArrayBuffer or SharedArrayBuffer instances, which allows more direct communication and shared data.

Clusters operate as separate processes, so memory is isolated between them. Communication between cluster workers is achieved through message passing (IPC).
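Example (a minimal sketch of worker-thread memory sharing with SharedArrayBuffer; the ./adder-worker.js filename is illustrative):

// main.js
const { Worker } = require('worker_threads');

const shared = new SharedArrayBuffer(4);      // 4 bytes backing one Int32
const view = new Int32Array(shared);

const worker = new Worker('./adder-worker.js', { workerData: shared });
worker.on('exit', () => {
  // The worker wrote into the same memory; no copy or message was needed
  console.log(Atomics.load(view, 0));         // 42
});

// adder-worker.js
// const { workerData } = require('worker_threads');
// const view = new Int32Array(workerData);
// Atomics.store(view, 0, 42);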

64
Q

difference between worker thread and cluster in terms of use case in nodejs?

A

Worker threads are good for CPU-intensive tasks where parallel processing can significantly improve performance.

Clusters are good for improving the scalability of networked applications by distributing incoming requests among multiple processes.

65
Q

Is there any difference between res.send vs return res.send?

A

The return keyword returns from your function, thus ending its execution. This means that any lines of code after it will not be executed.

In some circumstances, you may want to use res.send and then do other stuff.

app.get('/', function(req, res) {
  res.send('i am a beautiful butterfly');
  console.log("this gets executed");
});

app.get('/', function(req, res) {
  return res.send('i am a beautiful butterfly');
  console.log("this does NOT get executed");
});
66
Q

What are express.json() and express.urlencoded()?

A

Here is an explanation that should clear up doubts about express.json() and express.urlencoded() and the use of body-parser. It took me some time to figure this out.

  1. What is Middleware? It is those methods/functions/operations that are called BETWEEN processing the Request and sending the Response in your application method.
  2. When talking about express.json() and express.urlencoded() think specifically about POST requests (i.e. the .post request object) and PUT Requests (i.e. the .put request object)
  3. You DO NOT NEED express.json() and express.urlencoded() for GET Requests or DELETE Requests.
  4. You NEED express.json() and express.urlencoded() for POST and PUT requests, because in both these requests you are sending data (in the form of some data object) to the server and you are asking the server to accept or store that data (object), which is enclosed in the body (i.e. req.body) of that (POST or PUT) Request
  5. Express provides you with middleware to deal with the (incoming) data (object) in the body of the request.
    a. express.json() is a method inbuilt in express to recognize the incoming Request Object as a JSON Object. This method is called as a middleware in your application using the code: app.use(express.json());
    b. express.urlencoded() is a method inbuilt in express to recognize the incoming Request Object as strings or arrays. This method is called as a middleware in your application using the code: app.use(express.urlencoded());

ALTERNATIVELY, I recommend using body-parser (it is an NPM package) to do the same thing. It is developed by the same peeps who built express and is designed to work with express. body-parser used to be part of express. Think of body-parser specifically for POST Requests (i.e. the .post request object) and/or PUT Requests (i.e. the .put request object).

In body-parser you can do

// calling body-parser to handle the Request Object from POST requests
var bodyParser = require('body-parser');
// parse application/json, basically parse incoming Request Object as a JSON Object 
app.use(bodyParser.json());
// parse application/x-www-form-urlencoded; extended: false uses the querystring library, so values can only be strings or arrays
app.use(bodyParser.urlencoded({ extended: false }));
// extended: true uses the qs library instead, which also supports nested objects and richer types
app.use(bodyParser.urlencoded({ extended: true }));
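With Express 4.16 and later you usually don’t need body-parser at all, because express.json() and express.urlencoded() wrap the same parsers. Example (a minimal sketch):

const express = require('express');
const app = express();

// Parse JSON request bodies (Content-Type: application/json)
app.use(express.json());
// Parse URL-encoded form bodies (Content-Type: application/x-www-form-urlencoded)
app.use(express.urlencoded({ extended: true }));

app.post('/user', (req, res) => {
  // req.body is populated by the middleware above
  res.json({ received: req.body });
});

app.listen(3000);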
67
Q

what is graceful shutdown?

A

Let’s imagine you have an HTTP server with NodeJS connected to a database, and every time the server gets called, it sends a request to the database to get/set data which will also be sent to the client by the response.
Imagine you need to shut down the server. The easiest way to kill the server is Ctrl+C. But what if your server didn't finish all the requests? What if some client connections close because we destroyed the server? We will not be able to handle the requests anymore.

-That gives you a point to think, right?

As you might guess, you need to handle all in-flight requests, close all resources that are processing data (e.g. database connections), and not take any new requests. After that, you can shut down your server with a quiet conscience.

Graceful shutdown - When all of your requests to the server have received a response and there is no remaining data processing work to be done.

Creating a graceful shutdown and shutting down the server in the correct way is essential. You can’t know what can happen to the requests made to the server. If you shut it down immediately, you might kill a process that is doing something important at that moment.

68
Q

How to gracefully shutdown a node js server?

A

Here are the four steps to quickly do a graceful shutdown.

  1. Handle the process kill signal
  2. Stop accepting new requests from clients
  3. Close all data processing resources (e.g. database connections)
  4. Exit from the process
const express = require('express');
const mongoose = require('mongoose');

const app = express();
app.use(express.urlencoded({extended: true})); 
app.use(express.json());

mongoose.connect('mongodb://localhost/test', (err) => {
  if (err) throw err;
  console.log('Mongoose connected!');
});
const User = mongoose.model('User', { name: String });

app.post('/user', async (req, res) => {
  try {
    const user = new User({ name: req.body.username });
    await user.save();
    res.status(201).send('Success!');
  } catch (err) {
    res.status(500).send(err.message);
  }
});

const server = app.listen(3000, () => console.log('Example app listening on port 3000!'));

process.on('SIGTERM', () => {
  console.info('SIGTERM signal received.');
  console.log('Closing http server.');
  server.close(() => {
    console.log('Http server closed.');
    // boolean means [force], see in mongoose doc
    mongoose.connection.close(false, () => {
      console.log('MongoDb connection closed.');
      process.exit(0);
    });
  });
});
69
Q

How v8 engine works?

A

From a high-level view, the V8 JavaScript engine execution consists of 5 steps.

  1. Initialize the environment in the host
  2. Compile the JavaScript code
  3. Generate bytecode
  4. Interpret and execute the bytecode
  5. Optimize some of the bytecode for better performance
70
Q

Does nodejs natively take advantage of multi-core processors?

A

Node.js does run YOUR JavaScript in a single thread. But node itself uses threads (the libuv thread pool) for things such as cryptography and disk I/O that your JavaScript may call, and because these functions are non-blocking, asynchronous interfaces, other parts of your JavaScript can run during those operations. Node.js also uses some threads for its own internal implementation. If you fire up a node server and examine what resources it’s using, you will see it has multiple threads (I forget the exact count last time I looked, but it was in the range of 6-9 threads on Windows 11). To run your own JavaScript across multiple cores, you reach for the cluster or worker_threads modules covered above.

71
Q

How to use Class in Node ?

A

There are two ways to define classes in javascript
1. using prototypes
2. using ES6 syntax

72
Q

how to define a class using prototype in javascript?

A
// Define a class using a constructor function
function UniversityStudent() {
	this.studentID = "UNI_ID_001";
}

// Add a method to set the student's name
UniversityStudent.prototype.setStudentName =
	function (studentName) {
		this.name = studentName;
	};

// Add a method to greet the student
UniversityStudent.prototype.greetStudent =
	function () {
		console.log(
			"Hello, " + this.name +
			"! Your university ID is " + this.studentID
		);
	};

// Create an object using the UniversityStudent class
var newUniversityStudent = new UniversityStudent();

// Call the method to set the student's name
newUniversityStudent.setStudentName("Ashish");

// Call the method to greet the student
newUniversityStudent.greetStudent();
73
Q

how to define a class using ES6 syntax in javascript?

A
// UniversityStudent class declaration
class UniversityStudent {
	constructor() {
		this.studentID = "UNI_ID_001";
	}

	set studentName(studentName) {
		this._studentName = studentName;
	}

	get studentName() {
		return this._studentName;
	}

	greetStudent() {
		console.log(
			"Hello, " + this.studentName +
			"! Your university ID is " + this.studentID);
	}
}

var newUniversityStudent = new UniversityStudent();
newUniversityStudent.studentName = "Ashish";

newUniversityStudent.greetStudent();
74
Q

What are LTS releases of Node.js and why should you care?

A

LTS releases, also known as “stable” releases, are versions of Node.js that are designated for long-term support. This means that they are more stable and reliable than standard releases, which are meant for testing and experimentation. LTS releases are intended for use in production environments, where stability and reliability are critical.

75
Q

what does __filename do in nodejs?

A

__filename in Node.js returns the filename of the code being executed. It gives the absolute path of the current code file. The following example shows how to use __filename in a Node.js project.

// Node.js code to demonstrate the absolute
// file name of the current module.
console.log("Filename of the current file is: ", __filename);
76
Q

Difference between dependencies, devDependencies and peerDependencies in terms of definition?

A

A dependency is a library that a project needs to function. DevDependencies are the packages a developer needs only during development.
A peer dependency specifies that our package is compatible with a particular version of an npm package.
In the package.json file there is an object called peerDependencies; it lists the packages (and version ranges) that the consuming project is expected to provide itself, which is why they are called peer dependencies. The classic example is ‘react’: a component library declares react as a peer dependency so that it uses the same single copy of react as the host application.
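Example (a hypothetical package.json fragment showing where each kind lives; package names and versions are only illustrations):

{
  "name": "my-component-library",
  "dependencies":     { "lodash": "^4.17.21" },
  "devDependencies":  { "jest": "^29.0.0" },
  "peerDependencies": { "react": "^18.0.0" }
}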

77
Q

Difference between dependencies, devDependencies and peerDependencies in terms of installation?

A

dependency: If a package doesn’t already exist in the node_modules directory, it is automatically added when you run npm install.

devDependency: When you run npm install in your own project, npm also installs the devDependencies; they are skipped when your package is installed as a dependency of another project (or with npm install --production).

peerDependencies are not installed automatically by older npm versions. You need to manually modify your package.json file in order to add a peer dependency (newer versions of npm, v7+, install missing peer dependencies automatically).

78
Q

Difference between dependencies, devDependencies and peerDependencies in terms of usage?

A

dependencies: These are the libraries you need when you run your code.

devDependencies: These dependencies may be needed at some point during the development process, but not during execution.

Peer dependencies are only encountered when you publish your own package, that is, when you develop code that will be used by other programs.

79
Q

how to add dependency, devDependency and peerdependencies in your node project?

A

Dependencies can be added to your project by running:

npm i <package_name>

Dev dependencies can be added to your project by running:

npm i <package_name> --save-dev

peer dependency:
Change the package.json file manually.

80
Q

How to handle errors for async code in Node.js ?

A

If we want to handle the error for asynchronous code in Node.js then we can do it in the following two manners.

  1. Handle error using callback
  2. Handle Promise rejection

Handle error using callback: A callback function is used to perform some operation after the function execution is completed. We can call our callback function after an asynchronous operation has completed. If there is an error we call the callback with that error; otherwise we call it with the error set to null and the result of the asynchronous operation as the remaining arguments.
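Example (a minimal sketch of both approaches; the file name is illustrative):

const fs = require('fs');

// 1. Error-first callback: the first argument is the error (or null on success)
fs.readFile('./config.json', 'utf8', (err, data) => {
  if (err) {
    console.error('read failed:', err.message);
    return;
  }
  console.log(data);
});

// 2. Promise rejection: handle with .catch() or try/catch in an async function
async function readConfig() {
  try {
    const data = await fs.promises.readFile('./config.json', 'utf8');
    console.log(data);
  } catch (err) {
    console.error('read failed:', err.message);
  }
}
readConfig();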

81
Q

can nodejs work without v8

A

The current Node.js runtime cannot work without V8. It would have no JavaScript engine and hence no ability to run any JavaScript code.

82
Q

What are the various timing features of Node.js ?

A

The timers module in Node.js consists of functions that help control the timing of code execution. It includes the setTimeout(), setImmediate(), and setInterval() methods.

83
Q

what does setTimeout() do?

A

setTimeout() Method: The setTimeout() method is used to schedule code execution after a designated amount of milliseconds. The specified function will be executed once. We can use the clearTimeout() method to prevent the function from running. The setTimeout() method returns the ID that can be used in clearTimeout() method.
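Example (a minimal sketch):

// Run once after ~1000 ms; the returned ID can cancel it before it fires
const timeoutId = setTimeout(() => console.log('one second passed'), 1000);
// clearTimeout(timeoutId); // uncomment to cancel the scheduled call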

84
Q

how does setImmediate() work?

A

setImmediate() Method: The setImmediate() method is used to execute code at the end of the current event loop cycle. Any function passed as the setImmediate() argument is a callback that can be executed in the next iteration of the event loop.
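Example (a minimal sketch):

// The callback is queued to run once the current event loop phase completes
setImmediate(() => console.log('runs at the end of this event loop iteration'));
console.log('runs first');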

85
Q

how does setInterval() work?

A

setInterval() Method: The setInterval() method is used to call a function at specified intervals (in milliseconds). It is used to execute the function continuously after a specified period.
We can use the clearInterval() method to prevent the function from running. The setInterval() method returns the ID which can be used in clearInterval() method
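Example (a minimal sketch):

// Call the function every 500 ms; stop it after five runs using clearInterval()
let count = 0;
const intervalId = setInterval(() => {
  count += 1;
  console.log(`tick ${count}`);
  if (count === 5) clearInterval(intervalId);
}, 500);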

86
Q

What is NODE_ENV?

A

NODE_ENV is an environment variable that stands for Node environment in the Express server. The NODE_ENV environment variable specifies the environment in which an application is running (usually, development or production). Depending on this an application may perform specific tasks like turning debugging on or off, listening on a specific port, etc.

NODE_ENV as performance booster: One of the simplest things we can do to improve performance is to set NODE_ENV to “production”.

Setting NODE_ENV to “production” makes Express:

  1. Cache view templates.
  2. Cache CSS files generated from CSS extensions.
  3. Generate less verbose error messages.

This improves the performance of the application, which is comparatively slower in development.
87
Q

how to access NODE_ENV?

A
const environment = process.env.NODE_ENV;
if (environment === 'production') {
    /* Do something specific
    to production environment. */
}
88
Q

difference between cluster and load balancer in terms of definition in nodejs?

A

Load Balancers distribute the processing load among the group of servers.
A cluster is a group of servers that run as if it were a single entity.

89
Q

difference between cluster and load balancer in terms of deployment in nodejs?

A

Load balancing can be simpler to deploy, as it works with different types of servers.
A cluster usually requires identical servers within the cluster.

90
Q

difference between cluster and load balancer in terms of resiliency in nodejs?

A

Load balancing alone is relatively less resilient for applications.
e.g. While processing a transaction, if one server fails, the customer has to re-enter data from the start because the user state is lost.

Server clusters are more resilient for applications.
e.g. If any server fails during the transaction, another server within the cluster takes over and the customer can complete the transaction.

91
Q

does javascript have a map() function to iterate over object properties?

A

NO, it doesn’t.
To iterate over the object properties, we simply need to use Object.keys() to get all the properties of the object and then use Array.prototype.forEach() to run the provided function for each key-value pair. The callback should receive three arguments - the value, the key and the object itself.

const forOwn = (obj, fn) =>
  Object.keys(obj).forEach(key => fn(obj[key], key, obj));

forOwn({ foo: 'bar', a: 1 }, v => console.log(v));
// Logs: 'bar', 1
92
Q
A