JavaScript / TypeScript Interview Notes

Welcome to the JS/TS specific interview prep notes. More sections will be added here!

Q: Can you explain the difference between the microtask queue and the macrotask queue?

Answer:

Both the microtask queue and the macrotask queue (often just called the "task queue") are deeply integrated into the JavaScript Event Loop. They dictate the strict priority order in which asynchronous callbacks are executed.

1. The Microtask Queue

The microtask queue has strict priority over the macrotask queue. Its job is to execute small, immediate tasks before the Event Loop is allowed to render the UI or pick up the next macrotask.

What goes into the microtask queue?

  • Promises (.then(), .catch(), .finally())
  • process.nextTick() (in Node.js — technically its own queue, drained even before Promise microtasks)
  • queueMicrotask()
  • MutationObserver callbacks

2. The Macrotask Queue

The macrotask queue holds larger, more generic operations that the browser or host environment schedules.

What goes into the macrotask queue?

  • setTimeout()
  • setInterval()
  • setImmediate() (in Node.js)
  • I/O callbacks (like the completion of a network request)
  • UI event callbacks (like clicks or key presses)

How they interact (The Execution Order)

  1. The Event Loop executes the current synchronous code (the call stack) until it is completely empty.
  2. Once empty, it checks the microtask queue. It executes every single task inside the microtask queue until it's completely empty. (If a microtask schedules another microtask, it will process that one too!).
  3. Only when the microtask queue is completely empty does the Event Loop take exactly one task from the macrotask queue and execute it.
  4. After that one macrotask finishes, it loops back and checks the microtask queue again.

Classic Interview Example

console.log('1. Script start'); // Synchronous

setTimeout(() => { // Macrotask
  console.log('2. setTimeout'); 
}, 0);

Promise.resolve().then(() => { // Microtask
  console.log('3. Promise 1'); 
}).then(() => { // Microtask chained
  console.log('4. Promise 2'); 
});

console.log('5. Script end'); // Synchronous

Output Order:

  1. 1. Script start (Sync)
  2. 5. Script end (Sync)
  3. 3. Promise 1 (Microtask Queue)
  4. 4. Promise 2 (Microtask Queue emptied)
  5. 2. setTimeout (Macrotask Queue picked up last)

Q: What is Hoisting in JavaScript?

Answer:

Hoisting is JavaScript's default behavior of moving declarations to the top of the current scope (script or function) prior to execution.

It's crucial to understand that only declarations are hoisted, not initializations (assignments). The JavaScript engine executes code in a two-pass process: the Creation Phase (where hoisting happens) and the Execution Phase.

1. Function Hoisting

Function Declarations are completely hoisted—both the function name and the function body. This means you can invoke a function before it appears in your code.

// ✅ Works perfectly!
sayHello();

function sayHello() { // Function Declaration
    console.log("Hello there!");
}

However, Function Expressions (including arrow functions) are treated like variables. If they use var, let, or const, they follow those respective hoisting rules.

// ❌ TypeError: sayGoodbye is not a function (it is `undefined` right now)
sayGoodbye();

var sayGoodbye = function() { // Function Expression
    console.log("Goodbye!");
};

2. Variable Hoisting (var)

Variables declared with var are also hoisted to the top of their function/global scope. However, they are initialized with the default value of undefined.

console.log(count); // Output: undefined
var count = 5;      // Declaration is hoisted, but assignment (= 5) stays here

How the JS Engine sees it:

var count; // Hoisted and set to undefined
console.log(count);
count = 5;

3. The Temporal Dead Zone (let and const)

Variables declared with let and const are technically hoisted, but with a major catch: they are NOT initialized.

Because they aren't initialized with undefined, trying to access them before the line where they are declared results in a ReferenceError. The span between the top of the scope and the line where they are defined is known as the Temporal Dead Zone (TDZ).

// entering TDZ for `username`
console.log("Doing some work..."); 

// console.log(username); // ❌ ReferenceError: Cannot access 'username' before initialization

let username = "Abhay"; // TDZ ends here
console.log(username); // ✅ Output: Abhay

Summary

  • Function Declarations: Fully hoisted (safe to call early).
  • var: Hoisted, but initialized to undefined.
  • let / const: Hoisted, but uninitialized. Placed in the Temporal Dead Zone (causes an error if accessed early).
  • Function Expressions / Arrow Functions: Handled based on the variable keyword (var/let/const) they are attached to.

Q: What are the differences between var, let, and const?

Answer:

The primary differences between var, let, and const revolve around scope, hoisting behavior, and reassignment. This is a fundamental concept in modern JavaScript.

1. Scope

  • var is Function Scoped: If declared inside a function, it is scoped to that function. If declared inside a block (like an if statement or a for loop), it "leaks" out to the surrounding function scope.
  • let and const are Block Scoped: They are constrained strictly to the block they are defined in (any code wrapped in {}).
function scopeTest() {
    if (true) {
        var functionScoped = "I leak out!";
        let blockScoped = "I stay inside.";
    }
    console.log(functionScoped); // ✅ "I leak out!"
    // console.log(blockScoped); // ❌ ReferenceError
}

2. Hoisting

All three are hoisted to the top of their respective scopes, but they initialize differently:

  • var: Initialized with undefined. You can access a var variable before you write the declaration, but its value will be undefined.
  • let and const: They are hoisted, but they are NOT initialized. Accessing them before their declaration line results in a ReferenceError. The space before their declaration is known as the Temporal Dead Zone (TDZ).
console.log(a); // Output: undefined
var a = 10;

// console.log(b); // ❌ ReferenceError: Cannot access 'b' before initialization
let b = 20;

3. Reassignment & Redeclaration

  • var: Can be freely redeclared in the same scope without throwing an error (which is highly prone to bugs), and can be reassigned.
  • let: Cannot be redeclared in the same scope, but its value can be reassigned.
  • const: Cannot be redeclared and its binding cannot be reassigned. It must be initialized at the time of declaration.

[!CAUTION] While const prevents the reassignment of the variable itself, it does not make the assigned object or array immutable. You can still mutate the internal properties of a const object.

var x = 1;
var x = 2; // ✅ Perfectly fine

let y = 1;
// let y = 2; // ❌ SyntaxError: Identifier 'y' has already been declared
y = 2; // ✅ Reassignment is fine

const person = { name: "Abhay" };
// person = { name: "John" }; // ❌ TypeError: Assignment to constant variable.
person.name = "John"; // ✅ Allowed! (Mutation, not reassignment)

Summary Rule of Thumb

  1. Always use const by default. It clarifies your intent that the variable should not be reassigned.
  2. If you know you will need to re-assign the variable (like a counter in a loop), use let.
  3. Stop using var entirely in modern codebases unless you are maintaining legacy code.

Q: Can you explain what a closure is in JavaScript? And as a follow-up, what would this code log and why? How would you fix it to log 0, 1, 2?

for (var i = 0; i < 3; i++) {
  setTimeout(() => console.log(i), 1000);
}

Answer:

A Closure is the combination of a function bundled together (enclosed) with references to its surrounding state (the lexical environment).

In simpler terms, a closure gives a function access to its outer scope, even after the outer function has returned. In JavaScript, closures are created every time a function is created, at function creation time.

How it Works

When a function executes, it uses a scope chain to look up variables. If a function returns another function inside of it, that inner function maintains a "backpack" (a hidden [[Environment]] reference) containing all the variables it needs from its parent's scope.

function makeGreeting(greeting) {
    // `greeting` is in the outer lexical scope
    return function(name) {
        // This inner function forms a closure 
        console.log(`${greeting}, ${name}!`);
    }
}

const sayHello = makeGreeting("Hello");
const sayHowdy = makeGreeting("Howdy");

// `makeGreeting` has already finished executing, but `sayHello` 
// still remembers the `greeting` variable ("Hello")!
sayHello("Abhay"); // Output: Hello, Abhay!
sayHowdy("Abhay"); // Output: Howdy, Abhay!

Why are Closures Useful?

  1. Data Privacy / Encapsulation: JavaScript did not historically have access modifiers like private. Closures allow you to create private variables that cannot be accessed from the outside.

    function createCounter() {
        let count = 0; // Private variable
        return {
            increment: () => ++count,
            getCount: () => count
        };
    }
    
    const counter = createCounter();
    counter.increment();
    console.log(counter.getCount()); // 1
    console.log(counter.count); // undefined (cannot access directly!)
    
  2. Function Factories: Creating partially applied functions (like makeGreeting above).

  3. Memoization / Caching: Keeping a private cache object to store expensive calculation results.

  4. Callbacks & Event Handlers: When attaching an event listener, you often use variables from the outer scope inside the callback. The event listener forms a closure to remember those variables when the event actually fires.
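Point 3 above (memoization) can be sketched as a closure over a private cache. A minimal memoize helper (hypothetical name, not a built-in):

```javascript
function memoize(fn) {
    const cache = new Map(); // private: only reachable through the closure
    return function (arg) {
        if (!cache.has(arg)) {
            cache.set(arg, fn(arg)); // expensive call happens only once per input
        }
        return cache.get(arg);
    };
}

let calls = 0;
const square = memoize((n) => { calls++; return n * n; });

console.log(square(4)); // 16 (computed)
console.log(square(4)); // 16 (served from the private cache)
console.log(calls);     // 1
```

Nothing outside memoize can read or clear cache; it lives on purely because the returned function closes over it.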

Classic Interview "Gotcha"

A common interview question involves closures inside a for loop using var vs let:

// Using var (Function Scoped)
for (var i = 0; i < 3; i++) {
    setTimeout(() => console.log(i), 1000); 
}
// Output: 3, 3, 3 
// Because `var` is function-scoped, there is only one `i` variable shared 
// by all closures. By the time the timeout runs, the loop has finished and `i` is 3.

// Using let (Block Scoped)
for (let j = 0; j < 3; j++) {
    setTimeout(() => console.log(j), 1000);
}
// Output: 0, 1, 2
// Because `let` is block-scoped, a new `j` is created for every single iteration. 
// Each closure gets its own independent copy.
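Before let existed, the classic fix for the var version was an IIFE (Immediately Invoked Function Expression) that copies the loop variable into a fresh scope on each iteration. A timer-free sketch so the effect is visible synchronously:

```javascript
const fns = [];
for (var i = 0; i < 3; i++) {
    // The IIFE runs immediately, so `captured` is a fresh binding per iteration
    ((captured) => {
        fns.push(() => captured);
    })(i);
}

// The loop has long finished (i === 3), yet each closure kept its own copy
console.log(fns.map((f) => f())); // [ 0, 1, 2 ]
```

Passing `i` as an argument forces a copy into the IIFE's own scope, which is exactly what `let` now does for you automatically.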

Q: Comparison of Object.freeze() and Object.seal()

Answer:

Both Object.freeze() and Object.seal() are methods used to make an object immutable to a certain degree, but they have different levels of strictness.

1. Object.seal()

Sealing an object prevents new properties from being added and existing properties from being removed. However, you can still modify the values of existing properties (as long as they are writable).

  • Add properties? No
  • Delete properties? No
  • Modify existing properties? Yes
  • Reconfigure properties? No (cannot change enumerability/writability)

Example:

const user = { name: "Abhay", role: "admin" };
Object.seal(user);

user.name = "John"; // ✅ Allowed! (Value is modified)
user.age = 25;      // ❌ Not allowed! (Silently fails in non-strict mode, throws error in strict mode)
delete user.role;   // ❌ Not allowed!

2. Object.freeze()

Freezing an object is the strictest level of immutability. It does exactly what seal does, but it also prevents modifying the values of existing properties. The object becomes completely read-only.

  • Add properties? No
  • Delete properties? No
  • Modify existing properties? No
  • Reconfigure properties? No

Example:

const user = { name: "Abhay", role: "admin" };
Object.freeze(user);

user.name = "John"; // ❌ Not allowed!
user.age = 25;      // ❌ Not allowed!
delete user.role;   // ❌ Not allowed!

Shallow vs Deep

[!IMPORTANT] Both methods are shallow. This means if the object contains a nested object, the nested object's properties can still be modified, added, or deleted! To freeze or seal a deeply nested object, you have to recursively call the method on all child objects.

const company = {
    name: "Tech Corp",
    details: { employees: 50 }
};

Object.freeze(company);
// company.name = "New Tech"; // ❌ Blocked
company.details.employees = 100; // ✅ Allowed! Nested objects are unprotected.
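The recursive approach mentioned above can be sketched as follows (a hypothetical deepFreeze helper, not part of the standard library):

```javascript
function deepFreeze(obj) {
    // Freeze the children first, then the parent
    for (const value of Object.values(obj)) {
        if (typeof value === "object" && value !== null && !Object.isFrozen(value)) {
            deepFreeze(value);
        }
    }
    return Object.freeze(obj);
}

const company = {
    name: "Tech Corp",
    details: { employees: 50 }
};

deepFreeze(company);
console.log(Object.isFrozen(company));         // true
console.log(Object.isFrozen(company.details)); // true (nested object is now protected too)
```

The `isFrozen` guard also prevents infinite recursion if the object graph contains a cycle.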

Q: What are Type Guards in TypeScript?

Answer:

A Type Guard is a technique in TypeScript that allows you to narrow down the type of a variable within a conditional block. By performing a runtime check, you give TypeScript the guarantee it needs to let you safely access properties that belong only to a specific type.

TypeScript supports several built-in type guards, and also allows you to define your own.

1. typeof

Used to check the basic JavaScript primitive types (string, number, boolean, symbol, bigint, undefined).

function printId(id: number | string) {
    if (typeof id === "string") {
        // In this block, TypeScript knows `id` is a string
        console.log(id.toUpperCase());
    } else {
        // Here, TypeScript knows it's a number
        console.log(id.toFixed(2));
    }
}

2. instanceof

Used to check if an object was constructed from a specific class.

class Car { drive() {} }
class Plane { fly() {} }

function moveVehicle(vehicle: Car | Plane) {
    if (vehicle instanceof Car) {
        vehicle.drive(); 
    } else {
        vehicle.fly(); 
    }
}

3. The in Operator

Often used to narrow down structural types (like interfaces or generic objects) by checking if a specific property exists on the object.

interface Bird { fly(): void; }
interface Fish { swim(): void; }

function moveAnimal(animal: Bird | Fish) {
    if ("fly" in animal) {
        animal.fly(); // TypeScript narrowed `animal` down to `Bird`
    } else {
        animal.swim(); // Narrowed to `Fish`
    }
}

4. User-Defined Type Guards (Type Predicates)

Sometimes standard checks aren't descriptive enough. You can define a custom validation function that returns a type predicate (parameterName is Type).

interface Admin { role: string; privileges: string[]; }
interface User { role: string; lastLogin: Date; }

// The `person is Admin` tells the TS compiler the type if this returns true
function isAdmin(person: Admin | User): person is Admin {
    // We are doing a manual check
    return (person as Admin).privileges !== undefined;
}

function processDashboard(person: Admin | User) {
    if (isAdmin(person)) {
        // TypeScript knows `person` is an Admin here!
        console.log("Admin Privileges:", person.privileges); 
    } else {
        console.log("User Last Login:", person.lastLogin);
    }
}

Q: What is the difference between interface and type in TypeScript?

Answer:

In modern TypeScript, both interface and type (type aliases) are often used interchangeably to define object shapes, but they have a few crucial differences under the hood.

Here are the key distinctions to remember for your interviews:

1. Primitive and Union Types

type can represent any kind of type. This includes primitives, unions, and tuples. interface can only represent the shape of an object (including functions and arrays).

// ✅ Only possible with `type`
type ID = string | number; // Union
type Coordinates = [number, number]; // Tuple
type Callback = (data: string) => void;

// ❌ Cannot be done with `interface`
// interface ID = string | number; // Error!

2. Extending / Inheritance

Both support extension, but their syntax and under-the-hood behavior differ.

  • interface uses the extends keyword.
  • type uses intersections (&).
// Interface extension
interface Animal { name: string; }
interface Bear extends Animal { honey: boolean; }

// Type intersection
type AnimalType = { name: string; }
type BearType = AnimalType & { honey: boolean; }

[!NOTE] Performance tip: TypeScript's compiler caches interface resolution much more efficiently than type intersections. If you have deep, complex hierarchies, interface extends will compile faster than massive type & type intersections.

3. Declaration Merging

interface supports declaration merging. If you declare the same interface twice, TypeScript will automatically merge them into one. This is extremely useful for extending third-party libraries (like adding custom properties to the Window object).

type does not support declaration merging. Declaring a type twice throws an error.

// ✅ Interfaces Merge
interface User { name: string; }
interface User { age: number; }
// Result: { name: string; age: number }

// ❌ Types Conflict
type Person = { name: string; }
type Person = { age: number; } // Duplicate identifier 'Person'

4. Implementation in Classes

A class can implement both an interface and a type alias (as long as the type alias resolves to an object shape). There is no difference here.

type Flyable = { fly(): void };
interface Swimmable { swim(): void };

class Duck implements Flyable, Swimmable {
    fly() {}
    swim() {}
}

Summary Rule of Thumb

Use interface for public-facing API contracts, object shapes, and when you need declaration merging. Use type when you need a union, intersection, tuple, or are aliasing a primitive type.

Q: What's the difference between Partial<T>, Required<T>, Pick<T, K>, and Omit<T, K>?

Answer:

These are built-in Utility Types in TypeScript that perform transformations on existing types. They allow you to create new, derivative types without needing to copy-paste interface definitions, which keeps your type definitions perfectly in sync (DRY code).

1. Partial<T>

Makes all properties in type T optional (?). Highly useful for update/patch payloads where you might only send a subset of the object.

interface User {
    id: number;
    name: string;
    email: string;
}

// PartialUser allows objects with any combination of the User properties
type PartialUser = Partial<User>;

// Example usage:
function updateUser(id: number, changes: Partial<User>) {
    // `changes` can be { name: "Abhay" }, { email: "a@b.com" }, or empty!
}

2. Required<T>

The exact opposite of Partial. It makes all properties in type T required, stripping away any optional ? modifiers.

interface RegistrationForm {
    username: string;
    bio?: string;     // Optional
    avatarUrl?: string; // Optional
}

// By the time it hits the database, we expect everything to be filled out
type CompleteProfile = Required<RegistrationForm>;

const user: CompleteProfile = {
    username: "abhay",
    // ❌ Error: Property 'bio' is missing
    // ❌ Error: Property 'avatarUrl' is missing
};

3. Pick<T, K>

Constructs a new type by "picking" a specific set of properties K (string literals or union of string literals) from type T. It's great for creating stripped-down versions of gigantic models.

interface Product {
    id: number;
    title: string;
    description: string;
    price: number;
    stock: number;
    manufacturerId: string;
}

// We only need the title and price for a small list view
type ProductPreview = Pick<Product, "title" | "price">;

const renderPreview = (product: ProductPreview) => {
    console.log(`${product.title} costs $${product.price}`);
    // console.log(product.description); // ❌ Error: Property does not exist
};

4. Omit<T, K>

The exact opposite of Pick. It constructs a new type by taking all properties from T and then omitting (removing) the specific keys K you provide. This is especially useful when creating database insertion payloads where autogenerated fields like id or createdAt shouldn't be included.

interface BlogPost {
    id: string; // generated by DB
    title: string;
    content: string;
    authorId: string;
    createdAt: Date; // generated by DB
}

// Creating a new post payload doesn't need an ID or creation date yet!
type CreatePostPayload = Omit<BlogPost, "id" | "createdAt">;

const newPost: CreatePostPayload = {
    title: "TypeScript Utils",
    content: "They are great!",
    authorId: "user_123"
    // We cannot specify `id` or `createdAt` here!
};

Summary Comparison

  • Partial: "Make everything optional."
  • Required: "Make everything mandatory."
  • Pick: "Give me exactly these specific fields."
  • Omit: "Give me everything except these specific fields."

Q: What is the difference between Structural Typing and Nominal Typing?

Answer:

This is one of the most fundamental design concepts in language architecture: how does the compiler decide whether one type is "compatible" with another? TypeScript is a Structurally Typed language, whereas languages like Java, C#, and C++ are Nominally Typed.

1. Structural Typing (TypeScript)

Often referred to as "Duck Typing" (If it walks like a duck and quacks like a duck, it's a duck).

In a structurally typed language, two types are completely compatible if their internal structure (their shape) matches. The compiler doesn't care what the types are literally named or if they explicitly inherit from one another.

interface Ball { diameter: number; }
interface Earth { diameter: number; }

let myBall: Ball = { diameter: 10 };
let myEarth: Earth = { diameter: 12742 };

// 🤯 PERFECTLY VALID IN TYPESCRIPT!
myBall = myEarth; 
myEarth = myBall; 

Why? Because TypeScript only checks the structure. Both objects require a diameter property of type number. Since they both have it, they are structurally interchangeable.

2. Nominal Typing (Java, C#)

"Nominal" comes from the word for "name". In a nominally typed language, two types are only compatible if they share the exact same name or explicit inheritance path.

Even if two classes have the exact same shape, the compiler will refuse to mix them.

// Java Example (Nominal Typing)
class Ball { public int diameter; }
class Earth { public int diameter; }

Ball myBall = new Ball();
Earth myEarth = new Earth();

// ❌ COMPILE ERROR IN JAVA! 
// "Incompatible types: Earth cannot be converted to Ball"
myBall = myEarth; 

Why? Because Java strictly looks at the name of the types. An Earth is not literally a Ball, nor does class Earth implement Ball, so the Java compiler rejects it despite their identical shapes.

Which is better?

Neither is strictly better, they serve different paradigms:

  • Structural Typing (TS) makes mocking, testing, and merging JSON data incredibly fast and flexible. You don't need massive inheritance trees.
  • Nominal Typing (Java) provides tighter safety guarantees. You can't accidentally pass a UserId string into a function expecting a Password string if they are defined as distinct nominal classes, which prevents logical mix-ups.
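If you do want nominal-style safety for cases like the UserId/Password mix-up above, a common community pattern in TypeScript (not a built-in language feature) is "branding" a type with a phantom property:

```typescript
// The phantom `__brand` property makes structurally identical strings incompatible.
// It exists only at the type level; no such property is created at runtime.
type UserId = string & { readonly __brand: "UserId" };
type Password = string & { readonly __brand: "Password" };

// Small constructor helpers are the only sanctioned way to create branded values
const asUserId = (s: string) => s as UserId;

function findUser(id: UserId): string {
    return `user:${id}`;
}

const id = asUserId("u_42");
console.log(findUser(id));     // ✅ OK
// findUser("raw string");     // ❌ Error: 'string' is not assignable to 'UserId'
```

This gives you nominal guarantees where they matter, while the rest of the codebase stays structurally typed.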

Q: What is the difference between any, unknown, and never in TypeScript?

Answer:

These three special types form the extreme boundaries of TypeScript's type system. They dictate how strictly the compiler treats unknown or impossible data.

1. any (The Escape Hatch)

any effectively disables type checking for that specific variable. You can assign anything to it, and you can perform any operation on it without the compiler complaining.

  • When to use it: Almost never. It's useful during migrations from legacy JS to TS, or when interacting with poorly typed third-party libraries.
let myVar: any = 5;
myVar = "Hello";  // Valid
myVar.doSomething(); // Valid (Compiler allows it, but it crashes at runtime!)

2. unknown (The Safe any)

unknown is the type-safe counterpart to any. Just like any, you can assign absolutely any value to unknown. However, you cannot access properties, call methods, or assign an unknown value to a strictly-typed variable until you prove what it is using a type guard.

  • When to use it: When you are receiving data from an API, parsing JSON, or accepting external user input where you truly don't know the shape yet.
let safeVar: unknown = 5;
safeVar = { name: "Abhay" }; // Valid assignment

// ❌ COMPILE ERROR: Object is of type 'unknown'.
// safeVar.name = "John"; 

// ✅ We must prove what it is first (Type Narrowing)
if (typeof safeVar === "object" && safeVar !== null && "name" in safeVar) {
    console.log(safeVar.name); // Now the compiler is happy
}

3. never (The Impossible Type)

never represents a state that can never occur in your code. It is the bottom type in TypeScript.

  • When does it happen naturally?
    1. A function that always throws an error.
    2. A function with an infinite loop.
function throwCrashError(): never {
    throw new Error("System Crash");
}

function infiniteLoop(): never {
    while (true) {}
}
  • When to use it purposefully? (Exhaustive Checking): The most powerful use case for never is ensuring that a switch statement has handled every single possible case.
type Status = "pending" | "approved" | "rejected";

function handleStatus(status: Status) {
    switch (status) {
        case "pending": return "Wait";
        case "approved": return "Good";
        case "rejected": return "Bad";
        default:
            // If we ever add a new status to the type union but forget to 
            // add a case here, the compiler will try to assign it to this `never` 
            // variable and throw a compile error alerting us to the bug!
            const exhaustiveCheck: never = status;
            return exhaustiveCheck;
    }
}

Summary

  • any: "I don't care about types. Turn off the compiler."
  • unknown: "I don't know what this is yet, but force me to check before I touch it."
  • never: "This code path represents an impossible state."

Q: What is Type Coercion in JavaScript?

Answer:

Type Coercion is the process of converting a value from one data type to another (such as a string to a number, or an object to a boolean). In JavaScript, type coercion can be either explicit or implicit.

Understanding this concept is crucial because JavaScript is a loosely-typed language, meaning it will aggressively try to coerce types silently in the background to execute operations, which often leads to bizarre bugs.

1. Explicit Type Coercion (Type Casting)

This happens when a developer intentionally writes code to convert one type to another using built-in global functions.

let val = "123";

// Explicitly converting a String to a Number
const num = Number(val); 
const num2 = parseInt(val, 10); 

// Explicitly converting a Number to a String
const str = String(456); 
const str2 = (456).toString();

// Explicitly converting to a Boolean
const bool = Boolean(1); // true
const bool2 = !!0; // false

2. Implicit Type Coercion

This happens silently by the JavaScript engine when you apply operators to values of different types. The engine automatically decides what type the value should be to make the operation work.

The String Concatenation Trap (+)

If any operand of the + operator is a string, JavaScript coerces the other operands to strings and concatenates them.

console.log(1 + "2");     // "12" (Number 1 is coerced to String "1")
console.log("5" + true);  // "5true"

The Numeric Conversion Trap (-, *, /)

Unlike +, math operators like -, *, and / strictly expect numbers. JavaScript will attempt to coerce strings into numbers to perform the math.

console.log("5" - "2");   // 3 (Strings are coerced to Numbers)
console.log("5" * "2");   // 10
console.log("10" - "a");  // NaN (Not-a-Number, because "a" cannot be cleanly converted)

3. Loose Equality (==) vs Strict Equality (===)

This is the most common interview question related to coercion.

  • === (Strict Equality): Checks if both the value AND the data type are identical. No type coercion is performed.
  • == (Loose Equality): Checks if the values are equal after performing implicit type coercion if the types are different.
console.log(1 === "1"); // false (Number vs String)
console.log(1 == "1");  // true (The string "1" is coerced into a Number before comparing)

console.log(0 == false); // true
console.log("" == false); // true
console.log(null == undefined); // true

[!CAUTION] Always use strict equality (===) in modern JavaScript/TypeScript to avoid subtle bugs caused by unpredictable implicit coercion rules!

4. Truthy and Falsy Values

When variables are used in a boolean context (like an if statement), JavaScript coerces them into booleans.

Every value in JS is considered "Truthy" (coerces to true) except for a small set of "Falsy" values:

  1. false
  2. 0 (and -0)
  3. 0n (BigInt zero)
  4. "" (empty string)
  5. null
  6. undefined
  7. NaN
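A quick way to see boolean coercion in action is Array.prototype.filter(Boolean), which drops every falsy element:

```javascript
const mixed = [0, 1, "", "hello", null, undefined, NaN, false];

// Boolean(x) performs the exact truthy/falsy coercion described above
const truthyOnly = mixed.filter(Boolean);

console.log(truthyOnly); // [ 1, 'hello' ]
```

Passing the Boolean constructor directly as the callback is a common idiom for cleaning sparse or nullable arrays.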

Q: What are Decorators in TypeScript/JavaScript?

Answer:

A Decorator is a special kind of declaration that can be attached to a class declaration, method, accessor, property, or parameter. Decorators use the form @expression, where expression must evaluate to a function that will be called at runtime with information about the decorated declaration.

Essentially, decorators are a way to use higher-order functions to wrap or modify the behavior of classes and their members in a declarative way. They are heavily used in frameworks like Angular and NestJS.

1. Requirements

To use the classic (legacy) decorators shown below, you need to enable the experimentalDecorators compiler option in your tsconfig.json. (TypeScript 5.0+ also supports the newer ECMAScript standard decorators without this flag, though their API differs slightly.)

{
  "compilerOptions": {
    "target": "ES6",
    "experimentalDecorators": true
  }
}

2. Class Decorators

A class decorator is applied to the constructor of the class and can be used to observe, modify, or replace a class definition.

function Logger(constructor: Function) {
    console.log(`Logging creation of class: ${constructor.name}`);
}

@Logger
class Person {
    constructor(public name: string) {
        console.log("Person initialized");
    }
}
// Outputs: 
// "Logging creation of class: Person" (At definition time)

3. Method Decorators

Method decorators are applied to methods, allowing you to observe, modify, or replace a method definition. They are extremely useful for tasks like logging, error handling, or binding this context.

It takes 3 arguments:

  1. Target: The prototype of the class.
  2. PropertyKey: The name of the method.
  3. Descriptor: The PropertyDescriptor of the method.
function ReadOnly(target: any, propertyKey: string, descriptor: PropertyDescriptor) {
    descriptor.writable = false; // Prevents the method from being overridden
}

class MathOperations {
    @ReadOnly
    multiply(a: number, b: number) {
        return a * b;
    }
}

4. Decorator Factories

If you want to customize how a decorator is applied to a declaration by passing arguments to it, you can write a decorator factory. A decorator factory is simply a function that returns the actual decorator wrapper function.

function LogWithPrefix(prefix: string) {
    return function (target: any, propertyKey: string, descriptor: PropertyDescriptor) {
        const originalMethod = descriptor.value;
        descriptor.value = function (...args: any[]) {
            console.log(`[${prefix}] Calling ${propertyKey}`);
            return originalMethod.apply(this, args);
        };
    };
}

class Service {
    @LogWithPrefix("DEBUG")
    fetchData() {
        // ...
    }
}

[!NOTE] Decorators run when the class is defined, not when it is instantiated. The decorator functions execute sequentially during the file's initialization step.

Q: What does the cleanup function in useEffect do, and when does it run?

Answer:

In React, the cleanup function is the function you return from within a useEffect callback. Its primary job is to clean up side effects (like subscriptions, timers, or event listeners) to prevent memory leaks and unexpected behavior.

useEffect(() => {
    // 1. Setup the side effect
    const timer = setInterval(() => console.log('Tick'), 1000);

    // 2. Return the cleanup function
    return () => {
        clearInterval(timer); 
    };
}, []); 

When does the cleanup function run?

React runs the cleanup function in two specific scenarios:

  1. Before an Effect runs again (on re-renders): If your useEffect dependencies change and the component re-renders, React will first run the cleanup function from the previous render with the old state/props, and then run the newly updated effect.
  2. When the component unmounts: Before the component is removed from the DOM entirely, React fires the cleanup function to destroy any lingering processes.

What happens if you forget to clean it up?

Forgetting to clean up effects (like event listeners, WebSockets, or setInterval timers) typically leads to Memory Leaks.

If the component unmounts but a setInterval is still running in the background, it will continue executing forever. If that interval tries to update a React state variable (e.g., setCount(c => c + 1)) on an unmounted component, the update is silently discarded; versions of React before 18 even logged a warning:

"Warning: Can't perform a React state update on an unmounted component. This is a no-op, but it indicates a memory leak in your application."

Even worse, if the component is mounted and unmounted 10 times, you will now have 10 identical intervals running at the exact same time, severely degrading app performance and causing chaotic UI bugs.
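That pile-up can be simulated without React at all. The sketch below uses a hypothetical mount helper (not a real React API) that registers an interval and returns an "unmount" function, mimicking what useEffect setup and cleanup do; a Set tracks how many timers are still alive:

```typescript
// Track every interval that is still running.
const liveTimers = new Set<ReturnType<typeof setInterval>>();

// Hypothetical stand-in for mounting a component whose effect starts a timer.
// Returns the "unmount" function. With cleanup, unmounting clears the timer;
// without it, unmounting does nothing and the timer leaks.
function mount(withCleanup: boolean): () => void {
    const timer = setInterval(() => { /* tick */ }, 1000);
    liveTimers.add(timer);
    const cleanup = () => {
        clearInterval(timer);
        liveTimers.delete(timer);
    };
    return withCleanup ? cleanup : () => { /* forgot to clean up! */ };
}

// Mount and unmount 10 times WITHOUT cleanup: 10 intervals keep running.
for (let i = 0; i < 10; i++) mount(false)();
console.log(liveTimers.size); // 10

// Clear the leaks, then repeat WITH cleanup: nothing survives unmount.
liveTimers.forEach(t => clearInterval(t));
liveTimers.clear();
for (let i = 0; i < 10; i++) mount(true)();
console.log(liveTimers.size); // 0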

Q: What's the difference between useMemo and useCallback? When are they not worth using?

Answer:

Both useMemo and useCallback are React Hooks used for performance optimization (memoization). They both take a dependency array and re-compute only when those dependencies change.

Their core difference lies in what they return:

  • useMemo caches the result of a function calculation.
  • useCallback caches the function itself (the reference to the function).

Essentially, useCallback(fn, deps) is exactly equivalent to useMemo(() => fn, deps).
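That equivalence can be sketched outside React with a toy memoizer. `memoWithDeps` below is a hypothetical stand-in for the hooks' caching behavior (the real hooks additionally tie the cache to the component instance and render order):

```typescript
// Toy version of the hooks' caching: recompute only when deps change,
// comparing each dependency with Object.is (as React does).
function memoWithDeps<T>() {
    let prevDeps: unknown[] | null = null;
    let cached!: T;
    return (compute: () => T, deps: unknown[]): T => {
        const changed =
            prevDeps === null ||
            deps.length !== prevDeps.length ||
            deps.some((d, i) => !Object.is(d, prevDeps![i]));
        if (changed) {
            cached = compute();
            prevDeps = deps;
        }
        return cached;
    };
}

// "useMemo" caches a computed VALUE:
const useMemoLike = memoWithDeps<number>();
const a = useMemoLike(() => 2 + 2, [1]);
const b = useMemoLike(() => 2 + 2, [1]); // same deps -> cached value

// "useCallback(fn, deps)" is just useMemo(() => fn, deps):
// it caches the FUNCTION REFERENCE itself.
const useCallbackLike = memoWithDeps<() => void>();
const fn1 = useCallbackLike(() => () => console.log("hi"), [1]);
const fn2 = useCallbackLike(() => () => console.log("hi"), [1]);
console.log(fn1 === fn2); // true -> stable reference across "renders"
```

Note how the only difference between the two usages is what the `compute` closure returns: a value versus a function.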

1. useMemo Use Cases

Use it to avoid repeating expensive, time-consuming calculations on every single render.

// ✅ GOOD USE CASE: Expensive calculation
const expensiveResult = useMemo(() => {
    return bigArray.filter(item => item.value > threshold).map(heavyTransformation);
}, [bigArray, threshold]);

2. useCallback Use Cases

Use it when you need to keep a function reference completely stable between renders. This is almost exclusively needed when you are passing a callback function as a prop to a deeply nested child component that is wrapped in React.memo(), or if the function is used in a useEffect dependency array.

// ✅ GOOD USE CASE: Passing to a pure child component
const handleSubmit = useCallback((data) => {
    api.submit(data);
}, []); // Function reference never changes

// MemoizedChild will NOT re-render since `handleSubmit` is identical across renders
return <MemoizedChild onSubmit={handleSubmit} />;

When is it NOT worth it? (The "Gotcha")

A massive red flag in interviews is developers who wrap every function in useCallback and every variable in useMemo.

React re-renders are designed to be extremely fast. Over-memoizing can actually hurt performance, because caching results and comparing dependency arrays on every render costs memory and execution time of its own; for cheap computations, that bookkeeping outweighs simply recomputing.

DON'T use them when:

  1. The operation is cheap: Doing basic math or string concatenation? Don't use useMemo. const fullName = firstName + lastName is far cheaper to recalculate than the caching and dependency-comparison overhead of wrapping it in useMemo.
  2. Passing to native HTML elements: Wrapping a function in useCallback just to pass it to <button onClick={handleClick}> is completely useless. Native DOM elements don't care about reference equality.
  3. The child isn't memoized: If you pass a useCallback function to a custom <Button> component, but that <Button> component is NOT wrapped in React.memo, the child is going to re-render anyway. You just wasted memory caching the function!
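Point 3 hinges on reference equality: a function literal produces a brand-new reference on every render, and a shallow, Object.is-style prop comparison (what React.memo performs) is the only thing that reference stability buys you. A plain-TypeScript sketch of why an un-memoized inline handler always looks "new":

```typescript
// Each "render" of a component body re-creates its inline functions.
// `render` here is a hypothetical stand-in for a component function body.
function render(): () => void {
    const handleClick = () => console.log("clicked");
    return handleClick;
}

const firstRender = render();
const secondRender = render();

// React.memo compares props with Object.is-style shallow equality:
console.log(Object.is(firstRender, secondRender)); // false -> memoized child re-renders anyway
```

This is exactly what useCallback prevents: it hands back the same reference across renders so the shallow comparison succeeds.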

[!TIP] Interview Rule of Thumb: Write the code without useMemo or useCallback first. Only add them if you explicitly identify a performance bottleneck (like a 500ms lag on typing) or a useEffect infinite loop caused by an unstable function dependency.

Q: What is the difference between Controlled and Uncontrolled Components in React?

Answer:

This question fundamentally asks about how forms and input data are handled within a React application. The difference lies in who controls the current state of the data: React, or the DOM itself.

1. Controlled Components (React handles the state)

In a controlled component, the form data is handled strictly by the React component. React acts as the "Single Source of Truth."

You track the input's value using useState and update it dynamically using an onChange handler. The input element only displays what the React state tells it to display.

import { useState } from 'react';

function ControlledInput() {
    const [name, setName] = useState('');

    const handleChange = (e) => {
        // We can format, validate, or intercept the typing instantly!
        setName(e.target.value.toUpperCase()); 
    };

    return (
        <form>
            <input 
                type="text" 
                value={name} // Driven strictly by React
                onChange={handleChange} 
            />
        </form>
    );
}

Pros:

  • Instant validation (disabling buttons if input is invalid).
  • Enforcing input formats (like forcing uppercase, as shown above).
  • Dynamic inputs (conditionally showing other fields based on this input).

2. Uncontrolled Components (The DOM handles the state)

In an uncontrolled component, form data is handled directly by the DOM, mimicking traditional HTML behavior.

Instead of tracking every single keystroke with state, you use a ref (useRef) to grab the data directly from the DOM only when you actually need it (usually upon form submission).

import { useRef } from 'react';

function UncontrolledInput() {
    const nameRef = useRef(null);

    const handleSubmit = (e) => {
        e.preventDefault();
        // We ONLY access the value right when the user clicks submit
        alert(`Submitted: ${nameRef.current.value}`); 
    };

    return (
        <form onSubmit={handleSubmit}>
            <input 
                type="text" 
                ref={nameRef} // Tells React to track this DOM node
                defaultValue="Abhay" // Used instead of 'value' for initial state
            />
            <button type="submit">Submit</button>
        </form>
    );
}

Pros:

  • Less code (no useState or onChange boilerplate).
  • Faster execution (typing does not trigger a React component re-render).
  • Easier integration with non-React third-party APIs or vanilla JS libraries that expect direct DOM manipulation.

Summary

  • Use Controlled for real-time validation, dynamic UI changes based on input, and strictly keeping React as the source of truth. (This is generally the recommended approach in React).
  • Use Uncontrolled if the form is incredibly simple or if you need to wrap legacy vanilla JS components.

Q: Why does React need key props when rendering lists? Why is using the array index a problem?

Answer:

This question dives into the core of how React optimizes UI updates, a process known as Reconciliation (or the "Diffing" algorithm).

Why do we need Keys?

When a React component re-renders, React compares the newly generated virtual DOM tree with the old virtual DOM tree to figure out what precisely changed.

When it comes to rendering looping lists of elements (e.g. using .map()), React needs a way to instantly identify which specific items have been added, removed, reordered, or modified. The key is a unique identifier that tells React the specific identity of that element across renders.

Without keys, React falls back to comparing list items by their position. If an item is inserted at the front or the list is reordered, every element after the change point looks "different", forcing React to mutate or recreate far more of the DOM than necessary, which is terrible for performance.

The Problem with using Array Index as the Key

Often, developers default to using the array index (item, index) as a key when they don't have a unique ID:

// 🚨 Bad Practice (for dynamic lists)
{items.map((item, index) => (
  <ListItem key={index} item={item} />
))}

If the list is completely static (never sorted, filtered, or prepended to), using the index is perfectly fine. However, if the list is dynamic, using the index causes severe bugs.

Here is why: An array index only represents the item's current position in the array, not its true identity.

Imagine you have a list of three text-input fields loaded from state:

  1. ["Apple", "Banana", "Cherry"]
  2. They are rendered with keys 0, 1, 2.
  3. You type "Green" into the "Apple" input.
  4. You click a button to delete "Apple" from the array.

The remaining array is now ["Banana", "Cherry"].

  • "Banana" shifts from index 1 to index 0.
  • React sees an element with key 0 in the old tree, and an element with key 0 in the new tree.
  • React mistakenly thinks this is the exact same underlying DOM element. Instead of deleting the first input, it recycles it! The input field that used to say "Apple" (and the word "Green" you typed into it) will now simply have its label changed to "Banana".
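The steps above can be made concrete with a minimal sketch of key-based matching (a hypothetical, heavily simplified stand-in for React's real diffing). Each fake "DOM node" keeps its own uncontrolled `typedText`, like the input you typed "Green" into:

```typescript
// A fake "DOM node": React props drive `label`, but `typedText`
// lives only in the DOM (like an uncontrolled <input>'s value).
type VNode = { key: number | string; label: string; typedText: string };

// Hypothetical diff: reuse any old node whose key matches, else create a new one.
function diff(
    oldNodes: VNode[],
    items: string[],
    keyOf: (item: string, i: number) => number | string
): VNode[] {
    return items.map((label, i) => {
        const key = keyOf(label, i);
        const reused = oldNodes.find(n => n.key === key);
        if (reused) {
            reused.label = label; // update props, but KEEP the DOM state
            return reused;
        }
        return { key, label, typedText: "" };
    });
}

// Render ["Apple","Banana","Cherry"] with INDEX keys, then type into the first input:
let nodes = diff([], ["Apple", "Banana", "Cherry"], (_, i) => i);
nodes[0].typedText = "Green";

// Delete "Apple" and re-render: "Banana" shifts to index 0 and is matched
// against the OLD key-0 node, inheriting the text typed into "Apple".
nodes = diff(nodes, ["Banana", "Cherry"], (_, i) => i);
console.log(nodes[0].typedText); // "Green"  <- the recycling bug

// With stable keys (the item itself), the typed text is discarded with "Apple":
let stable = diff([], ["Apple", "Banana", "Cherry"], item => item);
stable[0].typedText = "Green";
stable = diff(stable, ["Banana", "Cherry"], item => item);
console.log(stable[0].typedText); // ""  -> no recycling
```

The only thing that changed between the buggy and the correct run is the `keyOf` function, which is exactly the choice the `key` prop represents.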

The Solution

Always use a unique, stable identifier from your data payload (like a database ID or UUID) as the key.

// ✅ Good Practice
{items.map(item => (
  <ListItem key={item.databaseId} item={item} />
))}

This guarantees that no matter how the items are sorted, added, or removed, React knows exactly which DOM node maps to which piece of data.