Author: Raju BK

  • WordPress with Nginx and SSL

    WordPress with Nginx and SSL

    Install Nginx on Ubuntu 20.04 LTS

    sudo apt update && sudo apt upgrade -y
    sudo apt install nginx -y
    sudo systemctl start nginx
    sudo systemctl enable nginx

    Install PHP 8.3 and its dependencies

    sudo apt install software-properties-common -y
    sudo add-apt-repository ppa:ondrej/php
    sudo apt update
    sudo apt install php8.3-fpm php8.3-mysql php8.3-curl php8.3-mbstring php8.3-xml php8.3-zip php8.3-gd -y
    
    sudo systemctl start php8.3-fpm
    sudo systemctl enable php8.3-fpm
    
    php -v

    Install MariaDB and its dependencies

    sudo apt install mariadb-server -y
    
    sudo systemctl start mariadb
    sudo systemctl enable mariadb
    
    sudo mysql_secure_installation

    Create a new database and user for WordPress

    sudo mysql -u root -p
    
    CREATE DATABASE wordpress;
    CREATE USER 'wpuser'@'localhost' IDENTIFIED BY 'secure_password'; # Replace with your desired password
    GRANT ALL PRIVILEGES ON wordpress.* TO 'wpuser'@'localhost';
    FLUSH PRIVILEGES;
    EXIT;

    Download and extract WordPress

    cd /var/www/html
    
    sudo wget https://wordpress.org/latest.tar.gz
    sudo tar -xvzf latest.tar.gz
    sudo rm latest.tar.gz
    
    sudo chown -R www-data:www-data /var/www/html/wordpress
    sudo chmod -R 755 /var/www/html/wordpress
    
    sudo mv /var/www/html/wordpress/wp-config-sample.php /var/www/html/wordpress/wp-config.php
    
    sudo nano /var/www/html/wordpress/wp-config.php
    
    define( 'DB_NAME', 'wordpress' );
    define( 'DB_USER', 'wpuser' );
    define( 'DB_PASSWORD', 'secure_password' ); # Replace with your password
    define( 'DB_HOST', 'localhost' );

    Create a new Nginx configuration file for WordPress. The server block below assumes you have already issued a Let's Encrypt certificate for the domain (for example with Certbot); adjust the certificate paths if yours differ.

    sudo nano /etc/nginx/sites-available/wordpress
    
    server {
        listen 80;
        server_name wp.rajubk.com;
    
        return 301 https://wp.rajubk.com$request_uri;
    }
    
    server {
        listen 443 ssl http2;
        server_name wp.rajubk.com;
    
        root /var/www/html/wordpress;
        index index.php;
    
        # SSL parameters
        ssl_certificate /etc/letsencrypt/live/rajubk.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/rajubk.com/privkey.pem;
        ssl_trusted_certificate /etc/letsencrypt/live/rajubk.com/chain.pem;
    
        # Log files
        access_log /var/log/nginx/wp.rajubk.com.access.log;
        error_log /var/log/nginx/wp.rajubk.com.error.log;
    
        location = /favicon.ico {
            log_not_found off;
            access_log off;
        }
    
        location = /robots.txt {
            allow all;
            log_not_found off;
            access_log off;
        }
    
        location / {
            try_files $uri $uri/ /index.php?$args;
        }
    
        location ~ \.php$ {
            include snippets/fastcgi-php.conf;
            fastcgi_pass unix:/run/php/php8.3-fpm.sock;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }
    
        location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg)$ {
            expires max;
            log_not_found off;
        }
    }
    
    sudo ln -s /etc/nginx/sites-available/wordpress /etc/nginx/sites-enabled/
    
    sudo nginx -t
    
    sudo systemctl reload nginx
  • Advanced JavaScript Techniques

    Advanced JavaScript Techniques

    JavaScript has evolved significantly over the years, becoming one of the most powerful and versatile programming languages in web development. However, writing efficient, scalable, and maintainable JavaScript code requires mastering advanced techniques. By leveraging modern JavaScript patterns and features, developers can improve execution speed, enhance code modularity, and simplify complex tasks.

    This article explores ten essential advanced JavaScript techniques that will elevate your coding skills and help you build robust applications. From closures and destructuring to metaprogramming and memory management, these techniques will give you the edge needed to write professional-level JavaScript.

    1. Asynchronous JavaScript with Async/Await

    Handling asynchronous tasks effectively is crucial for modern web development. The async/await syntax provides a clean and readable way to manage asynchronous operations, replacing traditional callback-based approaches.

    Example:

    async function fetchUserData(userId) {
      const response = await new Promise((resolve) =>
        setTimeout(() => resolve({ id: userId, name: "John Doe" }), 1000)
      );
      return response;
    }
    
    async function displayUserData() {
      try {
        console.log("Fetching user data...");
        const user = await fetchUserData(123);
        console.log("User data:", user);
      } catch (error) {
        console.error("Error fetching user data:", error);
      }
    }
    
    displayUserData();

    Why Use Async/Await?

    • Improves readability by resembling synchronous code.
    • Simplifies error handling with try/catch.
    • Reduces deeply nested callback structures.

    2. Proxies for Intercepting and Enhancing Object Behavior

    JavaScript’s Proxy object allows developers to intercept and modify fundamental operations on objects, making them highly useful for creating custom behaviors such as logging, validation, and dynamic property handling.

    Example:

    const target = { name: "John" };
    const handler = {
      get: (obj, prop) => `${prop} is ${obj[prop]}`,
      set: (obj, prop, value) => {
        console.log(`Setting ${prop} to ${value}`);
        obj[prop] = value;
        return true;
      },
    };
    const proxy = new Proxy(target, handler);
    
    console.log(proxy.name); // "name is John"
    proxy.age = 30; // "Setting age to 30"

    Why Use Proxies?

    • Validation: Ensure properties meet certain criteria before being set.
    • Logging: Track access and modifications to object properties.
    • Default Values: Provide fallback values for undefined properties.
    • Computed Properties: Generate property values on-the-fly.

    By using Proxies, you can add powerful meta-programming capabilities to your JavaScript code, enabling more flexible and dynamic object interactions.
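
    For instance, a validation handler can reject bad writes before they ever reach the target object. A minimal sketch (the age rule is arbitrary):

    const person = new Proxy(
      {},
      {
        set(obj, prop, value) {
          if (prop === "age" && (!Number.isInteger(value) || value < 0)) {
            throw new TypeError("age must be a non-negative integer");
          }
          obj[prop] = value;
          return true;
        },
      }
    );

    person.age = 30; // ok
    // person.age = -5; // throws TypeError: age must be a non-negative integer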

    3. Debouncing and Throttling for Performance Optimization

    Handling frequent user events like scrolling, resizing, or keypresses can impact performance. Debouncing and throttling help control function execution frequency.

    Debouncing: Debouncing delays the execution of a function until a specified time has passed since the last event trigger. This is useful for optimizing performance in cases like search input fields and window resize events.

    function debounce(func, delay) {
      let timeout;
      return function (...args) {
        clearTimeout(timeout);
        timeout = setTimeout(() => func.apply(this, args), delay);
      };
    }

    Throttling: Throttling ensures that a function is executed at most once within a given time frame, preventing excessive function calls.

    function throttle(func, limit) {
      let inThrottle;
      return function (...args) {
        if (!inThrottle) {
          func.apply(this, args);
          inThrottle = true;
          setTimeout(() => (inThrottle = false), limit);
        }
      };
    }

    These techniques improve performance by limiting how often a function runs, which is essential for user input handling and optimizing animations.
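
    Wiring them up is straightforward. A minimal usage sketch, assuming a search input with the id "search" and a hypothetical onSearch handler:

    const searchInput = document.querySelector("#search");
    searchInput.addEventListener(
      "input",
      debounce((event) => onSearch(event.target.value), 300)
    );

    window.addEventListener(
      "scroll",
      throttle(() => console.log("scroll position:", window.scrollY), 200)
    );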

    4. Combining Proxies with Reflect for Default Behavior

    Building on the previous technique, a Proxy handler can delegate to the Reflect API so that any operation you do not explicitly intercept (property access, assignment, function calls, and so on) keeps its default behavior. This is useful for validation, logging, or building reactive frameworks.

    Example:

    const target = {
      message1: "hello",
      message2: "everyone",
    };
    
    const handler = {
      get(target, prop, receiver) {
        if (prop === "message2") {
          return "world";
        }
        return Reflect.get(...arguments);
      },
    };
    
    const proxy = new Proxy(target, handler);
    
    console.log(proxy.message1); // hello
    console.log(proxy.message2); // world

    Why Use Proxies?

    • Provides dynamic control over property access and modification.
    • Enables data validation, logging, and computed properties.
    • Useful for creating reactive programming frameworks and API wrappers.

    Proxies allow developers to intercept and modify object behavior, making them essential for metaprogramming and advanced JavaScript development.

    5. Optional Chaining (?.) for Safe Property Access

    Optional chaining (?.) provides a way to access deeply nested object properties without worrying about runtime errors due to undefined or null values.

    Example:

    const user = { profile: { name: "Alice" } };
    console.log(user.profile?.name); // 'Alice'
    console.log(user.address?.city); // undefined (no error)

    Why Use It?

    • Prevents runtime errors from missing properties.
    • Reduces excessive if statements for property checks.
    • Especially useful when working with API responses.
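
    Optional chaining pairs naturally with the nullish coalescing operator (??) when you also need a fallback value. A one-line sketch (the response shape is hypothetical):

    const city = response.data?.user?.address?.city ?? "Unknown";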

    6. Offload Heavy Tasks with Web Workers

    JavaScript is single-threaded, but Web Workers let you run scripts in background threads. Use them for CPU-heavy tasks like data processing or image manipulation:

    // main.js
    const worker = new Worker("worker.js");
    worker.postMessage(data);
    worker.onmessage = (e) => updateUI(e.data);
    
    // worker.js
    self.onmessage = (e) => {
      const result = processData(e.data);
      self.postMessage(result);
    };

    Why Use Web Workers?

    • Prevents UI freezes by offloading CPU-intensive tasks to background threads.
    • Enhances application responsiveness and performance.
    • Ideal for data processing, image manipulation, and real-time computations.

    Using Web Workers ensures that heavy computations do not block the main thread, leading to a smoother user experience.

    7. Master Memory Management

    Memory leaks silently degrade performance. Avoid globals, use WeakMap/WeakSet for caches, and monitor leaks with DevTools:

    const cache = new WeakMap();  
    function computeExpensiveValue(obj) {
      if (!cache.has(obj)) {
        const result = expensiveComputation(obj); // stand-in for the heavy computation
        cache.set(obj, result);
      }
      return cache.get(obj);
    }

    Why Use It?

    • Prevents memory leaks by allowing garbage collection of unused objects.
    • Efficient for caching without affecting memory consumption.
    • Useful for managing private data within objects.

    Using WeakMap ensures that cached objects are automatically cleaned up when no longer needed, preventing unnecessary memory usage.

    8. Currying Functions for Better Reusability

    Currying transforms a function that takes multiple arguments into a series of functions, each taking one argument. This technique makes functions more reusable and allows for partial application.

    // Basic curry function
    function curry(fn) {
      return function curried(...args) {
        if (args.length >= fn.length) {
          return fn.apply(this, args);
        } else {
          return function (...nextArgs) {
            return curried.apply(this, args.concat(nextArgs));
          };
        }
      };
    }
    
    // Usage
    const add = (a, b, c) => a + b + c;
    const curriedAdd = curry(add);
    console.log(curriedAdd(1)(2)(3));

    Why Use It?

    • Enables partial application of functions for better reusability.
    • Enhances functional programming by making functions more flexible.
    • Improves readability and simplifies repetitive tasks.

    Currying is particularly useful for creating highly reusable utility functions in modern JavaScript applications.
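
    Because curried functions accept their arguments a few at a time, partial application falls out for free. Using the curriedAdd defined above:

    const add10 = curriedAdd(10); // fix the first argument
    const add10And5 = add10(5); // fix the second argument
    console.log(add10And5(1)); // 16
    console.log(curriedAdd(1, 2)(3)); // 6 (mixed call styles also work)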

    9. Closures for Private State Management

    Closures are one of JavaScript’s most powerful features. They allow functions to remember and access variables from their outer scope even after the outer function has finished executing. This makes them particularly useful for encapsulating private state and preventing unintended modifications.

    Example:

    function createCounter() {
      let count = 0;
      return function () {
        count++;
        return count;
      };
    }
    
    const counter = createCounter();
    console.log(counter()); // 1
    console.log(counter()); // 2
    console.log(counter()); // 3

    Why Use Closures?

    • Encapsulation: Keep variables private and inaccessible from the global scope.
    • Data Integrity: Maintain controlled access to data, preventing unintended modifications.
    • Memory Efficiency: Create function factories that share behavior but maintain separate state.
    • Callback Functions: Preserve context in asynchronous operations.

    Closures are commonly used in event handlers, factory functions, and callback functions to preserve state efficiently.

    10. Destructuring for More Concise and Readable Code

    Destructuring simplifies the process of extracting values from arrays and objects, making your code cleaner and more readable. This technique is particularly useful when working with complex data structures or API responses.

    Object Destructuring:

    const person = { name: "Jack", age: 20 };
    const { name, age } = person;
    
    console.log(name); // 'Jack'
    console.log(age); // 20

    Array Destructuring:

    const numbers = [1, 2, 3];
    const [first, second] = numbers;
    console.log(first); // 1
    console.log(second); // 2

    Why Use Destructuring?

    • Reduces redundant variable assignments.
    • Enhances code readability.
    • Especially useful when working with API responses or function parameters.

    By leveraging destructuring, you can write more concise and expressive code, making it easier to work with complex data structures and improving overall code readability.

    Conclusion

    By mastering these advanced JavaScript techniques, developers can write cleaner, more efficient, and scalable code. Understanding closures, destructuring, proxies, async/await, and performance optimizations like debouncing and throttling will enhance your ability to build high-performance applications. Additionally, incorporating practices like optional chaining, Web Workers for heavy tasks, and WeakMap-based caching will further improve your coding efficiency.

  • 10 JavaScript Tricks every developer should know

    10 JavaScript Tricks every developer should know

    JavaScript is a powerful language, but mastering it requires more than just knowing the basics. The real magic lies in the hidden gems — lesser-known but powerful tricks that can make your code cleaner, more efficient, and easier to maintain. Whether you’re a beginner or a seasoned developer, these 10 JavaScript tricks will help you level up your coding game! 👾

    1. Object.freeze() — Making Objects Immutable

    In JavaScript, objects are mutable by default, meaning you can change their properties after creation. But what if you need to prevent modifications? That’s where Object.freeze() comes in handy.

    const user = {
      name: "Alice",
      age: 25,
    };
    
    Object.freeze(user);
    
    user.age = 30; // This won't work, as the object is frozen
    console.log(user.age); // 25

    Note: in non-strict mode the assignment user.age = 30; fails silently (no error is thrown; in strict mode it throws a TypeError), so when we retrieve the user’s age it is still 25.

    Real-World Use Case:

    Use Object.freeze() in Redux to ensure state objects remain unchanged, preventing accidental mutations.
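
    Keep in mind that Object.freeze() is shallow: nested objects stay mutable. A minimal recursive deepFreeze sketch handles that case:

    function deepFreeze(obj) {
      for (const value of Object.values(obj)) {
        if (typeof value === "object" && value !== null && !Object.isFrozen(value)) {
          deepFreeze(value);
        }
      }
      return Object.freeze(obj);
    }

    const settings = deepFreeze({ theme: { mode: "dark" } });
    settings.theme.mode = "light"; // silently ignored, the nested object is frozen too
    console.log(settings.theme.mode); // "dark"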

    2. Destructuring for Cleaner Code

    Destructuring makes it easy to extract values from objects and arrays, leading to cleaner and more readable code.

    const person = { name: "Bob", age: 28, city: "New York" };
    const { name, age } = person;
    console.log(name, age); // Bob 28

    Real-World Use Case:

    Use destructuring in function arguments for cleaner APIs:

    function greet({ name }) {
      console.log(`Hello, ${name}!`);
    }
    
    greet(person); // Hello, Bob!

    3. Intl API — Effortless Localization

    The Intl API provides built-in support for internationalization, allowing you to format dates, numbers, and currencies easily.

    Example:

    const date = new Date();
    console.log(new Intl.DateTimeFormat("fr-FR").format(date));

    Output: the current date in the French day/month/year format, for example 14/02/2025.

    Real-World Use Case:

    Use Intl.NumberFormat for currency formatting:

    const price = 1234.56;
    console.log(
      new Intl.NumberFormat("en-US", { style: "currency", currency: "USD" }).format(
        price
      )
    );

    Output:

    $1,234.56 (the price formatted as USD)

    4. Optional Chaining — Avoiding Errors on Undefined Properties

    Optional chaining (?.) prevents runtime errors when accessing deeply nested properties that may not exist.

    Example:

    const user = { profile: { name: "Charlie" } };
    console.log(user.profile?.name);
    console.log(user.address?.street);

    Output: Charlie, then undefined (no error is thrown).

    Real-World Use Case:

    Useful when working with APIs where certain data fields may be missing.

    5. Nullish Coalescing Operator — Smarter Defaults

    The ?? operator assigns a default value only if the left-hand side is null or undefined (unlike ||, which also considers 0 and "" as falsy).

    const username = "";
    console.log(username || "Guest");
    console.log(username ?? "Guest");

    Output: Guest (from ||), then an empty string (from ??, because "" is neither null nor undefined).

    Real-World Use Case:

    Use it for better default handling in user settings or configurations.

    6. Short-Circuit Evaluation for Concise Conditionals

    Instead of writing long if statements, use short-circuit evaluation for quick conditional assignments.

    Example:

    const isLoggedIn = true;
    const greeting = isLoggedIn && "Welcome back!";
    console.log(greeting);

    Output: Welcome back!

    7. Using map() for Transforming Arrays

    Instead of using forEach to modify arrays, prefer map() which returns a new array.

    Example:

    const numbers = [1, 2, 3];
    const doubled = numbers.map((num) => num * 2);
    console.log(doubled);

    Output: [2, 4, 6]

    8. Using reduce() for Complex Data Transformations

    The reduce() function allows you to accumulate values from an array into a single result.

    Example:

    const nums = [1, 2, 3, 4];
    const sum = nums.reduce((sum, num) => sum + num, 0);
    console.log("This is the sum ", sum);

    Output: This is the sum 10

    9. Using setTimeout() in a Loop for Delayed Execution

    When using setTimeout inside a loop, declare the loop variable with let (or capture it in a closure) so each callback sees the value from its own iteration.

    for (let i = 1; i <= 4; i++) {
      setTimeout(() => console.log("after delay of " + i + " seconds"), i * 1000);
    }

    Output:

    This logs "after delay of 1 seconds" through "after delay of 4 seconds", with each message appearing after a delay of 1, 2, 3, and 4 seconds respectively.

    10. Debouncing to Optimize Performance

    Debouncing is useful when dealing with events like scrolling, resizing, or input changes to prevent excessive function calls.

    Example:

    function debounce(func, delay) {
      let timer;
      return function (...args) {
        clearTimeout(timer);
        timer = setTimeout(() => func.apply(this, args), delay);
      };
    }
    window.addEventListener(
      "resize",
      debounce(() => console.log("Resized!"), 500)
    );

    Real-World Use Case:

    Prevent excessive API calls when a user is typing in a search box.

    Final Thoughts

    Mastering JavaScript isn’t just about knowing the syntax — it’s about using the right tools and tricks to write better code. These 10 tricks can make your applications faster, cleaner, and more reliable. Try them out and integrate them into your daily coding habits!

  • Context Splitting in React: A Technique to Prevent Unnecessary Rerenders

    Context Splitting in React: A Technique to Prevent Unnecessary Rerenders

    One of the most common issues developers face is managing rerenders, especially when working with the Context API. Today, I want to share a powerful technique that is fairly well known but still underused 😄

    The Problem with Traditional Context

    Before diving into the solution, let’s understand the problem. When using React’s Context API in the traditional way, any component that consumes a context will rerender whenever any value within that context changes.

    Consider this example:

    // Traditional Context approach
    const ThemeContext = React.createContext({
      theme: "light",
      setTheme: () => {},
    });
    
    function ThemeProvider({ children }) {
      const [theme, setTheme] = useState("light");
    
      return (
        <ThemeContext.Provider value={{ theme, setTheme }}>
          {children}
        </ThemeContext.Provider>
      );
    }
    
    function ThemeDisplay() {
      const { theme } = useContext(ThemeContext);
      console.log("ThemeDisplay rendering");
      return <div>Current theme: {theme}</div>;
    }
    
    function ThemeToggle() {
      const { setTheme } = useContext(ThemeContext);
      console.log("ThemeToggle rendering");
    
      return (
        <button
          onClick={() => setTheme((prev) => (prev === "light" ? "dark" : "light"))}
        >
          Toggle Theme
        </button>
      );
    }

    The issue here is that both ThemeDisplay and ThemeToggle will rerender whenever the theme changes, even though ThemeToggle only needs setTheme and doesn’t actually use the current theme value in its rendering.

    The Possible Solution: Context Splitting 💡

    The context splitting pattern addresses this problem by separating our context into two distinct contexts:

    1. data context that holds just the state (e.g., theme)
    2. setter context that holds just the updater function (e.g., setTheme)

    Here’s how it looks in practice:

    // Split Context approach
    const ThemeContext = React.createContext("light");
    const ThemeSetterContext = React.createContext(() => {});
    
    function ThemeProvider({ children }) {
      const [theme, setTheme] = useState("light");
    
      return (
        <ThemeContext.Provider value={theme}>
          <ThemeSetterContext.Provider value={setTheme}>
            {children}
          </ThemeSetterContext.Provider>
        </ThemeContext.Provider>
      );
    }
    
    function ThemeDisplay() {
      const theme = useContext(ThemeContext);
      console.log("ThemeDisplay rendering");
      return <div>Current theme: {theme}</div>;
    }
    
    function ThemeToggle() {
      const setTheme = useContext(ThemeSetterContext);
      console.log("ThemeToggle rendering");
    
      return (
        <button
          onClick={() => setTheme((prev) => (prev === "light" ? "dark" : "light"))}
        >
          Toggle Theme
        </button>
      );
    }

    With this pattern, when the theme changes:

    • ThemeDisplay rerenders (as it should since it displays the theme)
    • ThemeToggle does NOT rerender because it only consumes the setter context, which never changes (the setter function reference remains stable)

    Complete Example with Rerender Counting 🧪

    Let’s create a more complete example that demonstrates the difference between the traditional and split context approaches. We’ll add rerender counters to visualize the performance impact:

    import React, { useState, useContext, memo } from "react";
    import ReactDOM from "react-dom/client";
    
    /* ========== Traditional Context (Single) ========== */
    const TraditionalThemeContext = React.createContext({
      theme: "light",
      setTheme: () => {},
    });
    
    const TraditionalThemeProvider = ({ children }) => {
      const [theme, setTheme] = useState("light");
      return (
        <TraditionalThemeContext.Provider value={{ theme, setTheme }}>
          {children}
        </TraditionalThemeContext.Provider>
      );
    };
    
    const TraditionalThemeDisplay = memo(() => {
      const { theme } = useContext(TraditionalThemeContext);
      console.log("🔁 TraditionalThemeDisplay re-rendered");
      return (
        <div>
          Current theme: <strong>{theme}</strong>
        </div>
      );
    });
    
    const TraditionalThemeToggleButton = memo(() => {
      const { setTheme } = useContext(TraditionalThemeContext);
      console.log("🔁 TraditionalThemeToggleButton re-rendered");
      return (
        <button
          onClick={() => setTheme((prev) => (prev === "light" ? "dark" : "light"))}
        >
          Toggle Theme
        </button>
      );
    });
    
    /* ========== Optimized Context (Split) ========== */
    const ThemeContext = React.createContext("light");
    const SetThemeContext = React.createContext(() => {});
    
    const ThemeProvider = ({ children }) => {
      const [theme, setTheme] = useState("light");
    
      return (
        <ThemeContext.Provider value={theme}>
          <SetThemeContext.Provider value={setTheme}>
            {children}
          </SetThemeContext.Provider>
        </ThemeContext.Provider>
      );
    };
    
    const OptimizedThemeDisplay = memo(() => {
      const theme = useContext(ThemeContext);
      console.log("✅ OptimizedThemeDisplay re-rendered");
      return (
        <div>
          Current theme: <strong>{theme}</strong>
        </div>
      );
    });
    
    const OptimizedThemeToggleButton = memo(() => {
      const setTheme = useContext(SetThemeContext);
      console.log("✅ OptimizedThemeToggleButton re-rendered");
      return (
        <button
          onClick={() => setTheme((prev) => (prev === "light" ? "dark" : "light"))}
        >
          Toggle Theme
        </button>
      );
    });
    
    /* ========== App ========== */
    export const App = () => {
      return (
        <div style={{ fontFamily: "sans-serif", padding: 20 }}>
          <h1>🎨 Theme Context Comparison</h1>
    
          <div style={{ marginBottom: 40, padding: 10, border: "1px solid gray" }}>
            <h2>🧪 Traditional Context (Single)</h2>
            <TraditionalThemeProvider>
              <TraditionalThemeDisplay />
              <TraditionalThemeToggleButton />
            </TraditionalThemeProvider>
          </div>
    
          <div style={{ marginBottom: 40, padding: 10, border: "1px solid gray" }}>
            <h2>⚡ Optimized Context (Split)</h2>
            <ThemeProvider>
              <OptimizedThemeDisplay />
              <OptimizedThemeToggleButton />
            </ThemeProvider>
          </div>
        </div>
      );
    };

    On the initial render, all four components log once.

    Now let's see what happens when we click the toggle buttons.

    What to Look for in Console

    Each toggle will log:

    Traditional Context:

    🔁 TraditionalThemeDisplay re-rendered

    🔁 TraditionalThemeToggleButton re-rendered ← re-renders unnecessarily

    Optimized Context:

    ✅ OptimizedThemeDisplay re-rendered

    (OptimizedThemeToggleButton does not log anything because it never re-renders)

    Optimizing with useMemo

    We can optimize the traditional approach somewhat by using useMemo to prevent the context value object from being recreated on every render:

    function OptimizedTraditionalProvider({ children }) {
      const [theme, setTheme] = useState("light");
    
      // Memoize the value object to prevent unnecessary context changes
      const value = useMemo(
        () => ({
          theme,
          setTheme,
        }),
        [theme]
      );
    
      return (
        <TraditionalThemeContext.Provider value={value}>
          {children}
        </TraditionalThemeContext.Provider>
      );
    }

    This helps, but still has the fundamental issue that components consuming only setTheme will rerender when theme changes. The split context approach solves this problem more elegantly.

    When to Use Context Splitting

    Context splitting is particularly valuable when:

    1. You have many components that only need to update state but don’t need to read it
    2. You have expensive components that should only rerender when absolutely necessary
    3. Your app has deep component trees where performance optimization matters

    Potential Downsides

    While context splitting is powerful, it does come with some trade-offs:

    1. Increased Complexity — Managing two contexts instead of one adds some boilerplate
    2. Provider Nesting — You end up with more nested providers in your component tree
    3. Mental Overhead — Developers need to choose the right context for each use case

    Custom Hooks for Clean API

    To make this pattern more developer-friendly, you can create custom hooks:

    function useTheme() {
      return useContext(ThemeContext);
    }
    
    function useSetTheme() {
      return useContext(ThemeSetterContext);
    }
    
    // Usage
    function MyComponent() {
      const theme = useTheme();
      const setTheme = useSetTheme();
      // ...
    }

    Measuring the Performance Impact

    When you run the demo code provided above, you’ll see a clear difference in render counts:

    1. With the traditional context, both the reader and toggler components rerender when the theme changes
    2. With the split context, only the reader rerenders while the toggler’s render count stays the same

    This performance difference might seem small in a simple example, but in a real application with dozens or hundreds of components consuming context, the impact can be substantial.

    Conclusion 🚀

    Context splitting is a powerful technique for optimizing React applications that use the Context API. By separating your state and setter functions into different contexts, you can ensure components only rerender when the specific data they consume changes.

    While this technique adds some complexity to your codebase, the performance benefits can be visible in larger applications.

  • How to Cache in React and Next.js Apps with Best Practices for 2025

    How to Cache in React and Next.js Apps with Best Practices for 2025

    In modern web development, speed and efficiency are important. Whether you’re building with React or using Next.js, caching has become one of the most important techniques for improving performance, reducing server load, and making user experience better.

    With the latest updates in Next.js and advancements in the React ecosystem, caching strategies have improved, and learning them is key for any serious developer. In this blog, we’ll learn how caching works in both React and Next.js, go through best practices, and highlight real-world examples that you can apply today.

    What is Caching?

    Caching refers to the process of storing data temporarily so future requests can be served faster. In the context of web applications, caching can occur at various levels:

    • Browser caching (storing static assets)
    • Client-side data caching (with libraries like SWR or React Query)
    • Server-side caching (Next.js API routes or server actions)
    • CDN caching (via edge networks)

    Effective caching minimizes redundant data fetching, accelerates loading times, and improves the perceived performance of your application.

    Caching in React Applications

    React doesn’t have built-in caching, but the community provides powerful tools to manage cache effectively on the client side.

    1. React Query and SWR for Data Caching

    These libraries help cache remote data on the client side and reduce unnecessary requests:

    import useSWR from "swr";
    
    const fetcher = (url: string) => fetch(url).then((res) => res.json());
    
    export default function User() {
      const { data, error } = useSWR("/api/user", fetcher);
    
      if (error) return <div>Failed to load</div>;
      if (!data) return <div>Loading...</div>;
      return <div>Hello {data.name}</div>;
    }

    Best Practices:

    • Tune revalidation behavior with options like revalidateOnFocus and dedupingInterval (see the sketch below)
    • Use optimistic updates for a snappy UI
    • Preload data when possible
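
    A minimal sketch of tuning those options on a useSWR call (the interval values are arbitrary):

    const { data } = useSWR("/api/user", fetcher, {
      revalidateOnFocus: true, // refetch when the tab regains focus
      dedupingInterval: 2000, // ignore duplicate requests fired within 2 seconds
      refreshInterval: 60000, // background refresh every minute
    });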

    2. Memoization for Component-Level Caching

    For expensive computations and rendering logic:

    import { useMemo } from "react";
    
    const ExpensiveComponent = ({ items }) => {
      const sortedItems = useMemo(() => items.sort(), [items]);
      return <List items={sortedItems} />;
    };

    3. LocalStorage and SessionStorage

    Persisting client state across sessions:

    useEffect(() => {
      const cachedData = localStorage.getItem("userData");
      if (cachedData) {
        setUser(JSON.parse(cachedData));
      }
    }, []);
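
    And the matching write side, persisting the state whenever it changes (the user state variable is assumed from the snippet above):

    useEffect(() => {
      if (user) {
        localStorage.setItem("userData", JSON.stringify(user));
      }
    }, [user]);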

    Server-Side Caching in Next.js (Latest App Router)

    With the App Router introduced in Next.js 13 and stabilized in v14+, server actions and data caching have become much more robust and declarative.

    1. Caching with fetch and cache Option

    Next.js allows caching behavior to be specified per request:

    export async function getProduct(productId: string) {
      const res = await fetch(`https://api.example.com/products/${productId}`, {
        next: { revalidate: 60 }, // ISR (Incremental Static Regeneration)
      });
      return res.json();
    }

    Best Practices:

    • Use cache: 'force-cache' for static content
    • Use revalidate to regenerate content periodically
    • Use cache: 'no-store' for dynamic or user-specific data
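
    For example, the same fetch call can be made fully static or fully dynamic just by switching the cache option (the URLs are placeholders):

    // Cached until the next deployment or manual revalidation (static content)
    const nav = await fetch("https://api.example.com/nav", {
      cache: "force-cache",
    });

    // Never cached, always fetched at request time (dynamic, user-specific data)
    const cart = await fetch("https://api.example.com/cart", {
      cache: "no-store",
    });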

    2. Using Server Actions and React Server Components (RSC)

    // app/actions.ts
    "use server";
    
    export async function saveData(formData: FormData) {
      const name = formData.get("name");
      // Save to database or perform API calls
    }

    Server Actions in the App Router let you run server-side logic and fetch results inside React Server Components without shipping extra JavaScript to the client.

    3. Using generateStaticParams and generateMetadata

    These methods help Next.js know which routes to pre-build and cache efficiently:

    export async function generateStaticParams() {
      const products = await fetchProducts();
      return products.map((product) => ({ id: product.id }));
    }

    Cache Invalidation Strategies

    Proper cache invalidation ensures that stale data is replaced with up-to-date content:

    • Time-based (revalidate: 60 seconds)
    • On-demand revalidation (revalidatePath, or res.revalidate in Pages Router API routes)
    • Tag-based revalidation (revalidateTag combined with fetch tags; see the sketch below)
    • Mutations trigger refetch in SWR/React Query
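
    A minimal sketch of on-demand revalidation with the App Router helpers (the tag name and path are assumptions):

    // app/actions/revalidate.ts
    "use server";

    import { revalidatePath, revalidateTag } from "next/cache";

    export async function refreshProducts() {
      revalidateTag("products"); // re-fetch every request tagged with { next: { tags: ["products"] } }
      revalidatePath("/products"); // or regenerate a specific route
    }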

    CDN and Edge Caching with Next.js

    Vercel and other hosting providers like Netlify and Cloudflare deploy Next.js apps globally. Edge caching improves load time by serving users from the nearest region.

    Tips:

    • Leverage Edge Functions for dynamic personalization
    • Use headers like Cache-Control effectively
    • Deploy static assets via CDN for better global performance
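
    A minimal sketch of setting Cache-Control headers from next.config.js (the source pattern and max-age are assumptions):

    // next.config.js
    module.exports = {
      async headers() {
        return [
          {
            source: "/static/:path*",
            headers: [
              {
                key: "Cache-Control",
                value: "public, max-age=31536000, immutable",
              },
            ],
          },
        ];
      },
    };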

    Final Best Practices

    • Prefer static rendering where possible
    • Cache API calls both on server and client
    • Use persistent cache (IndexedDB/localStorage) when applicable
    • Memoize expensive computations
    • Profile and audit cache hits/misses with dev tools

    Conclusion

    Caching in React and Next.js is no longer optional — it’s essential for delivering fast, resilient, and scalable applications. Whether you’re fetching data client-side or leveraging powerful server-side features in Next.js App Router, the right caching strategy can drastically improve your app’s performance and user satisfaction. As frameworks evolve, staying updated with caching best practices ensures your apps remain performant and competitive.

    By applying these techniques, you not only enhance the speed and reliability of your applications but also reduce infrastructure costs and improve SEO outcomes. Start caching smartly today and take your web performance to the next level.

  • Top 7 Features in Next.js 15 That Will Supercharge Your Web Apps

    Top 7 Features in Next.js 15 That Will Supercharge Your Web Apps

    In today’s fast-paced web ecosystem, developers need tools that are flexible, performant, and future-ready. Next.js 15 delivers on all fronts. Whether you’re building static websites, dynamic dashboards, or enterprise-grade applications, this version introduces groundbreaking features that take developer productivity and user experience to the next level.

    In this post, we’ll walk through the top 7 features in Next.js 15 that are engineered to supercharge your web apps — plus practical use cases, code examples, and why they matter.

    1. 🔄 React Server Actions (Stable with React 19)

    Say goodbye to complex API routes.
    Next.js 15 supports React Server Actions, allowing you to handle server logic directly inside your component files.

    🚀 How it works:

    // Inside your Server Component file
    export async function saveForm(data) {
      "use server";
      await db.save(data); // `db` stands in for your data layer
    }

    🧠 Why it matters:

    • No need to create separate api/ endpoints.
    • Full type safety with server logic co-located.
    • Less client-side JavaScript shipped.

    Ideal for: Form submissions, database updates, authenticated mutations.

    2. 🧭 Stable App Router with Layouts and Nested Routing

    Introduced in v13 and now fully stable, the app/ directory in Next.js 15 gives you modular routing with nested layouts, co-located data fetching, and component-based architecture.

    📁 Folder structure:

    app/
      layout.tsx
      page.tsx
      dashboard/
        layout.tsx
        page.tsx

    🎯 Why it matters:

    • Improved scalability for large apps
    • Built-in support for error boundaries and loading states
    • Cleaner structure that mirrors component trees

    Ideal for: Scalable dashboards, admin panels, modular websites.
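
    A minimal sketch of a nested layout that wraps everything under /dashboard (the markup is illustrative):

    // app/dashboard/layout.tsx
    import type { ReactNode } from "react";

    export default function DashboardLayout({ children }: { children: ReactNode }) {
      return (
        <section>
          <nav>Dashboard navigation</nav>
          {children} {/* dashboard/page.tsx and nested routes render here */}
        </section>
      );
    }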

    3. ⚙️ Partial Prerendering (PPR)

    Static + Dynamic rendering in one page? Yes, please.

    Next.js 15 introduces Partial Prerendering, an experimental feature that allows you to render part of a page statically and the rest dynamically.

    💡 Use case:

    Your homepage might have:

    • A statically rendered hero section
    • A dynamic, user-personalized feed

    🧠 Why it matters:

    • Faster load times for static content
    • Seamless hydration for dynamic sections
    • Enhanced user experience without trade-offs
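
    Partial Prerendering is opt-in. As a sketch, recent releases expose it through an experimental flag in next.config.js; the exact flag name and accepted values may change, so treat this as an assumption and check the current docs:

    // next.config.js
    module.exports = {
      experimental: {
        ppr: "incremental", // opt in route by route; `true` enables it everywhere
      },
    };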

    4. ⚡️ Turbopack (Improved Performance)

    Turbopack, Vercel’s Rust-based successor to Webpack, continues to mature in Next.js 15. It offers:

    • Blazing-fast cold starts
    • Incremental compilation
    • Near-instant HMR (Hot Module Reloading)

    🧪 How to enable:

    next dev --turbo

    🚀 Why it matters:

    • 10x faster rebuilds compared to Webpack
    • Smooth DX for teams working on large monorepos

    Note: Still experimental but highly promising.

    5. 🖼️ Smarter <Image /> Component

    Image optimization just got smarter. The updated next/image now supports:

    • Native lazy loading
    • Blur-up placeholders
    • AVIF + WebP support out of the box

    🧠 Why it matters:

    • Faster Core Web Vitals (especially LCP)
    • Reduced bandwidth and better UX
    • Simplified image management
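
    A typical usage sketch with a blur-up placeholder (the import alias and image file are assumptions):

    import Image from "next/image";
    import hero from "@/public/hero.jpg"; // static import provides width, height, and blurDataURL

    export default function Hero() {
      return (
        <Image
          src={hero}
          alt="Product hero"
          placeholder="blur" // blur-up placeholder while the full image loads
          priority // preload above-the-fold images to improve LCP
        />
      );
    }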

    6. 🌐 Edge Middleware Enhancements

    Next.js 15 improves the DX around Edge Middleware, allowing you to run logic at the edge without cold starts or serverless latency.

    📦 Use cases:

    • A/B Testing
    • Geolocation-based redirects
    • Auth checks at the CDN level

    🔥 Improvements:

    • Better logging and error traces
    • Enhanced compatibility with dynamic routes
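
    A minimal sketch of an auth check at the edge (the cookie name and protected path are assumptions):

    // middleware.ts
    import { NextResponse } from "next/server";
    import type { NextRequest } from "next/server";

    export function middleware(request: NextRequest) {
      const hasSession = request.cookies.has("session");
      if (!hasSession && request.nextUrl.pathname.startsWith("/dashboard")) {
        return NextResponse.redirect(new URL("/login", request.url));
      }
      return NextResponse.next();
    }

    export const config = { matcher: ["/dashboard/:path*"] };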

    7. 🧪 React 19 Compatibility

    Next.js 15 is one of the first frameworks fully compatible with React 19, bringing:

    • React Compiler support (in alpha)
    • Enhanced Concurrent Features
    • Better memory and rendering optimizations

    🧠 Why it matters:

    You can future-proof your app now and explore experimental features with a stable foundation.

    Conclusion

    Next.js 15 isn’t just about new APIs — it’s about enabling faster, more scalable, and more maintainable apps with less effort. These 7 features are engineered to help modern teams:

    ✅ Ship faster
    ✅ Write less code
    ✅ Deliver better performance

    Whether you’re migrating a legacy React app or starting fresh, Next.js 15 equips you with the tools to build next-gen experiences today.

    Ready to Supercharge Your Stack?

    Which feature are you most excited about?
    Leave a comment, share this post with your team, or try upgrading today:

    npm install next@latest

    👉 Follow for more Next.js deep-dives and practical guides.

  • How to Use Server Actions in Next js with Best Practices

    How to Use Server Actions in Next js with Best Practices

    Introduction:

    Next.js continues to improve with new features that enhance developer experience and performance. One of those features is Server Actions, introduced to simplify server-side logic without creating separate API routes. Server Actions help you keep your components cleaner, improve security, and provide a better way to handle mutations in both Server and Client Components.

    By mastering Server Actions, developers can create fast, reliable, and maintainable full-stack applications with ease.

    What Are Server Actions?

    Server Actions are asynchronous functions that run only on the server. They are invoked directly from your React components and can handle tasks like database mutations, form processing, and more. These actions simplify server-client interactions by eliminating the need for explicit API endpoints.

    To declare a Server Action, use the "use server" directive:

    // app/actions/user.ts
    "use server";
    
    export async function createUser(formData: FormData) {
      const name = formData.get("name");
      const email = formData.get("email");
      // Save to database here
      return { success: true };
    }

    Using Server Actions in Server Components

    In Server Components, you can define Server Actions inline or import them from a separate file. This is especially useful for quick forms or specific mutations tied to one component.

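    A minimal sketch of the inline variant, declaring the action inside a Server Component (the page path and field name are assumptions):

    // app/users/new/page.tsx
    export default function NewUserPage() {
      async function createUserInline(formData: FormData) {
        "use server";
        const name = formData.get("name");
        // Save to database here
      }

      return (
        <form action={createUserInline}>
          <input type="text" name="name" />
          <button type="submit">Create</button>
        </form>
      );
    }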

    Using Server Actions in Client Components

    You can also use Server Actions in Client Components by importing them from a server-marked module.

    // app/actions/user.ts
    "use server";
    
    export async function updateUser(formData: FormData) {
      const id = formData.get("id");
      const name = formData.get("name");
      // Update user in DB
      return { success: true };
    }
    
    // app/components/EditUserForm.tsx
    ("use client");
    
    import { updateUser } from "@/app/actions/user";
    
    export default function EditUserForm() {
      return (
        <form action={updateUser}>
          <input type="hidden" name="id" value="123" />
          <input type="text" name="name" />
          <button type="submit">Update</button>
        </form>
      );
    }

    Binding Parameters to Server Actions

    You can pass arguments to Server Actions using .bind(), making them dynamic and reusable.

    // app/actions/user.ts
    "use server";
    
    export async function deleteUser(userId: string, formData: FormData) {
      // Delete user by ID
    }
    
    // app/components/DeleteUserButton.tsx
    ("use client");
    
    import { deleteUser } from "@/app/actions/user";
    
    export default function DeleteUserButton({ userId }: { userId: string }) {
      const deleteWithId = deleteUser.bind(null, userId);
    
      return (
        <form action={deleteWithId}>
          <button type="submit">Delete</button>
        </form>
      );
    }

    Best Practices for Server Actions

    Separation of Concerns

    • Keep your logic and UI separate. Define Server Actions in dedicated files and import them where needed.

    Organize by Domain

    • Group your actions by feature or domain (actions/user.ts, actions/orders.ts) for better structure.

    Error Handling

    • Use try-catch blocks inside Server Actions to gracefully handle failures and log issues.

    Type Safety

    • Use TypeScript to enforce correct types for FormData fields and return values.

    Secure Operations

    • Always verify user sessions or tokens before making sensitive changes, even inside Server Actions.

    Avoid Logic Duplication

    • Reuse Server Actions across components to prevent writing the same logic multiple times.

    Validate Input

    • Use libraries like Zod or Yup to validate incoming data and avoid corrupting your database (a combined sketch follows below).
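
    A sketch that combines the error-handling and validation practices above, using Zod (the schema fields are assumptions):

    // app/actions/user.ts
    "use server";

    import { z } from "zod";

    const userSchema = z.object({
      name: z.string().min(1),
      email: z.string().email(),
    });

    export async function createUser(formData: FormData) {
      const parsed = userSchema.safeParse({
        name: formData.get("name"),
        email: formData.get("email"),
      });

      if (!parsed.success) {
        return { success: false, errors: parsed.error.flatten().fieldErrors };
      }

      try {
        // Save parsed.data to the database here
        return { success: true };
      } catch (error) {
        console.error("createUser failed", error);
        return { success: false, errors: { form: ["Something went wrong"] } };
      }
    }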

    Final Thoughts

    Server Actions offer a powerful pattern for managing server-side logic in a way that feels native to React and Next.js. They simplify the code, reduce the boilerplate of API routes, and make it easier to maintain a full-stack application.

    By following the best practices outlined above, you’ll write cleaner, more scalable code that benefits both your team and your users.

  • 9 Must-Know Advanced Uses of Promises

    9 Must-Know Advanced Uses of Promises

    Overview

    The Promise object represents the eventual completion (or failure) of an asynchronous operation and its resulting value.

    A Promise is always in one of the following states:

    • Pending: The initial state, neither fulfilled nor rejected.
    • Fulfilled: The operation completed successfully.
    • Rejected: The operation failed.

    Unlike “old-style” callbacks, using Promises has the following conventions:

    • Callback functions will not be called until the current event loop completes.
    • Even if the asynchronous operation completes (successfully or unsuccessfully), callbacks added via then() afterward will still be called.
    • You can add multiple callbacks by calling then() multiple times, and they will be executed in the order they were added.

    The characteristic feature of Promises is chaining.

    Usage

    1. Promise.all([])

    When all Promise instances in the array succeed, it returns an array of success results in the order they were requested. If any Promise fails, it enters the failure callback.

    const p1 = new Promise((resolve) => {
      resolve(1);
    });
    const p2 = new Promise((resolve) => {
      resolve(1);
    });
    const p3 = Promise.resolve("ok");
    
    // If all promises fulfill, the array of 3 values is passed to the success callback, in order.
    Promise.all([p1, p2, p3]).then(
      (values) => console.log(values), // [1, 1, "ok"]
      (error) => console.log(error) // if any promise rejects, this runs with the first rejection reason
    );

    2. Promise.allSettled([])

    Promise.allSettled never rejects; it resolves with an array describing the final status of each Promise instance in the input array.

    const p1 = Promise.resolve(1);
    const p2 = Promise.reject(-1);
    Promise.allSettled([p1, p2]).then((res) => {
      console.log(res);
    });
    // Output:
    /*
       [
        { status: 'fulfilled', value: 1 },
        { status: 'rejected', reason: -1 }
       ] 
    */

    3. Promise.any([])

    If any Promise in the input array fulfills, the returned instance will become fulfilled and return the value of the first fulfilled promise. If all are rejected, it will become rejected.

    const p1 = new Promise((resolve, reject) => {
      reject(1);
    });
    const p2 = new Promise((resolve, reject) => {
      reject(2);
    });
    const p3 = Promise.resolve("ok");
    
    Promise.any([p1, p2, p3]).then(
      (r) => console.log(r), // Outputs 'ok'
      (e) => console.log(e)
    );

    4. Promise.race([])

    As soon as any Promise in the array changes state, the state of the race method will change accordingly; the value of the first changed Promise will be passed to the race method’s callback.

    const p1 = new Promise((resolve) => {
      setTimeout(() => {
        resolve(10);
      }, 3000);
    });
    const p2 = new Promise((resolve, reject) => {
      setTimeout(() => {
        throw new Error("I encountered an error");
      }, 2000);
    });
    
    Promise.race([p1, p2]).then(
      (v) => console.log(v), // Outputs 10
      (e) => console.log(e)
    );

    Throwing inside the setTimeout callback does not reject p2 (the error happens after the executor has already returned), so the race is still settled by p1.

    Advanced Uses

    Here are 9 advanced uses that help developers handle asynchronous operations more efficiently and elegantly.

    1. Concurrency Control

    Using Promise.all allows for parallel execution of multiple Promises, but to control the number of simultaneous requests, you can implement a concurrency control function.

    const concurrentPromises = (promises, limit) => {
      return new Promise((resolve, reject) => {
        if (promises.length === 0) return resolve([]);
        let nextIndex = 0; // next promise to start
        let finished = 0; // how many have settled
        const result = [];
        const executor = () => {
          if (nextIndex >= promises.length) return;
          const index = nextIndex++;
          Promise.resolve(promises[index])
            .then((value) => {
              result[index] = value; // keep results in input order
              finished++;
              if (finished === promises.length) {
                resolve(result);
              } else {
                executor(); // start the next pending promise
              }
            })
            .catch(reject);
        };
        for (let j = 0; j < limit && j < promises.length; j++) {
          executor();
        }
      });
    };

    2. Promise Timeout

    Sometimes, you may want a Promise to automatically reject if it does not resolve within a certain time frame. This can be implemented as follows.

    const promiseWithTimeout = (promise, ms) =>
      Promise.race([
        promise,
        new Promise((resolve, reject) =>
          setTimeout(() => reject(new Error("Timeout after " + ms + "ms")), ms)
        ),
      ]);
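
    Usage sketch (the URL is a placeholder):

    promiseWithTimeout(fetch("https://api.example.com/slow"), 5000)
      .then((res) => res.json())
      .then((data) => console.log(data))
      .catch((err) => console.error(err.message)); // "Timeout after 5000ms" if the request is too slow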

    3. Cancelling Promises

    Native JavaScript Promises cannot be cancelled, but you can simulate cancellation by introducing controllable interrupt logic.

    const cancellablePromise = (promise) => {
      let isCanceled = false;
      const wrappedPromise = new Promise((resolve, reject) => {
        promise.then(
          (value) => (isCanceled ? reject({ isCanceled, value }) : resolve(value)),
          (error) => (isCanceled ? reject({ isCanceled, error }) : reject(error))
        );
      });
      return {
        promise: wrappedPromise,
        cancel() {
          isCanceled = true;
        },
      };
    };

    4. Sequential Execution of Promise Array

    Sometimes you need to execute a series of Promises in order, ensuring that the previous asynchronous operation completes before starting the next.

    // Each item must be a function that returns a Promise, so the next task
    // is only created once the previous one has finished.
    const sequencePromises = (promiseFactories) =>
      promiseFactories.reduce(
        (prev, next) => prev.then(() => next()),
        Promise.resolve()
      );
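
    A minimal usage sketch, with a hypothetical delay helper standing in for real asynchronous tasks:

    const delay = (ms, value) =>
      new Promise((resolve) => setTimeout(() => resolve(value), ms));

    const tasks = [
      () => delay(300, "first").then(console.log),
      () => delay(100, "second").then(console.log),
      () => delay(200, "third").then(console.log),
    ];

    sequencePromises(tasks); // logs "first", "second", "third" in that order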

    5. Retry Logic for Promises

    When a Promise is rejected due to temporary errors, you may want to retry its execution.

    const retryPromise = (promiseFn, maxAttempts, interval) => {
      return new Promise((resolve, reject) => {
        const attempt = (attemptNumber) => {
          if (attemptNumber === maxAttempts) {
            reject(new Error("Max attempts reached"));
            return;
          }
          promiseFn()
            .then(resolve)
            .catch(() => {
              setTimeout(() => {
                attempt(attemptNumber + 1);
              }, interval);
            });
        };
        attempt(0);
      });
    };

    6. Ensuring a Promise Resolves Only Once

    In some cases, you may want to ensure that a Promise resolves only once, even if resolve is called multiple times (native Promises already ignore the extra calls; the explicit guard simply makes that intent visible).

    const onceResolvedPromise = (executor) => {
      let isResolved = false;
      return new Promise((resolve, reject) => {
        executor((value) => {
          if (!isResolved) {
            isResolved = true;
            resolve(value);
          }
        }, reject);
      });
    };

    7. Using Promises Instead of Callbacks

    Promises provide a more standardized and convenient way to handle asynchronous operations by replacing callback functions.

    const callbackToPromise = (fn, ...args) => {
      return new Promise((resolve, reject) => {
        fn(...args, (error, result) => {
          if (error) {
            reject(error);
          } else {
            resolve(result);
          }
        });
      });
    };

    8. Dynamically Generating a Promise Chain

    In some situations, you may need to dynamically create a series of Promise chains based on different conditions.

    const tasks = [task1, task2, task3]; // Array of asynchronous tasks
    
    const promiseChain = tasks.reduce((chain, currentTask) => {
      return chain.then(currentTask);
    }, Promise.resolve());

    9. Using Promises to Implement a Simple Asynchronous Lock

    Although JavaScript is single-threaded, concurrent asynchronous tasks can still race on shared state. You can use Promises to implement a simple asynchronous lock, ensuring that only one task accesses a shared resource at a time.

    let lock = Promise.resolve();
    
    const acquireLock = () => {
      let release;
      const waitLock = new Promise((resolve) => {
        release = resolve;
      });
      const tryAcquireLock = lock.then(() => release);
      lock = waitLock;
      return tryAcquireLock;
    };

    This code creates and resolves Promises continuously, implementing a simple FIFO queue to ensure that only one task can access shared resources. The lock variable represents whether there is a task currently executing, always pointing to the Promise of the task in progress. The acquireLock function requests permission to execute and creates a new Promise to wait for the current task to finish.
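
    A minimal usage sketch, with a hypothetical updateSharedResource task:

    async function updateSharedResource() {
      const release = await acquireLock(); // wait until it is our turn
      try {
        // ... read and modify the shared resource here ...
      } finally {
        release(); // let the next queued task proceed
      }
    }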

    Conclusion

    Promises are an indispensable part of modern JavaScript asynchronous programming. Mastering their advanced techniques will greatly enhance development efficiency and code quality. With the various methods outlined above, developers can handle complex asynchronous scenarios more confidently and write more readable, elegant, and robust code.

  • Why Do We Need useLayoutEffect?

    Why Do We Need useLayoutEffect?

    If you have worked at all with React hooks before, then you have used the useEffect hook extensively. You may not know, though, that there is a second type of useEffect hook called useLayoutEffect. In this article I will be explaining the useLayoutEffect hook and comparing it to useEffect. If you are not already familiar with useEffect, check out my full article on it here.

    The Biggest Difference

    Everything about these two hooks is nearly identical. The syntax for them is exactly the same and they are both used to run side effects when things change in a component. The only real difference is when the code inside the hook is actually run.

    In useEffect the code in the hook is run asynchronously after React renders the component. This means the code for this hook can run after the DOM is painted to the screen.

    The useLayoutEffect hook runs synchronously directly after React calculates the DOM changes but before it paints those changes to the screen. This means that useLayoutEffect code will delay the painting of a component since it runs synchronously before painting, while useEffect is asynchronous and will not delay the paint.

    Why Use useLayoutEffect?

    So if useLayoutEffect will delay the painting of a component, why would we want to use it? The biggest reason for using useLayoutEffect is when the code being run directly modifies the DOM in a way that is observable to the user.

    For example, if I needed to change the background color of a DOM element as a side effect it would be best to use useLayoutEffect since we are directly modifying the DOM and the changes are observable to the user. If we were to use useEffect we could run into an issue where the DOM is painted before the useEffect code is run. This would cause the DOM element to be the wrong color at first and then change to the right color due to the useEffect code.
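
    A minimal sketch of the background-color example above (the component and prop names are made up for illustration):

    import { useLayoutEffect, useRef } from "react";

    function Highlight({ color }) {
      const boxRef = useRef(null);

      // Runs after React updates the DOM but before the browser paints,
      // so the user never sees the box in its previous color.
      useLayoutEffect(() => {
        boxRef.current.style.backgroundColor = color;
      }, [color]);

      return <div ref={boxRef}>Highlighted content</div>;
    }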

    You Probably Don’t Need useLayoutEffect

    As you can see from the previous example, use cases for useLayoutEffect are pretty niche. In general it is best to always use useEffect and only switch to useLayoutEffect when you actually run into an issue with useEffect causing flickers in your DOM or incorrect results.

    Conclusion

    useLayoutEffect is a very useful hook for specific situations, but in most cases you will be perfectly fine using useEffect. Also, since useEffect does not block painting, it is the better option whenever it produces correct results.

  • Finally Master Next.js’s Most Complex Feature – Caching

    Finally Master Next.js’s Most Complex Feature – Caching

    Introduction

    Next.js is an amazing framework that makes writing complex server rendered React apps much easier, but there is one huge problem. Next.js’s caching mechanism is extremely complicated and can easily lead to bugs in your code that are difficult to debug and fix.

    If you don’t understand how Next.js’s caching mechanism works it feels like you are constantly fighting Next.js instead of reaping the amazing benefits of Next.js’s powerful caching. That is why in this article I am going to break down exactly how every part of Next.js’s cache works so you can stop fighting it and finally take advantage of its incredible performance gains.

    Before we get started, here is an image of how all the caches in Next.js interact with one another. I know this is overwhelming, but by the end of this article you will understand exactly what each step in this process does and how they all interact.

    [Diagram: cache-interactions]

    In the image above, you probably noticed the terms “Build Time” and “Request Time”. To make sure these do not cause any confusion throughout the article, let me explain them before we move forward.

    Build time refers to when an application is built and deployed. Anything that is cached during this process (mostly static content) will be part of the build time cache. The build time cache is only updated when the application is rebuilt and redeployed.

    Request time refers to when a user requests a page. Typically, data cached at request time is dynamic as we want to fetch it directly from the data source when the user makes requests.

    Next.js Caching Mechanisms

    Understanding Next.js’s caching can seem daunting at first. This is because it is composed of four distinct caching mechanisms, each operating at a different stage of your application and interacting in ways that can initially appear complex.

    Here are the four caching mechanisms in Next.js:

    1. Request Memoization
    2. Data Cache
    3. Full Route Cache
    4. Router Cache

    For each of the above, I will delve into their specific roles, where they’re stored, their duration, and how you can effectively manage them, including ways to invalidate the cache and opt out. By the end of this exploration, you’ll have a solid grasp of how these mechanisms work together to optimize Next.js’s performance.

    Request Memoization

    One common problem in React is when you need to display the same information in multiple places on the same page. The easiest option is to just fetch the data in both places that it is needed, but this is not ideal since you are now making two requests to your server to get the same data. This is where Request Memoization comes in.

    Request Memoization is a React feature that actually caches every fetch request you make in a server component during the render cycle (which basically just refers to the process of rendering all the components on a page). This means that if you make a fetch request in one component and then make the same fetch request in another component, the second fetch request will not actually make a request to the server. Instead, it will use the cached value from the first fetch request.

    async function fetchUserData(userId) {
      // The `fetch` function is automatically cached by Next.js
      const res = await fetch(`https://api.example.com/users/${userId}`);
      return res.json();
    }
    
    export default async function Page({ params }) {
      const user = await fetchUserData(params.id);
    
      return (
        <>
          <h1>{user.name}</h1>
          <UserDetails id={params.id} />
        </>
      );
    }
    
    async function UserDetails({ id }) {
      const user = await fetchUserData(id);
      return <p>{user.name}</p>;
    }

    In the code above, we have two components: Page and UserDetails. The first call to the fetchUserData() function in Page makes a fetch request just like normal, but the return value of that fetch request is stored in the Request Memoization cache. The second time fetchUserData is called, by the UserDetails component, it does not actually make a new fetch request. Instead, it uses the memoized value from the first time this fetch request was made. This small optimization drastically increases the performance of your application by reducing the number of requests made to your server, and it also makes your components easier to write since you don’t need to worry about optimizing your fetch requests.

    It is important to know that this cache is stored entirely on the server which means it will only cache fetch requests made from your server components. Also, this cache is completely cleared at the start of each request which means it is only valid for the duration of a single render cycle. This is not an issue, though, as the entire purpose of this cache is to reduce duplicate fetch requests within a single render cycle.

    Lastly, it is important to note that this cache will only cache fetch requests made with the GET method. A fetch request must also have the exact same parameters (URL and options) passed to it in order to be memoized.

    Caching Non-fetch Requests

    By default React only caches fetch requests, but there are times when you might want to cache other types of requests, such as database queries. To do this, we can use React’s cache function. All you need to do is pass the function you want to memoize to cache, and it will return a memoized version of that function.

    import { cache } from "react";
    import { queryDatabase } from "./databaseClient";
    
    export const fetchUserData = cache((userId) => {
      // Direct database query
      return queryDatabase("SELECT * FROM users WHERE id = ?", [userId]);
    });

    In the code above, the first time fetchUserData() is called, it queries the database directly, as there is no cached result yet. But the next time this function is called with the same userId, the data is retrieved from the cache. Just like with fetch, this memoization is valid only for the duration of a single render pass and works identically to the fetch memoization.

    Revalidation

    Revalidation is the process of clearing out a cache and updating it with new data. This is important because if you never update a cache it will eventually become stale and out of date. Luckily, we don’t have to worry about this with Request Memoization: since this cache is only valid for the duration of a single request, we never have to revalidate it.

    Opting out

    To opt out of this cache, we can pass in an AbortController signal as a parameter to the fetch request.

    async function fetchUserData(userId) {
      const { signal } = new AbortController();
      const res = await fetch(`https://api.example.com/users/${userId}`, {
        signal,
      });
      return res.json();
    }

    Doing this will tell React not to cache this fetch request in the Request Memoization cache, but I would not recommend doing this unless you have a very good reason to, as this cache is very useful and can drastically improve the performance of your application.

    The diagram below provides a visual summary of how Request Memoization works.

    [Diagram: request-memo]

    Request Memoization is technically a React feature, not exclusive to Next.js. I included it as part of the Next.js caching mechanisms, though, since it is necessary to understand in order to comprehend the full Next.js caching process.

    Data Cache

    Request Memoization is great for making your app more performant by preventing duplicate fetch requests, but when it comes to caching data across requests/users it is useless. This is where the Data Cache comes in. It is the last cache that is hit by Next.js before it actually fetches your data from an API or database, and it is persistent across multiple requests/users.

    Imagine we have a simple page that queries an API to get guide data on a specific city.

    export default async function Page({ params }) {
      const city = params.city;
      const res = await fetch(`https://api.globetrotter.com/guides/${city}`);
      const guideData = await res.json();
    
      return (
        <div>
          <h1>{guideData.title}</h1>
          <p>{guideData.content}</p>
          {/* Render the guide data */}
        </div>
      );
    }

    This guide data really doesn’t change often, so it doesn’t make sense to fetch it fresh every time someone needs it. Instead, we should cache that data across all requests so it will load instantly for future users. Normally, this would be a pain to implement, but luckily Next.js does this automatically for us with the Data Cache.

    By default every fetch request in your server components will be cached in the Data Cache (which is stored on the server) and will be used for all future requests. This means that if you have 100 users all requesting the same data, Next.js will only make one fetch request to your API and then use that cached data for all 100 users. This is a huge performance boost.

    Duration

    The Data Cache is different from the Request Memoization cache in that data from this cache is never cleared unless you specifically tell Next.js to do so. This data is even persisted across deployments, which means that if you deploy a new version of your application, the Data Cache will not be cleared.

    Revalidation

    Since the Data Cache is never cleared by Next.js we need a way to opt into revalidation which is just the process of removing data from the cache. In Next.js there are two different ways to do this: time-based revalidation and on-demand revalidation.

    Time-based Revalidation

    The easiest way to revalidate the Data Cache is to just automatically clear the cache after a set period of time. This can be done in two ways.

    const res = await fetch(`https://api.globetrotter.com/guides/${city}`, {
      next: { revalidate: 3600 },
    });

    The first way is to pass the next.revalidate option to your fetch request. This will tell Next.js how many seconds to keep your data in the cache before it is considered stale. In the example above, we are telling Next.js to revalidate the cache every hour.

    The other way to set a revalidation time is to use the revalidate segment config option.

    export const revalidate = 3600;
    
    export default async function Page({ params }) {
      const city = params.city;
      const res = await fetch(`https://api.globetrotter.com/guides/${city}`);
      const guideData = await res.json();
    
      return (
        <div>
          <h1>{guideData.title}</h1>
          <p>{guideData.content}</p>
          {/* Render the guide data */}
        </div>
      );
    }

    Doing this will make all fetch requests for this page revalidate every hour unless they have their own more specific revalidation time set.

    The one important thing to understand with time-based revalidation is how it handles stale data.

    The first time a fetch request is made it will get the data and then store it in the cache. Each new fetch request that occurs within the 1 hour revalidation time we set will use that cached data and make no more fetch requests. Then after 1 hour, the first fetch request that is made will still return the cached data, but it will also execute the fetch request to get the newly updated data and store that in the cache. This means that each new fetch request after this one will use the newly cached data. This pattern is called stale-while-revalidate and is the behavior that Next.js uses.

    On-demand Revalidation

    If your data is not updated on a regular schedule, you can use on-demand revalidation to revalidate the cache only when new data is available. This is useful when you want to invalidate the cache and fetch new data only when a new article is published or a specific event occurs.

    This can be done one of two ways.

    import { revalidatePath } from "next/cache";
    
    export async function publishArticle({ city }) {
      createArticle(city);
    
      revalidatePath(`/guides/${city}`);
    }

    The revalidatePath function takes a string path and will clear the cache of all fetch requests on that route.

    If you want to be more specific about the exact fetch requests to revalidate, you can use the revalidateTag function.

    const res = await fetch(`https://api.globetrotter.com/guides/${city}`, {
      next: { tags: ["city-guides"] },
    });

    Here, we’re adding the city-guides tag to our fetch request so we can target it with revalidateTag.

    import { revalidateTag } from "next/cache";
    
    export async function publishArticle({ city }) {
      createArticle(city);
    
      revalidateTag("city-guides");
    }

    Calling revalidateTag with a tag name will clear the cache of all fetch requests with that tag.

    Opting out

    Opting out of the data cache can be done in multiple ways.

    no-store
    const res = await fetch(`https://api.globetrotter.com/guides/${city}`, {
      cache: "no-store",
    });

    By passing cache: "no-store" to your fetch request, you are telling Next.js to not cache this request in the Data Cache. This is useful when you have data that is constantly changing and you want to fetch it fresh every time.

    You can also call the noStore function to opt out of the Data Cache for everything within the scope of that function.

    import { unstable_noStore as noStore } from "next/cache";
    
    async function getGuide(city) {
      noStore();
      const res = await fetch(`https://api.globetrotter.com/guides/${city}`);
      return res.json();
    }

    Currently, this is an experimental feature which is why it is prefixed with unstable_, but it is the preferred method of opting out of the Data Cache going forward in Next.js.

    This is a really great way to opt out of caching on a per component or per function basis since all other opt out methods will opt out of the Data Cache for the entire page.

    export const dynamic = 'force-dynamic'

    If we want to change the caching behavior for an entire page and not just a specific fetch request, we can add this segment config option to the top level of our file. This will force the page to be dynamic and opt out of the Data Cache entirely.

    export const dynamic = "force-dynamic";

    export const revalidate = 0

    Another way to opt the entire page out of the Data Cache is to use the revalidate segment config option with a value of 0.

    export const revalidate = 0;

    This line is pretty much the page-level equivalent of cache: "no-store". It applies to all requests on the page, ensuring nothing gets cached.

    Caching Non-fetch Requests

    So far, we have only seen how to cache fetch requests with the Data Cache, but we can do much more than that.

    If we go back to our previous example of city guides, we might want to pull data directly from our database. For this, we can use the cache function that’s provided by Next.js. This is similar to the React cache function, except it applies to the Data Cache instead of Request Memoization.

    import { getGuides } from "./data";
    import { unstable_cache as cache } from "next/cache";
    
    const getCachedGuides = cache((city) => getGuides(city), ["guides-cache-key"]);
    
    export default async function Page({ params }) {
      const guides = await getCachedGuides(params.city);
      // ...
    }

    Currently, this is an experimental feature which is why it is prefixed with unstable_, but it is the only way to cache non-fetch requests in the Data Cache.

    The code above is short, but it can be confusing if this is the first time you are seeing the cache function.

    The cache function takes three parameters (but only two are required). The first parameter is the function you want to cache. In our case it is the getGuides function. The second parameter is the key for the cache. In order for Next.js to know which cache is which it needs a key to identify them. This key is an array of strings that must be unique for each unique cache you have. If two cache functions have the same key array passed to them they will be considered the same exact request and stored in the same cache (similar to a fetch request with the same URL and params).

    The third parameter is an optional options parameter where you can define things like a revalidation time and tags.

    In our particular code we are caching the results of our getGuides function and storing them in the cache with the key ["guides-cache-key"]. This means that if we call getCachedGuides with the same city twice, the second time it will use the cached data instead of calling getGuides again.
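
    As a rough sketch, that third options parameter could be used like this (the one-hour revalidation time and the "guides" tag are illustrative values, not from the original example):

    import { getGuides } from "./data";
    import { unstable_cache as cache } from "next/cache";

    // Revalidate the cached result every hour and tag it so it can also be
    // cleared on demand with revalidateTag("guides").
    const getCachedGuides = cache((city) => getGuides(city), ["guides-cache-key"], {
      revalidate: 3600,
      tags: ["guides"],
    });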

    Below is a diagram that walks you through how the Data Cache operates, step by step.

    [Diagram: data-cache]

    Full Route Cache

    The third type of cache is the Full Route Cache, and this one is a bit easier to understand since it is much less configurable than the Data Cache. The main reason this cache is useful is because it lets Next.js cache static pages at build time instead of having to build those static pages for each request.

    In Next.js, the pages we render to our clients consist of HTML and something called the React Server Component Payload (RSCP). The payload contains instructions for how the client components should work together with the rendered server components to render the page. The Full Route Cache stores the HTML and RSCP for static pages at build time.

    Now that we know what it stores, let’s take a look at an example.

    import Link from "next/link";
    
    async function getBlogList() {
      const blogPosts = await fetch("https://api.example.com/posts");
      return await blogPosts.json();
    }
    
    export default async function Page() {
      const blogData = await getBlogList();
    
      return (
        <div>
          <h1>Blog Posts</h1>
          <ul>
            {blogData.map((post) => (
              <li key={post.slug}>
                <Link href={`/blog/${post.slug}`}>{post.title}</Link>
                <p>{post.excerpt}</p>
              </li>
            ))}
          </ul>
        </div>
      );
    }

    In the code above, Page will be cached at build time because it does not contain any dynamic data. More specifically, its HTML and RSCP will be stored in the Full Route Cache so that the page can be served faster when a user requests it. The only way this HTML/RSCP will be updated is if we redeploy our application or manually invalidate the Data Cache that this page depends on.

    You may think that since we are making a fetch request we have dynamic data, but this fetch request is cached by Next.js in the Data Cache, so this page is actually considered static. Dynamic data is data that changes on every single request to a page, such as a dynamic URL parameter, cookies, headers, search params, etc.

    Similarly to the Data Cache, the Full Route Cache is stored on the server and persists across different requests and users, but unlike the Data Cache, it is cleared every time you redeploy your application.

    Opting out

    Opting out of the Full Route Cache can be done in two ways.

    The first way is to opt out of the Data Cache. If the data you are fetching for the page is not cached in the Data Cache then the Full Route Cache will not be used.

    The second way is to use dynamic data in your page. Dynamic data includes things such as the headers, cookies, or searchParams dynamic functions, and dynamic URL parameters such as id in /blog/[id].
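
    As a small sketch, simply reading a cookie in a server component is enough to make the route dynamic and skip the Full Route Cache (the "theme" cookie here is just a placeholder):

    import { cookies } from "next/headers";

    export default async function Page() {
      // Calling cookies() marks this route as dynamic, so it is rendered on
      // every request instead of being stored in the Full Route Cache.
      const theme = cookies().get("theme")?.value ?? "light";

      return <p>Current theme: {theme}</p>;
    }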

    The diagram below demonstrates the step-by-step process of how Full Route Cache works.

    [Diagram: full-route-cache]

    This cache only applies to production builds, since in development all pages are rendered dynamically and are therefore never stored in this cache.

    Router Cache

    This last cache is a bit unique in that it is the only cache that is stored on the client instead of on the server. It can also be the source of many bugs if not understood properly. This is because it caches routes that a user visits, so when they come back to those routes it uses the cached version and never actually makes a request to the server. While this approach is an advantage when it comes to page loading speeds, it can also be quite frustrating. Let’s take a look below at why.

    export default async function Page() {
      const blogData = await getBlogList();
    
      return (
        <div>
          <h1>Blog Posts</h1>
          <ul>
            {blogData.map((post) => (
              <li key={post.slug}>
                <Link href={`/blog/${post.slug}`}>{post.title}</Link>
                <p>{post.excerpt}</p>
              </li>
            ))}
          </ul>
        </div>
      );
    }

    In the code I have above, when the user navigates to this page, its HTML/RSCP gets stored in the Router Cache. Similarly, when they navigate to any of the /blog/${post.slug} routes, that HTML/RSCP also gets cached. This means if the user navigates back to a page they have already been to it will pull that HTML/RSCP from the Router Cache instead of making a request to the server.

    Duration

    The router cache is a bit unique in that the duration it is stored for depends on the type of route. For static routes, the cache is stored for 5 minutes, but for dynamic routes, the cache is only stored for 30 seconds. This means that if a user navigates to a static route and then comes back to it within 5 minutes, it will use the cached version. But if they come back to it after 5 minutes, it will make a request to the server to get the new HTML/RSCP. The same thing applies to dynamic routes, except the cache is only stored for 30 seconds instead of 5 minutes.

    This cache is also only stored for the user’s current session. This means that if the user closes the tab or refreshes the page, the cache will be cleared.

    You can also manually revalidate this cache by clearing the data cache from a server action using revalidatePath/revalidateTag. You can also call the router.refresh function which you get from the useRouter hook on the client. This will force the client to refetch the page you are currently on.
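
    For example, a minimal client component sketch that triggers this refresh might look like the following (the button and its label are purely illustrative):

    "use client";

    import { useRouter } from "next/navigation";

    export default function RefreshButton() {
      const router = useRouter();

      // router.refresh() refetches the current route from the server,
      // bypassing the client-side Router Cache.
      return <button onClick={() => router.refresh()}>Refresh data</button>;
    }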

    Revalidation

    We already discussed two ways of revalidation in the previous section but there are plenty of other ways to do it.

    We can revalidate the Router Cache on demand similar to how we did it for the Data Cache. This means that revalidating Data Cache using revalidatePath or revalidateTag also revalidates the Router Cache.

    Opting out

    There is no way to opt out of the Router Cache, but considering the plethora of ways to revalidate the cache it is not a big deal.

    Here is an image that provides a visual summary of how the Router Cache works.

    [Diagram: router-cache]

    Conclusion

    Having multiple caches like this can be difficult to wrap your head around, but hopefully this article was able to open your eyes to how these caches work and how they interact with one another. While the official documentation mentions that knowledge of caching is not necessary to be productive with Next.js, I think it helps a lot to understand its behavior so that you can configure the settings that work best for your particular app.