```bash
sudo mysql -u root -p
```

```sql
CREATE DATABASE wordpress;
CREATE USER 'wpuser'@'localhost' IDENTIFIED BY 'secure_password'; -- Replace with your desired password
GRANT ALL PRIVILEGES ON wordpress.* TO 'wpuser'@'localhost';
FLUSH PRIVILEGES;
EXIT;
```
Download and extract WordPress
```bash
cd /var/www/html
sudo wget https://wordpress.org/latest.tar.gz
sudo tar -xvzf latest.tar.gz
sudo rm latest.tar.gz
sudo chown -R www-data:www-data /var/www/html/wordpress
sudo chmod -R 755 /var/www/html/wordpress
sudo mv /var/www/html/wordpress/wp-config-sample.php /var/www/html/wordpress/wp-config.php
sudo nano /var/www/html/wordpress/wp-config.php
```

Update the database settings in wp-config.php:

```php
define('DB_NAME', 'wordpress');
define('DB_USER', 'wpuser');
define('DB_PASSWORD', 'secure_password'); // Replace with your password
define('DB_HOST', 'localhost');
```
Create a new Nginx configuration file for WordPress
JavaScript has evolved significantly over the years, becoming one of the most powerful and versatile programming languages in web development. However, writing efficient, scalable, and maintainable JavaScript code requires mastering advanced techniques. By leveraging modern JavaScript patterns and features, developers can improve execution speed, enhance code modularity, and simplify complex tasks.
This article explores ten essential advanced JavaScript techniques that will elevate your coding skills and help you build robust applications. From closures and destructuring to metaprogramming and memory management, these techniques will give you the edge needed to write professional-level JavaScript.
1. Asynchronous JavaScript with Async/Await
Handling asynchronous tasks effectively is crucial for modern web development. The async/await syntax provides a clean and readable way to manage asynchronous operations, replacing traditional callback-based approaches.
Improves readability by resembling synchronous code.
Simplifies error handling with try/catch.
Reduces deeply nested callback structures.
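A minimal, self-contained sketch of the pattern (the `delay` helper and the user object stand in for a real network call; they are illustrative, not from a real API):

```javascript
// `delay` simulates an asynchronous data source such as fetch
const delay = (ms, value) =>
  new Promise((resolve) => setTimeout(() => resolve(value), ms));

async function loadUserName(id) {
  try {
    // Reads like synchronous code, but does not block the main thread
    const user = await delay(10, { id, name: "Ada" });
    return user.name;
  } catch (err) {
    // Errors from awaited promises land in a familiar try/catch
    console.error("Failed to load user:", err.message);
    return null;
  }
}

loadUserName(1).then((name) => console.log(name)); // "Ada"
```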
2. Proxies for Intercepting and Enhancing Object Behavior
JavaScript’s Proxy object allows developers to intercept and modify fundamental operations on objects, making them highly useful for creating custom behaviors such as logging, validation, and dynamic property handling.
Example:
```javascript
const target = { name: "John" };
const handler = {
  get: (obj, prop) => `${prop} is ${obj[prop]}`,
  set: (obj, prop, value) => {
    console.log(`Setting ${prop} to ${value}`);
    obj[prop] = value;
    return true;
  },
};
const proxy = new Proxy(target, handler);

console.log(proxy.name); // "name is John"
proxy.age = 30; // logs "Setting age to 30"
```
Why Use Proxies?
Validation: Ensure properties meet certain criteria before being set.
Logging: Track access and modifications to object properties.
Default Values: Provide fallback values for undefined properties.
By using Proxies, you can add powerful meta-programming capabilities to your JavaScript code, enabling more flexible and dynamic object interactions.
3. Debouncing and Throttling for Performance Optimization
Handling frequent user events like scrolling, resizing, or keypresses can impact performance. Debouncing and throttling help control function execution frequency.
Debouncing: Debouncing delays the execution of a function until a specified time has passed since the last event trigger. This is useful for optimizing performance in cases like search input fields and window resize events.
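Minimal sketches of both helpers (the wait and limit values callers pass are illustrative; this throttle variant fires on the leading edge):

```javascript
// Debounce: run fn only after `wait` ms have passed since the last call
function debounce(fn, wait) {
  let timer;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), wait);
  };
}

// Throttle: run fn at most once per `limit` ms
function throttle(fn, limit) {
  let last = 0;
  return function (...args) {
    const now = Date.now();
    if (now - last >= limit) {
      last = now;
      fn.apply(this, args);
    }
  };
}
```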
These techniques improve performance by limiting how often a function runs, which is essential for user input handling and optimizing animations.
4. Using Proxies to Intercept Object Behavior
Proxy objects allow you to intercept and redefine fundamental operations on objects, such as property access, assignment, and function calls. This is useful for validation, logging, or building reactive frameworks.
Provides dynamic control over property access and modification.
Enables data validation, logging, and computed properties.
Useful for creating reactive programming frameworks and API wrappers.
Proxies allow developers to intercept and modify object behavior, making them essential for metaprogramming and advanced JavaScript development.
5. Optional Chaining (?.) for Safe Property Access
Optional chaining (?.) provides a way to access deeply nested object properties without worrying about runtime errors due to undefined or null values.
Example:
```javascript
const user = { profile: { name: "Alice" } };
console.log(user.profile?.name); // 'Alice'
console.log(user.address?.city); // undefined (no error)
```
Why Use It?
Prevents runtime errors from missing properties.
Reduces excessive if statements for property checks.
Especially useful when working with API responses.
6. Offload Heavy Tasks with Web Workers
JavaScript is single-threaded, but Web Workers let you run scripts in background threads. Use them for CPU-heavy tasks like data processing or image manipulation:
Prevents UI freezes by offloading CPU-intensive tasks to background threads.
Enhances application responsiveness and performance.
Ideal for data processing, image manipulation, and real-time computations.
Using Web Workers ensures that heavy computations do not block the main thread, leading to a smoother user experience.
7. Master Memory Management
Memory leaks silently degrade performance. Avoid globals, use WeakMap/WeakSet for caches, and monitor leaks with DevTools:
```javascript
const cache = new WeakMap();

function computeExpensiveValue(obj) {
  if (!cache.has(obj)) {
    const result = heavyComputation(obj); // stand-in for the heavy computation
    cache.set(obj, result);
  }
  return cache.get(obj);
}
```
Why Use It?
Prevents memory leaks by allowing garbage collection of unused objects.
Efficient for caching without affecting memory consumption.
Useful for managing private data within objects.
Using WeakMap ensures that cached objects are automatically cleaned up when no longer needed, preventing unnecessary memory usage.
8. Currying Functions for Better Reusability
Currying transforms a function that takes multiple arguments into a series of functions, each taking one argument. This technique makes functions more reusable and allows for partial application.
Enables partial application of functions for better reusability.
Enhances functional programming by making functions more flexible.
Improves readability and simplifies repetitive tasks.
Currying is particularly useful for creating highly reusable utility functions in modern JavaScript applications.
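A small generic `curry` helper makes the idea concrete (the `multiply` function is illustrative):

```javascript
// Collects arguments until the original function's arity is met
const curry = (fn) => {
  const curried = (...args) =>
    args.length >= fn.length
      ? fn(...args)
      : (...more) => curried(...args, ...more);
  return curried;
};

const multiply = (a, b, c) => a * b * c;
const curriedMultiply = curry(multiply);

console.log(curriedMultiply(2)(3)(4)); // 24

// Partial application: fix the first argument, reuse the rest
const double = curriedMultiply(2);
console.log(double(5, 6)); // 60
```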
9. Closures for Private State Management
Closures are one of JavaScript’s most powerful features. They allow functions to remember and access variables from their outer scope even after the outer function has finished executing. This makes them particularly useful for encapsulating private state and preventing unintended modifications.
Encapsulation: Keep variables private and inaccessible from the global scope.
Data Integrity: Maintain controlled access to data, preventing unintended modifications.
Memory Efficiency: Create function factories that share behavior but maintain separate state.
Callback Functions: Preserve context in asynchronous operations.
Closures are commonly used in event handlers, factory functions, and callback functions to preserve state efficiently.
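A classic counter factory shows the encapsulation at work: `count` lives only in the closure and cannot be touched from outside.

```javascript
function createCounter() {
  let count = 0; // private -- not reachable from outside the closure
  return {
    increment: () => ++count,
    current: () => count,
  };
}

const counter = createCounter();
counter.increment();
counter.increment();
console.log(counter.current()); // 2
console.log(counter.count); // undefined -- the state is encapsulated
```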
10. Destructuring for More Concise and Readable Code
Destructuring simplifies the process of extracting values from arrays and objects, making your code cleaner and more readable. This technique is particularly useful when working with complex data structures or API responses.
Object Destructuring:
```javascript
const person = { name: "Jack", age: 20 };
const { name, age } = person;
console.log(name); // 'Jack'
console.log(age); // 20
```
Especially useful when working with API responses or function parameters.
By leveraging destructuring, you can write more concise and expressive code, making it easier to work with complex data structures and improving overall code readability.
Conclusion
By mastering these advanced JavaScript techniques, developers can write cleaner, more efficient, and scalable code. Understanding closures, destructuring, proxies, async/await, and performance optimizations like debouncing and throttling will enhance your ability to build high-performance applications. Additionally, incorporating best practices like the module pattern and optional chaining will further improve your coding efficiency.
JavaScript is a powerful language, but mastering it requires more than just knowing the basics. The real magic lies in the hidden gems — lesser-known but powerful tricks that can make your code cleaner, more efficient, and easier to maintain. Whether you’re a beginner or a seasoned developer, these 10 JavaScript tricks will help you level up your coding game! 👾
1. Object.freeze() — Making Objects Immutable
In JavaScript, by default objects are mutable, meaning you can change their properties after creation. But what if you need to prevent modifications? That’s where Object.freeze() comes in handy.
```javascript
const user = {
  name: "Alice",
  age: 25,
};
Object.freeze(user);

user.age = 30; // This won't work, as the object is frozen
console.log(user.age); // 25
```
Note: when running the statement user.age = 30;, JavaScript won’t throw an error (in non-strict mode), but when we try to retrieve the user’s age it will still be 25.
Real-World Use Case:
Use Object.freeze() in Redux to ensure state objects remain unchanged, preventing accidental mutations.
2. Destructuring for Cleaner Code
Destructuring makes it easy to extract values from objects and arrays, leading to cleaner and more readable code.
```javascript
const person = { name: "Bob", age: 28, city: "New York" };
const { name, age } = person;
console.log(name, age); // Bob 28
```
Real-World Use Case:
Use destructuring in function arguments for cleaner APIs:
```javascript
function greet({ name }) {
  console.log(`Hello, ${name}!`);
}
greet(person); // Hello, Bob!
```
3. Intl API — Effortless Localization
The Intl API provides built-in support for internationalization, allowing you to format dates, numbers, and currencies easily.
Example:
```javascript
const date = new Date();
console.log(new Intl.DateTimeFormat("fr-FR").format(date));
```
Final Thoughts
Mastering JavaScript isn’t just about knowing the syntax — it’s about using the right tools and tricks to write better code. These 10 tricks can make your applications faster, cleaner, and more reliable. Try them out and integrate them into your daily coding habits!
One of the most common issues developers face is managing rerenders, especially when working with Context API. Today, I want to share a powerful technique that is quite well known but… 😄
The Problem with Traditional Context
Before diving into the solution, let’s understand the problem. When using React’s Context API in the traditional way, any component that consumes a context will rerender whenever any value within that context changes.
The issue here is that both ThemeDisplay and ThemeToggle will rerender whenever the theme changes, even though ThemeToggle only needs setTheme and doesn’t actually use the current theme value in its rendering.
The Possible Solution: Context Splitting 💡
The context splitting pattern addresses this problem by separating our context into two distinct contexts:
A data context that holds just the state (e.g., theme)
A setter context that holds just the updater function (e.g., setTheme)
ThemeDisplay rerenders (as it should since it displays the theme)
ThemeToggle does NOT rerender because it only consumes the setter context, which never changes (the setter function reference remains stable)
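A minimal sketch of the pattern (the component and context names here are illustrative):

```jsx
import { createContext, useContext, useState } from "react";

const ThemeContext = createContext("light");        // data context
const ThemeSetterContext = createContext(() => {}); // setter context

function ThemeProvider({ children }) {
  const [theme, setTheme] = useState("light");
  return (
    <ThemeSetterContext.Provider value={setTheme}>
      <ThemeContext.Provider value={theme}>{children}</ThemeContext.Provider>
    </ThemeSetterContext.Provider>
  );
}

function ThemeDisplay() {
  const theme = useContext(ThemeContext); // rerenders when theme changes
  return <p>Current theme: {theme}</p>;
}

function ThemeToggle() {
  const setTheme = useContext(ThemeSetterContext); // stable reference, no rerenders
  return (
    <button onClick={() => setTheme((t) => (t === "light" ? "dark" : "light"))}>
      Toggle theme
    </button>
  );
}
```

This works because React guarantees the setter returned by useState has a stable identity, so the value of ThemeSetterContext never changes.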
Complete Example with Rerender Counting 🧪
Let’s create a more complete example that demonstrates the difference between the traditional and split context approaches. We’ll add rerender counters to visualize the performance impact:
✅ OptimizedThemeToggleButton re-rendered ❌ ← does NOT re-render
Optimizing with useMemo
We can optimize the traditional approach somewhat by using useMemo to prevent the context value object from being recreated on every render:
```jsx
function OptimizedTraditionalProvider({ children }) {
  const [theme, setTheme] = useState("light");

  // Memoize the value object to prevent unnecessary context changes
  const value = useMemo(
    () => ({
      theme,
      setTheme,
    }),
    [theme]
  );

  return (
    <TraditionalThemeContext.Provider value={value}>
      {children}
    </TraditionalThemeContext.Provider>
  );
}
```
This helps, but still has the fundamental issue that components consuming only setTheme will rerender when theme changes. The split context approach solves this problem more elegantly.
When to Use Context Splitting
Context splitting is particularly valuable when:
You have many components that only need to update state but don’t need to read it
You have expensive components that should only rerender when absolutely necessary
Your app has deep component trees where performance optimization matters
Potential Downsides
While context splitting is powerful, it does come with some trade-offs:
Increased Complexity — Managing two contexts instead of one adds some boilerplate
Provider Nesting — You end up with more nested providers in your component tree
Mental Overhead — Developers need to choose the right context for each use case
Custom Hooks for Clean API
To make this pattern more developer-friendly, you can create custom hooks:
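For example, a pair of hook sketches (assuming the two contexts from the split pattern are named ThemeContext and ThemeSetterContext; the names are illustrative):

```jsx
import { createContext, useContext } from "react";

const ThemeContext = createContext(undefined);
const ThemeSetterContext = createContext(undefined);

function useTheme() {
  const theme = useContext(ThemeContext);
  if (theme === undefined) {
    throw new Error("useTheme must be used within a ThemeProvider");
  }
  return theme;
}

function useSetTheme() {
  const setTheme = useContext(ThemeSetterContext);
  if (setTheme === undefined) {
    throw new Error("useSetTheme must be used within a ThemeProvider");
  }
  return setTheme;
}
```

Consumers then call useTheme() or useSetTheme() without needing to know which context backs each one.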
When you run the demo code provided above, you’ll see a clear difference in render counts:
With the traditional context, both the reader and toggler components rerender when the theme changes
With the split context, only the reader rerenders while the toggler’s render count stays the same
This performance difference might seem small in a simple example, but in a real application with dozens or hundreds of components consuming context, the impact can be substantial.
Conclusion 🚀
Context splitting is a powerful technique for optimizing React applications that use the Context API. By separating your state and setter functions into different contexts, you can ensure components only rerender when the specific data they consume changes.
While this technique adds some complexity to your codebase, the performance benefits can be visible in larger applications.
In modern web development, speed and efficiency are important. Whether you’re building with React or using Next.js, caching has become one of the most important techniques for improving performance, reducing server load, and making user experience better.
With the latest updates in Next.js and advancements in the React ecosystem, caching strategies have improved, and learning them is key for any serious developer. In this blog, we’ll learn how caching works in both React and Next.js, go through best practices, and highlight real-world examples that you can apply today.
What is Caching?
Caching refers to the process of storing data temporarily so future requests can be served faster. In the context of web applications, caching can occur at various levels:
Browser caching (storing static assets)
Client-side data caching (with libraries like SWR or React Query)
Server-side caching (Next.js API routes or server actions)
CDN caching (via edge networks)
Effective caching minimizes redundant data fetching, accelerates loading times, and improves the perceived performance of your application.
Caching in React Applications
React doesn’t have built-in caching, but the community provides powerful tools to manage cache effectively on the client side.
1. React Query and SWR for Data Caching
These libraries help cache remote data on the client side and reduce unnecessary requests:
Use cache: 'no-store' for dynamic or user-specific data
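A minimal SWR sketch (the /api/user endpoint is illustrative):

```jsx
import useSWR from "swr";

const fetcher = (url) => fetch(url).then((res) => res.json());

function Profile() {
  // SWR caches by key, deduplicates in-flight requests,
  // and revalidates stale data in the background
  const { data, error, isLoading } = useSWR("/api/user", fetcher);

  if (error) return <p>Failed to load</p>;
  if (isLoading) return <p>Loading…</p>;
  return <p>Hello, {data.name}!</p>;
}
```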
2. Using Server Actions and React Server Components (RSC)
```ts
// app/actions.ts
"use server";

export async function saveData(formData: FormData) {
  const name = formData.get("name");
  // Save to database or perform API calls
}
```
Server actions in the App Router allow you to cache server-side logic and fetch results in React Server Components without hydration.
3. Using generateStaticParams and generateMetadata
These methods help Next.js know which routes to pre-build and cache efficiently:
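A hedged sketch for a blog route (the file path and API URL are illustrative, and the exact API shape may differ across Next.js versions):

```tsx
// app/blog/[slug]/page.tsx
export async function generateStaticParams() {
  const posts = await fetch("https://api.example.com/posts").then((res) =>
    res.json()
  );
  // Each returned object becomes a statically pre-built route
  return posts.map((post) => ({ slug: post.slug }));
}

export async function generateMetadata({ params }) {
  return { title: `Post: ${params.slug}` };
}
```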
Proper cache invalidation ensures that stale data is replaced with up-to-date content:
Time-based (revalidate: 60 seconds)
On-demand revalidation (res.revalidate in API route)
Tag-based revalidation (revalidateTag, available in recent Next.js versions)
Mutations trigger refetch in SWR/React Query
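For instance, time-based revalidation can be expressed directly on a fetch call in a Server Component (a sketch; the URL is illustrative):

```ts
const res = await fetch("https://api.example.com/posts", {
  next: { revalidate: 60 }, // serve cached data, refresh at most every 60 seconds
});
const posts = await res.json();
```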
CDN and Edge Caching with Next.js
Vercel and other hosting providers like Netlify and Cloudflare deploy Next.js apps globally. Edge caching improves load time by serving users from the nearest region.
Tips:
Leverage Edge Functions for dynamic personalization
Use headers like Cache-Control effectively
Deploy static assets via CDN for better global performance
Final Best Practices
Prefer static rendering where possible
Cache API calls both on server and client
Use persistent cache (IndexedDB/localStorage) when applicable
Memoize expensive computations
Profile and audit cache hits/misses with dev tools
Conclusion
Caching in React and Next.js is no longer optional — it’s essential for delivering fast, resilient, and scalable applications. Whether you’re fetching data client-side or leveraging powerful server-side features in Next.js App Router, the right caching strategy can drastically improve your app’s performance and user satisfaction. As frameworks evolve, staying updated with caching best practices ensures your apps remain performant and competitive.
By applying these techniques, you not only enhance the speed and reliability of your applications but also reduce infrastructure costs and improve SEO outcomes. Start caching smartly today and take your web performance to the next level.
In today’s fast-paced web ecosystem, developers need tools that are flexible, performant, and future-ready. Next.js 15 delivers on all fronts. Whether you’re building static websites, dynamic dashboards, or enterprise-grade applications, this version introduces groundbreaking features that take developer productivity and user experience to the next level.
In this post, we’ll walk through the top 7 features in Next.js 15 that are engineered to supercharge your web apps — plus practical use cases, code examples, and why they matter.
1. 🔄 React Server Actions (Stable with React 19)
Say goodbye to complex API routes. Next.js 15 supports React Server Actions, allowing you to handle server logic directly inside your component files.
🚀 How it works:
```ts
// Inside your component file
"use server";

export async function saveForm(data) {
  await db.save(data);
}
```
🧠 Why it matters:
No need to create separate api/ endpoints.
Full type safety with server logic co-located.
Less client-side JavaScript shipped.
Ideal for: Form submissions, database updates, authenticated mutations.
2. 🧭 Stable App Router with Layouts and Nested Routing
Introduced in v13 and now fully stable, the app/ directory in Next.js 15 gives you modular routing with nested layouts, co-located data fetching, and component-based architecture.
3. 🧩 Partial Prerendering (Experimental)
Static + Dynamic rendering in one page? Yes, please.
Next.js 15 introduces Partial Prerendering, an experimental feature that allows you to render part of a page statically and the rest dynamically.
💡 Use case:
Your homepage might have:
A statically rendered hero section
A dynamic, user-personalized feed
🧠 Why it matters:
Faster load times for static content
Seamless hydration for dynamic sections
Enhanced user experience without trade-offs
4. ⚡️ Turbopack (Improved Performance)
Turbopack, Vercel’s Rust-based successor to Webpack, continues to mature in Next.js 15. It offers:
Blazing-fast cold starts
Incremental compilation
Near-instant HMR (Hot Module Reloading)
🧪 How to enable:
next dev --turbo
🚀 Why it matters:
10x faster rebuilds compared to Webpack
Smooth DX for teams working on large monorepos
Note: Still experimental but highly promising.
5. 🖼️ Smarter <Image /> Component
Image optimization just got smarter. The updated next/image now supports:
Native lazy loading
Blur-up placeholders
AVIF + WebP support out of the box
🧠 Why it matters:
Faster Core Web Vitals (especially LCP)
Reduced bandwidth and better UX
Simplified image management
6. 🌐 Edge Middleware Enhancements
Next.js 15 improves the DX around Edge Middleware, allowing you to run logic at the edge without cold starts or serverless latency.
📦 Use cases:
A/B Testing
Geolocation-based redirects
Auth checks at the CDN level
🔥 Improvements:
Better logging and error traces
Enhanced compatibility with dynamic routes
7. 🧪 React 19 Compatibility
Next.js 15 is one of the first frameworks fully compatible with React 19, bringing:
React Compiler support (in alpha)
Enhanced Concurrent Features
Better memory and rendering optimizations
🧠 Why it matters:
You can future-proof your app now and explore experimental features with a stable foundation.
Conclusion
Next.js 15 isn’t just about new APIs — it’s about enabling faster, more scalable, and more maintainable apps with less effort. These 7 features are engineered to help modern teams:
Next.js continues to improve with new features that enhance developer experience and performance. One of those features is Server Actions, introduced to simplify server-side logic handling without creating separate API routes. Server Actions help you keep your components cleaner, improve security, and provide a better way to handle mutations in both Server and Client Components.
By mastering Server Actions, developers can create fast, reliable, and maintainable full-stack applications with ease.
What Are Server Actions?
Server Actions are asynchronous functions that run only on the server. They are invoked directly from your React components and can handle tasks like database mutations, form processing, and more. These actions simplify server-client interactions by eliminating the need for explicit API endpoints.
To declare a Server Action, use the "use server" directive:
```ts
// app/actions/user.ts
"use server";

export async function createUser(formData: FormData) {
  const name = formData.get("name");
  const email = formData.get("email");
  // Save to database here
  return { success: true };
}
```
Using Server Actions in Server Components
In Server Components, you can define Server Actions inline or import them from a separate file. This is especially useful for quick forms or specific mutations tied to one component.
```ts
// app/actions/user.ts
'use server';

export async function createUser(formData: FormData) {
  const name = formData.get('name');
  const email = formData.get('email');
  // Save to database here
  return { success: true };
}
```
Using Server Actions in Client Components
You can also use Server Actions in Client Components by importing them from a server-marked module.
```tsx
// app/actions/user.ts
"use server";

export async function updateUser(formData: FormData) {
  const id = formData.get("id");
  const name = formData.get("name");
  // Update user in DB
  return { success: true };
}

// app/components/EditUserForm.tsx
"use client";

import { updateUser } from "@/app/actions/user";

export default function EditUserForm() {
  return (
    <form action={updateUser}>
      <input type="hidden" name="id" value="123" />
      <input type="text" name="name" />
      <button type="submit">Update</button>
    </form>
  );
}
```
Binding Parameters to Server Actions
You can pass arguments to Server Actions using .bind(), making them dynamic and reusable.
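A hedged sketch of binding an extra argument (the action and field names are illustrative):

```tsx
// app/actions/user.ts
"use server";

export async function updateRole(userId: string, formData: FormData) {
  const role = formData.get("role");
  // Update the user's role in the database
}

// In a component: pre-bind the userId, leaving formData to the form
// const updateRoleForUser = updateRole.bind(null, user.id);
// <form action={updateRoleForUser}>...</form>
```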
Separate Logic from UI
Keep your logic and UI separate. Define Server Actions in dedicated files and import them where needed.
Organize by Domain
Group your actions by feature or domain (actions/user.ts, actions/orders.ts) for better structure.
Error Handling
Use try-catch blocks inside Server Actions to gracefully handle failures and log issues.
Type Safety
Use TypeScript to enforce correct types for FormData fields and return values.
Secure Operations
Always verify user sessions or tokens before making sensitive changes, even inside Server Actions.
Avoid Logic Duplication
Reuse Server Actions across components to prevent writing the same logic multiple times.
Validate Input
Use libraries like Zod or Yup to validate incoming data and avoid corrupting your database.
Final Thoughts
Server Actions offer a powerful pattern for managing server-side logic in a way that feels native to React and Next.js. They simplify the code, reduce the boilerplate of API routes, and make it easier to maintain a full-stack application.
By following the best practices outlined above, you’ll write cleaner, more scalable code that benefits both your team and your users.
The Promise object represents the eventual completion (or failure) of an asynchronous operation and its resulting value.
A Promise is always in one of the following states:
Pending: The initial state, neither fulfilled nor rejected.
Fulfilled: The operation completed successfully.
Rejected: The operation failed.
Unlike “old-style” callbacks, using Promises has the following conventions:
Callback functions will not be called until the current event loop completes.
Even if the asynchronous operation completes (successfully or unsuccessfully), callbacks added via then() afterward will still be called.
You can add multiple callbacks by calling then() multiple times, and they will be executed in the order they were added.
The characteristic feature of Promises is chaining.
Usage
1. Promise.all([])
When all Promise instances in the array succeed, it returns an array of success results in the order they were requested. If any Promise fails, it enters the failure callback.
```javascript
const p1 = new Promise((resolve) => {
  resolve(1);
});
const p2 = new Promise((resolve) => {
  resolve(1);
});
const p3 = Promise.resolve("ok");

// If all promises succeed, `result` fulfills with an array of the 3 results.
const result = Promise.all([p1, p2, p3]);
// If one fails, `result` rejects with the first rejection's reason.
```
2. Promise.allSettled([])
The execution will not fail; it returns an array corresponding to the status of each Promise instance in the input array.
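A short sketch: allSettled never rejects, and each entry reports its own outcome (the error message is illustrative).

```javascript
const ok = Promise.resolve(1);
const bad = Promise.reject(new Error("boom"));

Promise.allSettled([ok, bad]).then((results) => {
  console.log(results[0]); // { status: "fulfilled", value: 1 }
  console.log(results[1].status); // "rejected"
  console.log(results[1].reason.message); // "boom"
});
```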
3. Promise.any([])
If any Promise in the input array fulfills, the returned promise becomes fulfilled with the value of the first fulfilled promise. If all of them reject, it becomes rejected.
4. Promise.race([])
As soon as any Promise in the array settles, the promise returned by race settles the same way; the value (or reason) of the first settled Promise is passed to race’s callbacks.
Throwing an exception does not change the race state; it is still determined by p1.
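A sketch contrasting any and race (the timings and messages are illustrative): any skips rejections and waits for the first fulfillment, while race takes the first settlement of either kind.

```javascript
const slow = new Promise((resolve) => setTimeout(() => resolve("slow"), 50));
const fast = new Promise((resolve) => setTimeout(() => resolve("fast"), 10));
const failing = Promise.reject(new Error("nope"));

// any -> first *fulfillment* wins; rejections are skipped
Promise.any([failing, fast, slow]).then((value) => console.log(value)); // "fast"

// race -> first *settlement* wins, fulfilled or rejected
Promise.race([failing, fast, slow]).catch((err) => console.log(err.message)); // "nope"
```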
Advanced Uses
Here are 9 advanced uses that help developers handle asynchronous operations more efficiently and elegantly.
Concurrency Control
Using Promise.all allows for parallel execution of multiple Promises, but to control the number of simultaneous requests, you can implement a concurrency control function.
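One possible sketch of such a limiter (the helper name and task timings are illustrative; each task is a function returning a promise):

```javascript
// Runs at most `limit` tasks concurrently, preserving input order in the results
async function promiseAllWithLimit(tasks, limit) {
  const results = [];
  const executing = new Set();

  for (const [i, task] of tasks.entries()) {
    const p = Promise.resolve()
      .then(task)
      .then((value) => {
        results[i] = value; // keep results in input order
        executing.delete(p);
      });
    executing.add(p);

    if (executing.size >= limit) {
      await Promise.race(executing); // wait for a slot to free up
    }
  }

  await Promise.all(executing); // drain the remaining tasks
  return results;
}

const delay = (ms, value) =>
  new Promise((resolve) => setTimeout(() => resolve(value), ms));

promiseAllWithLimit(
  [() => delay(30, "a"), () => delay(10, "b"), () => delay(20, "c")],
  2
).then((results) => console.log(results)); // ["a", "b", "c"]
```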
9. Using Promises to Implement a Simple Asynchronous Lock
In a multi-threaded environment, you can use Promises to implement a simple asynchronous lock, ensuring that only one task can access shared resources at a time.
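A reconstructed sketch of such a lock (the original snippet is missing here; the names lock and acquireLock follow the description below, and withLock is an illustrative helper):

```javascript
let lock = Promise.resolve(); // resolves when the current holder releases

function acquireLock() {
  let release;
  const next = new Promise((resolve) => (release = resolve));
  const acquired = lock.then(() => release); // wait our turn, then receive our release fn
  lock = next; // the next caller waits until we call release()
  return acquired;
}

// Usage: only one task at a time enters the critical section
async function withLock(id, log) {
  const release = await acquireLock();
  try {
    log.push(`start ${id}`);
    await new Promise((r) => setTimeout(r, 10)); // simulated work
    log.push(`end ${id}`);
  } finally {
    release(); // always release, even on error
  }
}
```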
This code creates and resolves Promises continuously, implementing a simple FIFO queue to ensure that only one task can access shared resources. The lock variable represents whether there is a task currently executing, always pointing to the Promise of the task in progress. The acquireLock function requests permission to execute and creates a new Promise to wait for the current task to finish.
Conclusion
Promises are an indispensable part of modern JavaScript asynchronous programming. Mastering their advanced techniques will greatly enhance development efficiency and code quality. With the various methods outlined above, developers can handle complex asynchronous scenarios more confidently and write more readable, elegant, and robust code.
If you have worked at all with React hooks before then you have used the useEffect hook extensively. You may not know, though, that there is a second type of useEffect hook called useLayoutEffect. In this article I will be explaining the useLayoutEffect hook and comparing it to useEffect. If you are not already familiar with useEffect check out my full article on it here.
The Biggest Difference
Everything about these two hooks is nearly identical. The syntax for them is exactly the same and they are both used to run side effects when things change in a component. The only real difference is when the code inside the hook is actually run.
In useEffect the code in the hook is run asynchronously after React renders the component. This means the code for this hook can run after the DOM is painted to the screen.
The useLayoutEffect hook runs synchronously directly after React calculates the DOM changes but before it paints those changes to the screen. This means that useLayoutEffect code will delay the painting of a component since it runs synchronously before painting, while useEffect is asynchronous and will not delay the paint.
Why Use useLayoutEffect?
So if useLayoutEffect will delay the painting of a component why would we want to use it. The biggest reason for using useLayoutEffect is when the code being run directly modifies the DOM in a way that is observable to the user.
For example, if I needed to change the background color of a DOM element as a side effect it would be best to use useLayoutEffect since we are directly modifying the DOM and the changes are observable to the user. If we were to use useEffect we could run into an issue where the DOM is painted before the useEffect code is run. This would cause the DOM element to be the wrong color at first and then change to the right color due to the useEffect code.
You Probably Don’t Need useLayoutEffect
As you can see from the previous example, use cases for useLayoutEffect are pretty niche. In general it is best to always use useEffect and only switch to useLayoutEffect when you actually run into an issue with useEffect causing flickers in your DOM or incorrect results.
Conclusion
useLayoutEffect is a very useful hook for specific situations, but in most cases you will be perfectly fine using useEffect. Also, since useEffect does not block painting it is the better option to use if it works properly.
Next.js is an amazing framework that makes writing complex server rendered React apps much easier, but there is one huge problem. Next.js’s caching mechanism is extremely complicated and can easily lead to bugs in your code that are difficult to debug and fix.
If you don’t understand how Next.js’s caching mechanism works it feels like you are constantly fighting Next.js instead of reaping the amazing benefits of Next.js’s powerful caching. That is why in this article I am going to break down exactly how every part of Next.js’s cache works so you can stop fighting it and finally take advantage of its incredible performance gains.
Before we get started, here is an image of how all the caches in Next.js interact with one another. I know this is overwhelming, but by the end of this article you will understand exactly what each step in this process does and how they all interact.
In the image above, you probably noticed the term “Build Time” and “Request Time”. To make sure this does not cause any confusion throughout the article, let me explain them before we move forward.
Build time refers to when an application is built and deployed. Anything that is cached during this process (mostly static content) will be part of the build time cache. The build time cache is only updated when the application is rebuilt and redeployed.
Request time refers to when a user requests a page. Typically, data cached at request time is dynamic as we want to fetch it directly from the data source when the user makes requests.
Next.js Caching Mechanisms
Understanding Next.js’s caching can seem daunting at first. This is because it is composed of four distinct caching mechanisms, each operating at a different stage of your application and interacting in ways that can initially appear complex.
Here are the four caching mechanisms in Next.js:
Request Memoization
Data Cache
Full Route Cache
Router Cache
For each of the above, I will delve into their specific roles, where they’re stored, their duration, and how you can effectively manage them, including ways to invalidate the cache and opt out. By the end of this exploration, you’ll have a solid grasp of how these mechanisms work together to optimize Next.js’s performance.
Request Memoization
One common problem in React is needing to display the same information in multiple places on the same page. The easiest option is to just fetch the data in both places where it is needed, but this is not ideal since you are now making two requests to your server for the same data. This is where Request Memoization comes in.
Request Memoization is a React feature that actually caches every fetch request you make in a server component during the render cycle (which basically just refers to the process of rendering all the components on a page). This means that if you make a fetch request in one component and then make the same fetch request in another component, the second fetch request will not actually make a request to the server. Instead, it will use the cached value from the first fetch request.
```jsx
// The `fetch` call is automatically memoized by Next.js
async function fetchUserData(userId) {
  const res = await fetch(`https://api.example.com/users/${userId}`);
  return res.json();
}

export default async function Page({ params }) {
  const user = await fetchUserData(params.id);

  return (
    <>
      <h1>{user.name}</h1>
      <UserDetails id={params.id} />
    </>
  );
}

async function UserDetails({ id }) {
  const user = await fetchUserData(id);
  return <p>{user.name}</p>;
}
```
In the code above, we have two components: Page and UserDetails. The first call to the fetchUserData() function in Page makes a fetch request just like normal, and the return value of that request is stored in the Request Memoization cache. When fetchUserData is called a second time by the UserDetails component, it does not actually make a new fetch request. Instead, it uses the memoized value from the first call. This small optimization improves the performance of your application by reducing the number of requests made to your server, and it also makes your components easier to write since you don’t need to worry about optimizing your fetch requests.
It is important to know that this cache is stored entirely on the server which means it will only cache fetch requests made from your server components. Also, this cache is completely cleared at the start of each request which means it is only valid for the duration of a single render cycle. This is not an issue, though, as the entire purpose of this cache is to reduce duplicate fetch requests within a single render cycle.
Lastly, it is important to note that this cache will only cache fetch requests made with the GET method. A fetch request must also have the exact same parameters (URL and options) passed to it in order to be memoized.
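To make the caching rules above concrete, here is a small standalone sketch of how per-render request memoization could work: identical GET requests (same URL and options) resolve from an in-memory map instead of triggering a second network call. The names here are hypothetical, not Next.js internals.

```javascript
// Sketch: memoize GET requests by URL + options for a single render pass.
function createRequestMemo(fetchImpl) {
  const cache = new Map(); // serialized request -> shared promise

  return function memoFetch(url, options = {}) {
    const method = (options.method || "GET").toUpperCase();
    if (method !== "GET") {
      // Only GET requests are memoized, mirroring the rule above.
      return fetchImpl(url, options);
    }
    // Requests must match on both URL and options to share a cache entry.
    const key = url + JSON.stringify(options);
    if (!cache.has(key)) {
      cache.set(key, fetchImpl(url, options));
    }
    return cache.get(key);
  };
}
```

Because the map stores the promise itself, two components that request the same data concurrently still result in only one underlying call.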
Caching Non-fetch Requests
By default React only caches fetch requests, but there are times when you might want to cache other types of requests such as database requests. To do this, we can use React’s cache function. All you need to do is pass the function you want to cache to cache and it will return a memoized version of that function.
```js
import { cache } from "react";
import { queryDatabase } from "./databaseClient";

export const fetchUserData = cache((userId) => {
  // Direct database query
  return queryDatabase("SELECT * FROM users WHERE id = ?", [userId]);
});
```
In the code above, the first time fetchUserData() is called, it queries the database directly, as there is no cached result yet. The next time it is called with the same userId, the data is retrieved from the cache. Just like with fetch, this memoization is valid only for the duration of a single render pass and works identically to fetch memoization.
Revalidation
Revalidation is the process of clearing out a cache and updating it with new data. This is important because a cache that is never updated will eventually become stale and out of date. Luckily, we don’t have to worry about this with Request Memoization: since this cache is only valid for the duration of a single request, we never have to revalidate it.
Opting out
To opt out of this cache, we can pass an AbortController signal as a parameter to the fetch request.
```js
async function fetchUserData(userId) {
  const { signal } = new AbortController();
  const res = await fetch(`https://api.example.com/users/${userId}`, {
    signal,
  });
  return res.json();
}
```
Doing this will tell React not to cache this fetch request in the Request Memoization cache, but I would not recommend doing this unless you have a very good reason to as this cache is very useful and can drastically improve the performance of your application.
The diagram below provides a visual summary of how Request Memoization works.
Request Memoization is technically a React feature, not exclusive to Next.js. I included it as part of the Next.js caching mechanisms, though, since it is necessary to understand in order to comprehend the full Next.js caching process.
Data Cache
Request Memoization is great for making your app more performant by preventing duplicate fetch requests, but when it comes to caching data across requests/users it is useless. This is where the Data Cache comes in. It is the last cache that is hit by Next.js before your data is actually fetched from an API or database, and it is persistent across multiple requests/users.
Imagine we have a simple page that queries an API to get guide data on a specific city.
```jsx
export default async function Page({ params }) {
  const city = params.city;
  const res = await fetch(`https://api.globetrotter.com/guides/${city}`);
  const guideData = await res.json();

  return (
    <div>
      <h1>{guideData.title}</h1>
      <p>{guideData.content}</p>
      {/* Render the guide data */}
    </div>
  );
}
```
This guide data really doesn’t change often at all, so it doesn’t make sense to fetch it fresh every time someone needs it. Instead, we should cache that data across all requests so it will load instantly for future users. Normally, this would be a pain to implement, but luckily Next.js does it automatically for us with the Data Cache.
By default every fetch request in your server components will be cached in the Data Cache (which is stored on the server) and will be used for all future requests. This means that if you have 100 users all requesting the same data, Next.js will only make one fetch request to your API and then use that cached data for all 100 users. This is a huge performance boost.
Duration
The Data Cache is different from the Request Memoization cache in that data in this cache is never cleared unless you specifically tell Next.js to do so. This data is even persisted across deployments, which means that if you deploy a new version of your application, the Data Cache will not be cleared.
Revalidation
Since the Data Cache is never cleared by Next.js we need a way to opt into revalidation which is just the process of removing data from the cache. In Next.js there are two different ways to do this: time-based revalidation and on-demand revalidation.
Time-based Revalidation
The easiest way to revalidate the Data Cache is to just automatically clear the cache after a set period of time. This can be done in two ways.
```js
const res = await fetch(`https://api.globetrotter.com/guides/${city}`, {
  next: { revalidate: 3600 },
});
```
The first way is to pass the next.revalidate option to your fetch request. This will tell Next.js how many seconds to keep your data in the cache before it is considered stale. In the example above, we are telling Next.js to revalidate the cache every hour.
The other way to set a revalidation time is to use the revalidate segment config option.
```jsx
export const revalidate = 3600;

export default async function Page({ params }) {
  const city = params.city;
  const res = await fetch(`https://api.globetrotter.com/guides/${city}`);
  const guideData = await res.json();

  return (
    <div>
      <h1>{guideData.title}</h1>
      <p>{guideData.content}</p>
      {/* Render the guide data */}
    </div>
  );
}
```
Doing this will make all fetch requests for this page revalidate every hour unless they have their own more specific revalidation time set.
The one important thing to understand with time-based revalidation is how it handles stale data.
The first time a fetch request is made, it gets the data and stores it in the cache. Each new fetch request that occurs within the one-hour revalidation window we set uses that cached data and makes no new fetch requests. After one hour, the first fetch request still returns the cached (now stale) data, but it also executes the fetch request in the background and stores the newly updated data in the cache. Each fetch request after that one uses the newly cached data. This pattern is called stale-while-revalidate and is the behavior Next.js uses.
On-demand Revalidation
If your data is not updated on a regular schedule, you can use on-demand revalidation to revalidate the cache only when new data is available. This is useful when you want to invalidate the cache and fetch new data only when a new article is published or a specific event occurs.
This can be done one of two ways.
```js
import { revalidatePath } from "next/cache";

export async function publishArticle({ city }) {
  createArticle(city);
  revalidatePath(`/guides/${city}`);
}
```
The revalidatePath function takes a string path and will clear the cache of all fetch requests on that route.
If you want to be more specific about which fetch requests to revalidate, you can use the revalidateTag function.
```js
const res = await fetch(`https://api.globetrotter.com/guides/${city}`, {
  next: { tags: ["city-guides"] },
});
```
Here, we’re adding the city-guides tag to our fetch request so we can target it with revalidateTag.
```js
import { revalidateTag } from "next/cache";

export async function publishArticle({ city }) {
  createArticle(city);
  revalidateTag("city-guides");
}
```
Calling revalidateTag with a string will clear the cache of all fetch requests with that tag.
Opting out
Opting out of the data cache can be done in multiple ways.
no-store
```js
const res = await fetch(`https://api.globetrotter.com/guides/${city}`, {
  cache: "no-store",
});
```
By passing cache: "no-store" to your fetch request, you are telling Next.js to not cache this request in the Data Cache. This is useful when you have data that is constantly changing and you want to fetch it fresh every time.
You can also call the noStore function to opt out of the Data Cache for everything within the scope of that function.
```js
import { unstable_noStore as noStore } from "next/cache";

function getGuide(city) {
  noStore();
  const res = fetch(`https://api.globetrotter.com/guides/${city}`);
}
```
Currently, this is an experimental feature which is why it is prefixed with unstable_, but it is the preferred method of opting out of the Data Cache going forward in Next.js.
This is a really great way to opt out of caching on a per component or per function basis since all other opt out methods will opt out of the Data Cache for the entire page.
export const dynamic = 'force-dynamic'
If we want to change the caching behavior for an entire page and not just a specific fetch request, we can add this segment config option to the top level of our file. This will force the page to be dynamic and opt out of the Data Cache entirely.
```js
export const dynamic = "force-dynamic";
```
export const revalidate = 0
Another way to opt the entire page out of the Data Cache is to use the revalidate segment config option with a value of 0.
```js
export const revalidate = 0;
```
This line is pretty much the page-level equivalent of cache: "no-store". It applies to all requests on the page, ensuring nothing gets cached.
Caching Non-fetch Requests
So far, we have only seen how to cache fetch requests with the Data Cache, but we can do much more than that.
If we go back to our previous example of city guides, we might want to pull data directly from our database. For this, we can use the cache function that’s provided by Next.js. This is similar to the React cache function, except it applies to the Data Cache instead of Request Memoization.
Currently, this is an experimental feature which is why it is prefixed with unstable_, but it is the only way to cache non-fetch requests in the Data Cache.
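A minimal sketch of this looks like the following (the getGuides import path and the option values are illustrative; the key array and function names match the ones discussed in this section):

```js
import { unstable_cache as cache } from "next/cache";
import { getGuides } from "./data"; // illustrative import path

export const getCachedGuides = cache(
  // 1. The function whose result should be cached
  (city) => getGuides(city),
  // 2. The key array that identifies this cache
  ["guides-cache-key"],
  // 3. Optional settings such as a revalidation time and tags
  { revalidate: 3600, tags: ["guides"] }
);
```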
The code above is short, but it can be confusing if this is the first time you are seeing the cache function.
The cache function takes three parameters (but only two are required). The first parameter is the function you want to cache. In our case it is the getGuides function. The second parameter is the key for the cache. In order for Next.js to know which cache is which it needs a key to identify them. This key is an array of strings that must be unique for each unique cache you have. If two cache functions have the same key array passed to them they will be considered the same exact request and stored in the same cache (similar to a fetch request with the same URL and params).
The third parameter is an optional options parameter where you can define things like a revalidation time and tags.
In our particular code we are caching the results of our getGuides function and storing them in the cache with the key ["guides-cache-key"]. This means that if we call getCachedGuides with the same city twice, the second time it will use the cached data instead of calling getGuides again.
Below is a diagram that walks you through how the Data Cache operates, step by step.
Full Route Cache
The third type of cache is the Full Route Cache, and this one is a bit easier to understand since it is much less configurable than the Data Cache. The main reason this cache is useful is that it lets Next.js cache static pages at build time instead of having to build those static pages for each request.
In Next.js, the pages we render to our clients consist of HTML and something called the React Server Component Payload (RSCP). The payload contains instructions for how the client components should work together with the rendered server components to render the page. The Full Route Cache stores the HTML and RSCP for static pages at build time.
Now that we know what it stores, let’s take a look at an example.
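A minimal sketch of a static page like the one discussed here (the guides endpoint is reused from the earlier examples; the exact markup is illustrative):

```jsx
export default async function Page() {
  // This fetch is cached in the Data Cache, so the page has no dynamic data
  const res = await fetch("https://api.globetrotter.com/guides");
  const guides = await res.json();

  return (
    <ul>
      {guides.map((guide) => (
        <li key={guide.id}>{guide.title}</li>
      ))}
    </ul>
  );
}
```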
In the code I have above, Page will be cached at build time because it does not contain any dynamic data. More specifically, its HTML and RSCP will be stored in the Full Route Cache so that it is served faster when a user requests it. The only way this HTML/RSCP will be updated is if we redeploy our application or manually invalidate the Data Cache that this page depends on.
You may think that since we are making a fetch request we have dynamic data, but this fetch request is cached by Next.js in the Data Cache, so this page is actually considered static. Dynamic data is data that changes on every single request to a page, such as dynamic URL parameters, cookies, headers, and search params.
Similarly to the Data Cache, the Full Route Cache is stored on the server and persists across different requests and users. Unlike the Data Cache, though, this cache is cleared every time you redeploy your application.
Opting out
Opting out of the Full Route Cache can be done in two ways.
The first way is to opt out of the Data Cache. If the data you are fetching for the page is not cached in the Data Cache then the Full Route Cache will not be used.
The second way is to use dynamic data in your page. Dynamic data includes things such as the headers, cookies, or searchParams dynamic functions, and dynamic URL parameters such as id in /blog/[id].
The diagram below demonstrates the step-by-step process of how Full Route Cache works.
This cache only works with your production builds, since in development all pages are rendered dynamically and thus never stored in this cache.
Router Cache
This last cache is a bit unique in that it is the only cache stored on the client instead of on the server. It can also be the source of many bugs if not understood properly. This is because it caches routes that a user visits, so when they come back to those routes it uses the cached version and never actually makes a request to the server. While this approach is an advantage when it comes to page loading speeds, it can also be quite frustrating. Let’s take a look below at why.
In the code I have above, when the user navigates to this page, its HTML/RSCP gets stored in the Router Cache. Similarly, when they navigate to any of the /blog/${post.slug} routes, that HTML/RSCP also gets cached. This means if the user navigates back to a page they have already been to it will pull that HTML/RSCP from the Router Cache instead of making a request to the server.
Duration
The Router Cache is a bit unique in that the duration it is stored for depends on the type of route: static routes are cached for 5 minutes, while dynamic routes are cached for only 30 seconds. If a user navigates to a static route and comes back to it within 5 minutes, the cached version is used; after 5 minutes, a request is made to the server to get the new HTML/RSCP. The same applies to dynamic routes, just with the 30-second window.
This cache is also only stored for the user’s current session. This means that if the user closes the tab or refreshes the page, the cache will be cleared.
You can also manually revalidate this cache by clearing the Data Cache from a server action using revalidatePath/revalidateTag, or by calling the router.refresh function, which you get from the useRouter hook on the client. This will force the client to refetch the page you are currently on.
Revalidation
We already discussed two ways of revalidating this cache in the previous section, but there are other ways to do it as well.
We can revalidate the Router Cache on demand similar to how we did it for the Data Cache. This means that revalidating Data Cache using revalidatePath or revalidateTag also revalidates the Router Cache.
Opting out
There is no way to opt out of the Router Cache, but considering the plethora of ways to revalidate it, this is not a big deal.
Here is an image that provides a visual summary of how the Router Cache works.
Conclusion
Having multiple caches like this can be difficult to wrap your head around, but hopefully this article was able to open your eyes to how these caches work and how they interact with one another. While the official documentation mentions that knowledge of caching is not necessary to be productive with Next.js, I think it helps a lot to understand its behavior so that you can configure the settings that work best for your particular app.