Previously on meyerweb, I explored ways to do strange things with the infinity
keyword in CSS calculation functions. There were some great comments on that post, by the way; you should definitely go give them a read. Anyway, in this post, I’ll be doing the same thing, but with different properties!
When last we met, I’d just finished up messing with font sizes and line heights, and that made me think about other text properties that accept lengths, like those that indent text or increase the space between words and letters. You know, like these:
div:nth-of-type(1) {text-indent: calc(infinity * 1ch);}
div:nth-of-type(2) {word-spacing: calc(infinity * 1ch);}
div:nth-of-type(3) {letter-spacing: calc(infinity * 1ch);}
<div>I have some text and I cannot lie!</div>
<div>I have some text and I cannot lie!</div>
<div>I have some text and I cannot lie!</div>
According to Frederic Goudy, I am now the sort of man who would steal an infinite number of sheep. Which is untrue, because, I mean, where would I put them?
Visually, these all came to essentially the same result, with just very small (probably line-height-related) variances in element height. All get very large horizontal overflow scrolling, yet scrolling out to the end of that overflow reveals no letterforms at all; I assume they’re sat just offscreen when you reach the end of the scroll region. I particularly like how the “I” in the first <div>
disappears because the first line has been indented a few million (or a few hundred undecillion) pixels, and then the rest of the text is wrapped onto the second line. And in the third <div>
, we can check for line-leading steganography!
When you ask for the computed values, though, that’s when things get weird.
Computed value for…

Browser | text-indent | word-spacing | letter-spacing |
---|---|---|---|
Safari | 33554428px | 33554428px | 33554428px |
Chrome | 33554400px | 3.40282e+38px | 33554400px |
Firefox (Nightly) | 3.40282e+38px | 3.40282e+38px | 3.40282e+38px |
Safari and Firefox are at least internally consistent, if many orders of magnitude apart from each other. Chrome… I don’t even know what to say. Maybe pick a lane?
I have to admit that by this point in my experimentation, I was getting a little bored of infinite pixel lengths. What about infinite unitless numbers, like line-height
or — even better — z-index
?
div {
position: absolute;
}
div:nth-of-type(1) {
top: 10%;
left: 1em;
z-index: calc(infinity + 1);
}
div:nth-of-type(2) {
top: 20%;
left: 2em;
z-index: calc(infinity);
}
div:nth-of-type(3) {
top: 30%;
left: 3em;
z-index: 32767;
}
<div>I’m really high!</div>
<div>I’m really high!</div>
<div>I’m really high!</div>
It turns out that in CSS you can go to infinity, but not beyond, because the computed values were the same regardless of whether the calc()
value was infinity
or infinity + 1
.
Browser | Computed value |
---|---|
Safari | 2147483647 |
Chrome | 2147483647 |
Firefox (Nightly) | 2147483647 |
Thus, the first two <div>s were a long way above the third, but as between themselves, the later-painted <div> was drawn on top of the first. This is because in positioning, when overlapping elements have the same z-index value, the one that comes later in the DOM gets painted on top of any that come before it.
This does also mean you can have a finite value beat infinity. If you change the previous CSS like so:
div:nth-of-type(3) {
top: 30%;
left: 3em;
z-index: 2147483647;
}
…then the third <div>
is painted atop the other two, because they all have the same computed value. And no, increasing the finite value to 2,147,483,648 or higher doesn’t change things, because the computed value of anything in that range is still 2147483647
.
The results here led me to an assumption that browsers (or at least the coding languages used to write them) use a system where any “infinity” that has multiplication, addition, or subtraction done to it just returns infinity. So if you try to double Infinity
, you get back Infinity
(or Infinite
or Inf
or whatever symbol is being used to represent the concept of the infinite). Maybe that’s entry-level knowledge for your average computer science major, but I was only one of those briefly and I don’t think it was covered in the assembler course that convinced me to find another major.
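For what it’s worth, that assumption lines up with IEEE 754 floating-point arithmetic, which JavaScript exposes directly; a quick console sketch shows the saturating behavior:

```javascript
// IEEE 754 arithmetic on Infinity saturates: adding, subtracting a finite
// amount, or multiplying just hands you Infinity back.
const inf = Infinity;
console.log(inf + 1 === inf);     // true
console.log(inf * 2 === inf);     // true
console.log(inf - 1e308 === inf); // true
console.log(inf - inf);           // NaN, the one combination that doesn't saturate
```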
Looking across all those years back to my time in university got me thinking about infinite spans of time, so I decided to see just how long I could get an animation to run.
div {
animation-name: shift;
animation-duration: calc(infinity * 1s);
}
@keyframes shift {
from {
transform: translateX(0px);
}
to {
transform: translateX(100px);
}
}
<div>I’m timely!</div>
The results were truly something to behold, at least in the cases where beholding was possible. Here’s what I got for the computed animation-duration
value in each browser’s web inspector Computed Values tab or subtab:
Browser | Computed value | As years |
---|---|---|
Safari | 🤷🏽 | |
Chrome | 1.79769e+308s | 5.7004376e+300 |
Firefox (Nightly) | 3.40282e+38s | 1.07902714e+31 |
Those are… very long durations. In Firefox, the <div>
will finish the animation in just a tiny bit over ten nonillion (ten quadrillion quadrillion) years. That’s roughly ten times as long as it will take for nearly all the matter in the known Universe to have been swallowed by supermassive galactic black holes.
In Chrome, on the other hand, completing the animation will take approximately half again as long as our current highest estimate for the amount of time it will take for all the protons and neutrons in the observable Universe to decay into radiation, assuming protons actually decay. (Source: Wikipedia’s Timeline of the far future.)
“Okay, but what about Safari?” you may be asking. Well, there’s no way as yet to find out, because while Safari loads and renders the page like usual, the page then becomes essentially unresponsive. Not the browser, just the page itself. This includes not redrawing or moving the scrollbar gutters when the window is resized, or showing useful information in the Web Inspector. I’ve already filed a bug, so hopefully one day we’ll find out whether its temporal limitations are the same as Chrome’s or not.
It should also be noted that it doesn’t matter whether you supply 1s
or 1ms
as the thing to multiply with infinity
: you get the same result either way. This makes some sense, because any finite number times infinity is still infinity. Well, sort of. But also yes.
So what happens if you divide a finite amount by infinity? In browsers, you very consistently get nothing!
div {
animation-name: shift;
animation-duration: calc(100000000000000000000000s / infinity);
}
(Any finite number could be used there, so I decided to type 1 and then hold the 0 key for a second or two, and use the resulting large number.)
Browser | Computed value |
---|---|
Safari | 0 |
Chrome | 0 |
Firefox (Nightly) | 0 |
Honestly, seeing that kind of cross-browser harmony… that was soothing.
And so we come full circle, from something that yielded consistent results to something else that yields consistent results. Sometimes, it’s the little wins that count the most.
Just not infinitely.
What’s a good maximum line length for your coding standard?
This is, of course, a trick question. By posing it as a question, I have created the misleading impression that it is a question, but Black has selected the correct number for you; it’s 88, which is obviously very lucky.
Thanks for reading my blog.
OK, OK. Clearly, there’s more to it than that. This is an age-old debate on the level of “tabs versus spaces”. So contentious, in fact, that even the famously opinionated Black does in fact let you change it.
One argument that certain silly people like to make is “why are we wrapping at 80 characters like we are using 80 character teletypes, it’s the 2020s! I have an ultrawide monitor!”. The implication here is that the width of 80-character terminals is an antiquated relic, based entirely around the hardware limitations of a bygone era, and modern displays can put tons of stuff on one line, so why not use that capability?
This feels intuitively true, given the huge disparity between ancient times and now: on my own display, I can comfortably fit about 350 characters on a line. What a shame, to have so much room for so many characters in each line, and to waste it all on blank space!
But... is that true?
I stretched out my editor window all the way to measure that ‘350’ number, but I did not continue editing at that window width. For a more comfortable editing experience, I switched back into writeroom mode, which emulates a considerably more writerly application and limits each line to 92 characters, regardless of frame width.
You’ve probably noticed this too. Almost all sites that display prose of any kind limit their width, even on very wide screens.
As silly as that tiny little ribbon of text running down the middle of your monitor might look on a full-screened stereotypical news site or blog, try full-screening a site that doesn’t set that width limit: even though it makes sense that you can now use all that space, it looks extremely, almost unreadably, bad.
Blogging software does not set a column width limit on your text because of some 80-character-wide accident of history in the form of a hardware terminal.
Similarly, if you really try to use that screen real estate to its fullest for coding, and start editing 200-300 character lines, you’ll quickly notice it starts to feel just a bit weird and confusing. It gets surprisingly easy to lose your place. Rhetorically, the “80 characters is just because of dinosaur technology! Use all those ultrawide pixels!” talking point is quite popular, but practically, people usually just want a few more characters’ worth of breathing room, maxing out at 100 characters, far narrower than even the most svelte widescreen.
So maybe those 80 character terminals are holding us back a little bit, but... wait a second. Why were the terminals 80 characters wide in the first place?
As this lovely Software Engineering Stack Exchange post summarizes, terminals were probably 80 characters because teletypes were 80 characters, and teletypes were probably 80 characters because punch cards were 80 characters, and punch cards were probably 80 characters because that’s just about how many typewritten characters fit onto one line of a US-Letter piece of paper.
Even before typewriters, consider the average newspaper: why do we call a regularly-occurring featured article in a newspaper a “column”? Because broadsheet papers were too wide to have only a single column; they would always be broken into multiple! Far more aggressive than 80 characters, columns in newspapers typically have 30 characters per line.
The first newspaper printing machines were custom designed and could have used whatever width they wanted, so why standardize on something so narrow?
There has been a surprising amount of scientific research around this issue, but in brief, there’s a reason here rooted in human physiology: when you read a block of text, you are not consciously moving your eyes from word to word like you’re dragging a mouse cursor, repositioning continuously. Human eyes reading text move in quick bursts of rotation called “saccades”. In order to quickly and accurately move from one line of text to another, the start of the next line needs to be clearly visible in the reader’s peripheral vision in order for them to accurately target it. This limits the angle of rotation that the reader can perform in a single saccade, and, thus, the length of a line that they can comfortably read without hunting around for the start of the next line every time they get to the end.
So, 80 (or 88) characters isn’t too unreasonable for a limit. It’s longer than 30 characters, that’s for sure!
But, surely that’s not all, or this wouldn’t be so contentious in the first place?
The ultrawide aficionados do have a point, even if it’s not really the simple one about “old terminals” they originally thought. Our modern wide-screen displays are criminally underutilized, particularly for text. Even adding in the big chunky file, class, and method tree browser over on the left and the source code preview on the right, a brief survey of a Google Image search for “vs code” shows a lot of editors open with huge, blank areas on the right side of the window.
Big screens are super useful as they allow us to leverage our spatial memories to keep more relevant code around and simply glance around as we think, rather than navigate interactively. But it only works if you remember to do it.
Newspapers allowed us to read a ton of information in one sitting with minimum shuffling by packing in as much as 6 columns of text. You could read a column to the bottom of the page, back to the top, and down again, several times.
Similarly, books fill both of their opposed pages with text at the same time, doubling the amount of stuff you can read at once before needing to turn the page.
You may notice that reading text in a book, even in an ebook app, is more comfortable than reading a ton of text by scrolling around in a web browser. That’s because our eyes are built for saccades, and repeatedly tracking the continuous smooth motion of the page as it scrolls to a stop, then re-targeting the new fixed location to start saccading around from, is literally more physically strenuous on your eye’s muscles!
There’s a reason that the codex was a big technological innovation over the scroll. This is a regression!
Today, the right thing to do here is to make use of horizontally split panes in your text editor or IDE, and just make a bit of conscious effort to set up the appropriate code on screen for the problem you’re working on. However, this is a potential area for different IDEs to really differentiate themselves, and build multi-column continuous-code-reading layouts that allow for buffers to wrap and be navigable newspaper-style.
Similarly, modern CSS has shockingly good support for multi-column layouts, and it’s a shame that true multi-column, page-turning layouts are so rare. If I ever figure out a way to deploy this here that isn’t horribly clunky and fighting modern platform conventions like “scrolling horizontally is substantially more annoying and inconsistent than scrolling vertically,” maybe I will experiment with such a layout on this blog one day. Until then… just make the browser window narrower so other useful stuff can be in the other parts of the screen, I guess.
But, I digress. While I think that columnar layouts for reading prose are an interesting thing more people should experiment with, code isn’t prose.
The metric used for ideal line width, which you may have noticed if you clicked through some of those Wikipedia links earlier, is not “character cells in your editor window”, it is characters per line, or “CPL”.
With an optimal CPL somewhere between 45 and 95, a code-line-width of somewhere around 90 might actually be the best idea, because whitespace uses up your line-width budget. In a typical object-oriented Python program, most of your code ends up indented by at least 8 spaces: 4 for the class scope, 4 for the method scope. Most likely a lot of it is 12, because any interesting code will have at least one conditional or loop. So, by the time you’re done wasting all that horizontal space, a max line length of 90 actually looks more like a maximum of 78... right about that sweet spot from the US-Letter page in the typewriter that we started with.
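That indentation arithmetic can be sketched in a couple of lines of JavaScript (the function name and 4-space default here are mine, purely illustrative):

```javascript
// How much of a line-length budget survives once block indentation
// takes its cut: each nesting level costs `indentWidth` columns.
function effectiveCPL(maxLineLength, indentLevels, indentWidth = 4) {
  return maxLineLength - indentLevels * indentWidth;
}

console.log(effectiveCPL(90, 3)); // class + method + loop: 90 - 12 = 78
```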
In principle, source code is structured information, whose presentation could be fully decoupled from its serialized representation. Everyone could configure their preferred line width appropriate to their custom preferences and the specific physiological characteristics of their eyes, and the code could be formatted according to the language it was expressed in, and “hard wrapping” could be a silly antiquated thing.
The problem with this argument is the same as the argument against “but tabs are semantic indentation”, to wit: nope, no it isn’t. What “in principle” means in the previous paragraph is actually “in a fantasy world which we do not inhabit”. I’d love it if editors treated code this way and we had a rich history and tradition of structured manipulations rather than typing in strings of symbols to construct source code textually. But that is not the world we live in. Hard wrapping is unfortunately necessary to integrate with diff tools.
The exact, specific number here is still ultimately a matter of personal preference.
Hopefully, understanding the long history, science, and underlying physical constraints can lead you to select a contextually appropriate value for your own purposes that will balance ease of reading, integration with the relevant tools in your ecosystem, diff size, presentation in the editors and IDEs that your contributors tend to use, reasonable display in web contexts, on presentation slides, and so on.
But — and this is important — counterpoint:
No it isn’t, you don’t need to select an optimal width, because it’s already been selected for you. It is 88.
Thank you for reading, and especially thank you to my patrons who are supporting my writing on this blog. If you like what you’ve read here and you’d like to read more of it, or you’d like to support my various open-source endeavors, you can support my work as a sponsor!
Node.js has undergone a remarkable transformation since its early days. If you’ve been writing Node.js for several years, you’ve likely witnessed this evolution firsthand—from the callback-heavy, CommonJS-dominated landscape to today’s clean, standards-based development experience.
The changes aren’t just cosmetic; they represent a fundamental shift in how we approach server-side JavaScript development. Modern Node.js embraces web standards, reduces external dependencies, and provides a more intuitive developer experience. Let’s explore these transformations and understand why they matter for your applications in 2025.
The module system is perhaps where you’ll notice the biggest difference. CommonJS served us well, but ES Modules (ESM) have become the clear winner, offering better tooling support and alignment with web standards.
Let’s look at how we used to structure modules. This approach required explicit exports and synchronous imports:
// math.js
function add(a, b) {
return a + b;
}
module.exports = { add };
// app.js
const { add } = require('./math');
console.log(add(2, 3));
This worked fine, but it had limitations—no static analysis, no tree-shaking, and it didn’t align with browser standards.
Modern Node.js development embraces ES Modules with a crucial addition—the node:
prefix for built-in modules. This explicit naming prevents confusion and makes dependencies crystal clear:
// math.js
export function add(a, b) {
return a + b;
}
// app.js
import { add } from './math.js';
import { readFile } from 'node:fs/promises'; // Modern node: prefix
import { createServer } from 'node:http';
console.log(add(2, 3));
The node:
prefix is more than just a convention—it’s a clear signal to both developers and tools that you’re importing Node.js built-ins rather than npm packages. This prevents potential conflicts and makes your code more explicit about its dependencies.
One of the most game-changing features is top-level await. No more wrapping your entire application in an async function just to use await at the module level:
// app.js - Clean initialization without wrapper functions
import { readFile } from 'node:fs/promises';
import { createServer } from 'node:http';
const config = JSON.parse(await readFile('config.json', 'utf8'));
const server = createServer(/* ... */);
console.log('App started with config:', config.appName);
This eliminates the common pattern of immediately-invoked async function expressions (IIFE) that we used to see everywhere. Your code becomes more linear and easier to reason about.
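For contrast, here is roughly what that old wrapper looked like; the loadConfig stand-in below is hypothetical, but the shape of the IIFE is the part that matters:

```javascript
// Before top-level await: module-level async work had to be wrapped in an
// immediately-invoked async function expression (IIFE).
async function loadConfig() {
  // stand-in for readFile('config.json') or a fetch; any promise behaves the same
  return { appName: 'demo-app' };
}

(async () => {
  const config = await loadConfig();
  console.log('App started with config:', config.appName);
})();
```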
Node.js has embraced web standards in a big way, bringing APIs that web developers already know directly into the runtime. This means fewer dependencies and more consistency across environments.
Remember when every project needed axios, node-fetch, or similar libraries for HTTP requests? Those days are over. Node.js now includes the Fetch API natively:
// Old way - external dependencies required
const axios = require('axios');
const response = await axios.get('https://api.example.com/data');
// Modern way - built-in fetch with enhanced features
const response = await fetch('https://api.example.com/data');
const data = await response.json();
But the modern approach goes beyond just replacing your HTTP library. You get sophisticated timeout and cancellation support built-in:
async function fetchData(url) {
try {
const response = await fetch(url, {
signal: AbortSignal.timeout(5000) // Built-in timeout support
});
if (!response.ok) {
throw new Error(`HTTP ${response.status}: ${response.statusText}`);
}
return await response.json();
} catch (error) {
if (error.name === 'TimeoutError') {
throw new Error('Request timed out');
}
throw error;
}
}
This approach eliminates the need for timeout libraries and provides a consistent error handling experience. The AbortSignal.timeout()
method is particularly elegant—it creates a signal that automatically aborts after the specified time.
Modern applications need to handle cancellation gracefully, whether it’s user-initiated or due to timeouts. AbortController provides a standardized way to cancel operations:
// Cancel long-running operations cleanly
const controller = new AbortController();
// Set up automatic cancellation
setTimeout(() => controller.abort(), 10000);
try {
const response = await fetch('https://slow-api.com/data', {
signal: controller.signal
});
const data = await response.json();
console.log('Data received:', data);
} catch (error) {
if (error.name === 'AbortError') {
console.log('Request was cancelled - this is expected behavior');
} else {
console.error('Unexpected error:', error);
}
}
This pattern works across many Node.js APIs, not just fetch. You can use the same AbortController with file operations, database queries, and any async operation that supports cancellation.
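For example, the fs promise APIs accept the same signal option. In this sketch the filename is hypothetical, and the controller is aborted up front so the cancellation path runs deterministically:

```javascript
import { readFile } from 'node:fs/promises';

const controller = new AbortController();
controller.abort(); // pretend the user already navigated away

try {
  await readFile('huge-dataset.csv', { signal: controller.signal });
} catch (error) {
  if (error.name === 'AbortError') {
    console.log('Read cancelled before completion');
  } else {
    throw error;
  }
}
```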
Testing used to require choosing between Jest, Mocha, Ava, or other frameworks. Node.js now includes a full-featured test runner that covers most testing needs without any external dependencies.
The built-in test runner provides a clean, familiar API that feels modern and complete:
// test/math.test.js
import { test, describe } from 'node:test';
import assert from 'node:assert';
import { add, multiply } from '../math.js';
describe('Math functions', () => {
test('adds numbers correctly', () => {
assert.strictEqual(add(2, 3), 5);
});
test('handles async operations', async () => {
const result = await multiply(2, 3);
assert.strictEqual(result, 6);
});
test('throws on invalid input', () => {
assert.throws(() => add('a', 'b'), /Invalid input/);
});
});
What makes this particularly powerful is how seamlessly it integrates with the Node.js development workflow:
# Run all tests with built-in runner
node --test
# Watch mode for development
node --test --watch
# Coverage reporting (Node.js 20+)
node --test --experimental-test-coverage
The watch mode is especially valuable during development—your tests re-run automatically as you modify code, providing immediate feedback without any additional configuration.
While async/await isn’t new, the patterns around it have matured significantly. Modern Node.js development leverages these patterns more effectively and combines them with newer APIs.
Modern error handling combines async/await with sophisticated error recovery and parallel execution patterns:
import { readFile, writeFile } from 'node:fs/promises';
async function processData() {
try {
// Parallel execution of independent operations
const [config, userData] = await Promise.all([
readFile('config.json', 'utf8'),
fetch('/api/user').then(r => r.json())
]);
const processed = processUserData(userData, JSON.parse(config));
await writeFile('output.json', JSON.stringify(processed, null, 2));
return processed;
} catch (error) {
// Structured error logging with context
console.error('Processing failed:', {
error: error.message,
stack: error.stack,
timestamp: new Date().toISOString()
});
throw error;
}
}
This pattern combines parallel execution for performance with comprehensive error handling. The Promise.all()
ensures that independent operations run concurrently, while the try/catch provides a single point for error handling with rich context.
Event-driven programming has evolved beyond simple event listeners. AsyncIterators provide a more powerful way to handle streams of events:
import { EventEmitter, once } from 'node:events';
class DataProcessor extends EventEmitter {
async *processStream() {
for (let i = 0; i < 10; i++) {
this.emit('data', `chunk-${i}`);
yield `processed-${i}`;
// Simulate async processing time
await new Promise(resolve => setTimeout(resolve, 100));
}
this.emit('end');
}
}
// Consume events as an async iterator
const processor = new DataProcessor();
for await (const result of processor.processStream()) {
console.log('Processed:', result);
}
This approach is particularly powerful because it combines the flexibility of events with the control flow of async iteration. You can process events in sequence, handle backpressure naturally, and break out of processing loops cleanly.
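You don’t even need a custom async generator for the common case: node:events exports on(), which adapts any EventEmitter into an async iterator directly:

```javascript
import { EventEmitter, on } from 'node:events';

const emitter = new EventEmitter();

// Emit a couple of events once the loop below is listening.
queueMicrotask(() => {
  emitter.emit('data', 'one');
  emitter.emit('data', 'two');
});

const received = [];
for await (const [chunk] of on(emitter, 'data')) {
  received.push(chunk);
  if (received.length === 2) break; // break ends iteration and detaches listeners
}
console.log('Received:', received);
```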
Streams remain one of Node.js’s most powerful features, but they’ve evolved to embrace web standards and provide better interoperability.
Stream processing has become more intuitive with better APIs and clearer patterns:
import { Readable, Transform } from 'node:stream';
import { pipeline } from 'node:stream/promises';
import { createReadStream, createWriteStream } from 'node:fs';
// Create transform streams with clean, focused logic
const upperCaseTransform = new Transform({
objectMode: true,
transform(chunk, encoding, callback) {
this.push(chunk.toString().toUpperCase());
callback();
}
});
// Process files with robust error handling
async function processFile(inputFile, outputFile) {
try {
await pipeline(
createReadStream(inputFile),
upperCaseTransform,
createWriteStream(outputFile)
);
console.log('File processed successfully');
} catch (error) {
console.error('Pipeline failed:', error);
throw error;
}
}
The pipeline
function with promises provides automatic cleanup and error handling, eliminating many of the traditional pain points with stream processing.
Modern Node.js can seamlessly work with Web Streams, enabling better compatibility with browser code and edge runtime environments:
import { Readable } from 'node:stream';

// Create a Web Stream (compatible with browsers)
const webReadable = new ReadableStream({
start(controller) {
controller.enqueue('Hello ');
controller.enqueue('World!');
controller.close();
}
});
// Convert between Web Streams and Node.js streams
const nodeStream = Readable.fromWeb(webReadable);
const backToWeb = Readable.toWeb(nodeStream);
This interoperability is crucial for applications that need to run in multiple environments or share code between server and client.
JavaScript’s single-threaded nature isn’t always ideal for CPU-intensive work. Worker threads provide a way to leverage multiple cores effectively while maintaining the simplicity of JavaScript.
Worker threads are perfect for computationally expensive tasks that would otherwise block the main event loop:
// worker.js - Isolated computation environment
import { parentPort, workerData } from 'node:worker_threads';
function fibonacci(n) {
if (n < 2) return n;
return fibonacci(n - 1) + fibonacci(n - 2);
}
const result = fibonacci(workerData.number);
parentPort.postMessage(result);
The main application can delegate heavy computations without blocking other operations:
// main.js - Non-blocking delegation
import { Worker } from 'node:worker_threads';
import { fileURLToPath } from 'node:url';
async function calculateFibonacci(number) {
return new Promise((resolve, reject) => {
const worker = new Worker(
fileURLToPath(new URL('./worker.js', import.meta.url)),
{ workerData: { number } }
);
worker.on('message', resolve);
worker.on('error', reject);
worker.on('exit', (code) => {
if (code !== 0) {
reject(new Error(`Worker stopped with exit code ${code}`));
}
});
});
}
// Your main application remains responsive
console.log('Starting calculation...');
const result = await calculateFibonacci(40);
console.log('Fibonacci result:', result);
console.log('Application remained responsive throughout!');
This pattern allows your application to utilize multiple CPU cores while keeping the familiar async/await programming model.
Modern Node.js prioritizes developer experience with built-in tools that previously required external packages or complex configurations.
Development workflow has been significantly streamlined with built-in watch mode and environment file support:
{
"name": "modern-node-app",
"type": "module",
"engines": {
"node": ">=20.0.0"
},
"scripts": {
"dev": "node --watch --env-file=.env app.js",
"test": "node --test --watch",
"start": "node app.js"
}
}
The --watch
flag eliminates the need for nodemon, while --env-file
removes the dependency on dotenv. Your development environment becomes simpler and faster:
// .env file automatically loaded with --env-file
// DATABASE_URL=postgres://localhost:5432/mydb
// API_KEY=secret123
// app.js - Environment variables available immediately
console.log('Connecting to:', process.env.DATABASE_URL);
console.log('API Key loaded:', process.env.API_KEY ? 'Yes' : 'No');
These features make development more pleasant by reducing configuration overhead and eliminating restart cycles.
Security and performance have become first-class concerns with built-in tools for monitoring and controlling application behavior.
The experimental permission model allows you to restrict what your application can access, following the principle of least privilege:
# Run with restricted file system access
node --experimental-permission --allow-fs-read=./data --allow-fs-write=./logs app.js
# Network restrictions
node --experimental-permission --allow-net=api.example.com app.js
# Note: the --allow-net flag above is not available yet; the PR has been merged
# in the Node.js repo and the feature will ship in a future release
This is particularly valuable for applications that process untrusted code or need to demonstrate security compliance.
Performance monitoring is now built into the platform, eliminating the need for external APM tools for basic monitoring:
import { PerformanceObserver, performance } from 'node:perf_hooks';
// Set up automatic performance monitoring
const obs = new PerformanceObserver((list) => {
for (const entry of list.getEntries()) {
if (entry.duration > 100) { // Log slow operations
console.log(`Slow operation detected: ${entry.name} took ${entry.duration}ms`);
}
}
});
obs.observe({ entryTypes: ['function', 'http', 'dns'] });
// Instrument your own operations
async function processLargeDataset(data) {
performance.mark('processing-start');
const result = await heavyProcessing(data);
performance.mark('processing-end');
performance.measure('data-processing', 'processing-start', 'processing-end');
return result;
}
This provides visibility into application performance without external dependencies, helping you identify bottlenecks early in development.
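The 'function' entry type used above comes from performance.timerify(), which wraps any function so that each invocation is recorded as a performance entry:

```javascript
import { performance, PerformanceObserver } from 'node:perf_hooks';

function parseRecords(rows) {
  return rows.map((row) => row.split(','));
}

// timerify() returns a wrapped version; each call emits a 'function' entry.
const timedParse = performance.timerify(parseRecords);

const obs = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(`${entry.name} took ${entry.duration.toFixed(3)}ms`);
  }
  obs.disconnect();
});
obs.observe({ entryTypes: ['function'] });

timedParse(['a,b', 'c,d']); // behaves exactly like parseRecords, plus timing
```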
Modern Node.js makes application distribution simpler with features like single executable applications and improved packaging.
You can now bundle your Node.js application into a single executable file, simplifying deployment and distribution:
# Create a self-contained executable
node --experimental-sea-config sea-config.json
The configuration file defines how your application gets bundled:
{
"main": "app.js",
"output": "my-app-bundle.blob",
"disableExperimentalSEAWarning": true
}
This is particularly valuable for CLI tools, desktop applications, or any scenario where you want to distribute your application without requiring users to install Node.js separately.
Error handling has evolved beyond simple try/catch blocks toward structured errors and comprehensive diagnostics.
Modern applications benefit from structured, contextual error handling that provides better debugging information:
class AppError extends Error {
constructor(message, code, statusCode = 500, context = {}) {
super(message);
this.name = 'AppError';
this.code = code;
this.statusCode = statusCode;
this.context = context;
this.timestamp = new Date().toISOString();
}
toJSON() {
return {
name: this.name,
message: this.message,
code: this.code,
statusCode: this.statusCode,
context: this.context,
timestamp: this.timestamp,
stack: this.stack
};
}
}
// Usage with rich context
throw new AppError(
'Database connection failed',
'DB_CONNECTION_ERROR',
503,
{ host: 'localhost', port: 5432, retryAttempt: 3 }
);
This approach provides much richer error information for debugging and monitoring, while maintaining a consistent error interface across your application.
Node.js includes sophisticated diagnostic capabilities that help you understand what’s happening inside your application:
import diagnostics_channel from 'node:diagnostics_channel';
// Create custom diagnostic channels
const dbChannel = diagnostics_channel.channel('app:database');
const httpChannel = diagnostics_channel.channel('app:http');
// Subscribe to diagnostic events (module-level subscribe; channel.subscribe() is deprecated)
diagnostics_channel.subscribe('app:database', (message) => {
  console.log('Database operation:', {
    operation: message.operation,
    duration: message.duration,
    sql: message.sql
  });
});
// Publish diagnostic information
async function queryDatabase(sql, params) {
const start = performance.now();
try {
const result = await db.query(sql, params);
dbChannel.publish({
operation: 'query',
sql,
params,
duration: performance.now() - start,
success: true
});
return result;
} catch (error) {
dbChannel.publish({
operation: 'query',
sql,
params,
duration: performance.now() - start,
success: false,
error: error.message
});
throw error;
}
}
This diagnostic information can be consumed by monitoring tools, logged for analysis, or used to trigger automatic remediation actions.
Package management and module resolution have become more sophisticated, with better support for monorepos, internal packages, and flexible module resolution.
Modern Node.js supports subpath imports, defined in the imports field of package.json, allowing you to create clean internal module references:
{
"imports": {
"#config": "./src/config/index.js",
"#utils/*": "./src/utils/*.js",
"#db": "./src/database/connection.js"
}
}
This creates a clean, stable interface for internal modules:
// Clean internal imports that don't break when you reorganize
import config from '#config';
import { logger, validator } from '#utils/common';
import db from '#db';
These internal imports make refactoring easier and provide a clear distinction between internal and external dependencies.
Dynamic imports enable sophisticated loading patterns, including conditional loading and code splitting:
// Load features based on configuration or environment
async function loadDatabaseAdapter() {
const dbType = process.env.DATABASE_TYPE || 'sqlite';
try {
const adapter = await import(`#db/adapters/${dbType}`);
return adapter.default;
} catch (error) {
console.warn(`Database adapter ${dbType} not available, falling back to sqlite`);
const fallback = await import('#db/adapters/sqlite');
return fallback.default;
}
}
// Conditional feature loading
async function loadOptionalFeatures() {
const features = [];
if (process.env.ENABLE_ANALYTICS === 'true') {
const analytics = await import('#features/analytics');
features.push(analytics.default);
}
if (process.env.ENABLE_MONITORING === 'true') {
const monitoring = await import('#features/monitoring');
features.push(monitoring.default);
}
return features;
}
This pattern allows you to build applications that adapt to their environment and only load the code they actually need.
As we look at the current state of Node.js development, several key principles emerge:
Embrace Web Standards: Use node: prefixes, the fetch API, AbortController, and Web Streams for better compatibility and reduced dependencies
Leverage Built-in Tools: The test runner, watch mode, and environment file support reduce external dependencies and configuration complexity
Think in Modern Async Patterns: Top-level await, structured error handling, and async iterators make code more readable and maintainable
Use Worker Threads Strategically: For CPU-intensive tasks, worker threads provide true parallelism without blocking the main thread
Adopt Progressive Enhancement: Use permission models, diagnostics channels, and performance monitoring to build robust, observable applications
Optimize for Developer Experience: Watch mode, built-in testing, and import maps create a more pleasant development workflow
Plan for Distribution: Single executable applications and modern packaging make deployment simpler
The transformation of Node.js from a simple JavaScript runtime to a comprehensive development platform is remarkable. By adopting these modern patterns, you’re not just writing contemporary code—you’re building applications that are more maintainable, performant, and aligned with the broader JavaScript ecosystem.
The beauty of modern Node.js lies in its evolution while maintaining backward compatibility. You can adopt these patterns incrementally, and they work alongside existing code. Whether you’re starting a new project or modernizing an existing one, these patterns provide a clear path toward more robust, enjoyable Node.js development.
As we move through 2025, Node.js continues to evolve, but the foundational patterns we’ve explored here provide a solid base for building applications that will remain modern and maintainable for years to come.