For many projects I work on it’s useful to define all of our brand colours in a JavaScript file, particularly as I work on a lot of data visualisations that use them. Here’s an abridged example of how I define brand colours, as well as those used for data visualisations, and their variants:
// theme.js
const theme = {
color: {
brand: {
primary: {
DEFAULT: '#7B1FA2',
light: '#BA68C8',
dark: '#4A148C',
},
secondary: {
DEFAULT: '#E91E63',
light: '#F48FB1',
dark: '#C2185B',
},
},
data: {
blue: '#40C4FF',
turquoise: '#84FFFF',
mint: '#64FFDA',
},
},
}
I also need those variables in my CSS, where they’re defined as custom properties. But I don’t want to have to maintain my colour theme in two places! That’s why I wrote a script that generates a CSS file of custom properties from a JS source file. If you’re interested, here’s how it’s done.
For this walkthrough you’ll need Node and NPM installed. If you’re already familiar with setting up a project using NPM, you can skip over this part. Otherwise, assuming you’ve already installed NPM globally, you’ll need to run npm init in your project root and follow the prompts. This creates a package.json file in the root of your project directory.
We’ll need to create a JS file for our script so we can run it from the command line. For simplicity, let’s create a file called index.js in the project root, and add a single line:
// index.js
console.log('Hello world')
Now we should be able to run node index.js from the terminal and see our “Hello world” message, so we know our very basic script has run successfully.
Now let’s import the theme defined in the JS file from which we want to create our CSS custom properties. We’ll call this theme.js. You’ll need to make sure your file exports the theme so it can be imported elsewhere.
// theme.js
const theme = {
// Theme colours as defined above...
}
export default theme
// index.js
import theme from './theme.js'
console.log(theme)
Running the script again with node index.js, we should see the theme object logged in the terminal. Now we need to actually do something with it!
The aim here is to create CSS custom properties that correspond to the theme object keys. For example:
// theme.js
const theme = {
color: {
primary: 'red',
secondary: 'blue',
},
}
Would become:
/* styles.css */
:root {
--color-primary: red;
--color-secondary: blue;
}
However, our theme as defined in our JS file isn’t quite so simple. As you can see from the example at the beginning, some of our colour definitions include multiple lighter or darker variants, nested more than one level deep.
What we would like here is to map our colours so that their custom property names are prefixed with their ancestor property names. For example, we would use --color-brand-primary for the default primary brand colour, and --color-brand-primary-light for its lighter variant.
:root {
--color-brand-primary: #7b1fa2;
--color-brand-primary-light: #ba68c8;
}
We shouldn’t assume that all colours will have the same property names either. We should be able to define them using any names we like, nested as many levels deep as required.
Note, I’m including color here as a property of theme. That’s because the actual theme configuration might include things like font families too. We’ll keep it simple and focus on colour here, but the script we’re going to write should (theoretically!) work for any object properties of the theme.
We’ll write a function that looks at any key/value pair and returns the CSS custom property definition as a string.
The first part is easy enough:
// index.js
import theme from './theme.js'
const mapTheme = ([key, value]) => {
// If value is a string, return the result
if (typeof value === 'string') {
return `--${key}: ${value}`
}
}
This would work fine if we had a very simple theme, like this:
const theme = {
purple: '#7B1FA2',
pink: '#E91E63',
}
We could convert our theme object to an array using Object.entries() and map over the entries with this function:
// index.js
import theme from './theme.js'
const mapTheme = ([key, value]) => {
// If value is a string, return the result
if (typeof value === 'string') {
return `--${key}: ${value}`
}
}
console.log(Object.entries(theme).map(mapTheme))
// result: ['--purple: #7B1FA2', '--pink: #E91E63']
However, that’s not going to be enough for our nested theme variables. Instead we’ll amend the mapTheme() function so that, if the value is not a string, it calls itself recursively on the nested object’s entries:
//index.js
const mapTheme = ([key, value]) => {
// If value is a string, return the result
if (typeof value === 'string') {
return `--${key}: ${value}`
}
// Otherwise, call the function again to check the next pair
return Object.entries(value).flatMap(mapTheme)
}
console.log(Object.entries(theme).flatMap(mapTheme))
You might notice we’re using the flatMap() array method instead of map() as above. This is so that the result is output as a flat array, which is what we want, instead of nesting the custom properties.
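To see the difference at a glance, here’s a tiny standalone comparison, separate from our theme code:
// map() keeps nested arrays nested...
console.log([[1], [2, 3]].map((x) => x)) // [ [ 1 ], [ 2, 3 ] ]
// ...while flatMap() flattens the result by one level
console.log([[1], [2, 3]].flatMap((x) => x)) // [ 1, 2, 3 ]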
If we check the result at this point, we’ll see it’s not quite what we want. We end up with custom property names that correspond to the nested object keys but don’t tell us anything about the parent groups. We also end up with duplicates:
[
'--DEFAULT: #7B1FA2',
'--light: #BA68C8',
'--dark: #4A148C',
'--DEFAULT: #E91E63',
'--light: #F48FB1',
'--dark: #C2185B',
'--blue: #40C4FF',
'--turquoise: #84FFFF',
'--mint: #64FFDA',
]
If we want more useful custom property names we’ll need to append each name to its parent group name, unless the key is DEFAULT, in which case we’ll simply return the parent group key.
// index.js
import theme from './theme.js'
const mapTheme = ([key, value]) => {
// If value is a string, return the result
if (typeof value === 'string') {
return `--${key}: ${value}`
}
return Object.entries(value).flatMap(([nestedKey, nestedValue]) => {
// Append to the custom property name, unless default value
const newKey = nestedKey === 'DEFAULT' ? key : `${key}-${nestedKey}`
// Check the new key/value pair
return mapTheme([newKey, nestedValue])
})
}
console.log(Object.entries(theme).flatMap(mapTheme))
This results in far more helpful names:
[
'--color-brand-primary: #7B1FA2',
'--color-brand-primary-light: #BA68C8',
'--color-brand-primary-dark: #4A148C',
'--color-brand-secondary: #E91E63',
'--color-brand-secondary-light: #F48FB1',
'--color-brand-secondary-dark: #C2185B',
'--color-data-blue: #40C4FF',
'--color-data-turquoise: #84FFFF',
'--color-data-mint: #64FFDA',
]
By the way, we could do this in a slightly different way with a for loop. It’s a similar amount of code, but we don’t need the nested flatMap, which might make for a slightly more elegant solution (you be the judge!):
// index.js
let result = []
const mapTheme = (obj, key = null) => {
for (const property in obj) {
let name = key || property
if (property !== 'DEFAULT' && !!key) {
name = `${key}-${property}`
}
if (typeof obj[property] === 'string') {
result.push(`--${name}: ${obj[property]}`)
} else {
mapTheme(obj[property], name)
}
}
}
mapTheme(theme)
console.log(result)
Now we can take these values and write them to a CSS file for use in our project. We could simply copy them from the console, but better still, we can have the script write the file for us.
We’ll import the writeFile method from Node’s built-in fs/promises module and write a new async function called buildTheme, which we’ll call at the end of the file. (We’ll remove the console log from the previous example.)
// index.js
import { writeFile } from 'fs/promises'
import theme from './theme.js'
const mapTheme = ([key, value]) => {
/* ... */
}
const buildTheme = async () => {
try {
console.log(Object.entries(theme).flatMap(mapTheme))
} catch (e) {
console.error(e)
}
}
buildTheme()
We should now be able to run the script from the command line by typing node index.js and see the result logged.
Next we’ll convert the custom properties into a suitable format for our CSS file. We’ll want each custom property to be indented and set on its own line, which we can do with the escape characters \t and \n respectively.
// index.js
import { writeFile } from 'fs/promises'
import theme from './theme.js'
const mapTheme = ([key, value]) => {
/* ... */
}
const buildTheme = async () => {
try {
const result = Object.entries(theme).flatMap(mapTheme)
// Indent each custom property and append a semicolon
let content = result.map((line) => `\t${line};`)
// Append and prepend brackets, and put each item on a new line
content = [':root {', ...content, '}'].join('\n')
console.log(content)
} catch (e) {
console.error(e)
}
}
buildTheme()
All that remains is to write the result to a CSS file, using the writeFile() method. We’ll need to specify the location of the file we want to write to, and its character encoding, which will be 'utf-8'. We’re including a helpful console log informing the user that the file has been written, and ensuring we catch any errors by also logging them to the console.
// index.js
import { writeFile } from 'fs/promises'
import theme from './theme.js'
const mapTheme = ([key, value]) => {
/* ... */
}
const buildTheme = async () => {
try {
const result = Object.entries(theme).flatMap(mapTheme)
let content = result.map((line) => `\t${line};`)
content = [':root {', ...content, '}'].join('\n')
// Write to the file
await writeFile('src/theme.css', content, { encoding: 'utf-8' })
console.log('CSS file written')
} catch (e) {
console.error(e)
}
}
buildTheme()
Running the script now outputs the CSS file we need.
:root {
--color-brand-primary: #7b1fa2;
--color-brand-primary-light: #ba68c8;
--color-brand-primary-dark: #4a148c;
--color-brand-secondary: #e91e63;
--color-brand-secondary-light: #f48fb1;
--color-brand-secondary-dark: #c2185b;
--color-data-blue: #40c4ff;
--color-data-turquoise: #84ffff;
--color-data-mint: #64ffda;
}
Here’s the complete file:
// index.js
import { writeFile } from 'fs/promises'
import theme from './theme.js'
const mapTheme = ([key, value]) => {
if (typeof value === 'string') {
return `--${key}: ${value}`
}
return Object.entries(value).flatMap(([nestedKey, nestedValue]) => {
const newKey = nestedKey === 'DEFAULT' ? key : `${key}-${nestedKey}`
return mapTheme([newKey, nestedValue])
})
}
const buildTheme = async () => {
try {
const result = Object.entries(theme).flatMap(mapTheme)
let content = result.map((line) => `\t${line};`)
content = [':root {', ...content, '}'].join('\n')
await writeFile('src/theme.css', content, { encoding: 'utf-8' })
console.log('CSS file written')
} catch (e) {
console.error(e)
}
}
buildTheme()
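One final note: because index.js uses ES module syntax (import/export), Node needs to treat it as a module. A minimal package.json for this setup might look like the following; the build:theme script name is just a suggestion:
// package.json
{
"type": "module",
"scripts": {
"build:theme": "node index.js"
}
}
With that in place, npm run build:theme regenerates src/theme.css whenever the theme changes.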
Author's note
This article was originally published on the Superface blog before the company’s pivot to an agentic tooling platform, and is republished here with the company’s permission.
I am no longer working with geocoding APIs and the content of this article may be outdated.
Geocoding is the process of converting an address to geolocation coordinates (latitude and longitude). Reverse geocoding is the opposite: assigning a street address to the given coordinates.
How do you build geocoding into your application? The easiest way is to use a geocoding API, which often includes reverse geocoding and address data cleaning functions as well.
The good news is that there isn’t a shortage of geocoding API providers to choose from. The bad news is that you have to pick one. Which is why we’re here: to help you decide on the most suitable geocoding API for your project.
In this article, we will look at each provider’s pricing model and terms of use. In follow-up articles, we will also explore additional criteria.
At Superface we don’t provide a geocoding API. Instead, we are building a universal API client which lets you connect to any API and any provider – directly from your application without passing the data through our servers. You can even use multiple providers behind a single interface without the need to study the documentation for each or keep up with the API changes.
Geocoding is particularly one domain where your project can benefit from using multiple API providers. Whether it’s for accuracy, cost management, or legal reasons. Our goal is to provide you with accurate and impartial information about geocoding APIs, and we will show you how you can use them immediately with OneSDK, our API client.
Oh, and one more thing: OneSDK is free and open-source, it doesn’t matter whether you will use it for geocoding twice or a billion times. Our business is built around providing the connectors to the APIs and their long-term support, but not around the usage volume.
| Provider | Free Requests | Rate Limit (requests per second) | Pricing (per 1,000 requests) | Additional Notes |
| --- | --- | --- | --- | --- |
| HERE | 30,000/month | 5 | $0.83 up to 5M; $0.66 up to 10M | |
| Google Maps | 40,000/month ($200 credit) | 50 | $5 up to 100,000; $4 up to 500,000 | Attribution & Google Maps required |
| Azure Maps | 5,000/month | 500 (geocoding), 250 (reverse geocoding) | $4.50 | |
| OpenCage | 2,500/day | 1 (free), 15 (X-Small) up to 40 (Large) | $0.17 (10,000 per day); $0.11 (300,000 per day)[1] | Free trial for testing only; monthly fixed pricing |
| TomTom Maps | 2,500/day | 5 | $0.54 | |
| LocationIQ | 5,000/day | 2 | $0.16 (10,000 per day); $0.03 (1M per day)[1] | Free plan requires attribution; monthly fixed pricing |
| Nominatim | n/a | 1 | n/a | Low-volume, noncommercial use only; attribution required |
HERE’s pricing starts with the Limited Plan, which provides you with 1,000 free requests per day, with a rate limit of 5 requests per second.
If you provide payment information, you are upgraded to the Base Plan. The Base Plan removes the rate limit and sets you up for 30,000 free requests per month. Above that, requests up to 5 million are $0.830 per 1,000, and $0.660 per 1,000 requests between 5 and 10 million per month.
Google Maps Platform requires you to provide billing details to use the Geocoding API, and provides you with $200 of free credit per month, which is good for 40,000 free geocoding API requests (check the Geocoding API Usage and Billing).
If that’s not enough, the pricing starts at $5 per 1,000 requests up to 100,000 requests per month. Above that, the price gets lower to $4 per 1,000 requests up to 500,000 requests.
Regardless of usage, there’s a rate limit of 50 requests per second. Google also prohibits displaying geocoding results on any map other than Google Maps, and requires displaying the Google logo for attribution.
Azure Maps provides 5,000 free requests per month (see the pricing for Azure Maps Search), and the price per 1,000 requests above that is $4.50 (up to 500,000 requests).
Queries are rate limited to 500 per second for geocoding, and 250 per second for reverse geocoding.
OpenCage’s pricing options are richer than the other services’. You have a choice of purchasing a one-time request package (valid for up to one year), or subscribing to different usage tiers on a monthly or annual basis.
The free tier is intended only for testing and development and provides you with 2,500 requests per day, rate limited to 1 request per second. The cheapest package costs $50 per month and comes with 10,000 requests per day (about $0.17 per 1,000 requests) and a rate limit of 15 requests per second. The biggest pre-Enterprise package costs $1,000 per month, with 300,000 requests per day and a rate limit of 40 requests per second (which is approx $0.11 per 1,000 requests).
One nice thing is that the daily request limit is “soft” – if you occasionally cross the limit, the service won’t be blocked, and you won’t be charged anything extra. Only if you repeatedly exceed the limit will OpenCage ask you to upgrade your plan for the next month.
LocationIQ’s pricing is very similar to OpenCage’s. You have a choice of plans paid on a monthly or annual basis, but no option to purchase a one-time request package.
The free tier does allow commercial usage as long as you include a link in your application to LocationIQ. Furthermore, the free tier limit is doubled compared to OpenCage, with 5,000 free requests allowed per day and a rate limit of 2 requests per second. The smallest package is basically the same as OpenCage's: it costs $49 per month and comes with 10,000 requests per day (about $0.16 per 1,000 requests) and a rate limit of 15 requests per second. However, the biggest package includes 1 million requests per day for $950 per month (about $0.03 per 1,000 requests).
Similar to OpenCage, LocationIQ has a “soft” limit for daily requests, allowing requests “upto an additional 100% of your daily limit”. For example, on the smallest package, you can occasionally perform 20,000 requests per day before getting an error.
TomTom provides a generous free tier with 2,500 requests per day available for commercial applications as well. Above that, 1,000 requests cost €0.50 ($0.54).
Nominatim is a bit different from the other services on this list. It’s primarily an open-source project that uses data from OpenStreetMap. And conversely, OpenStreetMap’s search is powered by Nominatim. You can (and should) run Nominatim on your server, but if you just want to try the API or have a low-volume hobby project, you’re welcome to use the Nominatim instance provided by OpenStreetMap.
However, pay close attention to its usage policy: in particular, it is rate limited to 1 request per second and intended only for low-volume, noncommercial use, with attribution required.
Nominatim is also used by some commercial providers, including OpenCage and LocationIQ.
While each service has different pricing tiers, we can compare prices based on the number of requests made. We’ve omitted Nominatim from this comparison, since it’s always free but isn’t intended for commercial projects.
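As a rough, back-of-the-envelope illustration based on the table above (assuming a 30-day month, steady traffic, and ignoring taxes and plan minimums): at 100,000 requests per month, TomTom comes out at roughly $13.50 (about 75,000 requests fit within the free daily quota, leaving 25,000 billable at $0.54 per 1,000), HERE at roughly $58 (70,000 billable at $0.83), LocationIQ and OpenCage at their flat $49 and $50 monthly plans, Google Maps at $300 (60,000 billable at $5), and Azure Maps at roughly $428 (95,000 billable at $4.50).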
Based purely on pricing, we can draw a conclusion about each provider.
Azure Maps is, for higher-volume scenarios, the most expensive option, with a low free tier and a fixed price per request. Similar to Google Maps, Azure Maps’ price per 1,000 requests is almost ten times higher than other providers’.
Google Maps Platform is similarly expensive, but also the most restrictive provider, with requirements for attribution and for displaying data using their embedded maps. This can introduce additional costs, as Google Maps with the JavaScript API is also paid per usage.
OpenCage and LocationIQ both provide monthly plans with a fixed price. OpenCage also provides the option to purchase one-off usage credits and handles billing in multiple currencies automatically. LocationIQ, on the other hand, provides a more generous free tier, and their monthly plans are cheaper, especially for higher-volume usage. The “Business Plus” plan in particular allows for 1 million requests per day, allowing for a whopping 30 million requests per month without negotiating custom pricing. A monthly subscription probably makes the most sense if your usage volume of the geocoding API is consistent throughout the month.
On the other hand, TomTom Maps may be preferable if your usage is uneven. The price per 1,000 calls is among the lowest, and you get a large number of free requests per day. And unlike OpenCage and LocationIQ, you don’t need to pay a monthly subscription. The commercial-friendly free tier is also a great option for smaller and low-budget projects.
HERE is a viable option for high-volume usage. While most providers require you to upgrade to the (presumably expensive) Enterprise plan once you use around 500,000 requests/month, HERE will ask you only once you reach 10 million monthly requests. (However, LocationIQ allows for 1 million requests per day with their biggest package.)
Finally, Nominatim is a special option. Great for small projects, but not intended for commercial usage. Still, if you use the service, consider supporting the project.
The article was updated on June 26, 2023, to include LocationIQ per the provider's request.
Python is one of the most widely adopted programming languages in the world. Yet, because of how easy it is to just “get something working”, it’s also one of the most underappreciated.
If you search for Top 10 Advanced Python Tricks on Google or any other search engine, you’ll find tons of blogs or LinkedIn articles going over trivial (but still useful) things like generators or tuples.
However, as someone who’s written Python for the past 12 years, I’ve come across a lot of really interesting, underrated, unique, or (as some might say) “un-pythonic” tricks to really level up what Python can do.
That’s why I decided to compile the top 14 of said features alongside examples and additional resources if you want to dive deeper into any of them.
These tips & tricks were originally featured as part of a 14-day series on X/Twitter between March 1st and March 14th (Pi Day, hence the 14 topics in this article).
All X/Twitter links will also be accompanied by a Nitter counterpart. Nitter is a privacy-respecting open-source Twitter frontend. Learn more about the project here.
@overload is a decorator from Python’s typing module that lets you define multiple signatures for the same function. Each overload tells the type checker exactly what types to expect when specific parameters are passed in.
For example, the code below dictates that only list[str] can be returned if mode="split", and only str can be returned if mode="upper". (The Literal type also forces mode to be one of "split" or "upper".)
from typing import Literal, overload
@overload
def transform(data: str, mode: Literal["split"]) -> list[str]:
...
@overload
def transform(data: str, mode: Literal["upper"]) -> str:
...
def transform(data: str, mode: Literal["split", "upper"]) -> list[str] | str:
if mode == "split":
return data.split()
else:
return data.upper()
split_words = transform("hello world", "split") # Return type is list[str]
split_words[0] # Type checker is happy
upper_words = transform("hello world", "upper") # Return type is str
upper_words.lower() # Type checker is happy
upper_words.append("!") # Cannot access attribute "append" for "str"
Overloads can do more than just change the return type based on arguments! In another example, we use typing overloads to ensure that either id OR username is passed in, but never both.
@overload
def get_user(id: int = ..., username: None = None) -> User:
...
@overload
def get_user(id: None = None, username: str = ...) -> User:
...
def get_user(id: int | None = None, username: str | None = None) -> User:
...
get_user(id=1) # Works!
get_user(username="John") # Works!
get_user(id=1, username="John") # No overloads for "get_user" match the provided arguments
The ... (ellipsis) is a special value often used in overloads to indicate that a parameter is optional, but still requires a value.
✨ Quick bonus trick: As you probably saw, Python also has support for String Literals. These help assert that only specific string values can be passed to a parameter, giving you even more type safety. Think of them like a lightweight form of Enums!
def set_color(color: Literal["red", "blue", "green"]) -> None:
...
set_color("red")
set_color("blue")
set_color("green")
set_color("fuchsia") # Argument of type "Literal['fuchsia']" cannot be assigned to parameter "color"
@overload
By default, function parameters can be passed both positionally and by keyword. However, what if you don’t want that to happen? Keyword-only and positional-only parameters let you control that.
def foo(a, b, /, c, d, *, e, f):
# ^ ^
# Ever seen these before?
...
* (asterisk) marks keyword-only parameters. Arguments after the * must be passed as keyword arguments.
# KW+POS | KW ONLY
# vv | vv
def foo(a, *, b):
...
# == ALLOWED ==
foo(a=1, b=2) # All keyword
foo(1, b=2) # Half positional, half keyword
# == NOT ALLOWED ==
foo(1, 2) # Cannot use positional for keyword-only parameter
# ^
/ (forward slash) marks positional-only parameters. Arguments before the / must be passed positionally and cannot be used as keyword arguments.
# POS ONLY | KW POS
# vv | vv
def bar(a, /, b):
...
# == ALLOWED ==
bar(1, 2) # All positional
bar(1, b=2) # Half positional, half keyword
# == NOT ALLOWED ==
bar(a=1, b=2) # Cannot use keyword for positional-only parameter
# ^
Keyword-only and Positional-only arguments are especially helpful for API developers to enforce how their arguments may be used and passed in.
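For instance, a library author might pin down an API boundary like this (a hypothetical function, purely for illustration):
# path can only be passed positionally; the options only by keyword
def read_config(path, /, *, encoding="utf-8", strict=True):
    print(f"Reading {path} with {encoding=}, {strict=}")

# == ALLOWED ==
read_config("app.toml", encoding="utf-8")
# == NOT ALLOWED ==
read_config("app.toml", "utf-8")  # TypeError: encoding is keyword-only
read_config(path="app.toml")  # TypeError: path is positional-only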
A quick history lesson into Python’s typing:
This is less of a “Python feature” and more of a history lesson into Python’s type system, and what from __future__ import annotations does if you ever encounter it in production code.
Python’s typing system started off as a hack. Function annotation syntax was first introduced with PEP 3107 back in Python 3.0 as purely an extra way to decorate functions with no actual type-checking functionality.
Proper specifications for type annotations were later added in Python 3.5 through PEP 484, but they were designed to be evaluated at bound / definition time. This worked great for simple cases, but it increasingly caused headaches with one type of problem: forward references.
This meant that forward references (using a type before it gets defined) required falling back to string literals, making the code less elegant and more error-prone.
# This won't work
class Foo:
def action(self) -> Foo:
# The `-> Foo` return annotation is evaluated immediately while the
# class body executes, but the name `Foo` is not yet defined at that
# point, causing a NameError at class-definition time.
...
# This is the workaround -> Using string types
class Bar:
def action(self) -> "Bar":
# Workaround with string literals, but ugly and error-prone
...
Introduced as a PEP (Python Enhancement Proposal), PEP 563: Postponed Evaluation of Annotations aimed to fix this by changing when type annotations were evaluated. Instead of evaluating annotations at definition time, PEP 563 “string-ifies” types behind the scenes and postpones evaluation until they’re actually needed, typically during static analysis. This allows for cleaner forward references without explicitly defining string literals and reduces the runtime overhead of type annotations.
from __future__ import annotations
class Foo:
def bar(self) -> Foo: # Works now!
...
So what was the problem?
For type checkers, this change is largely transparent. But because PEP 563 implements this by essentially treating all types as strings behind the scenes, anything that relies on accessing return types at runtime (i.e., ORMs, serialization libraries, validators, dependency injectors, etc.) will have compatibility issues with the new setup.
That’s why, even ten years after the initial proposal, modern Python (3.13 as of writing this) still relies on the same hacked-together type system introduced in Python 3.5.
# ===== Regular Python Typing =====
def foobar() -> int:
return 1
ret_type = foobar.__annotations__.get("return")
ret_type
# Returns: <class 'int'>
new_int = ret_type()
# ===== With Postponed Evaluation =====
from __future__ import annotations
def foobar() -> int:
return 1
ret_type = foobar.__annotations__.get("return")
ret_type
# "int" (str)
new_int = ret_type() # TypeError: 'str' object is not callable
Recently, PEP 649 proposes a new method to handle Python function and class annotations through deferred, or “lazy,” evaluation. Instead of evaluating annotations at the time of function or class definition, as is traditionally done, this approach delays their computation until they are actually accessed.
This is achieved by compiling the annotation expressions into a separate function, stored in a special __annotate__ attribute. When the __annotations__ attribute is accessed for the first time, this function is invoked to compute and cache the annotations, making them readily available for subsequent accesses.
# Example code from the PEP 649 proposal
class function:
# __annotations__ on a function object is already a
# "data descriptor" in Python, we're just changing
# what it does
@property
def __annotations__(self):
return self.__annotate__()
# ...
def annotate_foo():
return {'x': int, 'y': MyType, 'return': float}
def foo(x = 3, y = "abc"):
...
foo.__annotate__ = annotate_foo
class MyType:
...
foo_y_annotation = foo.__annotations__['y']
This deferred evaluation strategy addresses issues like forward references and circular dependencies, as annotations are only evaluated when needed. Moreover, it enhances performance by avoiding the immediate computation of annotations that might not be used, and maintains full semantic information, supporting introspection and runtime type-checking tools.
✨ Bonus Fact: Since Python 3.11, Python now supports a “Self” type (PEP 673) that allows for proper typing of methods that return instances of their own class, solving this particular example of self-referential return types.
from typing import Self
class Foo:
def bar(self) -> Self:
...
__future__ — Future Statement Definitions
Did you know that Python has Generics? In fact, since Python 3.12, a newer, sleeker, and sexier syntax for Generics was introduced.
class KVStore[K: str | int, V]:
def __init__(self) -> None:
self.store: dict[K, V] = {}
def get(self, key: K) -> V:
return self.store[key]
def set(self, key: K, value: V) -> None:
self.store[key] = value
kv = KVStore[str, int]()
kv.set("one", 1)
kv.set("two", 2)
kv.set("three", 3)
Python 3.5 initially introduced Generics through the TypeVar syntax. However, PEP 695 in Python 3.12 revamped type annotations with native syntax for generics, type aliases, and more.
# OLD SYNTAX - Python 3.5 to 3.11
from typing import Generic, TypeVar
UnBounded = TypeVar("UnBounded")
Bounded = TypeVar("Bounded", bound=int)
Constrained = TypeVar("Constrained", int, float)
class Foo(Generic[UnBounded, Bounded, Constrained]):
def __init__(self, x: UnBounded, y: Bounded, z: Constrained) -> None:
self.x = x
self.y = y
self.z = z
# NEW SYNTAX - Python 3.12+
class Foo[UnBounded, Bounded: int, Constrained: int | float]:
def __init__(self, x: UnBounded, y: Bounded, z: Constrained) -> None:
self.x = x
self.y = y
self.z = z
This change also introduces an even more powerful version of variadic generics. Meaning you can have an arbitrary number of type parameters for complex data structures and operations.
class Tuple[*Ts]:
def __init__(self, *args: *Ts) -> None:
self.values = args
# Works with any number of types!
pair = Tuple[str, int]("hello", 42)
triple = Tuple[str, int, bool]("world", 100, True)
Finally, as part of the 3.12 typing changes, Python also introduced a new concise syntax for type aliases!
# OLD SYNTAX - Python 3.5 to 3.9
# (note: NewType creates a distinct type for the checker, not a true alias)
from typing import NewType
Vector = NewType("Vector", list[float])
# OLD-ish SYNTAX - Python 3.10 to 3.11
from typing import TypeAlias
Vector: TypeAlias = list[float]
# NEW SYNTAX - Python 3.12+
type Vector = list[float]
One of Python’s major features (and also major complaints) is its support for Duck Typing. There’s a saying that goes:
“If it walks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck.”
However, that raises the question: How do you type duck typing?
class Duck:
def quack(self): print('Quack!')
class Person:
def quack(self): print("I'm quacking!")
class Dog:
def bark(self): print('Woof!')
def run_quack(obj):
obj.quack()
run_quack(Duck()) # Works!
run_quack(Person()) # Works!
run_quack(Dog()) # Fails with AttributeError
That’s where Protocols come in. Protocols (also known as Structural Subtyping) are typing classes in Python defining the structure or behavior that classes can follow without the use of interfaces or inheritance.
from typing import Protocol
class Quackable(Protocol):
def quack(self) -> None:
... # The ellipsis indicates this is just a method signature
class Duck:
def quack(self): print('Quack!')
class Dog:
def bark(self): print('Woof!')
def run_quack(obj: Quackable):
obj.quack()
run_quack(Duck()) # Works!
run_quack(Dog()) # Fails during TYPE CHECKING (not runtime)
In essence, Protocols check what your object can do, not what it is. They simply state that as long as an object implements certain methods or behaviors, it qualifies, regardless of its actual type or inheritance.
✨ Additional quick tip: Add the @runtime_checkable decorator if you want isinstance() checks to work alongside your Protocols!
from typing import Protocol, runtime_checkable

@runtime_checkable
class Drawable(Protocol):
    def draw(self) -> None:
        ...
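With that decorator in place, isinstance() performs a structural check at runtime. (Circle and the string below are just made-up examples:)
class Circle:
    def draw(self) -> None:
        print("Drawing a circle")

print(isinstance(Circle(), Drawable))  # True – it has a draw() method
print(isinstance("hello", Drawable))   # False – no draw() method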
Context Managers are objects that define the methods __enter__() and __exit__(). The __enter__() method runs when you enter the with block, and the __exit__() method runs when you leave it (even if an exception occurs).
contextlib simplifies this process by wrapping all that boilerplate code in a single easy-to-use decorator.
# OLD SYNTAX - Traditional OOP-style context manager
class retry:
def __enter__(self):
print("Entering Context")
def __exit__(self, exc_type, exc_val, exc_tb):
print("Exiting Context")
# NEW SYNTAX - New contextlib-based context manager
import contextlib
@contextlib.contextmanager
def retry():
print("Entering Context")
yield
print("Exiting Context")
To create your own, write a function with the @contextlib.contextmanager decorator. Add setup code before the yield and cleanup code after it. Whatever you yield is passed to the with block (as the target of as). That’s it.
The yield statement instructs the context manager to pause your function and lets the content within the with block run.
import contextlib
@contextlib.contextmanager
def context():
# Setup code here
setup()
yield (...) # Any variables you want to be passed to the with block
# Teardown code here
takedown()
Overall, this is a much more concise and readable way of creating and using context managers in Python.
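To make that concrete, here’s a minimal sketch of defining and using one; the resource dict is purely illustrative:
import contextlib

@contextlib.contextmanager
def managed_resource():
    resource = {"status": "open"}  # Setup: acquire the (pretend) resource
    try:
        yield resource  # Becomes the target of `as` in the with block
    finally:
        resource["status"] = "closed"  # Teardown runs even if the block raises

with managed_resource() as res:
    print(res["status"])  # open
print(res["status"])  # closed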
contextlib — Utilities for with-statement contexts
Introduced in Python 3.10, Structural Pattern Matching gives Python developers a powerful alternative to traditional conditional logic. At its most basic, the syntax looks like this:
match value:
case pattern1:
# code if value matches pattern1
case pattern2:
# code if value matches pattern2
case _:
# wildcard case (default)
The real power comes with destructuring! Match patterns break down complex data structures and extract values in a single step.
# Destructuring and matching tuples
match point:
case (0, 0):
return "Origin"
case (0, y):
return f"Y-axis at {y}"
case (x, 0):
return f"X-axis at {x}"
case (x, y):
return f"Point at ({x}, {y})"
# Using OR pattern (|) to match multiple patterns
match day:
case ("Monday"
| "Tuesday"
| "Wednesday"
| "Thursday"
| "Friday"):
return "Weekday"
case "Saturday" | "Sunday":
return "Weekend"
# Guard clauses with inline 'if' statements
match temperature:
case temp if temp < 0:
return "Freezing"
case temp if temp < 20:
return "Cold"
case temp if temp < 30:
return "Warm"
case _:
return "Hot"
# Capture entire collections using asterisk (*)
match numbers:
case [f]:
return f"First: {f}"
case [f, l]:
return f"First: {f}, Last: {l}"
case [f, *m, l]:
return f"First: {f}, Middle: {m}, Last: {l}"
case []:
return "Empty list"
You can also combine match-case with other Python features like walrus operators to create even more powerful patterns.
# Check if a packet is valid or not
packet: list[int] = [0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07]
match packet:
case [c1, c2, *data, footer] if ( # Deconstruct packet into header, data, and footer
(checksum := c1 + c2) == sum(data) and # Check that the checksum is correct
len(data) == footer # Check that the data length is correct
):
print(f"Packet received: {data} (Checksum: {checksum})")
case [c1, c2, *data]: # Failure case where structure is correct but checksum is wrong
print(f"Packet received: {data} (Checksum Failed)")
case [_, *__]: # Failure case where packet is too short
print("Invalid packet length")
case []: # Failure case where packet is empty
print("Empty packet")
case _: # Failure case where packet is invalid
print("Invalid packet")
Slots are a way to potentially speed up the creation and access of any Python class.
TLDR: They define a fixed set of attributes for classes, optimizing and speeding up accesses during runtime.
class Person:
__slots__ = ('name', 'age')
def __init__(self, name, age):
self.name = name
self.age = age
Under the hood, Python classes store instance attributes in an internal dictionary called __dict__, meaning a hash table lookup is required each time you want to access a value. In contrast, __slots__ uses an array-like structure where attributes are looked up at fixed offsets, bringing a minor overall speed bump to Python.
# Without __slots__
class FooBar:
def __init__(self):
self.a = 1
self.b = 2
self.c = 3
f = FooBar()
print(f.__dict__) # {'a': 1, 'b': 2, 'c': 3}
# With __slots__
class FooBar:
__slots__ = ('a', 'b', 'c')
def __init__(self):
self.a = 1
self.b = 2
self.c = 3
f = FooBar()
print(f.__dict__) # AttributeError
print(f.__slots__) # ('a', 'b', 'c')
There is still debate about whether __slots__ is worth using, as it complicates class definitions with very marginal or no performance benefits at all. However, it is a useful tool to have in your arsenal if you ever need it.
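If you’re curious, you can get a rough feel for the difference yourself with timeit; exact numbers vary by interpreter and machine, so treat this as a sketch:
import timeit

class WithDict:
    def __init__(self):
        self.a = 1

class WithSlots:
    __slots__ = ('a',)
    def __init__(self):
        self.a = 1

d, s = WithDict(), WithSlots()
print(timeit.timeit(lambda: d.a))  # attribute access via __dict__
print(timeit.timeit(lambda: s.a))  # access via slots, usually slightly faster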
__slots__ in Python!
__slots__
This is not a Python “feature” or “tip” per se, but instead a handful of quick syntax tips to really clean up your Python codebase, coming from someone who’s seen a lot of Python code.
If you ever need to check if a for loop completes without a break, for-else statements are a great way to accomplish this without using a temporary variable.
# ===== Don't write this =====
found_server = False # Keep track of whether we found a server
for server in servers:
if server.check_availability():
primary_server = server
found_server = True # Set the flag to True
break
if not found_server:
# Use the backup server if no server was found
primary_server = backup_server
# Continue execution with whatever server we found
deploy_application(primary_server)
# ===== Write this instead =====
for server in servers:
if server.check_availability():
primary_server = server
break
else:
# Use the backup server if no server was found
primary_server = backup_server
# Continue execution with whatever server we found
deploy_application(primary_server)
If you need to define and evaluate a variable all in one expression, the Walrus Operator (new in Python 3.8 with PEP 572) is a quick way to accomplish just that. Walrus operators are really useful for using a value right after checking if it is not None!
# ===== Don't write this =====
response = get_user_input()
if response:
print('You pressed:', response)
else:
print('You pressed nothing')
# ===== Write this instead =====
if response := get_user_input():
print('You pressed:', response)
else:
print('You pressed nothing')
Short-circuit evaluation is a shortcut for getting the “next available” or “next truthy” value in a list of expressions. It turns out you can simply chain or operators!
# ===== Don't write this =====
username, full_name, first_name = get_user_info()
if username is not None:
display_name = username
elif full_name is not None:
display_name = full_name
elif first_name is not None:
display_name = first_name
else:
display_name = "Anonymous"
# ===== Write this instead =====
username, full_name, first_name = get_user_info()
display_name = username or full_name or first_name or "Anonymous"
Finally, Python lets you chain comparison operators together to shorten up integer range comparisons, making them more readable than the equivalent boolean expressions.
# ===== Don't write this =====
if 0 < x and x < 10:
print("x is between 0 and 10")
# ===== Write this instead =====
if 0 < x < 10: # Instead of if 0 < x and x < 10
print("x is between 0 and 10")
for/else - Python Tips
Python’s f-strings are no secret by now. Introduced in Python 3.6 with PEP 498, they are a better, cleaner, faster, and safer method of interpolating variables, objects, and expressions into strings.
But did you know there is more to f-strings than just inserting variables? There exists a hidden formatting syntax, the Format Specification Mini-Language, that allows you to have much greater control over string formatting.
print(f"{' [ Run Status ] ':=^50}")
print(f"[{time:%H:%M:%S}] Training Run {run_id=} status: {progress:.1%}")
print(f"Summary: {total_samples:,} samples processed")
print(f"Accuracy: {accuracy:.4f} | Loss: {loss:#.3g}")
print(f"Memory: {memory / 1e9:+.2f} GB")
Output:
=================== [ Run Status ] ===================
[11:16:37] Training Run run_id=42 status: 87.4%
Summary: 12,345,678 samples processed
Accuracy: 0.9876 | Loss: 0.0123
Memory: +2.75 GB
You can do things like enable debug expressions, apply number formatting (similar to str.format), add string padding, format datetime objects, and more! All within f-string format specifiers.
print(f"{name=}, {age=}")
name='Claude', age=3
print(f"Pi: {pi:.2f}")
print(f"Avogadro: {avogadro:.2e}")
print(f"Big Number: {big_num:,}")
print(f"Hex: {num:#0x}")
print(f"Number: {num:09}")
Pi: 3.14
Avogadro: 6.02e+23
Big Number: 1,000,000
Hex: 0x1a4
Number: 000000420
print(f"Left: |{word:<10}|")
print(f"Right: |{word:>10}|")
print(f"Center: |{word:^10}|")
print(f"Center *: |{word:*^10}|")
Left: |Python |
Right: | Python|
Center: | Python |
Center *: |**Python**|
print(f"Date: {now:%Y-%m-%d}")
print(f"Time: {now:%H:%M:%S}")
Date: 2025-03-10
Time: 14:30:59
print(f"Progress: {progress:.1%}")
Progress: 75.0%
You can use the built-in @cache decorator to dramatically speed up recursive functions and expensive calculations! (It superseded @lru_cache in Python 3.9.)
from functools import cache
@cache
def fib(n):
return n if n < 2 else fib(n-1) + fib(n-2)
@lru_cache was introduced back in Python 3.2 as part of the functools module for quick & clean function memoization. Starting with Python 3.9, @cache was added for the same effect with less code. lru_cache still exists if you want explicit control of the cache size.
FIB_CACHE = {}
# With Manual Caching :(
def fib(n):
if n in FIB_CACHE:
return FIB_CACHE[n]
if n <= 2:
return 1
FIB_CACHE[n] = fib(n - 1) + fib(n - 2)
return FIB_CACHE[n]
from functools import lru_cache
# Same code with lru_cache :)
@lru_cache(maxsize=None)
def fib(n):
return n if n < 2 else fib(n-1) + fib(n-2)
from functools import cache
# Same code with new Python 3.9's cache :D
@cache
def fib(n):
return n if n < 2 else fib(n-1) + fib(n-2)
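Both decorators also expose runtime statistics through cache_info(), which is handy for verifying your cache is actually being hit. With a fresh cache, I’d expect counts along these lines:
fib(100)
print(fib.cache_info())
# e.g. CacheInfo(hits=98, misses=101, maxsize=None, currsize=101)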
@functools.cache
@functools.lru_cache
Did you know that Python has native Promise-like concurrency control?
from concurrent.futures import Future
# Manually create a Future Object
future = Future()
# Set its result whenever you want
future.set_result("Hello from the future!")
# Get the result
print(future.result()) # "Hello from the future!"
Python’s concurrent.futures module gives you direct control over async operations, just like JS Promises. For example, they let you attach callbacks that run when the result is ready (just like JS’s .then()).
from concurrent.futures import Future
future = Future()
# Add callbacks BEFORE or AFTER completion!
future.add_done_callback(lambda f: print(f"Got: {f.result()}"))
future.set_result("Async result")
# Prints: "Got: Async result"
future.add_done_callback(lambda f: print(f"After: {f.result()}"))
# Prints: "After: Async result"
Python Futures also come with primitives to handle exceptions, set timeouts, or stop tasks completely.
from concurrent.futures import Future
import time, threading
# Create and manage a future manually
future = Future()
# Background task function
def background_task():
time.sleep(2)
future.set_result("Done!")
thread = threading.Thread(target=background_task)
thread.daemon = True
thread.start()
# Try all control operations
print(f"Cancelled: {future.cancel()}") # Likely False if started
try:
# Wait at most 0.5 seconds
result = future.result(timeout=0.5)
except TimeoutError:
print("Timed out!")
# Create failed future
err_future = Future()
err_future.set_exception(ValueError("Failed"))
print(f"Has error: {bool(err_future.exception())}")
Just like modern JS, the asyncio module has its own Future that works seamlessly with Python’s async/await syntax:
import asyncio
async def main():
future = asyncio.Future()
# Set result after delay
asyncio.create_task(set_after_delay(future))
# Await just like a JS Promise!
result = await future
print(result) # "Worth the wait!"
async def set_after_delay(future):
await asyncio.sleep(1)
future.set_result("Worth the wait!")
asyncio.run(main())
Finally, for I/O-bound tasks (and, via the analogous ProcessPoolExecutor, CPU-bound ones), Python’s ThreadPoolExecutor can automatically create and manage futures for you.
from concurrent.futures import ThreadPoolExecutor
import time
def slow_task():
time.sleep(1)
return "Done!"
with ThreadPoolExecutor() as executor:
# Returns a Future immediately
future = executor.submit(slow_task)
# Do other work while waiting...
print("Working...")
# Get result when needed
print(future.result())
concurrent.futures in Python
concurrent.futures
Did you know you can make class attributes act as BOTH methods AND properties?!? This isn’t a built-in feature of Python, but instead a demonstration of what you can do with clever use of Python’s dunder (magic) methods and descriptors.
(Note that this is very much an example implementation and should not be used in production)
from typing import Callable, Generic, TypeVar, ParamSpec, Self
P = ParamSpec("P")
R = TypeVar("R")
T = TypeVar("T")
class ProxyProperty(Generic[P, R]):
func: Callable[P, R]
instance: object
def __init__(self, func: Callable[P, R]) -> None:
self.func = func
def __get__(self, instance: object, _=None) -> Self:
self.instance = instance
return self
def __call__(self, *args: P.args, **kwargs: P.kwargs) -> R:
return self.func(self.instance, *args, **kwargs)
def __repr__(self) -> str:
return self.func(self.instance)
def proxy_property(func: Callable[P, R]) -> ProxyProperty[P, R]:
return ProxyProperty(func)
class Container:
@proxy_property
def value(self, val: int = 5) -> str:
return f"The value is: {val}"
# Example usage
c = Container()
print(c.value) # Returns: The value is: 5
print(c.value(7)) # Returns: The value is: 7
How does this work under the hood? It comes down to Python’s Descriptor Protocol:
The __get__ method transforms the ProxyProperty object into a descriptor.
When you access c.value, Python calls __get__, which returns self (the descriptor instance).
The __repr__ method handles property access (returning default values).
The __call__ method handles method calls with parameters.
This creates a dual-purpose attribute that can be both read directly AND called like a function!
The benefit of this class is that it allows you to create intuitive APIs where a property might need configuration, or properties that should have sensible defaults but still allow for customization.
If you want to look at a proper production-ready implementation of proxy properties, check out Codegen’s implementation of ProxyProperty here: codegen/src/codegen/sdk/_proxy.py
Finally, introducing one of Python’s most powerful yet mysterious features: Metaclasses
class MyMetaclass(type):
def __new__(cls, name, bases, namespace):
# Magic happens here
return super().__new__(cls, name, bases, namespace)
class MyClass(metaclass=MyMetaclass):
pass
obj = MyClass()
Classes in Python aren’t just blueprints for objects. They’re objects too! And every object needs a class that created it. So what creates class objects? Metaclasses.
By default, Python uses the type metaclass to create all classes. For example, these two are equivalent to each other:
# Create a MyClass object
class MyClass:
...
obj = MyClass()
# Also creates a MyClass object
obj2 = type("MyClass", (), {})
To break down what those arguments mean, here is an example that creates a class with an attribute x and a method say_hi, which also subclasses off object.
# type(
# name,
# bases,
# attributes
# )
CustomClass = type(
'CustomClass',
(object,),
{'x': 5, 'say_hi': lambda self: 'Hello!'}
)
obj = CustomClass()
print(obj.x) # 5
print(obj.say_hi()) # Hello!
In essence, Metaclasses let you customize and modify these arguments during class creation. For example, here is a metaclass that doubles every integer attribute for a class:
class DoubleAttrMeta(type):
def __new__(cls, name, bases, namespace):
new_namespace = {}
for key, val in namespace.items():
if isinstance(val, int):
val *= 2
new_namespace[key] = val
return super().__new__(cls, name, bases, new_namespace)
class MyClass(metaclass=DoubleAttrMeta):
x = 5
y = 10
print(MyClass.x) # 10
print(MyClass.y) # 20
Here is another example of a metaclass that registers every class created into a registry.
# ===== Metaclass Solution =====
class RegisterMeta(type):
registry = []
def __new__(mcs, name, bases, attrs):
cls = super().__new__(mcs, name, bases, attrs)
mcs.registry.append(cls)
return cls
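For completeness, here’s how the metaclass version might be used; the class names are arbitrary:
class ServiceA(metaclass=RegisterMeta):
    pass

class ServiceB(metaclass=RegisterMeta):
    pass

print(RegisterMeta.registry)  # e.g. [<class 'ServiceA'>, <class 'ServiceB'>]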
The problem is, decorators could achieve this same goal without the use of black magic (and it’s often cleaner too).
# ===== Decorator Solution =====
registry = []

def register(cls):
    registry.append(cls)
    return cls
@register
class MyClass:
pass
And that kind of brings to light the biggest problem with metaclasses:
Almost 100% of the time, you will never need to touch them.
In your day-to-day development, 99% of your code won’t ever hit a use case where metaclasses could be useful. And of that 1%, 95% of those cases could just be solved with regular decorators, dunder methods, or just plain inheritance.
That’s why there is that one famous Python quote that goes:
Metaclasses are deeper magic than 99% of users should ever worry about. If you wonder whether you need them, you don’t. - Tim Peters
But if you are that 1% which has a unique enough problem that only metaclasses can solve, they are a powerful tool that lets you tinker with the internals of the Python object system.
As for some real-world examples of metaclasses:
- Python’s “ABC” implementation uses metaclasses to implement abstract classes.
- Python’s “Enum” implementation uses it to create enumeration types.
- A bunch of 3rd party libraries like Django, SQLAlchemy, Pydantic, and Pytest use metaclasses for a variety of purposes.
And that’s it, folks! Those were 14 of the most interesting and underrated Python features that I’ve encountered in my Python career.
If you’ve made it this far, shoot me a quick message as to which ones you’ve seen before and which ones you haven’t! I’d love to hear from you.
Happy Python-ing, y’all 🐍!
🇫🇷 This post is also available in French
The View Transition API allows native animation of page state changes, without depending on third-party libraries. Recently, Next.js has experimentally integrated this feature.
Although this approach is still in the testing phase and minimally documented, it paves the way for simplifying animations that were once considered complex.
Before discussing its use in Next.js, I will briefly introduce the API in its native version for those who may not yet be familiar with it. If you are already familiar with the API, you can directly skip to this section.
The View Transition API is a native browser feature that allows animating DOM state or navigation changes without resorting to a third-party library like Framer Motion or GSAP. It significantly simplifies the integration of transitions/visual animations when updating a page's content.
This feature is not supported by all browsers. Currently, it’s only supported by Chromium-based browsers and Safari.
The core of this API lies in the startViewTransition method.
document.startViewTransition(() => updateTheDOMSomehow())
Calling this method, passing it a callback function that updates the DOM, triggers a view transition cycle.
What does this mean? Essentially, the API first captures the current state of the page. Once that capture is complete, your DOM update function is called, and the API then captures the state of the page again, after your DOM mutation.
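In practice, a minimal call might look like this, with a feature-detection fallback for unsupported browsers (the element and class names are illustrative):
const box = document.querySelector('.box')

function toggleBox() {
  // Browsers without the API: update the DOM without animating
  if (!document.startViewTransition) {
    box.classList.toggle('expanded')
    return
  }
  // Otherwise, let the browser animate between the two captures
  document.startViewTransition(() => {
    box.classList.toggle('expanded')
  })
}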
The API then builds a tree that looks like this:
::view-transition
└─ ::view-transition-group(root)
└─ ::view-transition-image-pair(root)
├─ ::view-transition-old(root)
└─ ::view-transition-new(root)
As the name suggests, ::view-transition-old represents the capture of the old view and, as you may have guessed, ::view-transition-new represents the current capture of the view.
The old view is animated in fade out, while the new one appears in fade in (via the CSS property opacity). This is the default behaviour, but it can of course be customised (this is the whole point).
To customise view transitions, we will use the ::view-transition... pseudo-selectors in CSS. So, if I want to make a slightly more complex transition (than a fade in/out), I can target my view transition and apply a CSS animation of my choice.
For example, here I apply my animations pop-out and pop-in (defined earlier in my style sheet) to the old and new views respectively.
::view-transition-old(root) {
animation: pop-out 0.3s ease;
}
::view-transition-new(root) {
animation: pop-in 0.3s ease;
}
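For reference, the pop-in and pop-out animations used above could be defined with keyframes like these (one plausible implementation; tweak to taste):
@keyframes pop-in {
  from { opacity: 0; transform: scale(0.95); }
  to { opacity: 1; transform: scale(1); }
}
@keyframes pop-out {
  from { opacity: 1; transform: scale(1); }
  to { opacity: 0; transform: scale(0.95); }
}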
In the previous example, we used a view transition on the whole page (root), but it is possible to target a specific element. To do this, we must first assign it a view transition name (view-transition-name).
.box {
view-transition-name: box;
}
We can then target the view transition associated with this element with the corresponding pseudo elements:
::view-transition-old(box) {
animation: skew-out 0.3s ease;
}
::view-transition-new(box) {
animation: skew-in 0.3s ease;
}
You must assign a unique view-transition-name to each element for the capture to work.
If you want to animate several elements in the same way, you can use the view-transition-class: myClass property. In this case, you will need to prefix your class name with a period to select it with the pseudo-element (e.g., ::view-transition-old(.myClass) or ::view-transition-new(.myClass)).
View transitions fall into two categories: transitions within the same document, and multi-document transitions (page change transitions).
Both categories rely on the same principles, with the difference that for a multi-document transition, there is no need to call the startViewTransition method to start the transition. The navigation between the documents is what triggers it.
We are now generally up to date on the state of the View Transition API (as of the writing of this article).
If you wish to delve deeper into the API, I recommend the MDN documentation (although a little light), the W3C specification, and the many articles from Chrome For Developers on the subject.
Recently, a feature was introduced to facilitate the integration of the View Transition API in Next.js projects. But beware, we are stepping into very little documented territory here.
This feature, introduced in a release a few months ago, is still marked as experimental. At the time of writing, the official documentation is quite minimal on the topic (here is the link if you want to take a look) and resources remain scarce.
It was precisely this lack of documentation that led me to dig a little bit deeper.
Digging around, I stumbled upon a demo shared by Delba Oliveira, a developer at Vercel. This made me want to delve deeper into the subject and experiment directly with this new API in a small test project on my end. Here's what I gleaned from it.
Spoiler: it's already promising.
To use this feature in Next.js, you need to enable the experimental viewTransition flag and use a version ≥ v15.2.0.
next.config.js
module.exports = {
// ...
experimental: {
viewTransition: true,
},
}
unstable_viewTransition
Currently, React exposes an unstable_viewTransition component (yes, the name sets the tone). It takes several properties:
name: equivalent to the view-transition-name in CSS.
className: to assign a view-transition-class to the element.
enter / exit: to add a CSS class for the entrance or exit animation of the element (on mounting or unmounting the component).
In practical terms, here's what it might look like in the code:
import { unstable_viewTransition as ViewTransition } from 'react'
export default function MyComponent() {
return (
<ViewTransition name="box">
<div className="box">Hello</div>
</ViewTransition>
)
}
Even though the article here talks about integration in Next, it is crucial to note that this is fundamentally a feature of React introduced by this pull request.
This is why we import the unstable_viewTransition component from React.
And now... a little homegrown demo to show what it's like in action in a mini NextJS blog 👇
In this demo, we use the multi-document view transition, so it's the navigation that triggers the transitions.
I gave the same name to the <ViewTransition> component used on the /blog page and the /blog/post/[slug] page.
So, when navigating between the two pages, the transition is applied to animate the page change between these two elements.
Apart from a little blur effect on the images, this demo is purely accomplished with the React API, in default mode, as presented above, without any additional configuration. Therefore, it's relatively simple to implement.
And here is the source code used for this demo.
The experimental integration of the View Transition API within Next.js opens up interesting prospects for developing smoother interfaces. In the future, we could imagine configurations of predefined transitions between pages, similar to NuxtJS.
However, as I have mentioned several times throughout this article, this is a feature that is not yet entirely production-ready. The API may change and will likely continue to evolve.
As a bonus, I've gathered some examples of sites that use the View Transition API creatively so you can see the possibilities it opens up.
Last year, our team spent a lot of time interviewing fellow Platform, DevOps, DevEx, CI/CD, and SRE engineers, as well as engineering leaders, in order to better understand their day-to-day challenges. We began this effort to see how Earthfiles, one of our products, could serve engineering teams at scale. But as we spoke to more and more people, we realized that platform engineering as an industry is on a collision course with something far more painful and visceral than just build speed.
It was the summer of 2024, and we needed to turn open-source success into commercial success for Earthfiles. Eleven thousand GitHub stars. How hard could it be to monetize that traction?
To make venture-scale money, we needed to go up-market and figure out what makes mid-size companies and enterprises tick and how our product might be able to help them. So we started interviewing platform and engineering leaders across the industry. DocuSign, Affirm, Roblox, Palo Alto Networks, Twilio, LinkedIn, Box, Morgan Stanley, BNY, and many more. We spoke to over 100 of these.
We started with a simple question:
What are the top issues that you’re struggling with in your day-to-day work?
We were hoping to get validation that developer productivity is a top concern, and then to narrow down how CI/CD speed could unlock developer efficiency. But of all these interviews, only one mentioned build speed as a top issue, and it was largely biased by a recent production incident where they couldn’t get the fix out quickly enough due to a slow end-to-end CI/CD process.
In fact, the top issues they typically mentioned had nothing to do with productivity - not on the surface, anyway. They had more to do with managing the engineering chaos that becomes inevitable at scale. You see, in the era of containers and microservices, there has been a general trend for companies to give more and more freedom to individual dev teams. It’s a container - if it slots in nicely in production, there’s less of a concern about what’s inside the container. 1
The increased freedom results in an explosion of diversity at the dev infrastructure layer. Within any given company, you’ll find a mix of programming languages, CI technologies, build scripts, packaging constructs, in-house scripts, adapters - you name it. Every team’s setup is a unique snowflake. Even within the same programming language ecosystem, different teams will set up their dev process completely differently. Completely different build, test, packaging logic. Completely different runtime versions. Completely different eng culture. So on and so forth. This craziness is now the norm. 2
This core problem of tech stack diversity is what we heard more commonly in our interviews. The interesting aspect of it is that different companies explained it differently to us, and different personas in the organizations were impacted by different consequences.
Platform teams complained about the constant firefighting required to support every app team’s unique needs. App teams, on the other hand, are focused on shipping features quickly; they complained about having to reinvent the wheel over and over again, about being slowed down by rigid deployment blockers, and about receiving production-readiness requirements very late in the process. Security teams complained about not having any visibility into the chaos. Engineering leadership complained about not being able to enforce high-quality engineering standards or to understand the level of maturity of each app.
Everybody complained in their own way about the same fundamental core issue: extreme tech diversity is impossible to govern efficiently.
The other thing we heard loud and clear is that going back to the pre-microservice era of more standardized tech stacks isn’t a solution. Freedom is useful and necessary. It enables innovation. And hey, for many orgs, even if they suddenly thought that freedom is a bad thing, it would be impossible to go back and rewrite all the existing functionality to make the tech stack consistent. It’s just too much work.
We took it all in.
The industry is seemingly facing a catch-22. You can’t have strong innovation without freedom. You can’t have high-quality engineering and security without standardization.
We became obsessed with this problem: How do you preserve freedom, but still enforce the right standards at scale?
After speaking to over 100 engineering leaders, we identified a handful of common strategies for dealing with engineering chaos. Each has its strengths but also major weaknesses.
| Approach | Issues |
|---|---|
| 1. Common CI/CD Templates – Centralizing workflows via reusable templates works well for companies that adopted them early. But in mature orgs, adoption is rarely 100%, and maintaining consistency is a losing battle. | Rigid, difficult to retrofit, and often resented by app teams. |
| 2. Manual Checklists – Reviews per PR or before launches. Cheap and flexible, but prone to human error and rubber-stamping. | Inefficient, inconsistent, and lacks ongoing visibility. |
| 3. Scorecards (IDPs) – Great for accountability and high-level visibility. But they’re shallow, with limited CI/CD support and no shift-left feedback. | Issues are discovered too late, and enforcement is manual and inconsistent. |
| 4. Individual Vendor Tools – Best for depth in specific areas like code scanning, testing, or licensing. But without unification, coverage remains inconsistent and fragmented. | Too many dashboards, poor integration, no centralized control plane. |
| 5. DIY Solutions – Custom internal systems provide deep insights but are costly and hard to maintain. | Scalability issues, limited shift-left feedback, and incomplete enforcement. |
| 6. Doing Nothing – Policies without enforcement. It’s compliance theater: the intention is there, the tools exist, but there’s no way to track or govern what’s actually happening across teams. | Inconsistent enforcement, lack of visibility, massive risk. |
Each approach tackles part of the problem, but none solves it entirely.
The more we listened, the more we realized our mission had to grow beyond what we first imagined.
We started Earthly with the goal of helping teams tame CI/CD complexity in today’s world of diverse tech stacks. One way to do that is to empower the teams managing CI/CD (both platform and app teams) to be more effective in how they develop and run CI scripts. Consistent and fast CI scripts mean that collaboration barriers between these diverse ecosystems are greatly reduced, and engineering teams as a whole are more productive. Certainly, that is the mission of Earthfiles.
But another way to look at it is to step back and address the bigger problem. Enterprises are struggling to tame not just CI/CD complexity but SDLC complexity as a whole. It’s riddled with the same diversity, and it’s entangled with the difficulty of managing people at scale: giving every team the freedom to innovate with the right tools for the job, while keeping them safe within guardrails that don’t slow them down.
After over a hundred interviews, one insight became impossible to ignore: a significant chunk of production incidents originate from issues that could have been caught earlier in the software development lifecycle. And yet, while we’ve built a whole industry around monitoring and securing production systems, we treat everything before production like the Wild West.
This is why today we’re announcing Earthly Lunar.
Lunar is a platform for monitoring engineering practices at scale. It’s like production monitoring, except it targets everything that happens before production. It gives Platform, DevEx, Security, QA, and Compliance teams real-time visibility into how applications are being developed, together with the power to gradually enforce specific practices — across every project, in every PR and in every deployment.
Lunar works by instrumenting your existing CI/CD pipelines (no YAML changes needed) and source code repositories to collect structured metadata about how code is built, tested, scanned, and deployed. This metadata is then continuously evaluated against policies that you define—policies that are flexible, testable, and expressive enough to reflect your real-world engineering standards.
Want to block deployments that would violate compliance rules, like using unapproved licenses or bypassing required security scans? Or fail a PR if it introduces stale dependencies or vulnerable CI plugins? Or ensure that security-sensitive services are collecting SBOMs, running code scans, and deploying frequently enough to avoid operational drift? Lunar makes all of that possible—without requiring a wholesale rewrite of every team’s CI pipeline, and without sacrificing developer velocity.
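To make the policy idea concrete, here is a minimal conceptual sketch of evaluating collected metadata against a rule. To be clear, this is not Lunar's actual policy syntax; every name and shape below is hypothetical and only illustrates the model of structured metadata in, pass/fail verdicts out.
// Hypothetical policy shape: NOT Lunar's real API, purely illustrative.
const requireSbom = {
  name: 'require-sbom-for-security-sensitive-services',
  appliesTo: (project) => project.tags.includes('security-sensitive'),
  evaluate: (metadata) => ({
    pass: metadata.artifacts.some((artifact) => artifact.type === 'sbom'),
    message: 'Security-sensitive services must publish an SBOM on every build.',
  }),
}

// Conceptually, each PR's or deployment's metadata is checked against every
// applicable policy, and any failures can block the change or raise a flag.
function violations(project, metadata, policies) {
  return policies
    .filter((policy) => policy.appliesTo(project))
    .map((policy) => policy.evaluate(metadata))
    .filter((result) => !result.pass)
}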
And crucially, Lunar is designed to work with the messy reality of modern engineering. It’s not a one-size-fits-all template, and it doesn’t require rewriting every CI pipeline. Its instrumentation is flexible and centralized—meaning platform teams stay in control, app teams stay autonomous, and standards actually get enforced.
Engineering at scale is messy. You’ve got hundreds of services, dozens of teams, and a sprawling ecosystem of tools—each doing one part of the job. But stitching that all together into a coherent, reliable, and compliant software delivery process? That’s the hard part. And that’s what Earthly Lunar is here to solve.
If this sounds like a problem you’re facing, we’d love to show you how Lunar works in practice.