Today we’re excited to announce our beta release of TypeScript 5.0!
This release brings many new features, while aiming to make TypeScript smaller, simpler, and faster. We’ve implemented the new decorators standard, added functionality to better support ESM projects in Node and bundlers, given library authors new ways to control generic inference, expanded our JSDoc functionality, simplified configuration, and made many other improvements.
While the 5.0 release includes correctness changes and deprecations for less-used flags, we believe most users will have a similar upgrade experience as in previous releases.
To get started using the beta, you can get it through NuGet, or use npm with the following command:
npm install typescript@beta
Here’s a quick list of what’s new in TypeScript 5.0!
- Decorators
- const Type Parameters
- Supporting Multiple Configuration Files in extends
- All enums Are Union enums
- --moduleResolution bundler
- --verbatimModuleSyntax
- export type *
- @satisfies Support in JSDoc
- @overload Support in JSDoc
- Passing Emit-Specific Flags Under --build
- Exhaustive switch/case Completions

Decorators

Decorators are an upcoming ECMAScript feature that allows us to customize classes and their members in a reusable way.
Let’s consider the following code:
class Person {
name: string;
constructor(name: string) {
this.name = name;
}
greet() {
console.log(`Hello, my name is ${this.name}.`);
}
}
const p = new Person("Ray");
p.greet();
greet is pretty simple here, but let’s imagine it’s something way more complicated – maybe it does some async logic, it’s recursive, it has side effects, etc. Regardless of what kind of ball-of-mud you’re imagining, let’s say you throw in some console.log calls to help debug greet.
class Person {
name: string;
constructor(name: string) {
this.name = name;
}
greet() {
console.log("LOG: Entering method.");
console.log(`Hello, my name is ${this.name}.`);
console.log("LOG: Exiting method.")
}
}
This pattern is fairly common. It sure would be nice if there was a way we could do this for every method!
This is where decorators come in.
We can write a function called loggedMethod that looks like the following:
function loggedMethod(originalMethod: any, _context: any) {
function replacementMethod(this: any, ...args: any[]) {
console.log("LOG: Entering method.")
const result = originalMethod.call(this, ...args);
console.log("LOG: Exiting method.")
return result;
}
return replacementMethod;
}
"What’s the deal with all of these any
s?
What is this, any
Script!?"
Just be patient – we’re keeping things simple for now so that we can focus on what this function is doing.
Notice that loggedMethod takes the original method (originalMethod) and returns a function that

1. logs an "Entering…" message
2. passes along this and all of its arguments to the original method
3. logs an "Exiting…" message, and
4. returns whatever the original method returned.

Now we can use loggedMethod to decorate the method greet:
class Person {
name: string;
constructor(name: string) {
this.name = name;
}
@loggedMethod
greet() {
console.log(`Hello, my name is ${this.name}.`);
}
}
const p = new Person("Ray");
p.greet();
// Output:
//
// LOG: Entering method.
// Hello, my name is Ray.
// LOG: Exiting method.
We just used loggedMethod as a decorator above greet – and notice that we wrote it as @loggedMethod. When we did that, it got called with the method target and a context object. Because loggedMethod returned a new function, that function replaced the original definition of greet.
We didn’t mention it yet, but loggedMethod was defined with a second parameter. It’s called a "context object", and it has some useful information about how the decorated method was declared – like whether it was a #private member, or static, or what the name of the method was. Let’s rewrite loggedMethod to take advantage of that and print out the name of the method that was decorated.
function loggedMethod(originalMethod: any, context: ClassMethodDecoratorContext) {
const methodName = String(context.name);
function replacementMethod(this: any, ...args: any[]) {
console.log(`LOG: Entering method '${methodName}'.`)
const result = originalMethod.call(this, ...args);
console.log(`LOG: Exiting method '${methodName}'.`)
return result;
}
return replacementMethod;
}
We’re now using the context parameter – and note that it’s the first thing in loggedMethod with a type stricter than any and any[]. TypeScript provides a type called ClassMethodDecoratorContext that models the context object that method decorators take.
Apart from metadata, the context object for methods also has a useful function called addInitializer. It’s a way to hook into the beginning of the constructor (or the initialization of the class itself if we’re working with statics). As an example – in JavaScript, it’s common to write something like the following pattern:
class Person {
name: string;
constructor(name: string) {
this.name = name;
this.greet = this.greet.bind(this);
}
greet() {
console.log(`Hello, my name is ${this.name}.`);
}
}
Alternatively, greet might be declared as a property initialized to an arrow function.
class Person {
name: string;
constructor(name: string) {
this.name = name;
}
greet = () => {
console.log(`Hello, my name is ${this.name}.`);
};
}
This code is written to ensure that this isn’t re-bound if greet is called as a stand-alone function or passed as a callback.
const greet = new Person("Ray").greet;
// We don't want this to fail!
greet();
We can write a decorator that uses addInitializer to call bind in the constructor for us.
function bound(originalMethod: any, context: ClassMethodDecoratorContext) {
const methodName = context.name;
if (context.private) {
throw new Error(`'bound' cannot decorate private properties like ${methodName as string}.`);
}
context.addInitializer(function () {
this[methodName] = this[methodName].bind(this);
});
}
bound isn’t returning anything – so when it decorates a method, it leaves the original alone. Instead, it will add logic before any other fields are initialized.
class Person {
name: string;
constructor(name: string) {
this.name = name;
}
@bound
@loggedMethod
greet() {
console.log(`Hello, my name is ${this.name}.`);
}
}
const p = new Person("Ray");
const greet = p.greet;
// Works!
greet();
Notice that we stacked two decorators – @bound and @loggedMethod. These decorations run in "reverse order". That is, @loggedMethod decorates the original method greet, and @bound decorates the result of @loggedMethod. In this example, it doesn’t matter – but it could if your decorators have side-effects or expect a certain order. Also worth noting – if you’d prefer stylistically, you can put these decorators on the same line.
@bound @loggedMethod greet() {
console.log(`Hello, my name is ${this.name}.`);
}
Something that might not be obvious is that we can even make functions that return decorator functions.
That makes it possible to customize the final decorator just a little.
If we wanted, we could have made loggedMethod return a decorator and customize how it logs its messages.
function loggedMethod(headMessage = "LOG:") {
return function actualDecorator(originalMethod: any, context: ClassMethodDecoratorContext) {
const methodName = String(context.name);
function replacementMethod(this: any, ...args: any[]) {
console.log(`${headMessage} Entering method '${methodName}'.`)
const result = originalMethod.call(this, ...args);
console.log(`${headMessage} Exiting method '${methodName}'.`)
return result;
}
return replacementMethod;
}
}
If we did that, we’d have to call loggedMethod before using it as a decorator.
We could then pass in any string as the prefix for messages that get logged to the console.
class Person {
name: string;
constructor(name: string) {
this.name = name;
}
@loggedMethod("⚠")
greet() {
console.log(`Hello, my name is ${this.name}.`);
}
}
const p = new Person("Ray");
p.greet();
// Output:
//
// ⚠ Entering method 'greet'.
// Hello, my name is Ray.
// ⚠ Exiting method 'greet'.
Decorators can be used on more than just methods! They can be used on properties/fields, getters, setters, and auto-accessors. Even classes themselves can be decorated for things like subclassing and registration.
To learn more about decorators in-depth, you can read up on Axel Rauschmayer’s extensive summary.
For more information about the changes involved, you can view the original pull request.
If you’ve been using TypeScript for a while, you might be aware that it’s had support for "experimental" decorators for years. While these experimental decorators have been incredibly useful, they modeled a much older version of the decorators proposal and always required an opt-in compiler flag called --experimentalDecorators. Any attempt to use decorators in TypeScript without this flag used to prompt an error message.
--experimentalDecorators will continue to exist for the foreseeable future; however, without the flag, decorators will now be valid syntax for all new code. Outside of --experimentalDecorators, they will be type-checked and emitted differently. The type-checking rules and emit are sufficiently different that while decorators can be written to support both the old and new behavior, any existing decorator functions are unlikely to do so.
This new decorators proposal is not compatible with --emitDecoratorMetadata, and it does not allow decorating parameters.
Future ECMAScript proposals may be able to help bridge that gap.
On a final note: at the moment, the proposal for decorators requires that a class decorator comes after the export keyword if it’s present.
export @register class Foo {
// ...
}
export
@Component({
// ...
})
class Bar {
// ...
}
TypeScript will enforce this restriction within JavaScript files, but will not do so for TypeScript files. Part of this is motivated by existing users – we hope to provide a slightly easier migration path between our original "experimental" decorators and standardized decorators. Furthermore, we’ve heard the preference for the original style from many users, and we hope we can discuss the issue in good faith in future standards discussions.
The loggedMethod and bound decorator examples above are intentionally simple and omit lots of details about types. Typing decorators can be fairly complex. For example, a well-typed version of loggedMethod from above might look something like this:
function loggedMethod<This, Args extends any[], Return>(
target: (this: This, ...args: Args) => Return,
context: ClassMethodDecoratorContext<This, (this: This, ...args: Args) => Return>
) {
const methodName = String(context.name);
function replacementMethod(this: This, ...args: Args): Return {
console.log(`LOG: Entering method '${methodName}'.`)
const result = target.call(this, ...args);
console.log(`LOG: Exiting method '${methodName}'.`)
return result;
}
return replacementMethod;
}
We had to separately model out the type of this, the parameters, and the return type of the original method, using the type parameters This, Args, and Return.
Exactly how complex your decorator functions need to be depends on what you want to guarantee. Just keep in mind that your decorators will be used more often than they’re written, so a well-typed version will usually be preferable – but there’s clearly a trade-off with readability, so try to keep things simple.
More documentation on writing decorators will be available in the future – but this post should have a good amount of detail for the mechanics of decorators.
const Type Parameters

When inferring the type of an object, TypeScript will usually choose a type that’s meant to be general. For example, in this case, the inferred type of names is string[]:
type HasNames = { readonly names: string[] };
function getNamesExactly<T extends HasNames>(arg: T): T["names"] {
return arg.names;
}
// Inferred type: string[]
const names = getNamesExactly({ names: ["Alice", "Bob", "Eve"]});
Usually the intent of this is to enable mutation down the line.
However, depending on what exactly getNamesExactly does and how it’s intended to be used, it can often be the case that a more-specific type is desired. Up until now, API authors have typically had to recommend adding as const in certain places to achieve the desired inference:
// The type we wanted:
// readonly ["Alice", "Bob", "Eve"]
// The type we got:
// string[]
const names1 = getNamesExactly({ names: ["Alice", "Bob", "Eve"]});
// Correctly gets what we wanted:
// readonly ["Alice", "Bob", "Eve"]
const names2 = getNamesExactly({ names: ["Alice", "Bob", "Eve"]} as const);
This can be cumbersome and easy to forget.
In TypeScript 5.0, you can now add a const modifier to a type parameter declaration to cause const-like inference to be the default:
type HasNames = { names: readonly string[] };
function getNamesExactly<const T extends HasNames>(arg: T): T["names"] {
// ^^^^^
return arg.names;
}
// Inferred type: readonly ["Alice", "Bob", "Eve"]
// Note: Didn't need to write 'as const' here
const names = getNamesExactly({ names: ["Alice", "Bob", "Eve"] });
Note that the const modifier doesn’t reject mutable values, and doesn’t require immutable constraints. Using a mutable type constraint might give surprising results. For example:
declare function fnBad<const T extends string[]>(args: T): void;
// 'T' is still 'string[]' since 'readonly ["a", "b", "c"]' is not assignable to 'string[]'
fnBad(["a", "b" ,"c"]);
Here, the inferred candidate for T is readonly ["a", "b", "c"], and a readonly array can’t be used where a mutable one is needed. In this case, inference falls back to the constraint, the array is treated as string[], and the call still proceeds successfully. A better definition of this function should use readonly string[]:
declare function fnGood<const T extends readonly string[]>(args: T): void;
// T is readonly ["a", "b", "c"]
fnGood(["a", "b" ,"c"]);
Similarly, keep in mind that the const modifier only affects inference of object, array, and primitive expressions that were written within the call, so arguments which wouldn’t (or couldn’t) be modified with as const won’t see any change in behavior:
declare function fnGood<const T extends readonly string[]>(args: T): void;
const arr = ["a", "b" ,"c"];
// 'T' is still 'string[]'-- the 'const' modifier has no effect here
fnGood(arr);
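If you already have a value in hand and want the more specific type, writing as const at the declaration site still works the way it always has – a minimal sketch:

declare function fnGood<const T extends readonly string[]>(args: T): void;
const arr = ["a", "b", "c"] as const;
// 'T' is 'readonly ["a", "b", "c"]'
fnGood(arr);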
See the pull request and the first and second motivating issues for more details.
Supporting Multiple Configuration Files in extends

When managing multiple projects, it can be helpful to have a "base" configuration file that other tsconfig.json files can extend from. That’s why TypeScript supports an extends field for copying over fields from compilerOptions.
// packages/front-end/src/tsconfig.json
{
    "extends": "../../../tsconfig.base.json",
    "compilerOptions": {
        "outDir": "../lib",
        // ...
    }
}
However, there are scenarios where you might want to extend from multiple configuration files. For example, imagine using a TypeScript base configuration file shipped to npm. If you want all your projects to also use the options from the @tsconfig/strictest package on npm, then there’s a simple solution: have tsconfig.base.json extend from @tsconfig/strictest:
// tsconfig.base.json
{
    "extends": "@tsconfig/strictest/tsconfig.json",
    "compilerOptions": {
        // ...
    }
}
This works to a point. If you have any projects that don’t want to use @tsconfig/strictest, they have to either manually disable the options, or create a separate version of tsconfig.base.json that doesn’t extend from @tsconfig/strictest. To give some more flexibility here, TypeScript 5.0 now allows the extends field to take multiple entries. For example, in this configuration file:
{
    "extends": ["a", "b", "c"]
}
Writing this is kind of like extending c directly, where c extends b, and b extends a. If any fields "conflict", the latter entry wins. So in the following example, both strictNullChecks and noImplicitAny are enabled in the final tsconfig.json.
// tsconfig1.json
{
"compilerOptions": {
"strictNullChecks": true
}
}
// tsconfig2.json
{
"compilerOptions": {
"noImplicitAny": true
}
}
// tsconfig.json
{
    "extends": ["./tsconfig1.json", "./tsconfig2.json"],
    "files": ["./index.ts"]
}
As another example, we can rewrite our original example in the following way.
// packages/front-end/src/tsconfig.json
{
    "extends": ["@tsconfig/strictest/tsconfig.json", "../../../tsconfig.base.json"],
    "compilerOptions": {
        "outDir": "../lib",
        // ...
    }
}
For more details, see the original pull request.
All enums Are Union enums

When TypeScript originally introduced enums, they were nothing more than a set of numeric constants with the same type.
enum E {
Foo = 10,
Bar = 20,
}
The only thing special about E.Foo and E.Bar was that they were assignable to anything expecting the type E. Other than that, they were pretty much just numbers.
function takeValue(e: E) {}
takeValue(E.Foo); // works
takeValue(123); // error!
It wasn’t until TypeScript 2.0 introduced enum literal types that enums got a bit more special. Enum literal types gave each enum member its own type, and turned the enum itself into a union of each member type. They also allowed us to refer to only a subset of the types of an enum, and to narrow away those types.
// Color is like a union of Red | Orange | Yellow | Green | Blue | Violet
enum Color {
Red, Orange, Yellow, Green, Blue, /* Indigo */ Violet
}
// Each enum member has its own type that we can refer to!
type PrimaryColor = Color.Red | Color.Green | Color.Blue;
function isPrimaryColor(c: Color): c is PrimaryColor {
// Narrowing literal types can catch bugs.
// TypeScript will error here because
// we'll end up comparing 'Color.Red' to 'Color.Green'.
// We meant to use ||, but accidentally wrote &&.
return c === Color.Red && c === Color.Green && c === Color.Blue;
}
One issue with giving each enum member its own type was that those types were in some part associated with the actual value of the member. In some cases it’s not possible to compute that value – for instance, an enum member could be initialized by a function call.
enum E {
Blah = Math.random()
}
Whenever TypeScript ran into these issues, it would quietly back out and use the old enum strategy. That meant giving up all the advantages of unions and literal types.
TypeScript 5.0 manages to make all enums into union enums by creating a unique type for each computed member. That means that all enums can now be narrowed and have their members referenced as types as well.
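As a rough sketch of what this unlocks (reusing the E from above), even a computed member now has its own type that can be referenced and narrowed to:

enum E {
    Blah = Math.random()
}
// In 5.0, 'E.Blah' can be used as a type, even though its value is computed.
function isBlah(value: E): value is E.Blah {
    return value === E.Blah;
}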
For more details on this change, you can read the specifics on GitHub.
--moduleResolution bundler

TypeScript 4.7 introduced the node16 and nodenext options for its --module and --moduleResolution settings. The intent of these options was to better model the precise lookup rules for ECMAScript modules in Node.js; however, this mode has many restrictions that other tools don’t really enforce. For example, in an ECMAScript module in Node.js, any relative import needs to include a file extension.
// entry.mjs
import * as utils from "./utils"; // ❌ wrong - we need to include the file extension.
import * as utils from "./utils.mjs"; // ✅ works
There are certain reasons for this in Node.js and the browser – it makes file lookups faster and works better for naive file servers. But for many developers using tools like bundlers, the node16/nodenext settings were cumbersome because bundlers don’t have most of these restrictions. In some ways, the node resolution mode was better for anyone using a bundler. But in some ways, the original node resolution mode was already out of date, since most modern bundlers use a fusion of the ECMAScript module and CommonJS lookup rules in Node.js. For example, extensionless imports work just fine, just like in CommonJS, but when looking through the export conditions of a package, bundlers will prefer an import condition just like in an ECMAScript file. To model how bundlers work, TypeScript now introduces a new strategy: --moduleResolution bundler.
{
"compilerOptions": {
"target": "esnext",
"moduleResolution": "bundler"
}
}
If you are using a modern bundler like Vite, esbuild, swc, Webpack, or Parcel, or another tool that implements a hybrid lookup strategy, the new bundler option should be a good fit for you. To read more on --moduleResolution bundler, take a look at the implementing pull request.
Resolution Customization Flags

JavaScript tooling may now model "hybrid" resolution rules, like in the bundler mode we described above. Because tools may differ slightly in their support, TypeScript 5.0 provides ways to enable or disable a few features that may or may not work with your configuration.
allowImportingTsExtensions

--allowImportingTsExtensions allows TypeScript files to import each other with a TypeScript-specific extension like .ts, .mts, or .tsx. This flag is only allowed when --noEmit or --emitDeclarationOnly is enabled, since these import paths would not be resolvable at runtime in JavaScript output files. The expectation here is that your resolver (e.g. your bundler, a runtime, or some other tool) is going to make these imports between .ts files work.
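For example, a minimal setup might look like the following (the file names here are illustrative):

// tsconfig.json
{
    "compilerOptions": {
        "module": "esnext",
        "moduleResolution": "bundler",
        "noEmit": true,
        "allowImportingTsExtensions": true
    }
}
// main.ts
import { add } from "./math.ts"; // the .ts extension is now allowed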
resolvePackageJsonExports

--resolvePackageJsonExports forces TypeScript to consult the exports field of package.json files if it ever reads from a package in node_modules. This option defaults to true under the node16, nodenext, and bundler options for --moduleResolution.
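For instance, given a hypothetical dependency some-lib whose package.json declares an exports field, TypeScript will resolve imports of "some-lib" through those conditions rather than only through main:

// node_modules/some-lib/package.json
{
    "name": "some-lib",
    "main": "./dist/index.js",
    "exports": {
        ".": {
            "types": "./dist/index.d.ts",
            "import": "./dist/index.mjs",
            "require": "./dist/index.cjs"
        }
    }
}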
resolvePackageJsonImports

--resolvePackageJsonImports forces TypeScript to consult the imports field of package.json files when performing a lookup that starts with # from a file whose ancestor directory contains a package.json. This option defaults to true under the node16, nodenext, and bundler options for --moduleResolution.
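As an illustration, # lookups are declared in your own package.json; the #utils alias below is hypothetical:

// package.json
{
    "imports": {
        "#utils": "./src/utils/index.js"
    }
}
// src/app.ts
import { helper } from "#utils"; // resolved through the 'imports' field of the nearest package.json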
allowArbitraryExtensions

In TypeScript 5.0, when an import path ends in an extension that isn’t a known JavaScript or TypeScript file extension, the compiler will look for a declaration file for that path in the form of {file basename}.d.{extension}.ts.
For example, if you are using a CSS loader in a bundler project, you might want to write (or generate) declaration files for those stylesheets:
/* app.css */
.cookie-banner {
display: none;
}
// app.d.css.ts
declare const css: {
cookieBanner: string;
};
export default css;
// App.tsx
import styles from "./app.css";
styles.cookieBanner; // string
By default, this import will raise an error to let you know that TypeScript doesn’t understand this file type and your runtime might not support importing it. But if you’ve configured your runtime or bundler to handle it, you can suppress the error with the new --allowArbitraryExtensions compiler option.
Note that historically, a similar effect has often been achievable by adding a declaration file named app.css.d.ts instead of app.d.css.ts – however, that only worked through Node’s require resolution rules for CommonJS. Strictly speaking, the former is interpreted as a declaration file for a JavaScript file named app.css.js. Because relative file imports need to include extensions in Node’s ESM support, TypeScript would error on our example in an ESM file under --moduleResolution node16 or nodenext.
For more information, read up on the proposal for this feature and its corresponding pull request.
customConditions

--customConditions takes a list of additional conditions that should succeed when TypeScript resolves from an exports (https://nodejs.org/api/packages.html#exports) or imports field of a package.json. These conditions are added to whatever existing conditions a resolver will use by default. For example, when this field is set in a tsconfig.json as so:
{
"compilerOptions": {
"target": "es2022",
"moduleResolution": "bundler",
"customConditions": ["my-condition"]
}
}
Any time an exports or imports field is referenced in package.json, TypeScript will consider conditions called my-condition. So when importing from a package with the following package.json:
{
// ...
"exports": {
".": {
"my-condition": "./foo.mjs",
"node": "./bar.mjs",
"import": "./baz.mjs",
"require": "./biz.mjs"
}
}
}
TypeScript will try to look for files corresponding to foo.mjs. This field is only valid under the node16, nodenext, and bundler options for --moduleResolution.
--verbatimModuleSyntax

By default, TypeScript does something called import elision. Basically, if you write something like
import { Car } from "./car";
export function drive(car: Car) {
// ...
}
TypeScript detects that you’re only using an import for types and drops the import entirely. Your output JavaScript might look something like this:
export function drive(car) {
// ...
}
Most of the time this is good, because if Car isn’t a value that’s exported from ./car, we’ll get a runtime error. But it does add a layer of complexity for certain edge cases. For example, notice there’s no statement like import "./car"; – the import was dropped entirely. That actually makes a difference for modules that have side-effects.
TypeScript’s emit strategy for JavaScript also has another few layers of complexity – import elision isn’t always just driven by how an import is used – it often consults how a value is declared as well. So it’s not always clear whether code like the following
export { Car } from "./car";
should be preserved or dropped.
If Car is declared with something like a class, then it can be preserved in the resulting JavaScript file. But if Car is only declared as a type alias or interface, then the JavaScript file shouldn’t export Car at all.
While TypeScript might be able to make these emit decisions based on information from across files, not every compiler can.
The type modifier on imports and exports helps with these situations a bit. We can make it explicit whether an import or export is only being used for type analysis, and can be dropped entirely in JavaScript files, by using the type modifier.
// This statement can be dropped entirely in JS output
import type * as car from "./car";
// The named import/export 'Car' can be dropped in JS output
import { type Car } from "./car";
export { type Car } from "./car";
type modifiers are not quite useful on their own – by default, module elision will still drop imports, and nothing forces you to make the distinction between type and plain imports and exports. So TypeScript has the flag --importsNotUsedAsValues to make sure you use the type modifier, --preserveValueImports to prevent some module elision behavior, and --isolatedModules to make sure that your TypeScript code works across different compilers.
Unfortunately, understanding the fine details of those 3 flags is hard, and there are still some edge cases with unexpected behavior.
TypeScript 5.0 introduces a new option called --verbatimModuleSyntax to simplify the situation. The rules are much simpler – any imports or exports without a type modifier are left around. Anything that uses the type modifier is dropped entirely.
// Erased away entirely.
import type { A } from "a";
// Rewritten to 'import { b } from "bcd";'
import { b, type c, type d } from "bcd";
// Rewritten to 'import "xyz";'
import { type xyz } from "xyz";
With this new option, what you see is what you get.
That does have some implications when it comes to module interop, though. Under this flag, ECMAScript imports and exports won’t be rewritten to require calls when your settings or file extension imply a different module system. Instead, you’ll get an error. If you need to emit code that uses require and module.exports, you’ll have to use TypeScript’s module syntax that predates ES2015:
Input TypeScript | Output JavaScript |
---|---|
import foo = require("foo"); | const foo = require("foo"); |
export = foo; | module.exports = foo; |
While this is a limitation, it does help make some issues more obvious. For example, it’s very common to forget to set the type field in package.json under --module node16. As a result, developers would start writing CommonJS modules instead of ES modules without realizing it, getting surprising lookup rules and JavaScript output. This new flag ensures that you’re intentional about the file type you’re using, because the syntax is intentionally different.
Because --verbatimModuleSyntax provides a more consistent story than --importsNotUsedAsValues and --preserveValueImports, those two existing flags are being deprecated in its favor. For more details, read up on the original pull request (https://github.com/microsoft/TypeScript/pull/52203) and its proposal issue.
export type *

When TypeScript 3.8 introduced type-only imports, the new syntax wasn’t allowed on export * from "module" or export * as ns from "module" re-exports. TypeScript 5.0 adds support for both of these forms:
// models/spaceship.ts
export class Spaceship {
// ...
}
// models/index.ts
export type * as vehicles from "./spaceship";
// main.ts
import { vehicles } from "./models";
function takeASpaceship(s: vehicles.Spaceship) {
// ✅ ok - `vehicles` only used in a type position
}
function makeASpaceship() {
return new vehicles.Spaceship();
// ^^^^^^^^
// 'vehicles' cannot be used as a value because it was exported using 'export type'.
}
You can read more about the implementation here.
@satisfies Support in JSDoc

TypeScript 4.9 introduced the satisfies operator. It made sure that the type of an expression was compatible, without affecting the type itself. For example, let’s take the following code:
interface CompilerOptions {
strict?: boolean;
outDir?: string;
// ...
extends?: string | string[];
}
declare function resolveConfig(configPath: string): CompilerOptions;
let myCompilerOptions = {
strict: true,
outDir: "../lib",
// ...
extends: [
"@tsconfig/strictest/tsconfig.json",
"../../../tsconfig.base.json"
],
} satisfies CompilerOptions;
Here, TypeScript knows that myCompilerOptions.extends was declared with an array – because while satisfies validated the type of our object, it didn’t bluntly change it to CompilerOptions and lose information. So if we want to map over extends, that’s fine.
let inheritedConfigs = myCompilerOptions.extends.map(resolveConfig);
This was helpful for TypeScript users, but plenty of people use TypeScript to type-check their JavaScript code with JSDoc annotations. That’s why TypeScript 5.0 supports a new JSDoc tag called @satisfies that does exactly the same thing. /** @satisfies */ can catch type mismatches:
// @ts-check
/**
* @typedef CompilerOptions
* @prop {boolean} [strict]
* @prop {string} [outDir]
* @prop {string | string[]} [extends]
*/
/**
* @satisfies {CompilerOptions}
*/
let myCompilerOptions = {
outdir: "../lib",
// ~~~~~~ oops! we meant outDir
};
But it will preserve the original type of our expressions, allowing us to use our values more precisely later on in our code.
// @ts-check
/**
* @typedef CompilerOptions
* @prop {boolean} [strict]
* @prop {string} [outDir]
* @prop {string | string[]} [extends]
*/
/**
* @satisfies {CompilerOptions}
*/
let myCompilerOptions = {
strict: true,
outDir: "../lib",
extends: [
"@tsconfig/strictest/tsconfig.json",
"../../../tsconfig.base.json"
],
};
let inheritedConfigs = myCompilerOptions.extends.map(resolveConfig);
/** @satisfies */ can also be used inline on any parenthesized expression. We could have written myCompilerOptions like this:
let myCompilerOptions = /** @satisfies {CompilerOptions} */ ({
strict: true,
outDir: "../lib",
extends: [
"@tsconfig/strictest/tsconfig.json",
"../../../tsconfig.base.json"
],
});
Why? Well, it usually makes more sense when you’re deeper in some other code, like a function call.
compileCode(/** @satisfies {CompilerOptions} */ ({
// ...
}));
This feature was provided thanks to Oleksandr Tarasiuk!
@overload Support in JSDoc

In TypeScript, you can specify overloads for a function. Overloads give us a way to say that a function can be called with different arguments, and possibly return different results. They can restrict how callers can actually use our functions, and refine what results they’ll get back.
// Our overloads:
function printValue(str: string): void;
function printValue(num: number, maxFractionDigits?: number): void;
// Our implementation:
function printValue(value: string | number, maximumFractionDigits?: number) {
if (typeof value === "number") {
const formatter = Intl.NumberFormat("en-US", {
maximumFractionDigits,
});
value = formatter.format(value);
}
console.log(value);
}
Here, we’ve said that printValue takes either a string or a number as its first argument. If it takes a number, it can take a second argument to determine how many fractional digits we can print.
TypeScript 5.0 now allows JSDoc to declare overloads with a new @overload tag. Each JSDoc comment with an @overload tag is treated as a distinct overload for the following function declaration.
// @ts-check
/**
* @overload
* @param {string} value
* @return {void}
*/
/**
* @overload
* @param {number} value
* @param {number} [maximumFractionDigits]
* @return {void}
*/
/**
* @param {string | number} value
* @param {number} [maximumFractionDigits]
*/
function printValue(value, maximumFractionDigits) {
if (typeof value === "number") {
const formatter = Intl.NumberFormat("en-US", {
maximumFractionDigits,
});
value = formatter.format(value);
}
console.log(value);
}
Now regardless of whether we’re writing in a TypeScript or JavaScript file, TypeScript can let us know if we’ve called our functions incorrectly.
// all allowed
printValue("hello!");
printValue(123.45);
printValue(123.45, 2);
printValue("hello!", 123); // error!
This new tag was implemented thanks to Tomasz Lenarcik.
Passing Emit-Specific Flags Under --build

TypeScript now allows the following flags to be passed under --build mode:

- --declaration
- --emitDeclarationOnly
- --declarationMap
- --sourceMap
- --inlineSourceMap
This makes it way easier to customize certain parts of a build where you might have different development and production builds.
For example, a development build of a library might not need to produce declaration files, but a production build would. A project can configure declaration emit to be off by default and simply be built with
tsc --build -p ./my-project-dir
Once you’re done iterating in the inner loop, a "production" build can just pass the --declaration
flag.
tsc --build -p ./my-project-dir --declaration
More information on this change is available here.
Exhaustive switch/case Completions

When writing a switch statement, TypeScript now detects when the value being checked has a literal type. If so, it will offer a completion that scaffolds out each uncovered case.
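For example, with a value whose type is a union of string literals (a hypothetical example), the completion can stub out all of the uncovered cases at once:

type Direction = "north" | "south" | "east" | "west";
function move(direction: Direction) {
    switch (direction) {
        // Completions here offer to scaffold out:
        // case "north": ...
        // case "south": ...
        // case "east": ...
        // case "west": ...
    }
}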
You can see specifics of the implementation on GitHub.
TypeScript 5.0 contains lots of powerful changes across our code structure, our data structures, and algorithmic implementations. What these all mean is that your entire experience should be faster – not just running TypeScript, but even installing it.
Here are a few interesting wins in speed and size that we’ve been able to capture relative to TypeScript 4.9.
Scenario | Time or Size Relative to TS 4.9 |
---|---|
material-ui build time | 90% |
Playwright build time | 89% |
tsc startup time | 89% |
tsc build time | 86% |
Outlook Web build time | 83% |
VS Code build time | 81% |
typescript Package Size | 58% |
In other words, we’ve found TypeScript 5.0 Beta only takes 81% of the time it takes TypeScript 4.9 to build VS Code.
How? There are a few notable improvements we’d like to give more details on in the future. But we won’t make you wait for that blog post.
First off, we recently migrated TypeScript from namespaces to modules, allowing us to leverage modern build tooling that can perform optimizations like scope hoisting. Using this tooling, revisiting our packaging strategy, and removing some deprecated code has shaved off about 26.5 MB from TypeScript 4.9’s 63.8 MB package size. It also brought us a notable speed-up through direct function calls.
TypeScript also added more uniformity to internal object types within the compiler, while slimming down certain object types as well. This reduced polymorphic and megamorphic use sites, while offsetting some of the memory footprint that came as a tradeoff.
We’ve also performed some caching when serializing information to strings. Type display, which can happen as part of error reporting, declaration emit, code completions, and more, can end up being fairly expensive. TypeScript now caches some commonly used machinery to reuse across these operations.
Overall, we expect most codebases should see speed improvements from TypeScript 5.0, and have consistently been able to reproduce wins between 10% to 20%. Of course this will depend on hardware and codebase characteristics, but we encourage you to try it out on your codebase today!
For more information, see some of our notable optimizations:

- Node Monomorphization
- Symbol Monomorphization
- Identifier Size Reduction
- Printer Caching

Runtime Requirements

TypeScript now targets ECMAScript 2018. For Node users, that means a minimum version requirement of at least Node.js 10.
lib.d.ts Changes

Changes to how types for the DOM are generated might have an impact on existing code. Notably, certain properties have been converted from number to numeric literal types, and properties and methods for cut, copy, and paste event handling have been moved across interfaces.
In TypeScript 5.0, we moved to modules, removed some unnecessary interfaces, and made some correctness improvements. For more details on what’s changed, see our API Breaking Changes page.
Certain operations in TypeScript will already warn you if you write code which may cause an implicit string-to-number coercion:
function func(ns: number | string) {
return ns * 4; // Error, possible implicit coercion
}
In 5.0, this will also be applied to the relational operators >, <, <=, and >=:
function func(ns: number | string) {
return ns > 4; // Now also an error
}
To allow this if desired, you can explicitly coerce the operand to a number using +:
function func(ns: number | string) {
return +ns > 4; // OK
}
This correctness improvement was contributed courtesy of Mateusz Burzyński.
TypeScript has had some long-standing oddities around enums ever since its first release. In 5.0, we’re cleaning up some of these problems, as well as reducing the concept count needed to understand the various kinds of enums you can declare. There are two main new errors you might see as part of this. The first is that assigning an out-of-domain literal to an enum type will now error as one might expect:
enum SomeEvenDigit {
Zero = 0,
Two = 2,
Four = 4
}
// Now correctly an error
let m: SomeEvenDigit = 1;
The other is that certain kinds of indirect mixed string/number enum declarations would, incorrectly, create an all-number enum:
enum Letters {
A = "a"
}
enum Numbers {
one = 1,
two = Letters.A
}
// Now correctly an error
const t: number = Numbers.two;
You can see more details in the relevant change.
--experimentalDecorators

TypeScript 5.0 makes type-checking more accurate for decorators under --experimentalDecorators. One place where this becomes apparent is when using a decorator on a constructor parameter.
export declare const inject:
(entity: any) =>
(target: object, key: string | symbol, index?: number) => void;
export class Foo {}
export class C {
constructor(@inject(Foo) private x: any) {
}
}
This call will fail because key expects a string | symbol, but constructor parameters receive a key of undefined. The correct fix is to change the type of key within inject. A reasonable workaround, if you’re using a library that can’t be upgraded, is to wrap inject in a more type-safe decorator function and use a type-assertion on key.
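As a rough sketch of that workaround (reusing inject and Foo from above; injectSafe is a hypothetical name):

// A hypothetical wrapper that accepts the 'undefined' key passed for
// constructor parameters and asserts it back to the type 'inject' expects.
const injectSafe = (entity: any) =>
    (target: object, key: string | symbol | undefined, index?: number): void =>
        inject(entity)(target, key as string | symbol, index);

export class D {
    constructor(@injectSafe(Foo) private x: any) {
    }
}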
For more details, see this issue.
In TypeScript 5.0, we’ve deprecated the following settings and setting values:

- --target: ES3
- --out
- --noImplicitUseStrict
- --keyofStringsOnly
- --suppressExcessPropertyErrors
- --suppressImplicitAnyIndexErrors
- --noStrictGenericChecks
- --charset
- --importsNotUsedAsValues
- --preserveValueImports
- prepend in project references

These configurations will continue to be allowed until TypeScript 5.5, at which point they will be removed entirely; however, you will receive a warning in the meantime if you are using these settings.
In TypeScript 5.0, as well as the future releases 5.1, 5.2, 5.3, and 5.4, you can specify "ignoreDeprecations": "5.0" to silence those warnings.
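For example, a project that still relies on one of the deprecated flags could silence the warning like so (the specific flags here are illustrative):

{
    "compilerOptions": {
        "ignoreDeprecations": "5.0",
        "importsNotUsedAsValues": "error"
    }
}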
We’ll also shortly be releasing a 4.9 patch that allows specifying ignoreDeprecations, for smoother upgrades.
Aside from deprecations, we’ve changed some settings to better improve cross-platform behavior in TypeScript.
--newLine, which controls the line endings emitted in JavaScript files, used to be inferred based on the current operating system if not specified. We think builds should be as deterministic as possible, and Windows Notepad supports line-feed line endings now, so the new default setting is LF. The old OS-specific inference behavior is no longer available.
--forceConsistentCasingInFileNames, which ensured that all references to the same file name in a project agreed in casing, now defaults to true. This can help catch casing issues with code written on case-insensitive file systems.
You can leave feedback and view more information on the tracking issue for 5.0 deprecations.
TypeScript 5.0 is shaping up to be a great release. In the coming weeks, we’ll be focusing on bug fixes, stability, and polish for our upcoming Release Candidate, followed by the first stable release.
As usual, details about our release (including target dates!) are available on the TypeScript 5.0 Iteration Plan. We hope the iteration plan makes TypeScript 5.0 easier to test around you and your team’s schedule!
We also hope that TypeScript 5.0 Beta brings lots of new features you’ve been looking forward to. Give our beta release (or our nightly builds) a try today and let us know what you think!
Happy Hacking!
– Daniel Rosenwasser and the TypeScript Team
With the proliferation of video on-demand streaming services, viewers face a big challenge: finding content across multiple screens and apps. There may be quality information available online, but it may be difficult to find. Traditionally, viewers resort to "app switching", which can be frustrating when it comes to finding quality content.
With the emergence of new technologies like AI, metadata, and machine learning, traditional content discovery approaches can’t cut the mustard anymore for content publishers. The solution is to integrate their catalogues and programming guides into a Content Discovery Platform. But what is a Discovery Platform, and how can it make it easier for users to find what they want? Discovery Platforms with metadata aggregation, AI/ML enrichments, search, and recommendations are the new disruptors in content marketing. Today’s post will focus on just one of the pillars of content discovery: the recommendations engine.
The goal of a recommendations engine is to predict the degree to which a user will like or dislike a set of items such as movies or videos. With this technology, viewers are automatically advised of content that they might like without the need to search for specific items or browse through an online guide. Recommender systems let viewers watch shows at convenient times, give them convenient digital access to those shows, and help them find shows using numerous indices. Indices include genre, actor, director, keyword, and the probability that the viewer will like the show as predicted by a collaborative filtering system. This results in greater satisfaction for the viewer, with increased loyalty and higher revenues for the business.
1. Methods
Most recommender systems use a combination of different approaches, but broadly speaking there are three different methods that can be used: content-based recommendations, social recommendations, and collaborative filtering.
Each of these approaches can provide a level of recommendations on its own, but most recommendation platforms take a hybrid approach, using information from each of these different sources to define what shows are recommended to the users.
1.1. Content-based
Content-based recommenders use features such as the genre, cast and age of the show as attributes for a learning system. However, such features are only weakly predictive of whether viewers will like the show. There are only a few hundred genres and they lack the specificity required for accurate prediction.
In the TV world, the only content-analysis technologies available to date rely on the metadata associated with the programmes. The recommendations are only as good as the metadata, and are typically recommendations within a certain genre or with a certain star.
1.2. Social recommendations
Social-networking technologies allow for a new level of sophistication whereby users can easily receive recommendations based on the shows that other people within their social network have ranked highly, providing a more personal level of recommendations than are achieved using a newspaper or web site.
A number of social networks dedicated to providing music recommendations have emerged over the last few years, the best known of these being last.fm, which encourages users to track all of their listening habits with the website and then applies a collaborative filtering algorithm to identify similar users and draw recommendations from them.
The advantage of social recommendations is that because they have a high degree of personal relevance they are typically well received, with the disadvantage being that the suggested shows tend to cluster around a few well known or cult-interest programmes.
1.3. Collaborative filtering
Collaborative filtering methods are based on collecting and analysing a large amount of information on users’ behaviour, activity or preferences and predicting what users will like based on their similarity to other users.
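To make the idea concrete, here is a minimal TypeScript sketch of user-based collaborative filtering – illustrative only, since production systems add normalisation, temporal decay, and far more data:

type Ratings = Map<string, Map<string, number>>; // userId -> (itemId -> rating)

// Cosine similarity between two users' rating vectors over their shared items.
function similarity(a: Map<string, number>, b: Map<string, number>): number {
    let dot = 0, normA = 0, normB = 0;
    for (const [item, ra] of a) {
        const rb = b.get(item);
        if (rb !== undefined) {
            dot += ra * rb;
            normA += ra * ra;
            normB += rb * rb;
        }
    }
    return dot === 0 ? 0 : dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Predict a user's rating for an item as the similarity-weighted average of
// ratings from other users who have rated that item.
function predict(ratings: Ratings, userId: string, itemId: string): number {
    const me = ratings.get(userId);
    if (me === undefined) return 0;
    let weighted = 0, totalSim = 0;
    for (const [otherId, other] of ratings) {
        const rating = other.get(itemId);
        if (otherId === userId || rating === undefined) continue;
        const sim = similarity(me, other);
        weighted += sim * rating;
        totalSim += Math.abs(sim);
    }
    return totalSim === 0 ? 0 : weighted / totalSim;
}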
There are two types of filtering:
Collaborative filtering systems can be categorised along the following major dimensions:
The tasks for which collaborative filtering is useful are:
1.3.1. Time-based Collaborative Filtering with Implicit Feedback
Most collaborative filtering-based recommender systems use explicit feedback (ratings) collected directly from users. When users rate truthfully, using rating information is one of the best ways to quantify user preferences. However, many users assign arbitrary ratings that do not reflect their honest opinions, and in some e-commerce environments it is difficult to ask users for ratings at all. For instance, in a mobile e-commerce environment the service fee depends on the connection time.
2. Accuracy
In the recommender systems community it is increasingly recognised that accuracy metrics such as mean absolute error (MAE), precision and recall can only partially evaluate a recommender system. User satisfaction, and derivatives thereof such as serendipity, diversity and trust, are increasingly seen as important. A system can make better recommendations using the following approaches:
3. Relevance
Google’s PageRank mechanism is possible in the web because pages are linked to each other, but for video on-demand and streaming platforms we need to find another approach to relevance that will allow us to prioritise the most appropriate programming ahead of less relevant items. There are a number of potential elements that can be included and the best algorithms take into account each of these factors:
4. Challenges
The difficulty in implementing recommendations is that different users have different tastes and opinions about which content they prefer.
5. Research papers
I love a good "I built a thing and here is how I built that thing" post, especially when it’s penned by someone like Chris who’s sure to keep you entertained along the way.

Wouldn’t it be neat to have aggregated data (for a website, daily email, push alert, etc.) of kids events in our surrounding area so we know about them right away?

– My wife, possibly salty we missed out on Bluey Live tickets in Portland
Recently I packaged my project git-cliff (changelog generator written in Rust) for NPM with the help of my friend @atlj. I thought this would be an interesting topic for a blog post since it has a certain technical depth about distributing binaries and frankly it still amazes me how the whole thing works so smoothly. So let's create a simple Rust project, package it for NPM and fully automate the release process via GitHub Actions.
Q: Wait, what? I thought NPM was just for Javascript stuff!?
A: Actually, no. As long as you have the correct set of tools for executing a binary and packaging it, you're set. It doesn't necessarily need to be a Rust package as well, you can package anything and do anything when the package is installed/executed. That's why NPM is so dangerous! - a topic for another blog post.
Q: Okay, I see. But... why do this? Can't you just download the binary and run it?
A: As an answer to this question, I would like to outline my conversation with @atlj. Please note that it's not the actual conversation and we do not talk like that. Or maybe we do. Anyways, here it is:
atlj: Yo orhun, let’s package git-cliff for NPM so that it will be more accessible to frontend devs, and npx is very convenient for installing/running stuff.
orhun: Sounds good. But how do we do it?
atlj: Check this sh*t out: lefthook (GitHub) & lefthook (NPM)
orhun: Oh, it's a Go project and they have an NPM package. WTF!
atlj: Yeah, we can do the same. Or even better.
orhun: I'm down.
So the inspiration came from lefthook, and we wanted to see how we could take this approach and apply it to git-cliff.
It worked flawlessly! Just run:
npx git-cliff@latest
Q: Show me how.
A: Follow me.
First of all, let's understand what NPM is and how the NPM registry works.
NPM (originally short for "Node Package Manager") is the default package manager for the JavaScript runtime environment Node.js. It consists of a command line client (npm) and an online database of public and paid-for private packages, called the NPM registry. This is where we will push our packages.
The packages in the NPM registry are in CommonJS format and include a metadata file in JSON format (package.json). The registry does not have any vetting process for submission, which means that packages found there can potentially be low quality, insecure, or malicious. NPM relies on user reports to take down such packages. You can manually check your package for insecure dependencies by running npm audit.
To install a package or the dependencies specified by a package, you can run npm install. On the other hand, npx can be used to run an arbitrary command from an NPM package which is either installed locally or fetched remotely. It handles both installation and execution; in that sense, we can think of it as a shortcut for npm install & npm run. At the end of this blog post, we’re aiming to install/run our application with npx <app>.
Let me make it clear that we won’t be compiling our Rust application into WASM for packaging it for the NPM registry. That’s a wasm-pack task on its own. You can read more about this approach here and here.
Instead, we will be distributing binaries that are built for different targets (i.e. architectures/platforms). Each NPM package will be responsible for wrapping the target-specific binary and there will be a "base" package that is exposed to the end user. This is why this packaging approach is more portable since you only need to compile binaries for different architectures and place the binaries inside NPM packages.
Let's break it down:
Here, we are taking advantage of the following package.json fields:

- bin: This field points to our main executable (a command or local file name) in the package. When the package is installed globally, that file will be linked inside the global bins directory. For example, on a Unix-like OS it’ll create a symlink from the index.js script to /usr/local/bin/myapp, and in the case of Windows it will create a cmd file, usually at C:\Users\<Username>\AppData\Roaming\npm\myapp.cmd, which runs the index.js script. One thing to note here is that the file referenced in bin should have #!/usr/bin/env node as its shebang, otherwise the script is started without the node executable.
- optionalDependencies: This field is for dependencies that can be used but are not strictly needed for the package. We will be specifying our target-specific NPM packages in this field, since we only want to install the appropriate package for the current architecture.

But how do we distinguish between different targets and know which optional dependency to install? Well, the os and cpu fields help us to filter the correct dependency among different dependencies.
- os: Specifies which operating systems the package will be running on. Possible values are 'aix', 'darwin', 'freebsd', 'linux', 'openbsd', 'sunos', and 'win32'.
- cpu: Specifies which CPU architecture the package will be running on. Possible values are 'arm', 'arm64', 'ia32', 'mips', 'mipsel', 'ppc', 'ppc64', 's390', 's390x', and 'x64'.
Our project structure will be the following:
$ git ls-tree -r --name-only HEAD | tree --fromfile
.
├── Cargo.lock
├── Cargo.toml # ----------------> manifest of the Rust application
├── .github
│   └── workflows
│       └── cd.yml # ------------> GitHub Actions workflow for automated releases
├── .gitignore
├── npm
│   ├── app
│   │   ├── package.json # ------> metadata of the base NPM package
│   │   ├── src
│   │   │   └── index.ts # ------> entrypoint of the base NPM package (binary executor)
│   │   ├── tsconfig.json
│   │   └── yarn.lock
│   └── package.json.tmpl # -----> template for the target-specific NPM packages
└── src
    └── main.rs # ---------------> entrypoint of the Rust application
Let's create a simple Rust project first:
$ cargo new --bin app && cd app/
$ cargo run
Hello, world!
Next, we need to add our "base" package’s package.json file as follows (some fields are stripped):
{
"name": "app",
"version": "0.1.0",
"bin": "lib/index.js",
"scripts": {
"typecheck": "tsc --noEmit",
"lint": "eslint .",
"lint:fix": "eslint . --fix",
"build": "tsc",
"dev": "yarn build && node lib/index.js"
},
"devDependencies": {
"@types/node": "^18.11.18",
"@typescript-eslint/eslint-plugin": "^5.48.0",
"@typescript-eslint/parser": "^5.48.0",
"eslint": "^8.31.0",
"typescript": "^4.9.4"
},
"optionalDependencies": {
"app-linux-x64": "0.1.0",
"app-linux-arm64": "0.1.0",
"app-darwin-x64": "0.1.0",
"app-darwin-arm64": "0.1.0",
"app-windows-x64": "0.1.0",
"app-windows-arm64": "0.1.0"
}
}
As you can see here, we are setting an optional dependency for each of our target-specific packages so that NPM can decide on the correct package to install based on the current platform. So let’s add a template for generating these packages.
Huh, wait. Did you say "generating"?
Yes, I think it's a good idea to generate NPM packages during the continuous deployment workflow instead of having 6 different folders and package.json
files in our project. The only thing that changes between these packages is the name
, os
, and the cpu
fields so we can simply create them from a template via envsubst(1)
.
Considering this, we can come up with the following template:
{
"name": "${node_pkg}",
"version": "${node_version}",
"os": ["${node_os}"],
"cpu": ["${node_arch}"]
}
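envsubst(1) handles the substitution in the workflow below; if you would rather stay in Node land, a rough TypeScript equivalent (a hypothetical helper, not part of the original setup) could look like this:

import { readFileSync, writeFileSync } from "fs";

// Replace ${var} placeholders in the template with values from process.env.
function renderTemplate(templatePath: string, outPath: string): void {
    const template = readFileSync(templatePath, "utf8");
    const rendered = template.replace(
        /\$\{(\w+)\}/g,
        (_match, name: string) => process.env[name] ?? ""
    );
    writeFileSync(outPath, rendered);
}

renderTemplate("npm/package.json.tmpl", "npm/app-linux-x64/package.json");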
Okay, that's cool and all but where do we put the binary?
Good question. Our directory structure will look like this after we generate the package.json file and build the binary:
$ rg --files npm | tree --fromfile
.
└── npm
    ├── app # ----------------> base package
    │   ├── package.json
    │   ├── src
    │   │   └── index.ts # ---> executor
    │   ├── tsconfig.json
    │   └── yarn.lock
    ├── app-linux-x64 # ------> generated package for linux
    │   ├── bin
    │   │   └── app # --------> binary
    │   └── package.json # ---> metadata
    └── package.json.tmpl
Now we know that the correct optional dependency will be installed alongside our base package and that it will contain the binary. But how do we locate and execute it? Well, that’s why we have our src/index.ts:
#!/usr/bin/env node
import { spawnSync } from "child_process";
/**
* Returns the executable path which is located inside `node_modules`
* The naming convention is app-${os}-${arch}
* If the platform is `win32` or `cygwin`, executable will include a `.exe` extension.
 * @see https://nodejs.org/api/os.html#osarch
 * @see https://nodejs.org/api/os.html#osplatform
* @example "x/xx/node_modules/app-darwin-arm64"
*/
function getExePath() {
const arch = process.arch;
let os = process.platform as string;
let extension = "";
if (["win32", "cygwin"].includes(process.platform)) {
os = "windows";
extension = ".exe";
}
try {
// Since the binary will be located inside `node_modules`, we can simply call `require.resolve`
return require.resolve(`app-${os}-${arch}/bin/app${extension}`);
} catch (e) {
throw new Error(
`Couldn't find application binary inside node_modules for ${os}-${arch}`
);
}
}
/**
* Runs the application with args using nodejs spawn
*/
function run() {
const args = process.argv.slice(2);
const processResult = spawnSync(getExePath(), args, { stdio: "inherit" });
process.exit(processResult.status ?? 0);
}
run();
When we build the package via yarn build, it will generate lib/index.js, which will be our entrypoint for the wrapper.
After we have everything in place, we can simply publish these packages via npm publish. However, please note that the optional dependencies must already be present in the NPM registry for the base package to build. This means that you need to publish each optional dependency before attempting to publish the base package. Otherwise, you might get an error like the following:
error An unexpected error occurred: "https://registry.npmjs.org/app-linux-x64: Not found".
info If you think this is a bug, please open a bug report with the information provided in "/home/runner/work/packaging-rust-for-npm/packaging-rust-for-npm/npm/app/yarn-error.log".
info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.
Error: Process completed with exit code 1.
We can automate the publishing process of the NPM packages with a GitHub Actions workflow which runs when a tag is pushed or a release is created.
To build binaries and publish the target-specific NPM packages for each platform, we need to use a build matrix. For that, we can create the following matrix:
matrix:
  build:
    - {
        NAME: linux-x64-glibc,
        OS: ubuntu-20.04,
        TOOLCHAIN: stable,
        TARGET: x86_64-unknown-linux-gnu,
      }
    - {
        NAME: linux-arm64-glibc,
        OS: ubuntu-20.04,
        TOOLCHAIN: stable,
        TARGET: aarch64-unknown-linux-gnu,
      }
    - {
        NAME: win32-x64-msvc,
        OS: windows-2022,
        TOOLCHAIN: stable,
        TARGET: x86_64-pc-windows-msvc,
      }
    - {
        NAME: win32-arm64-msvc,
        OS: windows-2022,
        TOOLCHAIN: stable,
        TARGET: aarch64-pc-windows-msvc,
      }
    - {
        NAME: darwin-x64,
        OS: macos-11,
        TOOLCHAIN: stable,
        TARGET: x86_64-apple-darwin,
      }
    - {
        NAME: darwin-arm64,
        OS: macos-11,
        TOOLCHAIN: stable,
        TARGET: aarch64-apple-darwin,
      }
Here, we have the following fields in each matrix entry:
- NAME: name of the build (formatted as <os>-<arch>-<env>).
- OS: type of machine to run the job on (i.e. the runner).
- TOOLCHAIN: type of the Rust toolchain.
- TARGET: type of the Rust target (i.e. target triple).

The important part is that we will later use NAME to derive the name of the NPM package. For example, linux-x64-glibc will correspond to <app>-linux-x64.
Next, we can build a binary for each build target as follows:
- name: Checkout
  uses: actions/checkout@v3

- name: Set the release version
  shell: bash
  run: echo "RELEASE_VERSION=${GITHUB_REF:11}" >> $GITHUB_ENV

- name: Install Rust toolchain
  uses: actions-rs/toolchain@v1
  with:
    toolchain: ${{ matrix.build.TOOLCHAIN }}
    target: ${{ matrix.build.TARGET }}
    override: true

- name: Build
  uses: actions-rs/cargo@v1
  with:
    command: build
    args: --release --locked --target ${{ matrix.build.TARGET }}
    use-cross: ${{ matrix.build.OS == 'ubuntu-20.04' }} # use `cross` for Linux builds
And then, we finally generate the NPM package and publish it:
- name: Install node
  uses: actions/setup-node@v3
  with:
    node-version: "16"
    registry-url: "https://registry.npmjs.org"

- name: Publish to NPM
  shell: bash
  run: |
    cd npm
    # set the binary name
    bin="app"
    # derive the OS and architecture from the build matrix name
    # note: when split by a hyphen, the first part is the OS and the second is the architecture
    node_os=$(echo "${{ matrix.build.NAME }}" | cut -d '-' -f1)
    export node_os
    node_arch=$(echo "${{ matrix.build.NAME }}" | cut -d '-' -f2)
    export node_arch
    # set the version
    export node_version="${{ env.RELEASE_VERSION }}"
    # set the package name
    # note: use 'windows' as OS name instead of 'win32'
    if [ "${{ matrix.build.OS }}" = "windows-2022" ]; then
      export node_pkg="${bin}-windows-${node_arch}"
    else
      export node_pkg="${bin}-${node_os}-${node_arch}"
    fi
    # create the package directory
    mkdir -p "${node_pkg}/bin"
    # generate package.json from the template
    envsubst < package.json.tmpl > "${node_pkg}/package.json"
    # copy the binary into the package
    # note: windows binaries have a '.exe' extension
    if [ "${{ matrix.build.OS }}" = "windows-2022" ]; then
      bin="${bin}.exe"
    fi
    cp "../target/${{ matrix.build.TARGET }}/release/${bin}" "${node_pkg}/bin"
    # publish the package
    cd "${node_pkg}"
    npm publish --access public
  env:
    NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
As the final step, we publish the base package in another job:
publish-npm-base:
  name: Publish the base NPM package
  needs: publish-npm-binaries
  runs-on: ubuntu-20.04
  steps:
    - name: Checkout
      uses: actions/checkout@v3

    - name: Install node
      uses: actions/setup-node@v3
      with:
        node-version: "16"
        registry-url: "https://registry.npmjs.org"

    - name: Publish the package
      shell: bash
      run: |
        cd npm/app
        yarn install # requires optional dependencies to be present in the registry
        yarn build
        npm publish --access public
      env:
        NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
The complete workflow file is available in the repository linked at the end of this post.
Why app-windows-x64 instead of app-win32-x64?

If you read the GitHub Actions workflow file carefully, you might have realized there is an extra check for publishing the package as app-windows-x64, although using app-win32-x64 would be easier because we already named our build matrix win32-x64-msvc.
The reason for that change is the NPM registry itself:
npm ERR! 403 403 Forbidden - PUT https://registry.npmjs.org/app-win32-x64 - Package name triggered spam detection; if you believe this is in error, please contact support at https://npmjs.com/support
Apparently, NPM doesn't like numbers in package names and recognizes them as spam. As stupid as it sounds, it's true. Other people on the internet have also suffered from this issue:

Stupid npm 😡 the name of chip is pcf8575, which name should I use?

- a random NPM victim
However, I reached out to support and they actually helped me publish the win32 packages.
Hi Orhun,

Sorry to hear about the trouble you were having.

We've initiated some blocks related to package names. Our hope is this will help with both security and spam concerns we're facing.

As support, we're able to move beyond the block. I've published the git-cliff-win32-x64 and git-cliff-win32-arm64 packages and transferred write access to the packages over to your orhun user account.
But it's better to just name the packages windows since we don't want to deal with the spam protection mechanism.
After publishing the NPM packages for your Rust project, you can use npx to install/run the application. For example:
$ npx git-cliff@latest
Need to install the following packages:
git-cliff@1.1.2
Ok to proceed? (y) y
As an alternative to npx, you can use dum, which is a faster alternative written in Rust:
$ dum install git-cliff
$ dum run git-cliff
I hope this guide was helpful for people who want to do crazy stuff like putting their Rust projects on NPM!
All the code can be found in this repository: https://github.com/orhun/packaging-rust-for-npm
Cheers!
I've been working with TypeScript for a long long time. I think I'm not too bad at it. However, to my despair, some low-level behaviors still confuse me:
- Why does 0 | 1 extends 0 ? true : false evaluate to false?
- When an object must fit both { name: string } and { age: number }, do you & or |? Both make some sense, since I want a union of the functionality in both interfaces, but I also want the object to satisfy the left & (and) right interfaces.
- How is any different from unknown? All I get is imprecise mnemonics like "Avoid Any, Use Unknown". Why?
- What is never? "A value that never happens" is very dramatic, but not too precise.
- Why is whatever | never === whatever, and whatever & never === never?
- Why is const x: {} = true; valid TS code? true is clearly not an empty object.

I was doing some research on never, and stumbled upon Zhenghao He's Complete Guide To TypeScript's Never Type (check out his blog, it's super cool!). It mentions that a type is just a set of values, and, boom, it clicked. I went back to the basics, re-formulating everything I know about TS into set-theoretic terms. Follow me as I:
- make sense of extends clauses,
- put unknown and any where they belong.

In the end, I solve most of my questions, grow much cozier with TS, and come up with this brilliant map of TS types:
First up, a refresher on set theory. Feel free to skip if you're a pro, but my algebra skills are a bit rusty, so I could use a reminder of how it works.
Sets are unordered collections of objects. In kindergarten terms: say we have two apples aka objects (let's call them ivan and bob, shall we?), and some bags aka sets where we can put the apples. We can make, in total, 4 apple sets:
- { ivan }: sets are written as curly brackets with the set items inside.
- { bob }.
- { ivan, bob }. Hold onto your hats, this is called a universe because at the moment there's nothing in our world except these two apples.
- {}. This one gets a special symbol, ∅.

Sets are often drawn as "Venn diagrams", with each set represented as a circle:

Apart from listing all the items, we can also build sets by condition. I can say "R is a set of red apples" to mean { ivan }, considering ivan is red and bob is green. So far, so good.
Set A is a subset of set B if every element from A is also in B. In our apple world, { ivan } is a subset of { ivan, bob }, but { bob } is not a subset of { ivan }. Obviously, any set is a subset of itself, and {} is a subset of any other set S, because not a single item from {} is missing from S.
There are a few useful operators defined on sets:
- Union A ∪ B: the set of all items contained in A or B (or both).
- Intersection A ∩ B: the set of items contained in both A and B.
- Difference A ∖ B: the items of A that are not in B.
This should be enough! Let's see how it all maps to types.
So, the big reveal: you can think of "types" as sets of JavaScript values. Then:
- A extends B, as seen in conditional types and generic constraints, can be read as "A is subset of B".
- The union, |, and intersection, &, operators are just the union and intersection of two sets.
- Exclude<A, B> is as close as TS gets to a difference operator, except it only works when both A and B are union types.
- never is an empty set. Proof: A & never = never and A | never = A for any type A, and Exclude<0, 0> = never (see the sketch below).

This change of view already yields some useful insights.
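These claims are easy to sanity-check in an editor; a quick type-level sketch (inferred types shown as comments):
type P1 = 0 & never;         // never: intersecting with the empty set
type P2 = 0 | never;         // 0: a union with the empty set changes nothing
type P3 = Exclude<0, 0>;     // never
type P4 = Exclude<0 | 1, 0>; // 1: difference works on union types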
I know this all sounds like a lot, so let's proceed by example, starting with a simple case of boolean values.
For now, pretend JS only has boolean values. There are exactly two: true and false. Recalling the apples, we can make a total of 4 types:
- the literal types true and false, each made up of a single value;
- boolean, which is any boolean value;
- never.

The diagram of the "boolean types" is basically the one that we had for apples, just with the names swapped:
Let's try moving between type world and set world:
- boolean can be written as true | false (in fact, that's exactly how TS implements it).
- true is a subset (aka sub-type) of boolean.
- never is an empty set, so never is a sub-set/type of true, false, and boolean.
- & is an intersection, so false & true = never, and boolean & true = (true | false) & true = true (the universe, boolean, doesn't affect intersections), and true & never = never, etc.
- | is a union, so true | never = true, and boolean | true = boolean (the universe, boolean, "swallows" other union items because they're all subsets of the universe).
- Exclude correctly computes set difference: Exclude<boolean, true> -> false.
Now, a little self-assessment of the tricky extends cases:
type A = boolean extends never ? 1 : 0;
type B = true extends boolean ? 1 : 0;
type C = never extends false ? 1 : 0;
type D = never extends never ? 1 : 0;
If you recall that "extends" can be read as "is subset of", the answers should be clear: A = 0, B = 1, C = 1, D = 1. We're making progress!
null and undefined are just like boolean, except they only contain one value each. never extends null still holds, null & boolean is never since no JS value can simultaneously be of 2 different JS types, and so on.
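In code (type-level only, a quick sketch):
type T1 = never extends null ? 1 : 0; // 1: the empty set is a subset of { null }
type T2 = null & boolean;             // never: no value is both null and a boolean
type T3 = null | undefined;           // a handy two-value set

Let's add these to our "trivial types map":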
With the simple ones out of the way, let's move on to string types. At first, it seems that nothing's changed: string is a type for "all JS strings", and every string has a corresponding literal type: const str: 'hi' = 'hi';. However, there's one key difference: there are infinitely many possible string values.

It might be a lie, because you can only represent so many strings in finite computer memory, but a) it's enough strings to make enumerating them all impractical, and b) type systems can operate on pure abstractions without worrying about dirty real-life limitations.
Just like sets, string types can be constructed in a few different ways:
- The | union lets you construct any finite string set, e.g. type Country = 'de' | 'us';. This won't work for infinite sets (say, all strings with length > 2), since you can't write an infinite list of values.
- Template literal types describe some infinite string sets: type V = `v${string}`; is the set of all strings that start with v.

We can go a bit further by making unions and intersections of literal and template types. Fun time: when combining a union with a template, TS is smart enough to just filter the literals against the template, so that 'a' | 'b' & `a${string}` = 'a'. Yet, TS is not smart enough to merge templates, so you get really fancy ways of saying never, such as `a${string}` & `b${string}` (obviously, a string can't start with "a" and "b" at the same time).
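A few of these in action (a sketch; the comments show what TS infers):
type Country = 'de' | 'us';             // a finite set, via union
type V = `v${string}`;                  // an infinite set: all strings starting with "v"
type Hit = 'v1.0' extends V ? 1 : 0;    // 1
type AB = ('a' | 'b') & `a${string}`;   // 'a': the literals get filtered against the template
type Bad = `a${string}` & `b${string}`; // unsatisfiable, i.e. never in disguise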
However, some string types are not representable in TS at all. Try "every string except 'a'". You could Exclude<string, 'a'>, but since TS doesn't actually model string as a union of all possible string literals, this in fact evaluates back to string. The template grammar cannot express this negative condition either. Bad luck!
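Checking that in code:
type E1 = Exclude<'a' | 'b', 'a'>; // 'b': difference works on a union
type E2 = Exclude<string, 'a'>;    // string: nothing is removed, since string is not a union of literals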
The types for numbers, symbols and bigints work the same way, except they don't even get a "template" type, and are limited to finite sets. It's a pity, as I could really use some number subtypes: integer, number between 0 and 1, or positive number. Anyways, all together:
Phew, we've covered all primitive, non-intersecting JS / TS types. We've gotten comfortable moving between sets and types, and discovered that some types can't be defined in TS. Here comes the tricky part.
If you think const x: {} = 9; makes no sense, this section is for you. As it appears, our mental model of TS object types / records / interfaces was built on the wrong assumptions.

First, you'd probably expect types like type Sum9 = { sum: 9 } to act like "literal types" for objects, matching a single object value { sum: 9 }, adjusted for referential equality. This is absolutely not how it works. Instead, Sum9 is a "thing on which you can access property sum to get 9", more like a condition / constraint. This lets us call (data: Sum9) => number with an object obj = { sum: 9, date: '2022-09-13' } without TS complaining about the unknown date property. See, handy!
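Here's that constraint view in action (a sketch; getSum is a made-up helper, and as const keeps sum as the literal 9):
type Sum9 = { sum: 9 };
const getSum = (data: Sum9) => data.sum;

const obj = { sum: 9, date: '2022-09-13' } as const;
getSum(obj); // ok: obj satisfies the constraint, the extra date property is fine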
Then, the {} type is not an "empty object" type corresponding to a {} JS literal, but a "thing where I can access properties, but I don't care about any particular properties". Aha, now we can see what's going on in our initial mind-bender: if x = 9, you can safely x['whatever'], so it satisfies the unconstrained {} interface. In fact, we can even make bolder claims like const x: { toString(): string } = 9;, since we can x.toString() and actually get a string. More yet, keyof number gives us "toString" | "toFixed" | "toExponential" | "toPrecision" | "valueOf" | "toLocaleString", meaning that TS secretly sees our primitive type as an object, which it is (thanks to autoboxing). null and undefined do not satisfy {}, because they throw if you try to read a property. Not super intuitive, but makes sense now.
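Condensed into code (the commented line is the only error, under strictNullChecks):
const a: {} = true;                  // ok: true is a "thing" with properties
const b: { toString(): string } = 9; // ok: (9).toString() is a string
// const c: {} = null;               // error: reading any property of null throws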
Coming back to my little "| or &" problem: & and | operate on "value sets", not on "object shapes", so you need { name: string } & { age: number } to get objects with both name and (extra hint: and = &) age.
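For the record, a tiny sketch (Named and Aged are made-up names):
type Named = { name: string };
type Aged = { age: number };

const both: Named & Aged = { name: 'ivan', age: 3 }; // & : must satisfy both constraints
const either: Named | Aged = { name: 'bob' };        // | : satisfying one is enough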
Oh, and what about that odd object type? Since every property on an interface just adds a constraint to the "thing" we're typing, there's no way to declare an interface that filters out primitive values. That's why TS has a built-in object type that means specifically "JS object, not a primitive". Yes, you can intersect with object to get only non-primitive values satisfying an interface: const x: object & { toString(): string } = 9 fails.
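To double-check (only the commented line errors):
const o1: object & { toString(): string } = {}; // ok: a non-primitive "thing" with toString
// const o2: object & { toString(): string } = 9; // error: 9 satisfies the interface, but not object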
Let's add all of these to our type map:
The extends keyword in TS can be confusing. It comes from the object-oriented world, where you extend a class in the sense of adding functionality to it, but, since TS uses structural typing, extends as used in type Extends<A, B> = A extends B ? true : false is not the same extends from class X extends Y {}.

Instead, A extends B can be read as "A is a sub-type of B" or, in set terms, "A is a subset of B". If B is a union, every member of A must also be in B. If B is a "constrained" interface, A must not violate any of B's constraints. Good news: a usual OOP class A extends B {} fits A extends B ? 1 : 0. So does 'a' extends string, meaning that (excuse the pun) TS extends extends extends.
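A quick demonstration that the two meanings agree (made-up classes):
class Animal { legs = 4; }
class Dog extends Animal { bark() { return "woof"; } }

type T1 = Dog extends Animal ? 1 : 0; // 1: every Dog is an Animal, so Dog is a subset
type T2 = 'a' extends string ? 1 : 0; // 1: a literal is a subset of string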
This "subset" view is the best way to never mix up the order of extends
operands:
0 | 1 extends 0
is false, since a 2-element set {0, 1}
is not a subset of the 1-element {0}
(even though {0,1}
does extend {1}
in a geometrical sense).never extends T
is always true, because never
, the empty set, is a subset of any set.T extends never
is only true if T is never
, because an empty set has no subsets except itself.T extends string
allows T to be a string, a literal, or a literal union, or a template, because all of these are subsets of string
.T extends string ? string extends T
makes sure that T is exactly string
, because that's the only way it can be both a subset and a superset of string.Typescript has two types that can represent an arbitrary JS value β unknown
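The last trick written out as a utility type (IsExactlyString is a made-up name):
type IsExactlyString<T> = T extends string
  ? string extends T ? true : false
  : false;

type R1 = IsExactlyString<string>;  // true
type R2 = IsExactlyString<'hi'>;    // false: a subset, but not a superset
type R3 = IsExactlyString<number>;  // false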
and any
. The normal one is unknown
β the universe of JS values:
// It's a 1
type Y = string | number | boolean | object | bigint | symbol | null | undefined extends unknown ? 1 : 0;
// a shorter one, given the {} oddity
type Y2 = {} | null | undefined extends unknown ? 1 : 0;
// For other types, this is 0:
type N = unknown extends string ? 1 : 0;
On the puzzling side, though:
- unknown is not a union of all other base types, so you can't Exclude<unknown, string>.
- unknown extends string | number | boolean | object | bigint | symbol | null | undefined is false, meaning that some TS types are not listed. I suspect enums.

All in all, it's safe to think of unknown as "the set of all possible JS values".
any is the weird one:
- any extends string ? 1 : 0 evaluates to 0 | 1, which is basically a "dunno".
- any extends never ? 1 : 0 also evaluates to 0 | 1, meaning that any might be empty.

We should conclude that any is "some set, but we're not sure which one", like a type-level NaN. However, upon further inspection, string extends any, unknown extends any, and even any extends any are all true, none of which holds for "some set". So, any is a paradox: every set is a subset of any, but any might be empty. The only good news I have is that any extends unknown, so unknown is still the universe, and any does not allow "alien" values.
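The whole paradox fits in three lines:
type W1 = any extends string ? 1 : 0; // 0 | 1: "dunno"
type W2 = any extends never ? 1 : 0;  // 0 | 1: so any might be empty...
type W3 = string extends any ? 1 : 0; // 1: ...yet every set is a subset of any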
So, to finish mapping our types, we wrap our entire diagram into an unknown bubble:
Today, we've learned that TS types are basically sets of JS values. Here's a little dictionary to go from type-world to set-world, and back:
- unknown is the set of all JS values.
- never is an empty set.
- A extends B can be read as "A is subset of B".
- Exclude is an approximation of set difference that only works on union types.

Going back to our initial questions:
- 0 | 1 extends 0 is false because {0, 1} is not a subset of {0}.
- & and | work on sets, not on object shapes. A & B is a set of things that satisfy both A and B.
- unknown is the set of all JS values. any is a paradoxical set that includes everything, but might also be empty.
- Intersecting with never gives you never because it's an empty set. never has no effect in a union.
- const x: {} = true; works because TS interfaces work by constraining the property values, and we haven't constrained anything here, so true fits.

We still have a lot of TS mysteries to solve, so stay tuned!