
Is React Having An Angular.js Moment?


In 2012, Angular.js changed the landscape of frontend development and quickly became a success. Just two years later, the Angular team launched Angular 2, a complete rewrite of the original library based on a different set of paradigms. Many developers, including myself, didn't want to rewrite their existing apps to fit these new ideas. And for new projects, Angular.js wasn't the go-to choice anymore, as other frameworks were just as good.

In 2015, we began using React for all of our frontend work. The simple architecture, the focus on components, and the steady productivity regardless of codebase size made it an easy choice. React was a big hit and the community grew very quickly. Recently, the React and Next.js teams have been promoting Server Components, a new way to build web applications that doesn't fit with most existing React apps.

Is this change as big as the move from Angular.js to Angular 2? Is React going through a similar phase as Angular.js did?

Note: In this article, I'll discuss new features introduced by both the React and Next.js teams. Since they work closely together, it's hard to say which team is responsible for which feature. So, I'll use "React" to refer to both teams.

Relearning Everything

At its core, React is a view library. This doesn't change: with React Server components, you can still build components with JSX, and render dynamic content passed as props:

function Playlist({ name, tracks }) {
    return (
        <div>
            <h1>{name}</h1>
            <table>
                <thead>
                    <tr>
                        <th>Title</th>
                        <th>Artist</th>
                        <th>Album</th>
                        <th>Duration</th>
                    </tr>
                </thead>
                <tbody>
                    {tracks.map((track, index) => (
                        <tr key={index}>
                            <td>{track.title}</td>
                            <td>{track.artist}</td>
                            <td>{track.album}</td>
                            <td>{track.duration}</td>
                        </tr>
                    ))}
                </tbody>
            </table>
        </div>
    );
}

However, everything else changes with Server Components. Data fetching no longer relies on useEffect or react-query; instead, you're expected to use fetch within asynchronous components:

async function PlaylistFromId({ id }) {
    const response = await fetch(`/api/playlists/${id}`);
    if (!response.ok) {
        // This will activate the closest `error.js` Error Boundary
        throw new Error('Failed to fetch data');
    }
    const { name, tracks } = await response.json();
    return <Playlist name={name} tracks={tracks} />;
}

The fetch function isn't the browser fetch. It's been enhanced by React to provide automatic request deduplication. Why is this necessary? If you need to access the fetched data deeper in the component tree, you can't place it in a React Context because useContext is disabled in Server Components. Therefore, the recommended method to access the same fetched data at various points in the component tree is to re-fetch it wherever required, and rely on React for the deduplication.
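
In practice, that means calling the same fetch in several Server Components and letting React collapse the requests. A minimal sketch, reusing the hypothetical /api/playlists endpoint from the example above:

async function PlaylistHeader({ id }) {
    // Same URL as PlaylistFromId: React deduplicates the request within a render pass
    const response = await fetch(`/api/playlists/${id}`);
    const { name } = await response.json();
    return <h1>{name}</h1>;
}

async function PlaylistSummary({ id }) {
    // Fetched again here instead of being passed down through a Context
    const response = await fetch(`/api/playlists/${id}`);
    const { tracks } = await response.json();
    return <p>{tracks.length} tracks</p>;
}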

This fetch function also caches data by default, irrespective of the response cache headers. The actual fetching process takes place at build time.
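
If that default isn't what you want, Next.js documents per-request options on its extended fetch to opt out of the cache or revalidate it. A hedged sketch (the option names come from the Next.js App Router documentation, not from this article):

async function PlaylistFromId({ id }) {
    // Opt out of caching for this request...
    const response = await fetch(`/api/playlists/${id}`, { cache: 'no-store' });
    // ...or keep the cache but revalidate it every 60 seconds:
    // const response = await fetch(`/api/playlists/${id}`, { next: { revalidate: 60 } });
    const { name, tracks } = await response.json();
    return <Playlist name={name} tracks={tracks} />;
}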

If you want a button to initiate a POST action, you now have to include it in a form and use server actions, which means using a function with the use server pragma:

export function AddToFavoritesButton({ id }) {
    async function addToFavorites(data) {
        'use server';

        await fetch(`/api/tracks/${id}/favorites`, { method: 'POST' });
    }
    return (
        <form action={addToFavorites}>
            <button type="submit">Add to Favorites</button>
        </form>
    );
}

The typical React hooks - useState, useContext, useEffect - will result in an error in Server Components. If you require these, you'll have to use the use client escape hatch, which forces React to render the component on the client side. Keep in mind, this is now an opt-in feature, whereas it was the default behavior in Next.js before the introduction of Server Components.
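
For reference, the escape hatch is a file-level directive. A minimal sketch of a hypothetical client component (the name and behavior are made up for illustration):

'use client';

import { useState } from 'react';

export function FavoriteToggle() {
    // useState is allowed again because this file is rendered on the client
    const [isFavorite, setIsFavorite] = useState(false);
    return (
        <button onClick={() => setIsFavorite((value) => !value)}>
            {isFavorite ? 'Remove from Favorites' : 'Add to Favorites'}
        </button>
    );
}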

CSS-in-JS isn't compatible with Server Components. If you're accustomed to directly styling components using the sx or css prop, you'll have to learn CSS Modules, Tailwind, or Sass. To me, this change feels like a step back:

// in app/dashboard/layout.tsx
import styles from './styles.module.css';

export default function DashboardLayout({
    children,
}: {
    children: React.ReactNode,
}) {
    return <section className={styles.dashboard}>{children}</section>;
}
/* in app/dashboard/styles.module.css */
.dashboard {
    padding: 24px;
}

And what about debugging? Good luck. The React DevTools don't display the details of React Server Components. You can't inspect a component in the browser to see the props it used or its children. At the moment, debugging React Server Components means resorting to console.log everywhere.

The mental model of Server Components is entirely different from client-side JS, despite the fact that the base - JSX - remains the same. Being a proficient React developer doesn't necessarily help you much with Server Components. You essentially have to relearn everything, unless, of course, you're familiar with PHP.

Note: To be fair, most of the issues I've raised concern features labeled as "alpha". It's possible that they'll be resolved before the stable release.

Developing With An Empty Ecosystem

As I mentioned earlier, react-query is no longer viable for data fetching, and it's far from the only library that doesn't work with React Server Components. Most UI kits, form frameworks, and smart API clients are affected, so if you've been relying on them, you'll need to start looking for alternatives.

The issue is that these libraries all use standard React hooks, which fail when called from Server Components.

If you require these libraries, you have to encapsulate them in a component that forces client-side rendering, by using the use client directive.

To emphasize: React Server Components break almost every existing React third-party library, and these library authors must modify their code to make them compatible. Some will do so, many won't. And even if they do, it will take time.

So if you're starting an app with React Server Components, you cannot depend on the existing React ecosystem.

Even worse: client-side React provides tools for everyday needs that are not yet covered in Server Components. For instance, React Context is a great solution for managing dependency injection. Without React Context, Server Components will likely need a Dependency Injection Container, similar to Angular's approach. If this isn't provided by the core team, you'll have to count on the community to do so.

In the meantime, you'll have to code many things manually. Building a React app without a UI Kit, a form framework, a smart API client, and the React integration of your preferred SaaS can be very challenging.

The current React ecosystem is its most significant advantage. It's what makes React so widely used. And React Server Components disrupt it.

Too Much Magic

Server-side rendering is a solved problem. A server-side script receives a request, fetches the data, and generates the HTML. Client-side rendering is equally mature. The browser retrieves the data, and the client-side script updates the DOM.

React suggests mixing server-side and client-side rendering, using what feels like black magic. You can use client components in server components, and vice versa.

When a client component renders a server component, the React server doesn't send HTML but a text representation of the component tree. The client-side script then renders the component tree on the client-side.

If you're used to debugging your AJAX requests with HTML or JSON, you're in for a surprise. Here's a sneak peek at the RSC Wire format that React uses to stream updates from server components to the client:

M1:{"id":"./src/ClientComponent.client.js","chunks":["client1"],"name":""}
S2:"react.suspense"
J0:["$","@1",null,{"children":[["$","span",null,{"children":"Hello from server land"}],["$","$2",null,{"fallback":"Loading tweets...","children":"@3"}]]}]
M4:{"id":"./src/Tweet.client.js","chunks":["client8"],"name":""}
J3:["$","ul",null,{"children":[["$","li",null,{"children":["$","@4",null,{"tweet":{...}}}]}],["$","li",null,{"children":["$","@4",null,{"tweet":{...}}}]}]]}]

This format is not documented because it's an implementation detail.

Readability is what has made HTTP, JSON, and JSX so popular. However, React Server Components disrupt this pattern.

React Server Components feel like too much magic: they're difficult for most developers to comprehend and debug. It's uncertain whether this will boost or hinder productivity.

Do We Really Need This?

If you think about web development from first principles, it's logical to conclude that server-side rendering with a dash of AJAX is an effective way to build web apps. Dan Abramov provided a brilliant explanation of the motivation behind React Server Components in his talk at Remix Conf 2023.

This architecture is well-suited for e-commerce websites, blogs, and other content-focused websites with strong SEO requirements.

However, that's not a new concept. This architecture has been used for years with tools like Hotwire in Rails or Symfony applications.

Also, some of the issues that Server Components aim to address (like data fetching, partial rendering, etc.) are already solved by some Single-Page Application frameworks, like our own react-admin. Other issues (large bundles, slow first load, SEO) aren't real problems for admin panels, SaaS, B2B apps, internal apps, CRMs, ERPs, long-lived apps, and more.

That's why many React developers, myself included, are happy with the Single-Page App architecture. And when I need to do some server-side rendering, I'll likely opt for a tool with a more mature ecosystem than React Server Components.

So if I don't need it, why should I be concerned?

The Standard Way To Build React Apps

My first problem is that React is dissuading people from using the Single-Page-App architecture. Or rather, they are discouraging developers from using React without a framework, and the frameworks they recommend promote server-side rendering.

I have a second problem.

The official React.js documentation primarily advises using Next.js.

The official Next.js documentation primarily advises using React Server Components starting from version 13.4.

In other words, React Server Components are the default way to build React apps as per the official documentation. A newcomer in the React ecosystem will naturally use them.

I think this is premature. In Dan Abramov's opinion, too:

"It takes a lot of work to make a new paradigm work"

React Server Components require a new generation of routers and bundlers. They are officially in alpha, and not ready for production.

So why is Next.js so pushy about it?

I can't help feeling that the new direction taken by Next.js is not designed to help developers, but to help Vercel sell React. You can't really sell a service around SPAs: once compiled, a SPA is a single JS file that can be hosted for free anywhere. But a server-side rendered app needs a server to run. And a server is a product that can be sold. Perhaps I'm a conspiracy theorist, but I don't see another reason to break the React ecosystem like this.

Existing Apps Are Not Affected

The introduction of React Server Components, unlike the Angular.js to Angular 2 transition, is not a breaking change. Existing single-page applications will still work with the latest version of React. Existing Next.js apps built with the Pages router will also work.

So, the answer to the question "Is React having an Angular.js moment?" is "No".

But if you start a new project today, what will you pick? A robust architecture with mature tooling and ecosystem (Single-Page Apps), or the new shiny thing that the React team is pushing (Server Components)? It's a difficult choice, and people might look for alternatives rather than risking the wrong choice.

I personally think that React's vision of a single tool to meet all web developer needs is overly ambitious - or the current solution is not the right one. To me, the proof is the dropdown in the Next.js documentation that lets readers choose between the App router (Server components) and the Pages router. If a single tool offers two very different ways to do the same thing, is it really the same tool?

So to the question "Is React harming its community by being too ambitious", I think the answer is "Yes".

Conclusion

Server Components may represent progress for server-side frameworks - or at least, they could once they are ready for production. But for the broader React community, I believe they pose a risk of fragmentation and could jeopardize the momentum that React has been building over the years.

If I could express a wish, I'd like a more balanced approach from the React and Next.js teams. I'd like the React team to recognize that the Single-Page App architecture is a valid choice and that it's not going anywhere in the near future. I'd prefer to see Next.js downplay Server Components in their documentation, or at least mark it more prominently as an "alpha" feature.

Maybe I'm being grumpy (again) and it's the future. Or perhaps developers are destined to constantly shift back and forth between paradigms, and that's just the nature of the industry.


useHooks – The React Hooks Library


Announcing TypeScript 5.1


Today we’re excited to announce the release of TypeScript 5.1!

If you’re not yet familiar with TypeScript, it’s a language that builds on JavaScript by adding constructs called types. These types can describe some details about our program, and can be checked by TypeScript before they’re compiled away in order to catch possible typos, logic bugs and more. TypeScript also uses these types to provide editor tooling like code completions, refactorings, and more. In fact, if you already write JavaScript in editors like Visual Studio or VS Code, that experience is already powered up by TypeScript! You can learn more at https://typescriptlang.org/.

To get started using TypeScript, you can get it through NuGet, or more commonly through npm with the following command:

npm install -D typescript

Here’s a quick list of what’s new in TypeScript 5.1!

What’s New Since the Beta and RC?

Since the beta, we’ve corrected some of our behavior for init hooks in decorators as the proposed behavior has been adjusted. We’ve also made changes to our emit behavior under isolatedModules, ensuring that script files are not rewritten to modules. This also means that usage of the transpileModule API will also ensure script files are not interpreted as modules, as it assumes the usage of isolatedModules.

Since the RC, we’ve iterated slightly on our built-in refactorings to move declarations to existing files; however, we believe the implementation still needs some improvements. As a result, you may not be able to access it in most editors at the moment, and can only opt in through using a nightly version of TypeScript. We anticipate that TypeScript 5.2 or a future patch release of TypeScript 5.1 will re-introduce this refactoring.

Easier Implicit Returns for undefined-Returning Functions

In JavaScript, if a function finishes running without hitting a return, it returns the value undefined.

function foo() {
    // no return
}

// x = undefined
let x = foo();

However, in previous versions of TypeScript, the only functions that could have absolutely no return statements were void- and any-returning functions. That meant that even if you explicitly said "this function returns undefined" you were forced to have at least one return statement.

// ✅ fine - we inferred that 'f1' returns 'void'
function f1() {
    // no returns
}

// ✅ fine - 'void' doesn't need a return statement
function f2(): void {
    // no returns
}

// ✅ fine - 'any' doesn't need a return statement
function f3(): any {
    // no returns
}

// ❌ error!
// A function whose declared type is neither 'void' nor 'any' must return a value.
function f4(): undefined {
    // no returns
}

This could be a pain if some API expected a function returning undefined – you would need to have either at least one explicit return of undefined or a return statement and an explicit annotation.

declare function takesFunction(f: () => undefined): undefined;

// ❌ error!
// Argument of type '() => void' is not assignable to parameter of type '() => undefined'.
takesFunction(() => {
    // no returns
});

// ❌ error!
// A function whose declared type is neither 'void' nor 'any' must return a value.
takesFunction((): undefined => {
    // no returns
});

// ❌ error!
// Argument of type '() => void' is not assignable to parameter of type '() => undefined'.
takesFunction(() => {
    return;
});

// ✅ works
takesFunction(() => {
    return undefined;
});

// ✅ works
takesFunction((): undefined => {
    return;
});

This behavior was frustrating and confusing, especially when calling functions outside of one’s control. Understanding the interplay between inferring void over undefined, whether an undefined-returning function needs a return statement, etc. seems like a distraction.

First, TypeScript 5.1 now allows undefined-returning functions to have no return statement.

// ✅ Works in TypeScript 5.1!
function f4(): undefined {
    // no returns
}

// ✅ Works in TypeScript 5.1!
takesFunction((): undefined => {
    // no returns
});

Second, if a function has no return expressions and is being passed to something expecting a function that returns undefined, TypeScript infers undefined for that function’s return type.

// ✅ Works in TypeScript 5.1!
takesFunction(function f() {
    //                 ^ return type is undefined

    // no returns
});

// ✅ Works in TypeScript 5.1!
takesFunction(function f() {
    //                 ^ return type is undefined

    return;
});

To address another similar pain-point, under TypeScript’s --noImplicitReturns option, functions returning only undefined now have a similar exception to void, in that not every single code path must end in an explicit return.

// ✅ Works in TypeScript 5.1 under '--noImplicitReturns'!
function f(): undefined {
    if (Math.random()) {
        // do some stuff...
        return;
    }
}

For more information, you can read up on the original issue and the implementing pull request.

Unrelated Types for Getters and Setters

TypeScript 4.3 made it possible to say that a get and set accessor pair might specify two different types.

interface Serializer {
    set value(v: string | number | boolean);
    get value(): string;
}

declare let box: Serializer;

// Allows writing a 'boolean'
box.value = true;

// Comes out as a 'string'
console.log(box.value.toUpperCase());

Initially we required that the get type had to be a subtype of the set type. This meant that writing

box.value = box.value;

would always be valid.

However, there are plenty of existing and proposed APIs that have completely unrelated types between their getters and setters. For example, consider one of the most common examples – the style property in the DOM and CSSStyleRule API. Every style rule has a style property that is a CSSStyleDeclaration; however, if you try to write to that property, it will only work correctly with a string!

TypeScript 5.1 now allows completely unrelated types for get and set accessor properties, provided that they have explicit type annotations. And while this version of TypeScript does not yet change the types for these built-in interfaces, CSSStyleRule can now be defined in the following way:

interface CSSStyleRule {
    // ...

    /** Always reads as a `CSSStyleDeclaration` */
    get style(): CSSStyleDeclaration;

    /** Can only write a `string` here. */
    set style(newValue: string);

    // ...
}

This also allows other patterns like requiring set accessors to accept only "valid" data, but specifying that get accessors may return undefined if some underlying state hasn’t been initialized yet.

class SafeBox {
    #value: string | undefined;

    // Only accepts strings!
    set value(newValue: string) {

    }

    // Must check for 'undefined'!
    get value(): string | undefined {
        return this.#value;
    }
}

In fact, this is similar to how optional properties are checked under --exactOptionalPropertyTypes.

You can read up more on the implementing pull request.

Decoupled Type-Checking Between JSX Elements and JSX Tag Types

One pain point TypeScript had with JSX was its requirements on the type of every JSX element's tag. This release of TypeScript makes it possible for JSX libraries to more accurately describe what JSX components can return. For many, this concretely means it will be possible to use asynchronous server components in React.

For some context and background, a JSX element is either of the following:

// A self-closing JSX tag
<Foo />

// A regular element with an opening/closing tag
<Bar></Bar>

When type-checking <Foo /> or <Bar></Bar>, TypeScript always looks up a namespace called JSX and fetches a type out of it called Element. In other words, it looks for JSX.Element.

But to check whether Foo or Bar themselves are valid tag names, TypeScript would roughly just grab the types returned or constructed by Foo or Bar and check for compatibility with JSX.Element (or another type called JSX.ElementClass if the type is constructable).

The limitations here meant that components could not be used if they returned or if they "rendered" a more broad type than just JSX.Element. For example, a JSX library might be fine with a component returning strings or Promises.

As a more concrete example, future versions of React have proposed limited support for components that return Promises, but existing versions of TypeScript cannot express that without someone drastically loosening the type of JSX.Element.

import * as React from "react";

async function Foo() {
    return <div></div>;
}

let element = <Foo />;
//             ~~~
// 'Foo' cannot be used as a JSX component.
//   Its return type 'Promise<Element>' is not a valid JSX element.

To provide libraries with a way to express this, TypeScript 5.1 now looks up a type called JSX.ElementType. ElementType specifies precisely what is valid to use as a tag in a JSX element. So it might be typed today as something like

namespace JSX {
    export type ElementType =
        // All the valid lowercase tags
        | keyof IntrinsicAttributes
        // Function components
        | ((props: any) => Element)
        // Class components
        | (new (props: any) => ElementClass);

    export interface IntrinsicAttributes extends /*...*/ {}
    export type Element = /*...*/;
    export type ElementClass = /*...*/;
}

We’d like to extend our thanks to Sebastian Silbermann who contributed this change!

Namespaced JSX Attributes

TypeScript now supports namespaced attribute names when using JSX.

import * as React from "react";

// Both of these are equivalent:
const x = <Foo a:b="hello" />;
const y = <Foo a : b="hello" />;

interface FooProps {
    "a:b": string;
}

function Foo(props: FooProps) {
    return <div>{props["a:b"]}</div>;
}

Namespaced tag names are looked up in a similar way on JSX.IntrinsicElements when the first segment of the name is a lowercase name.

// In some library's code or in an augmentation of that library:
namespace JSX {
    interface IntrinsicElements {
        ["a:b"]: { prop: string };
    }
}

// In our code:
let x = <a:b prop="hello!" />;

This contribution was provided thanks to Oleksandr Tarasiuk.

typeRoots Are Consulted In Module Resolution

When TypeScript’s specified module lookup strategy is unable to resolve a path, it will now resolve packages relative to the specified typeRoots.

See this pull request for more details.

Linked Cursors for JSX Tags

TypeScript now supports linked editing for JSX tag names. Linked editing (occasionally called "mirrored cursors") allows an editor to edit multiple locations at the same time automatically.

An example of JSX tags with linked editing modifying a JSX fragment and a div element.

This new feature should work in both TypeScript and JavaScript files, and can be enabled in Visual Studio Code Insiders. In Visual Studio Code, you can either edit the Editor: Linked Editing option in the Settings UI:

Visual Studio Code's "Editor: Linked Editing" option

or configure editor.linkedEditing in your JSON settings file:

{
    // ...
    "editor.linkedEditing": true,
}

This feature will also be supported by Visual Studio 17.7 Preview 1.

You can take a look at our implementation of linked editing here!

Snippet Completions for @param JSDoc Tags

TypeScript now provides snippet completions when typing out a @param tag in both TypeScript and JavaScript files. This can help cut down on some typing and jumping around text as you document your code or add JSDoc types in JavaScript.

An example of completing JSDoc comments on an 'add' function.
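
For example, after accepting the snippet completion and filling in the placeholders, a documented JavaScript function might end up looking like this (a plain sketch, not taken from the release notes):

/**
 * Adds two numbers together.
 * @param {number} a - the first addend
 * @param {number} b - the second addend
 * @returns {number} the sum of both arguments
 */
function add(a, b) {
    return a + b;
}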

You can check out how this new feature was implemented on GitHub.

Optimizations

Avoiding Unnecessary Type Instantiation

TypeScript 5.1 now avoids performing type instantiation within object types that are known not to contain references to outer type parameters. This has the potential to cut down on many unnecessary computations, and reduced the type-checking time of material-ui’s docs directory by over 50%.

You can see the changes involved for this change on GitHub.

Negative Case Checks for Union Literals

When checking if a source type is part of a union type, TypeScript will first do a fast look-up using an internal type identifier for that source. If that look-up fails, then TypeScript checks for compatibility against every type within the union.

When relating a literal type to a union of purely literal types, TypeScript can now avoid that full walk against every other type in the union. This assumption is safe because TypeScript always interns/caches literal types – though there are some edge cases to handle relating to "fresh" literal types.

This optimization was able to reduce the type-checking time of the code in this issue from about 45 seconds to about 0.4 seconds.

Reduced Calls into Scanner for JSDoc Parsing

When older versions of TypeScript parsed out a JSDoc comment, they would use the scanner/tokenizer to break the comment into fine-grained tokens and piece the contents back together. This could be helpful for normalizing comment text, so that multiple spaces would just collapse into one; but it was extremely "chatty" and meant the parser and scanner would jump back and forth very often, adding overhead to JSDoc parsing.

TypeScript 5.1 has moved more logic around breaking down JSDoc comments into the scanner/tokenizer. The scanner now returns larger chunks of content directly to the parser to do as it needs.

These changes have brought down the parse time of several 10Mb mostly-prose-comment JavaScript files by about half. For a more realistic example, our performance suite’s snapshot of xstate dropped about 300ms of parse time, making it faster to load and analyze.

Breaking Changes

ES2020 and Node.js 14.17 as Minimum Runtime Requirements

TypeScript 5.1 now ships JavaScript functionality that was introduced in ECMAScript 2020. As a result, at minimum TypeScript must be run in a reasonably modern runtime. For most users, this means TypeScript now only runs on Node.js 14.17 and later.

If you try running TypeScript 5.1 under an older version of Node.js such as Node 10 or 12, you may see an error like the following from running either tsc.js or tsserver.js:

node_modules/typescript/lib/tsserver.js:2406
  for (let i = startIndex ?? 0; i < array.length; i++) {
                           ^
 
SyntaxError: Unexpected token '?'
    at wrapSafe (internal/modules/cjs/loader.js:915:16)
    at Module._compile (internal/modules/cjs/loader.js:963:27)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1027:10)
    at Module.load (internal/modules/cjs/loader.js:863:32)
    at Function.Module._load (internal/modules/cjs/loader.js:708:14)
    at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:60:12)
    at internal/main/run_main_module.js:17:47

Additionally, if you try installing TypeScript you’ll get something like the following error messages from npm:

npm WARN EBADENGINE Unsupported engine {
npm WARN EBADENGINE   package: 'typescript@5.1.3',
npm WARN EBADENGINE   required: { node: '>=14.17' },
npm WARN EBADENGINE   current: { node: 'v12.22.12', npm: '8.19.2' }
npm WARN EBADENGINE }

from Yarn:

error typescript@5.1.3: The engine "node" is incompatible with this module. Expected version ">=14.17". Got "12.22.12"
error Found incompatible module.

See more information around this change here.

Explicit typeRoots Disables Upward Walks for node_modules/@types

Previously, when the typeRoots option was specified in a tsconfig.json but resolution to any typeRoots directories had failed, TypeScript would still continue walking up parent directories, trying to resolve packages within each parent’s node_modules/@types folder.

This behavior could prompt excessive look-ups and has been disabled in TypeScript 5.1. As a result, you may begin to see errors like the following based on entries in your tsconfig.json's types option or /// <reference > directives:

error TS2688: Cannot find type definition file for 'node'.
error TS2688: Cannot find type definition file for 'mocha'.
error TS2688: Cannot find type definition file for 'jasmine'.
error TS2688: Cannot find type definition file for 'chai-http'.
error TS2688: Cannot find type definition file for 'webpack-env'.

The solution is typically to add specific entries for node_modules/@types to your typeRoots:

{
    "compilerOptions": {
        "types": [
            "node",
            "mocha"
        ],
        "typeRoots": [
            // Keep whatever you had around before.
            "./some-custom-types/",

            // You might need your local 'node_modules/@types'.
            "./node_modules/@types",

            // You might also need to specify a shared 'node_modules/@types'
            // if you're using a "monorepo" layout.
            "../../node_modules/@types",
        ]
    }
}

More information is available on the original change on our issue tracker.

What’s Next

Our team is already hard at work on TypeScript 5.2, and you can read the specifics on the TypeScript 5.2 Iteration Plan. In addition to planned work items, this iteration plan describes target release dates which you can use for your own plans. The best way to play with what's next is to try a nightly build of TypeScript, and use the nightly editing experience too.

But don’t feel rushed to jump ahead! We hope you enjoy TypeScript 5.1, and that this release makes coding a joy for you.

Happy Hacking!

– Daniel Rosenwasser and the TypeScript Team



Introducing the popover API


Popovers are everywhere on the web. You can see them in menus, toggletips, and dialogs, which could manifest as account settings, disclosure widgets, and product card previews. Despite how prevalent these components are, building them in browsers is still surprisingly cumbersome. You need to add scripting to manage focus, open and close states, accessible hooks into the components, keyboard bindings to enter and exit the experience, and that’s all even before you start building the useful, unique, core functionality of your popover.

To resolve this, a new set of declarative HTML APIs for building popovers is coming to browsers, starting with the popover API in Chromium 114.

The popover attribute

Browser support: Chrome 114 (supported), Edge 114 (supported), Safari (Technology Preview), Firefox (not supported).

Rather than managing all of the complexity yourself, you can let the browser handle it with the popover attribute and subsequent set of features. HTML popovers support:

  • Promotion to the top layer. Popovers will appear on a separate layer above the rest of the page, so you don’t have to futz around with z-index.
  • Light-dismiss functionality. Clicking outside of the popover area will close the popover and return focus.
  • Default focus management. Opening the popover makes the next tab stop inside the popover.
  • Accessible keyboard bindings. Hitting the esc key will close the popover and return focus.
  • Accessible component bindings. Connecting a popover element to a popover trigger semantically.

You can now build popovers with all of these features without using JavaScript. A basic popover requires three things:

  1. A popover attribute on the element containing the popover.
  2. An id on the element containing the popover.
  3. A popovertarget with the value of the popover's id on the element that opens the popover.

<button popovertarget="my-popover"> Open Popover </button>

<div id="my-popover" popover>
<p>I am a popover with more information.</p>
</div>

Now you have a fully-functional basic popover.

This popover could be used to convey additional information or as a disclosure widget.

Defaults and overrides

By default, such as in the previous code snippet, setting up a popover with a popovertarget means the button or element that opens the popover will toggle it open and closed. However, you can also create explicit popovers using popovertargetaction. This overrides the default toggle action. popovertargetaction options include:

popovertargetaction="show": Shows the popover. popovertargetaction="hide": Hides the popover.

Using popovertargetaction="hide", you can create a “close” button within a popover, as in the following snippet:

<button popovertarget="my-popover" popovertargetaction="hide">
<span aria-hidden="true"></span>
<span class="sr-only">Close</span>
</button>

Auto versus manual popovers

Using the popover attribute on its own is actually a shortcut for popover="auto". When opened, the default popover will force close other auto popovers, except for ancestor popovers. It can be dismissed via light-dismiss or a close button.

On the other hand, setting popover=manual creates another type of popover: a manual popover. These do not force close any other element type and do not close via light-dismiss. You must close them via a timer or explicit close action. Types of popovers appropriate for popover=manual are elements which appear and disappear, but shouldn't affect the rest of the page, such as a toast notification.

If you explore the demo above, you can see that clicking outside of the popover area doesn't light-dismiss the popover. Additionally, if there were other popovers open, they wouldn't close.

To review the differences:

Popovers with popover=auto:

  • When opened, force-close other popovers.
  • Can light-dismiss.

Popovers with popover=manual:

  • Do not force close any other element type.
  • Do not light-dismiss. Close them using a toggle or close action.
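
As a sketch of the toast use case described above (the element id and the timeout are made up here), a manual popover can be shown and hidden from script with the showPopover() and hidePopover() methods:

<div id="toast" popover="manual">Saved!</div>

<script>
  const toast = document.getElementById('toast');
  // Manual popovers never light-dismiss, so close the toast ourselves after 3 seconds.
  toast.showPopover();
  setTimeout(() => toast.hidePopover(), 3000);
</script>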

Styling popovers

So far you've learned about basic popovers in HTML. But there are also some nice styling features that come with popover. One of those is the ability to style ::backdrop.

In auto popovers, this is a layer directly beneath the top layer (where the popover lives), which sits above the rest of the page. In the following example, the ::backdrop is given a semi-transparent color:

#size-guide::backdrop {
background: rgb(190 190 190 / 50%);
}

By default, popovers get a 2px border and are positioned in the center of the UI, but they are fully customizable! You can style a popover just like any other HTML element: you can change its size, background, position on the page, and so on.

The difference between a popover and a dialog

It's important to note that the popover attribute does not provide semantics on its own. And while you can now build modal dialog-like experiences using popover="auto", there are a few key differences between the two:

  • A dialog element opened with dialog.showModal (a modal dialog) requires explicit user interaction to close it.
  • A popover supports light-dismiss; a modal dialog does not.
  • A modal dialog makes the rest of the page inert; a popover does not.

The above demo is a semantic dialog with popover behavior. This means that the rest of the page is not inert and that the dialog popover does get light-dismiss behavior. You can build this dialog with popover behavior using the following code:

<button popovertarget="candle-01">
Quick Shop
</button>
<dialog popover id="candle-01">
<button class="close-btn" popovertarget="candle-01" popovertargetaction="hide">...</button>
<div class="product-preview-container">
...
</div>
</dialog>

The WHATWG and the Open UI Community Group are currently discussing the ability to open a dialog element with HTML ergonomics. This would be similar to popover, but retain the dialog features listed previously, such as making the rest of the page inert. Watch these groups for the future of popover, dialog, and new elements like selectmenu.

Coming soon

Interactive entry and exit

The ability to animate discrete properties, including animating to and from display: none and into and out of the top layer, is not yet available in browsers. However, it is planned for an upcoming version of Chromium, closely following this release.

With the ability to animate discrete properties, and using :popover-open and @starting-style, you'll be able to set up before-change and after-change styles to enable smooth transitions when opening and closing popovers. Take the previous example. Animating it in and out looks much smoother and supports a more fluid user experience:
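
As a rough sketch only, since the syntax is still settling (see the note below), the before-change and after-change styles could look something like this:

[popover] {
  opacity: 0;
  transition: opacity 0.3s, display 0.3s allow-discrete;
}

/* After-change styles: the open state */
[popover]:popover-open {
  opacity: 1;
}

/* Before-change styles for the entry transition */
@starting-style {
  [popover]:popover-open {
    opacity: 0;
  }
}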

The implementation for this is currently in flux, but click through to the codepen demo for the latest syntax and try it out with the #experimental-web-platform-features flag turned on in Chrome Canary.

Anchor positioning

Popovers are great when you want to position an alert, modal, or notification based on the viewport. But popovers are also useful for menus, tooltips, and other elements that need to be positioned relative to other elements. This is where CSS anchoring comes in.

The following radial menu demo uses the popover API along with CSS anchor positioning to ensure that the popover #menu-items is always anchored to its toggle trigger, the #menu-toggle button.

Setting up anchors is similar to setting up popovers:

<button id="menu-toggle" popovertarget="menu-items">
Open Menu
</button>
<ul id="menu-items" popover anchor="menu-toggle">
<li class="item">...</li>
<li class="item">...</li>
</ul>

You set up an anchor by giving it an id (in this example, #menu-toggle), and then use anchor="menu-toggle" to connect the two elements. Now, you can use anchor() to style the popover. A centered popover menu that is anchored to the baseline of the anchor toggle might be styled as follows:

#menu-items {
bottom: calc(anchor(bottom));
left: anchor(center);
translate: -50% 0;
}

Now you have a fully-functional popover menu that is anchored to the toggle button and has all of the built-in features of popover, no JavaScript required!

There are even more exciting new features of CSS anchoring, such as @try statements to swap the position of the menu based on its available viewport space. This implementation is subject to change. Explore the Codepen demo above with the #experimental-web-platform-features flag turned on in Chrome Canary for more.

Conclusion

The popover API is the first step in a series of new capabilities to make building web applications easier to manage and more accessible by default. I'm excited to see how you use popovers!

Additional Reading


PostgreSQL: Don't Do This


A short list of common mistakes.

  • Kristian Dupont provides schemalint, a tool to verify the database schema against those recommendations.

Database Encoding

Don't use SQL_ASCII

Why not?

SQL_ASCII means "no conversions" for the purpose of all encoding conversion functions. That is to say, the original bytes are simply treated as being in the new encoding, subject to validity checks, without any regard for what they mean. Unless extreme care is taken, an SQL_ASCII database will usually end up storing a mixture of many different encodings with no way to recover the original characters reliably.

When should you?

If your input data is already in a hopeless mixture of unlabelled encodings, such as IRC channel logs or non-MIME-compliant emails, then SQL_ASCII might be useful as a last resort—but consider using bytea first instead, or whether you could autodetect UTF8 and assume non-UTF8 data is in some specific encoding such as WIN1252.

Tool usage

Don't use psql -W or --password

Don't use psql -W or psql --password.

Why not?

Using the --password or -W flags will tell psql to prompt you for a password, before trying to connect to the server - so you'll be prompted for a password even if the server doesn't require one.

It's never required, as if the server does require a password psql will prompt you for one, and it can be very confusing when setting up permissions. If you're connecting with -W to a server configured to allow you access via peer authentication you may think that it's requiring a password when it really isn't. And if the user you're logging in as doesn't have a password set or you enter the wrong password at the prompt you'll still be logged in and think you have the right password - but you won't be able to log in from other clients (that connect via localhost) or when logged in as other users.

When should you?

Never, pretty much. It will save a round trip to the server but that's about it.

Don't use rules

Don't use rules. If you think you want to, use a trigger instead.

Why not?

Rules are incredibly powerful, but they don't do what they look like they do. They look like they're some conditional logic, but they actually rewrite a query to modify it or add additional queries to it.

That means that all non-trivial rules are incorrect.

Depesz has more to say about them.

When should you?

Never. While the rewriter is an implementation detail of VIEWs, there is no reason to pry up this cover plate directly.

Don't use table inheritance

Don't use table inheritance. If you think you want to, use foreign keys instead.

Why not?

Table inheritance was a part of a fad wherein the database was closely coupled to object-oriented code. It turned out that coupling things that closely didn't actually produce the desired results.

When should you?

Never …almost. Now that table partitioning is done natively, that common use case for table inheritance has been replaced by a native feature that handles tuple routing, etc., without bespoke code.

One of the very few exceptions would be the temporal_tables extension if you are in a pinch and want to use that for row versioning in place of the lacking SQL:2011 support. Table inheritance will provide a small shortcut instead of using UNION ALL to get both historical as well as current rows. Even then you ought to be wary of caveats while working with the parent table.

SQL constructs

Don't use NOT IN

Don't use NOT IN, or any combination of NOT and IN such as NOT (x IN (select…)).

(If you think you wanted NOT IN (select …) then you should rewrite to use NOT EXISTS instead.)
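
A minimal sketch of that rewrite, using the same foo and bar tables as the examples below:

-- Instead of: SELECT * FROM foo WHERE col NOT IN (SELECT x FROM bar);
SELECT *
FROM foo
WHERE NOT EXISTS (SELECT 1 FROM bar WHERE bar.x = foo.col);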

Why not?

Two reasons:

1. NOT IN behaves in unexpected ways if there is a null present:

select * from foo where col not in (1,null); -- always returns 0 rows
select * from foo where col not in (select x from bar);
  -- returns 0 rows if any value of bar.x is null

This happens because col IN (1,null) returns TRUE if col=1, and NULL otherwise (i.e. it can never return FALSE). Since NOT (TRUE) is FALSE, but NOT (NULL) is still NULL, there is no way that NOT (col IN (1,null)) (which is the same thing as col NOT IN (1,null)) can return TRUE under any circumstances.

2. Because of point 1 above, NOT IN (SELECT ...) does not optimize very well. In particular, the planner can't transform it into an anti-join, and so it becomes either a hashed Subplan or a plain Subplan. The hashed subplan is fast, but the planner only allows that plan for small result sets; the plain subplan is horrifically slow (in fact O(N²)). This means that the performance can look good in small-scale tests but then slow down by 5 or more orders of magnitude once a size threshold is crossed; you do not want this to happen.

When should you?

NOT IN (list,of,values,...) is mostly safe unless you might have a null in the list (via a parameter or otherwise). So it's sometimes natural and even advisable to use it when excluding specific constant values from a query result.

Don't use upper case table or column names

Don't use NamesLikeThis, use names_like_this.

Why not?

PostgreSQL folds all names - of tables, columns, functions and everything else - to lower case unless they're "double quoted".

So create table Foo() will create a table called foo, while create table "Bar"() will create a table called Bar.

These select commands will work: select * from Foo, select * from foo, select * from "Bar".

These will fail with "no such table": select * from "Foo", select * from Bar, select * from bar.

This means that if you use uppercase characters in your table or column names you have to either always double quote them or never double quote them. That's annoying enough by hand, but when you start using other tools to access the database, some of which always quote all names and some don't, it gets very confusing.

Stick to using a-z, 0-9 and underscore for names and you never have to worry about quoting them.

When should you?

If it's important that "pretty" names are displaying in report output then you might want to use them. But you can also use column aliases to use lower case names in a table and still get pretty names in the output of a query: select character_name as "Character Name" from foo.

Don't use BETWEEN (especially with timestamps)

Why not?

BETWEEN uses a closed-interval comparison: the values of both ends of the specified range are included in the result.

This is a particular problem with queries of the form

SELECT * FROM blah WHERE timestampcol BETWEEN '2018-06-01' AND '2018-06-08'

This will include results where the timestamp is exactly 2018-06-08 00:00:00.000000, but not timestamps later in that same day. So the query might seem to work, but as soon as you get an entry exactly on midnight, you'll end up double-counting it.

Instead, do:

SELECT * FROM blah WHERE timestampcol >= '2018-06-01' AND timestampcol < '2018-06-08'

When should you?

BETWEEN is safe for discrete quantities like integers or dates, as long as you remember that both ends of the range are included in the result. But it's a bad habit to get into.

Date/Time storage

Don't use timestamp (without time zone)

Don't use the timestamp type to store timestamps, use timestamptz (also known as timestamp with time zone) instead.

Why not?

timestamptz records a single moment in time. Despite what the name says it doesn't store a timestamp, just a point in time described as the number of microseconds since January 1st, 2000 in UTC. You can insert values in any timezone and it'll store the point in time that value describes. By default it will display times in your current timezone, but you can use at time zone to display it in other time zones.

Because it stores a point in time it will do the right thing with arithmetic involving timestamps entered in different timezones - including between timestamps from the same location on different sides of a daylight savings time change.

timestamp (also known as timestamp without time zone) doesn't do any of that, it just stores a date and time you give it. You can think of it being a picture of a calendar and a clock rather than a point in time. Without additional information - the timezone - you don't know what time it records. Because of that, arithmetic between timestamps from different locations or between timestamps from summer and winter may give the wrong answer.

So if what you want to store is a point in time, rather than a picture of a clock, use timestamptz.

More about timestamptz.

When should you?

If you're dealing with timestamps in an abstract way, or just saving and retrieving them from an app, where you aren't going to be doing arithmetic with them then timestamp might be suitable.

Don't use timestamp (without time zone) to store UTC times

Storing UTC values in a timestamp without time zone column is, unfortunately, a practice commonly inherited from other databases that lack usable timezone support.

Use timestamp with time zone instead.

Why not?

Because there is no way for the database to know that UTC is the intended timezone for the column values.

This complicates many otherwise useful time calculations. For example, "last midnight in the timezone given by u.timezone" becomes this:

date_trunc('day', now() AT TIME ZONE u.timezone) AT TIME ZONE u.timezone AT TIME ZONE 'UTC'

And "the midnight prior to x.datecol in u.timezone" becomes this:

date_trunc('day', x.datecol AT TIME ZONE 'UTC' AT TIME ZONE u.timezone)
  AT TIME ZONE u.timezone AT TIME ZONE 'UTC'

When should you?

If compatibility with non-timezone-supporting databases trumps all other considerations.

Don't use timetz

Don't use the timetz type. You probably want timestamptz instead.

Why not?

Even the manual tells you it's only implemented for SQL compliance.

The type time with time zone is defined by the SQL standard, but the definition exhibits properties which lead to questionable usefulness. In most cases, a combination of date, time, timestamp without time zone, and timestamp with time zone should provide a complete range of date/time functionality required by any application.

When should you?

Never.

Don't use CURRENT_TIME

Don't use the CURRENT_TIME function. Use whichever of these is appropriate:

  • CURRENT_TIMESTAMP or now() if you want a timestamp with time zone,
  • LOCALTIMESTAMP if you want a timestamp without time zone,
  • CURRENT_DATE if you want a date,
  • LOCALTIME if you want a time

Why not?

It returns a value of type timetz, for which see the previous entry.

When should you?

Never.

Don't use timestamp(0) or timestamptz(0)

Don't use a precision specification, especially not 0, for timestamp columns or casts to timestamp.

Use date_trunc('second', blah) instead.

Why not?

Because it rounds off the fractional part rather than truncating it as everyone would expect. This can cause unexpected issues; consider that when you store now() into such a column, you might be storing a value half a second in the future.
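
A quick illustration of the difference (output shown assuming a UTC session time zone):

SELECT '2023-01-01 12:00:00.6+00'::timestamptz(0);
-- 2023-01-01 12:00:01+00   (rounded up, half a second into the future)

SELECT date_trunc('second', '2023-01-01 12:00:00.6+00'::timestamptz);
-- 2023-01-01 12:00:00+00   (truncated, as you probably expected)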

When should you?

Never.

Text storage

Don't use char(n)

Don't use the type char(n). You probably want text.

Why not?

Any string you insert into a char(n) field will be padded with spaces to the declared width. That's probably not what you actually want.

The manual says:

Values of type character are physically padded with spaces to the specified width n, and are stored and displayed that way. However, trailing spaces are treated as semantically insignificant and disregarded when comparing two values of type character. In collations where whitespace is significant, this behavior can produce unexpected results; for example SELECT 'a '::CHAR(2) collate "C" < E'a\n'::CHAR(2) returns true, even though C locale would consider a space to be greater than a newline. Trailing spaces are removed when converting a character value to one of the other string types. Note that trailing spaces are semantically significant in character varying and text values, and when using pattern matching, that is LIKE and regular expressions.

That should scare you off it.

The space-padding does waste space, but doesn't make operations on it any faster; in fact the reverse, thanks to the need to strip spaces in many contexts.

It's important to note that from a storage point of view char(n) is not a fixed-width type. The actual number of bytes varies since characters may take more than one byte, and the stored values are therefore treated as variable-length anyway (even though the space padding is included in the storage).

When should you?

When you're porting very, very old software that uses fixed width fields. Or when you read the snippet from the manual above and think "yes, that makes perfect sense and is a good match for my requirements" rather than gibbering and running away.

Don't use char(n) even for fixed-length identifiers

Sometimes people respond to "don't use char(n)" with "but my values must always be exactly N characters long" (e.g. country codes, hashes, or identifiers from some other system). It is still a bad idea to use char(n) even in these cases.

Use text, or a domain over text, with CHECK(length(VALUE)=3) or CHECK(VALUE ~ '^[[:alpha:]]{3}$') or similar.
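
For example, a three-letter country code could be declared as a domain over text roughly like this (the venues table is just for illustration):

CREATE DOMAIN country_code AS text
    CHECK (VALUE ~ '^[[:alpha:]]{3}$');

CREATE TABLE venues (
    name    text NOT NULL,
    country country_code NOT NULL
);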

Why not?

Because char(n) doesn't reject values that are too short, it just silently pads them with spaces. So there's no actual benefit over using text with a constraint that checks for the exact length. As a bonus, such a check can also verify that the value is in the correct format.

Remember, there is no performance benefit whatsoever to using char(n) over varchar(n). In fact the reverse is true. One particular problem that comes up is that if you try and compare a char(n) field against a parameter where the driver has explicitly specified a type of text or varchar, you may be unexpectedly unable to use an index for the comparison. This can be hard to debug since it doesn't show up on manual queries.

When should you?

Never.

Don't use varchar(n) by default

Don't use the type varchar(n) by default. Consider varchar (without the length limit) or text instead.

Why not?

varchar(n) is a variable width text field that will throw an error if you try and insert a string longer than n characters (not bytes) into it.

varchar (without the (n)) or text are similar, but without the length limit. If you insert the same string into the three field types they will take up exactly the same amount of space, and you won't be able to measure any difference in performance.

If what you really need is a text field with a length limit then varchar(n) is great, but if you pick an arbitrary length and choose varchar(20) for a surname field you're risking production errors in the future when Hubert Blaine Wolfeschlegelsteinhausenbergerdorff signs up for your service.

Some databases don't have a type that can hold arbitrary long text, or if they do it's not as convenient or efficient or well-supported as varchar(n). Users from those databases will often use something like varchar(255) when what they really want is text.

If you need to constrain the value in a field you probably need something more specific than a maximum length - maybe a minimum length too, or a limited set of characters - and a check constraint can do all of those things as well as a maximum string length.

When should you?

When you want to, really. If what you want is a text field that will throw an error if you insert too long a string into it, and you don't want to use an explicit check constraint then varchar(n) is a perfectly good type. Just don't use it automatically without thinking about it.

Also, the varchar type is in the SQL standard, unlike the text type, so it might be the best choice for writing super-portable applications.

Other data types

Don't use money

The money data type isn't actually very good for storing monetary values. Numeric, or (rarely) integer may be better.

Why not?

lots of reasons.

It's a fixed-point type, implemented as a machine int, so arithmetic with it is fast. But it doesn't handle fractions of a cent (or equivalents in other currencies), and its rounding behaviour is probably not what you want.

It doesn't store a currency with the value, rather assuming that all money columns contain the currency specified by the database's lc_monetary locale setting. If you change the lc_monetary setting for any reason, all money columns will contain the wrong value. That means that if you insert '$10.00' while lc_monetary is set to 'en_US.UTF-8' the value you retrieve may be '10,00 Lei' or '¥1,000' if lc_monetary is changed.

Storing a value as a numeric, possibly with the currency being used in an adjacent column, might be better.
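
A sketch of that layout (the table and column names are just for illustration):

CREATE TABLE payments (
    amount   numeric(12, 2) NOT NULL,
    currency text NOT NULL CHECK (currency ~ '^[A-Z]{3}$')
);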

When should you?

If you're only working in a single currency, aren't dealing with fractional cents and are only doing addition and subtraction then money might be the right thing.

Don't use serial

For new applications, identity columns should be used instead.
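
For example (the table is hypothetical), an identity column is declared like this instead of id serial PRIMARY KEY:

CREATE TABLE orders (
    id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    created_at timestamptz NOT NULL DEFAULT now()
);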

Why not?

The serial types have some weird behaviors that make schema, dependency, and permission management unnecessarily cumbersome.

When should you?

  • If you need support for PostgreSQL older than version 10.
  • In certain combinations with table inheritance (but see there)
  • More generally, if you somehow use the same sequence for multiple tables, although in those cases an explicit declaration might be preferable over the serial types.

Authentication

Don't use trust authentication over TCP/IP (host, hostssl)

Don't use trust authentication over any TCP/IP method (e.g. host, hostssl) in any production environment.

Especially do not set a line like this in your pg_hba.conf file:

host all all 0.0.0.0/0 trust

which allows anyone on the Internet to authenticate as any PostgreSQL user in your cluster, including the PostgreSQL superuser.

There is a list of authentication methods you can choose that are better for establishing a remote connection to PostgreSQL. It is fairly easy to set up a password based authentication method, the recommendation being scram-sha-256 that is available in PostgreSQL 10 and above.

Why not?

The manual says:

trust authentication is only suitable for TCP/IP connections if you trust every user on every machine that is allowed to connect to the server by the pg_hba.conf lines that specify trust. It is seldom reasonable to use trust for any TCP/IP connections other than those from localhost (127.0.0.1).

With trust authentication, any user can claim to be any other user and PostgreSQL will trust that assertion. This means that someone can claim to be the postgres superuser account and PostgreSQL will accept that claim and allow them to log in.

To take this a step further, it is also not a good idea to allow trust authentication to be used on local UNIX socket connections in a production environment, as anyone with access to the instance running PostgreSQL could log in as any user.

When should you?

The short answer is never.

The longer answer is there are a few scenarios where trust authentication may be appropriate:

  • Running tests against a PostgreSQL server as part of a CI/CD job that is on a trusted network
  • Working on your local development machine, but only allowing TCP/IP connections over localhost

but you should see if any of the alternative methods work better for you. For example, on UNIX-based systems, you can connect to your local development environment using peer authentication.


Halftone QR Codes


Posted 12/19/17

I recently encountered a very neat encoding technique for embedding images into Quick Response Codes, like so:

Halftone QR Code Example

A full research paper on the topic can be found here, but the core of the algorithm is actually very simple:

  1. Generate the QR code with the data you want

  2. Dither the image you want to embed, creating a black and white approximation at the appropriate size

  3. Triple the size of the QR code, such that each QR block is now represented by a grid of 9 pixels

  4. Set the 9 pixels to values from the dithered image

  5. Set the middle of the 9 pixels to whatever the color of the QR block was supposed to be

  6. Redraw the required control blocks on top in full detail, to make sure scanners identify the presence of the code

That’s it! Setting the middle pixel of each cluster of 9 generally lets QR readers get the correct value for the block, and gives you 8 pixels to represent an image with. Occasionally a block will be misread, but the QR standard includes lots of redundant checksumming blocks to repair damage automatically, so the correct data will almost always be recoverable.
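
A minimal JavaScript sketch of steps 3 to 5, assuming qr is a square 2D array of 0/1 modules and dither is a 0/1 image already sized to three times the QR dimensions (both inputs are hypothetical here):

function halftone(qr, dither) {
    const size = qr.length * 3;
    // Start from the dithered image...
    const out = Array.from({ length: size }, (_, y) =>
        Array.from({ length: size }, (_, x) => dither[y][x])
    );
    // ...then force the centre pixel of each 3x3 cluster back to the QR module value
    for (let by = 0; by < qr.length; by++) {
        for (let bx = 0; bx < qr.length; bx++) {
            out[by * 3 + 1][bx * 3 + 1] = qr[by][bx];
        }
    }
    return out; // step 6 (redrawing the control blocks) is left out of this sketch
}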

There is a reference implementation in JavaScript of the algorithm I’ve described. I have extended that code so that when a pixel on the original image is transparent the corresponding pixel of the final image is filled in with QR block data instead of dither data. The result is that the original QR code “bleeds in” to any space unused by the image, so you get this:

Halftone QR with background bleed

Instead of this:

Halftone QR without background bleed

This both makes the code scan more reliably and makes it more visually apparent to a casual observer that they are looking at a QR code.

The original researchers take this approach several steps further, and repeatedly perturb the dithered image to get a result that both looks better and scans more reliably. They also create an "importance matrix" to help determine which features of the image are most critical and should be prioritized in the QR rendering. Their code can be found here, but be warned that it's a mess of C++ with Boost written for Microsoft's Visual Studio on Windows, and I haven't gotten it running. While their enhancements yield a marked improvement in image quality, I wish to forgo the tremendous complexity increase necessary to implement them.
