After almost 18 months of development, comprising thousands of commits from dozens of contributors, Svelte 5 is finally stable.
It’s the most significant release in the project’s history. Svelte 5 is a ground-up rewrite: your apps will be faster, smaller and more reliable. You’ll be able to write more consistent and idiomatic code. For newcomers to the framework, there’s less stuff to learn.
Despite all that, Svelte 5 is almost completely backwards-compatible with Svelte 4 — for the majority of users, the initial upgrade will be completely seamless:
{
  "devDependencies": {
-   "@sveltejs/vite-plugin-svelte": "^3.0.0",
-   "svelte": "^4",
+   "@sveltejs/vite-plugin-svelte": "^4.0.0",
+   "svelte": "^5",
    // …
  }
}
Svelte is a framework for building user interfaces on the web. It uses a compiler to convert declarative component code, based on HTML, CSS and JavaScript, into tightly optimised JavaScript.
Because the compiler shifts a lot of the work out of the browser and into npm run build, Svelte apps are small and fast. But beyond that, Svelte is designed to be an enjoyable and intuitive way to build apps: it prioritises getting stuff done.
The team behind Svelte also maintains SvelteKit, an application framework that handles routing and data loading and server-side rendering and all the gory details that go into building modern websites and apps.
For one thing, we’ve overhauled our website. You can read more about that here.
As for Svelte itself, we’ll cover the why first. We’re not fans of change for change’s sake — in fact, Svelte changed less than any other major framework between 2019 (when we launched Svelte 3) and now, which is an eon in front end development. And people really liked Svelte 3 and 4 — it routinely tops developer satisfaction surveys.
So when we make changes, we don’t make them lightly.
With more and more people building more and bigger applications with Svelte, the limitations of some of our original design decisions started to become more apparent. For example, in Svelte 4, reactivity is driven entirely by the compiler. If you change a single property of a reactive object in Svelte 4, the entire object is invalidated, because that’s all the compiler can realistically do. Meanwhile, other frameworks have adopted fine-grained reactivity based on signals, leapfrogging Svelte’s performance.
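The signals idea can be sketched in a few lines of plain JavaScript. This is a simplified illustration, not Svelte’s actual implementation: reading a signal while an effect is running registers that effect as a subscriber, so changing one signal re-runs only the effects that actually read it.

```javascript
// Simplified signals sketch (illustration only, not Svelte's internals).
// Reading a signal inside an effect records a dependency; writing a
// signal re-runs only the effects that actually read it.
let currentEffect = null;

function signal(value) {
  const subscribers = new Set();
  return {
    get() {
      if (currentEffect) subscribers.add(currentEffect);
      return value;
    },
    set(next) {
      value = next;
      for (const fn of [...subscribers]) fn();
    },
  };
}

function effect(fn) {
  currentEffect = fn;
  fn();
  currentEffect = null;
}

const first = signal(1);
const second = signal(2);
let runs = 0;

effect(() => {
  first.get(); // this effect depends only on `first`
  runs += 1;
});

second.set(3); // no effect reads `second`, so nothing re-runs
first.set(4);  // re-runs the effect; `runs` is now 2
```

In Svelte 5 the compiler wires up this kind of bookkeeping for you, so changing one property invalidates only the things that read it, rather than the whole object.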
Equally, component composition is more awkward in Svelte 4 than it should be, largely because it treats event handlers and ‘slotted content’ as separate concepts, distinct from the props that are passed to components. This is because in 2019 it seemed likely that web components would become the primary distribution mechanism for components, and we wanted to align with the platform. This was a mistake.
And while the $: construct for reactively re-running statements is a neat trick, it turned out to be a footgun. It conflated two concepts (derived state and side-effects) that really should be kept separate, and because dependencies are determined when the statement is compiled (rather than when it runs), it resists refactoring and becomes a magnet for complexity.
Svelte 5 removes these inconsistencies and footguns. It introduces runes, an explicit mechanism for (among other things) declaring reactive state:
- let count = 0;
+ let count = $state(0);
Interacting with state is unchanged: with Svelte — unlike other frameworks — count is just a number, rather than a function, or an object with a value property, or something that can only be changed with a corresponding setCount:
function increment() {
count += 1;
console.log({ count });
}
Runes can be used in .svelte.js and .svelte.ts modules in addition to .svelte components, meaning you can create reusable reactive logic using a single mechanism.
Event handlers are now just props like any other, making it easy to (for example) know whether the user of your component supplied a particular event handler (which can be useful for avoiding expensive setup work), or to spread arbitrary event handlers onto some element — things that are particularly important for library authors.
And the slot mechanism for passing content between components (together with the confusing let: and <svelte:fragment> syntax) has been replaced with {#snippet …}, a much more powerful tool.
Beyond these changes are countless improvements: native TypeScript support (no more preprocessors!), many bugfixes, and performance and scalability improvements across the board.
If you’re currently on Svelte 3, begin by migrating to Svelte 4.
From there, you can update your package.json to use the newest version of svelte and ancillary dependencies like vite-plugin-svelte.
You don’t have to update your components immediately — in almost all cases, your app will continue working as-is (except faster). But we recommend that you begin to migrate your components to use the new syntax and features. You can migrate your entire app with npx sv migrate svelte-5, or — if you’re using VS Code with the Svelte extension — you can migrate components one at a time by selecting ‘Migrate Component to Svelte 5 Syntax’ in your command palette.
Svelte has a large and robust ecosystem of component libraries that you can use in your applications, such as shadcn-svelte, Skeleton, and Flowbite Svelte. But you don’t have to wait for these libraries to upgrade to Svelte 5 in order to upgrade your own application.
Eventually, support for Svelte 4 syntax will be phased out, but this won’t happen for a while and you’ll have plenty of warning.
For more details, see the comprehensive Svelte 5 migration guide.
Along with a new version of Svelte, we have a new Command Line Interface (CLI), sv, to go with it. You can learn all about it in the sv announcement video and in a forthcoming blog post.
We plan to release a new version of SvelteKit in the near future that takes advantage of the new Svelte 5 features. In the meantime, you can use Svelte 5 with SvelteKit today, and npx sv create will create a new SvelteKit project with Svelte 5 installed alongside it.
After that, we have a laundry list of ideas we want to implement in Svelte itself. This release is the foundation for many improvements that would have been impossible to build on top of Svelte 4, and we can’t wait to roll up our sleeves.
Will Crichton wishes some naming conventions would die already, GitHub user brjsp noticed that Bitwarden’s new SDK dependency isn’t open source, Joaquim Rocha details his forking best practices, Sophie Koonin explains why you should go to conferences & Mike Hoye puts WordPress on SQLite.
Will Crichton — November 17, 2018
Names are an important tool of thought. They provide a loose, lightweight way to manage and structure knowledge. However, bad names inhibit learning and impede progress. We should root out and destroy the processes that lead to bad names.
Naming things after people comes first, because it is the most widespread disease afflicting the naming process in science and math.
Look, I’m all for recognizing the people who make contributions to math and science. But don’t let them (or others) name their discoveries after the discoverer. That comes at the expense of every person thereafter who needs to use the created/discovered concept. We already have Nobel Prizes, Turing Awards, etc. to commemorate these achievements.
A good name should communicate the essence of an underlying idea, usually through a few carefully-picked nouns and adjectives. For example, breadth-first search and depth-first search are wonderfully informative. If we called them “Zuse’s method” and the “Trémaux approach”, that communicates literally nothing about the methods unless I’m already a science historian who knows the respective fields of each inventor.
Recently, I’ve been re-learning a lot of probability basics, and I’m constantly reminded how many missed opportunities we have to communicate intuitions through names. A positive example is the Gaussian distribution—names like “normal distribution” convey its natural utility (in describing many natural phenomena) and “bell curve” convey its shape. By contrast, “Dirichlet distribution” is uninformative, but with no alternatives. How about “sum-to-1 distribution?” Or for the beta distribution, how about “unit-bounded distribution?” Or Bernoulli distribution as “coin-flip distribution?”
More broadly, knowledge should be constructed compositionally. If we only have to remember a few core pieces, and then can understand concepts by combining them in different ways, that’s a pretty efficient process for our brains. By contrast, human-name-based labels are effectively a random, unique identifier that we just have to remember, adding to a long list of completely unrelated identifiers. That knowledge is unstructurable, as the names “Maxwell”, “Bernoulli”, “Jacobian” have no common basis, no shared terms, no reasonable decomposition.
So please, don’t name stuff after yourself.
Few names make my blood boil as much as “Type 1 error” and “Type 2 error.” Rarely in the history of human progress have such awful names been adopted as widely as in hypothesis testing. Imagine, if you will, a programmer submitting this code for review:
enum MemoryError {
    Type1,
    Type2,
}

fn malloc_safe(n: usize) -> Result<*mut usize, MemoryError> {
    if system_out_of_memory() {
        Err(MemoryError::Type1)
    } else if n == 0 {
        Err(MemoryError::Type2)
    } else {
        Ok(malloc(n))
    }
}
This person would get laughed out of the building. Why would you call these Type1 and Type2 when they clearly could be OutOfMemory and ZeroSizeAlloc? Yet somehow, when eminent statisticians do this, it becomes precedent for a century. Imagine how many statistics students have tried to memorize what “type 1” and “type 2” mean, wasting time mapping useless terms to their actual meaning.
Just use false positive and false negative. This is a perfect example of how a compositional basis for terminology (i.e. (false | true) (positive | negative)) lowers the barrier to reconstructing a term’s meaning. Even still, I usually have to pause and think when someone says “false negative” to understand both what kind of error it is and how to contextualize it in their use case. But if they said “type 1 error”, I would be completely lost.
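The compositional scheme can literally be reconstructed mechanically, which is exactly why it is easy to keep in your head. A toy JavaScript sketch (the function name is mine, invented for this illustration):

```javascript
// Hypothetical sketch: "false positive" / "false negative" rebuild
// themselves from their parts, whereas "Type 1" / "Type 2" must simply
// be memorized.
function errorName(predictedPositive, actuallyPositive) {
  // "true"/"false": was the prediction correct?
  const correctness = predictedPositive === actuallyPositive ? "true" : "false";
  // "positive"/"negative": what was predicted?
  const prediction = predictedPositive ? "positive" : "negative";
  return `${correctness} ${prediction}`;
}

errorName(true, false); // "false positive" (the so-called Type 1 error)
errorName(false, true); // "false negative" (the so-called Type 2 error)
```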
Dishonorable mention here to graph quadrants. Is “top right” that hard? Hilarious mention to separation axioms (thanks @twitchard) whose indexing includes 2½.
Can you imagine putting so little thought into naming something that your name is indistinguishable from the output of a random name generator? Would you do that to your company? Your child?
Yet, we do it to software all the time. I invite you to browse the list of Apache Projects. Pig, Flink, Spark, Hive, Arrow, Kafka. If humans cannot pass a Pokemon-equivalent Turing test, your system is poorly named.
Here, I think the big danger is exclusion. If you’re having a conversation with someone about big data technologies for your company, and your CTO wants to listen in, phrases like “yeah we just have to hook up our Airflow into GCP Dataflow with a Kafka broker so our logs can get Flumed” will exclude them from the conversation. By contrast, if you use phrases like “message queue”, “cache”, “data processor,” someone can get the gist of the conversation without knowing the specific technologies.
To my understanding, this also happens in government (and in particular the military) a lot with acronyms. An acronym is effectively the same as a random word, so you have to be in-the-know to hold a conversation with others in the department.
Accidents happen. We pick a bad name in the heat of the moment, and then are forced to live with that mistake for reasons of backwards compatibility. However, we should clearly identify such mistakes, and discourage their usage where possible in the future. Regardless of what you think about the redis debate, new systems today probably shouldn’t use the term “master-slave” when plenty of other options exist.
Yet, one phenomenon I have never understood is the propensity of Lisp users to continue using car and cdr. Head and tail. Left and right. There are many sensible, well-known ways to access elements of a pair or a list. The only reason Lisp originally adopted “car” and “cdr” is due to the design of 1950s (!) hardware:
The 704 and its successors have a 36-bit word length and a 15-bit address space. These computers had two instruction formats, one of which, the Type A, had a short, 3-bit, operation code prefix and two 15-bit fields separated by a 3-bit tag. The first 15-bit field was the operand address and the second held a decrement or count. The tag specified one of three index registers. Indexing was a subtractive process on the 704, hence the value to be loaded into an index register was called a “decrement”. The 704 hardware had special instructions for accessing the address and decrement fields in a word. As a result it was efficient to use those two fields to store within a single word the two pointers needed for a list. Thus, “CAR” is “Contents of the Address part of the Register”. The term “register” in this context refers to “memory location”.
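For contrast, here is a cons cell with both the archaic names and descriptive ones, sketched in JavaScript (the identifiers are my own, chosen for illustration):

```javascript
// A cons cell is just a pair. "car" and "cdr" survive from IBM 704
// instruction fields; "head" and "tail" describe what the fields hold.
const cons = (head, tail) => ({ head, tail });
const car = (cell) => cell.head; // Contents of the Address part of the Register
const cdr = (cell) => cell.tail; // Contents of the Decrement part of the Register

const list = cons(1, cons(2, cons(3, null)));

car(cdr(list)); // 2, but only if you know the lore
list.tail.head; // 2, and the names explain themselves
```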
Today? Racket’s Beginning Student language still uses car and cdr… for beginning students. (Including the many wonderful derivations: caaar, cadddr, cdddr, and so forth.) This is recommended usage of the language. If I had to guess, I think usage of these words persists because it forms an in-group of Lisp programmers “in the know” who use this archaic terminology. That’s why they can still make jokes like my other car is a cdr.
(Update: a person who teaches Racket says they don’t emphasize car/cdr, so this may be less applicable to Racket.)
This is, of course, bad. It’s a barrier to entry for novices, it harms readability for people porting over knowledge from other languages, and it generally encourages a culture of bad names. Let’s learn from our mistakes and make names more accessible, memorable, and understandable to everyone.
Be an active reader! Shoot me an email at wcrichto@cs.stanford.edu or leave a comment on Hacker News.