
The Vue Ecosystem in 2024


Since its inception in February 2014, Vue has become one of the top JavaScript frameworks for building and maintaining web apps. It is a popular pick for many developers because of its philosophy of being approachable, performant, and versatile while scaling to complex apps.

Once you’ve chosen Vue to build your app, it’s important to consider which tools and libraries can help you ship features to users faster. While a lot has changed over the years, new features like the Composition API have brought exciting changes to the ecosystem. Let’s look at some of the popular tools and libraries used by the community today.

Getting Started with Vue

There are three primary approaches to scaffolding your Vue app:

  1. Content Delivery Network (CDN)
  2. Build Tools
  3. Using a meta-framework like Nuxt

CDN

While most JavaScript frameworks require a build toolchain to work, one of the great things about Vue is that it can be added directly on a page. No build tools necessary!

<!-- unpkg -->
<script src="https://unpkg.com/vue@3/dist/vue.global.js"></script>

<!-- jsdelivr -->
<script src="https://cdn.jsdelivr.net/npm/vue@3.4.25/dist/vue.global.min.js"></script>

Or if you’re already using ES modules, you can also use it as follows:

<script type="module">
  import { createApp, ref } from 'https://unpkg.com/vue@3/dist/vue.esm-browser.js'
  // or 'https://esm.sh/vue'

  createApp({
    setup() {
      const message = ref('Hello Vue!')
      return {
        message
      }
    }
  }).mount('#app')
</script>

To learn more, check out the official docs.

Build Tools

When your app is ready for build tools (e.g., Vite, Rollup, webpack) and/or you want to use Single-File Components (*.vue), the best way to start a Vue 3 project is with Vite. Why Vite? It leverages two of the latest advancements in the ecosystem: the availability of native ES modules in the browser, and the rise of JavaScript tools written in compile-to-native languages.

Fun fact: Vite was created by Evan You (the creator of Vue), so naturally there’s a way to scaffold a Vue project with it!

npm create vue@latest

To learn more about creating a Vue application with Vite, check out the official docs.

Meta-frameworks

For those interested in taking Vue beyond client-side rendering to other realms such as static-site generation (SSG), server-side rendering (SSR), and other rendering methods, you’ll want to start your project with scaffolding from popular meta-frameworks such as Nuxt.

npx nuxi@latest init <project-name>

To learn more, check out the official docs for getting started with Nuxt.

That said, here are two other options I wanted to give a shoutout to:

Meta Framework

VitePress

A fantastic static site generator that uses Markdown to create beautiful docs in minutes (while powered by Vue theming for easy customization).

Meta Framework

Quasar

A cross-platform Vue meta-framework that comes with a lot of things out of the box, like Material components, various build modes, and much more.

Routing

When it comes to routing in Vue, the choice is straightforward because Vue has an official routing library, though a meta-framework can help here as well.

Routing

Vue Router

This is the official router for Vue. Expressive, configurable and convenient routing.

Routing

Nuxt

For those who love file-based routing, Nuxt gives it to you automatically (built on top of Vue Router) with no additional configuration needed.

If you’re interested in following or contributing to an up-and-coming extension on Vue Router that allows you to do file-based routing with TS support, be sure to keep an eye on unplugin-vue-router!
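To make the idea of file-based routing concrete, here is a rough sketch of the file-to-path mapping these tools perform. This is heavily simplified and purely illustrative: real implementations like Nuxt and unplugin-vue-router handle nested routes, catch-alls, layouts, and more, and the function name and rules here are my own.

```javascript
// Simplified illustration of file-based routing conventions:
// a file path under pages/ is mapped to a Vue Router path.
// NOT the actual Nuxt/unplugin-vue-router implementation.
function fileToRoutePath(file) {
  let p = file
    .replace(/^pages\//, '')        // drop the pages/ prefix
    .replace(/\.vue$/, '')          // drop the .vue extension
    .replace(/\[(\w+)\]/g, ':$1')   // [id] becomes a :id param
    .replace(/(^|\/)index$/, '$1'); // index maps to the parent path
  return '/' + p.replace(/\/$/, '');
}

console.log(fileToRoutePath('pages/index.vue'));      // '/'
console.log(fileToRoutePath('pages/about.vue'));      // '/about'
console.log(fileToRoutePath('pages/users/[id].vue')); // '/users/:id'
```

The appeal is that the filesystem itself becomes the routing table, so there is no route configuration to keep in sync with your components.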

State Management

Similar to routing, Vue makes the choice easy by providing an official library for state management. That said, state management tends to be particular to an app’s requirements and a team’s approach, so here are some popular options in the community.

State Management

Pinia

Pinia, the successor to Vuex, is the official state management library for Vue. On top of being TypeScript-friendly, it takes a huge leap forward from the traditional Redux pattern by eliminating the need to explicitly define mutations. In other words, a lot less code while still being declarative!

State Management

XState

For those familiar with state machines, you’ll want to check out XState as an option for managing state within your app. As a bonus, it is backed by a fantastic team and has been a consistent favorite within the community.

State Management

TanStack Query

One of the major challenges developers face with state management is handling state around async calls at a granular level. TanStack Query is a popular pick in the community for helping to manage this.

State Management

Composables

With the addition of the Composition API, Vue makes it easier than ever for developers to create their own state management solutions with composables. While this can be useful in certain scenarios, it’s recommended to use a standard like Pinia to ensure consistent patterns across teams.
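As a sketch of what the composable pattern looks like, here is a minimal `useCounter`. To keep the example runnable without a build step, a trivial stand-in replaces Vue’s `ref`; in a real app you would `import { ref } from 'vue'`, which returns a reactive reference that triggers re-renders.

```javascript
// Stand-in for Vue's ref so this sketch runs anywhere; Vue's real ref
// is reactive and notifies the component when .value changes.
const ref = (value) => ({ value });

// A composable: state plus the functions that mutate it, extracted
// into a reusable function that any component can call in setup().
function useCounter(initial = 0) {
  const count = ref(initial);
  const increment = () => { count.value++; };
  const reset = () => { count.value = initial; };
  return { count, increment, reset };
}

const { count, increment, reset } = useCounter();
increment();
increment();
console.log(count.value); // 2
reset();
console.log(count.value); // 0
```

The pattern scales from a simple counter like this to full shared-state solutions, which is exactly why standardizing on something like Pinia keeps teams consistent.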

Component Libraries

Selecting the right component library depends on your app’s requirements. Here are some popular libraries that aim to provide accessible components and are backed by good English documentation.

Component Library

PrimeVue

A powerful and flexible component library that contains styled and unstyled components to make it easier to cater to your app’s needs.

Component Library

Radix Vue

Unstyled and accessible components for building high‑quality design systems and web apps in Vue.

Component Library

Quasar Components

UI components that follow Material Guidelines for those using Quasar.

Component Library

Vuetify

A longstanding favorite in the Vue community for Material components, with great docs.

Component Library

NuxtLabs UI

Fully styled and customizable components for those using Nuxt.

Honorable Mentions

Component Library

Buefy

For those who love Bulma, this is in active development for Vue 3.

Component Library

Element Plus

A Vue 3 UI library made by the Element team, with a strong Chinese and English community.

Component Library

BootstrapVue

For those who love Bootstrap, it’s not fully ready for Vue 3 at the time of publication, but keep an eye on this library.

Component Library

Vuestic UI

A free and open-source UI library for Vue 3.

Testing

When it comes to testing your app, here’s a quick overview of what the community is using.

Unit

Unit Testing

Vitest

No contest here. With most Vue projects being Vite-based, the performance gain over previous unit testing libraries like Jest is hard to ignore.

Component

There are three primary options for component testing depending on your use case:

Component Testing

Vitest

Vitest is great for components or composables that render headlessly.

Component Testing

Vue Testing Library

Vue Testing Library is a lightweight solution for testing Vue components that provides light utility functions on top of @vue/test-utils.

Component Testing

Vue Test Utils

Vue Test Utils is another option for testing components and the DOM.

Component Testing

Cypress Component Testing

Cypress is great for components whose expected behavior depends on properly rendering styles or triggering native DOM events.

E2E

While other tests are great, one of my mentors likes to say that there are only two tests that matter for any code base:

  1. Can the user log in?
  2. Can the user pay us?

Everything else is optional.

And that’s where end-to-end (E2E) tests come into play. There are two primary candidates to choose from when selecting your E2E testing framework:

End to End Testing

Cypress

Cypress has been a longtime favorite of the community with its intuitive API and ability to run and debug tests directly in the browser.

End to End Testing

Playwright

Playwright has become another popular choice among the community for its ability to test across any browser or platform using a single API.

For additional reading about testing in Vue, check out the official docs.

Mobile Development

Mobile Framework

Capacitor

You might be familiar with Ionic, a popular open-source UI toolkit for building mobile apps with web technologies. Well, Capacitor is the foundation Ionic is built on top of, which enables you to build the actual app.

It’s also backed by a team that has been a great partner to the Vue community. Definitely worth checking out if you want to build a mobile app with Vue.

You can learn more at their official docs for Capacitor + Vue.

Mobile Framework

NativeScript

If you’re looking for a React Native equivalent in Vue, NativeScript is a solution for building truly native mobile apps with JavaScript, though there is no official support from the Vue core team.

Note: It’s currently in release candidate (RC) and not officially production ready, but it has been a past favorite for Vue 2 apps, so it’s included as an alternative to Capacitor.

To learn more or follow along, check out the official repo for NativeScript.

Bonus Libraries

These libraries don’t have a distinct category of their own, but I wanted to give a special shoutout to the following libraries:

Bonus

VueUse

When it comes to utility libraries, VueUse gets the MVP award for providing an incredible assortment of useful abstractions for common functionality such as copying to clipboard, dark / light mode, local storage, and more.

Bonus

Petite-Vue

For those who need only a subset of Vue features and just want to sprinkle some interactivity onto a page, you’ll definitely want to look into petite-vue, which is only ~6 kB!

Bonus

FormKit

Let’s face it. Forms are a big part of most web apps, whether you like it or not. FormKit is a library that aims to make the process easier by providing standards for things like form structure, generation, validation, and more.

Bonus

TresJS

TresJS is a library that aims to make building 3D experiences with Three.js and Vue a lot easier.

Looking Ahead

While I’ve highlighted some popular solutions in the ecosystem, I want to emphasize that at the end of the day, what matters most is choosing the right solution for your team and app. If the solution you and/or your team are using didn’t make it on this list, that doesn’t make it any less valid.

Finally, the reality is that the landscape for tooling and libraries is constantly changing. While it may seem like specific problems have “already been solved,” I assure you there is still much room for new ideas and improvements. The community is still ripe for contributions, and I hope to recommend something you build someday!


Printing music with CSS grid


Too often have I witnessed the improvising musician sweaty-handedly attempting to pinch-zoom an A4 pdf on a tiny mobile screen at the climax of a gig. We need fluid and responsive music rendering for the web!

Stephen Band


Cloudflare / Vercel comparison


Introduction

Recently we have seen a lot of fuss around so-called serverless horror stories: the infinite scale of serverless computing leading to a huge bill at the end of the month. The truth is that Vercel is a great platform for developers, but it can be expensive in comparison to other alternatives.

Vercel can be truly costly if you make a mistake or don’t have a good understanding of how the platform works and how to optimize your website or application. For example, Ilias Ism shared how he was getting billed $2,000+ per month on Vercel for basic services.

But that is not the only or most horrifying story.

There are many other posts around the internet describing developers getting billed thousands of dollars for a simple website or application. That’s the case of Michael Aubury, who got a $23,000 bill when someone targeted his Vercel deployment with a DDoS attack, and Mike Ramirez, who got a $3,000 bill in 6 hours for a small mistake in his code.

If you are looking for a more cost-effective alternative to Vercel, Cloudflare is a great option. Cloudflare offers a wide range of services that can help you optimize your website or application, and can be a great alternative to Vercel.

Of course, you don’t need to migrate your entire application to Cloudflare and leave Vercel completely. Ilias, for example, took a mixed approach: moving image optimization to Cloudflare, making better development choices like disabling prefetch in <Link> tags, and moving things around within the Vercel ecosystem itself, like using the edge runtime whenever possible.

In this article we will explore Vercel’s features and pricing, and how Cloudflare can be the best alternative to Vercel in 2024.

What is Cloudflare?

It’s hard to explain what Cloudflare is in a few words. It’s well known for its CDN and security services, but that’s not all. It’s a serverless hosting platform that can help you deploy your website or application cost-effectively, a domain registrar that can help you manage your domains, and a DNS provider that can help you manage your DNS records. And much more than that.

Honestly, Cloudflare is all you can think of when it comes to web infrastructure for a better, faster, and more secure internet. At the heart of all its services is the Cloudflare Edge Network, a global network of servers that run code, provide compute, and store data close to the end user, reducing latency and improving performance.

Of course, Vercel also has a global network of servers, but Cloudflare’s network is much larger. Vercel’s network has 18 regions and a bit over 100 points of presence, while Cloudflare has over 300 data centers around the world.

The availability of data centers for deployment is also a big advantage of Cloudflare. On Vercel, as with other providers such as AWS, you have to choose a region to deploy your application. On Cloudflare, the region is the world: you don’t need to worry about where your application is deployed or where your data is stored.

Your website is always at the edge of the network, close to all your end users.

Cloudflare vs Vercel: Hosting and Deployment

A comparison between Cloudflare and Vercel hosting and deployment services

Let’s start with Vercel.

What made Vercel so popular is how simple it makes hosting a website or web application. You can literally connect your GitHub repository to Vercel and deploy your website in a few clicks, without having to worry about servers, infrastructure, or anything else.

You push your code to GitHub, link the repository to a Vercel project, and a new deployment starts with every push. Vercel builds your website and deploys it to the cloud, making it available to the world.

Vercel integrates tightly with Next.js, which was created by the Vercel team, but it also supports a wide range of the most popular frontend frameworks, optimizing how your site builds no matter what tool you use.

When hosting your website on Vercel, you are charged mostly for bandwidth: the amount of data transferred between your website and your users. That includes both outgoing data (sent from your website to your users) and incoming data (sent from your users to your website).

Originally, the free plan included 100 GB of bandwidth per month, and the Pro plan 1 TB per month. If you exceeded your bandwidth limit, you were charged $40 per extra 100 GB. With the new pricing model, there is no longer a one-size-fits-all approach: Vercel breaks bandwidth pricing for hosting applications and websites down into three variables: Fast Data Transfer, Edge Requests, and Data Cache.

Fast Data Transfer is the data transferred between the Vercel Edge Network and the end user. The free plan includes 100 GB of fast data transfer, and the Pro plan includes 1 TB, with extra charges starting at $0.15 per GB. Prices now vary depending on the region where the data is being transferred:

  • $0.15 per GB: Cleveland, USA (cle1), Washington, D.C., USA (iad1), Portland, USA (pdx1), Stockholm, Sweden (arn1), London, United Kingdom (lhr1), Frankfurt, Germany (fra1), Dublin (dub1), Paris, France (cdg1), San Francisco, USA (sfo1)
  • $0.30 per GB: Singapore (sin1), Hong Kong (hkg1)
  • $0.31 per GB: Osaka, Japan (kix1), Tokyo, Japan (hnd1)
  • $0.32 per GB: Sydney, Australia (syd1)
  • $0.33 per GB: Mumbai, India (bom1)
  • $0.39 per GB: Cape Town, South Africa (cpt1)
  • $0.44 per GB: São Paulo, Brazil (gru1)
  • $0.47 per GB: Seoul, South Korea (icn1)
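To see how the regional rates above play out, here is an illustrative calculator. The table and function names are my own; the rates are the per-GB overage prices from the list, applied to transfer beyond the included quota.

```javascript
// Fast Data Transfer overage rates per GB, keyed by Vercel region ID
// (values copied from the price list above).
const fdtRatePerGB = {
  cle1: 0.15, iad1: 0.15, pdx1: 0.15, arn1: 0.15, lhr1: 0.15,
  fra1: 0.15, dub1: 0.15, cdg1: 0.15, sfo1: 0.15,
  sin1: 0.30, hkg1: 0.30,
  kix1: 0.31, hnd1: 0.31,
  syd1: 0.32, bom1: 0.33,
  cpt1: 0.39, gru1: 0.44, icn1: 0.47,
};

// Cost of `gb` gigabytes transferred beyond the included quota, per region.
function fdtOverageUSD(region, gb) {
  const rate = fdtRatePerGB[region];
  if (rate === undefined) throw new Error(`unknown region: ${region}`);
  return gb * rate;
}

// 100 GB over quota served from São Paulo vs. Frankfurt:
console.log(fdtOverageUSD('gru1', 100)); // roughly $44
console.log(fdtOverageUSD('fra1', 100)); // roughly $15
```

The takeaway: where your audience lives can nearly triple your per-GB overage cost.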

Edge Requests refer to the number of requests made to the Vercel Edge Network when serving your website or application to an end user. When a user accesses your website, their request is routed to the nearest Vercel Edge Network, reducing latency and improving performance. For example, loading a single web page might involve requests for the HTML document, CSS files, JavaScript files, images, and so on. Each of these requests is counted as an Edge Request and incurs charges according to Vercel’s pricing model.

The free plan includes 1 million edge requests per month, while the Pro plan includes 10 million per month, with additional charges starting at $2 per 1 million edge requests. Pricing also varies depending on the region where the request is made:

  • $2 per 1M requests: Cleveland, USA (cle1), Washington, D.C., USA (iad1), Portland, USA (pdx1)
  • $2.20 per 1M requests: Stockholm, Sweden (arn1), Mumbai, India (bom1)
  • $2.40 per 1M requests: London, United Kingdom (lhr1), Dublin (dub1), Paris, France (cdg1), San Francisco, USA (sfo1)
  • $2.60 per 1M requests: Singapore (sin1), Sydney, Australia (syd1), Osaka, Japan (kix1), Seoul, South Korea (icn1), Tokyo, Japan (hnd1), Frankfurt, Germany (fra1)
  • $2.80 per 1M requests: Hong Kong (hkg1), Cape Town, South Africa (cpt1)
  • $3.20 per 1M requests: São Paulo, Brazil (gru1)
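The same back-of-the-envelope math applies to Edge Requests. Again, the function name is my own; the per-1M rates come from the list above, charged past the 10M included on Pro.

```javascript
// Overage cost for edge requests beyond the Pro plan's included 10M,
// given the region's per-1M rate from the list above.
function edgeRequestOverageUSD(ratePer1M, totalRequests) {
  const included = 10_000_000;
  const extra = Math.max(0, totalRequests - included);
  return (extra / 1_000_000) * ratePer1M;
}

// 25M requests in a month, served from São Paulo (gru1, $3.20 per 1M):
console.log(edgeRequestOverageUSD(3.20, 25_000_000)); // roughly $48
```

Remember that every asset on a page (HTML, CSS, JS, images) counts as a separate edge request, so request volume grows much faster than page-view counts.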

Data Cache refers to the sum of all data written to the Vercel Edge Network for quick access and subsequently retrieved (read) from the cache storage. The pricing varies depending on the operation performed on the cache (write or read) and the region where the data is cached. The free plan includes 2 million cache writes and 10 million cache reads, while the Pro plan charges $4 per additional 1M cache writes and $0.40 per additional 1M cache reads. The pricing also varies depending on the region where the data is being cached:

  • Data Cache Writes:

    • $4 per 1M writes: Cleveland, USA (cle1), Washington, D.C., USA (iad1), Portland, USA (pdx1)
    • $4.40 per 1M writes: Stockholm, Sweden (arn1), Mumbai, India (bom1)
    • $4.80 per 1M writes: London, United Kingdom (lhr1), Dublin (dub1), Paris, France (cdg1), San Francisco, USA (sfo1)
    • $5.20 per 1M writes: Singapore (sin1), Sydney, Australia (syd1), Osaka, Japan (kix1), Seoul, South Korea (icn1), Tokyo, Japan (hnd1), Frankfurt, Germany (fra1)
    • $5.60 per 1M writes: Hong Kong (hkg1), Cape Town, South Africa (cpt1)
    • $6.40 per 1M writes: São Paulo, Brazil (gru1)
  • Data Cache Reads:

    • $0.40 per 1M reads: Cleveland, USA (cle1), Washington, D.C., USA (iad1), Portland, USA (pdx1)
    • $0.44 per 1M reads: Stockholm, Sweden (arn1), Mumbai, India (bom1)
    • $0.48 per 1M reads: London, United Kingdom (lhr1), Dublin (dub1), Paris, France (cdg1), San Francisco, USA (sfo1)
    • $0.52 per 1M reads: Singapore (sin1), Sydney, Australia (syd1), Osaka, Japan (kix1), Seoul, South Korea (icn1), Tokyo, Japan (hnd1), Frankfurt, Germany (fra1)
    • $0.56 per 1M reads: Hong Kong (hkg1), Cape Town, South Africa (cpt1)
    • $0.64 per 1M reads: São Paulo, Brazil (gru1)

Additional variables you need to be aware of are build execution time (the time it takes to build your website or application during a deployment) and concurrent builds (the number of builds that can run at the same time).

The free plan includes 1 concurrent build; the Pro plan also includes 1 and charges $50 for each additional concurrent build. As for build execution time, the free plan includes 6,000 minutes of build time, and the Pro plan includes 24,000 minutes.

In Vercel, the Pro plan starts at $20 per month.

What about Cloudflare?

Cloudflare offers a service similar to Vercel’s, called Cloudflare Pages.

Cloudflare Pages started as a Jamstack hosting service that gained notoriety by leveraging the Cloudflare Edge Network to deliver websites quickly and securely, but it has since evolved into a full serverless hosting platform that can run any kind of application, from static websites to full-stack web applications that require server-side logic.

As with Vercel, you can connect your GitHub repository and deploy your website in a few clicks. It doesn’t matter whether you are using Next.js or any other framework: Cloudflare Pages will build your website and deploy it to the cloud, making it available to the world.

The biggest difference between Vercel and Cloudflare Pages is pricing. Cloudflare Pages is much more cost-effective than Vercel, primarily because it doesn’t charge for bandwidth.

That’s right: you’re not billed for bandwidth when hosting your website on Cloudflare Pages, so you don’t need to worry about how many users are accessing your website or how much data is being transferred.

As with Vercel, Cloudflare Pages limits concurrent builds and the total number of builds an account can run per month. The free plan includes 1 concurrent build and 500 builds per month, and the Pro plan includes 5 concurrent builds and 5,000 builds per month.

In Cloudflare, the Pro plan starts at $25 per month.

Cloudflare vs Vercel: Serverless Functions

A comparison between Cloudflare and Vercel serverless functions services

Serverless functions are a great way to add dynamic functionality to your website. They are basically pieces of code that run in the cloud, and can be triggered by events like HTTP requests, database changes, or scheduled tasks.

Vercel offers a service called Vercel Functions that allows you to run serverless functions on the Vercel Edge Network, close to your users. The functions scale automatically on demand and can interact with APIs, databases, and resources on the web and in the Vercel ecosystem.

What serverless functions can do, and on what infrastructure, depends on the runtime environment you choose for them. Available runtimes include Node.js, Go, Ruby, Python, and the Edge runtime. Serverless functions that run on the Edge runtime are more lightweight and are billed differently.

Vercel functions can suffer from cold starts: the delay that occurs when an inactive function is invoked for the first time, as the function has to be initialized and loaded into memory. Cold starts can be reduced by keeping your functions warm, i.e., invoking them periodically to prevent them from being suspended.

As with AWS Lambda, Vercel asks you to choose a region where you want to deploy your functions. This is a crucial step that can greatly impact latency and performance, as the closer the function is to the user, the faster it executes. If you use a Vercel storage service, like KV or Postgres, you should also consider the region where your data is stored and deploy your functions close to it.

Before the pricing updates in April 2024, Vercel did not charge differently per region as AWS does, but now pricing can vary depending on the region where the function executes, because of data transfer costs. Function Duration and Invocations are still priced the same across regions, but Data Transfer is charged differently.

For billing purposes, Edge functions are charged for CPU time (the time spent directly executing your function), while other runtimes are charged for wall-clock time (the total time your function is running, including idle time and the time it takes to boot the environment and load your function into memory). CPU time is measured in execution units of 50 ms each, while wall-clock time is measured in GB-hours: the memory allocated to each function in GB, multiplied by the hours it was running.

Before the pricing update, the free plan included 500,000 execution units for Edge functions and 100 GB-hours for serverless functions, while the Pro plan included 1M execution units for Edge functions and 1,000 GB-hours for serverless functions, and charged $2.00 for each additional 1M execution units and $40 for each additional 100 GB-hours.

Now, the free plan includes 1M execution units for Edge runtime functions and 1,000 GB-hours for serverless functions, while the Pro plan charges $2 per additional 1M execution units and $18 per additional 100 GB-hours ($0.18 per GB-hour).
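A quick sketch of the GB-hours arithmetic described above (illustrative names; the formula is memory in GB multiplied by wall-clock hours, with overage billed at $0.18 per GB-hour past the 1,000 GB-hours included per the numbers above):

```javascript
// GB-hours = memory allocated to the function (GB) × wall-clock hours running.
function gbHours(memoryGB, wallClockSeconds) {
  return memoryGB * (wallClockSeconds / 3600);
}

// Overage past the included 1,000 GB-hours, at $0.18 per GB-hour.
function durationOverageUSD(totalGbHours) {
  return Math.max(0, totalGbHours - 1000) * 0.18;
}

// A function with 1 GB of memory accumulating 5,400,000 seconds of
// wall-clock time in a month uses 1,500 GB-hours:
const used = gbHours(1, 5_400_000);
console.log(used); // 1500
console.log(durationOverageUSD(used)); // 500 extra GB-hours at $0.18 each
```

Note that wall-clock billing includes idle time, so a function that spends most of its life waiting on a slow upstream API still accumulates GB-hours the whole time.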

Apart from the duration variables, Vercel now also charges for two new variables: Function Invocations and Data Transfer. That was a big change in the pricing model, as before you were only charged for the time your function was running.

Function Invocations are the number of times your function is invoked, including both successful and failed invocations. The free plan includes 1M function invocations, while the Pro plan includes 10M invocations and charges $0.60 for each additional 1M invocations.

Data Transfer in the context of serverless functions is called Fast Origin Transfer and refers to the data transferred between the Vercel Edge Network and your functions. The free plan includes 100 GB of fast origin transfer, and the Pro plan includes 1 TB, with extra charges starting at $0.06 per GB. Prices also vary depending on the region where the data is being transferred:

  • $0.06 per GB: Cleveland, USA (cle1), Washington, D.C., USA (iad1), Portland, USA (pdx1), Stockholm, Sweden (arn1), London, United Kingdom (lhr1), Frankfurt, Germany (fra1), Dublin (dub1), Paris, France (cdg1), San Francisco, USA (sfo1)
  • $0.24 per GB: Seoul, South Korea (icn1)
  • $0.25 per GB: Mumbai, India (bom1)
  • $0.27 per GB: Singapore (sin1), Osaka, Japan (kix1), Tokyo, Japan (hnd1), Hong Kong (hkg1)
  • $0.29 per GB: Sydney, Australia (syd1)
  • $0.41 per GB: São Paulo, Brazil (gru1)
  • $0.43 per GB: Cape Town, South Africa (cpt1)

Cloudflare also offers a service called Cloudflare Workers that allows you to run serverless functions on the Cloudflare Edge Network, close to your users. The functions also scale automatically on demand and can interact with APIs, databases, and resources on the web and in the Cloudflare ecosystem.

Currently, Cloudflare serverless functions must be written in TypeScript/JavaScript or any language that can be compiled to WebAssembly.

Cloudflare Workers do not suffer from cold starts, because they run on top of V8 isolates that can warm up a function in under 5 milliseconds. This means that your functions are always ready to execute, no matter how long they have been inactive. This is a huge advantage over Vercel functions, as cold starts can be a big issue for some applications.

With Workers, you also don’t need to worry about which region to deploy your application to. By default on Cloudflare, the region is the world, meaning your code will always run close to your resources and your users.

As Cloudflare has eliminated cold starts, Workers do not charge for wall time; by default, only CPU time is used for billing purposes. The free plan includes 100,000 requests per day with an average of 10 ms of CPU time per invocation, while the Standard plan starts at $5 and includes 10M requests and 30M CPU milliseconds per month. Additional requests are billed at $0.30 per 1M requests, and additional CPU time at $0.02 per 1M CPU milliseconds.
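Putting the Workers Standard plan numbers above into code (a sketch with made-up function names: $5 base with 10M requests and 30M CPU-ms included, then $0.30 per extra 1M requests and $0.02 per extra 1M CPU-ms):

```javascript
// Monthly cost estimate for Cloudflare Workers on the Standard plan,
// using the rates quoted above.
function workersMonthlyCostUSD(requests, cpuMs) {
  const base = 5;
  const extraRequests = Math.max(0, requests - 10_000_000);
  const extraCpuMs = Math.max(0, cpuMs - 30_000_000);
  return base
    + (extraRequests / 1_000_000) * 0.30
    + (extraCpuMs / 1_000_000) * 0.02;
}

// 25M requests averaging 10 ms of CPU each (250M CPU-ms total):
console.log(workersMonthlyCostUSD(25_000_000, 250_000_000)); // roughly $13.90
```

Because only CPU time is billed, time spent waiting on fetches or database queries costs nothing, which is a meaningful difference from wall-clock billing.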

Cloudflare vs Vercel: KV storage

A comparison between Cloudflare and Vercel KV storage services

Vercel KV is a durable Redis database that enables you to store and retrieve JSON data. It’s not a service native to Vercel, but one powered by a partnership with Upstash.

By default, a single Redis database is provisioned in the primary region you specify when you create a KV database. This primary region is where write operations will be routed. A KV database may have additional read regions, and read operations will be run in the nearest region to the request that triggers them.

Note that when you do this you are replicating a database, which will significantly increase the usage and cost of the service, as each write command is issued once to your primary database and once to each read replica you have configured.

Not all Vercel regions are supported by KV storage. In fact, only the following regions are supported: Dublin, Frankfurt, São Paulo, Washington, Portland, San Francisco, Singapore, and Sydney.

Changing the primary region of a Vercel KV store is also not supported. If you wish to change the region of your database, you must create a new store and migrate all your data.

The KV storage monthly charges depend on four main variables:

  • Databases: the free plan includes 1 database and does not allow any replicas, while the Pro plan includes up to 5 databases including replicas. Each additional database or replica costs $1.00.
  • Storage: represents the maximum amount of storage used per month. The free plan includes 512 MB, while the Pro plan includes 1 GB, and additional storage is billed at $0.25 per GB.
  • Requests: the number of Redis commands issued against all KV databases on an account, including write operations on replicas. The free plan includes 150,000 requests per month, while the Pro plan includes 150,000 requests and charges $0.35 per 100,000 additional requests.
  • Data transfer: the total amount of data transferred by the functions querying the KV databases on your account. The free plan includes 256 MB of data transfer, while the Pro plan includes 1 GB, and additional data transfer is billed at $0.10 per GB.

An alternative to Vercel KV is Cloudflare KV, a serverless key-value database that enables you to store and retrieve data on the Cloudflare Edge Network. Unlike Vercel’s offering, it is a native Cloudflare service, not one powered by a partnership with another company. It’s also not a Redis database, but a key-value store optimized for edge computing on Cloudflare.

The most common way to access data in Cloudflare KV is through Workers, but you can also access it through the Cloudflare API.

Cloudflare KV is a global database, which means that your data is replicated to all Cloudflare data centers around the world. Your data is not limited to a single region and you do not need to worry about creating replicas on different regions for better performance.

Cloudflare KV pricing and limits vary depending on the plan you choose and the nature of the operation you are performing. You are not charged for data transfer or for the number of databases you create, but for the number of requests you make and the amount of data you store.

For storage, on the free plan you can store up to 1 GB of data, while on the Paid plan you are charged $0.50 per additional GB.

For requests, on the free plan you can make up to 100,000 read requests per day and 1,000 write, delete, and list requests per day. On the Paid plan you are charged $0.50 per additional 10M read requests and $5 per additional 1M write, delete, and list requests.
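To compare the two request-pricing models side by side, here is an illustrative, simplified sketch (function names are mine, and it glosses over the fact that Cloudflare's free allowances are per-day while overages are billed monthly): Vercel KV Pro includes 150,000 requests and then charges $0.35 per 100,000, while Cloudflare KV's Paid plan charges $0.50 per additional 10M reads and $5 per additional 1M writes.

```javascript
// Vercel KV (Pro): request overage past the included 150,000/month.
function vercelKvRequestOverageUSD(requests) {
  const extra = Math.max(0, requests - 150_000);
  return (extra / 100_000) * 0.35;
}

// Cloudflare KV (Paid): overage reads and writes beyond the allowances.
function cloudflareKvOverageUSD(extraReads, extraWrites) {
  return (extraReads / 10_000_000) * 0.50 + (extraWrites / 1_000_000) * 5;
}

// 10M KV requests in a month on Vercel KV:
console.log(vercelKvRequestOverageUSD(10_000_000)); // roughly $34
// 10M extra reads plus 1M extra writes on Cloudflare KV:
console.log(cloudflareKvOverageUSD(10_000_000, 1_000_000)); // roughly $5.50
```

Even at modest volumes the gap is notable, which lines up with the article's cost-effectiveness argument, though read-heavy vs. write-heavy workloads will shift the comparison.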

Cloudflare vs Vercel: Serverless database

A comparison between Cloudflare and Vercel serverless database services

Vercel’s serverless database offering is a PostgreSQL database designed to integrate with Vercel Functions and your frontend framework. It’s also not a native Vercel service, but one powered by a partnership with Neon.

When you create a Vercel Postgres database in your dashboard, a serverless database running PostgreSQL version 15 is provisioned in the region you specify. This region is where read and write operations will be routed and cannot be changed after the database is created.

Not many regions are available for deploying your serverless database. Currently only Cleveland, Washington, Portland, Frankfurt, Singapore, and Sydney are supported.

The choice of region is crucial for the performance of your application, as the closer the database is to the function that is querying it, the faster the response time will be.

Another important thing to consider is that Vercel Postgres databases are not always active. If there are no incoming requests for a specified duration, the database scales down to zero, effectively pausing compute-time billing. This means that you may experience a cold start of up to 1 second when the database is accessed after being inactive. On the Pro plan you can configure the inactivity threshold to decrease the frequency of cold starts.

The total cost of a Vercel Postgres database is calculated based on five factors:

  • Databases: the number of databases in your account. The free plan includes 1 database, while the Pro plan includes 1 database and charges $1.00 per additional database.
  • Compute time: compute time is calculated from the active time of your database multiplied by the number of CPUs available. On the free plan, databases are set up with 0.25 logical CPUs; on the Pro plan they start with 1 CPU and users can adjust the number of CPUs allocated. A database is considered active when processing requests or within the configured idle timeout period after the last request. The free plan includes 100 hours of compute time per month, while the Pro plan includes 100 hours and charges $0.10 per additional hour.
  • Storage: storage is calculated as the maximum amount of storage used per month across all Postgres databases in your account. On the free plan users are limited to 512MB of storage, while on the Pro plan they are limited to 1GB and additional storage is billed at $0.12 per GB.
  • Written data: written data is the amount of data changes committed from compute resources to storage, including operations such as inserts, updates, deletes, and schema migrations. On the free plan users are limited to 512MB, while on the Pro plan they are limited to 1GB and additional written data is billed at $0.096 per GB.
  • Data transfer: data transfer is the volume of data transferred out of the database. On the free plan users are limited to 512MB, while on the Pro plan they are limited to 1GB and additional data transfer is billed at $0.10 per GB.

An alternative to Vercel Postgres is Cloudflare D1, a serverless database native to the Workers platform, built on SQLite, that enables you to store and retrieve data on the Cloudflare Edge Network.

Cloudflare D1 is a global database, which means that your data is available in all Cloudflare data centers around the world. Your data is not limited to a single region and the choice of region is not a concern for the performance of your application.

The D1 database is accessible from the Cloudflare Dashboard and can also be accessed through Workers via the SDK or by integrating with ORM libraries such as Drizzle.

Cloudflare D1 is also based on the pay-as-you-go model, which means that you are charged only for the resources you use, and it can scale to zero like Vercel Postgres without suffering from cold starts when it becomes active again. With Cloudflare D1 you are not charged for data transfer, compute time, or the number of databases you create, but for the amount of data you store and the number of rows you read and write.

Rows read count how many rows a query reads (scans), regardless of the size of each row, while rows written count how many rows were written to the D1 database. Note that Cloudflare charges for rows scanned, not for the number of rows returned by a query. That's why optimizing your database with indexes is crucial for reducing costs when using Cloudflare D1: defining indexes on your table(s) reduces the number of rows read by a query when filtering on the indexed field.

The free plan includes 1GB of storage, while on the Paid plan you are charged $0.75 per additional GB of data.

For requests, on the free plan you can make up to 5M row read requests and 100,000 row write requests per day. On the Paid plan you have up to 25B row read requests and 50M row write requests per month. Additional requests are billed at $0.001 per 1M row reads and $1 per 1M row writes.
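To make the Paid-plan overage concrete, here is a small shell sketch; the monthly usage figures are hypothetical, chosen only to exercise the overage rates quoted above:

```shell
# Hypothetical month on the D1 Paid plan: 30B row reads, 60M row writes
included_reads_m=25000   # 25B row reads included, in millions
included_writes_m=50     # 50M row writes included, in millions
reads_m=30000            # row reads this month, in millions (hypothetical)
writes_m=60              # row writes this month, in millions (hypothetical)

# Overage rates: $0.001 per 1M row reads, $1 per 1M row writes
read_cost=$(awk -v r="$reads_m" -v i="$included_reads_m" 'BEGIN { printf "%.2f", (r - i) * 0.001 }')
write_cost=$(awk -v w="$writes_m" -v i="$included_writes_m" 'BEGIN { printf "%.2f", (w - i) * 1 }')
echo "reads: \$$read_cost, writes: \$$write_cost"   # reads: $5.00, writes: $10.00
```

Note how cheap reads are relative to writes: a full extra 5B reads costs $5, while just 10M extra writes costs $10.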

Cloudflare vs Vercel: Image Optimization

A comparison between Cloudflare and Vercel image optimization services

Vercel Images is a service that manages the upload, optimization, and delivery of images based on factors like size, quality, format, and pixel density. Images that have been optimized are automatically cached on the Vercel Edge Network, ensuring faster delivery to users when they are requested again.

The best way to use the service is by integrating with frameworks like Next.js, Astro, and Nuxt. When you use the <Image> component in each of those frameworks and deploy your project on Vercel, the platform automatically adjusts your images and optimizes them for different screen sizes.

The pricing of Vercel Images is based on the number of unique source images requested during the billing period. A source image is the value passed to the src prop, and one source image can produce multiple optimized images with different sizes and quality levels.

The free plan includes 1,000 source image requests, while the Pro plan includes 5,000 source image requests and charges $5 per 1,000 additional source images.
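As a quick sketch of the Pro-plan math (the request count is hypothetical, and rounding up to the next 1,000 source images is an assumption, not something Vercel documents here):

```shell
included=5000        # source images included in the Pro plan
rate_per_1000=5      # dollars per additional 1,000 source images
used=12000           # hypothetical source images this billing period

extra=$(( used - included ))
overage=$(( (extra + 999) / 1000 * rate_per_1000 ))   # round up to the next 1,000
echo "overage: \$$overage"   # overage: $35
```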

Additionally, charges apply for the bandwidth used when optimized images are delivered from Vercel’s Edge Network to clients.

Cloudflare Images is a similar service from Cloudflare that manages the upload, optimization, and delivery of images from the Cloudflare Edge Network. Images are automatically resized, compressed, and converted to the most efficient format for the user’s device and network conditions.

You can upload images to Cloudflare Images through the Cloudflare Dashboard or the Cloudflare API. Once uploaded, you can access the images directly through the Cloudflare CDN or through the Cloudflare API.

Once uploaded, images can be resized for different use cases using image variants. By default images are served using a public variant, but you can also create up to 100 custom variants for different screen sizes, devices, and network conditions. You can also transform images when requesting them via URL or via Cloudflare Workers, but note that transformations are billed separately from the images delivered.

Cloudflare Images pricing is based on a postpaid model, with charges based on the total number of images delivered, transformed, and stored per month.

If you are storing images in Cloudflare Images, you will be billed $5 per 100,000 images stored and $1 per 100,000 images requested by the browser and delivered to the user. You will not be billed for images delivered if you are optimizing images stored elsewhere, such as S3 or R2 buckets.

Apart from images stored and delivered, you can also be charged for images transformed. A unique transformation is defined as a request to transform an original image and costs $0.50 per 100,000 transformations. Transformation pricing does not include the image variants you previously set up.

Conclusion

By looking at the features and pricing of Vercel and Cloudflare, we can see that Cloudflare is a great alternative to Vercel in 2024 for all the major services that Vercel offers.

If you need an alternative to Vercel for hosting your website or web application: Cloudflare Pages. If you need an alternative for running serverless functions: Cloudflare Workers. If you need an alternative for storing data: Cloudflare KV and Cloudflare D1. And if you need an alternative for image optimization: Cloudflare Images.

Of course, you don't need to migrate your entire application to Cloudflare and leave Vercel completely. You can take a mixed approach, like Ilias did, and move only some parts of your application to Cloudflare, making better development choices and optimizing your website or application to reduce costs and improve performance.

You can also take a slow approach and migrate your application to Cloudflare gradually, starting with the most critical parts of your application and moving the rest as you see fit.

Read the whole story
emrox
2 hours ago
reply
Hamburg, Germany
Share this story
Delete

The guide to Git I never had


🩺 Doctors have stethoscopes.
🔧 Mechanics have spanners.
👨‍💻 We developers, have Git.

Have you noticed that Git is so integral to working with code that people hardly ever include it in their tech stack or on their CV at all? The assumption is you know it already, or at least enough to get by, but do you?

Git is a Version Control System (VCS). The ubiquitous technology that enables us to store, change, and collaborate on code with others.

🚨 As a disclaimer, I would like to point out that Git is a massive topic. Git books have been written, and blog posts that could be mistaken for academic papers too. That's not what I’m going for here. I'm no Git expert. My aim here is to write the Git fundamentals post I wish I had when learning Git.

As developers, our daily routine revolves around reading, writing, and reviewing code. Git is arguably one of the most important tools we use. Mastering the features and functionalities Git offers is one of the best investments you can make in yourself as a developer.

So let’s get started

If you feel I missed or should go into more detail on a specific command, let me know in the comments below. And I will update this post accordingly. 🙏

While we are on the topic

If you are looking to put your Git skills to work and would like to contribute to Glasskube, we officially launched in February and we aim to be the no-brainer, default solution for Kubernetes package management. With your support, we can make it happen. The best way to show your support is by starring us on GitHub ⭐

Let’s lay down the foundations

Does Git ever make you feel like Peter Griffin? If you don’t learn Git the right way you run the risk of constantly scratching your head, getting stuck on the same issues, or rueing the day you see another merge conflict appear in your terminal. Let's ensure that doesn’t happen by defining some foundational Git concepts.

Branches

In a Git repository, you'll find a main line of development, typically named "main" (or "master", the older default) from which several branches diverge. These branches represent simultaneous streams of work, enabling developers to tackle multiple features or fixes concurrently within the same project.

Git branches

Commits

Git commits serve as bundles of updated code, capturing a snapshot of the project's code at a specific point in time. Each commit records the changes made since the previous commit, together building a comprehensive history of the project's development journey.

Git commits

When referencing commits, you will generally use the commit's unique cryptographic hash.

Example:

git show <commit-hash>

This shows detailed information about the commit with that hash.

Tags

Git tags serve as landmarks within the Git history, typically marking significant milestones in a project's development, such as releases, versions, or standout commits. These tags are invaluable for marking specific points in time, often representing the starting points or major achievements in a project's journey.

Git tags
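A minimal sketch of creating and inspecting an annotated tag in a throwaway repository (the version number and messages are placeholders):

```shell
# Throwaway repo with a local identity, just for illustration
git init tags-demo && cd tags-demo
git config user.email dev@example.com
git config user.name Dev
git commit --allow-empty -m "initial commit"

git tag -a v1.0.0 -m "First release"   # annotated tag marking a release
git tag -l                             # list tags: v1.0.0
git show v1.0.0 --no-patch             # tagger, message, and the tagged commit
```

Tags are not pushed by default; to share one, push it explicitly with `git push origin v1.0.0`.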

HEAD

HEAD is a pointer to the current position in the repository. Normally, HEAD points to the most recent commit on the currently checked-out branch. Sometimes, instead of pointing to the tip of a branch, HEAD can directly point to a specific commit; this is known as the detached HEAD state.
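You can observe the detached HEAD state yourself in a throwaway repository:

```shell
# Throwaway repo with two commits, just for illustration
git init head-demo && cd head-demo
git config user.email dev@example.com
git config user.name Dev
git commit --allow-empty -m "first"
git commit --allow-empty -m "second"

git checkout HEAD~1                          # detached HEAD: points at "first" directly
git symbolic-ref -q HEAD || echo "detached"  # no branch ref behind HEAD anymore
git switch -                                 # reattach by returning to the previous branch
```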

Stages

Understanding Git stages is crucial for navigating your Git workflow. They represent the logical transitions where changes to your files occur before they are committed to the repository. Let's delve into the concept of Git stages:

Git stages

Working directory 👷

The working directory is where you edit, modify, and create files for your project; it represents the current state of your files on your local machine.

Staging area 🚉

The staging area is like a holding area or a pre-commit zone where you prepare your changes before committing them to the repository.

Useful command here: git add. To unstage changes, use git restore --staged <file> (not git rm, which removes files from tracking altogether).

Local repository 🗄️

The local repository is where Git permanently stores the committed changes. It allows you to review your project's history, revert to previous states, and collaborate with others on the same codebase.

You can commit changes that are ready in the staging area with: git commit

Remote repository 🛫

The remote repository is a centralized location, typically hosted on a server (like GitHub, GitLab, or Bitbucket), where you can share and collaborate with others on your project.

You can use commands like git push and git pull to push/pull your committed changes from your local repository to the remote repository.
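The four stages can be walked through in a throwaway repository; the remote step is left commented out, since it requires a configured remote:

```shell
# Throwaway repo with a local identity, just for illustration
git init stages-demo && cd stages-demo
git config user.email dev@example.com
git config user.name Dev

echo "hello" > file.txt        # working directory: a new, untracked file
git status --short             # shows: ?? file.txt
git add file.txt               # staging area
git status --short             # shows: A  file.txt
git commit -m "add file"       # local repository
# git push origin main         # remote repository (needs a configured remote)
```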

Getting Started with Git

Well, you have to start somewhere, and in Git that is your workspace. You can fork or clone an existing repository to get a copy of that workspace, or, if you are starting completely fresh in a new local folder on your machine, you have to turn it into a Git repository with git init. The next step, crucially not to be overlooked, is setting up your credentials.

Source:  Shuai Li

Credentials set up

When pushing and pulling to a remote repository, you don’t want to have to type your username and password every time. Avoid that by simply executing the following command:

git config --global credential.helper store

The first time you interact with the remote repository, Git will prompt you to input your username and password. After that, you won’t be prompted again.

It's important to note that the credentials are stored in a plaintext format within a .git-credentials file.

To check the configured credentials, you can use the following command:

git config --global credential.helper

Working with branches

When working locally it’s crucial to know which branch you are currently on. These commands are helpful:

# List the branches in the local repository
git branch

# Or create a branch directly with
git branch feature-branch-name

To transition between branches, use:

git switch feature-branch-name

In addition to transitioning between them, you can also use:

git checkout feature-branch-name
# A shortcut to create a new branch and switch to it with the -b flag
git checkout -b feature-branch-name

To check the repository's state, use:

git status

A great way to always have a clear view of your current branch is to see it right in the terminal. Many terminal add-ons can help with this. Here is one.

Terminal view

Working with commits

When working with commits, utilize git commit -m to record changes, git commit --amend to modify the most recent commit, and try your best to adhere to commit message conventions.

# Make sure to add a message to each commit
git commit -m "meaningful message"

If you have changes to your last commit, you don’t have to create another commit altogether; you can use the --amend flag to amend the most recent commit with your staged changes:

# make your changes
git add .
git commit --amend
# This will open your default text editor to modify the commit message if needed.
git push origin your_branch --force

⚠️ Exercise caution when utilizing --force, as it has the potential to overwrite the history of the target branch. Its application on the main/master branch should be generally avoided.

As a rule of thumb, it's better to commit more often than not, to avoid losing progress or accidentally resetting unstaged changes. You can rewrite the history afterward by squashing multiple commits or doing an interactive rebase.

Use git log to show a chronological list of commits, starting from the most recent and working backward in time.
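A few commonly used git log variations, shown in a throwaway repository:

```shell
# Throwaway repo with two commits, just for illustration
git init log-demo && cd log-demo
git config user.email dev@example.com
git config user.name Dev
echo a > a.txt && git add a.txt && git commit -m "add a"
echo b > b.txt && git add b.txt && git commit -m "add b"

git log --oneline            # one line per commit, newest first
git log --graph --decorate   # ASCII graph with branch and tag labels
git log -p -1                # the latest commit together with its full diff
```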

Manipulating History

Manipulating History involves some powerful commands. Rebase rewrites commit history, Squashing combines multiple commits into one, and Cherry-picking selects specific commits.

Rebasing and merging

It makes sense to compare rebasing to merging since their aim is the same, but they achieve it in different ways. The crucial difference is that rebasing rewrites the project's history, a desirable choice for projects that value a clear and easily understandable history. Merging, on the other hand, maintains both branch histories by generating a new merge commit.

During a rebase, the commit history of the feature branch is restructured as it's moved onto the HEAD of the main branch

rebase

The workflow here is pretty straightforward.

Ensure you're on the branch you want to rebase and fetch the latest changes from the remote repository:

git checkout your_branch
git fetch

Now choose the branch you want to rebase onto and run this command:

git rebase upstream_branch

After rebasing, you might need to force-push your changes if the branch has already been pushed to a remote repository:

git push origin your_branch --force

⚠️ Exercise caution when utilizing --force, as it has the potential to overwrite the history of the target branch. Its application on the main/master branch should be generally avoided.

Squashing

Git squashing is used to condense multiple commits into a single, cohesive commit.

git squashing

The concept is easy to understand and especially useful if your method of unifying code is rebasing. Since the history will be altered, it’s important to be mindful of the effects on the project history. There have been times I have struggled to perform a squash, especially using interactive rebase; luckily, we have some tools to help us. My preferred method of squashing involves moving the HEAD pointer back X commits while keeping the staged changes:

# Change the number after HEAD~ to the number of commits you want to squash
git reset --soft HEAD~X
git commit -m "Your squashed commit message"
git push origin your_branch --force

⚠️ Exercise caution when utilizing --force, as it has the potential to overwrite the history of the target branch. Its application on the main/master branch should be generally avoided.

Cherry-picking

Cherry-picking is useful for selectively incorporating changes from one branch to another, especially when merging entire branches is not desirable or feasible. However, it's important to use cherry-picking judiciously, as it can lead to duplicate commits and divergent histories if misapplied.

Cherry-picking

To perform a cherry-pick, you first have to identify the hash of the commit you would like to pick; you can do this with git log. Once you have the commit hash, run:

git checkout target_branch
git cherry-pick <commit-hash> # Do this multiple times if multiple commits are wanted
git push origin target_branch

Advanced Git Commands

Signing commits

Signing commits is a way to verify the authenticity and integrity of your commits in Git. It allows you to cryptographically sign your commits using your GPG (GNU Privacy Guard) key, assuring Git that you are indeed the author of the commit. You can do so by creating a GPG key and configuring Git to use the key when committing. Here are the steps:

# Generate a GPG key
gpg --gen-key

# Configure Git to Use Your GPG Key
git config --global user.signingkey <your-gpg-key-id>

# Add the public key to your GitHub account

# Signing your commits with the -S flag
git commit -S -m "Your commit message"

# View signed commits
git log --show-signature

Git reflog

A topic that we haven’t explored yet is Git references: pointers to various objects within the repository, primarily commits, but also tags and branches. They serve as named points in the Git history, allowing you to navigate the repository's timeline and access specific snapshots of the project. Knowing how to navigate Git references can be very useful, and you can use git reflog to do just that. Here are some of the benefits:

  • Recovering lost commits or branches
  • Debugging and troubleshooting
  • Undoing mistakes
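Here is a sketch of recovering a commit that a hard reset made unreachable:

```shell
# Throwaway repo with two commits, just for illustration
git init reflog-demo && cd reflog-demo
git config user.email dev@example.com
git config user.name Dev
echo one > notes.txt && git add notes.txt && git commit -m "first"
echo two >> notes.txt && git commit -am "second"

git reset --hard HEAD~1    # "second" is now unreachable from any branch
git reflog                 # every position HEAD has been at, newest first
git reset --hard HEAD@{1}  # jump back to where HEAD was before the reset
git log --oneline          # "second" is restored
```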

Interactive rebase

Interactive rebase is a powerful Git feature that allows you to rewrite commit history interactively. It enables you to modify, reorder, combine, or delete commits before applying them to a branch.

In order to use it, you have to become familiar with the possible actions, such as:

  • Pick (“p“)
  • Reword (“r“)
  • Edit (“e“)
  • Squash (“s“)
  • Drop (“d“)

Interactive rebase

Here is a useful video to learn how to perform an interactive rebase in the terminal, I have also linked a useful tool at the bottom of the blog post.
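Interactive rebase normally opens your editor on a todo list of commits. For illustration, the todo list can also be rewritten non-interactively; the sketch below assumes GNU sed and marks the newest commit as a fixup, melding it into the one before:

```shell
# Throwaway repo with three commits, just for illustration
git init rebase-demo && cd rebase-demo
git config user.email dev@example.com
git config user.name Dev
for n in 1 2 3; do echo "$n" >> log.txt; git add log.txt; git commit -m "commit $n"; done

# Rewrite the todo list without an editor: line 2 ("commit 3") becomes a fixup
GIT_SEQUENCE_EDITOR='sed -i -e "2s/^pick/fixup/"' git rebase -i HEAD~2
git log --oneline   # "commit 3" has been melded into "commit 2"
```

Interactively, you would make the same edit by hand in the editor that git rebase -i opens.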

Collaborating with Git

Origin vs Upstream

The origin is the default remote repository associated with your local Git repository when you clone it. If you've forked a repository, then that fork becomes your "origin" repository by default.

Upstream on the other hand refers to the original repository from which your repository was forked.

To keep your forked repository up-to-date with the latest changes from the original project, you fetch changes from the "upstream" repository and merge or rebase them into your local repository.

# Pulling will fetch the changes and merge them into your working branch
git pull <remote_name> <branch_name>
# If you don't want to merge the changes, use
git fetch <remote_name>

To see the remote repositories associated with your local Git repo, run:

git remote -v
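A sketch of the origin/upstream setup, using a local bare repository to stand in for the original project (in practice, upstream would be the original repo's URL):

```shell
# A local bare repo plays the role of the original ("upstream") project
git init --bare upstream.git
git clone upstream.git fork   # your fork becomes "origin"
cd fork
git remote add upstream ../upstream.git
git remote -v                 # origin (your fork) and upstream (the original)
git fetch upstream            # pull down the latest upstream history
```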

Conflicts

Don’t panic: when Git detects conflicts while merging or rebasing a branch, it only means there are conflicting changes between different versions of the same file or files in your repository, and they can be easily resolved (most of the time).

They are typically indicated within the affected files, where Git inserts conflict markers <<<<<<<, ======= and >>>>>>> to highlight the conflicting sections. Decide which changes to keep, modify, or remove, ensuring that the resulting code makes sense and retains the intended functionality.

After manually resolving conflicts in the conflicted files, remove the conflict markers <<<<<<<, =======, and >>>>>>> and adjust the code as necessary.

Save the changes in the conflicted files once you're satisfied with the resolution.
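Here is a self-contained sketch that manufactures a conflict and resolves it (branch names and file contents are placeholders):

```shell
# Throwaway repo where two branches edit the same line, just for illustration
git init conflict-demo && cd conflict-demo
git config user.email dev@example.com
git config user.name Dev
echo "red" > color.txt && git add color.txt && git commit -m "base"

git switch -c feature
echo "blue" > color.txt && git commit -am "feature: blue"
git switch -                 # back to the initial branch
echo "green" > color.txt && git commit -am "main: green"

git merge feature || true    # conflict: both branches changed color.txt
grep "<<<<<<<" color.txt     # Git has inserted conflict markers

echo "teal" > color.txt      # resolve by writing the final content
git add color.txt
git commit -m "merge feature, resolving color conflict"
```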

If you have issues resolving conflicts, this video does a good job at explaining it.

Popular Git workflows

Various Git workflows exist, however, it's important to note that there's no universally "best" Git workflow. Instead, each approach has its own set of pros and cons. Let's explore these different workflows to understand their strengths and weaknesses.

Feature Branch Workflow 🌱

Each new feature or bug fix is developed in its own branch, which is then merged back into the main branch by opening a PR once completed.

  • Strength: Isolation of changes and reducing conflicts.
  • Weakness: Can become complex and require diligent branch management.

Gitflow Workflow 🌊

Gitflow defines a strict branching model with predefined branches for different types of development tasks.

It includes long-lived branches such as main, develop, feature branches, release branches, and hotfix branches.

  • Strength: Suitable for projects with scheduled releases and long-term maintenance.
  • Weakness: Can be overly complex for smaller teams.

Forking Workflow 🍴

In this workflow, each developer clones the main repository, but instead of pushing changes directly to it, they push changes to their own fork of the repository. Developers then create pull requests to propose changes to the main repository, allowing for code review and collaboration before merging.

This is the workflow we use to collaborate on the open-source Glasskube repos.

  • Strength: Encourages collaboration from external contributors without granting direct write access to the main repository.
  • Weakness: Maintaining synchronization between forks and the main repository can be challenging.

Trunk-Based Development 🪵

If you are on a team focused on rapid iteration and continuous delivery, you might use trunk-based development, in which developers work directly on the main branch, committing small and frequent changes.

  • Strength: Promotes rapid iteration, continuous integration, and a focus on delivering small, frequent changes to production.
  • Weakness: Requires robust automated testing and deployment pipelines to ensure the stability of the main branch, may not be suitable for projects with stringent release schedules or complex feature development.

What the fork?

Forking is highly recommended for collaborating on Open Source projects since you have complete control over your own copy of the repository. You can make changes, experiment with new features, or fix bugs without affecting the original project.

💡 What took me a long time to figure out was that although forked repositories start as separate entities, they retain a connection to the original repository. This connection allows you to keep track of changes in the original project and synchronize your fork with updates made by others.

That’s why, even when you push to your origin repository, your changes can show up in the original repository's network as well.

Git Cheatsheet


# Clone a Repository
git clone <repository_url>

# Stage Changes for Commit
git add <file(s)>

# Commit Changes
git commit -m "Commit message"

# Push Changes to the Remote Repository
git push

# Force Push Changes (use with caution)
git push --force

# Reset Working Directory to Last Commit
git reset --hard

# Create a New Branch
git branch <branch_name>

# Switch to a Different Branch
git checkout <branch_name>

# Merge Changes from Another Branch
git merge <branch_name>

# Rebase Changes onto Another Branch (use with caution)
git rebase <base_branch>

# View Status of Working Directory
git status

# View Commit History
git log

# Undo Last Commit (use with caution)
git reset --soft HEAD^

# Discard Changes in Working Directory
git restore <file(s)>

# Retrieve Lost Commit References
git reflog

# Interactive Rebase to Rearrange Commits
git rebase --interactive HEAD~3

# Pull changes from remote repo
git pull <remote_name> <branch_name>

# Fetch changes from remote repo
git fetch <remote_name>

  • Tool for interactive rebasing.

  • Cdiff to view colorful, incremental diffs.

  • Interactive Git branching playground


If you like this sort of content and would like to see more of it, please consider supporting us by giving us a Star on GitHub 🙏


A brief history of web development. And why your framework doesn't matter.


Linux On Desktop In 2023
