
Cloudflare / Vercel comparison


Introduction

Recently there has been a lot of fuss around so-called serverless horror stories: the infinite scale of serverless computing leading to a huge bill at the end of the month. The truth is that Vercel is a great platform for developers, but it can be expensive compared to other alternatives.

Vercel can be truly costly if you make a mistake or don't have a good understanding of how the platform works and how to optimize your website or application. For example, Ilias Ism shared how he was getting billed $2,000+ per month on Vercel for basic services.

But that is not the only or most horrifying story.

There are many other posts across the internet describing developers getting billed thousands of dollars for a simple website or application. That's the case of Michael Aubury, who got a $23,000 bill when someone targeted his Vercel deployment with a DDoS attack, and Mike Ramirez, who got a $3,000 bill in 6 hours for a small mistake in his code.

If you are looking for a more cost-effective alternative to Vercel, Cloudflare is a great option. It offers a wide range of services that can help you optimize your website or application, often at a lower cost.

Of course, you don't need to migrate your entire application to Cloudflare, leaving Vercel completely. Ilias, for example, took a mixed approach, including moving image optimization to Cloudflare, making better development choices like disabling prefetch in <Link> tags, and moving things around in the Vercel ecosystem itself, like using the edge runtime whenever possible.
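
To make that concrete, here is a minimal Next.js sketch of two of those in-Vercel optimizations; the route and component names are placeholders, but prefetch={false} on next/link and the edge runtime segment option are standard Next.js features:

// app/posts/page.tsx -- hypothetical App Router page, shown only as a sketch
import Link from 'next/link';

// Opt this route into the Edge runtime instead of the default Node.js runtime
export const runtime = 'edge';

export default function PostsPage() {
  return (
    <nav>
      {/* Disable prefetching so simply rendering this link does not trigger extra requests */}
      <Link href="/posts/pricing-deep-dive" prefetch={false}>
        Pricing deep dive
      </Link>
    </nav>
  );
}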

In this article we will explore Vercel features and pricing, and how Cloudflare can be the best alternative to Vercel in 2024.

What is Cloudflare?

It's hard to explain what Cloudflare is in a few words. It's well known for its CDN and security services, but that's not all. It's a serverless hosting platform that can help you deploy your website or application in a cost-effective way, a domain registrar that can help you manage your domains, and a DNS provider that can help you manage your DNS records. And much more than that.

Honestly, Cloudflare is all you can think of when it comes to web infrastructure for a better, faster, and more secure internet. At the heart of all its services is the Cloudflare Edge Network, a global network of servers that run code, provide compute, and store data close to the end user, reducing latency and improving performance.

Of course, Vercel also has a global network of servers, but Cloudflare's network is much larger. Vercel's network has 18 regions and a bit over 100 points of presence, while Cloudflare has over 300 data centers around the world.

The availability of data centers for deployment is also a big advantage of Cloudflare. On Vercel, as with other providers such as AWS, you have to choose a region to deploy your application. On Cloudflare, the region is the world. You don't need to worry about where your application is deployed or where your data is stored.

Your website is always at the edge of the network, close to all your end users.

Cloudflare vs Vercel: Hosting and Deployment

A comparison between Cloudflare and Vercel hosting and deployment services

Let's start with Vercel.

What made Vercel so popular is how simple it makes hosting a website or web application. You can literally connect your GitHub repository to Vercel and deploy your website in a few clicks, without having to worry about servers, infrastructure, or anything else.

You push your code to GitHub, link the repository to a Vercel project, and deployments start with every push to the repository. Vercel will build your website and deploy it to the cloud, making it available to the world.

It has great integration with Next.js, which was created by the Vercel team, but it also supports a wide range of popular frontend frameworks, optimizing how your site builds no matter what tool you use.

When hosting your website on Vercel, it charges mostly for bandwidth, or the amount of data transferred from your website to your users. Data transferred includes both outgoing data, or data sent from your website to your users, and incoming data, or data sent from your users to your website.

Originally, the free plan included 100 GB of bandwidth per month, and the Pro plan 1 TB of bandwidth per month. If you exceeded your bandwidth limit, you were charged $40 per extra 100 GB of bandwidth. But with the new pricing model there is no longer a one-size-fits-all approach: Vercel is breaking down the bandwidth pricing for hosting applications and websites into three variables: Fast Data Transfer, Edge Requests, and Data Cache.

Fast Data Transfer is the data transferred between the Vercel Edge Network and the end user. The free plan includes 100 GB of fast data transfer, and the Pro plan includes 1 TB, with extra charges starting at $0.15 per GB. Prices now vary depending on the region where the data is being transferred (a rough worked example follows the list):

  • $0.15 per GB: Cleveland, USA (cle1), Washington, D.C., USA (iad1), Portland, USA (pdx1), Stockholm, Sweden (arn1), London, United Kingdom (lhr1), Frankfurt, Germany (fra1), Dublin (dub1), Paris, France (cdg1), San Francisco, USA (sfo1)
  • $0.30 per GB: Singapore (sin1), Hong Kong (hkg1)
  • $0.31 per GB: Osaka, Japan (kix1), Tokyo, Japan (hnd1)
  • $0.32 per GB: Sydney, Australia (syd1)
  • $0.33 per GB: Mumbai, India (bom1)
  • $0.39 per GB: Cape Town, South Africa (cpt1)
  • $0.44 per GB: São Paulo, Brazil (gru1)
  • $0.47 per GB: Seoul, South Korea (icn1)
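
As a rough, hypothetical illustration of how these regional rates add up on the Pro plan (the traffic figure is invented; the rate comes from the list above):

// Hypothetical month: 1.5 TB of Fast Data Transfer, all served from Washington, D.C. (iad1)
const includedGB = 1_000;   // Pro plan includes 1 TB
const usedGB = 1_500;
const ratePerGB = 0.15;     // iad1 rate from the list above

const overageGB = Math.max(usedGB - includedGB, 0);
const fastDataTransferCost = overageGB * ratePerGB;

console.log(fastDataTransferCost); // 500 GB * $0.15 = $75, on top of the $20 Pro subscription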

Edge Requests refer to the number of requests made to the Vercel Edge Network when serving your website or application to an end user. When a user accesses your website, their request is routed to the nearest Vercel Edge Network, reducing latency and improving performance. For example, loading a single web page might involve requests for the HTML document, CSS files, JavaScript files, images, and so on. Each of these requests is counted as an Edge Request and incurs charges according to Vercel’s pricing model.

The free plan includes 1 million edge requests per month, while the Pro plan includes 10 million edge requests per month with additional charges starting at $2 per 1 million edge requests. Pricing will also vary depending on the region where the request is being made:

  • $2 per 1M requests: Cleveland, USA (cle1), Washington, D.C., USA (iad1), Portland, USA (pdx1)
  • $2.20 per 1M requests: Stockholm, Sweden (arn1), Mumbai, India (bom1)
  • $2.40 per 1M requests: London, United Kingdom (lhr1), Dublin (dub1), Paris, France (cdg1), San Francisco, USA (sfo1)
  • $2.60 per 1M requests: Singapore (sin1), Sydney, Australia (syd1), Osaka, Japan (kix1), Seoul, South Korea (icn1), Tokyo, Japan (hnd1), Frankfurt, Germany (fra1)
  • $2.80 per 1M requests: Hong Kong (hkg1), Cape Town, South Africa (cpt1)
  • $3.20 per 1M requests: São Paulo, Brazil (gru1)

Data Cache refers to the sum of all data that has been written to the Vercel Edge Network for quick access and subsequently retrieved (read) from the cache storage. The pricing depends on the operation performed on the cache storage, write or read, and on the region where the data is being cached. The free plan includes 2M cache writes and 10M cache reads, while on the Pro plan additional usage is billed starting at $4 per 1M writes and $0.40 per 1M reads, varying by region:

  • Data Cache Writes:

    • $4 per 1M writes: Cleveland, USA (cle1), Washington, D.C., USA (iad1), Portland, USA (pdx1)
    • $4.40 per 1M writes: Stockholm, Sweden (arn1), Mumbai, India (bom1)
    • $4.80 per 1M writes: London, United Kingdom (lhr1), Dublin (dub1), Paris, France (cdg1), San Francisco, USA (sfo1)
    • $5.20 per 1M writes: Singapore (sin1), Sydney, Australia (syd1), Osaka, Japan (kix1), Seoul, South Korea (icn1), Tokyo, Japan (hnd1), Frankfurt, Germany (fra1)
    • $5.60 per 1M writes: Hong Kong (hkg1), Cape Town, South Africa (cpt1)
    • $6.40 per 1M writes: São Paulo, Brazil (gru1)
  • Data Cache Reads:

    • $0.40 per 1M reads: Cleveland, USA (cle1), Washington, D.C., USA (iad1), Portland, USA (pdx1)
    • $0.44 per 1M reads: Stockholm, Sweden (arn1), Mumbai, India (bom1)
    • $0.48 per 1M reads: London, United Kingdom (lhr1), Dublin (dub1), Paris, France (cdg1), San Francisco, USA (sfo1)
    • $0.52 per 1M reads: Singapore (sin1), Sydney, Australia (syd1), Osaka, Japan (kix1), Seoul, South Korea (icn1), Tokyo, Japan (hnd1), Frankfurt, Germany (fra1)
    • $0.56 per 1M reads: Hong Kong (hkg1), Cape Town, South Africa (cpt1)
    • $0.64 per 1M reads: São Paulo, Brazil (gru1)

Additional variables you need to be aware of are build execution time, or the time it takes to build your website or application during a deployment, and concurrent builds, or the number of builds that can run at the same time.

The free plan includes 1 concurrent build; the Pro plan also includes 1 and charges $50 for each additional concurrent build. As for build execution time, the free plan includes 6,000 minutes of build time, and the Pro plan includes 24,000 minutes.

In Vercel, the Pro plan starts at $20 per month.

What about Cloudflare?

Cloudflare offers a similar service to Vercel called Cloudflare Pages.

Cloudflare Pages started as a JAMstack hosting service that gained recognition by leveraging the Cloudflare Edge Network to deliver websites quickly and securely, but it has since evolved into a full serverless hosting platform that can run any kind of application, from static websites to full-stack web applications that require server-side logic.

As with Vercel, you can connect your GitHub repository and deploy your website in a few clicks. It doesn't matter whether you are using Next.js or any other framework: Cloudflare Pages will build your website and deploy it to the cloud, making it available to the world.

The biggest difference you will notice between Vercel and Cloudflare Pages is the pricing. Cloudflare Pages is much more cost-effective than Vercel, primarily because it doesn't charge for bandwidth.

That's right: you're not billed for bandwidth when hosting your website on Cloudflare Pages, so you don't need to worry about how many users are accessing your website or how much data is being transferred.

As in Vercel, Cloudflare Pages also limits concurrent builds and the total number of builds an account can run per month. The free plan includes 1 concurrent build and 500 builds per month, and the Pro plan includes 5 concurrent builds and 5,000 builds per month.

In Cloudflare, the Pro plan starts at $25 per month.

Cloudflare vs Vercel: Serverless Functions

A comparison between Cloudflare and Vercel serverless functions services

Serverless functions are a great way to add dynamic functionality to your website. They are basically pieces of code that run in the cloud, and can be triggered by events like HTTP requests, database changes, or scheduled tasks.

Vercel offers a service called Vercel Functions that allows you to run serverless functions on the Vercel Edge Network, close to your users. The functions scale automatically on demand and can interact with APIs, databases, and resources on the web and in the Vercel ecosystem.

The infrastructure and what serverless functions can do are determined by the runtime environment you choose for them. Available runtimes include Node.js, Go, Ruby, Python, and the Edge runtime. Serverless functions that run on the Edge runtime are more lightweight and are billed differently.

Vercel functions can suffer from cold starts, or the delay that occurs when an inactive function is invoked for the first time, as the function has to be initialized and loaded into memory. Cold starts can be reduced by keeping your functions warm, or invoking them periodically to prevent them from being suspended.

As with AWS Lambda, Vercel asks you to choose a region where you want to deploy your functions. This is a crucial step that can significantly impact latency and performance, as the closer the function is to the user, the faster it will respond. If you use a Vercel storage service, like KV or Postgres, you should also consider the region where your data is stored and deploy your functions close to it.

Before the pricing updates in April 2024, Vercel did not vary prices per region the way AWS does, but now pricing can differ depending on the region where the function is executed because of data transfer costs. Function Duration and Invocations are still priced the same across regions, but Data Transfer is charged differently.

For billing purposes, Edge functions charge for CPU time, or the time spent directly executing your function, while other runtimes charge for wall-clock time, or the total time your function is running, including idle time or the time it takes to boot the environment and load your function into memory. CPU time is measured in execution units of 50ms each, while wall-clock time is measured in GB-hours, which is the memory allocated for each function in GB, multiplied by the time in hours they were running.

Before the pricing update, the free plan included 500,000 execution units for Edge functions, and 100 GB-hours for serverless functions, while the Pro plan included 1M execution units for Edge functions, 1,000 GB-hours for serverless functions, and charged $2.00 for each additional 1M execution units and $40 for each additional 100 GB-hours.

Now, the free plan includes 1M execution units for Edge runtime functions and 1,000 GB-hours for serverless functions, while the Pro plan charges $2 per additional 1M execution units and $18 per additional 100 GB-hours ($0.18 per GB-hour).
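
To make the wall-clock billing concrete, here is a small, hypothetical calculation using the figures above (the traffic numbers are invented for the example):

// Hypothetical Node.js runtime function: 1 GB of memory,
// 500 ms of wall-clock time per invocation, 2 million invocations per month
const memoryGB = 1;
const secondsPerInvocation = 0.5;
const invocations = 2_000_000;

const gbHours = (memoryGB * secondsPerInvocation * invocations) / 3600; // ~277.8 GB-hours

// With 1,000 GB-hours included (per the figures above), this stays inside the quota;
// anything beyond it would be billed at $0.18 per additional GB-hour
const overageCost = Math.max(gbHours - 1_000, 0) * 0.18;
console.log(gbHours.toFixed(1), overageCost);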

Apart from the duration variables, Vercel is now also charging for two new variables: Function Invocations and Data Transfer. That is a big change in the pricing model, as before you were only charged for the time your function was running.

Function Invocations are the number of times your function is invoked, including both successful and failed invocations. The free plan includes 1M function invocations, while the Pro plan includes 10M invocations and charges $0.60 for each additional 1M invocations.

Data Transfer in the context of serverless functions is called Fast Origin Transfer and refers to the data transferred between the Vercel Edge Network and your functions. The free plan includes 100 GB of fast origin transfer, and the Pro plan includes 1 TB, with extra charges starting at $0.06 per GB. Prices also vary depending on the region where the data is being transferred:

  • $0.06 per GB: Cleveland, USA (cle1), Washington, D.C., USA (iad1), Portland, USA (pdx1), Stockholm, Sweden (arn1), London, United Kingdom (lhr1), Frankfurt, Germany (fra1), Dublin (dub1), Paris, France (cdg1), San Francisco, USA (sfo1)
  • $0.27 per GB: Singapore (sin1), Osaka, Japan (kix1), Tokyo, Japan (hnd1), Hong Kong (hkg1)
  • $0.24 per GB: Seoul, South Korea (icn1)
  • $0.25 per GB: Mumbai, India (bom1)
  • $0.29 per GB: Sydney, Australia (syd1)
  • $0.43 per GB: Cape Town, South Africa (cpt1)
  • $0.41 per GB: São Paulo, Brazil (gru1)

Cloudflare also offers a service called Cloudflare Workers that allows you to run serverless functions on the Cloudflare Edge Network, close to your users. Workers also scale automatically on demand and can interact with APIs, databases, and resources on the web and in the Cloudflare ecosystem.

Currently, Cloudflare serverless functions must be written in TypeScript/JavaScript or any language that can be compiled to WebAssembly.

Cloudflare Workers do not suffer from cold starts, because they run on V8 isolates that can spin up a function in under 5 milliseconds. This means that your functions are always ready to execute, no matter how long they have been inactive. This is a huge advantage over Vercel functions, as cold starts can be a big issue for some applications.

With Workers, you also don't need to worry about which region to deploy your application to. On Cloudflare the region is the world by default, which means your code always runs close to your resources and your users.
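
For reference, a minimal Worker looks something like this (module syntax; the response below is just a placeholder):

// A minimal Cloudflare Worker. It runs in every Cloudflare data center,
// so there is no region to choose when deploying it.
export default {
  async fetch(request: Request): Promise<Response> {
    const { pathname } = new URL(request.url);
    return new Response(`Hello from the edge, you requested ${pathname}`, {
      headers: { 'content-type': 'text/plain' },
    });
  },
};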

Because Cloudflare eliminated cold starts, Workers do not charge for wall time; by default, only CPU time is used for billing purposes. The free plan includes 100,000 requests per day and an average of 10 ms of CPU time per invocation, while the Standard plan starts at $5 and includes 10M requests and 30 million CPU milliseconds per month. Additional requests are billed at $0.30 per 1M requests, and additional CPU time at $0.02 per million CPU milliseconds.
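
As a rough, hypothetical illustration using the Standard plan numbers above (traffic figures invented for the example):

// Hypothetical month on the Workers Standard plan:
// 15M requests, averaging 10 ms of CPU time each
const requests = 15_000_000;
const cpuMs = requests * 10; // 150M CPU-ms

const requestOverage = (Math.max(requests - 10_000_000, 0) / 1_000_000) * 0.30; // $1.50
const cpuOverage = (Math.max(cpuMs - 30_000_000, 0) / 1_000_000) * 0.02;        // $2.40

const total = 5 + requestOverage + cpuOverage; // $5 base + overages = $8.90
console.log(total.toFixed(2));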

Cloudflare vs Vercel: KV storage

A comparison between Cloudflare and Vercel KV storage services

Vercel KV is a durable Redis database that enables you to store and retrieve JSON data. It's not a native Vercel service; it's powered by a partnership with Upstash.

By default, a single Redis database is provisioned in the primary region you specify when you create a KV database. This primary region is where write operations will be routed. A KV database may have additional read regions, and read operations will be run in the nearest region to the request that triggers them.

Note that when you do that you are replicating a database, and this multiplies the usage and cost of the service, as each write command is issued once to your primary database and once to each read replica you have configured.

Not all Vercel regions are supported by KV Storage. Actually, only the following regions are supported: Dublin, Frankfurt, São Paulo, Washington, Portland, San Francisco, Singapore, and Sydney.

Changing the primary region of a Vercel KV store is also not supported. If you wish to change the region of your database, you must create a new store and migrate all your data.

The KV storage monthly charges depend on four main variables:

  • Databases: the free plan includes 1 database and does not allow any replicas, while the Pro plan includes up to 5 databases including replicas. Each additional database or replica costs $1.00.
  • Storage: represents the maximum amount of storage used per month. The free plan includes 512 MB of storage, while the Pro plan includes 1 GB, and additional storage is billed at $0.25 per GB.
  • Requests: requests are the number of Redis commands issued against all KV databases for an account, including write operations on replicas. The free plan includes 150,000 requests per month, while the Pro plan includes 150,000 requests and charges $0.35 per 100,000 additional requests.
  • Data transfer: data transfer is the total amount of data transferred between your functions and the KV databases on your account. The free plan includes 256 MB of data transfer, while the Pro plan includes 1 GB, and additional data transfer is billed at $0.10 per GB.

An alternative to Vercel KV is Cloudflare KV, a serverless key-value database that enables you to store and retrieve data on the Cloudflare Edge Network. Unlike Vercel KV, it is a native Cloudflare service, not powered by a partnership with another company. It's also not a Redis database, but a key-value store optimized for edge computing on Cloudflare.

The most common way to access data in Cloudflare KV is through Workers, but you can also access it through the Cloudflare API.
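
Here is a minimal sketch of that Worker pattern; the binding name MY_KV and the key are assumptions (the binding is whatever you configure in wrangler.toml), and the types come from @cloudflare/workers-types:

interface Env {
  MY_KV: KVNamespace; // hypothetical binding name, configured in wrangler.toml
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Write a value with an optional TTL, then read it back from the edge
    await env.MY_KV.put('greeting', 'hello from KV', { expirationTtl: 3600 });
    const value = await env.MY_KV.get('greeting');
    return new Response(value ?? 'not found');
  },
};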

Cloudflare KV is a global database, which means that your data is replicated to all Cloudflare data centers around the world. Your data is not limited to a single region and you do not need to worry about creating replicas on different regions for better performance.

Cloudflare KV pricing and limits vary depending on the plan you choose and the nature of the operation you are performing. You are not charged for data transfer or for the number of databases you create, but for the number of requests you make and the amount of data you store.

For storage, on the free plan you can store up to 1 GB of data, while on the Paid plan you are charged $0.50 per additional GB of data.

For requests, on the free plan you can make up to 100,000 read requests per day and 1,000 write, delete, and list requests per day. On the Paid plan you are charged $0.50 per additional 10M read requests and $5 per additional 1M write, delete, and list requests.

Cloudflare vs Vercel: Serverless database

A comparison between Cloudflare and Vercel serverless database services

Vercel's serverless database is a PostgreSQL database designed to integrate with Vercel Functions and your frontend framework. It's also not a native Vercel service; it's powered by a partnership with Neon.

When you create a Vercel Postgres database in your dashboard, a serverless database running PostgreSQL version 15 is provisioned in the region you specify. This region is where read and write operations will be routed and cannot be changed after the database is created.

Not many regions are available for deploying your serverless database. Currently only Cleveland, Washington, Portland, Frankfurt, Singapore, and Sydney are supported.

The choice of region is crucial for the performance of your application, as the closer the database is to the function that is querying it, the faster the response time will be.

Another important thing to consider is that Vercel Postgres databases are not always active. If there are no incoming requests for a specified duration, the database scales down to zero, effectively pausing compute time billing. This means that you may experience a cold start of up to 1 second when the database is accessed after being inactive. At the Pro plan you can configure the inactive time threshold to decrease the frequency of cold starts.

The total cost of a Vercel Postgres database is calculated based on five factors:

  • Databases: the number of databases you have in your account. The free plan includes 1 database, while the Pro plan includes 1 and charges $1.00 per additional database.
  • Compute time: compute time is calculated based on the active time of your database, multiplied by the number of CPUs available. On the free plan, databases are set up with 0.25 logical CPUs; on the Pro plan they start with 1 CPU, and users can adjust the number of CPUs allocated. A database is considered active when processing requests or within the configured idle-timeout period after the last request. The free plan includes 100 hours of compute time per month, while the Pro plan includes 100 hours and charges $0.10 per additional hour.
  • Storage: storage is calculated as the maximum amount of storage used per month across all Postgres databases on your account. The free plan is limited to 512 MB of storage, while the Pro plan is limited to 1 GB, and additional storage is billed at $0.12 per GB.
  • Written data: written data is the amount of data changes committed from compute resources to storage, and includes operations such as inserts, updates, deletes, and schema migrations. The free plan is limited to 512 MB, while the Pro plan is limited to 1 GB, and additional written data is billed at $0.096 per GB.
  • Data transfer: data transfer is the volume of data transferred out of the database. The free plan is limited to 512 MB, while the Pro plan is limited to 1 GB, and additional data transfer is billed at $0.10 per GB.

An alternative to Vercel Postgres is Cloudflare D1. Cloudflare D1 is a serverless database native to the Workers platform, built on SQLite, that enables you to store and retrieve data on the Cloudflare Edge Network.

Cloudflare D1 is a global database, which means that your data is available in all Cloudflare data centers around the world. Your data is not limited to a single region and the choice of region is not a concern for the performance of your application.

The D1 database is accessible from the Cloudflare Dashboard and can also be accessed through Workers via the SDK or integrating with ORM libraries such as Drizzle.

Cloudflare D1 is also based on the pay-as-you-go model, which means that you are charged only for the resources you use, and it can scale to zero like Vercel Postgres, but without suffering from cold starts when it comes back online. With Cloudflare D1 you are not charged for data transfer, compute time, or the number of databases you create, but for the amount of data you store and the number of rows you read and write.

Rows read count how many rows a query reads (scans), regardless of the size of each row, while rows written count how many rows were written to the D1 database. Note that Cloudflare charges for rows scanned, not for the number of rows returned by a query. That's why optimizing your database with indexes is crucial for reducing costs when using Cloudflare D1: defining indexes on your table(s) reduces the number of rows read by a query when filtering on the indexed field.
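
A minimal sketch of that pattern from a Worker, assuming a D1 binding named DB and a hypothetical users table (types from @cloudflare/workers-types); the index keeps the filtered query from scanning, and billing for, every row:

interface Env {
  DB: D1Database; // hypothetical binding name, configured in wrangler.toml
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Normally done once in a migration: an index on the filtered column
    // reduces the rows read per query, which is what D1 bills for.
    await env.DB.exec('CREATE INDEX IF NOT EXISTS idx_users_email ON users(email)');

    const { results } = await env.DB
      .prepare('SELECT id, name FROM users WHERE email = ?')
      .bind('ada@example.com')
      .all();

    return Response.json(results);
  },
};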

The free plan includes 1GB of storage, while in the Paid plan you are charged $0.75 per additional GB of data.

For requests, on the free plan you can make up to 5M row read requests and 100,000 row write requests per day. On the Paid plan you get up to 25B row reads and 50M row writes per month, with additional usage billed at $0.001 per 1M rows read and $1 per 1M rows written.

Cloudflare vs Vercel: Image Optimization

A comparison between Cloudflare and Vercel image optimization services

Vercel Images is a service that manages the upload, optimization, and delivery of images based on factors like size, quality, format, and pixel density. Images that have been optimized are automatically cached on the Vercel Edge Network, ensuring faster delivery to users once they are requested again.

The best way to use the service is by integrating with frameworks like Next.js, Astro, and Nuxt. When you use the <Image> component in each of those frameworks and deploy your project on Vercel, the platform automatically adjusts your images and optimizes them for different screen sizes.
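
For reference, a minimal Next.js sketch (the asset path and dimensions are placeholders); deploying something like this on Vercel is what turns each unique src into a billable source image:

import Image from 'next/image';

// Each unique `src` passed to <Image> counts as one source image for billing,
// no matter how many optimized variants Vercel generates from it.
export default function Hero() {
  return (
    <Image
      src="/images/hero.jpg" // hypothetical asset path
      alt="Product screenshot"
      width={1200}
      height={630}
      priority
    />
  );
}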

The pricing of Vercel Images is based on the number of unique source images requested during the billing period. A source image is the value that is passed to the src prop and can produce multiple optimized images with different sizes and qualities.

The free plan includes 1,000 source image requests, while the Pro plan includes 5,000 source image requests and charges $5 per 1,000 additional source images.

Additionally, charges apply for the bandwidth used when optimized images are delivered from Vercel's Edge Network to clients.

Cloudflare Images is a similar service from Cloudflare that manages the upload, optimization, and delivery of images from the Cloudflare Edge Network. Images are automatically resized, compressed, and converted to the most efficient format for the user’s device and network conditions.

You can upload images to Cloudflare Images through the Cloudflare Dashboard or the Cloudflare API. Once uploaded, you can access the images directly through the Cloudflare CDN or through the Cloudflare API.

Once uploaded, images can be resized for different use cases using Image variants. By default images are served using a public variant, but you can also create up to 100 custom variants for different screen sizes, devices, and network conditions. You can also transform images when requesting via URL or via Cloudflare Workers, but note that transformations are billed separately from the images delivered.
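
As a sketch of the Workers route (URL-based transformations use a /cdn-cgi/image/ prefix instead), a Worker can request a transformed version of an upstream image through the cf.image options on fetch; the origin URL and sizes below are hypothetical:

export default {
  async fetch(request: Request): Promise<Response> {
    // Ask Cloudflare to resize and re-encode the upstream image on the fly.
    // With @cloudflare/workers-types installed, the cf options are typed;
    // the cast just keeps this sketch compact.
    return fetch('https://images.example.com/hero.jpg', {
      cf: {
        image: { width: 800, quality: 80, format: 'avif' },
      },
    } as RequestInit);
  },
};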

Cloudflare Images pricing is based on a postpaid model, with charges based on the total number of images delivered, transformed, and stored per month.

If you are storing images in Cloudflare Images, you will be billed $5 per 100,000 images stored and $1 per 100,000 images requested by the browser and delivered to the user. You will not be billed for images delivered if you are optimizing images that are stored elsewhere, such as S3 or R2 buckets.

Apart from images stored and delivered, you can also be charged for images transformed. A unique transformation is defined as a request to transform an original image and costs $0.50 per 100,000 transformations. Transformation pricing doesn't include image variants that you previously set up.

Conclusion

By looking at the features and pricing of Vercel and Cloudflare, we can see that Cloudflare is a great alternative to Vercel in 2024 for all major services that Vercel offers.

If you need an alternative to Vercel for hosting your website or web application: Cloudflare Pages. If you need an alternative for running serverless functions: Cloudflare Workers. If you need an alternative for storing data: Cloudflare KV and Cloudflare D1. And if you need an alternative for image optimization: Cloudflare Images.

Of course, you don't need to migrate your entire application to Cloudflare and leave Vercel completely. You can take a mixed approach, like Ilias did, and move only some parts of your application to Cloudflare, making better development choices and optimizing your website or application to reduce costs and improve performance.

You can also take a slow approach and migrate your application to Cloudflare gradually, starting with the most critical parts of your application and moving the rest as you see fit.


The guide to Git I never had


🩺 Doctors have stethoscopes.
🔧 Mechanics have spanners.
👨‍💻 We developers, have Git.

Have you noticed that Git is so integral to working with code that people hardly ever include it in their tech stack or on their CV at all? The assumption is you know it already, or at least enough to get by, but do you?

Git is a Version Control System (VCS). The ubiquitous technology that enables us to store, change, and collaborate on code with others.

🚨 As a disclaimer, I would like to point out that Git is a massive topic. Git books have been written, and blog posts that could be mistaken for academic papers too. That's not what I’m going for here. I'm no Git expert. My aim here is to write the Git fundamentals post I wish I had when learning Git.

As developers, our daily routine revolves around reading, writing, and reviewing code. Git is arguably one of the most important tools we use. Mastering the features and functionalities Git offers is one of the best investments you can make in yourself as a developer.

So let’s get started

If you feel I missed or should go into more detail on a specific command, let me know in the comments below. And I will update this post accordingly. 🙏

While we are on the topic

If you are looking to put your Git skills to work and would like to contribute to Glasskube, we officially launched in February and we aim to be the no-brainer, default solution for Kubernetes package management. With your support, we can make it happen. The best way to show your support is by starring us on GitHub ⭐

Let’s lay down the foundations

Does Git ever make you feel like Peter Griffin? If you don’t learn Git the right way you run the risk of constantly scratching your head, getting stuck on the same issues, or rueing the day you see another merge conflict appear in your terminal. Let's ensure that doesn’t happen by defining some foundational Git concepts.

Branches

In a Git repository, you'll find a main line of development, typically named "main" or "master" (deprecated) from which several branches diverge. These branches represent simultaneous streams of work, enabling developers to tackle multiple features or fixes concurrently within the same project.

Git branches

Commits

Git commits serve as bundles of updated code, capturing a snapshot of the project's code at a specific point in time. Each commit records changes made since the last commit was recorded, all together building a comprehensive history of the project's development journey.

Git commits

When referencing commits you will generally use their unique cryptographic hash.

Example:

git show <commit-hash>

This shows detailed information about the commit with that hash.

Tags

Git tags serve as landmarks within the Git history, typically marking significant milestones in a project's development, such as releases, versions, or standout commits. These tags are invaluable for marking specific points in time, often representing the starting points or major achievements in a project's journey.

Git tags

HEAD

The most recent commit on the currently checked-out branch is indicated by the HEAD, serving as a pointer to any reference within the repository. When you're on a specific branch, HEAD points to the latest commit on that branch. Sometimes, instead of pointing to the tip of a branch, HEAD can directly point to a specific commit (detached HEAD state).

Stages

Understanding Git stages is crucial for navigating your Git workflow. They represent the logical transitions where changes to your files occur before they are committed to the repository. Let's delve into the concept of Git stages:

Git stages

Working directory 👷

The working directory is where you edit, modify, and create files for your project. It represents the current state of your files on your local machine.

Staging area 🚉

The staging area is like a holding area or a pre-commit zone where you prepare your changes before committing them to the repository.

Useful commands here: git add to stage changes, and git restore --staged to unstage them.

Local repository 🗄️

The local repository is where Git permanently stores the committed changes. It allows you to review your project's history, revert to previous states, and collaborate with others on the same codebase.

You can commit changes that are ready in the staging area with: git commit

Remote repository 🛫

The remote repository is a centralized location, typically hosted on a server (like GitHub, GitLab, or Bitbucket), where you can share and collaborate with others on your project.

You can use commands like git push and git pull to push/pull your committed changes from your local repository to the remote repository.

Getting Started with Git

Well, you have to start somewhere, and in Git that is your workspace. You can fork or clone an existing repository and have a copy of that workspace, or, if you are starting completely fresh in a new local folder on your machine, you have to turn it into a Git repository with git init. The next step, crucially not to be overlooked, is setting up your credentials.

Source:  Shuai Li

Credentials set up

When pushing to and pulling from a remote repository, you don't want to type your username and password every time. Avoid that by executing the following command:

git config --global credential.helper store

The first time you interact with the remote repository, Git will prompt you to input your username and password. After that, you won't be prompted again.

It's important to note that the credentials are stored in a plaintext format within a .git-credentials file.

To check the configured credentials, you can use the following command:

git config --global credential.helper

Working with branches

When working locally it’s crucial to know which branch you are currently on. These commands are helpful:

# List the branches in your local repository
git branch

# Or create a branch directly with
git branch feature-branch-name

To transition between branches, use:

git switch feature-branch-name

In addition to switching between branches, you can also use:

git checkout feature-branch-name
# A shortcut to switch to a branch that is yet to be created with the -b flag
git checkout -b feature-branch-name

To check the repository's state, use:

git status

A great way to always have a clear view of your current branch is to see it right in the terminal. Many terminal add-ons can help with this. Here is one.

Terminal view

Working with commits

When working with commits, utilize git commit -m to record changes, git commit --amend to modify the most recent commit, and try your best to adhere to commit message conventions.

# Make sure to add a message to each commit
git commit -m "meaningful message"

If you have changes to add to your last commit, you don't have to create another commit altogether; you can use the --amend flag to amend the most recent commit with your staged changes.

# make your changes
git add .
git commit --amend
# This will open your default text editor to modify the commit message if needed.
git push origin your_branch --force

⚠️ Exercise caution when utilizing --force, as it has the potential to overwrite the history of the target branch. Its application on the main/master branch should be generally avoided.

As a rule of thumb it's better to commit more often than not, to avoid losing progress or accidentally resetting the unstaged changes. One can rewrite the history afterward by squashing multiple commits or doing an interactive rebase.

Use git log to show a chronological list of commits, starting from the most recent commit and working backward in time

Manipulating History

Manipulating History involves some powerful commands. Rebase rewrites commit history, Squashing combines multiple commits into one, and Cherry-picking selects specific commits.

Rebasing and merging

It makes sense to compare rebasing to merging since their aim is the same, but they achieve it in different ways. The crucial difference is that rebasing rewrites the project's history, a desirable choice for projects that value a clear and easily understandable history. On the other hand, merging maintains both branch histories by generating a new merge commit.

During a rebase, the commit history of the feature branch is restructured as it's moved onto the HEAD of the main branch

rebase

The workflow here is pretty straightforward.

Ensure you're on the branch you want to rebase and fetch the latest changes from the remote repository:

git checkout your_branch
git fetch

Now choose the branch you want to rebase onto and run this command:

git rebase upstream_branch

After rebasing, you might need to force-push your changes if the branch has already been pushed to a remote repository:

git push origin your_branch --force

⚠️ Exercise caution when utilizing --force, as it has the potential to overwrite the history of the target branch. Its application on the main/master branch should be generally avoided.

Squashing

Git squashing is used to condense multiple commits into a single, cohesive commit.

git squashing

The concept is easy to understand and especially useful if your method of unifying code is rebasing; since the history will be altered, it's important to be mindful of the effects on the project history. There have been times I have struggled to perform a squash, especially using interactive rebase; luckily, we have some tools to help us. My preferred method of squashing involves moving the HEAD pointer back X commits while keeping the staged changes:

# Change to the number after HEAD~ depending on the commits you want to squash
git reset --soft HEAD~X
git commit -m "Your squashed commit message"
git push origin your_branch --force

⚠️ Exercise caution when utilizing --force, as it has the potential to overwrite the history of the target branch. Its application on the main/master branch should be generally avoided.

Cherry-picking

Cherry-picking is useful for selectively incorporating changes from one branch into another, especially when merging entire branches is not desirable or feasible. However, it's important to use cherry-picking judiciously, as it can lead to duplicate commits and divergent histories if misapplied.

Cherry-picking

To do this, first identify the hash of the commit you would like to pick; you can find it with git log. Once you have the commit hash, run:

git checkout target_branch
git cherry-pick <commit-hash> # Do this multiple times if multiple commits are wanted
git push origin target_branch

Advanced Git Commands

Signing commits

Signing commits is a way to verify the authenticity and integrity of your commits in Git. It allows you to cryptographically sign your commits using your GPG (GNU Privacy Guard) key, assuring Git that you are indeed the author of the commit. You can do so by creating a GPG key and configuring Git to use the key when committing. Here are the steps:

# Generate a GPG key
gpg --gen-key

# Configure Git to Use Your GPG Key
git config --global user.signingkey <your-gpg-key-id>

# Add the public key to your GitHub account

# Signing your commits with the -S flag
git commit -S -m "Your commit message"

# View signed commits
git log --show-signature

Git reflog

A topic that we haven't explored yet is Git references: pointers to various objects within the repository, primarily commits, but also tags and branches. They serve as named points in the Git history, allowing you to navigate through the repository's timeline and access specific snapshots of the project. Knowing how to navigate Git references can be very useful, and you can use git reflog to do just that. Here are some of the benefits:

  • Recovering lost commits or branches
  • Debugging and troubleshooting
  • Undoing mistakes

Interactive rebase

Interactive rebase is a powerful Git feature that allows you to rewrite commit history interactively. It enables you to modify, reorder, combine, or delete commits before applying them to a branch.

To use it, you have to become familiar with the possible actions, such as:

  • Pick ("p")
  • Reword ("r")
  • Edit ("e")
  • Squash ("s")
  • Drop ("d")

Interactive rebase

Here is a useful video to learn how to perform an interactive rebase in the terminal; I have also linked a useful tool at the bottom of the blog post.

Collaborating with Git

Origin vs Upstream

The origin is the default remote repository associated with your local Git repository when you clone it. If you've forked a repository, then that fork becomes your "origin" repository by default.

Upstream on the other hand refers to the original repository from which your repository was forked.

To keep your forked repository up-to-date with the latest changes from the original project, you git fetch changes from the "upstream" repository and merge or rebase them into your local repository.

# Pulling fetches the changes and merges them into your current working branch
git pull <remote_name> <branch_name>
# If you don't want to merge the changes use
git fetch <remote_name>

To see the remote repositories associated with your local Git repo, run:

git remote -v

Conflicts

Don’t panic, when trying to merge or rebase a branch and conflicts are detected it only means that there are conflicting changes between different versions of the same file or files in your repository and they can be easily resolved (most times).

They are typically indicated within the affected files, where Git inserts conflict markers <<<<<<<, ======= and >>>>>>> to highlight the conflicting sections. Decide which changes to keep, modify, or remove, ensuring that the resulting code makes sense and retains the intended functionality.

After manually resolving conflicts in the conflicted files, remove the conflict markers <<<<<<<, =======, and >>>>>>> and adjust the code as necessary.

Save the changes in the conflicted files once you're satisfied with the resolution.

If you have issues resolving conflicts, this video does a good job at explaining it.

Popular Git workflows

Various Git workflows exist, however, it's important to note that there's no universally "best" Git workflow. Instead, each approach has its own set of pros and cons. Let's explore these different workflows to understand their strengths and weaknesses.

Feature Branch Workflow 🌱

Each new feature or bug fix is developed in its own branch, which is then merged back into the main branch by opening a PR once completed.

  • Strength: Isolation of changes and reducing conflicts.
  • Weakness: Can become complex and require diligent branch management.

Gitflow Workflow 🌊

Gitflow defines a strict branching model with predefined branches for different types of development tasks.

It includes long-lived branches such as main, develop, feature branches, release branches, and hotfix branches.

  • Strength: Suitable for projects with scheduled releases and long-term maintenance.
  • Weakness: Can be overly complex for smaller teams

Forking Workflow 🍴

In this workflow, each developer clones the main repository, but instead of pushing changes directly to it, they push changes to their own fork of the repository. Developers then create pull requests to propose changes to the main repository, allowing for code review and collaboration before merging.

This is the workflow we use to collaborate on the open-source Glasskube repos.

  • Strength: Encourages collaboration from external contributors without granting direct write access to the main repository.
  • Weakness: Maintaining synchronization between forks and the main repository can be challenging.

Trunk-Based Development 🪵

If you are on a team focused on rapid iteration and continuous delivery, you might use trunk-based development, in which developers work directly on the main branch, committing small and frequent changes.

  • Strength: Promotes rapid iteration, continuous integration, and a focus on delivering small, frequent changes to production.
  • Weakness: Requires robust automated testing and deployment pipelines to ensure the stability of the main branch, may not be suitable for projects with stringent release schedules or complex feature development.

What the fork?

Forking is highly recommended for collaborating on Open Source projects since you have complete control over your own copy of the repository. You can make changes, experiment with new features, or fix bugs without affecting the original project.

💡 What took me a long time to figure out was that although forked repositories start as separate entities, they retain a connection to the original repository. This connection allows you to keep track of changes in the original project and synchronize your fork with updates made by others.

That's why, even when you push to your origin repository, your changes will show up on the original remote as well.

Git Cheatsheet


# Clone a Repository
git clone <repository_url>

# Stage Changes for Commit
git add <file(s)>

# Commit Changes
git commit -m "Commit message"

# Push Changes to the Remote Repository
git push

# Force Push Changes (use with caution)
git push --force

# Reset Working Directory to Last Commit
git reset --hard

# Create a New Branch
git branch <branch_name>

# Switch to a Different Branch
git checkout <branch_name>

# Merge Changes from Another Branch
git merge <branch_name>

# Rebase Changes onto Another Branch (use with caution)
git rebase <base_branch>

# View Status of Working Directory
git status

# View Commit History
git log

# Undo Last Commit (use with caution)
git reset --soft HEAD^

# Discard Changes in Working Directory
git restore <file(s)>

# Retrieve Lost Commit References
git reflog

# Interactive Rebase to Rearrange Commits
git rebase --interactive HEAD~3

# Pull changes from remote repo
git pull <remote_name> <branch_name>

# Fetch changes from remote repo
git fetch <remote_name>

  • Tool for interactive rebasing.

  • Cdiff to view colorful, incremental diffs.

  • Interactive Git branching playground


If you like this sort of content and would like to see more of it, please consider supporting us by giving us a Star on GitHub 🙏


A brief history of web development. And why your framework doesn't matter.


Linux On Desktop In 2023


Cardiorespiratory fitness is a strong and consistent predictor of morbidity and mortality among adults: an overview of meta-analyses representing over 20.9 million observations from 199 unique cohort studies

Justin J Lang, Stephanie A Prince, Katherine Merucci, Cristina Cadenas-Sanchez, Jean-Philippe Chaput, Brooklyn J Fraser, Taru Manyanga, Ryan McGrath, Francisco B Ortega, Ben Singh, Grant R Tomkinson

Correspondence to Dr Justin J Lang, Public Health Agency of Canada, Ottawa, Canada; justin.lang@phac-aspc.gc.ca


Abstract

Objective To examine and summarise evidence from meta-analyses of cohort studies that evaluated the predictive associations between baseline cardiorespiratory fitness (CRF) and health outcomes among adults.

Design Overview of systematic reviews.

Data source Five bibliographic databases were searched from January 2002 to March 2024.

Results From the 9062 papers identified, we included 26 systematic reviews. We found eight meta-analyses that described five unique mortality outcomes among general populations. CRF had the largest risk reduction for all-cause mortality when comparing high versus low CRF (HR=0.47; 95% CI 0.39 to 0.56). A dose–response relationship for every 1-metabolic equivalent of task (MET) higher level of CRF was associated with an 11%–17% reduction in all-cause mortality (HR=0.89; 95% CI 0.86 to 0.92, and HR=0.83; 95% CI 0.78 to 0.88). For incident outcomes, nine meta-analyses described 12 unique outcomes. CRF was associated with the largest risk reduction in incident heart failure when comparing high versus low CRF (HR=0.31; 95% CI 0.19 to 0.49). A dose–response relationship for every 1-MET higher level of CRF was associated with an 18% reduction in heart failure (HR=0.82; 95% CI 0.79 to 0.84). Among those living with chronic conditions, nine meta-analyses described four unique outcomes in nine patient groups. CRF was associated with the largest risk reduction for cardiovascular mortality among those living with cardiovascular disease when comparing high versus low CRF (HR=0.27; 95% CI 0.16 to 0.48). The certainty of the evidence across all studies ranged from very low-to-moderate according to Grading of Recommendations, Assessment, Development and Evaluations.

Conclusion We found consistent evidence that high CRF is strongly associated with lower risk for a variety of mortality and incident chronic conditions in general and clinical populations.

  • Cardiovascular Diseases
  • Review
  • Cohort Studies
  • Physical fitness

Data availability statement

Data are available on reasonable request.


This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.


WHAT IS ALREADY KNOWN ON THIS TOPIC

  • Many systematic reviews have examined the prospective link between baseline cardiorespiratory fitness and health outcomes, but no study has compiled all the evidence to help identify important gaps in the literature.

WHAT THIS STUDY ADDS

  • This study identified 26 systematic reviews with meta-analysis representing over 20.9 million observations from 199 unique cohort studies. Cardiorespiratory fitness was strongly and consistently protective of a variety of incident chronic conditions and mortality-related outcomes.

  • Gaps in the literature continue to exist, with limited evidence available among women, and certain clinical populations. Several health outcomes could benefit from future meta-analyses, including specific cancer types, especially among women (eg, breast cancer) and mental health conditions beyond depression.

HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY

  • Given the strength of the predictive utility of cardiorespiratory fitness across many health outcomes, cardiorespiratory fitness would be a valuable risk stratification tool in clinical practice.

Introduction

Cardiorespiratory fitness (CRF) is a physical trait that reflects the integrated function of numerous bodily systems to deliver and use oxygen to support muscle activity during sustained, rhythmic, whole-body, large muscle physical activity.1 CRF can be objectively measured using direct (usually by maximal exercise testing with concomitant gas exchange analysis)2 or indirect (exercise predicted equations)3 4 methods with a variety of maximal or submaximal protocols using different modalities (eg, stationary cycling, treadmill running/walking, bench stepping, field-based running/walking). Non-exercise prediction equations with reasonable validity are also available when direct CRF measurement is not feasible.5 6 CRF is commonly expressed as the maximum or peak rate of oxygen consumption per kilogram of body mass (common units: mL/kg/min) or metabolic equivalents of task (METs). Nearly half of the variance in CRF is attributable to genetics, with the remainder modified primarily through habitual physical activity.7 For example, brisk walking for approximately 150 min per week can result in large relative improvements in CRF among sedentary and unfit individuals.8 9 Even those with severe chronic disease can improve CRF through well-planned aerobic physical activity programmes.10

Low CRF is considered a strong chronic disease risk factor that is not routinely assessed in clinical practice.11 Evidence suggests that the inclusion of CRF as a clinical vital sign would enhance patient management by improving the classification of those at high risk of adverse outcomes.11 The evidence supporting CRF as an important risk factor has accumulated since the 1980s through large cohort studies that investigated the prospective risk of all-cause mortality and cardiovascular events associated with CRF.12–14 Research has linked CRF to the incidence of some cancers (eg, colon/rectum, lung),15 type 2 diabetes,16 metabolic syndrome,17 stroke18 and depression.19 Higher CRF may even improve the prognosis in those with chronic conditions such as cancer,20 peripheral artery disease,21 heart failure22 and chronic kidney disease.23

Given the mounting evidence supporting CRF as an important risk factor, numerous systematic reviews with meta-analyses summarising results of primary studies for various health outcomes have been published. Kodama et al 24 published the first meta-analysis on the health-related predictive validity of CRF and found that a 1-MET (3.5 mL/kg/min) higher level of CRF was associated with a 13% and 15% reduction in the risk of all-cause mortality and cardiovascular disease (CVD) events, respectively. This study helped to establish the meaningful clinically important difference (MCID) of 1-MET for exercise trials. Since Kodama’s study, there have been several systematic reviews with meta-analyses, with several published in recent years (ie, 2020+). Most systematic reviews have focused on a single health outcome. To date, there has not been a systematic synthesis of the relationships between CRF and a broad range of health outcomes. To help summarise the breadth of evidence, an overview of reviews provides a systematic method to examine evidence across a range of outcomes for a specific exposure.25 Thus, the objective of this study was to conduct an overview of systematic reviews with meta-analyses from cohort studies that investigated relationships between CRF and prospective health-related outcomes among adults. We also aimed to assess the certainty of the evidence for each identified health outcome.

Methods

This overview followed the methods outlined in the Cochrane handbook,25 and additional methods that were published elsewhere.26 We adhered to both the Preferred Reporting Items for Overviews of Reviews statement27 and the Meta-analyses of Observational Studies in Epidemiology reporting standards.28 The overview was prospectively registered with the PROSPERO international prospective register of systematic reviews (#CRD42022370149). Here, we present a condensed methods section with the full methods available in online supplemental methods.

Eligibility criteria

Population

Adult populations (≥18 years), including apparently healthy individuals and clinical populations with diagnosed chronic conditions. Studies that focused on certain special populations were excluded (ie, individuals recovering from surgery, athletes, those with disease present at birth, pregnant individuals).

Exposure

The primary exposure was CRF measured using the following approaches: (1) maximal exercise testing with gas analysis (ie, directly measured V̇O2max/peak), (2) maximal or submaximal exercise testing without gas analysis, which used either exercise prediction equations to estimate CRF or the measured exercise performance (ie, indirect measures) or (3) non-exercise prediction equations for estimating CRF.

Outcome

Any health-related outcome such as all-cause or cause-specific mortality, incident conditions related to physical risk factors, chronic conditions or mental health issues were included. Among populations with diagnosed chronic conditions, we included evidence on outcomes such as mortality or disease severity.

Study design

Only systematic reviews with meta-analyses that searched a minimum of two bibliographic databases and provided a sample search strategy were included. Eligible meta-analyses pooled data from primary prospective/retrospective cohort or case-control studies; these designs were the focus because, among observational research, they are best placed to inform causal inference.

Publication status and language restriction

Only systematic reviews published in peer-reviewed journals in English, French or Spanish (based on authors’ language capacity) were eligible. Conference abstracts or papers, commentaries, editorials, dissertations or grey literature were ineligible.

Time frame

Systematic reviews published in the 20 years preceding the initial search were eligible.

Information sources

Five bibliographic databases, including OVID Medline, OVID Embase, Scopus, CINAHL and EBSCOhost SPORTDiscus, were searched from 1 January 2002 to 21 November 2022. The search was later updated from 1 November 2022 to 8 March 2024.

Search strategy

A research librarian (KM) created the search strategy in collaboration with the authorship team, and the final search was peer-reviewed by an independent research librarian using the Peer Review of Electronic Search Strategies guidelines.29 The search strategies for each database are available in online supplemental appendix 1. The reference lists of included papers were also searched for additional relevant systematic reviews.

Selection process

All records were imported into RefWorks where duplicates were removed using automated and manual methods. Records were imported into Covidence for further deduplication and record screening. Reviewers were not blinded to the study metadata when screening. The title and abstract from each record were screened by two of the following independent reviewers (JJL, SAP, CC-S, J-PC, BJF, TM, BS and GRT) against the inclusion criteria. Full-text papers were obtained for each record that met the inclusion criteria or provided insufficient evidence to make a conclusive decision at the title and abstract stage. Conflicts during title and abstract screening automatically advanced to full-text screening. Each full-text record was screened by two of the following independent reviewers (JJL, SAP, CC-S, J-PC, BJF, TM, BS and GRT) against the inclusion criteria. Conflicts at the full-text stage were resolved through discussion by two reviewers (JJL and SAP), with a third reviewer resolving disagreements (GRT).

Data collection process

Data extraction was completed in Covidence using a form that was piloted by the authorship group for accuracy. Data from the included studies were extracted by two of the following independent reviewers (JJL, SAP, CC-S, J-PC, BJF, TM, FBO, BS and GRT). Conflicts were resolved by one reviewer (JJL), who contacted the reviewers who extracted the data when necessary to resolve conflicts.

Data items

The data extraction form included several items related to the demographic characteristics of the primary studies, the meta-analyses effect estimates and related statistics, and details for risk of bias and subgroup analyses.

Review quality

We extracted the original risk of bias assessment for each primary study, as reported by the study authors. Most of the included studies used the Newcastle-Ottawa Scale (NOS) to assess risk of bias for cohort studies.30 In the event that risk of bias was not assessed, a new assessment was conducted and verified by two reviewers using the NOS. We also assessed the quality of the systematic reviews using A MeaSurement Tool to Assess systematic Reviews 2 (AMSTAR2) checklist.31 Two of the following independent reviewers (JJL, SAP, CC-S, J-PC, BJF, TM, FBO, BS and GRT) assessed review quality. Conflicts were resolved by one reviewer (JJL), with the reviewers who extracted the data contacted to resolve outstanding conflicts.

Effect measures

We presented pooled hazard ratios (HRs) or relative risks (RRs) for an incident event (ie, mortality or morbidity) across the included systematic reviews. We extracted data from models that compared high versus low CRF and those that examined the impact of a 1-MET higher level of CRF.

Synthesis of data

We followed an outcome-centric approach, as outlined by Kho et al.26 Our goal was to identify systematic reviews with non-overlapping primary studies for each outcome to avoid double counting evidence. When more than one eligible systematic review was identified for a single outcome, we calculated the corrected covered area (CCA) to assess the degree of overlap in the primary studies.32

CCA = (N − r) / (rc − r)

where N is the total number of times a primary study appeared across reviews (inclusive of double counting), r is the number of unique primary studies, and c is the number of systematic reviews included for the outcome.

The CCA was interpreted as slight (0%–5%), moderate (6%–10%), high (11%–15%) or very high (>15%). If the CCA was slight or moderate, we included multiple systematic reviews per outcome. If the CCA was high or very high, we selected the highest quality systematic review according to the AMSTAR2 assessment. We included the most recent systematic review when reviews of the same outcome were rated as equal in quality.
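
A minimal sketch of the CCA calculation and the overlap bands described above follows; the function names and the example numbers are illustrative, not taken from the overview itself.

```python
# Minimal sketch of the corrected covered area (CCA) calculation described
# above; function names and example values are illustrative.

def corrected_covered_area(n_total: int, r_unique: int, c_reviews: int) -> float:
    """CCA = (N - r) / (r * c - r), returned as a percentage."""
    return 100.0 * (n_total - r_unique) / (r_unique * c_reviews - r_unique)

def interpret_cca(cca_percent: float) -> str:
    """Apply the slight/moderate/high/very high bands used in this overview."""
    if cca_percent <= 5:
        return "slight"
    if cca_percent <= 10:
        return "moderate"
    if cca_percent <= 15:
        return "high"
    return "very high"

# Hypothetical example: 3 reviews of one outcome cover 20 unique primary
# studies, which appear 24 times in total across the reviews.
cca = corrected_covered_area(n_total=24, r_unique=20, c_reviews=3)
print(f"CCA = {cca:.1f}% ({interpret_cca(cca)})")  # CCA = 10.0% (moderate)
```

Under the selection rule above, this hypothetical moderate overlap would allow all reviews for the outcome to be retained, whereas a high or very high CCA would trigger selection of the highest-quality review.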

Synthesis of results

For each health outcome, we reported evidence for apparently healthy and clinical populations separately. We summarised results using a narrative synthesis approach with summary of findings tables. Results were reported as described by the systematic review authors. Meta-analytical results, including the effect, confidence limits, number of studies and number of participants, were presented by outcome using a forest plot to allow for easy comparison between studies. RR values were taken to approximate the HR. When comparing high versus low CRF, we inverted the scale when studies compared low versus high by taking the reciprocal (ie, HR=2.00 was changed to HR=0.50). Dose–response values were rescaled to a 1-MET higher level of CRF when an increment of more than 1-MET was used or when the unit of increase was expressed as V̇O2. We rescaled by taking the natural log of the HR, dividing or multiplying it to correspond with a 1-MET increment, and exponentiating the result. Subgroup analyses for sex were described when available.
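
The two effect-size transformations described in this paragraph can be written compactly; the sketch below is illustrative (the function names and example values are mine) and simply mirrors the reciprocal and log-rescaling steps.

```python
import math

def invert_hr(hr_low_vs_high: float) -> float:
    """Flip the reference group: a low-versus-high HR of 2.00 becomes a
    high-versus-low HR of 0.50."""
    return 1.0 / hr_low_vs_high

def rescale_hr_per_met(hr: float, reported_increment_mets: float) -> float:
    """Rescale an HR reported per `reported_increment_mets` METs to a 1-MET
    increment: take the natural log, scale it, then exponentiate."""
    return math.exp(math.log(hr) / reported_increment_mets)

print(invert_hr(2.00))                          # 0.50
print(round(rescale_hr_per_met(0.70, 5.0), 3))  # ~0.931 per 1-MET
```

An increment expressed in V̇O2 can be handled the same way after converting it to METs (dividing by 3.5 mL/kg/min).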

Certainty of the evidence assessment

For each outcome, the certainty of the evidence was assessed using a modified Grading of Recommendations, Assessment, Development and Evaluations (GRADE) approach.33 Observational cohort evidence began at ‘high’ certainty because randomised controlled trials were deemed not feasible for our research question.34 The certainty of the evidence could be rated down based on five domains (ie, risk of bias, imprecision, inconsistency, indirectness and publication bias). See online supplemental table 1 for a GRADE decision rules table.

Equity, diversity and inclusion statement

Our research team included diversity across genders, with representation from researchers at all career stages. We stratified our results by sex, which allowed us to identify the potential need for more diversity in this area of the literature and to discuss the overall generalisability of our results. The GRADE evaluation carried out in this study assessed the indirectness of the results: we downgraded evidence that did not demonstrate good global representation or did not provide a gender-balanced sample. Reducing indirectness is important for ensuring the results are representative of the target population.

Results

We identified 9062 records after removing duplicates and assessed 199 full-text papers; 165 papers were excluded during full-text screening and a further 8 because of high or very high overlap based on the CCA calculation (see figure 1 and online supplemental appendix 2 for full texts with reasons for exclusion). The proportion of agreement between reviewers for title and abstract screening ranged from 95% to 100%, while the agreement for full-text screening ranged from 75% to 100%. We included 26 systematic reviews with meta-analyses representing over 20.9 million observations from 199 unique cohort studies, covering 21 mortality or incident chronic disease outcomes. We identified CCA values in the high or very high range for sudden cardiac mortality (CCA=33%; n=2), incident heart failure (33%; n=2), incident depression (50%; n=2), incident type 2 diabetes (25%; n=4) and all-cause mortality among those living with heart failure (14%; n=3; see online supplemental appendix 2 for more details). We included multiple systematic reviews for all-cause mortality because the CCA was moderate (10%; n=3).

Tables 1–3 describe the study characteristics. Eight systematic reviews investigated mortality outcomes, pooling data from 95 unique primary cohort studies. Nine systematic reviews investigated incident outcomes, pooling data from 63 unique primary cohort studies. The remaining nine systematic reviews investigated health-related outcomes among populations living with chronic conditions, representing data from 51 unique primary cohort studies. Eleven reviews were of critically low quality, four were of low quality, eight were of moderate quality and three were of high quality as assessed using the AMSTAR2 (see online supplemental table 2). See online supplemental table 3 for a detailed summary of findings with the certainty of the evidence for each outcome.

Table 1: Study characteristics for general populations without known disease at baseline and mortality outcomes

Table 2: Study characteristics for general populations without known disease at baseline and incident outcomes

Table 3: Study characteristics for clinical populations with diagnosed chronic disease at baseline and mortality outcomes

Figure 2 illustrates results for CRF as a predictor of mortality outcomes, which included all-cause, CVD, sudden cardiac, all cancer and lung cancer mortality. When comparing high versus low CRF across all outcomes, there was a 41% (HR for all-cause mortality24=0.59; 95% CI 0.52 to 0.66) to 53% (HR for all-cause mortality35=0.47; 95% CI 0.39 to 0.56) reduction in the risk of premature mortality. The certainty of the evidence was assessed as very low-to-moderate, mainly due to serious indirectness (ie, most studies only included male participants). In assessing the dose–response relationship, a 1-MET higher level of CRF was associated with a 7% (HR for all cancer mortality35=0.93; 95% CI 0.91 to 0.96) to 51% (HR for sudden cardiac mortality36=0.49; 95% CI 0.33 to 0.73) reduction in the risk of premature mortality. The certainty of the evidence ranged from very low-to-moderate, largely due to serious indirectness from a large proportion of male-only studies. Sex differences were similar between outcomes with larger CIs for females because of smaller samples (see online supplemental figure 1). For example, there were 1 858 274 male participants compared with 180 202 female participants for all-cause mortality.

Figure 2

HRs for each mortality outcome in apparently healthy populations at baseline for high versus low CRF and per 1-MET increase in CRF. Estimates from Laukkanen (2022), Han (2022), Kodama (2009) and Aune (2020) were reported as RR, the remaining studies were reported as HR. Qui (2021) reported estimates from self-reported CRF. Kodama (2009) reported low versus high CRF which were inverted for this study. CRF, cardiorespiratory fitness; CVD, cardiovascular disease; eCRF, estimated non-exercise cardiorespiratory fitness; GRADE, Grading of Recommendations, Assessment, Development and Evaluations; MET, metabolic equivalent of task; NA, not applicable; NR, not reported; RR, relative risk.

Figure 3 describes results for CRF as a predictor of newly diagnosed chronic conditions, including: hypertension, heart failure, stroke, atrial fibrillation, dementia, chronic kidney disease, depression and type 2 diabetes. Online supplemental figure 2 describes results for all cancer (male only), lung cancer (male only), colon/rectum cancer (male only) and prostate cancer. When comparing high versus low CRF, there was a 37% (HR for incident hypertension37=0.63; 95% CI 0.56 to 0.70) to 69% (HR for incident heart failure38=0.31; 95% CI 0.19 to 0.49) reduction in the risk of incident conditions. The certainty of this evidence was rated as very low-to-low largely due to inconsistency and indirectness (ie, high heterogeneity that could not be described by subgroup analysis and largely male populations). The dose–response relationship per 1-MET higher level of CRF was associated with a 3% (HR for incident stroke39=0.97; 95% CI 0.96 to 0.98) to 18% (HR for incident heart failure38=0.82; 95% CI 0.79 to 0.84) reduction in the risk of incident conditions. The certainty of the evidence ranged from very low-to-low due to inconsistency and indirectness. Only two studies reported results for females separately. High versus low CRF was more protective for incident stroke and type 2 diabetes among females compared with males (online supplemental figure 2). Among men, there was a null association between high versus low CRF for prostate cancer (HR=1.15; 95% CI 1.00 to 1.30).40

Figure 3

HRs for each incident outcome in apparently healthy populations at baseline for high versus low CRF and per 1-MET increase in CRF. Note: Estimates from Cheng (2022), Aune (2021), Wang (2020), Xue (2020), Tarp (2019) and Kunutsor (2023) were reported as RR, the remaining studies were reported as HR. Kandola (2019) reported estimates for low versus high which were inverted for this study. The estimates from Tarp (2019) are fully adjusted for adiposity. Aune (2021) was reported per 5-MET increase which we converted to 1-MET increase for this study. CRF, cardiorespiratory fitness; CVD, cardiovascular disease; GRADE, Grading of Recommendations, Assessment, Development and Evaluations; MET, metabolic equivalent of task; NA, not applicable; NR, not reported; RR, relative risk.

Figure 4 highlights results comparing high versus low CRF among individuals living with chronic conditions. There was a 19% (HR for adverse events among those living with pulmonary hypertension41=0.81; 95% CI 0.78 to 0.85) to 73% (HR for cardiovascular mortality among those living with CVD42=0.27; 95% CI 0.16 to 0.48) reduction in the risk of all-cause and type-specific mortality. A comparison of delayed versus not delayed heart rate recovery was associated with an 83% reduction in the risk of adverse events among those living with coronary artery disease. The certainty of the evidence for mortality in those living with a chronic condition was rated as very low-to-low, largely due to risk of bias, indirectness and imprecision (ie, low-quality studies, mainly male participants and small sample sizes). No evidence examining sex differences was available. See online supplemental table 3 for a detailed summary of findings.

Figure 4

HRs for health outcomes in patients living with chronic conditions at baseline for high versus low CRF and delayed versus not delayed HRR. Estimates from Morris (2014) were reported as RR, the remaining estimates were reported as HR. Yang (2023), Fuentes-Abolafio (2020), Morris (2014), Rocha (2022) and Lachman (2018) reported estimates as low versus high which were inverted for this study. Cantone (2023) was reported per 1-unit VO2 increase which we converted to 1-MET increase for this study. Adverse events for Lachman (2018) were all-cause mortality, cardiovascular mortality and hospitalisations for congestive heart failure. CRF, cardiorespiratory fitness; CVD, cardiovascular disease; GRADE, Grading of Recommendations, Assessment, Development and Evaluations; HRR, heart rate recovery; MET, metabolic equivalent of task; NA, not applicable; NR, not reported; RR, relative risk.

Discussion

This overview of systematic reviews demonstrated that CRF is a strong and consistent predictor of risk across many mortality outcomes in the adult general population. Among populations living with chronic conditions such as cancer, heart failure and CVD, this study showed better prognosis for those with higher CRF. We also demonstrated that low CRF is an important risk factor for developing future chronic conditions such as hypertension, heart failure, stroke, atrial fibrillation, dementia and depression. Given that we summarised evidence from cohort studies, and that randomised controlled trials are not feasible for this research question, the results of this study may signal a causal relationship between CRF and future health outcomes. We also found a significant dose–response effect showing protection for every 1-MET higher level of CRF. This evidence further supports 1-MET as an MCID for CRF and could be considered as a target for interventions. The strength and consistency of the evidence across a wide range of outcomes supports the importance of CRF for clinical assessment and public health surveillance.

Several studies have identified the need for the routine measurement of CRF in clinical and public health practice.11 43 For instance, a scientific statement from the American Heart Association concluded that healthcare providers should assess CRF during annual routine clinical visits using submaximal tests (eg, treadmill, cycling or bench stepping tests) or self-report estimates, and that patients living with chronic conditions should have CRF measured regularly using a symptom-limited direct measure.11 There are several benefits to regular measurement of CRF in clinical practice. First, CRF is an important risk factor that provides additional information beyond traditional risk factors such as blood pressure, total cholesterol and smoking status.44 Second, given the strong link with habitual physical activity, CRF could be a valuable tool to help guide exercise prescription. In those with low CRF (defined based on age, sex and health status), large relative improvements can be attained through additional moderate physical activity (ie, brisk walking at a heart rate corresponding to 50% of peak V̇O2).45 The largest health benefits have been observed when individuals move from being unfit to fit.46 Lastly, field-based tests of CRF are easy to implement, with a variety of protocols that can be adapted to suit space and time limitations.

Areas of future work

Applying the GRADE approach to evaluate the certainty of the evidence helped identify several important gaps in the literature. Nearly all the outcomes identified in this study were downgraded due to the evidence being generated largely from samples comprising males. Although an increase in female samples would help improve the certainty of the evidence, it likely would not impact the magnitude of the observed effects because the benefits of CRF were similar for males and females in our study (see online supplemental figures 1,2) and other large cohort studies.47 There is also a need for higher-quality studies with larger sample sizes in clinical populations, as many of the outcomes were downgraded due to primary studies with high risk of bias, small sample sizes (<4000 participants) and inconsistencies in the measurement of CRF across studies. Improving the evidence for CRF in clinical populations remains an important research gap. For instance, outcomes in clinical populations with a serious or very serious risk of bias were often rated this way due to a lack of adequate control for confounding, including a lack of adjustment for age, sex and body mass.

In addition to the need for higher-quality studies with larger samples in more diverse populations, including females, we did not identify any systematic reviews that explored the association between CRF and, for example, breast cancer48 or mental health outcomes beyond incident depression and dementia. These outcomes present important areas for future work. Finally, future studies would benefit from repeated longitudinal measures of CRF to further establish causality.

Implications for clinical practice

This study further demonstrates the importance of including CRF measurement in regular clinical practice. For every 1-MET (3.5 mL/kg/min) higher level of CRF, we identified substantial reductions in the risk of all-cause, CVD and cancer mortality. We also identified significant reductions in the risk of incident hypertension, heart failure, stroke, atrial fibrillation and type 2 diabetes per 1-MET higher level of CRF. For most, a 1-MET higher level of CRF is attainable through a regular aerobic exercise programme. For example, in a large population-based observational study of over 90 000 participants, nearly 30% were able to increase their CRF by 1-MET (median follow-up was 6.3 years) without intervention.49 However, for some, improvements as small as 0.5-METs may substantially benefit health.50 51

Given the strength of the predictive utility of CRF across many health outcomes, CRF would be a valuable risk stratification tool in clinical practice. Furthermore, the predictive strength of CRF is maintained regardless of age, sex and race.47 Through regular CRF measurement, clinicians could better identify patients at greater risk of premature mortality, prompting targeted exercise prescription. Improvements in CRF through regular physical activity result in a proportional reduction in mortality risk, regardless of the presence of other major risk factors such as higher body mass index, hypertension, type 2 diabetes, dyslipidaemia or smoking.49 There is an important need for clinical and public health guidelines on the assessment of CRF, the interpretation of results and the MCID across age, sex and clinical populations.

Strengths and limitations

Our paper has several strengths, including a focus on pooled meta-analyses from cohort studies, assessment of the certainty of the evidence using a modified GRADE approach, and an evaluation of systematic review quality using AMSTAR2. Our study also identifies gaps where new evidence is needed across a broad range of health outcomes. However, this study is not without limitations. As in any overview, the quality of the data is limited by that of the included papers. In our case, heterogeneity was high for many of the included meta-analyses and was often not explained by subgroup analyses. We also identified very low-to-low certainty of the evidence for most outcomes, suggesting the need for higher-quality studies in this research area, including adequate adjustment for confounding and greater representation of females. The evidence was also limited to studies examining associations between a single measure of CRF and prospective health outcomes.

Conclusion

Our findings showed that high CRF is strongly associated with a lower risk of premature mortality and of incident chronic conditions (ie, hypertension, heart failure, stroke, atrial fibrillation, dementia and depression), and with a better prognosis in those with existing chronic conditions. The consistency of the evidence across a variety of health outcomes demonstrates the importance of CRF and the need to incorporate this measure into routine clinical and public health practice. Future studies should focus on outcomes with limited evidence and those where the certainty of the evidence was rated as very low, by improving study quality.

Data availability statement

Data are available on reasonable request.

Ethics statements

Patient consent for publication

Not applicable.

Acknowledgments

We would like to acknowledge the support of Valentine Ly, MLIS, Research Librarian at the University of Ottawa for her help with translating and conducting the search strategy in CINAHL and SPORTDiscus. We would also like to acknowledge the Health Library at Health Canada and the Public Health Agency of Canada for their support in constructing and carrying out the search strategy for MEDLINE, Embase and Scopus. The PRESS peer-review of the search strategy was carried out by Shannon Hayes, MLIS, research librarian, from the Health Library at Health Canada and the Public Health Agency of Canada. We would also like to thank Joses Robinson and Iryna Demchenko for their help with the paper.


Raspberry Pi 5 vs Intel N100 mini PC comparison - Features, Benchmarks, and Price


The Raspberry Pi 5 Arm SBC is now powerful enough to challenge some Intel systems in terms of performance, while Intel has made the Intel Alder Lake-N family, notably the Intel Processor N100, inexpensive and efficient enough to challenge Arm systems when it comes to price, form factor, and power consumption.

So we’ll try to match the Raspberry Pi 5 to typical Intel Processor N100 mini PCs with a comparison of features/specifications, performance (benchmarks), and pricing for different use cases. That’s something I’ve been wanting to look into for a while, but I was busy with reviews and other obligations (Hello, Mr. Taxman!), and this weekend I had some spare time to carry out the comparison.


Raspberry Pi 5 vs Intel N100 mini PC specifications

I’ll start by comparing the specifications of the Raspberry Pi 5 against those of typical Intel Processor N100-based mini PCs, also mentioning optional features that come at extra cost.

Some remarks:

  1. Intel N100 systems with DDR4/DDR5 usually rely on one (replaceable/upgradeable) SO-DIMM module while LPDDR5 is soldered on the main board. Raspberry Pi 5 always comes with soldered-on memory.
  2. Not all USB-C ports found on Alder Lake-N mini PCs support DisplayPort Alt mode; it depends on the model.
  3. Some Intel N100 SBCs, such as the AAEON UP 7000, provide a GPIO header, but I wanted to focus on the features of typical mini PCs in this post. It’s also possible to add GPIO headers through a USB adapter.
  4. The Blackview MP80 mini PC is not powered by an Intel Processor N100, but by the similar Intel N95 or N97 Alder Lake-N processor, and was used to show it’s possible to get a really small x86 mini PC. Most are larger than that, and the Raspberry Pi 5 should be a better option for space-constrained applications.

The Raspberry Pi 5 was tested with Raspberry Pi OS Desktop 64-bit, and I selected the benchmark results for the GEEKOM Mini Air12 mini PC on Ubuntu 22.04. I haven’t run Geekbench 6 on my Raspberry Pi 5, so I took one of the results from the Geekbench 6 database for a Raspberry Pi 5 at stock (2.4 GHz) frequency. The storage read speed for the Raspberry Pi 5 was measured with a 128GB MAKERDISK NVMe SSD and the PCIe interface configured as PCIe Gen3 x1.

The Raspberry Pi 5 can be overclocked to get more performance, and some people managed to achieve 1,033 points (single-core) and 2,146 points (multi-core) at 3.10 GHz, but that is still lower than an Intel Processor N100 mini PC, and overclocking may not work on all Raspberry Pi 5 boards.

The Intel N100-powered GEEKOM Mini Air12 is faster for most tasks, and in some cases almost three times as fast (Speedometer 2.0 in Firefox), except for memset (similar results) and OpenSSL AES-256, where the higher sustained single-core CPU frequency helps the Arm SBC.

Raspberry Pi 5 vs Intel N100 mini PC price comparison

This one will be tough as everybody has different requirements, local or import taxes, and so on. But I’ll first calculate the price of a minimum working system and the Raspberry Pi 5 equivalent of the MINIX Z100-0dB mini PC with 8GB RAM, a 256GB NVMe SSD, 2.5GbE, WiFi 6, and a fanless enclosure.

For the minimum working configuration, we’ll assume the user wants a Linux or Windows system that boots to the OS, and connects to the network and a display without any other specific requirements. The Raspberry Pi 5 4GB is good enough for this along with the active cooler, a 5V/5A power adapter (although 5V/3A might do too), and a microSD card. I also searched for the cheapest N100 mini PC I could find with storage and memory: the CHUWI Larkbox X (12GB RAM, 512GB SATA SSD) sold for $125.93 with free shipping on Aliexpress at the time of writing.

I tried to select low-cost items for the Raspberry Pi 5 and considered adding an enclosure unnecessary for the minimum configuration (it would add $5 to $20). Taxes and handling fees are not considered for either device, and the shipping fee is not included for the Raspberry Pi 5 kit which ends up being about $33 cheaper. The Larkbox X mini PC delivers higher performance and offers more memory, dual Gigabit Ethernet, and WiFi 6. The Raspberry Pi 5 remains the ideal candidate for use cases requiring GPIO, low power consumption, and a small size.
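
If you want to run the same kind of comparison with current prices, a tiny script like the one below makes the arithmetic explicit. The component names and prices in the kit are hypothetical placeholders to be replaced with current listings, not the figures used in this article; only the CHUWI Larkbox X price is taken from the text above.

```python
# Hypothetical example of totalling a Raspberry Pi 5 kit and comparing it with
# a mini PC listing; replace the placeholder prices with current quotes.
pi5_kit = {
    "Raspberry Pi 5 4GB": 60.00,   # placeholder price
    "Active cooler": 5.00,         # placeholder price
    "5V/5A power adapter": 12.00,  # placeholder price
    "32GB microSD card": 8.00,     # placeholder price
    "Micro HDMI cable": 5.00,      # placeholder price
}
mini_pc_price = 125.93  # CHUWI Larkbox X price quoted in the article

pi5_total = sum(pi5_kit.values())
print(f"Raspberry Pi 5 kit: ${pi5_total:.2f}")
print(f"Mini PC:            ${mini_pc_price:.2f}")
print(f"Difference:         ${mini_pc_price - pi5_total:+.2f}")
```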

Now let’s switch to another user who will wonder “What year is this?!” when hearing or reading the words “gigabit Ethernet”, “WiFi 5”, “4GB RAM”, and/or “microSD card”. He won’t allow any noisy fan to pollute his room either, and he’d be fine with a fanless mini PC like the MINIX Z100-0dB with 8GB RAM, currently sold for $220.71 on Amazon (excluding taxes) with an 8% discount coupon selectable before ordering.


Let’s see what happens if we try to reproduce this setup with a Raspberry Pi 5 8GB. We’ll still need the 5V/5A power adapter and a micro HDMI cable, but we’ll replace the active cooler with a fanless metal case that can still take HAT expansion boards, and the microSD card with an NVMe SSD on an M.2 PCIe HAT. We’ll also need a WiFi 6 USB 3.0 dongle and a 2.5GbE USB 3.0 dongle; HAT expansion boards could be daisy-chained to achieve the same result, but that would start to get messy and be more expensive.

The Raspberry Pi 5 system is still cheaper (by $20) before taking into account the shipping fees, which may add up when purchasing from multiple vendors. The EDATEC fanless case is also hard to get as it’s no longer for sale on Aliexpress, and finding another complete Raspberry Pi 5 case that takes a HAT+ expansion board is challenging. We’ve also created a bit of a monster with the HAT, and all four USB ports would be used in a typical system with a USB keyboard, a USB mouse, and our two USB 3.0 dongles for WiFi 6 and 2.5GbE. In that specific use case, I’d consider the Raspberry Pi 5 to be undesirable, and people would be better served by a mini PC. I reckon I’ve pushed the requirements a bit far with WiFi 6 and 2.5GbE, as I’d expect many people would be fine with the built-in gigabit Ethernet and WiFi 5 connectivity, in which case the Pi 5 could still be considered.

Final words

As one would expect, there’s no simple answer to the question “Which is better: a Raspberry Pi 5 SBC or an Intel N100 mini PC?” since it depends on the user’s specific requirements. The Raspberry Pi SBC was first introduced as cheap hardware for the education market, and I would recommend the Raspberry Pi 4 over the Raspberry Pi 5 for this purpose since it’s cheaper and does the job. The Raspberry Pi 5 is more suitable for projects that require extra performance while keeping the small form factor, GPIO header, and camera connectors. Intel Processor N100 mini PCs offer a better performance/price ratio as a general-purpose computer running Windows 11 or a Linux distribution such as Ubuntu, although you may save a few dollars by using a Raspberry Pi 5.

Jean-Luc Aufranc

Jean-Luc started CNX Software in 2010 as a part-time endeavor, before quitting his job as a software engineering manager, and starting to write daily news, and reviews full time later in 2011.

