
Ruby on (Guard)Rails


24 October 2024

I’ve worked on a few Ruby apps in my career at varying scales:

  • Homebrew (2009-present): created 2009, I started working on it ~5 months in and was maintainer #3.
  • AllTrails (2012-2013): created 2010, I was employee ~#8 and worked on their (smallish) Ruby on Rails application for ~1.5 years.
  • GitHub (2013-2023): created 2007, I was employee ~#232 and worked on their (huge) Ruby on Rails application for ~10 years.
  • Workbrew (2023-present): I cofounded Workbrew in 2023 and built the Workbrew Console Ruby on Rails application from scratch.

Over all of these Ruby codebases, there’s been a consistent theme:

  • Ruby is great for moving fast
  • Ruby is great for breaking things

What do I mean by “breaking things”?

nil:NilClass (NoMethodError)

If you’ve been a Ruby developer for any non-trivial amount of time, you’ve lost a non-trivial amount of your soul through the number of times you’ve seen this error. If you’ve worked with a reasonably strict compiled language (e.g. Go, Rust, C++, etc.) this sort of issue would be caught by the compiler and never make it into production. The Ruby interpreter, however, makes it very hard to actually catch these errors before runtime (so they often do make it into production).

This is when, of course, you’ll jump in with “well, of course you just need to…” but: chill, we’ll get to that. I’m setting the scene for:

🤨 The Solution

The solution to these problems is simple, just …

Actually, no, the solution is never simple and, like almost anything in engineering: it depends entirely on what you’re optimising for.

What I’m optimising for (in descending priority):

  • 👩‍💻 developer happiness: well, this is why we’re using Ruby. Ruby is optimised for developer happiness and productivity. There’s a reason many Ruby developers love it and have stuck with it even when it is no longer “cool”. Also, we need to keep developers happy because otherwise they’ll all quit and I’ll have to do it all myself. That said, there’s more we can do here (and I’ll get to that).
  • 🕺 customer/user happiness: they don’t care about Ruby or developers being happy. They care about having software that works. This means software where bugs are caught by the developers (or their tools) and not by customers/users. This means bugs that are found by customers/users are fixed quickly.
  • 🚄 velocity/quality balance: this is hard. It requires accepting that, to ship fast, there will be bugs. Attempting to ship with zero bugs means shipping incredibly slowly (or not at all). Prioritising only velocity means sloppy hacks, lots of customer/user bugs and quickly ramping up tech debt.
  • 🤖 robot pedantry, human empathy: check out the post on this topic. TL;DR: you want to try to automate everything that doesn’t benefit from the human touch.

The Specifics

Ok, enough about principles, what about specifics?

👮‍♀️ linters

I define “linters” as anything that’s going to help catch issues in either local development or automated test environments. They are good at screaming at you so humans don’t have to.

  • 👮‍♀️ rubocop: the best Ruby linter. I generally try to enable as much as possible in Rubocop and disable rules locally when necessary.
  • 🪴 erb_lint: like Rubocop, but for ERB. Helps keep your view templates a bit more consistent.
  • 💐 better_html: helps keep your HTML a bit more consistent through development-time checks.
  • 🖖 prosopite: detects N+1 queries in development and test environments.
  • 🪪 licensed: ensures that all of your dependencies are licensed correctly.
  • 🤖 actionlint: ensures that your GitHub Actions workflows are correct.
  • 📇 eslint: when you inevitably have to write some JavaScript: lint that too.

I add these linters to my Gemfile with something like this:

group :development do
  gem "better_html"
  gem "erb_lint"
  gem "licensed"
  gem "rubocop-capybara"
  gem "rubocop-performance"
  gem "rubocop-rails"
  gem "rubocop-rspec"
  gem "rubocop-rspec_rails"
end

If you want to enable/disable more Rubocop rules, remember to do something like this in your .rubocop.yml:

require:
  - rubocop-performance
  - rubocop-rails
  - rubocop-rspec
  - rubocop-rspec_rails
  - rubocop-capybara

AllCops:
  TargetRubyVersion: 3.3
  ActiveSupportExtensionsEnabled: true
  NewCops: enable
  EnabledByDefault: true

Layout:
  Exclude:
    - "db/migrate/*.rb"

Note, this will almost certainly enable things you don’t want. That’s fine, disable them manually. Here you can see we’ve disabled all Layout cops on database migrations (as they are generated by Rails).
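For example, disabling a rule you disagree with (or scoping one away from generated or spec files) looks something like this in .rubocop.yml (the cops here are purely illustrative):

Style/Documentation:
  Enabled: false

Metrics/BlockLength:
  Exclude:
    - "spec/**/*_spec.rb"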

My approach for using linters in Homebrew/Workbrew/the parts of GitHub where I had enough influence was:

  • enable all linters/rules
  • adjust the linter/rule configuration to better match the existing code style
  • disable rules that you fundamentally disagree with
  • use safe autocorrects to get everything consistent with minimal/zero review
  • use unsafe autocorrects and manual corrections to fix up the rest with careful review and testing

When disabling linters, consider doing so on a per-line basis when possible:

# Bulk create BrewCommandRuns for each Device.
# Since there are no callbacks or validations on
# BrewCommandRun, we can safely use insert_all!
#
# rubocop:disable Rails/SkipsModelValidations
BrewCommandRun.insert_all!(new_brew_command_runs)
# rubocop:enable Rails/SkipsModelValidations

I also always recommend a comment explaining why you’re disabling the linter in this particular case.

🧪 tests

I define “tests” as anything that requires the developer to actually write additional, non-production code to catch problems. In my opinion, you want as few of these as you can get away with while still maximally exercising your codebase.

  • 🧪 rspec: the Ruby testing framework used by most Ruby projects I’ve worked on. Minitest is fine, too.
  • 🙈 simplecov: the standard Ruby code coverage tool. Integrates with other tools (like CodeCov) and allows you to enforce code coverage.
  • 🎭 playwright: dramatically better than Selenium for Rails system tests with JavaScript. If you haven’t already read Justin Searls’ post explaining why you should use Playwright: go do so now.
  • 📼 vcr: record and replay HTTP requests. Nicer than mocking because they test actual requests. Nicer than calling out to external services because they are less flaky and work offline.
  • 🪂 parallel_tests: run your tests in parallel. You’ll almost certainly get a huge speed-up on your multi-core local development machine.
  • 📐 CodeCov: integrates with SimpleCov and allows you to enforce and view code coverage. Particularly nice to have it e.g. comment inline on PRs with code that wasn’t covered.
  • 🤖 GitHub Actions: run your tests and any other automation for (mostly) free on GitHub. I love it because I always try to test and automate as much as possible. Check out Homebrew’s sponsors-maintainers-man-completions.yml for an example of a complex GitHub Actions workflow that opens pull requests to update files. Here’s a recent automated pull request updating GitHub Sponsors in Homebrew’s README.md.

I add these tests to my Gemfile with something like this:

group :test do
  gem "capybara-playwright-driver"
  gem "parallel_tests"
  gem "rspec-github"
  gem "rspec-rails"
  gem "rspec-sorbet"
  gem "simplecov"
  gem "simplecov-cobertura"
  gem "vcr"
end

In Workbrew, running our tests looks like this:

$ bin/parallel_rspec
Using recorded test runtime
10 processes for 80 specs, ~ 8 specs per process
....................................................................
....................................................................
....................................................................
....................................................................
....................................................................
....................................................................
....................................................................
......................
Coverage report generated to /Users/mike/Workbrew/console/coverage.
Line Coverage: 100.0% (6371 / 6371)
Branch Coverage: 89.6% (1240 / 1384)

Took 15 seconds

I’m sure it’ll get slower over time but: it’s nice and fast just now and it’s at 100% line coverage.

There have been (and will continue to be) many arguments over line coverage and what you should aim for. I don’t really care enough to get involved in this argument but I will state that working on a codebase with (required) 100% line coverage is magical. It forces you to write tests that actually cover the code. It forces you to remove dead code (code that’s either no longer used or cannot actually be reached by a user). It encourages you to lean into a type system (more on that, later).
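If you want to enforce that, SimpleCov can fail the test run when coverage drops. A minimal sketch (assuming RSpec and a spec/spec_helper.rb; the thresholds here are illustrative, pick ones that match your codebase):

# spec/spec_helper.rb
require "simplecov"

SimpleCov.start "rails" do
  enable_coverage :branch
  # Fail the suite if coverage drops below these thresholds.
  minimum_coverage line: 100, branch: 85
end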

🖥️ monitoring

I define “monitoring” as anything that’s going to help catch issues in production environments.

  • 💂‍♀️ Sentry (or your error/performance monitoring tool of choice): catches errors and performance issues in production.
  • 🪡 Logtail (or your logging tool of choice): logs everything to an easily queryable location for analysis and debugging.
  • 🥞 Better Stack (or your alerting/monitoring/on-call tool of choice): alerts you, waking you up if needed, when things are broken.

I’m less passionate about these specific tools than others. They are all paid products with free tiers. It doesn’t really matter which ones you use, as long as you’re using something.

I add this monitoring to my Gemfile with something like this:

group :production do
  gem "sentry-rails"
  gem "logtail-rails"
end

🍧 types

Well, in Ruby, this means “pick a type system”. My type system of choice is Sorbet. I’ve used this at GitHub, Homebrew and Workbrew and it works great for all cases. Note that it was incrementally adopted on both Homebrew and GitHub.

I add Sorbet to my Gemfile with something like this:

gem "sorbet-runtime"

group :development do
  gem "rubocop-sorbet"
    gem "sorbet"
    gem "tapioca"
end

group :test do
  gem "rspec-sorbet"
end

A Rails view component using Sorbet in strict mode might look like this:

# typed: strict
class AvatarComponent < ViewComponent::Base
  extend T::Sig

  sig { params(user: User).void }
  def initialize(user:)
    super
    @user = user
  end

  sig { returns(User) }
  attr_reader :user

  sig { returns(String) }
  def src
    if user.github_id.present?
      "https://avatars.githubusercontent.com/u/#{user.github_id}"
    else
      ...
    end
  end
end

In this case, we don’t need to check the type of user or whether it’s nil because we know from Sorbet it will always be a non-nil User. This means, at both runtime and whenever we run bin/srb tc (done in the VSCode extension and in GitHub Actions), we’ll catch any type issues. These are fatal in development/test environments. In the production environment, they are non-fatal but reported to Sentry.

Note: Sorbet will take a bit of getting used to. To get the full benefits, you’ll need to change the way that you write Ruby and “lean into the type system”. This means preferring e.g. raising exceptions over returning nil (or similar) and using T.nilable types. It may also include not using certain Ruby/Rails methods/features or adjusting your typical code style. You may hate it for this at first (I and many others did) but: stick with it. It’s worth it for the sheer number of errors that you’ll never encounter in production again. It’ll also make it easier for you to write fewer tests.
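As a hedged sketch of what “leaning into the type system” can look like (find_user/find_user! are illustrative, not from a real codebase): a method that can legitimately return nil declares it with T.nilable, and a bang variant raises instead so its callers never need a nil check.

# (assumes `extend T::Sig` in the enclosing class)
sig { params(email: String).returns(T.nilable(User)) }
def find_user(email)
  User.find_by(email: email)
end

sig { params(email: String).returns(User) }
def find_user!(email)
  find_user(email) || raise(ActiveRecord::RecordNotFound, "no User with email #{email}")
end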

TL;DR: if you use Sorbet in this way: you will essentially never see another nil:NilClass (NoMethodError) error in production again.

That said, if you’re on a single-developer, non-critical project, have been writing Ruby for a really long time and would rather die than change how you do so: don’t use Sorbet.

Well, I hear you cry, “that’s very easy for you to say, you’re working on a greenfield project with no legacy code”. Yes, that’s true, it does make things easier.

That said, I also worked on large, legacy codebases like GitHub and Homebrew that, when I started, were doing very few of these things and now are doing many of them. I can’t take credit for most of that but I can promise you that adopting these things was easier than you would expect. Most of these tools are built with incrementalism in mind.

Perfect is the enemy of good. Better linting/testing/monitoring and/or types in a single file is better than none.

🤥 Cheating

You may feel like the above sounds overwhelming and oppressive. It’s not. Cheating is fine. Set yourself strict guardrails and then cheat all you want to comply with them. You’ll still end up with dramatically better code and it’ll make you, your team and your customers/users happier. The key to success is knowing when to break your own rules. Just don’t tell the robots that.


CSS sprite sheet animations


Check out this demo first (Click it!):

Yes, it’s the Twitter heart button. This heart animation was done using an old technique called sprite sheets🡵.

On the web, sprite sheets are used mainly to reduce the number of HTTP requests by bundling multiple images together into a single image file. Displaying a sub-image involves clipping the sheet at the appropriate coordinates.

Sprite sheet / texture atlas of Minecraft blocks

The request-saving benefit has largely been made moot by HTTP/2, but sprite sheets have another purpose: animations! Besides loading performance, displaying animations is one of the primary uses of sprite sheets.

Characters w/ animations, sprite sheet by GrafxKid

It’s neat for small raster-based animations such as loading spinners, characters, icons, and micro-interactions.

How

Assuming you already have a sprite sheet image and coordinates in hand, all you need is a way to clip that image for display. There are a few ways to clip an image in CSS.

  • background-image (coordinates via background-position)
  • overflow: hidden with a nested <img> (coordinates via left, top on the nested element)
  • clip-path (coordinates via clip-path, left, top)

The left and top rules can be substituted for transform: translate(…).

The background-image way is the most convenient since you only need one element.

.element {
  background-image: url('heart.png');
  /* size of one frame */
  width: 100px;
  height: 100px;
  /* size of the whole sheet */
  background-size: 2900px 100px;
  /* coordinates of the desired frame (negated) */
  background-position: -500px 0px;
}

This is the sprite sheet for the heart animation from Twitter:

Heart animation sprite sheet from Twitter

Using this image, the code above produces a still image of the frame at (500,0) — the sixth frame.

Removing the clipping method reveals that it’s just a part of the whole sheet (this view will be fun when it’s actually animating):

If the sprite sheet wasn’t made to be animated, that is, if it was just a collection of multiple unrelated sub-images like the Minecraft example earlier, then the CSS rules above are all we need to know. That’s it.

Since this sprite sheet was made to be animated, that is, it contains animation frames, more needs to be done.

To animate this, we animate the background-position over each frame in the sequence, flashing each frame in quick succession.

 .element {
   background-image: url('heart.png');
   /* size of one frame */
   width: 100px;
   height: 100px;
   /* size of the whole sheet */
   background-size: 2900px 100px;
-  /* coordinates of the desired frame (negated) */
-  background-position: -500px 0px;
+  /* animate the coordinates */
+  animation: heartAnimation 2s steps(29, jump-none) infinite;
+}
+
+@keyframes heartAnimation {
+  from {
+    /* first frame */
+    background-position: 0px 0px;
+  }
+  to {
+    /* last frame */
+    background-position: -2800px 0px;
+  }
+}

Important: Note the steps()🡵 timing function in the animation rule above! This is required for the transition to land exactly on the frames.

Voilà.

And the view without clipping:

The exact parameters for the steps() function are a bit fiddly and it depends on whether you loop it or reverse it, but here’s what worked for the heart animation with 29 total frames.

animation-timing-function: steps(29, jump-none);

Using any other timing function results in a weird smooth in-betweening movement like this:

Remember, steps()🡵 is crucial!

Why not APNG?

For autoplaying stuff like loading spinners, you might want plain old GIFs or APNGs🡵 instead.

But we don’t have tight control over the playback with these formats.

With sprite sheets, we can pause, reverse, play on hover, change the frame rate…

…make it scroll-driven,

… or make it interactive!

Interactivity

The nice thing about this being in CSS is that we can make it interactive via selectors.

Continuing with the heart example, we can turn it into a stylised toggle control via HTML & CSS:

 .element {
   background-image: url('heart.png');
   /* size of one frame */
   width: 100px;
   height: 100px;
   /* size of the whole sheet */
   background-size: 2900px 100px;
+ }
+
+.input:checked ~ .element {
   /* animate the coordinates */
-  animation: heartAnimation 2s steps(29, jump-none) infinite;
+  animation: heartAnimation 2s steps(29, jump-none) forwards;
 }
 
 @keyframes heartAnimation {
   from {
     /* first frame */
     background-position: 0px 0px;
   }
   to {
     /* last frame */
     background-position: -2800px 0px;
   }
 }

Or use the new :has(:checked).

Additionally, CSS doesn’t block the main thread. In modern browsers, the big difference between CSS animations and JS-driven animations (i.e. requestAnimationFrame loops) is that the JS one runs on the main thread along with event handlers and DOM operations, so if you have some heavy JS (like React rerendering the DOM), JS animations would suffer along with it.

Of course, JS could still be used, if only to trigger these CSS sprite animations by adding or removing CSS classes.

Why not animated SVGs?

If you have a vector format, then an animated SVG🡵 is a decent option!

This format is kinda hard to author and integrate though — one would need both animation skills and coding skills to implement it. Some paid tools apparently exist to make it easier?

And Lottie? That 300-kilobyte library? Uh, sure, if you really need it.

Limitations of sprite sheets

  • The sheet could end up as a very large image file if you’re not very careful.
  • It’s only effective for the narrow case of small frame-by-frame raster animations. Beyond that, better options may exist, such as animated SVGs, the <video> tag, the <canvas> tag, etc.
  • How do you support higher pixel densities? Media queries on background-image? <img> with srcset could work, but the coordinates are another matter. But it could be solved generally with CSS custom properties and calc.
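On that last point, a sketch of the custom-property approach (reusing the heart example’s dimensions) could look like this:

.element {
  --frame: 5;
  --frame-size: 100px;
  background-image: url('heart.png');
  width: var(--frame-size);
  height: var(--frame-size);
  background-size: 2900px 100px;
  /* derive the coordinates from the frame index and frame size */
  background-position: calc(var(--frame) * var(--frame-size) * -1) 0px;
}

A higher-density sheet then only needs the image URL (and, if its layout dimensions differ, the custom properties and background-size) overridden in a resolution media query; calc() keeps the coordinates in sync.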

Gallery

I actually had to implement these hover animations via sprite sheets at work.

Behind the scenes


React Native, and "the native feel"


When I joined Bluesky in March 2024, our flagship app was in a strange place. Written in React Native, it was a real "native" app on paper, yet it didn't feel native.

This is not uncommon, yet it’s hard to put into words what that means. Detractors will point the blame at React Native itself, which I think is unfair—React Native is uniquely placed to excel at this, unlike, say, Flutter. That said, feeling native does not come for free—it requires a lot of hard work and attention to detail.

For the past 6 months, we've undertaken various projects to improve the native feel, and people are starting to notice:

Jane Manchun Wong: This app certainly feels a lot more “native” compared to last year :)

Thanks Jane!

Getting here was no small task, and it took a lot of relatively small changes across all parts of the app.

Thinning out the borders

Coming from a web background, you might instinctively set the borderWidth to 1px when you want a border. Surprisingly, that's wrong—you’re likely looking for StyleSheet.hairlineWidth! It may seem like a small change, but when your app has lots of borders—between posts, around quote-posts, around link cards—they add up. We changed every single border in the app to hairline width and it was an unexpectedly huge improvement. Nearly imperceptibly, the entire app became more detailed and precise and reduced the visual clutter.
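The change itself is as mechanical as it sounds; roughly this (the colour is a placeholder):

import {StyleSheet} from 'react-native'

const styles = StyleSheet.create({
  post: {
    // was: borderBottomWidth: 1
    borderBottomWidth: StyleSheet.hairlineWidth,
    borderBottomColor: '#d4dbe2',
  },
})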

Before/After: feeds screen

Before/After: notifications

Beware though—you might be expecting 1px borders in unexpected places. We had a couple of places where we were absolutely positioning items with a 1px inset—to account for the border—which caused a gap when the border got thinner. So double-check after you replace them all!

Native sheets

A distinctive characteristic of iOS apps is the "sheet" presentation—a modal view where the backdrop drops back a little. It feels slick, and really native—you don't see websites doing this.

For the post composer, we were originally absolutely positioning a view above the app with a manual fade/slide animation. It was fine, but didn't feel very special. We swapped it out with a real native sheet using React Native's built-in <Modal> component with presentationStyle="pageSheet". This greatly improved the feel of the composer, and gained a new feature for free: swipe to dismiss!
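A rough sketch of that swap (the component and state names are illustrative, not our exact code):

import {Modal} from 'react-native'

<Modal
  visible={composerOpen}
  animationType="slide"
  // iOS: presents as a native sheet, pushing the app backdrop back
  presentationStyle="pageSheet"
  onRequestClose={closeComposer}>
  <ComposerScreen />
</Modal>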

Sadly, this was not without problems. First, it didn't work on Android. Trying to auto-focus the text input was straight-up failing, so we went back to the original implementation there—which was super easy, thanks to platform-split files. But there was also a lot of strange, awkward behaviour which made us reticent to use this in other places, and some of the more interesting features of native sheets are not available (medium/custom detents and swiping to dismiss without resistance come to mind).

Replacing React Native Bottom Sheet

We soon found ourselves with a reason to go further. Our standard <Dialog> component was using React Native Bottom Sheet, causing us a lot of issues. The worst offender was broken accessibility on Android, but the problems (and hacky fixes) just kept on piling up and up. We spent months tracking down a bug where dismissing a sheet wrong would cause the Android back button to stop working. Content in the bottom sheet requires special components, especially TextInput. For reasons I can't remember, we ended up using Discord's fork. RNBS is a very common library that you will likely find in most React Native apps but was causing us a huge amount of pain and generating a lot of complexity.

In just a couple of days, Hailey sat down and implemented a completely native bottom-sheet implementation on both iOS and Android, seamlessly swapping out our existing implementation with minimal changes needed in terms of using the <Dialog>. This incredible engineering work massively improved the native feel of the app, fixed all our accessibility issues and various janky fixes that had accumulated in the JS-based sheet implementation, and added a feature that even the native sheets don't have out of the box—dynamically sizing the sheets based on content height. We haven't (yet?) published this implementation as a separate package, but True Sheet is similar—I thoroughly recommend checking it out, especially if RNBS is giving you trouble.

Here's a before and after comparison of the "Add App Password" dialog. Which one looks more native to you?

Comparison of the old and new add App Password dialogs

Huge props to Hailey for landing that one—I'm still in awe of how good our new sheets are.

Three things you should know about native sheets

1. Status bar colours

When you open a sheet, make sure that the status bar style is set to "light", otherwise the status bar won't be visible when the black background is revealed. We do this by keeping a count of the number of active sheets, and setting it to "light" if it's greater than zero. We also increment this count when opening a system sheet, like the image picker, since that has the same presentation!
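A minimal sketch of that counting approach (an illustration, not our exact implementation):

import {StatusBar} from 'react-native'

let activeSheetCount = 0

export function onSheetOpened() {
  activeSheetCount++
  StatusBar.setBarStyle('light-content')
}

export function onSheetClosed() {
  activeSheetCount = Math.max(0, activeSheetCount - 1)
  if (activeSheetCount === 0) {
    // restore whatever your default status bar style is
    StatusBar.setBarStyle('default')
  }
}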

2. React Navigation

Native sheets can also be used via React Navigation—it lets you set the presentation of a screen to formSheet or pageSheet. This is powerful since you can nest a stack navigator within a sheet. I used this a bunch when I was working on Graysky, and it's a great pattern. It's also super easy to do with Expo Router by setting a group to use sheet presentation.

3. iPad

We don't support iPad in the Bluesky app yet but be wary that iPad sheets can display very differently. For large iPads, custom detents are not supported, and the status bar needs to stay as-is since it doesn't reveal the background. The worst part is, Platform.isPad doesn't always help detect this change in behaviour since the iPad Mini will still display sheets like on iOS! I have yet to find a good answer for this; part of why we've never enabled iPad support.

Animations, animations, animations

It's a bit of a no-brainer that animations make apps feel more polished, but when it comes to React Native apps specifically, it's often one of the biggest factors in the elusive sense of "feeling native". SwiftUI apps naturally come with a lot of little animations, but in React Native you've got to do all of it by hand. This is a large contributing factor in what I'd call the "webby" feel of a lot of React Native apps.

Animations are a huge topic; I could talk about the new animation for when you like a post or the new swipe-to-dismiss interactions on the chat screen. That said, there’s plenty of low-hanging fruit that’s simple to take advantage of and can drastically improve the perceived quality of your app.

Layout animations

Unlike React Native's LayoutAnimation.configureNext(), which is hard to fire correctly and can often start animating things unintentionally, React Native Reanimated's layout animations are very easy to use and slot naturally into your app.

If you want something to move around smoothly, it's as simple as:

<Animated.View layout={LinearTransition} />

If you want something to appear and disappear smoothly, you’d write:

<Animated.View entering={FadeIn} exiting={FadeOut} />

Sprinkling those across your app can hugely improve its feel. We use this in the composer, for example: if there's a problem with your post, you get a little banner at the top. The component structure looked somewhat like this (simplified):

<Container>
  <Header onSubmit={...} />
  {error && <ErrorBanner error={error} />}
  <ScrollView>
    <TextInput />
    <MediaPreview />
    {/* etc */}
  </ScrollView>
</Container>

When an error happened, it would simply snap into place. This is somewhat fine for the banner, but pretty jarring for the rest of the content in the scrollview to suddenly snap down by 50 or so pixels.

Layout animations to the rescue:

<Container>
  <Header onSubmit={...} />
  {error && (
    <Animated.View entering={FadeIn} exiting={FadeOut}>
      <ErrorBanner error={error} />
    </Animated.View>
  )}
  <Animated.ScrollView layout={LinearTransition}>
    <TextInput />
    <MediaPreview />
    {/* etc */}
  </Animated.ScrollView>
</Container>

Now, the ScrollView smoothly slides down to make space for the banner, which fades in smoothly.

Reanimated's LayoutAnimationConfig component is very helpful for cases where you want elements to animate when things change, but not when the screen first appears.

Squishy buttons

Animations will often make something feel more polished, but what makes a particular impact in native apps is a feeling of tactility. Haptics is one part of that, but another is "squishiness". Used sparingly, a scale-on-press animation can make buttons feel very dynamic. We use these in a few places, most prominently on the bottom navigation tabs. Compared to the old TouchableOpacity animation, the app feels so much more alive!

Here's the complete implementation of our PressableScale component. We can drop this into our higher-level Button component to replace its underlying Pressable to instantly make a button squishy!

import React from 'react'
import {Pressable, PressableProps, StyleProp, ViewStyle} from 'react-native'
import Animated, {
  cancelAnimation,
  runOnJS,
  useAnimatedStyle,
  useReducedMotion,
  useSharedValue,
  withTiming,
} from 'react-native-reanimated'

import {isTouchDevice} from '#/lib/browser'
import {isNative} from '#/platform/detection'

const DEFAULT_TARGET_SCALE = isNative || isTouchDevice ? 0.98 : 1

const AnimatedPressable = Animated.createAnimatedComponent(Pressable)

export function PressableScale({
  targetScale = DEFAULT_TARGET_SCALE,
  children,
  style,
  onPressIn,
  onPressOut,
  ...rest
}: {
  targetScale?: number
  style?: StyleProp<ViewStyle>
} & Exclude<PressableProps, 'onPressIn' | 'onPressOut' | 'style'>) {
  const reducedMotion = useReducedMotion()
  const scale = useSharedValue(1)

  const animatedStyle = useAnimatedStyle(() => ({
    transform: [{scale: scale.value}],
  }))

  return (
    <AnimatedPressable
      accessibilityRole="button"
      onPressIn={e => {
        'worklet'
        if (onPressIn) {
          runOnJS(onPressIn)(e)
        }
        cancelAnimation(scale)
        scale.value = withTiming(targetScale, {duration: 100})
      }}
      onPressOut={e => {
        'worklet'
        if (onPressOut) {
          runOnJS(onPressOut)(e)
        }
        cancelAnimation(scale)
        scale.value = withTiming(1, {duration: 100})
      }}
      style={[!reducedMotion && animatedStyle, style]}
      {...rest}>
      {children}
    </AnimatedPressable>
  )
}

This made the bottom bar feel so much better—I highly recommend this if yours is feeling rather static.

Native navigation

I touched on this briefly earlier, but leaning on React Navigation is a great way to get your app to feel more native. First off: use the native stack navigator. This needs no further explanation.

Next up: native headers. We don't yet do this at Bluesky, but resist the temptation to roll your own headers—it's hard to perceive, but you lose a lot of subtle interactions when you do that. The easiest way to tell if headers are native (on iOS) is to long press the back button, which lets you quickly jump back through previous pages. Native headers unlock things like dynamic blurry backdrops after you start scrolling, which feels extremely native—it's a distinctive look of SwiftUI apps as it does that automatically.

Finally, another thing we don't use at Bluesky but I wanted to give a quick shout-out to: Oskar Kwaśniewski's brand new React Native Bottom Tabs library. This uses real native tab bars and is super easy to slot into React Navigation or Expo Router. If you want to go all-in on the native look (which is a good idea!) I highly recommend you check it out.

Beware of fullScreenGestureEnabled

fullScreenGestureEnabled lets you swipe anywhere on the screen to go back. This feels really good, and a lot of other apps do it: Threads, Twitter (subsequently X), and Instagram, to name a few. React Navigation's implementation, however, leaves a lot to be desired and has the significant drawback of changing the back animation from the native one to the custom simple_push, which is far more janky. We ultimately decided to keep fullScreenGestureEnabled enabled for Bluesky, but it's a substantial trade-off and I hope we’ll find a better solution.

Conclusion

There’s no silver bullet for making your app "feel native". None of these changes moved the needle by themselves, but adding them all up makes a huge difference, as I hope you noticed over the past few months in the app. My one overarching piece of advice is: be curious. Look at other apps closely, really closely, ask yourself how they did that, and try to do it yourself. It's worth the effort, and it's a lot more fun than it sounds at first.

As for me, my next goal is improving the headers and navigation—I’m confident that that will bring a subtle, yet substantial improvement to the feel.

And if you haven't already, check out Bluesky - it's only going to get more native-er!

Many many thanks to Alice for helping with copyediting and digging up old Android APKs to take screenshots of!


Optimize resource loading with the Fetch Priority API


The Fetch Priority API indicates the relative priority of resources to the browser. It can enable optimal loading and improve Core Web Vitals.

When a browser parses a web page and begins to discover and download resources such as images, scripts, or CSS, it assigns them a fetch priority so it can download them in an optimal order. A resource's priority usually depends on what it is and where it is in the document. For example, in-viewport images might have a High priority, and the priority for early loaded, render-blocking CSS using <link>s in the <head> might be Very High. Browsers are pretty good at assigning priorities that work well but may not be optimal in all cases.

This page discusses the Fetch Priority API and the fetchpriority HTML attribute, which lets you hint at the relative priority of a resource (high or low). Fetch Priority can help optimize the Core Web Vitals.

Summary

A few key areas where Fetch Priority can help:

  • Boosting the priority of the LCP image by specifying fetchpriority="high" on the image element, causing LCP to happen sooner.
  • Increasing the priority of async scripts, using better semantics than the current most common hack (inserting a <link rel="preload"> for the async script).
  • Decreasing the priority of late-body scripts to allow for better sequencing with images.

Historically, developers have had limited influence over resource priority using preload and preconnect. Preload lets you tell the browser about critical resources you want to load early before the browser would naturally discover them. This is especially useful for resources that are harder to discover, such as fonts included in stylesheets, background images, or resources loaded from a script. Preconnect helps warm up connections to cross-origin servers and can help improve metrics like Time to first byte. It's useful when you know an origin but not necessarily the exact URL of a resource that will be needed.

Fetch Priority complements these Resource Hints. It's a markup-based signal available through the fetchpriority attribute that developers can use to indicate the relative priority of a particular resource. You can also use these hints through JavaScript and the Fetch API with the priority property to influence the priority of resource fetches made for data. Fetch Priority can also complement preload. Take a Largest Contentful Paint image, which, when preloaded, will still get a low priority. If it is pushed back by other early low-priority resources, using Fetch Priority can help the image load sooner.

Resource priority

The resource download sequence depends on the browser's assigned priority for every resource on the page. The factors that can affect priority computation logic include the following:

  • The type of resource, such as CSS, fonts, scripts, images, and third-party resources.
  • The location or order the document references resources in.
  • Whether the async or defer attributes are used on scripts.

Chrome prioritizes and sequences most resources roughly as follows (Blink priority first, with the equivalent DevTools priority in parentheses; the two highest tiers load during the layout-blocking phase, the second of them one resource at a time):

  • VeryHigh (Highest): the main resource, CSS (early**), fonts, XSL, XHR (sync)
  • High (High): CSS (late**), scripts (early** or not from the preload scanner), fonts (rel=preload), imports, images (in viewport), XHR/fetch* (async)
  • Medium (Medium): scripts (late**), the first 5 images > 10,000px²
  • Low (Low): scripts (async), other images, media (video/audio)
  • VeryLow (Lowest): CSS (media mismatch***), prefetch

The browser downloads resources with the same computed priority in the order they're discovered. You can check the priority assigned to different resources when loading a page under the Chrome Dev Tools Network tab. (Make sure you include the priority column by right-clicking the table headings and ticking that).

When priorities change, you can see both the initial and final priority in the Big request rows setting or in a tooltip.

When might you need Fetch Priority?

Now that you understand the browser's prioritization logic, you can tweak your page's download order to optimize its performance and Core Web Vitals. Here are some examples of things you can change to influence the priority of resource downloads:

  • Place resource tags like <script> and <link> in the order you want the browser to download them. Resources with the same priority are generally loaded in the order they are discovered.
  • Use the preload resource hint to download necessary resources earlier, particularly for resources that are not easily discovered early by the browser.
  • Use async or defer to download scripts without blocking other resources.
  • Lazy-load below-the-fold content so the browser can use the available bandwidth for more critical above-the-fold resources.

These techniques help to control the browser's priority computation, thereby improving performance and Core Web Vitals. For example, when a critical background image is preloaded, it can be discovered much earlier, improving the Largest Contentful Paint (LCP).

Sometimes these handles may not be enough to prioritize resources optimally for your application. Here are some of the scenarios where Fetch Priority could be helpful:

  • You have several above-the-fold images, but not all of them should have the same priority. For example, in an image carousel, only the first visible image needs a higher priority, and the others, typically offscreen initially can be set to have lower priority.
  • Images inside the viewport typically start at a Low priority. After the layout is complete, Chrome discovers that they're in the viewport and boosts their priority. This usually adds a significant delay to loading the critical images, like hero images. Providing the Fetch Priority in markup lets the image start at a High priority and start loading much earlier. In an attempt to automate this somewhat, the first five larger images are set to Medium priority by Chrome which will help, but an explicit fetchpriority="high" will be even better.

    Preload is still required for early discovery of LCP images included as CSS backgrounds. To boost your background images' priority, include fetchpriority='high' on the preload.

  • Declaring scripts as async or defer tells the browser to load them asynchronously. However, as shown in the priority table, these scripts are also assigned a "Low" priority. You might want to increase their priority while ensuring asynchronous download, especially for scripts that are critical for the user experience.
  • If you use the JavaScript fetch() API to fetch resources or data asynchronously, the browser assigns it High priority. You might want some of your fetches to run with lower priority, especially if you're mixing background API calls with API calls that respond to user input. Mark the background API calls as Low priority and the interactive API calls as High priority.
  • The browser assigns CSS and fonts a High priority, but some of those resources might be more important than others. You can use Fetch Priority to lower the priority of non-critical resources (note early CSS is render blocking so should usually be a High priority).

The fetchpriority attribute

Use the fetchpriority HTML attribute to specify download priority for resource types such as CSS, fonts, scripts, and images when downloaded using link, img, or script tags. It can take the following values:

  • high: The resource is a higher priority, and you want the browser to prioritize it higher than usual, as long as the browser's own heuristics don't prevent that from happening.
  • low: The resource is a lower priority, and you want the browser to deprioritize it, again if its heuristics let it.
  • auto: The default value, which lets the browser choose the appropriate priority.

Here are a few examples of using the fetchpriority attribute in markup, as well as the script-equivalent priority property.
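As a hedged illustration of the syntax (file names and URLs are placeholders):

<!-- link: e.g. a preload that should win over other preloads -->
<link rel="preload" as="image" href="hero.webp" fetchpriority="high">

<!-- img: e.g. a less important above-the-fold image -->
<img src="carousel-2.jpg" fetchpriority="low" alt="…">

<!-- script: e.g. a non-critical async script -->
<script src="analytics.js" async fetchpriority="low"></script>

And the script equivalent, the priority property on fetch():

fetch('/api/metrics', {priority: 'low'});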

Effects of browser priority and fetchpriority

You can apply the fetchpriority attribute to different resources as shown in the following table to increase or reduce their computed priority. fetchpriority="auto" (◉) in each row marks the default priority for that type of resource. (also available as a Google Doc).

The columns are the same as in the previous table (Blink VeryHigh/High/Medium/Low/VeryLow, DevTools Highest/High/Medium/Low/Lowest). The rows cover the main resource; CSS (early**, late**, media mismatch***); scripts (early** or not from the preload scanner, late**, async/defer); fonts and fonts with rel=preload; imports; images (in viewport after layout, the first 5 images > 10,000px², other images); media (video/audio); XHR (sync, deprecated) and XHR/fetch* (async); prefetch; and XSL. For each row, ◉ marks the default fetchpriority="auto" priority, ⬆ the priority with fetchpriority="high", and ⬇ the priority with fetchpriority="low".

fetchpriority sets relative priority, meaning it raises or lowers the default priority by an appropriate amount, rather than explicitly setting the priority to High or Low. This often results in High or Low priority, but not always. For example, critical CSS with fetchpriority="high" retains the "Very High"/"Highest" priority, and using fetchpriority="low" on these elements only lowers it to "High". Neither of these cases involves explicitly setting the priority to High or Low.

Use cases

Use the fetchpriority attribute when you want to give the browser an extra hint about what priority to fetch a resource with.

Increase the priority of the LCP image

You can specify fetchpriority="high" to boost the priority of the LCP or other critical images.
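For example (assuming hero.jpg is the LCP image):

<img src="hero.jpg" fetchpriority="high" alt="Hero image">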

The following comparison shows the Google Flights page with an LCP background image loaded with and without Fetch Priority. With the priority set to high, the LCP improved from 2.6s to 1.9s.

Lower the priority of above-the-fold images

Use fetchpriority="low" to lower the priority of above-the-fold images that aren't immediately important, for example offscreen images in an image carousel.

While images 2-4 are outside of the viewport, they may be considered "close enough" to be boosted to High and loaded anyway, even if a loading="lazy" attribute is added. fetchpriority="low" is therefore the correct solution for this.
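A sketch of that carousel markup (image paths are placeholders):

<ul class="carousel">
  <li><img src="img/carousel-1.jpg" fetchpriority="high"></li>
  <li><img src="img/carousel-2.jpg" fetchpriority="low"></li>
  <li><img src="img/carousel-3.jpg" fetchpriority="low"></li>
  <li><img src="img/carousel-4.jpg" fetchpriority="low"></li>
</ul>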

In an earlier experiment with the Oodle app, we used this to lower the priority of images that don't appear on load. It decreased page load time by 2 seconds.

Lower the priority of preloaded resources

To stop preloaded resources from competing with other critical resources, you can reduce their priority. Use this technique with images, scripts, and CSS.
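For example (the stylesheet name is a placeholder):

<link rel="preload" href="/css/below-the-fold.css" as="style" fetchpriority="low">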

Reprioritize scripts

Scripts your page needs to be interactive should load quickly, but shouldn't block other, more critical, render-blocking resources. You can mark these as async with high priority:
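For example (the script name is illustrative):

<script src="/js/live-chat.js" async fetchpriority="high"></script>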

You can't mark a script as async if it relies on specific DOM states. However, if it runs later on the page, you can load it with a lower priority:
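For example (again, the script name is a placeholder):

<script src="/js/comments-widget.js" fetchpriority="low"></script>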

This will still block the parser when it reaches this script, but will allow content before this to be prioritised.

An alternative, if the completed DOM is needed, is to use the defer attribute (deferred scripts run in order after the HTML is parsed, just before DOMContentLoaded), or even async at the bottom of the page.

Lower the priority for non-critical data fetches

The browser executes fetch with a high priority. If you have multiple fetches that might fire simultaneously, you can use the high default priority for the more important data fetches and lower the priority of less critical data.
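A sketch of what that looks like (the endpoints are placeholders):

// Data the user is actively waiting on: leave it at the default high priority.
fetch('/api/articles');

// Background work that shouldn't compete with it.
fetch('/api/related-articles', {priority: 'low'});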

Fetch Priority implementation notes

Fetch Priority can improve performance in specific use cases but there are some things to be aware of when using Fetch Priority:

  • The fetchpriority attribute is a hint, not a directive. The browser tries to respect the developer's preference, but it can also apply its own preferences for resource priority to resolve conflicts.
  • Don't confuse Fetch Priority with preloading:

    • Preload is a mandatory fetch, not a hint.
    • Preload lets the browser discover a resource early, but it still fetches the resource with the default priority. Conversely, Fetch Priority doesn't help with discoverability, but it does let you increase or decrease the fetch priority.
    • It's often easier to observe and measure the effects of a preload than the effects of a priority change.

    Fetch Priority can complement preloads by increasing the granularity of prioritization. If you've already specified a preload as one of the first items in the <head> for an LCP image, then a high Fetch Priority might not improve LCP significantly. However, if the preload happens after other resources load, a high Fetch Priority can improve LCP more. If a critical image is a CSS background image, preload it with fetchpriority = "high".

  • Load time improvements from prioritization are more relevant in environments where more resources compete for the available network bandwidth. This is common for HTTP/1.x connections where parallel downloads aren't possible, or on low bandwidth HTTP/2 or HTTP/3 connections. In these cases, prioritization can help resolve bottlenecks.

  • CDNs don't implement HTTP/2 prioritization uniformly, and similarly for HTTP/3. Even if the browser communicates the priority from Fetch Priority, the CDN might not reprioritize resources in the specified order. This makes testing Fetch Priority difficult. The priorities are applied both internally within the browser and with protocols that support prioritization (HTTP/2 and HTTP/3). It's still worth using Fetch Priority just for the internal browser prioritization independent of CDN or origin support, because priorities often change when the browser requests resources. For example, low-priority resources like images are often held back from being requested while the browser processes critical <head> items.

  • You might not be able to introduce Fetch Priority as a best practice in your initial design. Later in your development cycle, you can check the priorities being assigned to different resources on the page, and if they don't match your expectations, you can introduce Fetch Priority for further optimization.

Developers should use preload for its intended purpose—to preload resources not detected by the parser (fonts, imports, background LCP images). The placement of the preload hint will affect when the resource is preloaded.

Fetch Priority is about what priority a resource should be fetched with when it is fetched.

Tips for using preloads

Keep the following in mind when using preloads:

  • Including a preload in HTTP headers puts it before everything else in the load order.
  • Generally, preloads load in the order the parser gets to them for anything with Medium priority or higher. Be careful if you're including preloads at the beginning of your HTML.
  • Font preloads probably work best toward the end of the head or beginning of the body.
  • Import preloads (dynamic import() or modulepreload) should run after the script tag that needs the import, so make sure the script gets loaded or parsed first so it can be evaluated while its dependencies are loading.
  • Image preloads have a Low or Medium priority by default. Order them relative to async scripts and other low or lowest priority tags.

History

Fetch Priority was first experimented with in Chrome as an origin trial in 2018, and then again in 2021 using the importance attribute. At that time it was called Priority Hints. The interface has since changed to fetchpriority for HTML and priority for JavaScript's Fetch API as part of the web standards process. To reduce confusion, we now call this API Fetch Priority.

Conclusion

Developers are likely to be interested in Fetch Priority with the fixes in preload behavior and the recent focus on Core Web Vitals and LCP. They now have additional knobs available to achieve their preferred loading sequence.


Fit-to-Width Text: A New Technique


Low-Poly Image Generation using Evolutionary Algorithms in Ruby
