
Why your daily stand-ups don't work and how to fix them


Daily stand-ups are a classic example of learned helplessness. We all know they’re useless, but we tell ourselves “that’s just how things are” and do nothing about it.

These days, we do stand-ups because that’s what we’re told to, not because they solve any particular problems.

The software industry has been doing daily stand-ups for so long that it doesn’t remember why they exist. At some point along the way, stand-ups went from a solution to a meaningless dogmatic ritual.

Here are symptoms which indicate you’re doing your stand-ups in the wrong way, for the wrong reasons:

  1. Stand-ups take more than 15 minutes
  2. People talk about their work instead of talking about goals
  3. People stop showing up regularly
  4. People talk to their manager (or “scrum master”) instead of talking to their peers
  5. If the manager or “scrum master” can’t show up, the stand-up doesn’t happen

If your team is experiencing three or more of these symptoms, the diagnosis is clear: your stand-ups are useless.

In this post, I’ll explain the actual goal of a stand-up and why it’s a productive meeting to have, provided teams do it right. Furthermore, I’ll explain what “right” means and the nuance involved in tailoring the stand-up meeting to suit your needs. As usual in Software Engineering, there’s no such thing as “one size fits all”.

How did we get here?

As it happened with every single tenet of the Agile manifesto, we turned a set of principles into a set of prescriptive rules. We forgot that “our highest priority is to satisfy the customer through early and continuous delivery of valuable software”, and started thinking that being agile meant adopting rigid frameworks like Scrum and Extreme Programming. We turned our means into ends.

Why did we do that? We did it because it’s easier to blindly follow a set of rules than to understand the principles behind them and tune them to your goal.

The daily stand-up is perhaps the best example of a solution with solid principles turned into a pointless dogma. In fact, with daily stand-ups, we did even worse; we misunderstood the dogma!

Here’s what Scrum.org says the purpose of a daily stand-up is:

As described in the Scrum Guide, the purpose of the Daily Scrum is to inspect progress toward the Sprint Goal and adapt the Sprint Backlog as necessary, adjusting the upcoming planned work.

The Developers can select whatever structure and techniques they want, as long as their Daily Scrum focuses on progress toward the Sprint Goal and produces an actionable plan for the next day of work. This creates focus and improves self-management.

Now think about your daily stand-ups:

  • Do you ever adjust the backlog or the sprint’s goal because of them, or do you just tell people to “work harder” so that you don’t have to change the plan?
  • Do you focus on progress towards the sprint’s goal or people’s busyness?
  • Do you create an actionable plan to respond to new information?
  • Do these stand-ups incentivize teams to collaborate and self-manage, or do they instil a bureaucratic culture of fear?

What happened to stand-ups is that we shifted our focus from “getting things done” to “ensuring people are working”.

The truth is that many managers weaponized their stand-up so that they could keep people busy. These types of managers believe in efficiency rather than effectiveness, which also usually leads them to load their team to maximum capacity utilization thinking it will accelerate deliveries, when in fact it just slows their teams down.

A classic example of the weaponized stand-up is the stand-up at the very first minute of the work day.

“We have flexible working hours”, they say. Sure, but there’s also a stand-up meeting at exactly 8:30 AM to ensure everyone will be online early.

To fix our stand-ups, we must repurpose them. We must drive stand-ups so that they focus on our goals and adjust these goals as new information comes up.

Stand-up meetings can be valuable: they’re an excellent way of creating cadence for a low cost. When there’s a fixed time in everyone’s calendar every day, folks will organize themselves around it, reducing the overhead of scheduling meetings with busy people.

Furthermore, when a system produces synchronous preemptive feedback daily, problems will take a maximum of one day to be surfaced.

Finally, stand-ups help different functions align their schedules around a goal, improve your metrics’ precision, and incentivize self-management.

How to make stand-ups work

Here’s what I recommend teams do to improve their stand-ups:

  1. Stop rambling. Go through a Kanban board.
  2. Prioritize ageing issues.
  3. Focus on blockers.
  4. Invite the entire team, including PMs and Designers.
  5. Move detailed discussions asynchronously.
  6. Incentivize self-management and instil psychological safety.

Except for the last item, I don’t consider the others mandatory. These are recommendations.

Now, I’ll explain why each recommendation usually works and their caveats.

Stop rambling. Go through a Kanban board

What did you do yesterday? What are you going to do today? These are possibly the worst questions one could ask during the stand-up.

“How are things going?” is not a good question either. This question is the embodiment of MBWA (management by walking around), and it’s equally useless, although twice as loathsome.

These questions tell nothing about your progress towards the sprint’s goal. Instead, they create a culture of fear and waste everyone’s time, especially because they incentivize people to ramble to prove they’ve been productive.

Going through a Kanban board solves all of these problems. It incentivizes people to be concise because it focuses on the sprint’s goal, not on someone’s productivity. Instead of explaining what they’ve done and all the nitty gritty technical details, engineers focus on whether they need help and what they must do to move a particular item towards the right of the board: the “finished” column.

Furthermore, it incentivizes teams to track work which would otherwise be invisible. If a critical bug comes up, people will add it to the board so that they can ask others for help and alert everyone that there’s something concerning going on.

One last benefit of using a Kanban board is that it makes metrics more precise. In software development, our metrics are usually measured in “days”. Therefore, when you go through the board daily, you’ll update the tasks at least once a day, preventing the “I forgot to move it to done” situation.

Just be careful not to manage by metrics exclusively! Metrics help detect anomalies, but they don’t tell you anything about how to fix these anomalies or why they happened, although they can provide helpful insight.

Figures on productivity in the United States do not help to improve productivity in the United States. Measures of productivity are like statistics on accidents: they tell you all about the number of accidents in the home, on the road, and at the workplace, but they do not tell you how to reduce the frequency of accidents. — Deming, W. Edwards. Out of the Crisis, reissue (p. 13). MIT Press. Kindle Edition.

Prioritize ageing issues

Take a moment to look at the United States actuarial life table. As you can see, as age increases, the size of the population at that age obviously decreases.

The longer someone lives, the more of an outlier they are because they survived a larger number of fatal events that could’ve happened.

Similarly, the longer an issue lives, the more of an outlier it becomes. Therefore, the more attention it deserves.

An issue that has just started doesn’t deserve as much attention because it’s still within our process’ cycle time bounds. Why would we waste time discussing something which isn’t a problem?

Issues that live longer, on the other hand, are outliers. For them to have lived so long, there must be something inherently difficult about them, or one or more events preventing them from being completed.

By focusing your stand-ups on ageing issues, you’ll naturally stop wasting time on easy tasks and start investing time into solving the difficult ones.

That doesn’t mean you shouldn’t talk about issues which just started. It just means that ageing issues should raise an alert.

Once this alert is raised, the team should debate whether they need to clarify requirements, cut scope, or ask someone for the piece of input that’s missing to complete the task.

It’s also crucial for this to be a blameless process. If certain types of tasks keep taking longer, the team should discuss which processes or policies they should change to prevent the problem from happening again. Instilling fear in the name of “accountability” will just cause people to hide that which makes them look bad.

TIP: JIRA and most other project management software allows you to highlight issues above a particular age. Take advantage of that feature to make your stand-up’s Kanban board more intuitive.
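As a rough sketch of the idea, here is what an ageing-issue alert boils down to. The issue records and the seven-day threshold are made up for illustration; real issues would come from your tracker's API, and the threshold should be tuned to your team's usual cycle time.

```python
from datetime import date, timedelta

# Hypothetical issue records; in practice these come from your tracker.
issues = [
    {"key": "APP-101", "started": date(2022, 7, 1)},
    {"key": "APP-102", "started": date(2022, 7, 25)},
]

# Assumed threshold: tune it to your team's usual cycle time.
AGE_ALERT = timedelta(days=7)

def ageing(issues, today):
    """Return issues older than the alert threshold, oldest first."""
    old = [i for i in issues if today - i["started"] > AGE_ALERT]
    return sorted(old, key=lambda i: i["started"])

for issue in ageing(issues, date(2022, 7, 28)):
    print(issue["key"])  # the outliers that deserve the stand-up's attention
```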

Focus on blockers

Focusing on blockers has a similar effect as focusing on ageing issues. It directs time investment to issues that matter.

If an issue has just started, it may not be worth it for an engineer to report anything except for a blocker or the possibility of a blocker appearing soon.

Once blockers are reported, the team should discuss what they must do to unblock the task. The longer a task remains blocked, the larger average cycle times get, and the less predictable teams become. Remember: finishing what you start makes teams more productive and predictable.

In my experience, as soon as teams start focusing on blockers and ageing issues, their status updates go from technobabble like:

I’m detecting nano wave frequency shift in the plasma gluon crystal. Today I’ll re-invert the ion transporter and replace the torsional neogenic casing. I wonder whether there’s a temporal anomaly in the dorsal bipolar thruster bracket. What does everyone think? [insert long meaningless discussion here]

To:

Everything is going well. No blockers.

How much better is that?

When you focus on goals and impediments to those goals, everyone wastes less time proving they’re productive and more time actually being productive.

Invite the entire team, including PMs and Designers

The folks you call Salespeople, Product Managers, Designers, and Software Engineers, I call a team. Product is a combination of all those functions. Therefore, if you want to build a successful product, you must do it in a multi-disciplinary way.

Tasks are not only blocked by complicated technical issues. They’re often blocked due to complex specifications, missing designs, and an unsound business opportunity.

Including folks with different roles in your stand-ups makes it easier to solve blockers which depend on functions other than software engineering — and that happens very often.

By having those people regularly attend your stand-up meetings, they can provide input on blockers, and they may even be able to tell you when a particular requirement isn’t worth meeting.

Suppose you’ve discovered that implementing a particular feature would take at least a few extra weeks due to its complexity, for example. In that case, these folks may be able to discuss which compromises could be made to enable an early delivery for a fraction of the cost without significantly impacting your bottom line.

Furthermore, it helps sales, marketing, and design adjust their priorities to match the software team’s speed. Multi-disciplinary stand-ups help everyone.

Please notice that I’m not saying all these people should always attend every stand-up. I’m just saying small multi-disciplinary teams tend to be more Agile because these stand-up meetings will shorten the feedback loop between the different teams.

Not everyone has to say something either. Sometimes just listening is a huge benefit, especially if you follow my previous advice, which will keep the meeting short and thus reduce its cost.

Move detailed discussions asynchronously

Whenever engineers fall into the trap of discussing deep technical issues, move those discussions to an asynchronous channel and include only those who are interested in the follow-up.

If they have to hear about uninteresting topics that don’t contribute to the progress towards the team’s goal, people will — rightfully — stop coming to the stand-up (or complain about it, as we do now).

Think of the stand-up as the moment to identify issues, not necessarily as the moment to solve them.

Responsible teams are self-organizing, which means that once an issue surfaces, they’ll go after what they need to solve it, provided they’re given guidelines on the desired outcome.

If something can be solved right on the spot, ideally in less than three or four minutes, that’s fine. The cost of scheduling another meeting and coordinating with different people is higher than using everyone’s time to solve the problem. If that’s not the case, it’s more costly to keep everyone listening to something that doesn’t matter to them.

Please notice I’m not saying you should exclude people from these discussions. In fact, it’s quite the opposite. I’m just saying you should not assume everyone is interested. Instead, you should assume they aren’t interested but give everyone the chance to participate in the follow-up meeting.

Incentivize self-management and instil psychological safety

This last piece of advice is the only one I consider to be mandatory to follow.

When folks get penalized for surfacing issues, they won’t do so. Consequently, problems will only appear when they become too costly or nearly impossible to solve. Instead, you should incentivize the team to raise issues as soon as they appear.

When raised early, problems are usually easy to solve. For example, altering a problematic specification is usually much less costly than scrambling to fix a critical bug in production.

By instilling a sense of psychological safety in the team, they’ll know that any questions are valid. Then, instead of assuming they know the answer and causing problems down the line, they’ll simply ask, given there’s no penalty for doing so.

Furthermore, I believe psychological safety is a worker’s right. No one deserves to live in fear and anxiety because of things they didn’t understand, particularly because most of the time it’s no fault of their own. Even if it were, it doesn’t matter. Product development is not about blame. Blaming others doesn’t help you achieve your goals. Answering questions and solving problems do.

Teams that feel psychologically safe are more innovative and productive and, therefore, are capable of self-organizing.

Psychological safety helps people focus on the goal at hand, which means they know what they must do and will go after solutions even when their manager or “scrum master” is not present.

The best architectures, requirements, and designs emerge from self-organizing teams. — The Agile Manifesto

Putting it all together

Daily stand-ups are a classic example of self-learned helplessness. We all know they suck. Yet, we don’t do anything about them. These days, we do stand-ups because that’s what we’re told to do, not because they solve any particular problems.

Stand-ups themselves are not a waste of time. They’ve become a waste of time because of how most teams do them.

Focusing on people’s work and using busyness as a proxy for productivity hinders a stand-up meeting’s benefits. Instead, stand-ups should be short, concise, and focus on the team’s collective goal.

When done right, stand-ups diminish the time it takes to get feedback, reduce communication overhead, and allow different functions to align their priorities.

For teams to improve their stand-ups and consequently achieve their goals, I recommend that they:

  1. Stop rambling and go through a Kanban board instead.
  2. Prioritize ageing issues.
  3. Focus on blockers.
  4. Invite the entire team, including PMs and Designers.
  5. Move detailed discussions asynchronously.
  6. Incentivize self-management and instil psychological safety.

Overview of JavaScript Data Grids and Spreadsheets


Load Testing: An Unorthodox Guide


Have you ever built a new web application, and shortly before launch your boss burst in with one of those dreaded questions:

  • Will your web app scale?

  • Can you handle 10000 simultaneous users?

  • What if you’re going to become the next Amazon?

Even worse, you’ll then open up AWS’s EC2 instance type page and be bombarded with a hundred different instance types, ranging from A1 to z1d. Which one to pick for your next Amazon?

Load-Testing will help you form an answer. But, interestingly, there is very little information out there on how to sensibly approach the questions mentioned above, apart from running a couple of random JMeter tests to tick off some launch-list check boxes.

So, rephrasing your boss’s question: How do you find out if your web app will scale?

You’ll learn a process to find an answer to that question in the remainder of this document.


It's time to say goodbye to these obsolete Python libraries


With every Python release, there are new modules being added and new and better ways of doing things get introduced. We all get used to using the good old Python libraries, but it’s time to say goodbye to os.path, random, pytz, namedtuple and many more obsolete Python libraries and start using the latest and greatest ones instead.
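To make the switch concrete, here is a sketch of the modern counterparts for the modules the post names. The paths and values are purely illustrative, and zoneinfo requires Python 3.9+.

```python
from pathlib import Path            # instead of os.path
import secrets                      # instead of random, for anything secret
from zoneinfo import ZoneInfo       # instead of pytz (Python 3.9+)
from dataclasses import dataclass   # instead of collections.namedtuple
from datetime import datetime

# os.path.join(...) becomes operator-based path building:
config = Path.home() / ".config" / "app.toml"

# random is fine for simulations, but tokens belong to secrets:
token = secrets.token_hex(16)

# pytz's localize() dance becomes a plain tzinfo argument:
now_berlin = datetime(2022, 8, 1, 12, 0, tzinfo=ZoneInfo("Europe/Berlin"))

# namedtuple becomes a typed dataclass with default values:
@dataclass
class Point:
    x: float
    y: float = 0.0

p = Point(1.5)
```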



It’s time to leave the leap second in the past


The leap second concept was first introduced in 1972 by the International Earth Rotation and Reference Systems Service (IERS) in an attempt to periodically update Coordinated Universal Time (UTC) due to imprecise observed solar time (UT1) and the long-term slowdown in the Earth’s rotation. This periodic adjustment mainly benefits scientists and astronomers as it allows them to observe celestial bodies using UTC for most purposes. If there were no UTC correction, then adjustments would have to be made to the legacy equipment and software that synchronize to UTC for astronomical observations.

As of today, since the introduction of the leap second, UTC has been updated 27 times.

While the leap second might have been an acceptable solution in 1972, when it made both the scientific community and the telecom industry happy, these days UTC is equally bad for both digital applications and scientists, who often choose TAI or UT1 instead.

At Meta, we’re supporting an industry effort to stop future introductions of leap seconds and stay at the current level of 27. Introducing new leap seconds is a risky practice that does more harm than good, and we believe it is time to introduce new technologies to replace it.

Leap of faith

One of many contributing factors to irregularities in the Earth’s rotation is the constant melting and refreezing of ice caps on the world’s tallest mountains. This phenomenon can be simply visualized by thinking about a spinning figure skater, who manages angular velocity by controlling their arms and hands. As they spread their arms the angular velocity decreases, conserving the skater’s angular momentum. As soon as the skater tucks their arms back in the angular velocity increases.

To visualize angular velocity change, think of a spinning figure skater.

So far, only positive leap seconds have been added. In the early days, this was done by simply adding an extra second, resulting in an unusual timestamp:

23:59:59 -> 23:59:60 -> 00:00:00

At best, such a time jump crashed programs or even corrupted data, due to weird timestamps in the data storage.
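One way to see why: most software cannot even represent second 60. Python's own datetime, for example, rejects it outright:

```python
from datetime import datetime

# The leap-second timestamp 23:59:60 is unrepresentable here;
# datetime raises ValueError ("second must be in 0..59"):
try:
    datetime(2016, 12, 31, 23, 59, 60)
    representable = True
except ValueError:
    representable = False

print(representable)  # False: callers must either crash or fudge the time
```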

With the Earth’s rotation pattern changing, it’s very likely that we will get a negative leap second at some point in the future. The timestamp will then look like this:

23:59:58 -> 00:00:00

The impact of a negative leap second has never been tested on a large scale; it could have a devastating effect on the software relying on timers or schedulers.

In any case, every leap second is a major source of pain for people who manage hardware infrastructures.

Smearing

More recently, it has become a common practice to “smear” a leap second by simply slowing down or speeding up the clock. There is no universal way to do this, but at Meta we smear the leap second over 17 hours, starting at 00:00:00 UTC based on the time zone data (tzdata) package content.

Leap second smearing at Meta.

Let’s break this down a bit.

We chose a 17-hour duration primarily because smearing is happening on Stratum 2, where hundreds of NTP servers perform smearing at the same time. To ensure that the difference between them is tolerable, the steps must be minimal. If the smearing steps are too big, NTP clients may consider some devices faulty and exclude them from quorum, which may lead to an outage.

The starting point at 00:00:00 UTC is also not standardized, and there are many possible options. For example, some companies begin smearing at 12:00:00 UTC the day before and smear over 24 hours; some do so two hours before the event, and others right at the edge.

There are also different algorithms on the smearing itself. There is a kernel leap second correction, linear smearing (when equal steps are applied), cosine, and quadratic (which Meta uses). The algorithms are based on different mathematical models and produce different offset graphs:

Kernel leap second smearing with NTPD.
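For intuition, the simplest of those algorithms, linear smearing, can be sketched in a few lines. The 17-hour window matches the duration described above, though note that Meta's production algorithm is the quadratic variant, not this one.

```python
# Linear smearing of one positive leap second over a 17-hour window.
SMEAR_SECONDS = 17 * 3600  # 61,200 one-second steps

def smear_offset(elapsed: float) -> float:
    """Fraction of the leap second applied `elapsed` seconds into the window."""
    if elapsed <= 0:
        return 0.0
    if elapsed >= SMEAR_SECONDS:
        return 1.0
    return elapsed / SMEAR_SECONDS

# Halfway through the window, half the leap second has been absorbed:
print(smear_offset(SMEAR_SECONDS / 2))  # 0.5
```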

The source of the leap indicator differs between GNSS constellations (e.g., GPS, GLONASS, Galileo, and BeiDou). In some cases, it is broadcast by satellites several hours in advance. In other cases, time is propagated in UTC with the leap already applied. In different constellations, the leap second value differs depending on when it was launched.

Difference in leap second values between GNSS constellations.

All of this requires nontrivial conversion logic inside the time sources, including our very own Time Appliance. Loss of a GNSS signal during such a sensitive time may lead to a loss of a leap indicator and a split-brain situation, which could lead to an outage.

The leap event is also propagated via the tzdata package months in advance, and for ntpd fans, via a leap second file distributed through the Internet Engineering Task Force (IETF) website. Not having a fresh copy of the file may mean a leap second is missed, causing an outage.

As already mentioned, the smearing is a very sensitive moment. If the NTP server is restarted during this period, we will likely end up with either “old” or “new” time, which may propagate to the clients and lead to an outage.

Because of such ambiguities, public NTP pools don’t do smearing, sometimes passing a leap indicator to the clients to figure this out. SNTP clients usually end up stepping the clock and dealing with the consequences described earlier. Smarter clients may choose a default strategy to smear the leap locally. All in all, this means big players like Meta, who perform smearing on public services, can’t join the public pools.

And even after the leap event, things are still at risk. NTP software needs to constantly apply offset compared to the source of time it’s using (GNSS, TAI, or Atomic Clock), and PTP software needs to propagate a so-called UTC offset flag in the announce messages. 

The negative impact of leap seconds

The leap second and the offset it creates cause issues all over the industry. One of the simplest ways to cause an outage is to bake in an assumption of time always going forward. Say we have a code like this:

start := time.Now()
// do something
spent := time.Now().Sub(start)

Depending on how spent is used, we may end up in a situation relying on a negative value during a leap second event. Such assumptions have caused numerous outages, and there are plenty of articles that describe these cases.
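A common defence, sketched here in Python, is to measure durations on a monotonic clock, which by definition never steps backwards:

```python
import time

# time.time() follows the wall clock and can step backwards when the
# system clock is corrected; time.monotonic() is guaranteed not to.
start = time.monotonic()
time.sleep(0.01)  # do something
spent = time.monotonic() - start

print(spent >= 0)  # always True, even across a clock step
```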

Back in 2012, Reddit experienced a massive outage because of a leap second; the site was inaccessible for 30 to 40 minutes. This happened when the time change confused the high-resolution timer (hrtimer), sparking hyperactivity on the servers, which locked up the machines’ CPUs.

In 2017, Cloudflare posted a very detailed article about the impact of a leap second on the company’s public DNS. The root cause of the bug that affected their DNS service was the belief that time cannot go backward. The code took the upstream time values and fed them to Go’s rand.Int63n() function. The rand.Int63n() function promptly panicked because the argument was negative, which caused the DNS server to fail.

Moving beyond the leap second

Leap second events have caused issues across the industry and continue to present many risks. As an industry, we bump into problems whenever a leap second is introduced. And because it’s such a rare event, it devastates the community every time it happens. With a growing demand for clock precision across all industries, the leap second is now causing more damage than good, resulting in disturbances and outages.

As engineers at Meta, we are supporting a larger community push to stop the future introduction of leap seconds and remain at the current level of 27, which we believe will be enough for the next millennium.

The post It’s time to leave the leap second in the past appeared first on Engineering at Meta.


Holograms, light-leaks and how to build CSS-only shaders


I might be understating it a bit, but WebGL is a big deal. You only need to spend five minutes on one of the many design awards sites to see site after site fully leaning into the power of canvas. Tools like threejs bring the power of 3D and GLSL shaders to the browser and, with that, a whole new level of visual effects.

This got me thinking though; why let JS have all the fun? With mix-blend-mode finally gaining wide browser support, we now have access to many of the most common shading techniques in CSS. With some choice images and a bit of careful layering it's possible to build some surprisingly high-quality effects without the need for introducing any JS dependencies.

Let's take a look at an example. As you scroll past the image below, the sunlight blooms a warm orange, before fading to a cool blue. You'll also briefly see some lens bokeh.

Ooooh shiny. Let's break it down.

What is a CSS 'shader'? permalink

Shaders in the WebGL world are complex GLSL scripts that determine how each individual pixel is rendered to the screen. We still don't have that level of control in our CSS so, at its most basic level, our CSS 'shader' is just an image with additional background-image layers above it. Yes, I'm taking a few liberties with the name but with careful use of gradients, masking, nesting and mix-blend-mode we can manage how these layers interact with both one-another and the image at the bottom of the stack.

For the sake of visualising this core structure, the 'shader' example above is set up with a few nested divs:

<div class="shader">
  <img src="tower.jpg" alt="Asakusa at dusk">
  <div class="shader__layer specular">
    <div class="shader__layer mask"></div>
  </div>
</div>

To keep each layer aligned with the image at the base, we keep the nested content positioned with the following CSS:

.shader {
  position: relative;
  overflow: hidden;
  backface-visibility: hidden;
}

.shader__layer {
  background: black;
  position: absolute;
  left: 0;
  top: 0;
  width: 100%;
  height: 100%;
  background-size: 100%;
}

OK, with the basic layout taken care of, let's take a look at the first layer of our effect - the lighting.

Simulating specularity. permalink

First of all we need to think about how light travels from light to dark across the surface of our image. We'll need an area of brightness where the light is at its most intense, falling off gradually into darkness as the light dissipates. We'll do this with a gradient.

.specular {
  background-image: linear-gradient(180deg, black 20%, #3c5e6d 35%, #f4310e, #f58308 80%, black);
}
If you imagine looking at a shiny surface, the light that reflects back is known as a specular reflection. How and where that highlight appears is dependent on the light source, but also on your viewing angle - the highlight moves with you. Whilst our gradient is looking lovely, it's all a bit static. We'll need to introduce a bit of movement to really sell the effect.

Fortunately, there's a vintage CSS Level 1 property that can help with that; setting .specular to background-attachment: fixed means that, as the page scrolls, the gradient remains locked to the browser's viewport. This not only brings some much needed motion to our shader, but also means that we can very roughly simulate the changing view-angle without reaching for JavaScript.

.specular {
  background-attachment: fixed;
  background-image: linear-gradient(180deg, black 20%, #3c5e6d 35%, #f4310e, #f58308 80%, black);
}

Great! Now, let's get to applying this lighting to our base image.

Know your blend modes. permalink

As the name implies, mix-blend-mode mixes the colours of each pixel in one element layer together with the layer directly below it. As with GLSL, CSS gives us a nice long list of options to choose from, and creating the right effect means knowing which blend is going to give the results we need. But what do these blend modes actually do? Before we get stuck in with our shader, let's take a quick look at the blend modes we'll be using.

Below you can see the images that we'll be using for these examples. On the left is the upper layer to be blended, and the right is the base image we'll be blending onto.

First off, let's look at a multiply blend. Multiply takes the colour of each pixel in the current layer and multiplies it with the colour of the pixel directly beneath it. In practice this means that darker colours in the current layer will obscure those in the layer below:

Setting the blend to screen takes the inverse of each pixel and multiplies them before inverting the result. This might sound complicated, but you can think of screen as the opposite of multiply - darker colours become transparent and only the lighter colours will show through to the layer below:

Lastly, color-dodge and color-burn are like taking multiply and screen into overdrive. Both modes divide the pixel colour on the base layer by the one in the current layer.

For color-dodge this means that highlights and midtones get blown out whilst dark tones have no effect at all. color-burn will boost shadows and darker midtones, whilst lighter tones have no effect at all.

In the image below, the left example is color-dodge and the right is color-burn.
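In code, the per-channel formulas for these four modes (channels in the 0 to 1 range, as defined in the W3C compositing spec) come out to just a few lines each, written here as a Python sketch:

```python
def multiply(base: float, blend: float) -> float:
    return base * blend

def screen(base: float, blend: float) -> float:
    # invert both, multiply, invert the result
    return 1 - (1 - base) * (1 - blend)

def color_dodge(base: float, blend: float) -> float:
    # brightens the base; pure black in the blend layer has no effect
    return 1.0 if blend >= 1 else min(1.0, base / (1 - blend))

def color_burn(base: float, blend: float) -> float:
    # darkens the base; pure white in the blend layer has no effect
    return 0.0 if blend <= 0 else 1 - min(1.0, (1 - base) / blend)
```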

Compositing layers. permalink

A diagram summarising how layers can be composited together to form a new image

Now we know what we're working with, it might seem like the next step would be to slap a blend mode onto our gradient, place it over our base image and call it a day. That will definitely work, but it won't reach the level of quality we're going for.

Blending the gradient directly to the base layer will mean that the lighting will be totally uniform across the image. Aside from chromed surfaces, that doesn't happen too often in nature. To really sell the effect we want to have control over the areas of the image where light can fall and where it can't. To simulate this, we can use a predominantly dark image to mask out our gradient. This technique of using a dark image to mask off areas of a lighter one is often known as a specular map.

You might be wondering how it will be possible to do this with CSS if mix-blend-mode only affects the pixels in the layer directly below it, and we can only set one blend mode at a time. This is where our HTML structure starts to shine.

<div class="shader">
<img src="tower.jpg" alt="Asakusa at dusk">
<div class="shader__layer specular">
<div class="shader__layer mask"></div>
</div>
</div>

By nesting div layers inside of one another, we can work outwards applying additional mix-blend-modes to each wrapping div. Essentially, this lets us add another mix-blend-mode to the output of the previous blend. This process of layering different blend modes to produce a final output is known as compositing.
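One way to think about this, sticking with the per-channel sketches from earlier: each nested layer's blend mode is a function applied to the output of the blends inside it, so nesting divs behaves like function composition:

```javascript
const multiply = (base, blend) => base * blend;
const screen = (base, blend) => 1 - (1 - base) * (1 - blend);

// Two nested layers: the inner pair is multiplied first, and that
// intermediate result is then screen-blended onto the outermost base.
const composite = (base, mid, top) => screen(base, multiply(mid, top));

console.log(composite(0.2, 0.9, 0.5)); // multiply → 0.45, then screen → ≈ 0.56
```

The blend modes here are arbitrary; the point is the ordering — innermost layers blend first, and each wrapping div blends that result with whatever sits beneath it.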

Let's give it a try. Using a suitably dark background-image we'll set mix-blend-mode: multiply on our .mask layer and throw away the parts of our gradient where we don't want light to show through.

Two images are combined with a multiply blend to form a mask
.mask {
mix-blend-mode: multiply;
background-image: url(/tower_spec.jpg);
}

.specular {
background-attachment: fixed;
background-image: linear-gradient(180deg, black 20%, #3c5e6d 35%, #f4310e, #f58308 80%, black);
}

Now that we have our specular map, we can apply the final lighting to the base image. We need to use one of the blend modes that ignores black and dark tones. This means that we should set our .specular layer to use mix-blend-mode: screen or mix-blend-mode: color-dodge. Either would work in this case, but because we want the highlights to blow out into a nice sunlight-bloom effect, we'll go for color-dodge.

Let's take a look:

two images are combined with color-dodge blend to form a bloom effect
.specular {
mix-blend-mode: color-dodge;
background-attachment: fixed;
background-image: linear-gradient(180deg, black 20%, #3c5e6d 35%, #f4310e, #f58308 80%, black);
}

And with that, the effect is complete! Here's the completed HTML and CSS:

<div class="shader">
<img src="tower.jpg" alt="Asakusa at dusk">
<div class="shader__layer specular">
<div class="shader__layer mask"></div>
</div>
</div>

<style>
.shader {
position: relative;
overflow: hidden;
backface-visibility: hidden;
}

.shader__layer {
background: black;
position: absolute;
left: 0;
top: 0;
width: 100%;
height: 100%;
background-size: 100%;
background-position: center;
}

.specular {
mix-blend-mode: color-dodge;
background-attachment: fixed;
background-image: linear-gradient(180deg, black 20%, #3c5e6d 35%, #f4310e, #f58308 80%, black);
}

.mask {
mix-blend-mode: multiply;
background-image: url(/tower_spec.jpg);
}
</style>

Let's take another look at the completed effect, but this time with the added ability to isolate each layer of the shader. Change the view mode dropdown to step through the effect and get a better idea of how the layers work together to produce the final image:

Taking it further.

In the example above, we used a greyscale version of the main image (with added scratches and bokeh) as a mask. That's a great way to add interest to an image, but shader layers can be whatever you want them to be. Let's look at a few more examples.

Aurora Borealis.

In this example, repeating the background gradient and reducing the vertical component of background-size causes the light effect to move more quickly across the screen. When masked out with a specular map, this creates the illusion of an aurora rippling over the main image. The blown-out highlights of color-dodge create the wrong effect here, so swapping our .specular layer to mix-blend-mode: screen maintains the sharp definition of the aurora.

Light-leak.

Until now the examples have all used a greyscale specular map, but a full-colour specular map can introduce new effects. In this example the mask image is created from an inverted and blurred version of the primary image, with a blue-red tone overlaid on top. When it's all layered together with a hot red-orange gradient, the resulting blend of colours looks a little like the light-leaks you get on vintage film cameras.

Hologram.

Layering within the mask opens up even more possibilities. What would happen if we added another layer with background-attachment: fixed?

In this final example the mask layer has an SVG background image, plus another black-white gradient running at the opposite angle to the specular gradient in the .specular layer. Setting the nested mask layer to color-burn blows out the definition of the SVG and you get this sweet two-way holographic look. CSS is awesome.

I've said it before, but it bears repeating: modern CSS is such an amazing tool to work with and I'm consistently impressed with the level of fidelity you can achieve. That said, it's time to address the potential elephant in the room. Depending on the device or browser you've used to read this article, your scroll performance might be on the floor and your fans may have kicked up a notch.

At the time of writing, blend modes in browsers are still pretty resource heavy. For more complex effects, with several layers of compositing, you will see a real performance hit. Add any CSS animations or transitions to the mix and it will really tank - particularly in Safari. After a bit of tweaking I eventually managed to claw back a bit of a performance boost by using backface-visibility: hidden, but my initial impulse was to force GPU rendering by using the trusty transform: translateZ(0); hack. Sadly, adding a transform revealed another quirk to be aware of.

Due to the reliance on background-attachment: fixed, applying CSS transforms to the shader can cause some strange side-effects. In Chrome it broadly works, but the gradients may appear offset depending on the transforms applied. Firefox, on the other hand, will simply ignore the fixed positioning and your gradients will appear completely static. I'm sure there are ways and means around that, but they'd likely be testing the spirit of doing all of this without using JS.

All-in-all this was really fun to explore. Sure, we can't get quite the same level of fidelity as GLSL, but for simpler effects this technique is a great alternative to introducing extra libraries to your project. As great as these effects can look though, I think that for now this is very much a case of: just because you can, doesn't mean you should.

Until CSS filters and blend modes become more performant (or until browsers allow linking GLSL filters directly from CSS 🤞) restraint and subtlety might be the best way to go.

The demos in this article use a collection of images or composited textures made available by the Unsplash photographers below. Please go check out their beautiful work.
