
View Transitions Applied: More performant ::view-transition-group(*) animations


If the dimensions of the ::view-transition-group(*) don’t change between the old and new snapshot, you can optimize its keyframes so that the pseudo-element animates on the compositor.

~

🌟 This post is about View Transitions. If you are not familiar with the basics of it, check out this 30-min talk of mine to get up to speed.

~

With View Transitions, the ::view-transition-group() pseudos are the ones that move around on the screen and whose dimensions get adjusted as part of the View Transition. You can see this in the following visualization when hovering the browser window:

See the Pen “View Transition Pseudos Visualized (2)” by Bramus (@bramus) on CodePen.

The keyframes to achieve this animation are automatically generated by the browser, as detailed in step 3.9.5 of the setup transition pseudo-elements algorithm.

Set capturedElement’s group keyframes to a new CSSKeyframesRule representing the following CSS, and append it to document’s dynamic view transition style sheet:

@keyframes -ua-view-transition-group-anim-transitionName {
  from {
    transform: transform;
    width: width;
    height: height;
    backdrop-filter: backdropFilter;
  }
}

Note: There are no to keyframes because the relevant ::view-transition-group has styles applied to it. These will be used as the to values.

~

While this all works, there is one problem with it: the width and height properties are always included in those keyframes, even when the size of the group does not change from its start to its end position. Because the width and height properties are present in the keyframes, the resulting animation runs on the main thread, which is typically something you want to avoid.

Having UAs omit the width and height from those keyframes when they don’t change could allow the animation to run on the compositor, but OTOH that would break the predictability of things. If you were to rely on those keyframes to extract size information and the info was not there, your code would break.

TEASER: Some of the engineers on the Blink team have explored a path in which width and height animations would be allowed to run on the compositor under certain strict conditions. One of those conditions being that the values don’t change between start and end. The feature, however, has only been exploratory so far and at the time of writing there is no intention to dig deeper into it because of other priorities.

~

In a previous post I shared how you can get the old and new positions of a transitioned element yourself. This is done by calling getBoundingClientRect() before and after the snapshotting process.

const rectBefore = document.querySelector('.box').getBoundingClientRect();
const t = document.startViewTransition(updateTheDOMSomehow);

await t.ready;
const rectAfter = document.querySelector('.box').getBoundingClientRect();

With this information available, you can calculate the delta between those positions and create your own FLIP keyframes that use translate to move the group around, for the case where the old and new dimensions (width and height) are the same.

const flip = [
	`${(rectBefore.left - rectAfter.left)}px ${(rectBefore.top - rectAfter.top)}px`,
	`0px 0px`,
];

const flipKeyframes = {
	translate: flip,
	easing: "ease",
};

The generated keyframes can then be set on the group by sniffing out the relevant animation and updating the effect’s keyframes.

const boxGroupAnimation = document.getAnimations().find((anim) => {
	// The group animation targets the root element through a pseudo-element selector
	return anim.effect.target === document.documentElement &&
		anim.effect.pseudoElement === '::view-transition-group(box)';
});

boxGroupAnimation.effect.setKeyframes(flipKeyframes);

Because the new keyframes don’t include the width and height properties, these animations can now run on the compositor.

In the following demo (standalone version here) this technique is used.

See the Pen “Better performing View Transition Animations, attempt #2, simplified” by Bramus (@bramus) on CodePen.

(Instructions: click the document to trigger a change on the page)

When the width and height are different, you could calculate a scale transform to apply instead. I’ll leave that up to you, dear reader, as an exercise.
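A rough starting point for that exercise (my own sketch, not code from the original demo) could look like this – it reuses the rectBefore and rectAfter values captured earlier and expresses the whole FLIP as a transform so that it stays compositor-friendly:

const dx = rectBefore.left - rectAfter.left;
const dy = rectBefore.top - rectAfter.top;
const sx = rectBefore.width / rectAfter.width;
const sy = rectBefore.height / rectAfter.height;

const flipScaleKeyframes = {
	// Start at the old position and size, end in the new (identity) state
	transform: [
		`translate(${dx}px, ${dy}px) scale(${sx}, ${sy})`,
		'translate(0px, 0px) scale(1, 1)',
	],
	transformOrigin: ['0 0', '0 0'],
	easing: 'ease',
};

Keep in mind that scaling the group also scales its snapshot contents, so for big size changes the browser-generated width/height animation may still look better visually.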

~

In the following demo the default generated animation and my FLIP-hijack version are shown side-by-side so that you can compare how both perform.

See the Pen “Regular and Better performing View Transition Animations side-by-side” by Bramus (@bramus) on CodePen.

Especially on mobile devices the results are remarkable.

~


Don't be Frupid


Frupidity: The Silent Killer of Productivity and Innovation

Frugality is a virtue. The art of doing more with less, making sharp trade-offs, and keeping waste at bay so the good stuff – innovation, growth, maybe even a little joy – has room to thrive. Any engineer worth their salt knows the power of an elegant, efficient solution. A few well-placed optimizations can turn a sluggish system into a rocket.

But frugality has a dark twin – a reckless, shortsighted impostor that mistakes cost-cutting for efficiency and penny-pinching for wisdom. Enter frupidity, or stupid frugality – the obsessive drive to save money in ways that ultimately cost far more in lost productivity, morale, and sanity. It’s the engineering equivalent of “optimizing” a car by removing the brakes to improve gas mileage.

The Many Faces of Frupidity in Engineering

Frupidity thrives in large orgs, where budgets are tight, bureaucracies dense, and someone, somewhere, is always trying to impress a spreadsheet. The best part? It usually masquerades as good stewardship.

Tool Penny-Pinching

“Why are we spending $15 a month per seat on this tool?” a manager asks. A fair question – except no one factors in that without it, engineers will burn hundreds of hours manually wrestling with tasks that a good automation could have handled in minutes. Multiply that by a hundred devs, and suddenly that “savings” is bleeding the company dry.

Hardware Stinginess

“No reason to buy high-end laptops when these entry-level machines get the job done.” Sure – if your definition of ‘getting the job done’ includes waiting five minutes for each build to compile. Those little delays don’t show up in the budget, but they pile up in engineers’ heads like a slow poison, turning momentum into molasses.

Infrastructure Sabotage

Cutting cloud costs by downgrading instances or consolidating databases into a single underpowered behemoth? Brilliant. Until query performance plummets and every engineer waits an extra five seconds per request, day in, day out. That’s not saving – it’s death by a thousand cuts.

Travel Masochism

Slashing travel budgets? Great idea – until engineers start taking three-hop flights with overnight layovers instead of direct ones. Productivity nosedives. Morale craters. Someone finally realizes the cost of these savings is burning more money than the travel budget ever did.

Conference Austerity

Conferences get nuked because someone upstairs sees them as a “nice to have.” The irony? That conference could’ve been where your engineers learned about a new technique that would’ve saved you a million bucks in infrastructure costs. Instead, they’re stuck reinventing the wheel – badly.

The True Cost of Frupidity

The worst thing about frupidity? It doesn’t look like a single, catastrophic failure. It creeps in quietly, like rust on an old bridge. The slow grind of waiting for a test suite to run because someone thought dedicated CI/CD machines were too expensive. The lost momentum of a brilliant engineer who spends half his time wrestling with red tape instead of building something great. The death of a thousand paper cuts.

And because it doesn’t feel like an immediate disaster, leadership rarely notices – until it’s too late.

Case Study

Take a company I once worked with – let’s call them PennyTech. Picture a bland office park in the outer ring of a mid-sized city, where the air is thick with the dull hum of fluorescent lights and motivational posters. PennyTech had a frupidity problem so bad, it felt like a social experiment in suffering.

They refused to pay for the professional version of a critical SaaS tool because the free tier technically worked. Never mind that it came with rate limits that forced engineers to stagger their work, or that it lacked automation features that would have streamlined half the team’s workflow. Still, the Powers That Be declared: We do not pay for what we can get for free.

Then one day, some unsuspecting soul opened a spreadsheet, probably out of boredom or because they’d read too many corporate best-practice blogs. Turns out, that “free” tier had devoured over 500 hours of productivity in a single year.

It’s not that PennyTech lacked intelligence; it’s that they had somehow misplaced it behind a locked budget door, believing they could outfox mathematics with good intentions and a healthy dose of denial. If there’s a moral here, it’s that sometimes the most expensive thing you can buy is the illusion of getting something for nothing.

The Frupidity Playbook

Want to maximize frupidity in your company and tank your engineering org? Let’s go:

  1. Give engineers the cheapest laptops money can buy. Who needs fast compile times when they can take a coffee break between builds?
  2. Ban taxis. If employees aren’t suffering through public transport, are they even working hard enough?
  3. Buy the worst coffee available. Bonus points if it comes in a bucket labeled Instant Beverage, Brown, Powdered.
  4. Make travel as painful as possible. If it’s not a three-hop flight with an overnight layover, you’re just throwing money away.
  5. Cancel all training and conferences. If devs really want to learn, they’ll figure it out in their spare time.
  6. Consolidate databases onto one overloaded server. Nothing screams optimization like a query that takes ten minutes to run.
  7. Mandate approval processes for everything. Need a second monitor? That’s a three-step approval process with a 90-day SLA.
  8. Measure success in savings, not productivity. If your engineers are miserable but the budget looks good, mission accomplished.

Go forth and squeeze those pennies! Just don’t be surprised when your best people walk out the door – probably straight into a taxi you wouldn’t reimburse.

Fighting Frupidity Before It Kills Your Org

The antidote to frupidity isn’t reckless spending. It’s smart spending. It’s understanding that good engineering isn’t about cutting corners – it’s about knowing which investments will pay off in speed, efficiency, and sanity.

Here’s how to fight it:

  1. Treat engineering time like the scarce resource it is. Saving a few bucks on tools or infrastructure is meaningless if it costs engineers hours of wasted effort.
  2. Track hidden costs, not just line items. Look beyond the immediate savings – what’s the real cost of this decision in lost productivity?
  3. If it’s a tool for engineers, let engineers decide. Frupidity often comes from non-technical managers making technical decisions.
  4. Fix problems, don’t work around them. A small investment in the right place can remove an entire category of pain.
  5. Make it personal. Would you be willing to suffer through the inconvenience of a frupid decision? If not, don’t expect your team to.

Bureaucracy: Frupidity’s Best Friend

Bureaucracy and frupidity feed off each other. Bureaucracy clogs the gears, frupidity makes sure no one oils them. Process takes priority over results, and before long, efficiency is just a memory.

At its core, bureaucracy exists to create consistency, reduce risk, and ensure accountability. Sounds reasonable. But large orgs don’t just create a little process to keep things smooth; they create layers upon layers of rules, approvals, and oversight, each one designed to solve a specific problem without ever considering the full picture.

The result? No one is empowered to make simple, logical decisions. Instead, every request gets funneled through multiple approval steps, each gatekeeper incentivized to say “no” because “yes” means taking responsibility. Need a new tool? Fill out a form. Need a monitor? Get approval from finance. Need to travel? Justify the expense to three managers who have no context on why it matters.

And here’s the real kicker: bureaucracies love small, quantifiable savings but hate measuring intangible costs. A finance department can easily see that cutting taxi reimbursements saves $50 per trip. What they can’t see is that forcing employees to take three-hour public transit journeys leads to exhaustion, frustration, and bad decision-making that costs thousands in lost productivity. Since no one is directly accountable for those hidden costs, they don’t count.

Bureaucracy also breeds fear. The safest move in a bureaucratic system is always the one that involves the least immediate risk – so people default to defensive decision-making. It’s safer to deny an expense than approve it. It’s safer to force engineers to “make do” with slow tools than to spend money on something that might not show instant ROI. Every layer of approval adds more hesitation, more process, more distance from common sense.

And because bureaucracies are designed to be self-sustaining, they never get smaller. Instead, they metastasize. Every new problem gets a new rule. Every new rule adds friction. Every added friction creates more inefficiency. Until one day, the company realizes it’s spending more time on approvals than on actual work.

By then, the best employees have left. The ones who remain are either too exhausted to fight or have figured out that the real skill in a bureaucratic org isn’t building great things – it’s navigating the system. And that’s how frupidity wins.

Conclusion: Smart Frugality is the Real Efficiency

Frupidity isn’t just an engineering problem – it infects every corner of a company when the culture values cheapness over wisdom. The best companies don’t just cut costs – they invest in the right places.

The final test for frupidity? Would you make the same trade-off if it was your own time, your own money, your own problem to solve? If the answer is no, it’s not frugality.

It’s just plain frupid.


YouTube audio quality



YouTube Audio Quality - How Good Does it Get?


YouTube is clearly used by a very large number of people. In general, they will be interested in watching videos of various types of content. One specific purpose is that it gets used to distribute, and make people aware of, audio recordings. A conversation on the “Pink Fish” webforum devoted to music and audio set me wondering about the technical quality of the audio that is on offer from YouTube (YT) videos. The specific comment which drew my attention was a claim that the ‘opus’ audio codec gives better results than the ’aac / mp4’ alternative. So I decided to investigate...

Ideally to assess this requires a copy of what was uploaded to YT as a ‘source’ version which can then be compared with what YT then make available as output. By coincidence I had also quite recently joined the Ralph Vaughan-Williams Society (RVWSoc). They have been putting videos up onto YT which provide excerpts of the recordings they sell on Audio CDs. These proved excellent ‘tasters’ for anyone who wants to know what they are recording and releasing. And when I asked, they kindly provided me with some examples to help me investigate this issue of YT audio quality and codec choice.

For the sake of simplicity I’ll ignore the video aspect of this entirely and only discuss the audio side. The RVWSoc kindly let me have ‘source uploaded’ copies of two examples. The choice of audio formats that YT offered for these videos are as follows:


Pan’s Anniversary
Available audio formats for HZHVTr1w6L8:
ID       EXT  | ACODEC     ABR  ASR    MORE INFO
------------------------------------------------------------
139-dash m4a  | mp4a.40.5  49k 22050Hz DASH audio, m4a_dash
140-dash m4a  | mp4a.40.2 130k 44100Hz DASH audio, m4a_dash
251-dash webm | opus      153k 48000Hz DASH audio, webm_dash
139      m4a  | mp4a.40.5  48k 22050Hz low, m4a_dash
140      m4a  | mp4a.40.2 129k 44100Hz medium, m4a_dash
251      webm | opus      135k 48000Hz medium, webm_dash

VW on Brass
Available audio formats for KsILRbZtTwc:
ID       EXT  | ACODEC     ABR  ASR    MORE INFO
------------------------------------------------------------
139-dash m4a  | mp4a.40.5  50k 22050Hz DASH audio, m4a_dash
140-dash m4a  | mp4a.40.2 130k 44100Hz DASH audio, m4a_dash
251-dash webm | opus      149k 48000Hz DASH audio, webm_dash
139      m4a  | mp4a.40.5  48k 22050Hz low, m4a_dash
140      m4a  | mp4a.40.2 129k 44100Hz medium, m4a_dash
251      webm | opus      136k 48000Hz medium, webm_dash

The ‘ID’ numbers refer to the audio streams. Other ID values would produce a video of a user-chosen resolution, etc., normally accompanied by one of the above types of audio stream.

One aspect of this stands out immediately. This is the variety of audio sample rate (ASR) options on offer. However in each case only one version of a RVWSoc video was uploaded, at a sample rate chosen by the RVWSoc. I had expected to see a choice of YT output audio codecs (compression systems), but was quite surprised, in particular, to see ASRs as low as 22·05k on offer as that means the YT audio would then have a frequency range that only extends to about 10kHz!

Given that the main interest here is in determining which choice may deliver the highest YT output audio quality, I decided that the analysis should focus on the higher, more conventional rates – 48k and 44k1. In addition, the above shows that – since in each case only one source file (and hence only one sample rate) was uploaded – some of the above YT output versions at 48k or 44k1 have also been through a sample rate conversion in addition to a codec conversion. That introduces another factor that may degrade sound quality! In this case I had copies of what had been uploaded, so could determine which of the YT output versions had been through such a rate conversion. For the present examination, I will therefore limit the investigation to the YT output that maintains the same ASR as the source that was uploaded, so as to dodge this added conversion. Unfortunately, in general YT users probably won’t know which output version may have dodged this particular potential bullet! Choosing the least-tampered-with output may therefore be a challenge in practice for YT users.

Pan’s Anniversary [48k ASR 194 kb/s aac(LC) source audio]

I’ll begin the detailed comparisons with the video of an excerpt from the CD titled “Pan’s Anniversary”. The version uploaded contains the audio encoded with the aac(LC) codec at a bitrate of 194 kb/s and a sample rate of 48k. The audio lasts 4 mins 54·88 sec. The table below compares this with the same aspects of the high ABR versions offered by YT.


version   codec     ASR    bitrate (kb/s)   duration (m:s)
source    aac(LC)   48k    194              4:54·88
YT-140    aac(LC)   44k1   127 (fltp)       4:54·94
YT-251    opus      48k    135 (fltp)       4:54·90

We can see that YT-140 uses the same codec as the source, but alters the information bitrate, and the ASR. YT-251 transcodes the input aac(LC) into the opus codec, but doesn’t alter the sample rate. Both of the YT versions are of longer duration than the source uploaded. By loading the files into Audacity and examining the waveforms by eye it became clear that the YT versions were not time-aligned with each other, or with the source.

To avoid any changes caused by alteration of the ASR I decided to concentrate on comparing the source version with YT-251 – i.e. where the output uses the opus codec, not aac, but maintains the source sample rate. Having chosen matching sample rates, the simplest and easiest way to check how similar two versions are is to time-align them and then subtract, sample by sample, each sample in one version from the nominally ‘same instant’ sample in the other. If the patterns are the same, the result is a series of zero-valued samples. If they don’t match we get a ‘difference’ pattern. However, before we can do this we have to determine the correct time-offset to align the sample sequences of the two versions.

In some cases that can be fairly obvious from looking at the sample patterns using a program like Audacity. But in other cases this is hard to see with enough clarity to determine with complete precision. Fortunately, we can use a mathematical method known as cross-correlation to show us the time alignment of similar waveform patterns. (See https://en.wikipedia.org/wiki/Cross-correlation if you want to know more about cross correlation.) This also can show us where the best alignment may occur in terms of any offset between the two patterns of samples being cross correlated.


[Figure: Fig1.png]

The above graph shows the result of cross correlating a section of the source and YT-251 versions of the audio. (The red and blue lines show the Left and Right channels of the stereo.) The results cover offsets over a range of +/- 800 samples. The process used 180,000 successive sample pairs from each set of samples. i.e. about 3·75 sec of audio from each. The best alignment is indicated by the location of the largest peak. If the sample sequences were already aligned this would happen at an offset = 0. However here we can see that the YT-251 version is ‘late’ by just over 300 samples.
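For anyone who wants to try this themselves, a brute-force version of that search might look like the following sketch (my own JavaScript illustration, not the analysis scripts actually used here):

// Find the offset (in samples) at which two mono buffers line up best.
// a and b are Float32Arrays of samples; maxLag limits the search range (here +/- 800).
function bestLag(a, b, maxLag = 800) {
  let bestOffset = 0;
  let bestScore = -Infinity;
  for (let lag = -maxLag; lag <= maxLag; lag++) {
    let sum = 0;
    for (let i = 0; i < a.length; i++) {
      const j = i + lag;
      if (j >= 0 && j < b.length) sum += a[i] * b[j];
    }
    if (sum > bestScore) {
      bestScore = sum;
      bestOffset = lag;
    }
  }
  return bestOffset; // a positive result means b is ‘late’ relative to a by that many samples
}

(A real implementation would normally do this via FFTs rather than the O(N × lags) loops above, but the idea is the same.)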

[Figure: Fig2.png]


Zooming in, we can see that the peak is at an offset of -312 samples, which at a 48k sample rate corresponds to YT-251 being 6·5 milliseconds late. Having determined this I could trim 312 samples from the start of each channel of YT-251, and this aligned the two series of samples. (I also then had to trim the end to make them of equal length.) Once this was done it becomes possible to run through the samples and take a sample-by-sample difference between the source version and YT-251. This difference set then details how the YT-251 output differs from the source version.
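As a concrete (if simplified) illustration of that step – again my own sketch rather than the actual analysis code – the trim-and-subtract can be written as:

// Trim the ‘late’ buffer by `lag` samples, then subtract sample by sample.
function differenceSignal(source, output, lag) {
  const aligned = output.subarray(lag);               // drop the leading offset samples
  const n = Math.min(source.length, aligned.length);  // make the lengths equal
  const diff = new Float32Array(n);
  for (let i = 0; i < n; i++) diff[i] = aligned[i] - source[i];
  return diff; // all zeros would mean the YT output is identical to the source
}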


[Figure: Fig3.png]

The graph above shows how the rms audio power level varies with time. The red and blue lines show the levels in the Pan’s Anniversary source file. The grey and purple lines show the power levels versus time obtained by subtracting the source file sample values from the audio samples of YT-251. Ideally, we’d wish to find a subtraction like this producing a series of zeros as the difference samples, because such a result would tell us that the output was identical to the input and going via YT had not changed the audio at all! However in practice the above results show this clearly is not the case! There is a residual ‘error’ which is typically somewhere around 20dB below the input (and output) musical level.

In traditional terms for audio, 20dB would be regarded as a very poor signal/noise ratio (a level 20dB below the signal corresponds to an amplitude ratio of 10^(−20/20) = 0·1). And if the change was considered as being equivalent to conventional distortion it would be taken to indicate a level of around 10% distortion! So it represents a rather underwhelming result. However a more benign interpretation may be that it arises as a result of the process applied by YT slightly altering the overall amplitudes of the waveforms so they don’t quite match in size – hence leaving a non-zero difference when the input and output are subtracted. With this in mind we can compare the input and output samples using other methods that aren’t sensitive to an overall change in signal pattern levels.


[Figure: Fig4.png]

The above graph compares the input and output files in terms of Crest Factor. This measures the peak-to-rms power level difference of the waveform shapes defined by the series of samples. The red/blue lines show the Left and Right channel results for the source file sent to YT; the orange/green lines show the equivalent results for YT-251. To obtain these results each set of samples was divided up into a series of 0·1 sec sections. The peak and rms power level of each was calculated, and the above shows how often a given value was obtained, grouped into 1dB wide statistical ‘bins’. For a pure sinewave the peak/rms crest factor is 3dB, i.e. the peak level of a sinewave is 3dB larger than its rms power. For well recorded music from acoustic instruments the Crest Factor tends to range from a few dB up to well over 10dB for the most ‘spiky’ waveforms.
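To make that definition concrete, a sketch of the per-section calculation (my own JavaScript illustration, not the tooling used for the graphs) is:

// Crest Factor (peak vs rms level, in dB) of one 0·1 sec block of samples.
function crestFactorDb(block) {
  let peak = 0;
  let sumSquares = 0;
  for (const s of block) {
    const a = Math.abs(s);
    if (a > peak) peak = a;
    sumSquares += s * s;
  }
  const rms = Math.sqrt(sumSquares / block.length);
  return 20 * Math.log10(peak / rms); // 3dB for a pure sinewave
}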

The result is interesting as we can see that the YT-251 output clearly exhibits a different Crest Factor distribution to the source file. It seems doubtful this could be produced by a simple change in the overall signal pattern level. (e.g. a volume control does change the overall level, but it should not change the shape of the audio waveform, and hence should leave the Crest Factor unaltered. If it did alter this, you’d be advised to replace the control with one that worked properly!) It is particularly curious that the Crest Factor seems to be increased by having the audio pass through the YT processing. Possibly this arises from an input which is aac(LC) coded being transcoded into ‘opus’ codec form. OTOH perhaps YT apply some form of ‘tarting up’ to make audio ‘sound better’...

[Figure: Fig5.png]


A more familiar way to show the character of an audio recording is to plot its spectrum. The above graph shows the spectrum of the Pan ‘source’ file and of the series of samples obtained by subtracting the YT-251 output sample series. We can then say that – at any given frequency – the bigger the gap between these plotted lines, the closer the YT-251 output is to the source supplied to YT. Looking at the graph we can see that the faithfulness of the YT-251 result to the input is at its best at low frequencies, where the gap is widest and the contributions to the overall signal level are greatest. However at higher frequencies the level of the error becomes a larger fraction of the input, and above about 16kHz the error level is quite similar to the input level. The implication is that the original details in the source above this frequency have essentially been lost. They may have been replaced by something generated to ‘fake’ this lost information in a plausible manner.


[Figure: Fig6.png]

In fact, if we compare the spectra of the input with those of the output we can see that the YT-251 output using the opus codec has essentially removed anything from the source that was above 20 kHz. This is replaced by an HF ‘noise floor’ produced via a process like dithering, probably employed to suppress quantisation distortion, etc. We can also see that the source, although at a 48k sample rate, has a sharp cutoff at just over 22 kHz. This indicates that although what was submitted to YT was at a 48k sample rate, it was actually generated from a 44k1 (i.e. audio CD rate) version. The behaviour of the above spectra may well be another sign of the changes that also produced the change in the typical Crest Factor distribution.

Having applied the above analysis to an example that produced a YT output using the opus codec we can now examine another example which uses aac for the output to compare that with the use of opus.

Vaughan-Williams on Brass [44k1 ASR 127 kb/s aac(LC)]

In this case the chosen source file had aac audio content at an ASR of 44k1. This was chosen to match the sample rate of YT-140 and thus avoid any sample rate conversion effects. As with the previous “Pan’s Anniversary” example the YT-140 output was of a different duration to the source file, and not sample-aligned. So as before, I edited the output samples to trim them to a sample-aligned series that matched the timing of the source version.


[Figure: Fig7.png]

The graph above shows the time-averaged spectra of the source file and of the difference between this and the YT-140 output; this difference represents the residual error level.


[Figure: Fig8.png]

As with the earlier YT-251 example, the above shows the source signal level versus time, and the level versus time of the sample-by-sample difference between the source and the YT output (now YT-140). In this case the error level seems to be around -30 to -35dB compared with the source, which is an improvement on the earlier YT-251 example. Viewed as distortion it would be equivalent to a level of around 3% or less.

More broadly speaking, the results look similar to the previous example. However...

[Figure: Fig9.png]

...when we compare the actual spectra of the input and output we find that the output now has its high-frequency range cropped off at just under 16kHz! This is distinctly lower than the range present in the 44k1 sample rate aac input submitted to YT.

It is also worth noting that although the YT-251 example’s output spectrum seems to extend to around 20kHz, we found that the part above about 15kHz looked like being ‘all error’ – i.e. not the source HF details but some sort of facsimile!

Overall therefore, we might judge the YT-140 example to be, technically, more accurate than the YT-251 example. But to tell if this result is a general one we’d need to examine more examples. And given the changes in the applied high-frequency cut-off, we might also find that allowing the sample rate to be altered might, overall, yield a different outcome! So the above results are interesting, but raise further questions which deserve future investigation.

BBC iPlayer as a comparison [48k ASR 320 kb/s aac]

In 2017 the BBC experimented with streaming BBC Radio 3 using flac in parallel with their standard iPlayer output formats. They did this for two reasons. One was to assess the practical requirements they would need to support if it became a standard. The other was to investigate if it produced audibly ‘better’ results. Some programmes were parallel streamed in the days leading up to the Proms of that year, and almost all of the Proms were also streamed in flac format as well as their usual 320k aac format. As it wasn’t a formal service some special arrangements were needed for someone to receive the flac stream version. However, given some advance notice, I was able to use a modified version of the ffmpeg utility to capture examples of the flac stream for comparison with the aac. Having looked at the above YT examples it may be useful to compare them with the results from a BBC example.

Since the flac stream essentially represents the LPCM input to the BBC’s aac encoder, it can serve as an example of what is possible if we compare and contrast it with the above results of examining the YT output. For the sake of illustration I’ve chosen a section of a “Record Review” programme on R3.


[Figure: Fig10.png]

The above shows the time averaged spectra for a 5-min section of the program whilst music was being played. Here the source is taken from the flac data, and the output is taken from the 320k aac. These were then sample time-aligned so it was also possible to generate a sample-by-sample difference set of values and plot the spectrum of the residual ‘error’ pattern. i.e. the above can now be compared with the previous spectra for the YT examples. When this is done we can see that the error residual is somewhat lower relative to the source and output than in the YT examples – particularly at and above a few kHz. The output also effectively extends to higher frequencies than the YT examples. In addition we can see that the error level remains low over a wider range of frequencies than the YT examples.


[Figure: Fig11.png]

The above plots the audio levels of the source versus time, and also the residual level of the difference (error) between the source and output for the same section of programme. Again, this is better than the results for YT. In this BBC example the difference error level is around 40dB below the signal level. That would still be nominally poor if it were a standard noise or distortion figure – it would look like a distortion level of around 1% – but it is better than the YT examples, as, of course, we’d hope given the higher bitrate (320k) employed by the BBC! (The BBC eventually decided that flac streaming wasn’t required, although some audiophiles have disagreed with them and regret the decision to stay with 320k aac.)

More evidence would be needed to be confident in drawing reliable general conclusions. It is also worth adding that the BBC now limit the audio for their video streams to a maximum of just 128k aac, as they judge that to be fine for video, even of musical events. Despite streaming the Proms via R3 at 320k, their TV video streams of the same events are limited to 128k aac. So there does seem to be an assumption that video somehow reduces the need for more accurate audio. That may be so, but the above examination does perhaps serve as a pointer to the possibility that the audio quality of YT’s output is being limited, from an audiophile’s POV, by their offered choice of bitrates/codecs. So this general area may be worth further examination...

J. C. G. Lesurf  
24th July 2022
  

  I’d like to thank the RVW Society for kindly providing me with source copies of files for the purpose of the comparisons. You can find out more about the Society and their CD releases on the Albion label here: The Ralph Vaughan Williams Society.






The modern way to write JavaScript servers


If you've visited Node's homepage at some point in your life you have probably seen this snippet:

import { createServer } from "node:http";

const server = createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("Hello World!\n");
});

// starts a simple http server locally on port 3000
server.listen(3000, "127.0.0.1", () => {
  console.log("Listening on 127.0.0.1:3000");
});

It shows how to create a plain web server and respond with plain text. This is the API Node became famous for, and it was kinda revolutionary when it came out. These days this call might be abstracted away behind frameworks, but under the hood, this is what they're all doing.

It's nice, it works and a whole ecosystem has formed around that. However, it's not all roses. A pretty big downside of this API is that you have to bind a socket and actually spawn the server. For production usage this is perfectly fine, but quickly becomes a hassle when it comes to testing or interoperability. Sure, for testing we can abstract away all that boilerplate like supertest does, but even so it cannot circumvent having to bind to an actual socket.

// Example supertest usage
request(app)
  .get("/user")
  .expect(200)
  .end(function (err, res) {
    if (err) throw err;
  });

What if there was a better API that doesn't need to bind to a socket?

Request, Response and Fetch

Enter the modern times with the fetch-API that you might be already familiar with from browsers.

const response = await fetch("https://example.com");
const html = await response.text();

The fetch() call returns a bog-standard JavaScript class instance (a Response). But the kicker is that the same is true for the request! So if we zoom out a little, this API allows every server to be expressed as a function that takes a Request instance and returns a Response.

type MyApp = (req: Request) => Promise<Response>;

Which in turn means you don't need to bind sockets anymore! You can literally just create a new Request object and call your app with that. Let's see what that looks like. This is our app (I know it's a boring one!):

const myApp = async (req: Request) => {
  return new Response("Hello world!");
};

...and this is how we can test it:

// No fetch testing library needed
const response = await myApp(new Request("http://localhost/"));
const text = await response.text();

expect(text).toEqual("Hello world!");

There are no sockets involved or anything like that. We don't even need an additional library. It's just plain old boring JavaScript. You create an object and pass it to a function. Nothing more, nothing less. So, how much overhead does binding to sockets end up being?

Benchmark              time/iter (avg)   iter/s
Node-API               806.5 µs          1,240
Request/Response-API   3.0 µs            329,300

Well, it turns out quite a bit! This benchmark might make it look like the differences are minuscule, but when, say, you have a test suite that spins up this server a thousand times, the differences become much more stark:

Spawn 1000x            time      speedup
Node-API               1.531 s   -
Request/Response-API   5.29 ms   289x faster

Going from 1.5s to 5.2ms, which is practically instant, made it much more enjoyable to work on the tests.

How do I launch my server?

Now, to be fair, so far we haven't launched the server yet. And that's exactly the beauty of this API, because we didn't need to! The exact API depends on the runtime in use, but usually it's nothing more than a few lines. In Deno it looks like this, for example:

Deno.serve(req => new Response("Hello world!"));

What's way more important than this though is that the WinterTC group (formerly known as WinterCG) has standardized exporting your app function directly. This means that it's much easier to run your code in different runtimes without changing your code (at least the server handling stuff).

export default {
  fetch() {
    return new Response("hello world");
  },
};

Conclusion

This API works everywhere today! The only exception here is Node: although they're part of WinterTC, they haven't shipped this yet. But with a little good ol' polyfilling you can teach it the modern way to build servers, as sketched below. Once Node supports this natively, a bunch of tooling for frameworks will become easier.
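To give an idea of what such a shim involves, here is a minimal sketch of the translation (my own illustration, assuming Node 18+ where the Request and Response globals exist – not a production-ready polyfill):

import { createServer } from "node:http";

// Adapt a fetch-style handler (Request => Response) to node:http.
function serve(handler, port = 3000) {
  const server = createServer(async (req, res) => {
    // Collect the incoming body (if any) into a Buffer.
    const chunks = [];
    for await (const chunk of req) chunks.push(chunk);
    const body = chunks.length ? Buffer.concat(chunks) : undefined;

    // Build a standard Request from Node's IncomingMessage.
    const request = new Request(`http://${req.headers.host ?? "localhost"}${req.url}`, {
      method: req.method,
      headers: Object.fromEntries(
        Object.entries(req.headers).filter(([, value]) => typeof value === "string")
      ),
      body,
    });

    // Call the app and translate its Response back into node:http terms.
    const response = await handler(request);
    res.writeHead(response.status, Object.fromEntries(response.headers));
    res.end(Buffer.from(await response.arrayBuffer()));
  });
  server.listen(port);
  return server;
}

// Usage: the same fetch-style app as before.
serve((req) => new Response("Hello world!"), 3000);

Real adapters also handle streaming bodies, duplicate headers and error cases, but the core of the translation is roughly this.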

Framework tooling, too, tends to try to turn every runtime into Node, which is a big task and causes lots of friction. That's the exact scenario I ran into when writing adapters for frameworks for Deno and prototyping new APIs.

Shoutout to SvelteKit which is one of the modern frameworks that nailed this aspect and made writing an adapter a breeze!


JavaScript Temporal is coming

A new way to handle dates and times is being added to JavaScript. Let's take a look at Temporal, what problems it solves, the current state, and what you'll find in the new documentation about it on MDN.
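As a quick taste of what the API looks like in practice, here is a minimal sketch (assuming a runtime or polyfill that exposes the Temporal global):

// Temporal values are immutable; arithmetic returns new instances.
const today = Temporal.Now.plainDateISO();   // the current date, e.g. 2025-01-15
const due = today.add({ days: 30 });         // a new Temporal.PlainDate, 30 days later
console.log(due.toString());                 // ISO 8601 string, e.g. "2025-02-14"
console.log(due.since(today).days);          // 30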


How to know when it's time to go


RICHARD: So you’re going to quit, just like that? How can you do that?
GILFOYLE: By saying the words “I” and “quit” in conjunction together, i.e. “I quit”.
JARED: Um… there’s actually some paperwork involved.
“Silicon Valley”

I don’t have to tell you things are bad. Everybody knows things are bad. When you hate your job, no matter how much you try to put up with it, there comes a point where you’re mad as hell and you’re just not going to take it anymore. So, maybe this is the right moment to reflect: is it time to go?

Making the decision to leave a job is never easy. And it’s a pretty drastic step, especially if you don’t yet have another job to go to. But sometimes it has to be done. Let’s look at some signs that might indicate it’s time to say the words “I” and “quit” in conjunction together.

The comfort trap

If you’re feeling miserable and finding your work unrewarding, you may find it relatively easy to flip the mental switch that says “time to leave”. But sometimes you can run into the opposite problem: getting too comfortable where you are.

You may have had the misfortune to become an expert on your subject area. Being an expert sounds great, and it does bring you high status. But you’re the king of a very small hill, and one that will eventually be washed away by rising sea levels. If this is the case, you need to migrate to higher ground before it’s too late. But many people will hesitate, fatally, because they don’t like the idea of getting their feet wet.

Alternatively, you may find yourself in a job that’s so easy you can basically do it on autopilot. This situation is much harder to leave, because it’s so pleasant to be in.

And maybe it is okay for you to stay here for the rest of your career—if that’s really what you want. Is it?

PETER: I generally come in at least fifteen minutes late, and after that I sorta space out for about an hour. I just stare at my desk, but it looks like I’m working.
I do that for probably another hour after lunch, too. I’d say in a given week I probably only do about fifteen minutes of real, actual, work.
“Office Space”

Ultimately, work that’s too easy is no fun, and it’s not the basis of a rewarding career. All you’re doing is selling time, and as you get a little older you’ll come to realise that time is a non-renewable resource.

It’s no good just selling your life; you won’t be able to buy it back. My book Code For Your Life is a guide to the alternatives: building a meaningful career, becoming a master of your craft, and maybe even starting a successful independent business. In this excerpt, let’s talk about signs that your career might be starting to stagnate, and whether it’s time to quit so you can get ahead.

Why is everyone around me getting dumber?

Even if you enjoy your work and find it stimulating at first, especially if you’re surrounded by lots of smart and skilled people, you may find that the higher you rise in the organisation, the less this is the case. If it seems like everyone around you is getting dumber, what’s going on?

One might expect that people at the higher levels of a company would be more competent than those below, but this usually turns out not to be true, because of the Peter Principle:

Although some people function competently, I observed others who had risen above their level of competence and were habitually bungling their jobs, frustrating their co-workers and eroding the efficiency of the organization.
My analysis of hundreds of cases of occupational incompetence led me on to formulate The Peter Principle: In a hierarchy every employee tends to rise to his level of incompetence.
—Laurence J. Peter, “The Peter Principle”

In other words, if you’re good enough at your job, you’ll be promoted to another job, and another, until you eventually reach a job that you can’t do well, at which point you’ll stay in it (possibly for the rest of your career).

Which explains a lot about some organisations, doesn’t it?

The good engineers are evaporating

Another reason you can find yourself adrift on a ship of fools is the “Dead Sea” effect:

The more talented and effective engineers are the ones most likely to leave—to evaporate, if you will. They are the ones least likely to put up with the frequent stupidities and workplace problems that plague large organisations; they are also the ones most likely to have other opportunities that they can readily move to.
What tends to remain behind is the ‘residue’—the least talented and effective engineers. They tend to be grateful they have a job and make fewer demands on management. They tend to entrench themselves, becoming maintenance experts on critical systems, so that the organization can’t afford to let them go.
—Bruce F. Webster, “The Wetware Crisis: the Dead Sea effect”

If you’re in a company like this, it’s not hard to stand out from the “residue”, and as a result you may be showered with promotions, fancy titles, and maybe even money. That sounds great, but there’s a hidden danger to watch out for.

Stranded by the tide

If you’re promoted too far, too soon, you may find that when you look for other jobs at the same level, you don’t really have the necessary skills for them.

For example, if you’ve already become a so-called “senior” developer at Company A, and then you apply for the same job at Company B, you may find that their definition of “senior” is rather different, and that you don’t meet it. You’re a victim of title inflation: the currency of “senior” has become devalued.

Hence there’s a tendency for you to stay at Company A, because who wants to take a step down in job grade and salary? If you find yourself surrounded by Principal and Distinguished Engineers and Architects who don’t seem to actually know anything useful, then they may be suffering from this kind of title inflation. Make sure you don’t become one of them.

The company won’t love you back

Companies like to tell themselves pleasant little stories about how they’re like a “family”, everyone is a valued team member, the company will look after them, and so on.

The truth is that, however benevolent its messaging, a company exists to make profits and increase its own value. If this happens to benefit the people who work there, too, that’s nice, but it’s not what the company is fundamentally about.

Indeed, when the interests of the staff and the company’s profits come into conflict, the profits will always win. Welcome to capitalism.

In particular, the HR department is not your friend. They’re not your enemy, necessarily, but they exist to protect the company from you, and not vice versa.

HR exist to represent the interests of the company and those interests always have a degree of divergence with employees. It pays to get informed about your rights, because your HR and Legal teams are not going to do that for you.
This isn’t to say your HR team are bad people. They’re almost certainly not. They’re just doing their job. But don’t forget what their job is, and it’s not to protect your interests, so make sure you have someone at the table who is.
—Colm Doyle, “Having Friends in HR Is Fine, but HR Is Not Your Friend”

When the company says wonderful things about how much it values you, don’t take them quite at face value: after all, they would say that, wouldn’t they? And when you’re laid off, don’t take that personally either.

The company simply doesn’t have any feelings about you one way or the other, and once you know that, everything else about the way it treats you starts to make perfect sense.

When it’s time to quit

When you finally make the decision to leave, whatever the reason, there’s a right and a wrong way to go about it.

The first thing to say is that your departure should not come as a surprise, at least to your team leader or line manager. This would embarrass them professionally—they’re supposed to know what’s going on with their reports—and there’s no need to do that. Indeed, you want their goodwill, ideally in the form of a glowing reference, so you should do everything you can to smooth this potentially difficult transition.

In particular, you should give your boss a chance to change your mind—or, at least, you should let them feel that they’ve had that chance. It’s no good nursing your private resentments for years, while telling your boss once a week that everything’s fine, and then suddenly walking out on them.

Instead, you should make sure that if you’re unhappy about something, your boss knows it pretty much as soon as you do.

Say no to exit interviews

Companies will sometimes ask you to take part in an “exit interview”, usually presented as an information-gathering exercise where you can give honest feedback about your employment, and why you’re leaving. Sometimes they’ll ask for suggestions that could help improve things for the employees who are staying. Sounds innocent enough, right?

It’s a trap. Don’t agree to an exit interview: it can’t benefit you, since you’re leaving anyway. In fact, it could even hurt you. Once you start talking, there’s a danger that you might say too much, venting your deepest frustrations and criticisms. That could harm your relationship with both the company and the people involved.

There are all sorts of ways that your former employers might retaliate. They might decide not to give a reference. They might say bad things about you at industry events. They could refuse to confirm your employment to a background investigator. They could call your new company and tell them you were fired for fraud.
These are all real stories: every single one of those hypotheticals is actually something I’ve seen happen.
—Jacob Kaplan-Moss, “Exit Interviews Are a Trap”

Instead, just politely decline the exit interview, or any other solicitation for feedback. They can’t make you answer any questions. If you feel you can’t refuse, then just give bland answers, like “nothing comes to mind”.

So, the doors close behind you, and a new world of possibility opens up ahead. After perhaps many years or even decades of doing what you’re told, you’ve reached the threshold of a new relationship with your work: independence. What now?

And we’ll talk about that in the next post. See you in a minute.
