
How to professionally say

  • You are overcomplicating this

    • Being mindful of timelines, let’s concentrate on the initial scope.

  • That meeting sounds like a waste of my time

    • I’m unable to add value to this meeting but I would be happy to review the minutes.

  • I told you so

    • As per my prediction, this outcome does not come as a surprise.

  • That sounds like a horrible idea

    • Are we confident that this is the best solution or are we still exploring alternatives?

  • I already told you this

    • As indicated previously.

    • The information has not changed since the last time it was communicated.

  • Can you answer all of the questions I asked and not just pick and choose one

    • Are you able to provide some clarity around the other questions previously asked?

  • Did you even read my email?

    • Reattaching my email to provide further clarity

    • I’ll provide an update when I have one

  • Stop bothering me

    • You have not heard from me because further information is not available at this time. Once I have an update, I’ll be sure to loop you in.

  • I don’t want to talk to you right now!

    • I am currently tied up with something but I will connect with you once I am free.

  • Do your job!

    • It is my understanding that you are the appropriate person to contact in regard to this, but if there is someone better equipped for this, please let me know.

  • That’s not my job

    • This falls outside of my responsibilities but I would be happy to connect you with someone who can help.

    • I’m not the correct person to assist with this but I am happy to connect you with <insert name> who will be able to help

  • Stop assigning me so many tasks if you want any of them to get done

    • As my workload is quite heavy, can you help me understand what I should reprioritize to accommodate this new task?

  • Answer my emails

    • If there’s a better way to get in contact with you please let me know as I am hoping to have this resolved as soon as possible.

  • This is not my problem

    • I recommend directing this issue to <Name> as they have the proper expertise to best assist you

  • If you would have read the whole email you’d know the answer to this

    • I have included my initial email below which contains all of the details you are looking for.

  • I have absolutely no idea what you are talking about

    • Can you help me better understand what exactly it is that you require on my end?

  • Stop micromanaging

    • I am confident in my ability to complete this project and will be sure to reach out, if or when I require your input.

  • Please hurry and get this done!

    • It is important that we have this completed in order to meet our targeted deadlines which are quickly approaching.

  • Stay in your own lane

    • Thank you for your input. I’ll keep that in mind as I move forward with decisions that fall within my responsibilities.

  • I’ve told you this multiple times

    • There seems to be a disconnect here as this information has already been provided.

  • I’m not doing your job for you

    • I do not have the capacity to take this on in addition to my own workload but I’m happy to support where it makes sense.

  • We do not need to have a meeting about this

    • Being respectful of everyone’s time, let’s discuss this through email until we have a more defined agenda.

    • Being respectful of everyone’s time, can we communicate about this via email moving forward?

  • Did you just take credit for my work?

    • It is great to see my ideas being exposed to a wider audience and I would have appreciated the opportunity to have been included in the delivery.

  • Google that yourself

    • The internet is a great resource for these types of questions and I am available to clarify elements that you are not able to find online.

  • What you are saying does not make sense

    • We seem to have a different understanding on this. Can you elaborate further on your thought process here?

  • I am not paid enough to do this

    • This falls out of my job description but if the opportunity for a role expansion becomes available I would be happy to discuss reworking my contract to better align with these new responsibilities.

  • I totally forgot about your email

    • Thank you for your patience.

  • I’m going to need a whole lot more information if you want me to do this

    • Please let me know when further details become available as I require more information to successfully complete this task.

  • Stop calling me before my workday even starts

    • If you need to contact me, please note that my working hours begin at 8 am and end at 6 pm. Communications received outside of these hours won’t be seen.

  • Check your inbox, I already sent this to you!

    • I previously sent you an email regarding that but please let me know if something went wrong in transit.

  • I couldn’t care less

    • I will defer to your judgment on this as I am not passionate either way and I trust your expertise.

  • I told you so and now this is your problem

    • I did previously note that this was a likely outcome. How do you plan to resolve this?

  • Stop trying to make me do your work!

    • I am not able to offer you additional support in completing your workload.

  • Try problem solving on your own before you come to me

    • I encourage you to brainstorm possible solutions prior to looping me in for additional support.

  • Can you do your job so I don’t have to?

    • Please let me know when your deliverables have been completed.

  • If further changes are required do them yourself.

    • If further edits are required, I have attached a version of the document that you can apply your edits directly into.

  • You should be the one doing this, not me.

    • It is my understanding that this is your responsibility. If that is not the case please let me know.

  • I don’t need to be included on this.

    • I do not feel as though I am able to add any value to this conversation. Please remove me from this thread and feel free to loop me back in in the future should my involvement be required.

  • I can’t read your mind. Be more clear on what you want.

    • In order to successfully complete this I will need further details on what is required.

  • Does taking on all this extra work come with extra pay?

    • With my role expanding is there a plan to review my title and compensation to better reflect these additional responsibilities?

    • As my role has organically evolved, can we schedule time to review my overall compensation and discuss whether or not it is still aligned with my current role and responsibilities?

    • Will these tasks be part of my job long-term? If so, is there an opportunity to reevaluate my job description, title, and overall compensation to more accurately reflect these additional responsibilities?

  • Stop promising unrealistic timelines.

    • Can you walk me through your thinking on these timelines? I have some hesitations with the dates shared.

  • I’ve not been properly trained on this

    • Is training available in order for me to successfully complete this?

  • If you have me scheduled in meetings all day, when do you expect me to get this work done?

    • My calendar is currently heavily scheduled with meetings. To ensure appropriate time is available to get this done I can sit out of lower priority meetings this week or extend the deadline on this project. Please let me know which is preferred.

  • If I’m doing your job for you, then what are you doing all day?

    • Is there a higher priority task that is consuming all of your capacity at the moment?

  • I can’t take on anymore work right now

    • I am unable to take that on at the moment as my current workload is quite heavy. Is there someone else who can assist with this?

  • Your micromanaging isn’t making this go any faster

    • Though I appreciate your attention to this, I feel as though I could be more productive if I had an opportunity to work independently here.

  • If you expect me to do the job of 3 people, then I expect you to pay me the salary of 3 people.

    • Are additional team members being added to take on these roles or will I be expected to absorb these responsibilities? If the latter, I would be happy to set up some time to discuss appropriate compensation for this role expansion

  • You are not my boss, stop trying to assign me work.

    • Have you connected with <manager name> in regard to me taking this on? It has not been communicated to me that I’ll be working on this.

  • If you want it done your way then just do it yourself.

    • As you seem to have a very clear vision for the execution of this, I encourage you to take the lead here and I’m happy to support you where necessary.

  • I don’t want to work with you more than I have to.

    • Would you be open to replacing our frequent communications with a monthly touch base where we can discuss all updates during that time?

  • I don’t want to attend a work event during my personal time.

    • I’m unable to attend after working hours.

  • Does the company actually care about the employees?

    • Are there resources and boundaries in place to support the physical and mental health of employees?

  • How much does this role pay?

    • Can you share what the overall compensation looks like for this role?

  • Are promotions based on performance or politics?

    • Is there an opportunity for growth within the company and if so what is the main metric for promotion?

  • Do you have a culture that expects employees to put in over 40 hours each week?

    • Is it common within the company for employees to exceed 40 working hours per week?

  • Is the manager of this role a micromanager?

    • How involved is the reporting manager with this role?

  • Do you expect employees to be available 24/7?

    • What is the expectation for being available outside of working hours?

  • I don’t believe you.

    • I’m not confident that the information you have provided is correct.

  • You are wasting my time.

    • Being respectful of time, let’s regroup when more details become available.

  • I deserve a raise.

    • Given my contributions to the company’s success, along with a current market analysis of my role, I am setting up time to discuss a salary review.

  • The way you speak to me is disrespectful.

    • I encourage you to reevaluate the way you are speaking to me, as the disrespect you are currently displaying towards me is not welcomed nor will it be tolerated.

  • That idea is going to be an epic fail.

    • I am not in agreement with this idea and have hesitations moving forward.

  • I am burning out with this workload and your lack of support.

    • My productivity is being impacted by the overwhelming workload I am currently assigned. Is there any support you or the team can offer?

  • You are underpaying me.

    • There is a notable discrepancy between my current salary and the going market rate for comparable roles.

  • My job has evolved but my salary has stayed the same.

    • As my role has expanded since joining the company, I would like to review my compensation so that it better reflects my evolved responsibilities.

  • I want a higher salary than you are offering me.

    • I was hoping to be in the X to Y salary range for this position.

  • If you can’t pay any more then what else can you offer.

    • I understand there are budget restrictions that limit your ability to increase my salary. However, I would like to understand if additional vacation days, benefits, bonus, and other overall compensation items can be offered.

  • Maybe if you communicated that with us sooner we wouldn’t be in this mess.

    • In the future it is important to share information like this with the team sooner so we can mitigate these sorts of issues.

  • Nobody has a clue.

    • I was unable to identify anyone who holds the knowledge for this.

  • Figure it out yourself!

    • I cannot assist. However, I am confident in your ability to find a solution here.

  • Well, one of us needs to do this, and it's not going to be me.

    • This task has to be prioritized, and I encourage you to take the lead on it.


    View Transitions Applied: More performant ::view-transition-group(*) animations


    If the dimensions of the ::view-transition-group(*) don’t change between the old and new snapshot, you can optimize its keyframes so that the pseudo-element animates on the compositor.

    ~

    🌟 This post is about View Transitions. If you are not familiar with the basics of it, check out this 30-min talk of mine to get up to speed.

    ~

    With View Transitions, the ::view-transition-group() pseudos are the ones that move around on the screen and whose dimensions get adjusted as part of the View Transition. You can see this in the following visualization when hovering the browser window:

    See the Pen
    View Transition Pseudos Visualized (2)
    by Bramus (@bramus)
    on CodePen.

    The keyframes to achieve this animation are automatically generated by the browser, as detailed in step 3.9.5 of the setup transition pseudo-elements algorithm.

    Set capturedElement’s group keyframes to a new CSSKeyframesRule representing the following CSS, and append it to document’s dynamic view transition style sheet:

    @keyframes -ua-view-transition-group-anim-transitionName {
      from {
        transform: transform;
        width: width;
        height: height;
        backdrop-filter: backdropFilter;
      }
    }

    Note: There are no to keyframes because the relevant ::view-transition-group has styles applied to it. These will be used as the to values.

    ~

    While this all works, there is one problem with it: the width and height properties are always included in those keyframes, even when the size of the group does not change from its start to end position. Because the width and height properties are present in the keyframes, the resulting animation runs on the main thread, which is typically something you want to avoid.

    Having UAs omit the width and height from those keyframes when they don’t change could allow the animation to run on the compositor, but OTOH that would break the predictability of things. If you were to rely on those keyframes to extract size information and the info was not there, your code would break.

    TEASER: Some of the engineers on the Blink team have explored a path in which width and height animations would be allowed to run on the compositor under certain strict conditions. One of those conditions being that the values don’t change between start and end. The feature, however, has only been exploratory so far and at the time of writing there is no intention to dig deeper into it because of other priorities.

    ~

    In a previous post I shared how you can get the old and new positions of a transitioned element yourself. This is done by calling getBoundingClientRect before and after the snapshotting process.

    const rectBefore = document.querySelector('.box').getBoundingClientRect();
    const t = document.startViewTransition(updateTheDOMSomehow);
    
    await t.ready;
    const rectAfter = document.querySelector('.box').getBoundingClientRect();

    With this information available, you can calculate the delta between those positions and create your own FLIP keyframes to use with translate to move the group around in cases where the old and new dimensions (width and height) are the same.

    const flip = [
      `${(rectBefore.left - rectAfter.left)}px ${(rectBefore.top - rectAfter.top)}px`,
      `0px 0px`,
    ];

    const flipKeyframes = {
      translate: flip,
      easing: "ease",
    };

    The generated keyframes can then be set on the group by sniffing out the relevant animation and updating the effect’s keyframes.

    const boxGroupAnimation = document.getAnimations().find((anim) => {
    	return anim.effect.target === document.documentElement &&
    	anim.effect.pseudoElement == '::view-transition-group(box)';
    });
    
    boxGroupAnimation.effect.setKeyframes(flipKeyframes);

    Because the new keyframes don’t include the width and height properties, these animations can now run on the compositor.

    In the following demo (standalone version here) this technique is used.

    See the Pen
    Better performing View Transition Animations, attempt #2, simplified
    by Bramus (@bramus)
    on CodePen.

    (Instructions: click the document to trigger a change on the page)

    When the width and height are different, you could calculate a scale transform that needs to be used. I’ll leave that up to you, dear reader, as an exercise.
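    As a rough starting point for that exercise (a sketch that is not part of the original demo, and which ignores transform-origin subtleties), the scale factors can be folded into the keyframes so the animation still only touches transform:

    const scaleX = rectBefore.width / rectAfter.width;
    const scaleY = rectBefore.height / rectAfter.height;

    // FLIP with both a translation and a scale: start at the old position/size,
    // end at the group's natural (new) position/size.
    const flipKeyframes = {
      transform: [
        `translate(${rectBefore.left - rectAfter.left}px, ${rectBefore.top - rectAfter.top}px) scale(${scaleX}, ${scaleY})`,
        'none',
      ],
      easing: 'ease',
    };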

    ~

    In the following demo the default generated animation and my FLIP-hijack version are shown side-by-side so that you can compare how both perform.

    See the Pen Regular and Better performing View Transition Animations side-by-side by Bramus (@bramus)
    on CodePen.

    Especially on mobile devices the results are remarkable.

    ~


    Don't be Frupid


    Frupidity: The Silent Killer of Productivity and Innovation

    Frugality is a virtue. The art of doing more with less, making sharp trade-offs, and keeping waste at bay so the good stuff – innovation, growth, maybe even a little joy – has room to thrive. Any engineer worth their salt knows the power of an elegant, efficient solution. A few well-placed optimizations can turn a sluggish system into a rocket.

    But frugality has a dark twin – a reckless, shortsighted impostor that mistakes cost-cutting for efficiency and penny-pinching for wisdom. Enter frupidity, or stupid frugality – the obsessive drive to save money in ways that ultimately cost far more in lost productivity, morale, and sanity. It’s the engineering equivalent of “optimizing” a car by removing the brakes to improve gas mileage.

    The Many Faces of Frupidity in Engineering

    Frupidity thrives in large orgs, where budgets are tight, bureaucracies dense, and someone, somewhere, is always trying to impress a spreadsheet. The best part? It usually masquerades as good stewardship.

    Tool Penny-Pinching

    “Why are we spending $15 a month per seat on this tool?” a manager asks. A fair question – except no one factors in that without it, engineers will burn hundreds of hours manually wrestling with tasks that a good automation could have handled in minutes. Multiply that by a hundred devs, and suddenly that “savings” is bleeding the company dry.

    Hardware Stinginess

    “No reason to buy high-end laptops when these entry-level machines get the job done.” Sure – if your definition of ‘getting the job done’ includes waiting five minutes for each build to compile. Those little delays don’t show up in the budget, but they pile up in engineers’ heads like a slow poison, turning momentum into molasses.

    Infrastructure Sabotage

    Cutting cloud costs by downgrading instances or consolidating databases into a single underpowered behemoth? Brilliant. Until query performance plummets and every engineer waits an extra five seconds per request, day in, day out. That’s not saving – it’s death by a thousand cuts.

    Travel Masochism

    Slashing travel budgets? Great idea – until engineers start taking three-hop flights with overnight layovers instead of direct ones. Productivity nosedives. Morale craters. Someone finally realizes the cost of these savings is burning more money than the travel budget ever did.

    Conference Austerity

    Conferences get nuked because someone upstairs sees them as a “nice to have.” The irony? That conference could’ve been where your engineers learned about a new technique that would’ve saved you a million bucks in infrastructure costs. Instead, they’re stuck reinventing the wheel – badly.

    The True Cost of Frupidity

    The worst thing about frupidity? It doesn’t look like a single, catastrophic failure. It creeps in quietly, like rust on an old bridge. The slow grind of waiting for a test suite to run because someone thought dedicated CI/CD machines were too expensive. The lost momentum of a brilliant engineer who spends half his time wrestling with red tape instead of building something great. The death of a thousand paper cuts.

    And because it doesn’t feel like an immediate disaster, leadership rarely notices – until it’s too late.

    Case Study

    Take a company I once worked with – let’s call them PennyTech. Picture a bland office park in the outer ring of a mid-sized city, where the air is thick with the dull hum of fluorescent lights and motivational posters. PennyTech had a frupidity problem so bad, it felt like a social experiment in suffering.

    They refused to pay for the professional version of a critical SaaS tool because the free tier technically worked. Never mind that it came with rate limits that forced engineers to stagger their work, or that it lacked automation features that would have streamlined half the team’s workflow. Still, the Powers That Be declared: We do not pay for what we can get for free.

    Then one day, some unsuspecting soul opened a spreadsheet, probably out of boredom or because they’d read too many corporate best-practice blogs. Turns out, that “free” tier had devoured over 500 hours of productivity in a single year.

    It’s not that PennyTech lacked intelligence; it’s that they had somehow misplaced it behind a locked budget door, believing they could outfox mathematics with good intentions and a healthy dose of denial. If there’s a moral here, it’s that sometimes the most expensive thing you can buy is the illusion of getting something for nothing.

    The Frupidity Playbook

    Want to maximize frupidity in your company and tank your engineering org? Let’s go:

    1. Give engineers the cheapest laptops money can buy. Who needs fast compile times when they can take a coffee break between builds?
    2. Ban taxis. If employees aren’t suffering through public transport, are they even working hard enough?
    3. Buy the worst coffee available. Bonus points if it comes in a bucket labeled Instant Beverage, Brown, Powdered.
    4. Make travel as painful as possible. If it’s not a three-hop flight with an overnight layover, you’re just throwing money away.
    5. Cancel all training and conferences. If devs really want to learn, they’ll figure it out in their spare time.
    6. Consolidate databases onto one overloaded server. Nothing screams optimization like a query that takes ten minutes to run.
    7. Mandate approval processes for everything. Need a second monitor? That’s a three-step approval process with a 90-day SLA.
    8. Measure success in savings, not productivity. If your engineers are miserable but the budget looks good, mission accomplished.

    Go forth and squeeze those pennies! Just don’t be surprised when your best people walk out the door – probably straight into a taxi you wouldn’t reimburse.

    Fighting Frupidity Before It Kills Your Org

    The antidote to frupidity isn’t reckless spending. It’s smart spending. It’s understanding that good engineering isn’t about cutting corners – it’s about knowing which investments will pay off in speed, efficiency, and sanity.

    Here’s how to fight it:

    1. Treat engineering time like the scarce resource it is. Saving a few bucks on tools or infrastructure is meaningless if it costs engineers hours of wasted effort.
    2. Track hidden costs, not just line items. Look beyond the immediate savings – what’s the real cost of this decision in lost productivity?
    3. If it’s a tool for engineers, let engineers decide. Frupidity often comes from non-technical managers making technical decisions.
    4. Fix problems, don’t work around them. A small investment in the right place can remove an entire category of pain.
    5. Make it personal. Would you be willing to suffer through the inconvenience of a frupid decision? If not, don’t expect your team to.

    Bureaucracy: Frupidity’s Best Friend

    Bureaucracy and frupidity feed off each other. Bureaucracy clogs the gears, frupidity makes sure no one oils them. Process takes priority over results, and before long, efficiency is just a memory.

    At its core, bureaucracy exists to create consistency, reduce risk, and ensure accountability. Sounds reasonable. But large orgs don’t just create a little process to keep things smooth; they create layers upon layers of rules, approvals, and oversight, each one designed to solve a specific problem without ever considering the full picture.

    The result? No one is empowered to make simple, logical decisions. Instead, every request gets funneled through multiple approval steps, each gatekeeper incentivized to say “no” because “yes” means taking responsibility. Need a new tool? Fill out a form. Need a monitor? Get approval from finance. Need to travel? Justify the expense to three managers who have no context on why it matters.

    And here’s the real kicker: bureaucracies love small, quantifiable savings but hate measuring intangible costs. A finance department can easily see that cutting taxi reimbursements saves $50 per trip. What they can’t see is that forcing employees to take three-hour public transit journeys leads to exhaustion, frustration, and bad decision-making that costs thousands in lost productivity. Since no one is directly accountable for those hidden costs, they don’t count.

    Bureaucracy also breeds fear. The safest move in a bureaucratic system is always the one that involves the least immediate risk – so people default to defensive decision-making. It’s safer to deny an expense than approve it. It’s safer to force engineers to “make do” with slow tools than to spend money on something that might not show instant ROI. Every layer of approval adds more hesitation, more process, more distance from common sense.

    And because bureaucracies are designed to be self-sustaining, they never get smaller. Instead, they metastasize. Every new problem gets a new rule. Every new rule adds friction. Every added friction creates more inefficiency. Until one day, the company realizes it’s spending more time on approvals than on actual work.

    By then, the best employees have left. The ones who remain are either too exhausted to fight or have figured out that the real skill in a bureaucratic org isn’t building great things – it’s navigating the system. And that’s how frupidity wins.

    Conclusion: Smart Frugality is the Real Efficiency

    Frupidity isn’t just an engineering problem – it infects every corner of a company when the culture values cheapness over wisdom. The best companies don’t just cut costs – they invest in the right places.

    The final test for frupidity? Would you make the same trade-off if it was your own time, your own money, your own problem to solve? If the answer is no, it’s not frugality.

    It’s just plain frupid.


    YouTube audio quality



    YouTube Audio Quality - How Good Does it Get?


    You Tube is clearly used by a very large number of people. In general, they will be interested in watching videos of various types of content. One specific purpose is that it gets used to distribute, and make people aware of, audio recordings. A conversation on the “Pink Fish” webforum devoted to music and audio set me wondering about the technical quality of the audio that is on offer from You Tube (YT) videos. The specific comment which drew my attention was a claim that the ‘opus’ audio codec gives better results than the ’aac / mp4’ alternative. So I decided to investigate...

    Ideally to assess this requires a copy of what was uploaded to YT as a ‘source’ version which can then be compared with what YT then make available as output. By coincidence I had also quite recently joined the Ralph Vaughan-Williams Society (RVWSoc). They have been putting videos up onto YT which provide excerpts of the recordings they sell on Audio CDs. These proved excellent ‘tasters’ for anyone who wants to know what they are recording and releasing. And when I asked, they kindly provided me with some examples to help me investigate this issue of YT audio quality and codec choice.

    For the sake of simplicity I’ll ignore the video aspect of this entirely and only discuss the audio side. The RVWSoc kindly let me have ‘source uploaded’ copies of two examples. The choice of audio formats that YT offered for these videos are as follows:


    Pan’s Anniversary
    Available audio formats for HZHVTr1w6L8:
    ID       EXT  | ACODEC     ABR  ASR    MORE INFO
    ------------------------------------------------------------
    139-dash m4a  | mp4a.40.5  49k 22050Hz DASH audio, m4a_dash
    140-dash m4a  | mp4a.40.2 130k 44100Hz DASH audio, m4a_dash
    251-dash webm | opus      153k 48000Hz DASH audio, webm_dash
    139      m4a  | mp4a.40.5  48k 22050Hz low, m4a_dash
    140      m4a  | mp4a.40.2 129k 44100Hz medium, m4a_dash
    251      webm | opus      135k 48000Hz medium, webm_dash

    VW on Brass
    Available audio formats for KsILRbZtTwc:
    ID       EXT  | ACODEC     ABR  ASR    MORE INFO
    ------------------------------------------------------------
    139-dash m4a  | mp4a.40.5  50k 22050Hz DASH audio, m4a_dash
    140-dash m4a  | mp4a.40.2 130k 44100Hz DASH audio, m4a_dash
    251-dash webm | opus      149k 48000Hz DASH audio, webm_dash
    139      m4a  | mp4a.40.5  48k 22050Hz low, m4a_dash
    140      m4a  | mp4a.40.2 129k 44100Hz medium, m4a_dash
    251      webm | opus      136k 48000Hz medium, webm_dash

    The ‘ID’ numbers refer to the audio streams. Other ID values would produce a video of a user-chosen resolution, etc., normally accompanied by one of the above types of audio stream.

    One aspect of this stands out immediately. This is the variety of audio sample rate (ASR) options on offer. However in each case only one version of a RVWSoc video was uploaded, at a sample rate chosen by the RVWSoc. I had expected to see a choice of YT output audio codecs (compression systems), but was quite surprised, in particular, to see ASRs as low as 22·05k on offer as that means the YT audio would then have a frequency range that only extends to about 10kHz!

    Given the main interest here is in determining what choice may deliver the highest YT output audio quality I decided that analysis should focus on the higher, more conventional rates – 48k and 44k1. In addition, the above shows that – since in each case only one source file (and hence only one sample rate) was uploaded – some of the above YT output versions at 48k or 44k1 have also been through a sample rate conversion in addition to a codec conversion. That introduces another factor that may degrade sound quality! In this case I had copies of what had been uploaded, so could determine which of the YT output versions had been through such a rate conversion. For the present examination, I will therefore limit the investigation to the YT output that maintains the same ASR as the source that was uploaded, so as to dodge this added conversion. Unfortunately, in general YT users probably won’t know which output version may have dodged this particular potential bullet! Choosing the least-tampered-with output may therefore be a challenge in practice for YT users.

    Pan’s Anniversary [48k ASR 194 kb/s aac(LC) source audio]

    I’ll begin the detailed comparisons with the video of an excerpt from the CD titled “Pan’s Anniversary”. The version uploaded contains the audio in the form encoded in the aac(LC) codec at a bitrate of 194 kb/s and using a sample rate of 48k. The audio lasts 4 mins 54·88 sec. The table below compares this with the same aspects of the high ABR versions offered by YT.


    version  codec    ASR   bitrate (kb/s)  duration (m:s)
    source   aac(LC)  48k   194             4:54·88
    YT-140   aac(LC)  44k1  127 fltp        4:54·94
    YT-251   opus     48k   135 fltp        4:54·90

    We can see that YT-140 uses the same codec as the source, but alters the information bitrate, and the ASR. YT-251 transcodes the input aac(LC) into the opus codec, but doesn’t alter the sample rate. Both of the YT versions are of longer duration than the source uploaded. By loading the files into Audacity and examining the waveforms by eye it became clear that the YT versions were not time-aligned with each other, or with the source.

    To avoid any changes caused by alteration of the ASR I decided to concentrate on comparing the source version with YT-251 – i.e. where the output uses the opus codec, not aac, but maintains the source sample rate. Having chosen matching sample rates, the simplest and easiest way to check how similar two versions are is to time-align them and then subtract, sample by sample, each sample in a sequence in one version from the nominally ‘same instant’ matching one in the other. If the patterns are the same, the result is a series of zero-valued samples. If they don’t match we get a ‘difference’ pattern. However, before we can do this we have to determine the correct time-offset to align the sample sequences of the two versions.

    In some cases that can be fairly obvious from looking at the sample patterns using a program like Audacity. But in other cases this is hard to see with enough clarity to determine with complete precision. Fortunately, we can use a mathematical method known as cross-correlation to show us the time alignment of similar waveform patterns. (See https://en.wikipedia.org/wiki/Cross-correlation if you want to know more about cross correlation.) This also can show us where the best alignment may occur in terms of any offset between the two patterns of samples being cross correlated.
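    To make the idea concrete, a brute-force version of this search only takes a few lines. The sketch below is an illustration only (it is not the program actually used for the analysis here); it uses the same figures as described next – a +/- 800 sample search range over 180,000 sample pairs.

    // Sketch: slide one buffer past the other over a range of lags and return
    // the lag giving the largest correlation sum.
    function bestOffset(a: Float32Array, b: Float32Array, maxLag = 800, pairs = 180_000): number {
      let bestLag = 0;
      let bestSum = -Infinity;
      for (let lag = -maxLag; lag <= maxLag; lag++) {
        let sum = 0;
        for (let i = 0; i < pairs; i++) {
          const j = i + lag;
          if (i < a.length && j >= 0 && j < b.length) sum += a[i] * b[j];
        }
        if (sum > bestSum) {
          bestSum = sum;
          bestLag = lag;
        }
      }
      return bestLag; // with this convention a positive result means b lags a
    }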


    Fig1.png - 54Kb

    The above graph shows the result of cross correlating a section of the source and YT-251 versions of the audio. (The red and blue lines show the Left and Right channels of the stereo.) The results cover offsets over a range of +/- 800 samples. The process used 180,000 successive sample pairs from each set of samples. i.e. about 3·75 sec of audio from each. The best alignment is indicated by the location of the largest peak. If the sample sequences were already aligned this would happen at an offset = 0. However here we can see that the YT-251 version is ‘late’ by just over 300 samples.

    Fig2.png - 40Kb


    Zooming in, we can see that the peak is at an offset of -312 samples, which at a 48k sample rate corresponds to YT-251 being 6·5 milliseconds late. Having determined this I could trim 312 samples from the start of each channel of YT-251 and this aligned the two series of samples. (I also then had to trim the end to make them of equal length.) Once this was done it became possible to run through the samples and take a sample-by-sample difference between the source version and YT-251. This difference set then details how the YT-251 output differs from the source version.
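    Expressed as code, the trimming and subtraction are straightforward. The following is again only an illustrative sketch (not the analysis program used here), assuming two mono buffers at the same sample rate and the offset found above:

    // Remove the measured lead-in from the late buffer, subtract sample by
    // sample, and report the rms level of the difference relative to the source.
    function rmsDb(x: Float32Array): number {
      let sum = 0;
      for (const v of x) sum += v * v;
      return 10 * Math.log10(sum / x.length);
    }

    function differenceLevelDb(source: Float32Array, output: Float32Array, offsetSamples: number): number {
      const aligned = output.subarray(offsetSamples);     // e.g. trim 312 samples
      const n = Math.min(source.length, aligned.length);  // equalise the lengths
      const diff = new Float32Array(n);
      for (let i = 0; i < n; i++) diff[i] = aligned[i] - source[i];
      return rmsDb(diff) - rmsDb(source.subarray(0, n));  // error level in dB relative to the source
    }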


    Fig3.png - 105Kb

    The graph above shows how the rms audio power level varies with time. The red and blue lines show the levels in the Pan’s Anniversary source file. The grey and purple lines show the power levels versus time obtained from subtracting the source file sample values from the audio samples from YT-251. Ideally, we’d wish to find a subtraction like this producing a series of zeros as the difference samples. This is because such a result would tell us that the output was identical to the input and going via YT had not changed the audio at all! However in practice the above results show this clearly is not the case! There is a residual ‘error’ which is typically somewhere around 20dB below the input (and output) musical level.

    In traditional terms for audio, 20dB would be regarded as a very poor signal/noise ratio. And if the change was considered as being equivalent to conventional distortion it would be assumed to indicate a level of around 10% distortion! So it represents a rather underwhelming result. However a more benign interpretation may be that it arises as a result of the process applied by YT slightly altering the overall amplitudes of the waveforms so they don’t quite match in size – hence leave a non-zero difference when the input and output are subtracted. With this in mind we can compare the input and output samples using other methods that aren’t sensitive to an overall change in signal pattern levels.


    Fig4.png - 51Kb

    The above graph compares the input and output files in terms of Crest Factor. This measures the peak-rms power level difference of the waveform shapes defined by the series of samples. The red/blue lines show the Left and Right channel results for the source file sent to YT. The orange/green lines show the equivalent results for YT-251. To obtain these results each set of samples was divided up into a series of 0·1 sec sections. The peak and rms power level of each was calculated, and the above shows how often a given value was obtained, grouped into 1dB wide statistical ‘bins’. For a pure sinewave the peak/rms crest factor is 3dB. i.e. the peak levels of a sinewave are 3dB larger than the rms power. For well recorded music from acoustic instruments the Crest Factor tends to be in the range from a few dB up to well over 10dB for the most ‘spiky’ waveforms.
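    For anyone wishing to reproduce this kind of plot, the per-block calculation is short. The sketch below is illustrative only (not the analysis program used here), assuming a mono buffer and 0·1 sec blocks:

    // Crest Factor (peak level minus rms level, in dB) for successive 0.1 s
    // blocks. The results can then be grouped into 1 dB wide bins to build the
    // distribution shown above. A pure sinewave gives about 3 dB per block.
    function crestFactorsDb(samples: Float32Array, sampleRate: number): number[] {
      const blockLen = Math.round(0.1 * sampleRate);
      const results: number[] = [];
      for (let start = 0; start + blockLen <= samples.length; start += blockLen) {
        let peak = 0;
        let sumSq = 0;
        for (let i = start; i < start + blockLen; i++) {
          const v = Math.abs(samples[i]);
          if (v > peak) peak = v;
          sumSq += samples[i] * samples[i];
        }
        const rms = Math.sqrt(sumSq / blockLen);
        if (rms > 0) results.push(20 * Math.log10(peak / rms));
      }
      return results;
    }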

    The result is interesting as we can see that the YT-251 output clearly exhibits a different Crest Factor distribution to the source file. It seems doubtful this could be produced by a simple change in the overall signal pattern level. (e.g. a volume control does change the overall level, but it should not change the shape of the audio waveform, and hence should leave the Crest Factor unaltered. If it did alter this, you’d be advised to replace the control with one that worked properly!) It is particularly curious that the Crest Factor seems to be increased by having the audio pass through the YT processing. Although possibly this may arise due to an input which is aac(LC) coded being transcoded into ‘opus’ codec form. OTOH perhaps YT apply some form of ‘tarting up’ to make audio ‘sound better’...

    Fig5.png - 85Kb


    A more familiar way to show the character of an audio recording is to plot its spectrum. The above graph shows the spectrum of the Pan ‘source’ file and of the series of samples obtained by subtracting the YT-251 output sample series. We can then say that - at any given frequency - the bigger the gap between these plotted lines, the closer the YT-251 output is to the source supplied to YT. Looking at the graph we can then see that the results indicate that the faithfulness of the YT-251 result to the input is at its best at low frequencies, where the gap is widest and the contributions to the overall signal level are greatest. However at higher frequencies the level of the error becomes a larger fraction of the input. And above about 16kHz the error level is quite similar to the input level. The implication being that the original details in the source above this frequency have essentially been lost. They may have been replaced by something generated to ‘fake’ this lost information in a plausible manner.


    Fig6.png - 72Kb

    In fact, if we compare the spectra of the input with that of the output we can see that the YT-251 output using the opus codec has essentially removed anything from the source that was above 20 kHz. This is replaced by an HF ‘noise floor’ produced via a process like dithering, probably employed to suppress quantisation distortion, etc. We can also see that the source, although at a 48k sample rate, has a sharp cutoff at just over 22 kHz. This indicates that although what was submitted to YT was at a 48k sample rate it was actually generated from a 44k1 (i.e. audio CD rate) version. The behaviour of the above spectra may well be another sign of changes that also produced the change in the typical Crest Factor distribution.

    Having applied the above analysis to an example that produced a YT output using the opus codec we can now examine another example which uses aac for the output to compare that with the use of opus.

    Vaughan-Williams on Brass [44k1 ASR 127 kb/s aac(LC)]

    In this case the chosen source file had aac audio content at an ASR of 44k1. This was chosen to match the sample rate of YT-140 and thus avoid any resampling effects. As with the previous “Pan’s Anniversary” example the YT-140 output was of a different duration to the source file, and not sample-aligned. So as before, I edited the output samples to trim them to a sample-aligned series that matched the timing of the source version.


    Fig7.png - 86Kb

    The graph above shows the time-averaged spectra of the source file and the difference between this and the YT-140 output. This represents the residual error level.


    Fig8.png - 102Kb

    As with the earlier YT-251 example, the above shows the source signal level versus time, and the level versus time of the sample-by-sample difference between the source and the YT output (now YT-140). In this case the error level seems to be around -30 to -35dB compared with the source, which is an improvement on the earlier YT-251 example. Viewed as distortion it would be equivalent to a level of around 3% or less.

    More broadly speaking, the results look similar to the previous example. However...

    Fig9.png - 78Kb

    ...when we compare the actual spectra of the input and output we find that the output now has its high-frequency range cropped off at just under 16kHz! This is distinctly lower than the range present in the 44k1 sample rate aac input submitted to YT.

    It is also worth noting that although the YT-251 example’s output spectrum seems to extend to around 20kHz we found that the part above about 15kHz looked to be ‘all error’! i.e. not the source HF details but some sort of facsimile!

    Overall therefore, we might judge the YT-140 example to be, technically, more accurate than the YT-251 example. But to tell if this result was a general one we’d need to examine more examples. And given the changes in the applied high-frequency cut-off, we might also find that allowing the sample rate to be altered might, overall, yield a different outcome! So the above results are interesting, but raise further questions which deserve future investigations.

    BBC iPlayer as a comparison [48k ASR 320 kb/s aac]

    In 2017 the BBC experimented with streaming BBC Radio 3 using flac in parallel with their standard iPlayer output formats. They did this for two reasons. One was to assess the practical requirements they would need to support if it became a standard. The other was to investigate if it produced audibly ‘better’ results. Some programmes were parallel streamed in the days leading up to the Proms of that year. And almost all of the Proms were also streamed in flac format as well as their usual 320k aac format. As it wasn’t a formal service some special arrangements were needed for someone to receive the flac stream version. However given some advance notice I was able to use a modified version of the ffmpeg utility to capture examples of the flac stream for comparison with the aac. Having looked at the above YT examples it may be useful to compare them with the results from a BBC example.

    Since the flac streams essentially represent the LPCM input to the BBC’s aac encoder they can serve as an example of what is possible if we compare and contrast them with the above results of examining the YT output. For the sake of illustration I’ve chosen a section of a “Record Review” programme on R3.


    Fig10.png - 83Kb

    The above shows the time averaged spectra for a 5-min section of the program whilst music was being played. Here the source is taken from the flac data, and the output is taken from the 320k aac. These were then sample time-aligned so it was also possible to generate a sample-by-sample difference set of values and plot the spectrum of the residual ‘error’ pattern. i.e. the above can now be compared with the previous spectra for the YT examples. When this is done we can see that the error residual is somewhat lower relative to the source and output than in the YT examples – particularly at and above a few kHz. The output also effectively extends to higher frequencies than the YT examples. In addition we can see that the error level remains low over a wider range of frequencies than the YT examples.


    Fig11.png - 103Kb

    The above plots the audio levels of the source versus time, and also the residual level of the difference (error) between the source and output for the same section of programme. Again, this is better than the results for YT. In this BBC example the difference error level is around 40dB below the signal level. Again, this would be nominally poor if it were a standard noise level or distortion result, where it would look like a distortion level of around 1%. But it is better than the YT examples – as, of course, we’d hope given the higher bitrate (320k) employed by the BBC! (The BBC eventually decided that flac streaming wasn’t required, although some audiophiles have disagreed with them and regret the decision to stay with 320k aac.)

    More evidence would be needed to be confident in drawing reliable general conclusions. It is also worth adding that the BBC now limit the audio for the video streams to a maximum of just 128k aac as they judge that to be fine for video, even of musical events. Despite streaming Proms via R3 at 320k, their TV video streams of the same events are limited to 128k aac. So there does seem to be an assumption that video somehow reduces the need for more accurate audio. That may be so, but the above examination does perhaps serve as a pointer to the possibility that the audio quality of YT’s output is being limited from an audiophile’s POV by their offered choice of bitrates/codecs. So this general area may be worth further examination...

    J. C. G. Lesurf  
    24th July 2022
      

      I’d like to thank the RVW Society for kindly providing me with source copies of files for the purpose of the comparisons. You can find out more about the Society and their CD releases on the Albion label here: The Ralph Vaughan Williams Society.






    The modern way to write JavaScript servers


    If you've visited Node's homepage at some point in your life you have probably seen this snippet:

    import { createServer } from "node:http";

    const server = createServer((req, res) => {
      res.writeHead(200, { "Content-Type": "text/plain" });
      res.end("Hello World!\n");
    });

    // starts a simple http server locally on port 3000
    server.listen(3000, "127.0.0.1", () => {
      console.log("Listening on 127.0.0.1:3000");
    });

    It shows how to create a plain web server and respond with plain text. This is the API Node became famous for, and it was kinda revolutionary when it came out. These days this call might be abstracted away behind frameworks, but under the hood, this is what they're all doing.

    It's nice, it works, and a whole ecosystem has formed around it. However, it's not all roses. A pretty big downside of this API is that you have to bind a socket and actually spawn the server. For production usage this is perfectly fine, but it quickly becomes a hassle when it comes to testing or interoperability. Sure, for testing we can abstract away all that boilerplate like supertest does, but even so it cannot circumvent having to bind to an actual socket.

    // Example supertest usage
    request(app)
      .get("/user")
      .expect(200)
      .end(function (err, res) {
        if (err) throw err;
      });

    What if there was a better API that doesn't need to bind to a socket?

    Request, Response and Fetch

    Enter the modern times with the fetch-API that you might already be familiar with from browsers.

    const response = await fetch("https://example.com");
    const html = await response.text();

    The fetch() call returns a bog-standard JavaScript class instance. But the kicker is that the same is true for the request! So if we zoom out a little, this API allows every server to be expressed as a function that takes a Request instance and returns a Response.

    type MyApp = (req: Request) => Promise<Response>;

    Which in turn means you don't need to bind sockets anymore! You can literally just create a new Request object and call your app with that. Let's see what that looks like. This is our app (I know it's a boring one!):

    const myApp = async (req: Request) => {
      return new Response("Hello world!");
    };

    ...and this is how we can test it:

    // No fetch testing library needed
    const response = await myApp(new Request("http://localhost/"));
    const text = await response.text();

    expect(text).toEqual("Hello world!");

    There are no sockets involved or anything like that. We don't even need an additional library. It's just plain old boring JavaScript. You create an object and pass it to a function. Nothing more, nothing less. So, how much overhead does binding to sockets end up being?

    Benchmark             time/iter (avg)   iter/s
    Node-API              806.5 µs          1,240
    Request/Response-API  3.0 µs            329,300

    Well, it turns out quite a bit! This benchmark might make it look like the differences are minuscule, but when, let's say, you have a test suite that runs this server a thousand times, the differences become more stark:

    Spawn 1000x           time      speedup
    Node-API              1.531s    -
    Request/Response-API  5.29ms    289x faster

    Going from 1.5s to 5.2ms, which is practically instant, made it much more enjoyable to work on the tests.

    How do I launch my server?

    Now, to be fair, so far we haven't launched the server yet. And that's exactly the beauty of this API, because we didn't need to! The exact API depends on the runtime in use, but usually it's nothing more than a few lines. In Deno it looks like this, for example:

    Deno.serve(req => new Response("Hello world!"));

    What's way more important than this though is that the WinterTC group (formerly known as WinterCG) has standardized exporting your app function directly. This means that it's much easier to run your code in different runtimes without changing your code (at least the server handling stuff).

    export default {
      fetch() {
        return new Response("hello world");
      },
    };

    Conclusion

    This API works everywhere today! The only exception here is Node, and although they're part of WinterTC, they haven't shipped this yet. But with a little good ol' polyfilling you can teach it the modern ways to build servers. Once Node supports this natively, a bunch of tooling for frameworks will become easier.
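    As a rough sketch of what such a polyfill-style adapter could look like (an illustration only, not an existing package; it assumes Node 18+ where Request and Response are available globally, and it ignores request bodies and streaming responses for brevity):

    import { createServer } from "node:http";

    // Wrap a fetch-style handler in Node's classic http API.
    function serve(handler: (req: Request) => Response | Promise<Response>) {
      return createServer(async (req, res) => {
        // Build a standard Request from Node's IncomingMessage
        const request = new Request(`http://${req.headers.host}${req.url}`, {
          method: req.method,
          headers: Object.entries(req.headers).filter(
            ([, value]) => typeof value === "string"
          ) as [string, string][],
        });

        const response = await handler(request);

        // Copy status and headers back onto the ServerResponse, then send the body
        res.writeHead(response.status, Object.fromEntries(response.headers));
        res.end(Buffer.from(await response.arrayBuffer()));
      });
    }

    // Usage: the same fetch-style app as before, now listening on port 3000
    serve(() => new Response("Hello world!")).listen(3000);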

    That framework tooling, too, tends to try to turn every runtime into Node, which is a big task and causes lots of friction. That's the exact scenario I ran into when writing adapters for frameworks for Deno and prototyping new APIs.

    Shoutout to SvelteKit which is one of the modern frameworks that nailed this aspect and made writing an adapter a breeze!


    JavaScript Temporal is coming

    A new way to handle dates and times is being added to JavaScript. Let's take a look at Temporal, what problems it solves, the current state, and what you'll find in the new documentation about it on MDN.
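    To give a flavour of what that looks like, here is a tiny illustrative sketch based on the TC39 Temporal proposal (not taken from the linked post; it needs a runtime or polyfill that ships Temporal):

    // How far away is a given date-time, expressed in days and hours?
    const now = Temporal.Now.plainDateTimeISO();
    const release = Temporal.PlainDateTime.from("2025-06-01T09:00");
    const until = now.until(release, { largestUnit: "days" });
    console.log(`${until.days} days and ${until.hours} hours to go`);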
