
Using console.log() debugging in Visual Studio Code

Using the new built-in JavaScript debugger in Visual Studio Code, you can use the browser developer tools Console right inside the editor. I just published a “TikTok”-style video on the official Visual Studio Code channel explaining this and – after lots of criticism for the quality of the video (lads, this is on purpose!) […]

CCC will no longer report security vulnerabilities to the CDU


To coordinate its door-to-door campaigning efficiently, the CDU maintains an app called CDUconnect. In this app, Christian Democratic canvassers collect data about the people they visit in their homes.

The CCC activist Lilith Wittmann devoted a few hours of her attention to this app in May 2021. The result: not only the personal data of 18,500 campaign volunteers, including e-mail addresses and photos, but also the personal data of 1,350 CDU supporters, including address, date of birth, and interests, were unprotected and freely accessible over the internet. Naturally, the half million records on the political views of contacted persons were also present.

She responsibly reported the vulnerabilities to the relevant offices of the CDU, the Federal Office for Information Security (BSI), and the Berlin data protection commissioner.

The insecure database was shut down shortly afterwards and the CDU vowed to do better.
So far, so routine (unfortunately).

Responsible Disclosure

Even while corresponding with the security researcher, the CDU raised the prospect of legal action; a common first impulse born of frustration and incompetence. Usually, the notion of pressing charges against a volunteer security researcher evaporates quite quickly: reporting the vulnerability is free tutoring in IT security and serves to protect the almost 20,000 people affected.

For cases like this, IT security culture has established the practice of responsible disclosure: the discoverers report the vulnerability, receive no compensation whatsoever, and publish details of the vulnerability once the danger to those affected has been averted.

Shooting the Messenger

Unfortunately, the CDU has proven extremely ungrateful for the volunteer tutoring. It has now filed a criminal complaint against the CCC activist with the State Criminal Police Office (LKA). "Shooting the messenger" is the name of the dysfunctional strategy of attacking those who point out a problem rather than solving it.

“The CDU does this not only in this case, but also with digitalization and other important political problem areas. In that respect, this destructive approach is only consistent,” said Linus Neumann, spokesperson for the Chaos Computer Club.

CCC wishes the CDU good luck with future vulnerabilities

Unfortunately, the CDU has thereby unilaterally terminated the implicit ladies'-and-gentlemen's agreement of responsible disclosure. “To avoid legal disputes in the future, we regretfully see ourselves forced to refrain from reporting vulnerabilities in CDU systems from now on,” Neumann announced.

The CCC expressly regrets that this increases the risk of anonymous full-disclosure publications for the CDU and its volunteer supporters. We preemptively reject any responsibility for future publications of this kind.


How To Build Resilient JavaScript UIs


Things on the web can break — the odds are stacked against us. Lots can go wrong: a network request fails, a third-party library breaks, a JavaScript feature is unsupported (assuming JavaScript is even available), a CDN goes down, a user behaves unexpectedly (they double-click a submit button), the list goes on.

Fortunately, we as engineers can avoid, or at least mitigate the impact of breakages in the web apps we build. This however requires a conscious effort and mindset shift towards thinking about unhappy scenarios just as much as happy ones.

The User Experience (UX) doesn’t need to be all or nothing — just what is usable. This premise, known as graceful degradation, allows a system to continue working when parts of it are dysfunctional — much like an electric bike becomes a regular bike when its battery dies. If something fails, only the functionality dependent on it should be impacted.

UIs should adapt to the functionality they can offer, whilst providing as much value to end-users as possible.

Why Be Resilient

Resilience is intrinsic to the web.

Browsers ignore invalid HTML tags and unsupported CSS properties. This liberal attitude is known as Postel’s Law, which is conveyed superbly by Jeremy Keith in Resilient Web Design:

“Even if there are errors in the HTML or CSS, the browser will still attempt to process the information, skipping over any pieces that it can’t parse.”

JavaScript is less forgiving. Resilience is extrinsic. We instruct JavaScript what to do if something unexpected happens. If an API request fails the onus falls on us to catch the error, and subsequently decide what to do. And that decision directly impacts users.

Resilience builds trust with users. A buggy experience reflects poorly on the brand. According to Kim and Mauborgne, convenience (availability, ease of consumption) is one of six characteristics associated with a successful brand, which makes graceful degradation synonymous with brand perception.

A robust and reliable UX is a signal of quality and trustworthiness, both of which feed into the brand. A user unable to perform a task because something is broken will naturally face disappointment they could associate with your brand.

Often system failures are chalked up as "corner cases" — things that rarely happen. However, the web has many corners: different browsers running on different platforms and hardware, user preferences and browsing modes to respect (Safari Reader, assistive technologies), and geo-locations with varying latency and intermittency all increase the likelihood of something not working as intended.

Error Equality

Much like content on a webpage has hierarchy, failures — things going wrong — also follow a pecking order. Not all errors are equal, some are more important than others.

We can categorize errors by their impact. How does XYZ not working prevent a user from achieving their goal? The answer generally mirrors the content hierarchy.

For example, a dashboard overview of your bank account contains data of varying importance. The total value of your balance is more important than a notification prompting you to check in-app messages. The MoSCoW method of prioritization categorizes the former as a must-have, and the latter as a nice-to-have.

If primary information is unavailable (e.g. a network request fails), we should be transparent and let users know, usually via an error message. If secondary information is unavailable, we can still provide the core (must-have) experience whilst gracefully hiding the degraded component.

Knowing when to show an error message or not can be represented using a simple decision tree.

Categorization removes the 1-1 relationship between failures and error messages in the UI. Otherwise, we risk bombarding users and cluttering the UI with too many error messages. Guided by content hierarchy, we can cherry-pick which failures are surfaced to the UI, and which happen unbeknownst to end-users.
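That decision can be sketched as a small function. A minimal sketch, assuming two hypothetical severity levels that mirror the content hierarchy (the names and return values are illustrative, not from the article):

```javascript
// Hypothetical severity levels mirroring the content hierarchy.
const SEVERITY = {
  PRIMARY: 'primary',     // must-have content (e.g. account balance)
  SECONDARY: 'secondary', // nice-to-have content (e.g. in-app messages)
};

// Primary failures surface an error message to the user; secondary
// failures hide the degraded component and are only logged for engineers.
function respondToFailure(severity) {
  if (severity === SEVERITY.PRIMARY) {
    return 'show-error-message';
  }
  return 'hide-component-and-log';
}
```

The mapping itself stays trivial on purpose: the hard part is deciding up front which failures are primary, which the content hierarchy answers for us.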

Prevention is Better than Cure

Medicine has an adage that prevention is better than cure.

Applied to the context of building resilient UIs, preventing an error from happening in the first place is more desirable than needing to recover from one. The best type of error is one that doesn’t happen.

It’s safe to assume never to make assumptions, especially when consuming remote data, interacting with third-party libraries, or using newer language features. Outages or unplanned API changes alongside what browsers users choose or must use are outside of our control. Whilst we cannot stop breakages outside our control from occurring, we can protect ourselves against their (side) effects.

Taking a more defensive approach when writing code helps reduce programmer errors arising from making assumptions. Pessimism over optimism favours resilience. The code example below is too optimistic:

const debitCards = useDebitCards();

return (
  <ul>
    {debitCards.map((card) => {
      return <li>{card.lastFourDigits}</li>;
    })}
  </ul>
);

It assumes that debit cards exist, that the endpoint returns an array, that the array contains objects, and that each object has a property named lastFourDigits. The current implementation forces end-users to test our assumptions. It would be safer, and more user-friendly, if these assumptions were embedded in the code:

const debitCards = useDebitCards();

if (Array.isArray(debitCards) && debitCards.length) {
  return (
    <ul>
      {debitCards.map((card) => {
        if (card.lastFourDigits) {
          return <li>{card.lastFourDigits}</li>;
        }
        return null;
      })}
    </ul>
  );
}

return "Something else";

Using a third-party method without first checking the method is available is equally optimistic:

stripe.handleCardPayment(/* ... */);

The code snippet above assumes that the stripe object exists, it has a property named handleCardPayment, and that said property is a function. It would be safer, and therefore more defensive if these assumptions were verified by us beforehand:

if (
  typeof stripe === 'object' &&
  typeof stripe.handleCardPayment === 'function'
) {
  stripe.handleCardPayment(/* ... */);
}
Both examples check something is available before using it. Those familiar with feature detection may recognize this pattern:

if (navigator.clipboard) {
  /* ... */
}

Simply asking the browser whether it supports the Clipboard API before attempting to cut, copy or paste is a simple yet effective example of resilience. The UI can adapt ahead of time by hiding clipboard functionality from unsupported browsers, or from users yet to grant permission.

User browsing habits are another area outside our control. Whilst we cannot dictate how our application is used, we can put guardrails in place that prevent what we perceive as "misuse". Some people double-click buttons — a behavior that is mostly redundant on the web, but not a punishable offense.

Double-clicking a button that submits a form should not submit the form twice, especially for non-idempotent HTTP methods. During form submission, prevent subsequent submissions to mitigate any fallout from multiple requests being made.
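One way to implement this guard is to track an in-flight flag around the submit handler. A minimal sketch in plain JavaScript; `submitForm` stands in for whatever function actually performs the request:

```javascript
// Wrap a submit function so repeat clicks while a submission is in
// flight are ignored instead of firing a second request.
function createSubmitGuard(submitForm) {
  let inFlight = false;
  return async function guardedSubmit(...args) {
    if (inFlight) return 'ignored'; // swallow the double-click
    inFlight = true;
    try {
      return await submitForm(...args);
    } finally {
      inFlight = false; // allow genuine resubmissions later
    }
  };
}
```

In a real form you would pair this with `aria-disabled="true"` on the button, as discussed below, so the state is also communicated to assistive technologies.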

Preventing form resubmission in JavaScript alongside using aria-disabled="true" is more usable and accessible than the disabled HTML attribute. Sandrina Pereira explains Making Disabled Buttons More Inclusive in great detail.

Responding to Errors

Not all errors are preventable via defensive programming. This means responding to operational errors (those occurring within correctly written programs) falls on us.

Responding to an error can be modelled using a decision tree. We can either recover, fall back, or acknowledge the error.

When facing an error, the first question should be, “can we recover?” For example, does retrying a network request that failed for the first time succeed on subsequent attempts? Intermittent micro-services, unstable internet connections, or eventual consistency are all reasons to try again. Data fetching libraries such as SWR offer this functionality for free.
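A bare-bones retry can be a bounded loop over the request. A minimal sketch, without the backoff, deduplication, and caching a library like SWR adds for you; `fetchFn` is any function returning a Promise:

```javascript
// Retry a Promise-returning function up to `attempts` times, rethrowing
// the last error if every attempt fails.
async function withRetry(fetchFn, attempts = 3) {
  let lastError;
  for (let i = 0; i < attempts; i += 1) {
    try {
      return await fetchFn();
    } catch (error) {
      lastError = error; // remember the failure and try again
    }
  }
  throw lastError;
}
```

Production code should also space the attempts out (exponential backoff) so a struggling service isn't hammered by immediate retries.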

Risk appetite and surrounding context influence which HTTP methods you are comfortable retrying. At Nutmeg we retry failed reads (GET requests), but not writes (POST/PUT/PATCH/DELETE). Multiple attempts to retrieve data (portfolio performance) are safer than multiple attempts to mutate it (resubmitting a form).

The second question should be: if we cannot recover, can we provide a fallback? For example, if an online card payment fails, can we offer an alternative means of payment, such as PayPal or Open Banking?

Fallbacks don’t always need to be so elaborate; they can be subtle. Copy containing text dependent on remote data can fall back to less specific text when the request fails:
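For example, a heading that normally interpolates a remotely fetched value can fall back to generic copy. A sketch; the portfolio example and wording are illustrative, not from the article:

```javascript
// Render specific copy when the remote value arrived, generic copy
// otherwise. Both variants remain truthful and usable.
function portfolioHeading(portfolioValue) {
  if (typeof portfolioValue === 'number') {
    return `Your portfolio is worth £${portfolioValue}`;
  }
  return 'Your portfolio'; // less specific, but nothing is broken
}
```

The user sees a slightly less informative heading instead of an error, which is exactly the spirit of graceful degradation.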

The third and final question should be: if we cannot recover or fall back, how important is this failure? (This relates back to "Error Equality".) The UI should acknowledge primary errors by informing users something went wrong, whilst providing actionable prompts such as contacting customer support or linking to relevant support articles.


UIs adapting to something going wrong is not the end. There is another side to the same coin.

Engineers need visibility on the root cause behind a degraded experience. Even errors not surfaced to end-users (secondary errors) must propagate to engineers. Real-time error monitoring services such as Sentry or Rollbar are invaluable tools for modern-day web development.

Most error monitoring providers capture all unhandled exceptions automatically. Setup requires minimal engineering effort and quickly pays dividends: a healthier production environment and an improved MTTA (mean time to acknowledge).

The real power comes when explicitly logging errors ourselves. Whilst this involves more upfront effort, it allows us to enrich logged errors with more meaning and context — both of which aid troubleshooting. Where possible, aim for error messages that are understandable to non-technical members of the team.

Extending the earlier Stripe example with an else branch is the perfect contender for explicit error logging:

if (
  typeof stripe === "object" &&
  typeof stripe.handleCardPayment === "function"
) {
  stripe.handleCardPayment(/* ... */);
} else {
  logger.error(
    "[Payment] Card charge — Unable to fulfill card payment because stripe.handleCardPayment was unavailable"
  );
}
Note: This defensive style needn’t be bound to form submission (at the time of error); it can happen when a component first mounts (before the error), giving us and the UI more time to adapt.

Observability helps pinpoint weaknesses in code and areas that can be hardened. Once a weakness surfaces, look at if and how it can be hardened to prevent the same thing from happening again. Look at trends and risk areas, such as third-party integrations, to identify what could be wrapped in an operational feature flag (otherwise known as a kill switch).
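An operational feature flag can be as simple as a remotely controlled boolean checked before the risky code path runs. A sketch; the flag name and its source are assumptions, since real flags would come from a remote config service so they can be flipped without a deploy:

```javascript
// A plain object stands in for a remote flag service here.
const flags = { cardPaymentsEnabled: true };

// Guard the third-party integration behind the kill switch. `chargeFn`
// is a placeholder for the actual payment call.
function attemptCardPayment(chargeFn) {
  if (!flags.cardPaymentsEnabled) {
    return 'payments-disabled'; // UI can show an outage notice instead
  }
  return chargeFn();
}
```

When the integration misbehaves in production, flipping the flag off disables just that code path while the rest of the app keeps working.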

Users forewarned about something not working will be less frustrated than those without warning. Knowing about road works ahead of time helps manage expectations, allowing drivers to plan alternative routes. When dealing with an outage (hopefully discovered by monitoring and not reported by users) be transparent.


It’s very tempting to gloss over errors.

However, they provide valuable learning opportunities for us and our current or future colleagues. Removing the stigma from the inevitability that things go wrong is crucial. In Black Box Thinking this is described as:

“In highly complex organizations, success can happen only when we confront our mistakes, learn from our own version of a black box, and create a climate where it’s safe to fail.”

Being analytical helps prevent or mitigate the same error from happening again. Much like black boxes in the aviation industry record incidents, we should document errors. At the very least documentation from prior incidents helps reduce the MTTR (mean time to repair) should the same error occur again.

Documentation often in the form of RCA (root cause analysis) reports should be honest, discoverable, and include: what the issue was, its impact, the technical details, how it was fixed, and actions that should follow the incident.

Closing Thoughts

Accepting the fragility of the web is a necessary step towards building resilient systems. A more reliable user experience is synonymous with happy customers. Being equipped for the worst (proactive) is better than putting out fires (reactive) from a business, customer, and developer standpoint (fewer bugs!).

Things to remember:

  • UIs should adapt to the functionality they can offer, whilst still providing value to users;
  • Always think about what can go wrong (never make assumptions);
  • Categorize errors based on their impact (not all errors are equal);
  • Preventing errors is better than responding to them (code defensively);
  • When facing an error, ask whether a recovery or fallback is available;
  • User-facing error messages should provide actionable prompts;
  • Engineers must have visibility on errors (use error monitoring services);
  • Error messages for engineers and colleagues should be meaningful and provide context;
  • Learn from errors to help our future selves and others.


The Ultimate Guide to Software Project Estimation


Software Project Estimation. Three words guaranteed to make anyone in software development shift uncomfortably in their seat. Until now.

Over the past 10 years, our team has planned hundreds of development projects. As you’d expect, we’ve gotten better and better at it! Our Software Development Estimation Tool started life as a spreadsheet, before progressing into a recently updated online tool (or mini web app). This article discusses how to use the estimation tool for software development, as well as the underlying methodology that powers it. In particular, how looking at every project through two lenses (team planning and tasks) can improve your understanding of the software project’s cost and timeline.

Estimating is, by definition, a guess about the future. The fact is that the majority of software projects aren’t delivered on time, run over budget, and end up with fewer features than originally planned. It’s safe to say that, as an industry, we’re not all that good at software project estimations, so it’s understandable to be unsure about the process.

Yet there is no getting around the need for robust and accurate software project estimation because, ultimately, clients need to be confident they can fund a project before they commit to it.

Estimating software projects poorly has many other side effects:

  • Depending on the contract in place, clients may have to pay more than they had budgeted.
  • The product itself may suffer, as corners get cut trying to deliver within unrealistic time and budget constraints.
  • The development team’s morale can be negatively affected, as they suffer from the stress and pressure of trying to meet unrealistic expectations.
  • A project may never be completed if funding runs out.

Over the past 10 years, my team and I have planned hundreds of development projects. In the process, we developed a methodology that has worked well and so, we built a free tool to guide others through the most essential parts of software estimations and project planning.

Why Are Software Estimates So Hard?

Until around 2011 or so, the majority of development teams used the Waterfall methodology to plan projects. Waterfall requires all the specifications of a software project to be defined upfront, which is very helpful to the estimation process. But now, everyone is going Agile. Products are shaped through ongoing stakeholder conversations, and what is delivered may not exactly resemble the initial concept. MVPs (Minimum Viable Products) are released quickly, feedback is obtained and improvements are made iteratively. While Agile has proved its value as a development framework, it has complicated things from a planning perspective.

So the question we wanted to answer was: “How can we accurately estimate within a framework that thrives on continuous unplanned change?”

The Scalable Method

Our solution, which we have very modestly named the Scalable Method, advocates what Buddhists might call the ‘Middle Way’. You see, we’re not convinced that the Agile/Waterfall dichotomy that pervades the industry is helpful. Things are rarely black and white in the real world. That doesn’t mean we’re rejecting either approach; quite the opposite, in fact: we’re cherry-picking the best aspects of each. Agility is absolutely a good thing: it delivers the best products for our clients. But that doesn’t mean planning has nothing to offer. A healthy sprinkling of planning helps provide more accurate estimates when needed. The Scalable Method provides the best of both worlds, and it makes everybody feel more Zen.

Who Should Do a Software Estimate?

That’s a great question. The reason many people worry about the software project estimation process is that it requires a broad set of skills, from design and development through to strategy.

The need for estimates is present throughout the entire development process, from top-level project planning to version release planning to task planning.

In an ideal world, the person doing the estimate for a project should be someone who has built software and done estimates before. Estimating a software project is itself a valuable skill. The more you do it the better you get. Because every estimate is shaped by the estimator’s personal heuristics and past experiences, the ideal person is someone who has also worked on a similar project.

Now, not every project will have someone who fits that mold. And that’s fine. We believe that with a little help (for example, from our tools and guides) anyone can take on the role of a Product Owner and create a solid set of requirements and an estimate. It will take some time, research and maybe some help, but it is achievable.

Project Estimation and Team Planning Tool

Now let’s get into the nitty-gritty. Our estimator tool has two principal functions:

  1. Identify the team needed to deliver your project.
  2. Estimate the monthly and/or total cost for your software project.

I’ve already mentioned how estimating is akin to predicting the future. The more information you consider, the better your model of the future will be. In other words, if you skip the requirements phase your estimate will have a lot of uncertainty.  When I am asked to give an estimate on a project without proper requirements, I often refer to this as a “ballpark” or a “guess” because my margin of error will be very high.  The more refined the requirements are, and the more thought and work are put into the estimate, the lower the margin of error on the estimate.

The first real step in a project is often the creation of a Product Requirements Document (or PRD). If this is the first time you’ve heard of a PRD, then I strongly suggest that you read our in-depth guide before continuing on with this. It’s by far our most popular blog post and has helped hundreds of product owners define their requirements.

For everyone else, let’s take a quick refresher course. A PRD should answer questions like:

  • What is the purpose of this project (AKA goals)?
  • Who will use your product and how (AKA audience and user personas)?
  • What features will your product have (AKA user stories and functionality requirements)?

The Project Estimator app works seamlessly with our Job Description app. So, once you’re happy with your team and estimate, we’ll help you craft a great job description and attract the right talent to your project. Finally, as both apps are plugged directly into our network of hand-picked freelancers, it’s very easy to see what talent is available for your project, in case you get curious.

The time you spend on software project estimation will depend on the specific goals and requirements of your project. Some projects will require a Waterfall-style task-level breakdown before work can start, while others will skew towards the Agile end of the spectrum and will only need to use the team planning part of the tool (not the task breakdown).

We’ve designed the app to be flexible enough to help with all types of projects, budgets, and development approaches. To best demonstrate how this works, I’ve picked some common use cases. Let’s start with a simple use case, followed by a more robust example.

Use Case 1: Staff Augmentation

Your client is midway through building an application and they’ve decided they want to augment the project team by adding a Designer (part-time) and Senior Developer (full-time). The client has set aside a budget of $15k USD/month.


The requirements stage for a staff augmentation contract is often straightforward. In this case, we’ll assume the client has already defined the skills, hours and rates they require. With that information to hand, head over to the Estimator Tool and create a new project. Click the Start an Estimate button and proceed through the wizard. You can skip the foundational tasks section – we’ll discuss this in the next example.  

Project Description

In this section, you need to input the expected duration of your staff augmentation contract. You can enter this as a whole or decimal of a month. In this example, my project is a rolling 1-month contract. This section also includes an optional project description field.

Team Details

This is where you build out your team. Click the Add Team Member button and then choose  your first role. I’ll create a Senior Developer role, but you can select whatever role is appropriate for your project. If none of the options seem to fit, just select Other and type in the name of your role.

Now input the desired hours you have agreed on with your client. In my example, this will be a full-time role, so I’ll select 8 hours. You’ll notice the app will then pre-fill the rates. These aren’t set in stone though – you can edit them – they’re more of a guideline. In the case of my Senior Developer role, the app lists a $50/hour rate.

Next, you’ll see the Create JD (job description) option. If you’re looking for a remote developer to fill the role, we’ll be happy to help you. Simply click this link, create a job description and our team will contact you to discuss the details if you’d like. If you already have someone in mind, you can skip this step. I also need a Designer for this example, so I’ll repeat the above process for that role.

Once I’m done, the Estimated Monthly Cost is calculated at the bottom. At close to $17,000, it’s slightly above the client’s budget. I have a few options, though: I can chat with the client and explain they will need to up the budget to secure the right candidates, I can try to get lower rates (by negotiating with the contractors or finding cheaper ones, though cheaper sometimes means less productive), or I can reduce the hours per day for one or both of the team members.
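The arithmetic behind that monthly figure is straightforward: hours per day × hourly rate × working days per month, summed across roles. A sketch of that calculation; the 21 working days and the designer’s rate are illustrative assumptions, not the tool’s exact formula:

```javascript
// Sum the monthly cost of each role: hoursPerDay × hourlyRate × workingDays.
function monthlyTeamCost(roles, workingDays = 21) {
  return roles.reduce(
    (total, role) => total + role.hoursPerDay * role.hourlyRate * workingDays,
    0
  );
}

// Full-time Senior Developer at $50/hour plus a part-time Designer
// (the Designer's hours and rate here are example values).
const team = [
  { role: 'Senior Developer', hoursPerDay: 8, hourlyRate: 50 },
  { role: 'Designer', hoursPerDay: 4, hourlyRate: 100 },
];
```

With these example numbers, monthlyTeamCost(team) comes to $16,800, which shows how an estimate can land close to $17,000 and slightly above a $15k budget.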

As this is a Staff Augmentation project, you don’t need to fill out the Tasks section. So that’s it, you’re done!

But what if you’re starting a new project, or want a more detailed estimate for a new initiative?

Don’t worry, we got you.

Use Case 2: Project-Based, with Defined Requirements

Let’s use the example of building a ToDo app featured in our How to Write an Effective Product Requirements Document article, which utilizes our free PRD tool. For this case, we’ll look at how to use the estimator to provide a quote on a project with a fixed end date, working from a completed PRD document that breaks down the project’s requirements and deliverables.

Before digging into how to use the tool for this example, I’d like to introduce the concept of “foundational tasks” which can be selected during the initial setup wizard when creating an estimate.

Foundational Tasks

Think of these as the common tasks associated with developing an app that aren’t specific to its business logic or routine coding and development procedures. Foundational tasks generally aren’t ongoing processes, although they may require some occasional or regular maintenance once put in place. At a minimum, most new projects will require the following to get off the ground:

  • UI Design
  • Database architecture
  • Development environment setup
  • Hosting and Deployment

Depending on the scope and objectives of the project, some other foundational tasks may be necessary or worthwhile to include as specific line items:

  • Wireframing
  • UX research and design
  • Google Analytics configuration
  • SSL certificate installation
  • Automated testing
  • QA testing rounds

For our ToDo app, we’ll be sticking to the fundamentals. We’ll want to do everything from the first list of foundational tasks, although the database architecture should be simple, so we may choose to reduce or even omit its cost. From the second list, we likely don’t need specific UX research or to do a prototype – it should be enough to develop an MVP directly. Analytics will be worth setting up, along with some level of testing before going live. You can see an example of what this looks like here.

You may not need to do some of these tasks if you have existing infrastructure and processes in place, and others may or may not add value to your project – it’s a judgement call. Even if there isn’t much work to do for a specific item, it’s usually worth including (with a respective cost estimate) if only to capture what will need to be done, and sum up all the “minor expenses” to produce a more accurate estimate.

It’s a common mistake to write off something as “easy to do” and ignore it during the planning process. However, these items can add up and lead to delays or increases in cost once the time comes to do them. Err on the side of caution – it’s better to come in under budget than over.


Our estimator tool offers two ways of calculating an estimate: by breaking down the team, and by breaking down the project tasks. A team estimate is a top-down approach. For example, “Who will I have on my team and for how long?” This is estimating from an Agile perspective. A task estimate is a bottom-up approach that comes more from the Waterfall school of planning. Working through both these methods independently means we can then compare and contrast the two estimates and, if they differ, try and work out why. It’s essentially a way to look at your estimate from two different perspectives and validate your numbers.

#1: Team Details

This is where you can list the roles, hourly commitment, and rates for the members of your team. When combined with the expected duration of your project, it provides a high-level estimate of the monthly and total project cost based on the defined staffing requirements. These positions often translate directly into job descriptions and postings when staffing a project.

For our ToDo app, we’re going to specify roles for two developers and a designer. The designer will have a lower hourly commitment than the developers, who will be doing the bulk of the work by bringing the designs to life. This gives us a quick ballpark estimate of what our monthly and total cost will be.

#2: Task Details

This section also outputs an Estimated Budget, but unlike the above Team Details section, it does this by tallying tasks, not roles. That’s because we like to think through the team and task sections separately. It can be easy to underestimate the work required to realize a project; small and routine items are easy to dismiss as requiring minimal effort. One-time tasks required to get up-and-running can also be easy to overlook. Creating a robust breakdown of the required work and associated costs can help produce an estimate that’s representative of what actually needs to be done.

This kind of breakdown can lead to a more accurate estimate than a high-level team approach, and also serves as a starting point for project planning with respect to the core tasks. Once you have this list, refer back to your team estimate and adjust the expected hours as needed. It may also reveal that an additional role is required. There is synergy between the two methodologies here.

The origin for these tasks is often a Product Requirements Document, and working from one makes this step straightforward. For our ToDo app, we’re going to add the Foundational Tasks we discussed above. Additionally, we’re going to include line items for the specific use cases, screens, and functionality of the app. This information can be moved over directly from an existing PRD by using the Create an Estimate button at the bottom of the PRD page.

We can then estimate how long it will take to complete each task and attribute that time to the team member(s) who will be working on it to compute the cost. This type of breakdown gives us a good overview of the core work that needs to be done in our project, with an associated estimate. We can then check the task-based estimate against the team-based one. If the numbers are close, that’s a good sign that we have an accurate representation of the work and staffing requirements to realize this project. 

Which Approach Is Better for Software Projects?

If you are primarily focused on hiring a team and not particularly worried about how much a particular project will cost, then you can simply do the Team Planning part.  However, if you are primarily looking at getting a particular project completed within a certain budget, we recommend doing both. At some point, you’ll need to decide on the breakdown of your team, and the major tasks that need to be done. Looking at things from both perspectives lets you check your work and evaluate your assumptions – this ultimately leads to a more accurate estimate. At the same time, it’s important to keep in mind who the estimate is going to be presented to since that can shape how you choose to lay things out. 

When curating your tasks list, be mindful of your audience. If this estimate will be delivered to a CEO (or another non-technical decision maker), I suggest you keep the number of tasks fairly low and high-level. If, on the other hand, you intend for these tasks to be viewed by a technical audience and/or converted into ‘to do’ items on a sprint, then, by all means, be as technical and granular as you want. You should also correlate each task to a role specified in the Team Details list, since this can help justify requirements such as “we need X hours of design work”, while also giving you an impression of upon whom the responsibilities of your project are going to fall.

If this all seems overwhelming to you – maybe because you don’t have much experience as a Product Owner – that’s OK, we’re here to help. Contact us and someone from our team can jump in, that’s what we’re here for!

Now good luck with creating your estimate!

Software Project Estimation FAQ

Can I Use This Tool for Software Project Estimation If I Am Working in an Agile Fashion?

Yes. If you have the relationship in place with your client to run with a monthly budget and build in an Agile way, perfect. That is actually how we work with the majority of our clients.

Should I Pad My Task Hours to Account For Unknowns?

No, you’ll notice that next to the total cost there’s a range. We’ve already built in the multipliers that should make the range fairly reasonable and account for a margin of error. You can also customize these multipliers by clicking the gear icon next to the estimate.

Can You Explain the Math to Me?

Sure. There are two main calculations going on. The monthly cost is calculated using a standard 174-hour work month. The budget range is calculated using a 0.8× multiplier for the lower bound and a 1.3× multiplier for the upper bound, because projects are more likely to end up over the estimate than below it.
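As an illustrative sketch only: the tool’s exact internals aren’t published here, so the role shape (`{ rate, commitment }`) and the sample numbers below are assumptions, but the 174-hour work month and the 0.8/1.3 range multipliers are the ones stated above.

```javascript
// Sketch of the estimate math described above. The 174-hour work month
// and the 0.8/1.3 multipliers come from the FAQ; the role shape and the
// sample rates are illustrative assumptions.
const HOURS_PER_MONTH = 174;
const LOWER = 0.8;
const UPPER = 1.3;

// roles: array of { rate: $/hour, commitment: fraction of full time }
function estimateBudget(roles, months) {
  const monthly = roles.reduce(
    (sum, { rate, commitment }) => sum + rate * commitment * HOURS_PER_MONTH,
    0
  );
  const total = monthly * months;
  return { monthly, low: total * LOWER, high: total * UPPER };
}

// Two full-time developers at $60/h and a half-time designer at $70/h,
// over a three-month project:
const { monthly, low, high } = estimateBudget(
  [
    { rate: 60, commitment: 1 },
    { rate: 60, commitment: 1 },
    { rate: 70, commitment: 0.5 },
  ],
  3
);
console.log(monthly); // 26970
```

Note how the range spread is asymmetric around the total: the upper bound adds 30% while the lower bound only subtracts 20%, reflecting the observation that overruns are more common than underruns.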

What Is the Difference Between the Team Planning and Task Parts Again?

This might be best explained with an analogy. If you need a quote to get your kitchen remodeled, you will speak with a contractor. You’re asking this individual because they have experience in this area. Before you go to the contractor, you do some research: find photos of kitchens you like, draw some sketches, and choose which floors, cabinets, fixtures, etc. you want. This is the Requirements Document for your Kitchen Project. Your contractor will use this information and, drawing on their experience, give you an estimate. They remodel kitchens for a living, so they can judge how many people-hours will be needed to complete the project. This would be like your Team Estimate.

But what if you want the contractor to really break down the elements involved in remodeling the kitchen? Maybe the estimate is close to your maximum budget, so you ask them to break their estimate down into tasks. This way you can decide whether every aspect of the build is required, and find areas where you can save money. For example, the cabinet work you chose might be expensive, so you may go with a cheaper option. This is your Task Estimate. It’s a more involved process and, as such, likely yields a more accurate estimate of the project.

We suggest you do them independently for the same reason you proofread an important email before you send it, or ask for a second opinion on something that matters. Validating an idea is important, and thinking about an idea from different perspectives is equally important.

Editor’s note: This article was originally published February 19, 2019 and was updated on June 14, 2021.

Are you looking for help with your next software project?
You’ve come to the right place, every Scalable Path developer has been carefully handpicked by our technical recruitment team. Contact us and we’ll have your team up and running in no time.


Node `timers/promises` for browser and server


Node recently introduced the timers/promises API, which provides functionality such as setTimeout and setInterval, but using Promises. Developers usually achieved that functionality with various third-party packages, but now they have full standard-library support with additional features like native cancellation.

So, I thought it would be useful to have that same API available in browsers (even down to IE11) and older Node versions!



Refactoring CSS: Strategy, Regression Testing And Maintenance (Part 2)


In Part 1, we’ve covered the side effects of low-quality CSS codebase on end-users, development teams, and management. Maintaining, extending, and working with the low-quality codebase is difficult and often requires additional time and resources. Before bringing up the refactoring proposal to the management and stakeholders, it can be useful to back up the suggestion with some hard data about the codebase health — not only to convince the management department, but also have a measurable goal for the refactoring process.

Let’s assume that management has approved the CSS refactoring project. The development team has analyzed and pinpointed the weaknesses in the CSS codebase and has set target goals for the refactor (file size, specificity, color definitions, and so on). In this article, we’ll take a deep dive into the refactoring process itself, and cover incremental refactoring strategy, visual regression testing, and maintaining the refactored codebase.

Preparation And Planning

Before starting the refactoring process, the team needs to go over the codebase issues and CSS health audit data (CSS file size, selector complexity, duplicated properties, and declarations, etc.) and discuss individual issues about how to approach them and what challenges to expect. These issues and challenges are specific to the codebase and can make the refactoring process or testing difficult. As concluded in the previous article, it’s important to establish internal rules and codebase standards and keep them documented to make sure that the team is on the same page and has a more unified and standardized approach to refactoring.

The team also needs to outline the individual refactoring tasks and set deadlines for completing the refactoring project, taking current tasks into account and making sure that the refactoring project doesn’t prevent the team from addressing urgent tasks or working on planned features. Before estimating the duration and workload of the refactoring project, the team needs to consult with management about short-term plans and adjust the estimates and workload based on the planned features and regular maintenance procedures.

Unlike regular features and bug fixes, the refactoring process yields little to no visible and measurable changes on the front end, and management cannot keep track of the progress on their own. It’s important to establish transparent communication to keep the management and other project stakeholders updated on the refactoring progress and results. Online collaborative workspace tools like Miro or MURAL can also be used for effective communication and collaboration between the team members and management, as well as a quick and simple task management tool.

Christoph Reinartz pointed out the importance of transparency and clear communication while the team at trivago was working on the CSS refactoring project.

“Communication and clearly making the progress and any upcoming issues visible to the whole company were our only weapon. We decided to build up a very simple Kanban board, established a project stand-up and a project Slack channel, and kept management and the company up-to-date via our internal social cast network.”

The most crucial element of planning the refactoring process is to keep the CSS refactoring task scope as small as possible. This makes the tasks more manageable, and easier to test and integrate.

Harry Roberts refers to these tasks as “refactoring tunnels”. For example, refactoring the entire codebase to follow the BEM methodology all at once can yield a massive improvement to the codebase and the development process. This might seem like a simple search-and-replace task at first. However, this task affects all elements on every page (high scope), and the team cannot “see the light at the end of the tunnel” right away; a lot of things can break in the process, unexpected issues can slow down the progress, and no one can tell when the task is going to be finished. The team can spend days or weeks working on it and risk hitting a wall, accumulating additional technical debt, or making the codebase even less healthy. The team ends up either giving up on the task or starting over, wasting time and resources in the process.

By contrast, improving just the navigation component CSS is a smaller scope task and is much more manageable and doable. It is also easier to test and verify. This task can be done in a few days. Even with potential issues and challenges that slow down the task, there is a high chance of success. The team can always “see the end of the tunnel” while they’re working on the task because they know the task will be completed once the component has been refactored and all issues related to the component have been fixed.

Finally, the team needs to agree on the refactoring strategy and regression testing method. This is where the refactoring process gets challenging — refactoring should be as streamlined as possible and shouldn’t introduce any regressions or bugs.

Let’s dive into one of the most effective CSS refactoring strategies and see how we can use it to improve the codebase quickly and effectively.

Incremental Refactoring Strategy

Refactoring is a challenging process that is much more complex than simply deleting the legacy code and replacing it with the refactored code. There is also the matter of integrating the refactored codebase with the legacy codebase and avoiding regressions, accidental code deletions, preventing stylesheet conflicts, etc. To avoid these issues, I would recommend using an incremental (or granular) refactoring strategy.

In my opinion, this is one of the safest, most logical, and most recommended CSS refactoring strategies I’ve come across so far. Harry Roberts outlined this strategy in 2017, and it has been my personal go-to CSS refactoring strategy since I first heard about it.

Let’s go over this strategy step by step.

Step 1: Pick A Component And Develop It In Isolation

This strategy relies on individual tasks having low scope, meaning that we should refactor the project component by component. It’s recommended to start with low-scope tasks (individual components) and then move onto higher-scoped tasks (global styles).

Depending on the project structure and CSS selectors, individual component styles consist of a combination of component (class) styles and global (wide-ranging element) styles. Both component styles and global styles can be the source of the codebase issues and might need to be refactored.

Let’s take a look at the more common CSS codebase issues which can affect a single component. Component (class) selectors might be too complex, difficult to reuse, or can have high specificity and enforce the specific markup or structure. Global (element) selectors might be greedy and leak unwanted styles into multiple components which need to be undone with high-specificity component selectors.

After choosing a component to refactor (a lower-scoped task), we need to develop it in an isolated environment away from the legacy code, its weaknesses, and conflicting selectors. This is also a good opportunity to improve the HTML markup, remove unnecessary nestings, use better CSS class names, use ARIA attributes, etc.

You don’t have to go out of your way to set up a whole build system for this; you can even use CodePen to rebuild the individual components. To avoid conflicts with the legacy class names and to separate the refactored code from the legacy code more clearly, we’ll use an rf- prefix on CSS class name selectors.

Step 2: Merge With The Legacy Codebase And Fix Bugs

Once we’ve finished rebuilding the component in an isolated environment, we need to replace the legacy HTML markup with refactored markup (new HTML structure, class names, attributes, etc.) and add the refactored component CSS alongside the legacy CSS.

We don’t want to act too hastily and remove legacy styles right away. By making too many changes simultaneously, we’ll lose track of the issues that might happen due to the conflicting codebases and multiple changes. For now, let’s replace the markup and add refactored CSS to the existing codebase and see what happens. Keep in mind that refactored CSS should have the .rf- prefix in their class names to prevent conflicts with the legacy codebase.

Legacy component CSS and global styles can cause unexpected side-effects and leak unwanted styles into the refactored component. The refactored codebase might be missing the faulty CSS which was required to undo these side-effects. Because those styles have a wider reach and possibly affect other components, we cannot simply edit the problematic code directly. We need a different approach to tackle these conflicts.

We need to create a separate CSS file, which we can name overrides.css or defend.css, that will contain hacky, high-specificity code to combat the unwanted styles leaking from the legacy codebase.

These high-specificity selectors make sure that the refactored codebase works alongside the legacy codebase. This is only a temporary file and it will be removed once the legacy code is deleted. For now, add the high-specificity style overrides to unset the styles applied by legacy styles and test if everything is working as expected.

If you notice any issues, check if the refactored component is missing any styles by going back to the isolated environment or if any other styles are leaking into the component and need to be overridden. If the component looks and works as expected after adding these overrides, remove the legacy code for the refactored component and check if any issues happen. Remove related hacky code from overrides.css and test again.

Depending on the case, you probably won’t be able to remove every override right away. For example, the issue may lie within a global element selector which leaks styles into other components that also need to be refactored. In those cases, we won’t risk expanding the scope of the task and the pull request, but rather wait until all components have been refactored and tackle the high-scope tasks after we’ve removed the same style dependency from all other components.

In a way, you can treat the overrides.css file as your makeshift TODO list for refactoring greedy and wide-reaching element selectors. You should also consider updating the task board to include the newly uncovered issues. Make sure to add useful comments in the overrides.css file so other team members are on the same page and instantly know why the override has been applied and in response to which selector.

/* overrides.css */

/* Resets dimensions enforced by ".sidebar > div" in "sidebar.css" */
.sidebar > .card {
  min-width: 0;
}

/* Resets font size enforced by ".hero-container" in "hero.css" */
.card {
  font-size: 18px;
}
Step 3: Test, Merge And Repeat

Once the refactored component has been successfully integrated with the legacy codebase, create a Pull Request and run an automated visual regression test to catch any issues that might have gone unnoticed and fix them before merging them into one of the main git branches. Visual regression testing can be treated as the last line of defense before merging the individual pull requests. We’ll cover visual regression testing in more detail in one of the upcoming sections of this article.

Now rinse and repeat these three steps until the codebase has been refactored and overrides.css is empty and can be safely removed.

Step 4: Moving From Components To Global Styles

Let’s assume that we have refactored all individual low-scoped components and all that is left in the overrides.css file are overrides related to global, wide-reaching element selectors. This is a very realistic case, speaking from experience, as many CSS issues are caused by wide-reaching element selectors leaking styles into multiple components.

By tackling the individual components first and shielding them from the global CSS side-effects using the overrides.css file, we’ve made these higher-scoped tasks much more manageable and less risky to do. We can now move on to refactoring global CSS styles more safely than before, removing duplicated styles from the individual components and replacing them with general element styles and utilities — buttons, links, images, containers, inputs, grids, and so on. By doing so, we’re going to incrementally remove code from our makeshift TODO overrides.css file and the duplicated code repeated in multiple components.

Let’s apply the same three steps of the incremental refactoring strategy, starting by developing and testing the styles in isolation.

Next, we need to add the refactored global styles to the codebase. We might encounter the same issues when merging the two codebases and we can add the necessary overrides in the overrides.css. However, this time, we can expect that as we are gradually removing legacy styles, we will also be able to gradually remove overrides that we’ve introduced to combat those unwanted side-effects.

The downside of developing components in isolation can be found in element styles that are shared between multiple components — style guide elements like buttons, inputs, headings, and so on. When developing these in isolation from the legacy codebase, we don’t have access to the legacy style guide. Additionally, we don’t want to create those dependencies between the legacy codebase and refactored codebase.

That is why it’s easier to remove the duplicated code blocks and move these styles into separate, more general style guide components and selectors later on. This allows us to address these changes right at the end and with lower risk, as we are working with a much healthier and more consistent codebase instead of the messy, inconsistent, and buggy legacy codebase. Of course, unintended side-effects and bugs can still happen, and these should be caught with the visual regression testing tools which we’ll cover in one of the following sections of the article.

When the codebase has been completely refactored and we’ve removed all makeshift TODO items from the overrides.css file, we can safely remove it and we are left with the refactored and improved CSS codebase.

Incremental CSS Refactoring Example

Let’s use the incremental refactoring strategy to refactor a simple page that consists of a title element and two card components in a grid component. Each card element consists of an image, title, subtitle, description, and a button and is placed in a 2-column grid with horizontal and vertical spacing.

Depending on the project, testing tools don’t need to be complex or sophisticated to be effective. While working on refactoring the Sundance Institute’s CSS codebase, the development team used a simple static style guide page generated by Jekyll to test the refactored components.

“One unintended consequence of executing the refactor in abstraction on a Jekyll instance was that we could now publish it to Github pages as a living style guide. This has become an invaluable resource for our dev team and for external vendors to reference.”

Once the CSS refactor tasks have been completed and the refactored code is ready for production, the team can also consider doing an A/B test to check the effect of the refactored codebase on users. For example, if the goal of the refactoring process was to reduce the overall CSS file size, the A/B test can potentially yield significant improvements for mobile users, and these results can also be beneficial to project stakeholders and management. That’s exactly how the team at Trivago approached the deployment of their large-scale refactoring project.

“(…) we were able to release the technical migration as an A/B Test. We tested the migration for one week, with positive results on mobile devices where mobile-first paid out and accepted the migration after only four weeks.”
Keeping Track Of Refactoring Progress

A Kanban board, GitHub issues, a GitHub project board, and standard project management tools can do a great job of keeping track of the refactoring progress. However, depending on the tools and how the project is organized, it may be difficult to estimate progress on a per-page basis or to quickly check which components still need to be refactored.

This is where our .rf-prefixed CSS classes come in. Harry Roberts has talked about the benefits of using the prefix in detail. The bottom line is — not only do these classes allow developers to clearly separate the refactored CSS codebase from the legacy codebase, but also to quickly show the progress to the project managers and other project stakeholders on a per-page basis.

For example, management may decide to test the effects of the refactored codebase early by deploying only the refactored homepage code and they would want to know when the homepage components will be refactored and ready for A/B testing.

Instead of wasting some time comparing the homepage components with the available tasks on the Kanban board, developers can just temporarily add the following styles to highlight the refactored components which have the rf- prefix in their class names, and the components that need to be refactored. That way, they can reorganize the tasks and prioritize refactoring homepage components.

/* Highlights all refactored components */
[class*="rf-"] {
  outline: 5px solid green;
}

/* Highlights all components that haven't been refactored */
body *:not([class]) {
  outline: 5px solid red;
}

Maintaining The Refactored Codebase

After the refactoring project has been completed, the team needs to make sure to maintain the codebase health for the foreseeable future — new features will be developed, some new features might even be rushed and produce technical debt, various bugfixes will also be developed, etc. All in all, the development team needs to make sure that the codebase health remains stable as long as they’re in charge of the codebase.

Technical debt which can result in potentially faulty CSS code should be isolated, documented, and implemented in a separate CSS file, often named shame.css.

It’s important to document the rules and best practices that were established and applied during the refactoring projects. Having those rules in writing allows for standardized code reviews, faster project onboarding for new team members, easier project handoff, etc.

Some of the rules and best practices can also be enforced and documented with automated code-checking tools like stylelint. Andrey Sitnik, the author of widely-used CSS development tools like PostCSS and Autoprefixer, has noted how automatic linting tools can make code reviews and onboarding easier and less stressful.

“However, automatic linting is not the only reason to adopt Stylelint in your project. It can be extremely helpful for onboarding new developers on the team: a lot of time (and nerves!) are wasted on code reviews until junior developers are fully aware of accepted code standards and best practices. Stylelint can make this process much less stressful for everyone.”
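As an illustration only — the specific rules and thresholds below are assumptions chosen for this sketch, not recommendations from the article — a minimal .stylelintrc.json could encode some of the agreed codebase standards so they’re enforced automatically on every commit and pull request:

```json
{
  "extends": "stylelint-config-standard",
  "rules": {
    "selector-max-specificity": "0,3,0",
    "selector-max-compound-selectors": 3,
    "max-nesting-depth": 2,
    "declaration-no-important": true
  }
}
```

Caps like these on specificity and nesting depth address exactly the kinds of issues (greedy, high-specificity selectors) that made the refactor necessary in the first place.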

Additionally, the team can create a Pull Request template and include the checklist of standards and best practices and a link to the project code rules document so that the developers making the Pull Request can check the code themselves and make sure that it follows the agreed standards and best practices.


The incremental refactoring strategy is one of the safest and most recommended approaches when it comes to refactoring CSS. The development team needs to refactor the codebase component by component to ensure that the tasks have a low scope and are manageable. Individual components need to be then developed in isolation — away from the faulty code — and then merged with the existing codebase. The issues that may come up from the conflicting codebases can be solved by adding a temporary CSS file that contains all the necessary overrides to remove the conflicts in CSS styles. After that, legacy code for the target component can be removed and the process continues until all components have been refactored and until the temporary CSS file which contains the overrides is empty.

Visual regression testing tools like Percy and Chromatic can be used for testing and to detect any regressions and unwanted changes on the Pull Request level, so developers can fix these issues before the refactored code is deployed to the live site.

Developers can use A/B testing and monitoring tools to make sure that the refactoring doesn’t negatively affect performance and user experience before finally launching the refactored project on a live site. Developers will also need to ensure that the agreed standards and best practices continue to be followed on the project, to maintain the codebase health and quality in the future.

