Why Anticipatory Design Isn’t Working For Businesses

Consider the early days of the internet, when websites like NBC News and Amazon cluttered their pages with flashing banners and labyrinthine menus. In the early 2000s, Steve Krug’s book Don’t Make Me Think arrived like a lighthouse in a storm, advocating for simplicity and user-centric design.

Today’s digital world is flooded with choices, information, and data, which is both exciting and overwhelming. Unlike in Krug’s time, the problem isn’t interaction complexity but opacity: AI-powered solutions often lack transparency and explainability, raising concerns about user trust and accountability.

The era of click-and-command is fading, giving way to a more seamless and intelligent relationship between humans and machines.

Expanding on Krug’s Call for Clarity: The Pillars of Anticipatory Design

Krug’s emphasis on clarity in design is more relevant than ever. In anticipatory design, clarity is not just about simplicity or ease of use — it’s about transparency and accountability. These two pillars are crucial but often missing as businesses navigate this new paradigm. Users today find themselves in a digital landscape that is not only confusing but increasingly intrusive. AI predicts their desires based on past behavior but rarely explains how these predictions are made, leading to growing mistrust.

Transparency is the foundation of clarity. It involves openly communicating how AI-driven decisions are made, what data is being collected, and how it is being used to anticipate needs. By demystifying these processes, designers can alleviate user concerns about privacy and control, thereby building trust.

Accountability complements transparency by ensuring that anticipatory systems are designed with ethical considerations in mind. This means creating mechanisms for users to understand, question, and override automated decisions if needed. When users feel that the system is accountable to them, their trust in the technology — and the brand — deepens.

What Makes a Service Anticipatory?

Imagine AI as a waiter at a restaurant. Without AI, they wait for you to interact with them and place your order. But with anticipatory design powered by AI and ML, the waiter can analyze your past orders (historical data) and current behavior (contextual data), perhaps noticing that you always start with a glass of sparkling water.
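
As a rough sketch of that idea (hypothetical names throughout, not any particular product’s API), an anticipatory suggestion combines exactly those two signals: what the guest has ordered before and what is happening right now.

// Illustrative only: blend historical and contextual data into one suggestion.
interface PastOrder {
    item: string;
}

interface Context {
    justSeated: boolean;
}

function anticipate(history: PastOrder[], context: Context): string | null {
    // Historical signal: how often has this guest started with sparkling water?
    const waterShare =
        history.filter(o => o.item === "sparkling water").length /
        Math.max(history.length, 1);

    // Contextual signal: they have just sat down, so a starter is plausible.
    if (context.justSeated && waterShare > 0.7) {
        return "Offer a glass of sparkling water";
    }

    // No confident prediction: fall back to waiting for the guest to order.
    return null;
}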

This proactive approach has evolved since the late 1990s, with early examples like Amazon’s recommendation engine and TiVo’s predictive recording. These pioneering services demonstrated the potential of predictive analytics and ML to create personalized, seamless user experiences.

Amazon’s Recommendation Engine (Late 1990s)

Amazon was a pioneer in using data to predict and suggest products to customers, setting the standard for personalized experiences in e-commerce.

TiVo (1999)

TiVo’s ability to learn users’ viewing habits and automatically record shows marked an early step toward predictive, personalized entertainment.

Netflix’s Recommendation System (2006)

Netflix began offering personalized movie recommendations based on user ratings and viewing history in 2006. It helped popularize the idea of anticipatory design in the digital entertainment space.

How Businesses Can Achieve Anticipatory Design

Designing for anticipation is designing for a future that is not here yet but has already started moving toward us.

Designing for anticipation involves more than reacting to current trends; it requires businesses to plan strategically for future user needs. Two critical concepts in this process are forecasting and backcasting.

  • Forecasting analyzes past trends and data to predict future outcomes, helping businesses anticipate user needs.
  • Backcasting starts with a desired future outcome and works backward to identify the steps needed to achieve that goal.

Think of it like planning a dream vacation. Forecasting would involve looking at your past trips to guess where you might go next. But backcasting lets you pick your ideal destination first, then plan the perfect itinerary to get you there.
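
A minimal sketch of that difference in code, with the vacation example and all names purely illustrative:

// Forecasting: project forward from past data.
function forecast(pastTrips: string[]): string | undefined {
    const counts = new Map<string, number>();
    for (const trip of pastTrips) {
        counts.set(trip, (counts.get(trip) ?? 0) + 1);
    }
    // Naive heuristic: the most frequent past destination is the best guess.
    return [...counts.entries()].sort((a, b) => b[1] - a[1])[0]?.[0];
}

// Backcasting: fix the destination first, then derive the steps to get there.
function backcast(destination: string): string[] {
    return [
        "Set a savings target for the trip",
        "Block out vacation dates",
        "Book flights and lodging",
        `Arrive in ${destination}`,
    ];
}

forecast(["Lisbon", "Lisbon", "Kyoto"]); // "Lisbon", projected from history
backcast("Kyoto");                       // a plan derived from the chosen goal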

Forecasting: A Core Concept for Future-Oriented Design

This method helps in planning and decision-making based on probable future scenarios. Consider Netflix, which uses forecasting to analyze viewers’ past viewing habits and predict what they might want to watch next. By leveraging data from millions of users, Netflix can anticipate individual preferences and serve personalized recommendations that keep users engaged and satisfied.

Backcasting: Planning From the Desired Future

Backcasting takes a different approach. Instead of using data to predict the future, it starts with defining a desired future outcome — a clear user intent. The process then works backward to identify the steps needed to achieve that goal. This goal-oriented approach crafts an experience that actively guides users toward their desired future state.

For instance, a financial planning app might start with a user’s long-term financial goal, such as saving for retirement, and then design an experience that guides the user through each step necessary to reach that goal, from budgeting tips to investment recommendations.

Integrating Forecasting and Backcasting In Anticipatory Design

The true power of anticipatory design emerges when businesses efficiently integrate both forecasting and backcasting into their design processes.

For example, Tesla’s approach to electric vehicles exemplifies this integration. By forecasting market trends and user preferences, Tesla can introduce features that appeal to users today. Simultaneously, by backcasting from a vision of a sustainable future, Tesla designs its vehicles and infrastructure to guide society toward a world where electric cars are the norm and carbon emissions are significantly reduced.

Over-Promising and Under-Delivering: The Pitfalls of Anticipatory Design

As businesses increasingly adopt anticipatory design, the integration of forecasting and backcasting becomes essential. Forecasting allows businesses to predict and respond to immediate user needs, while backcasting ensures these responses align with long-term goals. Despite its potential, anticipatory design often fails in execution, leaving few examples of success.

Over the past decade, I’ve observed and documented the rise and fall of several ambitious anticipatory design ventures. Among them, three — Digit, LifeBEAM Vi Sense Headphones, and Mint — highlight the challenges of this approach.

Digit: Struggling with Contextual Understanding

Digit aimed to simplify personal finance with algorithms that automatically saved money based on user spending. However, the service often missed the mark, lacking the contextual awareness necessary to accurately assess users’ real-time financial situations. This led to unexpected withdrawals, frustrating users, especially those living paycheck to paycheck. The result was a breakdown in trust, with the service feeling more intrusive than supportive.

LifeBEAM Vi Sense Headphones: Complexity and User Experience Challenges

LifeBEAM Vi Sense Headphones was marketed as an AI-driven fitness coach, promising personalized guidance during workouts. In practice, the AI struggled to deliver tailored coaching, offering generic and unresponsive advice. As a result, users found the experience difficult to navigate, ultimately limiting the product’s appeal and effectiveness. This disconnection between the promised personalized experience and the actual user experience left many disappointed.

Mint: Misalignment with User Goals

Mint aimed to empower users to manage their finances by providing automated budgeting tools and financial advice. While the service had the potential to anticipate user needs, users often found that the suggestions were not tailored to their unique financial situations, resulting in generic advice that did not align with their personal goals.

The lack of personalized, actionable steps led to a mismatch between user expectations and service delivery. This misalignment caused some users to disengage, feeling that Mint was not fully attuned to their unique financial journeys.

The Risks of Over-promising and Under-Delivering

The stories of Digit, LifeBEAM Vi Sense, and Mint underscore a common pitfall: over-promising and under-delivering. These services focused too much on predictive power and not enough on user experience. When anticipatory systems fail to consider individual nuances, they breed frustration rather than satisfaction, highlighting the importance of aligning design with human experience.

Digit’s approach to automated savings, for instance, became problematic when users found its decisions opaque and unpredictable. Similarly, LifeBEAM’s Vi Sense headphones struggled to meet diverse user needs, while Mint’s rigid tools failed to offer the personalized insights users expected. These examples illustrate the delicate balance anticipatory design must strike between proactive assistance and user control.

Failure to Evolve with User Needs

Many anticipatory services rely heavily on data-driven forecasting, but predictions can fall short without understanding the broader user context. Mint initially provided value with basic budgeting tools but failed to evolve with users’ growing needs for more sophisticated financial advice. Digit, too, struggled to adapt to different financial habits, leading to dissatisfaction and limited success.

Complexity and Usability Issues

Balancing the complexity of predictive systems with usability and transparency is a key challenge in anticipatory design.

When systems become overly complex, as seen with LifeBEAM Vi Sense headphones, users may find them difficult to navigate or control, compromising trust and engagement. Mint’s generic recommendations, born from a failure to align immediate user needs with long-term goals, further illustrate the risks of complexity without clarity.

Privacy and Trust Issues

Trust is critical in anticipatory design, particularly in services handling sensitive data like finance or health. Digit and Mint both encountered trust issues as users grew skeptical of how decisions were made and whether these services truly had their best interests in mind. Without clear communication and control, even the most sophisticated systems risk alienating users.

Inadequate Handling of Edge Cases and Unpredictable Scenarios

While forecasting and backcasting work well for common scenarios, they can struggle with edge cases or unpredictable user behaviors. If an anticipatory service can’t handle these effectively, it risks providing a poor user experience and, in the worst-case scenario, harming the user. Anticipatory systems must be prepared to handle edge cases and unpredictable scenarios.

LifeBEAM Vi Sense headphones struggled when users deviated from expected fitness routines, offering a one-size-fits-all experience that failed to adapt to individual needs. This highlights the importance of allowing users control, even when a system proactively assists them.

Designing for Anticipatory Experiences

Anticipatory design should empower users to achieve their goals, not just automate tasks.

We can follow a layered approach to plan a service that evolves with user actions and explicit, ever-evolving intent.

But how do we design for intent without misaligning anticipation and user control or mismatching user expectations and service delivery?

At the core of this approach is intent — the primary purpose or goal that the design must achieve. Surrounding this are workflows, which represent the structured tasks to achieve the intent. Finally, algorithms analyze user data and optimize these workflows.
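
One way to make this layering concrete is to sketch it as types; the names below are illustrative rather than drawn from any specific product:

// Innermost layer: the intent the design must serve.
interface Intent {
    goal: string; // e.g. "improve overall well-being"
}

// Middle layer: structured workflows that move the user toward that intent.
interface Workflow {
    name: string; // e.g. "personalized sleep routine"
    steps: string[];
    serves: Intent;
}

// Outermost layer: algorithms that analyze user data and optimize the workflows.
interface AnticipationEngine {
    analyze(userData: unknown): Workflow[]; // pick and tune workflows from data
    explain(workflow: Workflow): string;    // transparency: say why it was suggested
    override(workflow: Workflow): void;     // accountability: let the user opt out
}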

For instance, Thrive, a digital wellness platform, aligns algorithms and workflows with the core intent of improving well-being. By anticipating user needs and offering personalized programs, Thrive helps users achieve sustained behavior change.

It perfectly exemplifies the three-layered concentric representation for achieving behavior change through anticipatory design:

1. Innermost layer: Intent

Improve overall well-being: Thrive’s core intent is to help users achieve a healthier and more fulfilling life. This encompasses aspects like managing stress, improving sleep quality, and boosting energy levels.

2. Middle layer: Workflows

Personalized programs and support: Thrive uses user data (sleep patterns, activity levels, mood) to create programs tailored to their specific needs and goals. These programs involve various workflows, such as:

  • Guided meditations and breathing exercises to manage stress and anxiety.
  • Personalized sleep routines aimed at improving sleep quality.
  • Educational content and coaching tips to promote healthy habits and lifestyle changes.

3. Outermost layer: Algorithms

Data analysis and personalized recommendations: Thrive utilizes algorithms to analyze user data and generate actionable insights. These algorithms perform tasks like the following:

  • Identify patterns in sleep, activity, and mood to understand user challenges.
  • Predict user behavior to recommend interventions that address potential issues.
  • Optimize program recommendations based on user progress and data analysis.

By aligning algorithms and workflows with the core intent of improving well-being, Thrive provides a personalized and proactive approach to behavior change. Here’s how it benefits users:

  • Sustained behavior change: Personalized programs and ongoing support empower users to develop healthy habits for the long term.
  • Data-driven insights: User data analysis helps users gain valuable insights into their well-being and identify areas for improvement.
  • Proactive support: Anticipates potential issues and recommends interventions before problems arise.

The Future of Anticipatory Design: Combining Anticipation with Foresight

Anticipatory design is inherently future-oriented, making it both appealing and challenging. To succeed, businesses must combine anticipation — predicting future needs — with foresight, a systematic approach to analyzing and preparing for future changes.

Foresight involves considering alternative future scenarios and making informed decisions to navigate toward desired outcomes. For example, Digit and Mint struggled because they didn’t adequately handle edge cases or unpredictable scenarios, a failure in their foresight strategy.

As mentioned, while forecasting and backcasting work well for common scenarios, they can struggle with edge cases or unpredictable user behaviors. In anticipatory design, if foresight is relegated to an afterthought, the business will fail to account and prepare for emerging trends and disruptive changes. Strategic foresight helps companies prepare for the future and develop strategies to address possible challenges and opportunities.

The foresight process generally involves interrelated activities, including data research, trend analysis, scenario planning, and impact assessment. The ultimate goal is to gain a broader and deeper understanding of the future in order to make more informed and strategic decisions in the design process and to foresee possible frictions and pitfalls in the user experience.

Actionable Insights for Designers

  • Enhance contextual awareness
    Work with data scientists and engineers to ensure that anticipatory systems can understand and respond to the full context of user needs, not just historical data. Plan for pitfalls so you can design safety measures that keep the user in control of the system.
  • Maintain user control
    Provide users with options to customize or override automated decisions, ensuring they feel in control of their experiences.
  • Align short-term predictions with long-term goals
    Use forecasting and backcasting to create a balanced approach that meets immediate needs while guiding users toward their long-term objectives.

Proposing an Anticipatory Design Framework

Predicting the future is no easy task. However, design can borrow foresight techniques to imagine, anticipate, and shape a future where technology seamlessly integrates with users’ evolving needs. To effectively implement anticipatory design, it’s essential to balance human control with AI automation. Here’s a 3-step approach to integrate future thinking into your workflow:

  1. Anticipate Directions of Change
    Identify major trends shaping the future.
  2. Imagine Alternative Scenarios
    Explore potential futures to guide impactful design decisions.
  3. Shape Our Choices
    Leverage these scenarios to align design with user needs and long-term goals.

This proposed framework aims to integrate forecasting and backcasting while emphasizing user intent, transparency, and continuous improvement, ensuring that businesses create experiences that are both predictive and deeply aligned with user needs.

Step 1: Anticipate Directions of Change

Objective: Identify the major trends and forces shaping the future landscape.

Components:

1. Understand the User’s Intent

  • User Research: Conduct in-depth user research through interviews, surveys, and observations to uncover user goals, motivations, pain points, and long-term aspirations or Jobs-to-be-Done (JTBD). This foundational step helps clearly define the user’s intent.
  • Persona Development: Develop detailed user personas that represent the target audience, including their long-term goals and desired outcomes. Prioritize understanding how the service can adapt in real-time to changing user needs, offering recommendations, or taking actions aligned with the persona’s current context.

2. Forecasting: Predicting Near-Term User Needs

  • Data Collection and Analysis: Collaborate closely with data scientists and data engineers to analyze historical data (past interactions), user behavior, and external factors. This collaboration ensures that predictive analytics enhance overall user experience, allowing designers to better understand the implications of data on user behaviors.
  • Predictive Modeling: Implement continuous learning algorithms that refine predictions over time. Regularly assess how these models evolve, adapting to users’ changing needs and circumstances.
  • Explore the Delphi Method: This is a structured communication technique that gathers expert opinions to reach a consensus on future developments. It’s particularly useful for exploring complex issues with uncertain outcomes. Use the Delphi Method to gather insights from industry experts, user researchers, and stakeholders about future user needs and the best strategies to meet those needs. The consensus achieved can help in clearly defining the long-term goals and desired outcomes.

Activities:

  • Conduct interviews and workshops with experts using the Delphi Method to validate key trends.
  • Analyze data and trends to forecast future directions.

Step 2: Imagine Alternative Scenarios

Objective: Explore a range of potential futures based on these changing directions.

Components:

1. Scenario Planning

  • Scenario Development: Create detailed, plausible future scenarios based on various external factors, such as technological advancements, social trends, and economic changes. Develop multiple future scenarios that represent different possible user contexts and their impact on user needs.
  • Scenario Analysis: From these scenarios, you can outline the long-term goals that users might have in each scenario and design services that anticipate and address these needs. Assess how these scenarios impact user needs and experiences.

2. Backcasting: Designing from the Desired Future

  • Define Desired Outcomes: Clearly outline the long-term goals or future states that users aim to achieve. Use backcasting to reduce cognitive load by designing a service that anticipates future needs, streamlining user interactions, and minimizing decision-making efforts.
    • Use Visioning Planning: This is a creative process that involves imagining the ideal future state you want to achieve. It helps in setting clear, long-term goals by focusing on the desired outcomes rather than current constraints. Facilitate workshops or brainstorming sessions with stakeholders to co-create a vision of the future. Define what success looks like from the user’s perspective and use this vision to guide the backcasting process.
  • Identify Steps to Reach Goals: Reverse-engineer the user journey by starting from the desired future state and working backward. Identify the necessary steps and milestones and ensure these are communicated transparently to users, allowing them control over their experience.
  • Create Roadmaps: Develop detailed roadmaps that outline the sequence of actions needed to transition from the current state to the desired future state. These roadmaps should anticipate obstacles, respect privacy, and avoid manipulative behaviors, empowering users rather than overwhelming them.

Activities:

  • Develop and analyze alternative scenarios to explore various potential futures.
  • Use backcasting to create actionable roadmaps from these scenarios, ensuring they align with long-term goals.

Step 3: Shape Our Choices

Objective: Leverage these scenarios to spark new ideas and guide impactful design decisions.

Components:

1. Integrate into the Human-Centered Design Process

  • Iterative Design with Forecasting and Backcasting: Embed insights from forecasting and backcasting into every stage of the design process. Use these insights to inform user research, prototype development, and usability testing, ensuring that solutions address both predicted future needs and desired outcomes. Continuously refine designs based on user feedback.
  • Agile Methodologies: Adopt agile development practices to remain flexible and responsive. Ensure that the service continuously learns from user interactions and feedback, refining its predictions and improving its ability to anticipate needs.

2. Implement and Monitor: Ensuring Ongoing Relevance

  • User Feedback Loops: Establish continuous feedback mechanisms to refine predictive models and workflows. Use this feedback to adjust forecasts and backcasted plans as necessary, keeping the design aligned with evolving user expectations.
  • Automation Tools: Collaborate with data scientists and engineers to deploy automation tools that execute workflows and monitor progress toward goals. These tools should adapt based on new data, evolving alongside user behavior and emerging trends.
  • Performance Metrics: Define key performance indicators (KPIs) to measure the effectiveness, accuracy, and quality of the anticipatory experience. Regularly review these metrics to ensure that the system remains aligned with intended outcomes.
  • Continuous Improvement: Maintain a cycle of continuous improvement where the system learns from each interaction, refining its predictions and recommendations over time to stay relevant and useful.
    • Use Trend Analysis: This involves identifying and analyzing patterns in data over time to predict future developments. This method helps you understand the direction in which user behaviors, technologies, and market conditions are heading. Use trend analysis to identify emerging trends that could influence user needs in the future. This will inform the desired outcomes by highlighting what users might require or expect from a service as these trends evolve.

Activities:

  • Implement design solutions based on scenario insights and iterate based on user feedback.
  • Regularly review and adjust designs using performance metrics and continuous improvement practices.

Conclusion: Navigating the Future of Anticipatory Design

Anticipatory design holds immense potential to revolutionize user experiences by predicting and fulfilling needs before they are even articulated. However, as seen in the examples discussed, the gap between expectation and execution can lead to user dissatisfaction and erode trust.

To navigate the future of anticipatory design successfully, businesses must prioritize transparency, accountability, and user empowerment. By enhancing contextual awareness, maintaining user control, and aligning short-term predictions with long-term goals, companies can create experiences that are not only innovative but also deeply resonant with their users’ needs.

Moreover, combining anticipation with foresight allows businesses to prepare for a range of future scenarios, ensuring that their designs remain relevant and effective even as circumstances change. The proposed 3-step framework — anticipating directions of change, imagining alternative scenarios, and shaping our choices — provides a practical roadmap for integrating these principles into the design process.

As we move forward, the challenge will be to balance the power of AI with the human need for clarity, control, and trust. By doing so, businesses can fulfill the promise of anticipatory design, creating products and services that are not only efficient and personalized but also ethical and user-centric.

In the end,

The success of anticipatory design will depend on its ability to enhance, rather than replace, the human experience.

It is a tool to empower users, not to dictate their choices. When done right, anticipatory design can lead to a future where technology seamlessly integrates with our lives, making everyday experiences simpler, more intuitive, and ultimately more satisfying.



In the Trenches: What it Means to be an Engineering Manager

This post is a collection of personal reflections from my time working as an EM and similar roles. It’s not meant to be a comprehensive guide but rather a snapshot of the lessons I’ve learned and the challenges I’ve faced. For those looking to transition into leadership, or even for engineers wanting to understand management better, I hope this offers some insight into the realities of the role.

Throughout my career, I’ve worked with some of the most capable engineers, and while it’s been challenging, it’s also been the most mentally engaging work I’ve done. A well-functioning team can achieve incredible results, and managing one is tough but inspiring work.

People

Engineers and EMs often detach the technical from the personal, focusing on KPIs, metrics, and OKRs as proxies for success. While these are useful, it’s easy to overlook the human element that drives them. As an EM, your role isn’t just about managing the technical—it’s about understanding the people behind the work.

Technology impacts people, and as leaders, we can’t ignore that. Whether it’s debates like the shift from master to main in git or deeper discussions around inclusivity in language (the master/slave rename in redis), these things can shape team culture. I’ve seen firsthand how thoughtfully handling these conversations can strengthen cohesion and foster a culture of respect and alignment with the broader mission.

Your role is to enable your engineers to do their best work, often through personal involvement rather than technical input. You’re not expected to be a life coach or a close friend, but you do need to listen carefully. Understand your team’s unique and diverse personalities, and gradually align their goals with the organization’s mission.

Team

Building a strong team isn’t just about hiring for skill; it’s about recognizing that humans are primarily social thinkers, not first-principles thinkers. We’re shaped by the people around us and the culture we operate in. This truth should influence how you hire. When we talk about “cultural fit,” it’s not about hiring people who think exactly like you; that likely leads to stagnation. Instead, it’s about hiring people who share traits that matter to your team: honesty, hard work, respect, and healthy competition.

I’ve found that the biggest pitfall for engineering managers is hiring for comfort, bringing in folks who mirror their own thinking or who won’t challenge them. This is a dangerous trap. You should be hiring engineers who are better than you, especially in areas where you’re weak. That sounds obvious, but the reality is it can trigger a sense of insecurity or imposter syndrome.

Overcoming that is a personal journey. For me, it’s about understanding that I bring a lot to the table both managerially and technically. I remind myself that my role isn’t to be the most skilled person in the room but to create an environment where the most skilled people can thrive. That mindset shift helped me not only embrace hiring top-tier talent but also pushed me to keep learning every day. Your value as a leader is in recognizing and nurturing that talent, not competing with it.

Technology

How you approach technology in your team is just as important as hiring the right people. It is also the area that requires the strongest business acumen. Our objective is to enable the organisation to achieve its mission, and selecting the right technology is a key factor in guaranteeing that outcome. I use the term “technology” very loosely here: think infrastructure providers, service providers (queues, databases, integrations, etc.), tech stack, and so on. I’ll try to break things down along with some concrete case studies.

Language Stack

The larger your language stack, the larger the risk surface area. The less interoperable your engineers are, the more difficult things are to maintain. You should understand your business and select the language that closely matches the business requirements.

  • Are you building a soft realtime system with fault tolerance requirements? You’re likely going to want to use Erlang/Elixir (or rely on the BEAM).
  • Working on next-generation AI/ML? Likely want Python/Mojo.
  • Working on systems security? Likely want Rust.

That said, this will never be a perfect overlap. There are situations where your stack of choice will not enable you to achieve your goals. For example, say you are once again an AI/ML startup, you’re performing training tasks, and you currently use Python. You may want to distribute these tasks in a way that maximises CPU/GPU scheduling, and lo and behold, the GIL completely gets in the way. In such a scenario, you should put some effort into circumventing the problem, even with workarounds to begin with. If the problem does not go away, and throwing (an acceptable amount of) money at it doesn’t mitigate it, then you can start thinking about a paradigm shift, such as introducing another language better suited for this work (e.g., Rust).

KIBS (Keep it Boring Stupid)

When building out your infrastructure or products, we have a tendency to reach for the newest item on the shelf. This isn’t always wrong, but it’s something to be careful of, especially for core systems or products. You should ideally select technology that has been tried and tested in the past. Ignore that fancy vector database and use postgres with pgvector. Ignore that fancy nosql database and use postgres with jsonb. Ignore that fancy time-series database and use postgres with arrays. You’re likely catching my drift here. Now, that’s not to say we should not explore what’s out there. When it comes to R&D or greenfield projects, a good amount of time should be spent seeing what the latest available technology is and understanding if it can give us some competitive edge, while being fully aware of the risks when adopting it.

Leaders Eat Last

As an EM, you occupy a unique position in the organisation. You’re not quite executive level, nor are you purely operational. Instead, you’re the connective tissue, the link between strategy and execution. At its core, your role is that of an enabler and a support function for your team. Your primary focus should be on cultivating an environment where your engineers can thrive. For each team member, constantly ask yourself:

  1. How can they grow as engineers?
  2. How can their personal growth align with our organizational goals?
  3. What motivates them, and how can I help them stay engaged?
  4. How can I inspire them to rally behind our vision?

Remember, your success in the eyes of the company is measured by your team’s outcomes. Here are some key principles I try to follow and share with you.

  • Absorb failures, distribute successes: When things go wrong, take responsibility. When things go right, ensure your team gets the credit. Cliché I know.
  • Be a political buffer: Absorb organisational politics, but don’t isolate your team from the bigger picture. Help them understand how their work impacts the company and users.
  • Engineer Independence: Avoid creating dependencies on yourself. Your team must function smoothly even in your absence.
  • Spotlight your team: Encourage your engineers to present their work to the company or customers. Writing posts and engaging in public marketing initiatives will help them feel more involved and support them in future work.
  • Cultivate technical respect: While your role is primarily about people, maintaining your technical skills is crucial. It helps you make informed decisions and earns the respect of your team.
  • On taking risk: If you want to be in any position of leadership, you will need to take hard decisions. Those decisions will carry risk, so learning how to assess risk, especially with partial information, is critical.

Lastly, don’t forget to just have fun with your work. Software engineering is inherently creative (I believe one of the most creative arts), and a playful environment often leads to innovative solutions. Keep things light when possible – not everything needs to be deadly serious. Your role as an EM is challenging but rewarding. You’re not just managing projects or code; you’re nurturing careers and building a team that’s greater than the sum of its parts. Embrace this responsibility, and you’ll find it’s one of the most fulfilling roles in tech.

Conclusion

These reflections offer one view into the realities and challenges of engineering management, and I hope they serve as a guide to those on a similar path. Leadership is a journey, and like engineering itself, it’s an ever-evolving process of learning and adaptation. I hope you found at least a part of it interesting (especially if you made it to the end!). Would love to hear your thoughts, case studies or experiences. I can always include them in this post as well.



Announcing TypeScript 5.6

Today we’re excited to announce the release of TypeScript 5.6!

If you’re not familiar with TypeScript, it’s a language that builds on top of JavaScript by adding syntax for types. Types describe the shapes we expect of our variables, parameters, and functions, and the TypeScript type-checker can help catch issues like typos, missing properties, and bad function calls before we even run our code. Types also power TypeScript’s editor tooling like the auto-completion, code navigation, and refactorings that you might see in editors like Visual Studio and VS Code. In fact, if you write JavaScript in either of those editors, that experience is powered by TypeScript! You can learn more at the TypeScript website.

You can get started using TypeScript using npm with the following command:

npm install -D typescript

or through NuGet.

What’s New Since the Beta and RC?

Since TypeScript 5.6 beta, we reverted a change around how TypeScript’s language service searched for tsconfig.json files. Previously the language service would keep walking up looking for every possible project file named tsconfig.json that might contain a file. Because this could lead to opening many referenced projects, we reverted the behavior and are investigating ways to bring back the behavior in TypeScript 5.7.

Additionally, several new types have been renamed since the beta. Previously, TypeScript provided a single type called BuiltinIterator to describe every value backed by Iterator.prototype. It has been renamed IteratorObject, has a different set of type parameters, and now has several subtypes like ArrayIterator, MapIterator, and more.

A new flag called --stopOnBuildErrors has been added for --build mode. When a project builds with any errors, no other projects will continue to be built. This provides something close to the behavior of previous versions of TypeScript, since TypeScript 5.6 otherwise continues building in the face of errors.

New editor functionality has been added such as direct support for commit characters and exclude patterns for auto-imports.

Disallowed Nullish and Truthy Checks

Maybe you’ve written a regex and forgotten to call .test(...) on it:

if (/0x[0-9a-f]/) {
    // Oops! This block always runs.
    // ...
}

or maybe you’ve accidentally written => (which creates an arrow function) instead of >= (the greater-than-or-equal-to operator):

if (x => 0) {
    // Oops! This block always runs.
    // ...
}

or maybe you’ve tried to use a default value with ??, but mixed up the precedence of ?? and a comparison operator like <:

function isValid(value: string | number, options: any, strictness: "strict" | "loose") {
    if (strictness === "loose") {
        value = +value
    }
    return value < options.max ?? 100;
    // Oops! This is parsed as (value < options.max) ?? 100
}

or maybe you’ve misplaced a parenthesis in a complex expression:

if (
    isValid(primaryValue, "strict") || isValid(secondaryValue, "strict") ||
    isValid(primaryValue, "loose" || isValid(secondaryValue, "loose"))
) {
    //                           ^^^^ 👀 Did we forget a closing ')'?
}

None of these examples do what the author intended, but they’re all valid JavaScript code. Previously TypeScript also quietly accepted these examples.

But with a little bit of experimentation, we found that many, many bugs could be caught by flagging suspicious examples like the ones above. In TypeScript 5.6, the compiler now errors when it can syntactically determine that a truthy or nullish check will always evaluate in a specific way. So in the above examples, you’ll start to see errors:

if (/0x[0-9a-f]/) {
//  ~~~~~~~~~~~~
// error: This kind of expression is always truthy.
}

if (x => 0) {
//  ~~~~~~
// error: This kind of expression is always truthy.
}

function isValid(value: string | number, options: any, strictness: "strict" | "loose") {
    if (strictness === "loose") {
        value = +value
    }
    return value < options.max ?? 100;
    //     ~~~~~~~~~~~~~~~~~~~
    // error: Right operand of ?? is unreachable because the left operand is never nullish.
}

if (
    isValid(primaryValue, "strict") || isValid(secondaryValue, "strict") ||
    isValid(primaryValue, "loose" || isValid(secondaryValue, "loose"))
) {
    //                    ~~~~~~~
    // error: This kind of expression is always truthy.
}

Similar results can be achieved by enabling the ESLint no-constant-binary-expression rule, and you can see some of the results they achieved in their blog post; but the new checks TypeScript performs do not have perfect overlap with the ESLint rule, and we also believe there is a lot of value in having these checks built into TypeScript itself.

Note that certain expressions are still allowed, even if they are always truthy or nullish. Specifically, true, false, 0, and 1 are all still allowed despite always being truthy or falsy, since code like the following:

while (true) {
    doStuff();

    if (something()) {
        break;
    }

    doOtherStuff();
}

is still idiomatic and useful, and code like the following:

if (true || inDebuggingOrDevelopmentEnvironment()) {
    // ...
}

is useful while iterating/debugging code.

If you’re curious about the implementation or the sorts of bugs it catches, take a look at the pull request that implemented this feature.

Iterator Helper Methods

JavaScript has a notion of iterables (things which we can iterate over by calling a [Symbol.iterator]() and getting an iterator) and iterators (things which have a next() method which we can call to try to get the next value as we iterate). By and large, you don’t typically have to think about these things when you toss them into a for/of loop, or [...spread] them into a new array. But TypeScript does model these with the types Iterable and Iterator (and even IterableIterator which acts as both!), and these types describe the minimal set of members you need for constructs like for/of to work on them.
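
As a refresher, here is a small hand-rolled example of those protocols. Nothing below is specific to TypeScript 5.6; it just spells out the shape the paragraph above describes:

// A countdown that implements both the Iterator and Iterable protocols by hand.
class Countdown implements Iterator<number>, Iterable<number> {
    constructor(private current: number) {}

    next(): IteratorResult<number> {
        return this.current >= 0
            ? { value: this.current--, done: false }
            : { value: undefined, done: true };
    }

    [Symbol.iterator](): Iterator<number> {
        return this;
    }
}

// Because it is iterable, for/of works on it directly.
for (const n of new Countdown(3)) {
    console.log(n); // 3, 2, 1, 0
}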

Iterables (and IterableIterators) are nice because they can be used in all sorts of places in JavaScript – but a lot of people found themselves missing methods on Arrays like map, filter, and for some reason reduce. That’s why a recent proposal was brought forward in ECMAScript to add many methods (and more) from Array onto most of the IterableIterators that are produced in JavaScript.

For example, every generator now produces an object that also has a map method and a take method.

function* positiveIntegers() {
    let i = 1;
    while (true) {
        yield i;
        i++;
    }
}

const evenNumbers = positiveIntegers().map(x => x * 2);

// Output:
//    2
//    4
//    6
//    8
//   10
for (const value of evenNumbers.take(5)) {
    console.log(value);
}

The same is true for methods like keys(), values(), and entries() on Maps and Sets.

function invertKeysAndValues<K, V>(map: Map<K, V>): Map<V, K> {
    return new Map(
        map.entries().map(([k, v]) => [v, k])
    );
}

You can also extend the new Iterator object:

/**
 * Provides an endless stream of `0`s.
 */
class Zeroes extends Iterator<number> {
    next() {
        return { value: 0, done: false } as const;
    }
}

const zeroes = new Zeroes();

// Transform into an endless stream of `1`s.
const ones = zeroes.map(x => x + 1);

And you can adapt any existing Iterables or Iterators into this new type with Iterator.from:

Iterator.from(...).filter(someFunction);

All these new methods work as long as you’re running on a newer JavaScript runtime, or you’re using a polyfill for the new Iterator object.
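
For instance, assuming a runtime (or polyfill) and a lib setting that provide the new Iterator object and its types, Iterator.from lets you bring the helpers to a plain iterable:

// A plain Set is iterable but has no map/filter helpers of its own.
const tags = new Set(["typescript", "javascript", "wasm"]);

// Iterator.from adapts it into an iterator object with the new helper methods.
const shouty = Iterator.from(tags)
    .filter(tag => tag !== "wasm")
    .map(tag => tag.toUpperCase());

for (const tag of shouty) {
    console.log(tag); // "TYPESCRIPT", "JAVASCRIPT"
}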

Now, we have to talk about naming.

Earlier we mentioned that TypeScript has types for Iterable and Iterator; however, like we mentioned, these act sort of like “protocols” to ensure certain operations work. That means that not every value that is declared Iterable or Iterator in TypeScript will have those methods we mentioned above.

But there is still a new runtime value called Iterator. You can reference Iterator, as well as Iterator.prototype, as actual values in JavaScript. This is a bit awkward since TypeScript already defines its own thing called Iterator purely for type-checking. So due to this unfortunate name clash, TypeScript needs to introduce a separate type to describe these native/built-in iterable iterators.

TypeScript 5.6 introduces a new type called IteratorObject. It is defined as follows:

interface IteratorObject<T, TReturn = unknown, TNext = unknown> extends Iterator<T, TReturn, TNext> {
    [Symbol.iterator](): IteratorObject<T, TReturn, TNext>;
}

Lots of built-in collections and methods produce subtypes of IteratorObjects (like ArrayIterator, SetIterator, MapIterator, and more), and both the core JavaScript and DOM types in lib.d.ts, along with @types/node, have been updated to use this new type.

Similarly, there is an AsyncIteratorObject type for parity. AsyncIterator does not yet exist as a runtime value in JavaScript that brings the same methods for AsyncIterables, but it is an active proposal, and this new type prepares for it.

We’d like to thank Kevin Gibbons who contributed the changes for these types, and who is one of the co-authors of the proposal.

Strict Builtin Iterator Checks (and --strictBuiltinIteratorReturn)

When you call the next() method on an Iterator<T, TReturn>, it returns an object with a value and a done property. This is modeled with the type IteratorResult.

type IteratorResult<T, TReturn = any> = IteratorYieldResult<T> | IteratorReturnResult<TReturn>;

interface IteratorYieldResult<TYield> {
    done?: false;
    value: TYield;
}

interface IteratorReturnResult<TReturn> {
    done: true;
    value: TReturn;
}

The naming here is inspired by the way a generator function works. Generator functions can yield values, and then return a final value – but the types between the two can be unrelated.

function* abc123() {
    yield "a";
    yield "b";
    yield "c";
    return 123;
}

const iter = abc123();

iter.next(); // { value: "a", done: false }
iter.next(); // { value: "b", done: false }
iter.next(); // { value: "c", done: false }
iter.next(); // { value: 123, done: true }

With the new IteratorObject type, we discovered some difficulties in allowing safe implementations of IteratorObjects. At the same time, there’s been a long-standing unsafety with IteratorResult in cases where TReturn was any (the default!). For example, let’s say we have an IteratorResult<string, any>. If we end up reaching for the value of this type, we’ll end up with string | any, which is just any.

function* uppercase(iter: Iterator<string, any>) {
    while (true) {
        const { value, done } = iter.next();
        yield value.toUppercase(); // oops! forgot to check for `done` first and misspelled `toUpperCase`

        if (done) {
            return;
        }
    }
}

It would be hard to fix this on every Iterator today without introducing a lot of breaks, but we can at least fix it with most IteratorObjects that get created.

TypeScript 5.6 introduces a new intrinsic type called BuiltinIteratorReturn and a new strict-mode flag called --strictBuiltinIteratorReturn. Whenever IteratorObjects are used in places like lib.d.ts, they are always written with the BuiltinIteratorReturn type for TReturn (though you’ll see the more specific MapIterator, ArrayIterator, and SetIterator more often).

interface MapIterator<T> extends IteratorObject<T, BuiltinIteratorReturn, unknown> {
    [Symbol.iterator](): MapIterator<T>;
}

// ...

interface Map<K, V> {
    // ...

    /**
     * Returns an iterable of key, value pairs for every entry in the map.
     */
    entries(): MapIterator<[K, V]>;

    /**
     * Returns an iterable of keys in the map
     */
    keys(): MapIterator<K>;

    /**
     * Returns an iterable of values in the map
     */
    values(): MapIterator<V>;
}

By default, BuiltinIteratorReturn is any, but when --strictBuiltinIteratorReturn is enabled (possibly via --strict), it is undefined. Under this new mode, if we use BuiltinIteratorReturn, our earlier example now correctly errors:

function* uppercase(iter: Iterator<string, BuiltinIteratorReturn>) {
    while (true) {
        const { value, done } = iter.next();
        yield value.toUppercase();
        //    ~~~~~ ~~~~~~~~~~~
        // error! ┃      ┃
        //        ┃      ┗━ Property 'toUppercase' does not exist on type 'string'. Did you mean 'toUpperCase'?
        //        ┃
        //        ┗━ 'value' is possibly 'undefined'.

        if (done) {
            return;
        }
    }
}

You’ll typically see BuiltinIteratorReturn paired up with IteratorObject throughout lib.d.ts. In general, we recommend being more explicit around the TReturn in your own code when possible.
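
For example, here is a small sketch of what being explicit can look like; TReturn is spelled out as undefined because these particular iterators never produce a meaningful return value:

// Spell out TReturn instead of relying on the `any` default.
function firstItem(iter: Iterator<string, undefined>): string | undefined {
    const result = iter.next();
    // With TReturn as `undefined` rather than `any`, `result.value` is
    // `string | undefined` until `done` has been checked, so the check
    // cannot be silently skipped.
    return result.done ? undefined : result.value;
}

// Array iterators satisfy this signature: their TReturn is BuiltinIteratorReturn,
// which is `undefined` under --strictBuiltinIteratorReturn.
firstItem(["hello", "world"].values());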

For more information, you can read up on the feature here.

Support for Arbitrary Module Identifiers

JavaScript allows modules to export bindings with invalid identifier names as string literals:

const banana = "🍌";

export { banana as "🍌" };

Likewise, it allows modules to grab imports with these arbitrary names and bind them to valid identifiers:

import { "🍌" as banana } from "./foo"

/**
 * om nom nom
 */
function eat(food: string) {
    console.log("Eating", food);
};

eat(banana);

This seems like a cute party trick (if you’re as fun as we are at parties), but it has its uses for interoperability with other languages (typically via JavaScript/WebAssembly boundaries), since other languages may have different rules for what constitutes a valid identifier. It can also be useful for tools that generate code, like esbuild with its inject feature.

TypeScript 5.6 now allows you to use these arbitrary module identifiers in your code! We’d like to thank Evan Wallace who contributed this change to TypeScript!

The --noUncheckedSideEffectImports Option

In JavaScript it’s possible to import a module without actually importing any values from it.

import "some-module";

These imports are often called side effect imports because the only useful behavior they can provide is by executing some side effect (like registering a global variable, or adding a polyfill to a prototype).

In TypeScript, this syntax has had a pretty strange quirk: if the import could be resolved to a valid source file, then TypeScript would load and check the file. On the other hand, if no source file could be found, TypeScript would silently ignore the import!

This is surprising behavior, but it partially stems from modeling patterns in the JavaScript ecosystem. For example, this syntax has also been used with special loaders in bundlers to load CSS or other assets. Your bundler might be configured in such a way where you can include specific .css files by writing something like the following:

import "./button-component.css";

export function Button() {
    // ...
}

Still, this masks potential typos on side effect imports. That’s why TypeScript 5.6 introduces a new compiler option called --noUncheckedSideEffectImports, to catch these cases. When --noUncheckedSideEffectImports is enabled, TypeScript will now error if it can’t find a source file for a side effect import.

import "oops-this-module-does-not-exist";
//     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
// error: Cannot find module 'oops-this-module-does-not-exist' or its corresponding type declarations.

When enabling this option, some working code may now receive an error, like in the CSS example above. To work around this, users who want to just write side effect imports for assets might be better served by writing what’s called an ambient module declaration with a wildcard specifier. It would go in a global file and look something like the following:

// ./src/globals.d.ts

// Recognize all CSS files as module imports.
declare module "*.css" {}

In fact, you might already have a file like this in your project! For example, running something like vite init might create a similar vite-env.d.ts.

While this option is currently off by default, we encourage users to give it a try!

For more information, check out the implementation here.

The --noCheck Option

TypeScript 5.6 introduces a new compiler option, --noCheck, which allows you to skip type checking for all input files. This avoids unnecessary type-checking when performing any semantic analysis necessary for emitting output files.

One scenario for this is to separate JavaScript file generation from type-checking so that the two can be run as separate phases. For example, you could run tsc --noCheck while iterating, and then tsc --noEmit for a thorough type check. You could also run the two tasks in parallel, even in --watch mode, though note you’d probably want to specify a separate --tsBuildInfoFile path if you’re truly running them at the same time.

--noCheck is also useful for emitting declaration files in a similar fashion. When --noCheck is specified on a project that conforms to --isolatedDeclarations, TypeScript can quickly generate declaration files without a type-checking pass. The generated declaration files will rely purely on quick syntactic transformations.

Note that in cases where --noCheck is specified, but a project does not use --isolatedDeclarations, TypeScript may still perform as much type-checking as necessary to generate .d.ts files. In this sense, --noCheck is a bit of a misnomer; however, the process will be lazier than a full type-check, only calculating the types of unannotated declarations. This should be much faster than a full type-check.

noCheck is also available via the TypeScript API as a standard option. Internally, transpileModule and transpileDeclaration already used noCheck to speed things up (at least as of TypeScript 5.5). Now any build tool should be able to leverage the flag, taking a variety of custom strategies to coordinate and speed up builds.

For more information, see the work done in TypeScript 5.5 to power up noCheck internally, along with the relevant work to make it publicly available on the command line.

Allow --build with Intermediate Errors

TypeScript’s concept of project references allows you to organize your codebase into multiple projects and create dependencies between them. Running the TypeScript compiler in --build mode (or tsc -b for short) is the built-in way of actually conducting that build across projects and figuring out which projects and files need to be compiled.

Previously, using --build mode would assume --noEmitOnError and immediately stop the build if any errors were encountered. This meant that “downstream” projects could never be checked and built if any of their “upstream” dependencies had build errors. In theory, this is a very cromulent approach – if a project has errors, it is not necessarily in a coherent state for its dependencies.

In reality, this sort of rigidity made things like upgrades a pain. For example, if projectB depends on projectA, then people more familiar with projectB can’t proactively upgrade their code until their dependencies are upgraded. They are blocked by work on upgrading projectA first.

As of TypeScript 5.6, --build mode will continue to build projects even if there are intermediate errors in dependencies. In the face of intermediate errors, they will be reported consistently and output files will be generated on a best-effort basis; however, the build will continue to completion on the specified project.

If you want to stop the build on the first project with errors, you can use a new flag called --stopOnBuildErrors. This can be useful when running in a CI environment, or when iterating on a project that’s heavily depended upon by other projects.

Note that to accomplish this, TypeScript now always emits a .tsbuildinfo file for any project in a --build invocation (even if --incremental/--composite is not specified). This is to keep track of the state of how --build was invoked and what work needs to be performed in the future.

You can read more about this change here on the implementation.

Region-Prioritized Diagnostics in Editors

When TypeScript’s language service is asked for the diagnostics for a file (things like errors, suggestions, and deprecations), it would typically require checking the entire file. Most of the time this is fine, but in extremely large files it can incur a delay. That can be frustrating because fixing a typo should feel like a quick operation, but can take seconds in a big-enough file.

To address this, TypeScript 5.6 introduces a new feature called region-prioritized diagnostics or region-prioritized checking. Instead of just requesting diagnostics for a set of files, editors can now also provide a relevant region of a given file – and the intent is that this will typically be the region of the file that is currently visible to a user. The TypeScript language server can then choose to provide two sets of diagnostics: one for the region, and one for the file in its entirety. This allows editing to feel way more responsive in large files so you’re not waiting as long for those red squiggles to disappear.

For some specific numbers, in our testing on TypeScript’s own checker.ts, a full semantic diagnostics response took 3330ms. In contrast, the first region-based diagnostics response took 143ms! While the remaining whole-file response took about 3200ms, this can make a huge difference for quick edits.

This feature also includes quite a bit of work to make diagnostics report more consistently throughout your experience. Due to the way our type-checker leverages caching to avoid work, subsequent checks between the same types could often have a different (typically shorter) error message. Technically, lazy out-of-order checking could cause diagnostics to report differently between two locations in an editor – even before this feature – but we didn’t want to exacerbate the issue. With recent work, we’ve ironed out many of these error inconsistencies.

Currently, this functionality is available in Visual Studio Code for TypeScript 5.6 and later.

For more detailed information, take a look at the implementation and write-up here.

Granular Commit Characters

TypeScript’s language service now provides its own commit characters for each completion item. Commit characters are specific characters that, when typed, will automatically commit the currently-suggested completion item.

What this means is that over time your editor will now more frequently commit to the currently-suggested completion item when you type certain characters. For example, take the following code:

declare let food: {
    eat(): any;
}

let f = (foo/**/

If our cursor is at /**/, it’s unclear if the code we’re writing is going to be something like let f = (food.eat()) or let f = (foo, bar) => foo + bar. You could imagine that the editor might be able to auto-complete differently depending on which character we type out next. For instance, if we type in the period/dot character (.), we probably want the editor to complete with the variable food; but if we type the comma character (,), we might be writing out a parameter in an arrow function.

Unfortunately, TypeScript previously just signaled to editors that the current text might define a new parameter name, so no commit characters were safe. Hitting a . wouldn’t do anything even if it was “obvious” that the editor should auto-complete with the word food.

TypeScript now explicitly lists which characters are safe to commit for each completion item. While this won’t immediately change your day-to-day experience, editors that support these commit characters should see behavioral improvements over time. To see those improvements right now, you can now use the TypeScript nightly extension with Visual Studio Code Insiders. Hitting . in the code above correctly auto-completes with food.

For more information, see the pull request that added commit characters along with our adjustments to commit characters depending on context.

Exclude Patterns for Auto-Imports

TypeScript’s language service now allows you to specify a list of regular expression patterns which will filter away auto-import suggestions from certain specifiers. For example, if you want to exclude all “deep” imports from a package like lodash, you could configure the following preference in Visual Studio Code:

{
    "typescript.preferences.autoImportSpecifierExcludeRegexes": [
        "^lodash/.*$"
    ]
}

Or going the other way, you might want to disallow importing from the entry-point of a package:

{
    "typescript.preferences.autoImportSpecifierExcludeRegexes": [
        "^lodash$"
    ]
}

One could even avoid node: imports by using the following setting:

{
    "typescript.preferences.autoImportSpecifierExcludeRegexes": [
        "^node:"
    ]
}

To specify certain regular expression flags like i or u, you will need to surround your regular expression with slashes. When providing surrounding slashes, you’ll need to escape other inner slashes.

{
    "typescript.preferences.autoImportSpecifierExcludeRegexes": [
        "^./lib/internal",        // no escaping needed
        "/^.\\/lib\\/internal/",  // escaping needed - note the leading and trailing slashes
        "/^.\\/lib\\/internal/i"  // escaping needed - we needed slashes to provide the 'i' regex flag
    ]
}

The same settings can be applied for JavaScript through javascript.preferences.autoImportSpecifierExcludeRegexes in VS Code.

Note that while this option may overlap a bit with typescript.preferences.autoImportFileExcludePatterns, there are differences. The existing autoImportFileExcludePatterns takes a list of glob patterns that exclude file paths. This might be simpler for a lot of scenarios where you want to avoid auto-importing from specific files and directories, but that’s not always enough. For example, if you’re using the package @types/node, the same file declares both fs and node:fs, so we can’t use autoImportFileExcludePatterns to filter out one or the other.

The new autoImportSpecifierExcludeRegexes is specific to module specifiers (the specific string we write in our import statements), so we could write a pattern to exclude fs or node:fs without excluding the other. What’s more, we could write patterns to force auto-imports to prefer different specifier styles (e.g. preferring ./foo/bar.js over #foo/bar.js).

For more information, see the implementation here.

Notable Behavioral Changes

This section highlights a set of noteworthy changes that should be acknowledged and understood as part of any upgrade. Sometimes it will highlight deprecations, removals, and new restrictions. It can also contain bug fixes that are functionally improvements, but which can also affect an existing build by introducing new errors.

lib.d.ts

Types generated for the DOM may have an impact on type-checking your codebase. For more information, see linked issues related to DOM and lib.d.ts updates for this version of TypeScript.

.tsbuildinfo is Always Written

To enable --build to continue building projects even if there are intermediate errors in dependencies, and to support --noCheck on the command line, TypeScript now always emits a .tsbuildinfo file for any project in a --build invocation. This happens regardless of whether --incremental is actually on. See more information here.

Respecting File Extensions and package.json from within node_modules

Before Node.js implemented support for ECMAScript modules in v12, there was never a good way for TypeScript to know whether .d.ts files it found in node_modules represented JavaScript files authored as CommonJS or ECMAScript modules. When the vast majority of npm was CommonJS-only, this didn’t cause many problems – if in doubt, TypeScript could just assume that everything behaved like CommonJS. Unfortunately, if that assumption was wrong it could allow unsafe imports:

// node_modules/dep/index.d.ts
export declare function doSomething(): void;

// index.ts
// Okay if "dep" is a CommonJS module, but fails if
// it's an ECMAScript module - even in bundlers!
import dep from "dep";
dep.doSomething();

In practice, this didn’t come up very often. But in the years since Node.js started supporting ECMAScript modules, the share of ESM on npm has grown. Fortunately, Node.js also introduced a mechanism that can help TypeScript determine if a file is an ECMAScript module or a CommonJS module: the .mjs and .cjs file extensions and the package.json "type" field. TypeScript 4.7 added support for understanding these indicators, as well as authoring .mts and .cts files; however, TypeScript would only read those indicators under --module node16 and --module nodenext, so the unsafe import above was still a problem for anyone using --module esnext and --moduleResolution bundler, for example.

To solve this, TypeScript 5.6 collects module format information and uses it to resolve ambiguities like the one in the example above in all module modes (except amd, umd, and system). Format-specific file extensions (.mts and .cts) are respected anywhere they’re found, and the package.json "type" field is consulted inside node_modules dependencies, regardless of the module setting. Previously, it was technically possible to produce CommonJS output into a .mjs file or vice versa:

// main.mts
export default "oops";

// $ tsc --module commonjs main.mts
// main.mjs
Object.defineProperty(exports, "__esModule", { value: true });
exports.default = "oops";

Now, .mts files never emit CommonJS output, and .cts files never emit ESM output.

Note that much of this behavior was provided in pre-release versions of TypeScript 5.5 (implementation details here), but in 5.6 this behavior is only extended to files within node_modules.

More details are available on the change here.

Correct override Checks on Computed Properties

Previously, computed properties marked with override did not correctly check for the existence of a base class member. Similarly, if you used noImplicitOverride, you would not get an error if you forgot to add an override modifier to a computed property.

TypeScript 5.6 now correctly checks computed properties in both cases.

const foo = Symbol("foo");
const bar = Symbol("bar");

class Base {
    [bar]() {}
}

class Derived extends Base {
    override [foo]() {}
//           ~~~~~
// error: This member cannot have an 'override' modifier because it is not declared in the base class 'Base'.

    [bar]() {}
//  ~~~~~
// error under noImplicitOverride: This member must have an 'override' modifier because it overrides a member in the base class 'Base'.
}

This fix was contributed thanks to Oleksandr Tarasiuk in this pull request.

What’s Next?

If you want to see what’s coming next, you can also take a look at the TypeScript 5.7 iteration plan where you’ll see a list of prioritized features, bug fixes, and target release dates that you can plan around. TypeScript’s nightly releases are easy to use over npm, and there’s also an extension to use those nightly releases in Visual Studio Code. Nightly releases tend not to be disruptive, but they can give you a good sense of what’s coming next while helping the TypeScript project catch bugs early!

Otherwise, we hope TypeScript 5.6 gives you a great experience, and makes your day-to-day coding a joy!

Happy Hacking!

– Daniel Rosenwasser and the TypeScript Team



A shortcut to edit long shell commands in your $EDITOR (#tilPost)


If you're wrangling long shell commands, you know that tweaking parameters can be a bit of a pain. Especially when your command spans multiple lines and includes line breaks, you must be careful not to lose your changes. Press one key too many, and you're shuffling through your shell history while your current changes are gone.

But today I learned that there's hope. If you want to edit an already typed command in your favorite editor, you can press ctrl + x followed by ctrl + e.

This combination triggers the so-called edit-and-execute-command: it creates a temporary file, opens it in the editor defined in your shell's $EDITOR variable, and once you save and close that file, your terminal command is updated. Pretty magical!

Editing of a command using `ctrl + x` and `ctrl + e`

For VS Code users, you must append the --wait flag to your EDITOR declaration for the shortcut to work: EDITOR="code --wait".



Building a WoW server in Elixir


Thistle Tea is my new World of Warcraft private server project. You can log in, create a character, run around, and cast spells to kill mobs, with everything synchronized between players as expected for an MMO. It had been floating around in my head to build this for a while, since I have an incurable nostalgia for early WoW. I first mentioned this on May 13th and didn’t expect to get any further than login, character creation, and spawning into the map. Here’s a recap of the first month of development.

Prep Work

Before coding, I did some research and came up with a plan.

  • Code this in Elixir, using the actor model.
  • MaNGOS has done the hard work on collecting all the data, so use it.
  • Use Thousand Island as the socket server.
  • Treat this project as an excuse to learn more Elixir.
  • Reference existing projects and documentation rather than working from scratch.
  • Speed along the happy path rather than try and handle every error.

Day 1 - June 2nd

There are two main parts to a World of Warcraft server: the authentication server and the game server. Up first was authentication, since you can’t do anything without logging in.

To learn more about the requests and responses, I built a quick MITM proxy between the client and a MaNGOS server to log all packets. It wasn’t as useful as expected, since not much was consistent, but it did help me internalize how the requests and responses worked.

The first byte of an authentication packet is the opcode, which indicates which message it is, and the rest is a payload with the relevant data. I was able to extract the fields from the payload by pattern matching on the binary data.
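As a rough illustration (not the project’s actual code, and not the exact packet layout), pulling the opcode and a field out of a packet with binary pattern matching might look something like this:

defmodule ThistleTea.AuthPacket do
  # Simplified sketch: the first byte is the opcode, the rest is the payload.
  # The opcode value and field offsets are illustrative, not the exact layout.
  def parse(<<opcode, payload::binary>>) do
    case opcode do
      # CMD_AUTH_LOGON_CHALLENGE: grab the length-prefixed username from the payload.
      0x00 ->
        <<_header::binary-size(32), username_length,
          username::binary-size(username_length), _rest::binary>> = payload

        {:logon_challenge, username}

      _ ->
        {:unhandled, opcode, payload}
    end
  end
end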

The auth flow can be simplified as:

  • Client sends a CMD_AUTH_LOGON_CHALLENGE packet
  • Server sends back some data for the client to use in crypto calculations
  • Client sends a CMD_AUTH_LOGON_PROOF packet with the client proof
  • If the client proof matches what’s expected, server sends over the server_proof
  • Client is now authenticated

It uses SRP6, which I hadn’t heard of before this. The idea seems to be to avoid transmitting the password itself: instead, the client and server each independently calculate a proof that only matches if both know the correct password. If the proofs match, then authentication is successful.

So basically, what I needed to do was:

  • Listen for data over the socket
  • Once data received, parse what message it is out of the header section
  • Handle each one differently
  • Send back the appropriate packet
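Sketched very roughly with Thousand Island, that loop might look something like the following; module names, opcode handling, and the stubbed responses are illustrative stand-ins rather than the real implementation:

defmodule ThistleTea.AuthHandler do
  use ThousandIsland.Handler

  # Called whenever the client sends data over the socket.
  @impl ThousandIsland.Handler
  def handle_data(<<opcode, payload::binary>>, socket, state) do
    response = handle_opcode(opcode, payload)
    ThousandIsland.Socket.send(socket, response)
    {:continue, state}
  end

  # Stubs: the real handlers parse the payload and do the SRP6 math.
  defp handle_opcode(0x00, _payload), do: <<0x00>>  # CMD_AUTH_LOGON_CHALLENGE
  defp handle_opcode(0x01, _payload), do: <<0x01>>  # CMD_AUTH_LOGON_PROOF
end

# Started somewhere in the supervision tree, e.g.:
# {ThousandIsland, port: 3724, handler_module: ThistleTea.AuthHandler}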

This whole part is well documented, but I still ran into some issues with the cryptography. Luckily, I found a blog post and an accompanying Elixir implementation, so I was able to replace my broken cryptography with working cryptography. Without that, I would’ve been stuck at this part for a very long time (maybe forever). Wasn’t able to get login working on day 1, but I was close.


Day 2 - June 3rd

I spent some time cleaning up the code and found a logic error where I reversed some crypto bytes that weren’t supposed to be reversed. Fixing that made auth work, finally getting a successful login with hardcoded credentials.

Next up was getting the realm list to work, by handling CMD_REALM_LIST and returning which game server to connect to.

This got me out of the tedious auth bits and I could get to building the game server.


Day 3 - June 4th

The goal for today was to get spawned into the world. But first more tedious auth bits.

The game server auth flow can be simplified as:

  • When client first connects, server sends SMSG_AUTH_CHALLENGE, with a random seed
  • Client sends back CMSG_AUTH_SESSION, with another client proof
  • If client proof matches server proof, server sends a successful SMSG_AUTH_RESPONSE

This negotiates how to encrypt/decrypt future packet headers. Luckily, Shadowburn also had crypto code for this, so I was able to use it here. The server proof requires a value previously calculated by the authentication server, so I used an Agent to store that session value. It worked, but I later refactored it to use ETS for a simpler interface.
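A minimal sketch of what an ETS-backed session store could look like (module, table, and function names are made up for illustration):

defmodule ThistleTea.SessionStore do
  # Rough sketch; table and function names are illustrative.
  @table :session_keys

  def init do
    # Public named table so the auth and game connection processes can both use it.
    :ets.new(@table, [:set, :public, :named_table])
  end

  def put(username, session_key), do: :ets.insert(@table, {username, session_key})

  def get(username) do
    case :ets.lookup(@table, username) do
      [{^username, session_key}] -> {:ok, session_key}
      [] -> :error
    end
  end
end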

After that, it’s something like:

  • Client sends message to server
  • Server decrypts header, which contains message size (2 bytes) and opcode (4 bytes)
  • Server handles message and sends back 0 or more messages to client

First was handling CMSG_CHAR_CREATE and CMSG_CHAR_ENUM, so I could create and list characters. I originally used an Agent for storage here as well, which made things quick to prototype.

Then I got side-tracked for a bit trying to get equipment to show up, since I had all the equipment display ids hardcoded to 0. I looked through the MaNGOS database and hardcoded a few proper display ids before moving on.

After that was handling CMSG_PLAYER_LOGIN. I found an example minimal SMSG_UPDATE_OBJECT spawn packet, which was supposed to spawn me in Northshire Abbey.

That’s probably the most important packet, since it does everything from:

  • Spawning things into the world, like players, mobs, objects, etc.
  • Updating their values, like health, position, level, etc.

It has a lot of different forms, can batch multiple object updates into a single packet, and has a compressed variant.

Whoops, had the coordinates a bit off.

After fixing that, I was in the human starting area as expected. No player model yet, though.

I thought movement was broken, but it turns out all keybinds were being unset on every login, so the movement keys just weren’t bound. Manually navigating to the keybinding configuration and resetting them to default allowed me to move around.

Next up was adding more to that spawn packet to use the player race and proper starting area. The starting areas were grabbed from a MaNGOS database that I converted over to SQLite and wired up with Ecto.

Last for the night was to get logout working.

The implementation was something like:

  • After receiving a CMSG_LOGOUT_REQUEST, use Process.send_after/3 to queue a :login_complete message that would send SMSG_LOGOUT_COMPLETE to the client in 20 seconds
  • Store a reference to that timer in state
  • If received a CMSG_LOGOUT_CANCEL, cancel the timer and remove it from state

This was the first piece that really took advantage of Elixir’s message passing.
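Roughly sketched, with helper and message names being illustrative rather than the project’s actual code:

defmodule ThistleTea.Logout do
  # Rough sketch of the logout flow; helper and message names are illustrative.

  # CMSG_LOGOUT_REQUEST: queue completion in 20 seconds and keep the timer ref in state.
  def handle_logout_request(state) do
    timer = Process.send_after(self(), :logout_complete, 20_000)
    Map.put(state, :logout_timer, timer)
  end

  # CMSG_LOGOUT_CANCEL: cancel the pending timer and clear it from state.
  def handle_logout_cancel(%{logout_timer: timer} = state) when is_reference(timer) do
    Process.cancel_timer(timer)
    Map.put(state, :logout_timer, nil)
  end

  # When :logout_complete fires in the connection process,
  # send SMSG_LOGOUT_COMPLETE to the client (stubbed out here).
  def handle_logout_complete(socket, state) do
    send_logout_complete(socket)
    Map.put(state, :logout_timer, nil)
  end

  defp send_logout_complete(_socket), do: :ok
end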

The white chat box was weird, but it was nice being able to log in.


Day 4 - June 5th

First up was reorganizing the code, since my game.ex GenServer was getting too large.

My strategy for that was:

  • Split out related messages into separate files
    • auth.ex, character.ex, ping.ex, etc.
    • wrapped in the __using__ macro
  • Include those back into game.ex with use

It worked, but it messed with line numbers in error messages and made things harder to debug.

After that, I wanted to generate that spawn packet properly rather than hardcoding. The largest piece of this was figuring out the update mask for the update fields.

There are a ton of fields for the different types of objects SMSG_UPDATE_OBJECT handles. Before the raw object fields in the payload, there’s a bit mask with bits set at offsets that correspond to the fields being sent. Without that, the client wouldn’t know what to do with the values.

So, I needed to write a function that would generate this bit mask from the fields I pass in. Luckily it’s all well documented, but it still took a while to get to a working implementation.
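Here’s a rough sketch of the idea, assuming the usual layout of a block-count byte followed by little-endian 32-bit mask words; treat the exact framing as illustrative:

defmodule ThistleTea.UpdateMask do
  import Bitwise

  # Rough sketch: set one bit per field offset, then emit a block-count byte
  # followed by little-endian 32-bit mask words.
  def build(field_offsets) do
    block_count = div(Enum.max(field_offsets), 32) + 1

    mask =
      Enum.reduce(field_offsets, 0, fn offset, acc -> acc ||| (1 <<< offset) end)

    blocks =
      for block <- 0..(block_count - 1), into: <<>> do
        word = (mask >>> (block * 32)) &&& 0xFFFFFFFF
        <<word::little-32>>
      end

    <<block_count::8>> <> blocks
  end
end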


Day 5 - June 6th

Referencing MaNGOS, I added some more messages that the server sends to the client after a CMSG_PLAYER_LOGIN. One of these, SMSG_ACCOUNT_DATA_TIMES, fixed the white chat box and keybinds being reset.

I also added SMSG_COMPRESSED_UPDATE_OBJECT, which compresses the SMSG_UPDATE_OBJECT packet with :zlib.compress/1. This was more straightforward than expected, and I made things use the compressed variant if it’s actually smaller. I’m expecting this to have even more benefits when I get to batching object updates, but right now I’m only updating objects one by one.
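A minimal sketch of that choose-the-smaller logic, assuming the usual 4-byte decompressed-size prefix on the compressed variant (send_packet/3 is a stand-in):

defmodule ThistleTea.UpdateObject do
  # Rough sketch: only use the compressed variant when it actually saves bytes.
  # The decompressed-size prefix and send_packet/3 are illustrative.
  def send_update(socket, payload) do
    compressed = :zlib.compress(payload)

    if byte_size(compressed) + 4 < byte_size(payload) do
      send_packet(socket, :smsg_compressed_update_object, <<byte_size(payload)::little-32>> <> compressed)
    else
      send_packet(socket, :smsg_update_object, payload)
    end
  end

  defp send_packet(_socket, _opcode, _payload), do: :ok
end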

Movement would come up soon, so I started adding the handlers for those packets.

Day 6 - June 7th

In the update packet, I still had the object guid hardcoded. This is because it wants a packed guid and I needed to write some functions to handle that. Rather than the entire guid, a packed guid is a byte mask followed by all non-zero bytes. The byte mask has bits set that correspond to where the following bytes go in the unpacked guid. This is for optimizing packet size, since a guid is always 8 bytes but a packed guid can be as small as 2 bytes.
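A rough sketch of packing a guid, assuming the mask bits follow the little-endian byte order of the 64-bit guid (names are illustrative):

defmodule ThistleTea.PackedGuid do
  import Bitwise

  # Rough sketch: build a 1-byte mask of which guid bytes are non-zero,
  # followed by only those bytes.
  def pack(guid) when is_integer(guid) do
    {mask, packed} =
      <<guid::little-64>>
      |> :binary.bin_to_list()
      |> Enum.with_index()
      |> Enum.reduce({0, <<>>}, fn
        {0, _index}, acc -> acc
        {byte, index}, {mask, packed} -> {mask ||| (1 <<< index), packed <> <<byte>>}
      end)

    <<mask>> <> packed
  end
end

# ThistleTea.PackedGuid.pack(4) produces <<1, 4>>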

This took a while, because the client was crashing when I changed the packed guid from <<1, 4>> to anything else. After trying different things and wasting a lot of time, I realized that the guid was in two places in the packet and they needed to match. A quick fix later and things were working as expected.


Day 7 - June 8th

It was about time to start implementing the actual MMO features, starting with seeing other players. To test, I hardcoded another update packet after the player’s with a different guid, to try and spawn something other than the player.

Then I used a Registry to keep track of logged in players and their spawn packets. After entering the world, I would use Registry.dispatch/3 to:

  • spawn all logged in players for that player
  • spawn that player for all other players
  • both using SMSG_UPDATE_OBJECT
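A rough sketch of that dispatch, assuming a duplicate-keyed Registry where each connection process registers {guid, spawn_packet} under a shared key (registry name and send_packet/3 are illustrative):

defmodule ThistleTea.SpawnBroadcast do
  # Rough sketch; assumes a :duplicate Registry where each connection process
  # registers under :players with {guid, spawn_packet} as its value.
  def broadcast_player_spawn(my_guid, my_spawn_packet, socket) do
    Registry.dispatch(ThistleTea.PlayerRegistry, :players, fn entries ->
      for {pid, {guid, spawn_packet}} <- entries, guid != my_guid do
        # Show every already-logged-in player to the new player...
        send_packet(socket, :smsg_update_object, spawn_packet)
        # ...and ask each of their connection processes to spawn the new player.
        send(pid, {:spawn_player, my_spawn_packet})
      end
    end)
  end

  defp send_packet(_socket, _opcode, _payload), do: :ok
end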

After that, I added a similar dispatch when handling movement packets to broadcast movement to all other players. This is where the choice of Elixir really started to shine, and I quickly had players able to see each other move around the screen.

I tested this approach with multiple windows open and it was very cool to see everything synchronized.

I added a handler for CMSG_NAME_QUERY to get names to stop showing up as Unknown and also despawned players with SMSG_DESTROY_OBJECT when logging out.

This is where I started noticing a bug: occasionally I wouldn’t be able to decrypt a packet successfully, which would lead to all future attempts for that connection failing too, since there’s a counter as part of the decryption function. I couldn’t reliably reproduce it or figure out how to resolve it yet.


Day 8 - June 9th

To get chat working, I handled CMSG_MESSAGECHAT and broadcasted SMSG_MESSAGECHAT to players, using Registry.dispatch/3 here too. I only focused on /say here, and it broadcasts to all players rather than just nearby ones. Something to fix later.

Related to that weird decryption bug, I handled the case where the server received more than one packet at once. This might’ve helped a bit, but didn’t completely resolve the issue.


Day 9 - June 10th

I still had authentication with a hardcoded username, password, and salt, so it was about time to fix that. Rather than go with PostgreSQL or SQLite for the database, I decided to go with Mnesia, since one of my goals was to learn more about Elixir and its ecosystem. I briefly tried plain :mnesia, but decided to use Memento for a cleaner interface.

So, I added models for Account and Character and refactored everything to use them. The character object is kept in process state and only persisted to the database on logout or disconnect. Saving on a CMSG_PING or just periodically could be a good idea too, eventually. Right now data isn’t persisted to disk, since I’m still iterating on the data model, but that should be straightforward to toggle later.
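A minimal sketch of what the Memento table definitions could look like; the attribute lists are trimmed and illustrative, not the full schema:

defmodule ThistleTea.Account do
  # Rough sketch; attributes are illustrative, not the full schema.
  use Memento.Table,
    attributes: [:username, :salt, :verifier],
    type: :set
end

defmodule ThistleTea.Character do
  use Memento.Table,
    attributes: [:guid, :account, :name, :race, :class, :level, :position],
    index: [:account],
    type: :set
end

# Writing a record happens inside a transaction, e.g.:
# Memento.transaction!(fn ->
#   Memento.Query.write(%ThistleTea.Account{username: "test", salt: salt, verifier: verifier})
# end)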


Day 10 - June 11th

Today was standardizing the logging, handling a bit more of chat, and handling an unencrypted CMSG_PING. I was thinking that could be part of the intermittent decryption issues too, but looking back I don’t think I’ve ever had my client send that unencrypted anyways.

Day 11 - June 12th

I wanted equipment working so players weren’t naked all the time, so I started on that. Using the MaNGOS item_template table, I wired things up to set random equipment on character creation. Then I added that to the response to CMSG_CHAR_ENUM so they would show up in the login screen.

Up next was getting it showing in game.

Day 12 - June 13th

It took a bit to figure out the proper offsets for each piece of equipment in the update mask, but I eventually got it working.

Since equipment is part of the update object packet, it just worked for other players, which was nice.

Day 13 - June 14th

I had player movement synchronizing between players properly so I wanted to get sitting working too.

Whoops. Weird things happen when field offsets or sizes are incorrect when building that update mask.

After that, I wanted to play around a bit by randomizing equipment on every jump. Here I learned that you need to send all fields in the update object packet, like health, or they get reset. I was trying to just send the equipment changes but I’d die on every jump.

After making sure to send all fields, it was working as expected.

Day 14 - June 15th

Took a break.

Day 15 - June 16th

Today was refactoring and improvements. I reworked things into proper modules, since it was getting hard to debug when all the line numbers were wrong. Now game.ex called the appropriate module’s handle_packet/3 function, rather than combining everything with use.

I also reworked things so players were spawned with their current position instead of the initial position saved in the registry. This included some changes to make building an update packet more straightforward.

Day 16 - June 17th

Today was just playing around and no code changes.

Not sure why the model is messed up here, but it seems like it’s something with my computer rather than anything server related.

Day 17 - June 18th

The world was feeling a bit empty, so I wanted to spawn in mobs. First was hardcoding an update packet that should spawn a mob and having it trigger on /say.

After that, I used the creature table of the MaNGOS database to get proper mobs spawning. I used a GenServer for this so every mob would be a process and keep track of their own state. Communication between mobs and players would happen through message passing. First I hardcoded a few select ids in the starting area to load, and after that worked I loaded them all.

Rather than spawn all ~57k mobs for the player, I wired things up to only spawn mobs within a certain range. This looked like:

  • Store mob pids in a Registry, along with their x, y, z position
  • Create a within_range/2 function that takes in two x, y, z tuples
  • On player login, dispatch on that MobRegistry, using within_range/2 to only build spawn packets for mobs within range
  • On player movement, do the same
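The range check itself can be a plain Euclidean distance comparison; the radius below is a made-up constant:

defmodule ThistleTea.Range do
  # Rough sketch; the radius is an illustrative constant.
  @spawn_radius 250.0

  def within_range({x1, y1, z1}, {x2, y2, z2}) do
    dx = x1 - x2
    dy = y1 - y2
    dz = z1 - z2
    :math.sqrt(dx * dx + dy * dy + dz * dz) <= @spawn_radius
  end
end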

It worked really well and I could run around and see the mobs.

Next up was optimization and despawning mobs that were now out of range.

Day 18 - June 19th

For optimization, I didn’t want to send duplicate spawn packets for mobs that were already spawned. I also wanted to despawn mobs that were out of range. To do this, I used ETS to track which guids were spawned for a player.

In the dispatch, the logic was:

  • if in_range and not spawned, spawn
  • if not in_range and spawned, despawn
  • otherwise, ignore

Despawning was done through the same SMSG_DESTROY_OBJECT packet used for despawning a player after logging out.
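Put together, the per-mob decision might look roughly like this, reusing the within_range/2 sketch above (the ETS table setup, packet payloads, and send_packet/3 are illustrative stand-ins):

defmodule ThistleTea.MobVisibility do
  # Rough sketch; assumes a public :set ETS table named :spawned_mobs keyed by
  # {player_guid, mob_guid}.
  def update(player, mob_guid, mob_position, spawn_packet) do
    key = {player.guid, mob_guid}
    spawned? = :ets.member(:spawned_mobs, key)
    in_range? = ThistleTea.Range.within_range(player.position, mob_position)

    cond do
      in_range? and not spawned? ->
        :ets.insert(:spawned_mobs, {key})
        send_packet(player.socket, :smsg_update_object, spawn_packet)

      not in_range? and spawned? ->
        :ets.delete(:spawned_mobs, key)
        send_packet(player.socket, :smsg_destroy_object, <<mob_guid::little-64>>)

      true ->
        :ok
    end
  end

  defp send_packet(_socket, _opcode, _payload), do: :ok
end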

After getting that working, I ran around the world and explored for a bit.

I noticed something wrong when exploring Westfall. Bugs were spawning in the air and then falling down to the ground. Turns out I wasn’t separating mobs by map, so Westfall had mobs from Silithus mixed in. To fix, I reworked both the mob and player registries to use the map as the key.

Having mobs standing in place was a bit boring and I wanted them to move around. Turns out this is pretty complicated and I’ll actually have to use the map files to generate paths that don’t float or clip through the ground. There are a few projects for this, all a bit difficult to include in an Elixir project. I’m thinking RPC could work, but not sure if it’ll be performant enough yet.

The standard update object packet can be used for mob movement here, since it has a movement block, but there might be some more specialized packets to look into later too.

Without using the map data, I couldn’t get the server movement to line up with what happened in the client. So, I settled for getting mobs to spin at random speeds.

That was a bit silly and used a lot of CPU, so I tweaked it to just randomly change orientation instead.


Day 19 - June 20th

Here I got mob names working by implementing CMSG_CREATURE_QUERY. This crashed the client when querying mobs that didn’t have a model, so I removed them from being loaded. I also started loading in mob movement data and optimized the query a bit to speed up startup.

I finally got some people to help me test the networking later that day. It didn’t start very well.

Turns out I hadn’t tested this locally since adding mobs and the player/mob spawn/despawns were conflicting with each other due to guid collisions. Players were being constantly spawned in and out.

I did some emergency patching to make it so players are never despawned, even out of range. I also turned off /say spawning boars since that was getting annoying. That worked for now.

There were still some major issues. My helper had 450 ms latency and would crash when running to areas with a lot of mobs. I couldn’t reproduce it, though, with my 60 ms latency.


Day 20 - June 21

To reproduce the issue from the previous night, I connected to my local server from my laptop on the same network. On my laptop, I used tc to simulate a ton of latency and wired things up so equipment would change on any movement instead of just jump. This sent a lot of packets when spinning and I was finally able to reproduce.

Turns out the crashing issues were from not receiving a complete packet, but still trying to decrypt and handle it. I was handling the case where the server got more than one packet at once, but not where it got a partial packet.

Referencing Shadowburn’s implementation, the fix for this was to let the packet data accumulate until there’s enough to handle. This finally resolved the weird decryption issue I ran into on day 7.
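A rough sketch of that accumulation, ignoring header decryption and the exact size semantics (the 2-byte size is assumed here to cover the rest of the packet):

defmodule ThistleTea.PacketBuffer do
  # Rough sketch: keep leftover bytes in state and only hand complete packets on.
  def handle_data(data, state) do
    buffer = Map.get(state, :buffer, <<>>) <> data
    {packets, rest} = split_packets(buffer, [])
    {packets, Map.put(state, :buffer, rest)}
  end

  defp split_packets(<<size::16, rest::binary>>, acc) when byte_size(rest) >= size do
    <<packet::binary-size(size), remaining::binary>> = rest
    split_packets(remaining, [packet | acc])
  end

  defp split_packets(buffer, acc), do: {Enum.reverse(acc), buffer}
end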

For the guid collision issue, I added a large offset to creature guids so they won’t conflict with player guids.

Day 21 - June 22

Took a break.

Day 22 - June 23

Worked on CMSG_ITEM_NAME_QUERY a bit, but there’s still something wrong here. It could be that it’s trying to calculate damage using some values I’m not passing to the client yet.

Decided spells would be next, so I started on that. First was sending spells over with SMSG_INITIAL_SPELLS on login, using the initial spells in MaNGOS, so I’d have something in the spellbook. Everything was instant cast though, for some reason.

Turns out I needed to set unit_mod_cast_speed in the player update packet for cast times to show up properly in the client.

I started by handling CMSG_CAST_SPELL, which would send a successful SMSG_CAST_RESULT after the spell cast time, so other spells could be cast. I also handled CMSG_CANCEL_CAST, to cancel that timer. This implementation looked a bit like the logout logic.

The starting animation for casting a spell would play, but no cast bar or anything further.


Days 23 to 26 - June 24 to 27

Took a longer break.

Day 27 - June 28

I was able to get a cast bar showing up by sending SMSG_SPELL_START after receiving the cast spell packet.

The projectile effect took a bit longer to figure out. I needed to send a SMSG_SPELL_GO after the cast was complete, with the proper target guids.


Day 28 - June 29

I got self-cast spells working by setting the target guid to the player’s guid.

Day 29 - June 30

Another break.

Day 30 - July 1

Since I had spells somewhat working, next I had to clean up the implementation. I dispatched the SMSG_SPELL_START and SMSG_SPELL_GO packets to nearby players and fixed spell cancelling.

Day 31 - July 2

I added levels to mobs, random from their minimum to maximum level, previously hardcoded to 1. Then I made spells do some hardcoded damage, so mobs could die. Mobs would still randomly change orientation when dead, so I added a check to only move if alive.

That seemed like a good stopping point and was one month since I started writing code for the project.

Future Plans

I’ll slowly work on this, adding more functionality as I go. My goal isn’t a 1:1 Vanilla server, but more something that fits well with Elixir’s capabilities, so I don’t plan on accepting limitations just for the sake of accuracy. I’d like to eventually see how many players this approach can handle and how it compares in performance to other implementations.

Some things on the list:

  • proper mob + player stats
  • proper damage calculations
  • pvp
  • quests
  • mob movement + combat ai
  • loot + inventory management
  • more spells + effects
  • tons of refactoring
  • benchmarking
  • gameplay loop, in general

So still plenty more work to do. :)

Thanks to all the projects I’ve referenced for this, most of which I’ve tried to link here.

I wouldn’t have gotten very far without them and their awesome documentation.

