
Historical Dates

2 Comments and 8 Shares
Evidence suggests the 1899 transactions occurred as part of a global event centered around a deity associated with the lotus flower.
Read the whole story
16 hours ago
Hamburg, Germany
Share this story
2 public comments
2 days ago
Hahaha. Yep.
Atlanta, GA
2 days ago
An explanation, for those as puzzled as I was:

Unix-based systems traditionally represent timestamps as "seconds since 1970-01-01". If some program requires a modification timestamp for some file and no accurate value is available, it will commonly use the value 0 as a default. When other systems carefully preserve that value, it leads to files with a 1970-01-01 timestamp.

Spreadsheets traditionally represent timestamps as "(fractional) days since 1900-01-01". In the same way, asking for a timestamp when no valid information is available will often get the value 0 as a default. Since businesses run on Excel, there's a lot of transaction records with a 0 timestamp in Excel spreadsheets out there.

One last wrinkle: The first wildly popular spreadsheet, Lotus 1-2-3, had a bug: it assumed that the year 1900 was a leap year, so it assumed 1900-02-29 existed, and so every date timestamp after that point was off by one. When Microsoft created Excel, they carefully reimplemented that bug for compatibility's sake, but if you're a historian looking at the data, you might reasonably assume that spreadsheets stored timestamps as "(fractional) days since 1899-12-31". If you're storing timestamps in some different format, it might not be obvious that those are zero dates - you might assume there's just a lot of transactions processed on that day.
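The two conventions are easy to demonstrate in code. Here's a minimal sketch (not any particular library's implementation) of how spreadsheet serial day-numbers map to calendar dates once the phantom 1900-02-29 is accounted for:

```python
from datetime import date, timedelta

def excel_serial_to_date(serial: int) -> date:
    """Convert a spreadsheet day-number to a calendar date.

    Day 1 is 1900-01-01, but day 60 is the nonexistent 1900-02-29
    (the Lotus 1-2-3 leap-year bug), so every serial from 61 onward
    effectively behaves as "days since 1899-12-30".
    """
    if serial >= 61:
        return date(1899, 12, 30) + timedelta(days=serial)
    # Before the phantom leap day, the effective epoch is 1899-12-31.
    return date(1899, 12, 31) + timedelta(days=serial)

print(excel_serial_to_date(0))   # the "zero date": 1899-12-31
print(excel_serial_to_date(1))   # 1900-01-01
```

A historian querying exported data would thus see a suspicious pile of transactions dated 1899-12-31, which is exactly the zero-default effect described above.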
2 days ago
Staggeringly thorough, thank you!
14 hours ago
I knew the 1970 thing but didn't understand the provenance of the 1899 one; I assumed it had something to do with the Y2K bug (as a colleague of mine put it, "1899 and 1999 were two very different years, but they had one thing in common: the next year would be 1900")

Designing Better Inline Validation UX

1 Share

Undoubtedly, there are major advantages of inline validation. We validate input as users type, and so as people move from one green checkmark to another, we boost their confidence and create a sense of progression. If an input expects a particular type of content, we can flag errors immediately, so users can fix them right away. This is especially useful when choosing a secure password, or an available username.

Sometimes inline validation works really well, for example with a password strength meter, as used by Slack. (Image credit: Error Forms Design Guidelines)

However, inline validation can be problematic. Mostly because we can’t really show an error at just the right time when it occurs. And the reason for that is that we can’t really know for sure when a user has actually finished their input, unless they explicitly tell us that they have.

Clicking on a “Submit” button is a very clear signal that users think they are done, but our implementations usually consider leaving an input field as a strong enough signal to kick off the validation for that input field. Often it will be a correct assumption, but since it’s merely an assumption, eventually it will be wrong for some users — we just don’t know how many people, and how often, will be affected by it.

Surely we don’t want to show “wrong” errors; nor do we want to confuse and frustrate users with flashing error messages as they type. We want to show errors as they happen, and we want to replace them with friendly green checkmarks once they are fixed. How challenging can that be to implement? As it turns out, it is indeed quite challenging.

This article is part of our ongoing series on design patterns. It’s also a part of the 4-week live UX training 🍣 and will be in our recently released video course soon.

The Many Faces Of Inline Validation

There are surprisingly many flavours of inline validation. Over the years, we’ve learned to avoid premature validation — inline validation that happens when a user just focuses on an empty input field. In that case, we would display errors way too early, before users even have a chance to type anything. This isn’t helpful, and it is frustrating.

Eventually we moved to real-time validation which happens as users are typing. To do that, for every single field, we define a threshold of entered characters, after which we start validating. This doesn’t really remove frustration entirely, but rather delays it. As users eventually reach the threshold, if the input isn’t complete or properly formatted yet, they start getting confronted with flashes of premature error messages.

Inline validation also typically requires quite elaborate and strict formatting rules. For example, at which point should we validate a day and a month for a date input? Would we validate them separately, or validate the date as a whole? Because both day and month inputs are interdependent, getting inline validation right there is difficult. From testing, it seems that validating the date at once helps avoid premature errors for good. In practice, each input, and each type of input, requires a conversation about custom validation rules.
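The "validate the date as a whole" approach can be sketched like this (the function name and message are illustrative, not from any framework):

```python
from datetime import date
from typing import Optional

def validate_date_parts(day: str, month: str, year: str) -> Optional[str]:
    """Return an error message, or None if there is nothing to flag yet.

    Day and month are interdependent, so nothing is validated until all
    three parts are filled in; partial input never triggers a premature
    error.
    """
    if not (day and month and year):
        return None  # the user is still typing
    try:
        date(int(year), int(month), int(day))
    except ValueError:
        return "Please enter a valid date."
    return None
```

With this rule, typing "31" into the day field raises no error on its own; only the combination "31 / 02 / 2023" does.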

The most common type of inline validation is late validation: we validate input once a user has left the input field (on blur), and just let them be while they are filling in or copy-pasting the data. This surely helps us avoid flashes of errors. However, we assume a particular order of progression from one field to another. We also prompt users to interrupt their progression and head back to fix an error once it has happened.

So which inline validation technique works best? As it turns out in usability testing, users sincerely appreciate both — the live validation and the late validation — if things go perfectly smoothly. Ironically, they also feel utterly frustrated by any kind of inline validation once errors start showing up one after another.

The Downsides Of Inline Validation

This frustration shows up in different ways, from task abandonment to an increased frequency of errors. And usually it’s related to a few well-known issues that inline validation always entails:

  • Inline validation always interrupts users.
    A user might be just trying to answer a question, but error messages keep flashing in front of them as they type. That’s annoying, disruptive and expensive.
  • Inline validation often kicks in too early or too late.
    Errors appear either when the user is typing, or once they have moved to the next input field. Both of these options aren’t ideal: the user is interrupted as they type, or they are focused on the next question, yet we prompt them to fix their previous answer.
  • Inline validation isn’t reliable enough.
    Even though an inline validator might give the user’s input a green light, it can still flash an error message once the input has been re-checked on the server. A correct format doesn’t mean that the input is also accurate.

This applies, for example, to ill-formatted VAT numbers, which always start with a two-letter country prefix, such as DE or LT. But it also helps with any standardized input, such as IBAN numbers, credit card numbers, prefixed insurance policy numbers or lengthy digits-only gift coupon codes.
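Format checks like these are simple pattern matches. A rough sketch for EU VAT numbers (the regex is deliberately simplified; the real per-country formats differ and should come from an authoritative source):

```python
import re

# Two-letter country prefix followed by 2-12 alphanumeric characters.
# Deliberately lax: actual VAT number formats vary per country.
EU_VAT = re.compile(r"^[A-Z]{2}[A-Z0-9]{2,12}$")

def vat_format_error(raw: str) -> bool:
    """True if the input cannot possibly be a well-formed EU VAT number."""
    normalized = raw.replace(" ", "").upper()
    return EU_VAT.match(normalized) is None
```

Note that passing this check only means the format is plausible; whether the number actually exists still has to be verified server-side.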

We also want to avoid wrong assumptions or wasted time between pages that potentially don’t even apply to users. The more severe an error is, the more likely it is that users might want to see it sooner, rather than later. However, when we do display errors, we need to ensure users will appreciate the interruption.

2. Late Validation Is Almost Always Better

Especially for complex forms, with plenty of columns, view switchers and filters, premature error messages are often perceived as an annoyance, and a very distracting one. As users are typing, any kind of distraction in such environments is highly unwanted and counter-productive. In fact, distraction often leads to even more errors, as well as reduced data accuracy and increased completion times.

Late validation almost always performs better. It’s just that by validating late, we can be more confident that the user isn’t still in the process of typing the data in the input field. The main exception would be any kind of input for which users can benefit from real-time feedback, such as a password strength meter, a choice of an available username, or a character count limit. There we need to respond to the user’s input immediately, as not doing so would only slow down users desperately trying to find their way around the system’s requirements.

In practical terms, that means that for every input in a form, we need to review just what kind of feedback is needed, and design the interaction accordingly. It’s unlikely that one single rule for all inputs will work well: to be effective, we need a more granular approach, with a few validation modes that could be applied separately for each individual input.

3. Validate Empty Fields Only On Submit

Not all errors are equally severe, of course. Surely sometimes input is just ill-formatted or erroneous, but how do we deal with empty form fields or indeterminate radio buttons that are required? Users might have left them empty accidentally or intentionally, and there isn’t really a sure way for us to predict it. Should we throw an error immediately once the user has left the field empty? The answer isn’t obvious at all.

The user might have indeed overlooked the input field, but that’s not the only option. They might just as well have jumped into the wrong field by mistake, and left it right away. Or they had to jump back to the previous field to correct an error triggered by the validator. Or they skipped the input field because they just wanted to get something else out of the way. Or maybe they just had to clear up their input and then move to another browser tab to copy-paste a string of text.

In practice, getting the UX around empty fields right is surprisingly difficult. Yet again, we can’t predict the context in which a user happens to find themselves. And as it turns out, they don’t always have a perfectly linear experience from start to finish — it’s often chaotic and almost unpredictable, with plenty of jumps and spontaneous corrections, especially in complex multi-column forms. And as designers, we shouldn’t assume a particular order for filling out the form, nor should we expect and rely on a particular tabbing behavior.

In my experience, whenever we try to flag the issues with empty fields, too often we will be pointing out mistakes prematurely. A calmer option is to validate empty fields only on submit, as it’s a clear indicator that a user indeed has overlooked a required input as they wish to proceed to the next step.

The earliest time to show an error message is when a user leaves a non-empty input field. Alternatively, depending on the input at hand, we might want to define a minimum threshold of characters, after which we start validating.
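The submit-time check itself is trivial; the point is purely when it runs. A sketch (field names are hypothetical):

```python
def submit_errors(values: dict, required: list) -> dict:
    """Flag required fields that are still empty, but only on submit,
    never while the user is moving between fields."""
    return {
        name: "This field is required."
        for name in required
        if not values.get(name, "").strip()
    }
```

For example, `submit_errors({"email": "a@b.c", "name": " "}, ["email", "name"])` flags only the `name` field; during tabbing and jumping around the form, the function is simply never called.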

4. Reward Early, Punish Late

Another issue that shows up eventually is what should happen if a user chooses to change an input field that’s already been validated? Do we validate immediately as they edit the input, or do we wait until they leave the input field?

As Mihael Konjević wrote in his article on the Reward Early, Punish Late pattern, if a user edits an erroneous field, we should be validating immediately, removing the error message and confirming that the mistake has been fixed as soon as possible (reward early). However, if the input was valid already and it is being edited, it’s probably better to wait until the user moves to the next input field and flag the errors then (punish late).

Reward users early if they fixed a mistake, and punish them later, once they’ve left the input field. A solution by Mihael Konjević.

In technical terms, we need to track the state and contents of each input field, and have thresholds for when we start validating, and then have rules for changing input fields that have been validated already, or that contain errors.
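That per-field state tracking can be sketched like this (a minimal model of the reward-early, punish-late pattern, not a drop-in implementation):

```python
from typing import Callable, Optional

class FieldValidator:
    """Reward early, punish late, tracked per input field."""

    def __init__(self, check: Callable[[str], Optional[str]]):
        self.check = check   # returns an error message, or None if valid
        self.error = None    # the error currently shown, if any

    def on_change(self, value: str) -> Optional[str]:
        # Reward early: while an error is showing, re-check on every
        # keystroke so the message disappears the moment it's fixed.
        if self.error is not None:
            self.error = self.check(value)
        return self.error

    def on_blur(self, value: str) -> Optional[str]:
        # Punish late: a field that looked fine is only re-checked
        # once the user leaves it.
        self.error = self.check(value)
        return self.error
```

A field with no visible error stays quiet while the user types and is judged on blur; a field with a visible error is re-validated on every keystroke so the fix is confirmed immediately.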

As it turns out, the implementation isn’t trivial, and making it work in a complex form will require quite a bit of validation logic. Beware that this logic might also be difficult to maintain if some fields have dependencies or show up only in certain conditions.

5. Prioritize Copy-Paste UX Over Inline Validation

For pretty much any form, copy-paste is almost unavoidable. To many users, this seems like a much more accurate way of entering data, as they are less likely to make mistakes or typos. While this is less typical for simple forms such as eCommerce checkout or sign-up forms, it is a common strategy for complex enterprise forms, especially when users complete repetitive tasks.

However, copy-paste is often inaccurate, too. People tend to copy-paste too few or too many characters, sometimes with delimiters, and sometimes with “too many” empty spaces or line breaks. Sadly, this often doesn’t work as expected as the input gets truncated, causes a flash of error messages or breaks the form altogether. Not to mention friendly websites that sometimes conveniently attach a string of text (URL or something similar) to the copied string, making copy-pasting more difficult.

In all of these situations, inline validation will flag errors, and rightfully so. Of course, in an ideal world, pasting would automatically remove all unnecessary characters. However, as text strings sometimes get appended to copied text automatically, even that wouldn’t really help. So if it’s not possible, an interesting alternative would be to add a “clean-up” feature that would cleanse the input and remove all unnecessary characters from it. Surely we’d also need to confirm with the user that the input is still right.
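Such a clean-up step could look like this (a hypothetical helper; which characters count as noise depends on the type of input at hand):

```python
import re

def cleanup_pasted(raw: str) -> str:
    """Strip whitespace, line breaks and common delimiters from a
    pasted code, so the user can confirm the cleaned-up value instead
    of fighting a flash of format errors."""
    return re.sub(r"[\s\-.,/]", "", raw)
```

For example, `cleanup_pasted(" 1234-5678 9012\n3456 ")` yields `"1234567890123456"`, which can then be validated as a whole.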

If instead, after copy-pasting, some parts of the input are automatically removed, or auto-formatted in a wrong way, it can become quite a hassle to correct the input. If we can auto-correct reliably, it’s a good idea to do so; but often getting it right can be quite difficult. Rather than correcting their own mistakes, users now have to correct the system’s mistakes, and this rarely results in improved user satisfaction. In such situations, users sometimes would remove the entire input altogether, then take a deep breath and start re-typing from scratch.

Typically, wrong auto-correct happens because the validator expects a very specific format of input. But should it actually? As long as the input is unambiguous, shouldn’t we accept pretty much any kind of input, in a form that users would prefer, rather than the one that the system expects?

A good example of that is a phone number input. In most implementations, one would often integrate fancy drop-downs and country selectors, along with auto-masking and auto-formatting in the phone number input field. Sometimes they work beautifully, but sometimes they fail miserably — mostly because they collide with the copy-paste, literally breaking the input. Not to mention that carefully selecting a country’s international code from a drop-down is much slower than just typing the number directly.

What’s wrong with auto-formatting, by the way? Just as inline validation is never fully reliable, neither is auto-formatting. The phone number, for example, could start with +49, or 0049, or just the country code 49. It might contain an extension code, and it might be a mobile phone number or a landline number. The question is, how can we auto-format reliably and correctly most of the time? This requires a sophisticated validator, which isn’t easy to build. In practical terms, for a given implementation, we need to test just how often auto-formatting fails and how exactly it fails, and refine the design (and implementation) accordingly.
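A lenient normalizer that accepts all of those spellings might look like this. The default country code is an assumption for illustration; production code should lean on a dedicated library such as libphonenumber rather than hand-rolled rules:

```python
import re

def normalize_phone(raw: str, default_cc: str = "49") -> str:
    """Accept +49…, 0049…, 0-prefixed national numbers or bare digits,
    and return a canonical +<countrycode><number> string.

    A sketch only: extensions, trunk prefixes other than 0, and
    per-country length rules are deliberately ignored.
    """
    digits = re.sub(r"[^0-9+]", "", raw)  # drop spaces, dashes, parens
    if digits.startswith("00"):
        return "+" + digits[2:]
    if digits.startswith("+"):
        return digits
    if digits.startswith("0"):
        return "+" + default_cc + digits[1:]
    return "+" + digits
```

The point is the direction of tolerance: the user types in whatever form they prefer, and the system converges on one canonical representation, instead of rejecting input that doesn’t match a single expected mask.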

One more thing that’s worth mentioning: disabling copy-paste is never a good idea. When we disable copy-paste for the purpose of security (e.g. email confirmation), or to prevent mistakes, people often get lost in the copy-paste-loop, wasting time trying to copy-paste multiple times, or in chunks, until they eventually give up. This doesn’t leave them with a thrilling sense of accomplishment, of course. And it does have an impact on the user satisfaction KPI.

In general, we should always allow users to type in data in their preferred way, rather than imposing a particular way that fits us well. The validation rules should support and greenlight any input as long as it’s unambiguous and not invalid (e.g. containing letters for phone input doesn’t make sense). The data cleaning, then, can be done either with late validation or on the server-side in a post-processing step.

6. Allow Users to Override Inline Validation

Because inline validation is never bulletproof, there will be situations when users will be locked out, without any option to proceed. That’s not very different from disabled buttons, which often cause nearly 100% abandonment rates. To avoid it, we always need to provide users with a way out in situations when inline validation fails. That means adding an option to override validation if the user is confident that they are right.

To support overrides, we can simply add a note next to the input that seems to be erroneous, prompting users to review their input and proceed despite the inline validation error, should they want to do so.

We surely will end up with some wrong input in our database, but it might be quite manageable and easy to correct — and also worth it, if we manage to boost conversion as a result of that. Eventually, it’s all about making a case around the value of that design decision.

To get there, we need to measure the impact of overrides for a few weeks. We need to understand just how much more revenue is coming through with the override, and just how much inaccurate input, and what costs, we incur because of it. The decision, then, should be based on these metrics and data, captured by design KPIs. This will give you a comparison to see how costly inline validation actually is, and let you make a case for keeping it, getting buy-in to adjust it, or abandoning it.

7. Just-In-Time Validation

It might feel perfectly obvious that inline validation is the perfect tool to validate complex input. If a user types in a 16-digit gift code, or a lengthy insurance policy number, providing them with confidence about their input is definitely a good idea.

But typing complex data takes time and effort. For lengthy input, users often copy-paste or type chunks of data in multiple steps, often with inline validation flashing left and right as they enter and leave input fields. And because the input isn’t simple, they often review their input before proceeding to ensure that they haven’t made any mistakes. This might be one of the cases where inline validation is too much of a distraction at the time when users are heavily focused on a task at hand.

So what do we do? Well, again, we could allow users to validate their input only when they are confident that it is complete. That’s the case with the just-in-time validation: we provide users with a “Validate” button that kicks off the validation on request, while the other fields are validated live, immediately.

However, whenever many pieces of content are compounded in a large group and have restrictive rules — like the credit card details, for example — it’s better to live validate them all immediately. This can help users avoid unnecessary input and change the type of input if needed.

8. For Short Forms, Consider Validation on Submit Only

Once we validate just-in-time, we can of course go even further and validate only on submit. The benefit of it is obvious: users are never distracted or annoyed by validation, and have full control over when their input is validated.

However, the pattern doesn’t seem to work well for lengthy pages with dozens and dozens of input fields. There, users often end up typing a lot of unnecessary data before they realize that their initial input isn’t really applicable. But perhaps we could avoid the issue altogether.

As it turns out, shorter pages usually perform better than one long page. In fact, for sophisticated forms, a better way to deal with complex journeys is to simplify them. We produce a sort of dashboard of tasks that a user has to complete in our complex journey, and dedicate single pages to single tasks. In detail, it works like this:

  • We split a complex form into small tasks = pages (with the one-thing-per-page pattern);
  • For every page, we validate (mostly) on submit, as users are moving from one page to the next;
  • We provide users with a task list pattern and support navigation between them, with the option to save input and continue later.

Not only does the approach make the form much simpler to manage; because each part of the journey is quite simple and predictable, users are also less likely to make mistakes, and if they do, they can recover from them quickly — without jumping all over the entire form. Definitely an approach worth testing once you end up with a slightly more complex user journey.

When Inline Validation Works

We’ve gone all the way from the issues around inline validation towards the option to abandon it altogether. However, it’s worth stating that inline validation can be very helpful as well. It seems to be most effective when mistakes are common and quite severe.

For example, inline validation is very useful with a password strength meter. When we describe and live-update password rules as users type, it helps users choose a password that matches all requirements, is secure and won’t trigger any error messages.

Users also appreciate immediate help with any kind of complex input. And, with inline validation, users would never fill out entire sections in the form just to realize that these sections do not apply to them.

All of these advantages make inline validation a thrilling and thriving UX technique — especially in situations when most form fields are likely to be completed by the browser’s autofill. However, if the inline validation is too eager, users quickly get utterly frustrated by it when errors start creeping in.

Wrapping Up

Inline validation is useful, but when it fails, its costs can be quite high. With just-in-time validation, the reward-early-punish-late pattern and validation on submit, we avoid unnecessary distractions, complex logic and layout shifts altogether, and communicate errors without annoying users too early or too late.

The downside is, of course, the error recovery speed, which certainly will be slower, yet in the end, the number of errors might be lower as well because we’ve simplified the form massively. It’s just much more difficult to make mistakes if you have just 3–4 input fields in front of you. And that might be just enough to reduce the frequency of errors and increase completion rates.

Meet “Smart Interface Design Patterns”

If you are interested in similar insights around UX, take a look at Smart Interface Design Patterns, our shiny new 8h-video course with 100s of practical examples from real-life projects. Plenty of design patterns and guidelines on everything from accordions and dropdowns to complex tables and intricate web forms — with 5 new segments added every year. Just sayin’! Check a free preview.

Meet Smart Interface Design Patterns, our new video course on interface design & UX.

100 design patterns & real-life examples.
8h-video course + live UX training. Free preview.

Related UX Articles


Modern alternatives to BEM


When I first heard Nicole Sullivan talk about OOCSS, I thought “Oooh, smart.” When I read Jonathan Snook’s riff on that idea in SMACSS I thought “Oooh, smart.” When I heard Harry Roberts say “never use IDs in your CSS files” I said “Oooh, smart.”

But when BEM and roboclasses came around… I didn’t have the same reaction. I never felt attracted to these tools even though thousands of developers had success with them. I’m not sure why I never jumped in, but I’m sure verbosity and/or tooling fatigue played a part. Ultimately, I ended up settling on my own SMACSS/BEM hybrid that’s more of a .block--variant .generic {} pattern with some global utilities mixed in.

I’m not anti-BEM nor anti-roboclasses, but as we enter a new era for CSS I think we have an opportunity to rethink best practices around architecting CSS. Before we get lost rethinking the wheel, let’s hold on to some good principles from the past decade or so:

  • We want to author in components
  • We generally want low-specificity to avoid collisions
  • We want a bucket (sometimes a very large bucket) of global utility classes or variables for ad-hoc composition or customizations

With those principles, let’s dive into some CSS architecture alternatives.


When Andy Bell introduced CUBE, I instantly gravitated to it. CUBE is basically the style of CSS I write but with a little more structure and language around it.

- Composition
- Utilities
- Block
- Exception

One nice thing about CUBE is that it’s less prescriptive about how you author “blocks” (read: components). You could totally use BEM inside that zone if you wanted. I like how CUBE embraces the cascade, utilities, components, and one-off exceptions that always drag a codebase down over time. CUBE has a “real world” quality to understanding how CSS degrades over time.


Let’s go all-in on the components angle. Did you know the web platform has a native built-in to control style collisions!?! In Web Components you get component-level scoping right out of the gate with Shadow DOM and you may find yourself writing CSS like it’s the year 2000 again!

:host element .class:state { } 
/* the `:host` part is really just here for the acronym */

Be as specific as you want, because the CSS never escapes the :host component. If the whole reason you were writing BEM or preprocessing with some other scoping mechanism was to avoid specificity collisions, the Shadow DOM eliminates that need.

The tradeoff here is Web Components eject from the cascade. That means no global utility classes unless you import or adopt them into your component. CSS variables operate in a strange land of inheritable styles that do pass through the Shadow DOM, but not all Web Components support CSS variables, so your ability to control style gets limited by the components you use or the amount of Light DOM in your components.

If you want to learn more about isolated styling in the Shadow DOM, check out my Web Components course on Frontend Masters.


If your goal is to reduce specificity, new native CSS tools make reducing specificity a lot easier. You can author your CSS with near-zero specificity and even control the order in which your rules cascade.

- :where() # Zero specificity
- :is()    # Specificity is highest selector in group
- @layer   # Control order styles apply
- @scope   # Control when styles stop and start applying

The new’ish :where() and :is() selectors give us some incredible de-specifying tools. Tools like @layer (available now) and @scope (coming soonish?) give us new powers to control the application of CSS rules. @layer can be set up to control what order styles apply, and @scope controls where styles start and stop applying.

Over-abstracting in my head a bit, WILS is probably best on a global utility layer or base component library where you need to de-specify as much as possible.


What about going all-in on controlling order of application? CSS Cascade Layers are like “folders for CSS” where you add rules to those layer folders and specify the order the layers apply.

In 2012, Chris Coyier wrote a post called One, Two, or Three to answer the question of “How many CSS files should there be on a website?”. He came to the conclusion that “three” sounds about right: Global styles, Page level styles, and Section-specific styles. I feel like this technique still works and with some modernizations, we end up with something like this:

@layer global, page, component;

@layer global {
  /* reset goes here */
  /* utilities go here */
}

@layer page {
  /* about page layout goes here */
}

@layer component {
  /* component styles go here */
}

@layer page {
  /* oops, more page styles go here */
}

@layer component {
  /* oh yeah, one more component I forgot */
}
All your global CSS goes in the global layer, page-specific CSS goes in the page layer, and component (section-specific) styles go in the component layer. You can author layered styles anywhere and they’ll organize themselves in the correct order. Amazing!

We can get weirder I’m sure, but this is a nice baseline for me I think.

Other acronyms on the horizon

I’ve presented some ideas and riffs, some are good and some are bad, but the whole point of this post is that there’s heaps of new tools that might reshape how we write CSS. I’m excited to read about new organization systems and how others architect their CSS in this new styling reality.

For example, the other day I dug into Miriam’s CSS setup on her new site, oversimplified, it looks a bit like this:

@layer spec, browser, reset, default, features, layout, theme

This system demonstrates how styles apply from the spec text to the browser all the way to the custom theme layer. You can select your level of enhancement and roll back the fidelity of her site if you desire. It’s clever, illustrative, and very on-brand for the person who wrote the Cascade Layers spec.

While SBRDFLT doesn’t exactly roll off the tongue (”S-Bird Felt”?), maybe it’s time we break out of catchy three letter acronyms and focus on finding good scalable architecture patterns instead. I look forward to seeing what systems people come up with in this new era of CSS. Bonus points if you blog about it and explain the problems you’re trying to solve.

Edit: Previous versions of this article said @layer and @scope were high specificity. Updated per feedback


OAuth2 explained with cute shapes


We’re currently refurbishing 🙃 our authentication stack at Back Market, and we need to onboard our developers and teams to various OAuth2 concepts. Here’s our take on OAuth2 (with cute shapes).

OAuth2 explained with cute shapes

Discuss on Changelog News


QR codes


Ever wondered how a QR code works? No, me neither but it's low-key fascinating. (Warning, there is some extremely nerdy shit here.👇 )

The Quick Response code was invented by a subsidiary of Toyota to track parts across the manufacturing process. Barcodes were proving inadequate: they can only be read at certain angles and don't store much data relative to their size. The QR code solves those issues and more.

The most distinctive thing about a QR code is these cube shapes, called Finder Patterns, that help your reader detect the code. The smaller fourth cube, the Alignment Pattern, orientates the code so it can be at any angle and the reader will still know which way is up.

You've probably never noticed, but every QR code has these alternating black and white dots, called the Timing Pattern. These tell the reader how big a single module is and how big the whole QR code is, known as the version (version 1 is the smallest; version 40 is the biggest).

Information about the format is stored in these two strips near the Finder Patterns. It's stored twice so it's readable even when the QR code is partially obscured. (You'll notice that this is a recurring theme.)

This stores three crucial pieces of information: the mask, the error correction level, and the error correction format. I know these sound super fucking boring but they are actually pretty interesting.

First, error correction - what is it? Essentially, it dictates how much redundant information is stored in the code to ensure it remains readable even when part of it is missing.

This is pretty amazing - If your code is outdoors you can choose a higher redundancy level to make sure it still functions when obscured. (try it)
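For reference, the four standard levels and their approximate recovery capacity, per the QR spec, with a toy heuristic for choosing one:

```python
# Approximate fraction of codewords that can be restored at each
# error-correction level (figures from the QR code specification).
EC_RECOVERY = {"L": 0.07, "M": 0.15, "Q": 0.25, "H": 0.30}

def pick_ec_level(may_be_obscured: bool) -> str:
    """Toy heuristic: choose more redundancy when the code is likely to
    be partially covered, e.g. printed outdoors."""
    return "H" if may_be_obscured else "M"
```

More redundancy also means a denser (or larger) code for the same payload, so the choice is a trade-off, not a free lunch.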

Second, the mask - what's that? Well, QR readers work best when there are the same amount of white and black areas. But the data might not play ball so a mask is used to even things out.

When a mask is applied to the code, anything that falls under the dark part of the mask is inverted. A white area becomes black and a black area becomes white.

There are 8 standard patterns which are applied one by one. The pattern that achieves the best result is used and that info is stored so the reader can unapply the mask.
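Mask pattern 0, for instance, inverts every module where the row and column indices sum to an even number. Since applying the same mask twice restores the original, the reader can "unapply" it the same way (a sketch over a 0/1 module grid):

```python
def apply_mask0(modules):
    """XOR each module with QR mask pattern 0: invert wherever
    (row + column) % 2 == 0. Applying it twice is a no-op."""
    return [
        [bit ^ ((r + c) % 2 == 0) for c, bit in enumerate(row)]
        for r, row in enumerate(modules)
    ]
```

The other seven patterns differ only in the condition inside the XOR (column-only stripes, diagonals, and so on).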

Finally we get to the actual data. Weirdly, the data starts at the bottom-right corner and winds back up like pictured. It almost doesn't matter where it starts because it can be read at any angle.

The first chunk of information here tells the reader what mode the data was encoded in and the second tells it the length. In our case each character takes up 8 bit chunks, otherwise known as bytes, and there are 24 of them.
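Reading that header is straightforward bit-slicing. A sketch for byte mode on small QR versions, where the length field is 8 bits wide (on larger versions the field widths differ):

```python
def decode_header(bits: str):
    """Parse the 4-bit mode indicator and the 8-bit character count
    from the start of a (version 1-9) QR bit stream."""
    mode = int(bits[:4], 2)       # 0b0100 means byte mode
    length = int(bits[4:12], 2)   # number of characters that follow
    return mode, length
```

For the 24-byte payload described above, the stream would start with "0100" (byte mode) followed by "00011000" (24).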

There is still a bunch of left over space after our data. This is where the error correction information is stored so that it can be read if partially obscured. The way this works is actually really really complex so I'll leave that out. That's basically it!

For the absolute nerds who made it here, a fun fact: Perhaps the coolest thing about QR codes is that Denso Wave, the company that invented them, never exercised their patent and released the technology for free! https://www.denso-wave.com/en/technology/vol1.html





:loadGive: One of the things on that to-do list is to post the next Plucked Up pages this weekend!

Title thanks to DakEJP from the stream! :loadLove:

Read the whole story
6 days ago
Melbourne, Australia