
Modern PHP Cheat Sheet

# Destructuring (PHP 7.1)

You can destructure arrays to pull out several elements into separate variables:

$array = [1, 2, 3];

// Using the list syntax:
list($a, $b, $c) = $array;

// Or the shorthand syntax:
[$a, $b, $c] = $array;

You can skip elements:

[, , $c] = $array;

As well as destructure based on keys:

$array = [
    'a' => 1,
    'b' => 2,
    'c' => 3,
];

['c' => $c, 'a' => $a] = $array;

# Rest and Spread Operators (PHP 5.6)

Arrays can be spread into functions:

$array = [1, 2];

function foo(int $a, int $b) {  }

foo(...$array);
Functions can automatically collect the rest of the arguments using the same operator:

function foo($first, ...$other) {  }

foo('a', 'b', 'c', 'd'); // $other contains ['b', 'c', 'd']

Rest parameters can even be type hinted:

function foo($first, string ...$other) {  }

foo('a', 'b', 'c', 'd');

Since PHP 7.4, arrays with numerical keys can also be spread into a new array:

$a = [1, 2];
$b = [3, 4];

$result = [...$a, ...$b]; // [1, 2, 3, 4]

Starting from PHP 8.1, you can also spread arrays with string keys.
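As a minimal sketch (the array contents are illustrative), string keys are preserved when spreading, and a duplicate key would be overwritten by whichever entry comes last:

```php
<?php

$person = ['name' => 'Brent'];

// String keys survive the spread (PHP 8.1+); later entries
// overwrite earlier ones that use the same key.
$result = ['age' => 30, ...$person];

// $result is now ['age' => 30, 'name' => 'Brent']
```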

Read the whole story
3 days ago
Hamburg, Germany
Share this story

NVIDIA Research's GauGAN AI Art Demo Responds to Words

A picture worth a thousand words now takes just three or four words to create, thanks to GauGAN2, the latest version of NVIDIA Research’s wildly popular AI painting demo.

The deep learning model behind GauGAN allows anyone to channel their imagination into photorealistic masterpieces — and it’s easier than ever. Simply type a phrase like “sunset at a beach” and AI generates the scene in real time. Add an additional adjective like “sunset at a rocky beach,” or swap “sunset” to “afternoon” or “rainy day” and the model, based on generative adversarial networks, instantly modifies the picture.

With the press of a button, users can generate a segmentation map, a high-level outline that shows the location of objects in the scene. From there, they can switch to drawing, tweaking the scene with rough sketches using labels like sky, tree, rock and river, allowing the smart paintbrush to incorporate these doodles into stunning images.

The new GauGAN2 text-to-image feature can now be experienced on NVIDIA AI Demos, where visitors to the site can experience AI through the latest demos from NVIDIA Research. With the versatility of text prompts and sketches, GauGAN2 lets users create and customize scenes more quickly and with finer control.

GauGAN2 combines segmentation mapping, inpainting and text-to-image generation in a single model, making it a powerful tool to create photorealistic art with a mix of words and drawings.

The demo is one of the first to combine multiple modalities — text, semantic segmentation, sketch and style — within a single GAN framework. This makes it faster and easier to turn an artist’s vision into a high-quality AI-generated image.

Rather than needing to draw out every element of an imagined scene, users can enter a brief phrase to quickly generate the key features and theme of an image, such as a snow-capped mountain range. This starting point can then be customized with sketches to make a specific mountain taller or add a couple trees in the foreground, or clouds in the sky.

It doesn’t just create realistic images — artists can also use the demo to depict otherworldly landscapes.

Imagine, for instance, recreating a landscape from the iconic planet of Tatooine in the Star Wars franchise, which has two suns. All that’s needed is the text “desert hills sun” to create a starting point, after which users can quickly sketch in a second sun.

It’s an iterative process, where every word the user types into the text box adds more to the AI-created image.

The AI model behind GauGAN2 was trained on 10 million high-quality landscape images using the NVIDIA Selene supercomputer, an NVIDIA DGX SuperPOD system that’s among the world’s 10 most powerful supercomputers. The researchers used a neural network that learns the connection between words and the visuals they correspond to like “winter,” “foggy” or “rainbow.”

Compared to state-of-the-art models specifically for text-to-image or segmentation map-to-image applications, the neural network behind GauGAN2 produces a greater variety and higher quality of images.

The GauGAN2 research demo illustrates the future possibilities for powerful image-generation tools for artists. One example is the NVIDIA Canvas app, which is based on GauGAN technology and available to download for anyone with an NVIDIA RTX GPU.

NVIDIA Research has more than 200 scientists around the globe, focused on areas including AI, computer vision, self-driving cars, robotics and graphics. Learn more about their work.


Svelte Cubed

Creating 3D graphics on the web has never been easier or more accessible. Svelte Cubed lets you build state-driven Three.js scenes with minimal code.

When used with a framework like SvelteKit, Svelte Cubed supercharges your Three.js development with features like hot module reloading, giving you the best possible development experience — while also optimising for user experience with techniques like treeshaking.

Read the documentation and check out the examples to learn more.


★ Apple’s New Self Service Repair Program

Apple Newsroom:

Apple today announced Self Service Repair, which will allow customers who are comfortable with completing their own repairs access to Apple genuine parts and tools. Available first for the iPhone 12 and iPhone 13 lineups, and soon to be followed by Mac computers featuring M1 chips, Self Service Repair will be available early next year in the US and expand to additional countries throughout 2022. Customers join more than 5,000 Apple Authorized Service Providers (AASPs) and 2,800 Independent Repair Providers who have access to these parts, tools, and manuals.

The initial phase of the program will focus on the most commonly serviced modules, such as the iPhone display, battery, and camera. The ability for additional repairs will be available later next year.

“Creating greater access to Apple genuine parts gives our customers even more choice if a repair is needed,” said Jeff Williams, Apple’s chief operating officer. “In the past three years, Apple has nearly doubled the number of service locations with access to Apple genuine parts, tools, and training, and now we’re providing an option for those who wish to complete their own repairs.”

This appears to be a cause for celebration in right-to-repair circles, but I don’t see it as a big deal at all. Almost no one wants to repair their own cracked iPhone display or broken MacBook keyboard; even fewer people are actually competent enough to do so. iFixit, in a celebratory post, claims:

But we’re thrilled to see Apple admit what we’ve always known: Everyone’s enough of a genius to fix an iPhone.

Nonsense. I just don’t see how more than a sliver of people would even want to do this rather than go to a professional shop.

Also, nothing announced today changes the fact that Apple still requires Apple genuine parts for all authorized repairs, no matter who does the repairing. There’s good reason for that, and it’s not a money grab. Today’s announcement, to my eyes, is about nothing more than reducing regulatory pressure from legislators who’ve fallen for the false notion that Apple’s repair policies, to date, have been driven by profit motive — that Apple profits greatly from authorized repairs, and/or that their policies are driven by a strategy of planned obsolescence, to get people to buy new products rather than repair broken old ones. I don’t believe either of those things,1 but for those who believe either or both, I don’t see how this Self Repair Program really changes anything other than who’s performing the labor.

Brian X. Chen, hailing the announcement in his column at The New York Times:

Apple delivered an early holiday gift on Wednesday to the eco-conscious and the do-it-yourselfers: It said it would soon begin selling the parts, tools and instructions for people to do their own iPhone repairs.

The appeal to do-it-yourselfers is self-evident. I don’t see how this is eco-conscious at all. It doesn’t enable people to repair older devices that Apple itself and authorized repair shops weren’t themselves able to repair.

The company has not yet published a list of costs for parts, but said the prices for consumers would be what authorized repair shops paid. Currently, a replacement iPhone 12 screen costs an authorized shop about $234 after a broken screen is traded in. At an Apple store, repairing an out-of-warranty iPhone 12 screen costs about $280.

In short, you will have more options to mend an iPhone, which can bring your costs down.

Previously, it was easiest to visit an Apple store to get an iPhone fixed. But just as taking your car to a dealer for servicing isn’t the cheapest option, going to an Apple store also wasn’t the most cost-effective.

The alternative was to take your iPhone to a third party for repair, potentially for a more competitive price. When I took a broken iPhone XS screen to an Apple store this year, I was quoted $280 for the repair, compared with $180 from an independent outlet.

Chen is not exactly comparing like-to-like here, with his prices for a replacement iPhone XS display “from an independent outlet” and the $234 Apple charges for an iPhone 12 display component, but it seems pretty clear that for a customer to pay just $180 for the XS screen replacement, including labor, the “independent outlet” was not using Apple genuine parts. How is that relevant to this new Self Service Repair program that is based on buying genuine parts directly from Apple? What we’re looking at here is saving $46. Good luck replacing that screen yourself, without any specialized tooling.

Don’t get me wrong: this program is nice, and perhaps a bit surprising given Apple’s public stance on the issue in recent years. We’re better off with this Self Service Repair program in place than we were without it. (Making service manuals available might actually help extend the lifetime of older devices for which Apple no longer sells parts.) But to me it clearly seems to be a small deal, not a “big deal”, as Chen claims.

And if it is a big deal, it’s for Apple, politically. (Nothing wrong with that.)

  1. While running some benchmarks for another article, today I upgraded my iPhone X from 2017 to iOS 15.1. iOS 15 doesn’t just run on that four-year-old iPhone, it runs great. No company comes close to Apple in supporting older devices for longer. ↩︎


A Guide To Modern CSS Colors


There’s more to color on the web than meets the eye, and it’s about to get a lot more interesting! Today, we’ll take a look at the best ways to use colors in a design system, and what we can expect from our colors in the not-too-distant future.

Well-Known Color Values

There are many different ways to define colors in CSS. CSS named colors are one of the simplest ways to color an element:

.my-element {
  background-color: red;
}

These are very limited, and rarely fit the designs we are building! We could also use color hex (hexadecimal) values. This code gives our element a red background color:

.my-element {
  background-color: #ff0000;
}

Unless you’re a color expert, hex values are very difficult to read. It’s unlikely you would be able to guess the color of an element by reading the hex value. When building a website we might be given a hex color value by a designer, but if they asked us to make it, say, 20% darker, we would have a hard time doing that by adjusting the hex value without a visual guide or color picker.


RGB (red, green, blue) notation is an alternative way of writing colors, giving us access to the same range of colors as hex values, in a much more readable form. We have an rgb() function in CSS for this. Colors on the web are additive, meaning the higher the proportion of red, green and blue, the lighter the resulting color will be. If we only use the red channel, the result is red:

.my-element {
  background-color: rgb(255, 0, 0);
}

Setting the red, green and blue channels to the highest value will result in white:

.my-element {
  background-color: rgb(255, 255, 255);
}

We can also add an alpha channel (for transparency), by using the rgba() function:

.my-element {
  background-color: rgba(255, 0, 0, 0.5); /* transparency of 50% */
}

.my-element {
  background-color: rgba(255, 0, 0, 1); /* fully opaque */
}

rgb() and rgba() allow us to “mix” colors in our code to some extent, but the results can be somewhat unpredictable.


More recently, we have been able to use HSL (hue, saturation, lightness) values, with the hsl() and hsla() color functions. As a developer, these are far more intuitive when it comes to adjusting color values. For example, we can get darker and lighter variants of the same color by adjusting the lightness parameter:

.my-element {
  background-color: hsl(0deg, 100%, 20%); /* dark red */
}

.my-element {
  background-color: hsl(0deg, 100%, 50%); /* medium red */
}

.my-element {
  background-color: hsl(0deg, 100%, 80%); /* light red */
}

The hue parameter represents the position on a color wheel, and can be any value between 0 and 360deg. The function also accepts turn units (e.g. 0.5turn), and unitless values.

The following are all valid:

.my-element {
  background-color: hsl(180deg, 50%, 50%);
}

.my-element {
  background-color: hsl(0.5turn, 50%, 50%);
}

.my-element {
  background-color: hsl(180, 50%, 50%);
}

Tip: Holding down SHIFT and clicking the color swatch in the inspector in Chrome and Firefox dev tools will toggle the color value between hex, RGB and HSL!

hsl() and hsla() lend themselves well to manipulation with custom properties, as we’ll see shortly.


The currentColor keyword is worth a mention as another way of setting a color on an element that’s been around for a while. It effectively allows us to use the current text color of an element as a variable. It’s pretty limited when compared with custom properties, but it’s often used for setting the fill color of SVG icons, to ensure they match the text color of their parent. Read about it here.
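As a minimal sketch (the class names here are hypothetical), an inline SVG icon can pick up whatever text color its parent sets:

```css
/* Illustrative classes, not from the article */
.icon {
  fill: currentColor; /* the icon matches the surrounding text color */
}

.alert {
  color: crimson; /* the text and any .icon inside are now both crimson */
}
```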

Modern Color Syntax

The CSS Color Module Level 4 provides us with a more convenient syntax for our color functions, which is widely supported in browsers. We no longer need the values to be comma-separated, and the rgb() and hsl() functions can take an optional alpha parameter, separated with a forward slash:

.my-element {
  /* optional alpha value gives us 50% opacity */
  background-color: hsl(0 100% 50% / 0.5);
}

.my-element {
  /* With no alpha value the background is fully opaque */
  background-color: hsl(0 100% 50%);
}

New CSS Color Functions


HWB stands for hue, whiteness and blackness. Like HSL, the hue can be anywhere within a range of 0 to 360. The other two arguments control how much white or black is mixed into that hue, up to 100% (which would result in a totally white or totally black color). If equal amounts of white and black are mixed in, the color becomes increasingly gray. We can think of this as being similar to mixing paint. It could be especially useful for creating monochrome color palettes.
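A rough sketch of the syntax (the values are illustrative):

```css
.my-element {
  /* hue of 194 with no white or black mixed in: a pure cyan-blue */
  background-color: hwb(194 0% 0%);
}

.my-element--muted {
  /* equal amounts of white and black gray the color out */
  background-color: hwb(194 40% 40%);
}
```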

Try it out with this demo (works in Safari only):

Why do we need LAB and LCH when we have HSL? One reason is that using LAB or LCH gives us access to a much larger range of colors. LCH and LAB are designed to give us access to the entire spectrum of human vision. Furthermore, HSL and RGB have a few shortcomings: they are not perceptually uniform and, in HSL, increasing or decreasing the lightness has quite a different effect depending on the hue.

In this demo, we can see a stark contrast between LCH and HSL by hitting the grayscale toggle. For the HSL hue and saturation strips, there are clear differences in the perceptual lightness of each square, even though the “lightness” component of the HSL function is the same! Meanwhile, the chroma and hue strips on the LCH side have an almost-uniform perceptual lightness.

We can also see a big difference when using LCH color for gradients. Both these gradients start and end with the same color (with LCH values converted to the HSL equivalents using this converter). But the LCH gradient goes through vibrant shades of blue and purple in the middle, whereas the HSL gradient looks muddier and washed-out by comparison.

LAB and LCH, while perhaps being syntactically a little less intuitive, behave in a way that makes more sense to the human eye. In her article, LCH color in CSS: what, why, and how?, Lea Verou explains in detail the advantages of LCH color. She also built this LCH color picker.

As with other color functions, hwb(), lab() and lch() can also take an optional alpha parameter.

.my-element {
  background-color: lch(80% 240 50 / 0.5); /* resulting color has 50% opacity */
}

Browser Support And Color Spaces

hwb(), lab() and lch() are currently only supported in Safari. It’s possible to start using them straight away by providing a fallback for non-supporting browsers. Browsers that don’t support the color function will simply ignore the second rule:

.my-element {
  /* LCH color converted to RGB using Lea Verou’s tool: https://css.land/lch/ */
  background-color: rgb(98.38% 0% 53.33%);
  background-color: lch(55% 102 360);
}

If other styles depend on newer color functions being supported, we could use a feature query:

.my-element {
  display: none;
}

/* Only display this element if the browser supports lch() */
@supports (background-color: lch(55% 102 360)) {
  .my-element {
    display: block;
    background-color: lch(55% 102 360);
  }
}

It’s worth noting, as Lea explains in her article, that although modern screens are capable of displaying colors beyond RGB, most browsers currently only support colors within the sRGB color space. In the LAB color demo you might notice that moving the sliders beyond a certain point doesn’t actually affect the color, even in Safari where lab() and lch() are supported. Using values outside of the sRGB range will only have an effect when hardware and browsers advance sufficiently.

Safari now supports the color() function, which enables us to display colors in the P3 space, but these are limited to RGB colors for now, and don’t yet give us all the advantages of LAB and LCH.

.my-element {
  background: rgb(98.38% 0% 53.33%); /* bright pink */
  background: color(display-p3 0.947 0 0.5295); /* equivalent in P3 color space */
}

Recommended Reading: “Wide Gamut Color in CSS with Display-P3” by Nikita Vasilyev


Once they are widely supported, perhaps LAB and LCH can help us choose more accessible color combinations. Foreground text should have the same contrast ratio with background colors with different hue or chroma values, as long as their lightness value remains the same. That’s certainly not the case at the moment with HSL colors.

Color Management

A wider range of color functions means we have more options when it comes to managing colors in our application. Often we require several variants of a given color in our design system, ranging from dark to light.

Custom Properties

CSS custom properties allow us to store values for reuse in our stylesheets. As they allow partial property values, they can be especially useful for managing and manipulating color values. HSL lends itself particularly well to custom properties, due to its intuitiveness. In the previous demo, I’m using them to adjust the hue for each segment of the color strip by calculating a --hue value based on the element’s index (defined in another custom property).

li {
  --hue: calc(var(--i) * (360 / 10));
  background: hsl(var(--hue, 0) 50% 45%);
}

We can also do things like calculate complementary colors (colors from opposite sides of the color wheel). Plenty has been written about this, so I won’t cover old ground here, but if you’re curious then Sara Soueidan’s article on color management with HSL is a great place to start.
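As a quick illustration (assuming a unitless --hue custom property is defined elsewhere, as in the earlier snippet), rotating the hue by 180 degrees yields the complementary color:

```css
.primary {
  background: hsl(var(--hue) 50% 50%);
}

.complement {
  /* 180 degrees across the color wheel from the primary hue */
  background: hsl(calc(var(--hue) + 180) 50% 50%);
}
```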

Migrating From Hex/RGB To HSL

RGB colors might serve your needs up to a point, but if you need the flexibility to be able to derive new shades from your base color palette then you might be better off switching to HSL (or LCH, once supported). I would recommend embracing custom properties for this.

Note: There are plenty of online resources for converting hex or RGB values to HSL (here’s one example).

Perhaps you have colors stored as Sass variables:

$primary: rgb(141 66 245);

When converting to HSL, we can assign custom properties for the hue, saturation and lightness values. This makes it easy to create darker or lighter, more or less saturated variants of the original color.

:root {
  --h: 265;
  --s: 70%;
  --l: 50%;

  --primary: hsl(var(--h) var(--s) var(--l));
  --primaryDark: hsl(var(--h) var(--s) 35%);
  --primaryLight: hsl(var(--h) var(--s) 75%);
}

HSL can be incredibly useful for creating color schemes, as detailed in the article Building a Color Scheme by Adam Argyle. In the article he creates light, dark and dim color schemes, using a brand color as a base. I like this approach because it allows for some fine-grained control over the color variant (for example, decreasing the saturation for colors in the “dark” scheme), but still retains the big advantage of custom properties: updating the brand color in just one place will be carried through to all the color schemes, so it could potentially save us a lot of work in the future.

Sass Color Functions

When it comes to mixing and adjusting colors, Sass has provided color functions to enable us to do just this for many years. We can saturate or desaturate, lighten or darken, even mix two colors together. These work great in some cases, but they have some limitations: firstly, we can only use them at compile-time, not for manipulating colors live in the browser. Secondly, they are limited to RGB and HSL, so they suffer from the same issues of perceptual uniformity, as we can see in this demo, where a color is increasingly desaturated yet appears increasingly lighter when converted to grayscale.
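A sketch of those compile-time Sass functions (the selector names are illustrative):

```scss
$base: hsl(265deg, 70%, 50%);

.button {
  background: $base;

  // Resolved once at compile time — not adjustable live in the browser
  &:hover {
    background: lighten($base, 10%);
  }

  &:disabled {
    background: desaturate($base, 40%);
  }
}
```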

To ensure that the lightness remains uniform, we could use custom properties with LCH in a similar way to HSL above.

li {
  --hue: calc(var(--i) * (360 / 10));
  background: lch(50% 45 var(--hue, 0));
}

Color Mixing And Manipulation

Color Mixing

One thing CSS doesn’t yet allow us to do is mix colors in the browser. That’s all about to change: the Level 5 CSS Color Specification (working draft) contains proposals for color mixing functions that sound rather promising. The first is the color-mix() function, which mixes two colors much like Sass’s mix() function. But color-mix() in CSS allows us to specify a color space, and uses LCH by default, with superior mixing as a result. The colors passed in as arguments don’t have to be LCH either, but the interpolation will use the specified color space. We can specify how much of each color to mix, similar to gradient stops:

.my-element {
  /* equal amounts of red and blue */
  background-color: color-mix(in lch, red, blue);
}

.my-element {
  /* 30% red, 70% blue */
  background-color: color-mix(in lch, red 30%, blue);
}

Color Contrast And Accessibility

color-contrast() is another proposed function, which really does have huge implications for picking accessible colors. In fact, it’s designed with accessibility in mind first and foremost. It permits the browser to pick the most appropriate value from a list, by comparing it with another color. We can even specify the desired contrast ratio to ensure our color schemes meet WCAG guidelines. Colors are evaluated from left to right, and the browser picks the first color from the list that meets the desired ratio. If no colors meet the ratio, the chosen color will be the one with the highest contrast.

.my-element {
  color: wheat;
  background-color: color-contrast(wheat vs bisque, darkgoldenrod, olive, sienna, darkgreen, maroon to AA);
}

Because this isn’t supported in any browsers right now, I’ve borrowed this example directly from the spec. When the browser evaluates the expression, the resulting color will be darkgreen, as it is the first one that meets the AA contrast ratio when compared to wheat, the color of the text.

Browser Support

The Level 5 Color Specification is currently in Working Draft, meaning no browsers yet support the color-contrast() and color-mix() functions and their syntax is subject to change. But it certainly looks like a bright future for color on the web!

Environmental Impact Of Colors

Did you know that your chosen color palette can have an impact on how much energy your website uses? On OLED screens (which account for most modern TVs and laptops), darker colors will use significantly less energy than light colors — with white using the most energy, and black the least. According to Tom Greenwood, author of Sustainable Web Design, blue is also more energy-intensive than colors in the red and green areas of the spectrum. To reduce the environmental impact of your applications, consider a darker color scheme, using less blue, or enabling a dark-mode option for your users. As an added bonus, a more environmentally friendly choice of colors can also reduce the impact on the battery life of mobile devices.
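One low-effort way to offer that dark-mode option is the prefers-color-scheme media query; a minimal sketch using custom properties (the colors are illustrative):

```css
:root {
  --bg: #ffffff;
  --text: #222222;
}

/* Users who have opted into dark mode get a darker, less energy-hungry palette */
@media (prefers-color-scheme: dark) {
  :root {
    --bg: #121212;
    --text: #e0e0e0;
  }
}

body {
  background-color: var(--bg);
  color: var(--text);
}
```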

  • Hexplorer, Rob DiMarzo
    Learn to understand hex colors with this interactive visualization.
  • LCH color picker, Lea Verou and Chris Lilley
    Get LCH colors and their RGB counterparts.
  • HWB color picker
    Visualize HWB colors and convert to HSL, RGB and hex.
  • A11y Color Tokens, Stephanie Eckles
    An accessible color token generator.


Want to be great? Know a lot

In his book, Scott H Young argues that the difference between a great programmer and a mediocre programmer is not the ability to solve problems. Both can often get the job done. But the great programmer knows a dozen or more ways to solve the problem and can apply the most effective solution for the case at hand.

Mediocre sounds like a put-down, and we've all been there, so let's call that person less knowledgeable instead. The less knowledgeable programmer only knows a few solutions, or even just one, and deploys that solution unaware that there are better ways.

I can look it up if I need to, you might think. The problem is: how do you look up something you don't even know exists? These are the unknown unknowns. Things you don't know you don't know. And those are indeed hard to look up.

To become better you need to know what's out there. You need to turn the unknown unknowns into known unknowns.

Learning deep vs broad

In the previous posts "how to read a code" and "I don't understand this yet" we looked at learning one subject deeply. Both use a top-down-first approach where you scan for core concepts that you can link more information onto while working bottom-up.

Things change when we're dealing with a huge field of information. We need to focus on a limited set of concepts and then move on. There's simply no time, nor would it be useful to know every little detail, since it might never relate to what you're doing.

What we will be doing is placing a bet. We're betting on our ability to select the most important concepts for future use. If we're skilled and a bit lucky we've focused on the correct concepts so we recognize them when faced with a new problem.

Maybe simply knowing that concept is enough to solve the problem. If not then we know where to go and what to read up on.

Core concepts

When learning a natural language there is a thing called a frequency list. It contains words sorted by their frequency of occurrence in descending order. This is a really neat thing. Learning the most frequent words in a language allows you to get vital context in a lot of sentences.

Similar to frequency lists in a natural language there are concepts in any software product that the rest of the software builds upon. I'll call them core concepts. Let's use git as an example. Three core concepts are commits, branches and conflicts.

If you know these three core concepts you can proceed with further learning. If you don't then any additional learning will be confusing and painful. Lacking frequency lists for concepts in a software package we can use many sources as an approximation to find our core concepts.

If many articles / blogs / repos mention a concept - it's probably important to know. If one or only a few mention it it's likely less important.

Syntopical reading

We return once again to the indispensable book "How to Read a Book". The fourth level of reading is called syntopical reading. It goes as follows: first you have some subject you'd like to know more about. You then build a bibliography of relevant books on that subject.

Quickly scan all selected books and discard those that did not fit. Then you scan the remaining books again and collect relevant passages to your subject. After that you define a neutral terminology and interpret each author's passage into this neutral terminology.

Then you pose a set of questions that the books must answer. You gather arguments that answer those questions and you highlight any issues or contradictions between authors in an orderly fashion.

To build the bibliography the book recommends using a syntopicon. This is a book containing a categorization of existing literature and helps you bootstrap your search for the correct bibliography.

Modified slightly

I do not know of any syntopicons for software packages and even if there were, they'd be outdated so quickly it's not worth it in the first place. So the good ole search engine will serve as our syntopicon. Search engines need to be directed with questions or we'll get way too many hits.

Thus we need to alter the approach somewhat and start with questions we'd like to answer before we select our sources. The first question you should answer is the obvious "What does X do?" where X is the software you want to know more about.

Try a few alternatively phrased questions like "What is X?" to broaden the number of sources. Open 5-10 links that look relevant to you in new tabs and do a quick inspection of each. Discard any that are obviously irrelevant. Then do another quick inspectional reading of the articles left and collect the relevant passages.

If most of them converge on the same-ish answer you can assume you've found your answer. Now that you know what it is, go back again and this time search for a low number (three is good) of core concepts. Questions such as "How does X work?" and "What are the important parts of X" are good.

Do the same as you did above and collect some relevant passages on each of the core concepts you find.

Simplify, stupid

We now have a bunch of relevant passages that say the same-ish thing. But in order to truly make it stick in your head and pop up at the right time we need to actively work with it. According to the book we're going to define a neutral terminology, but let's not stop there.


The Feynman technique is a good way to actively process information to make it stick better. Richard Feynman should hopefully not need any introduction. If you're stumped about this guy, go practice finding out who he was by trying out the method above. It's well worth it.

The Feynman technique goes as follows: write down what you know about a subject. Explain it in words simple enough for a 6th grader to understand. Identify any gaps in your knowledge and read up on that again until all of the explanation is dead simple and short.

What I like about this is the elimination of technical jargon, fancy words and hand waving. They're off limits since the audience does not know programming or have deep technical knowledge.

Ignorance hides in such words. You think you know what it means, but in reality you're just parroting back what the last person said. Only when you can say the definition of a word in simple plain language do you have a firm understanding.

Take your relevant passages collected above and compress them with the Feynman technique. Now you have what a software package does and its core concepts in simple, short language. Good job so far!

Make the information accessible

When pressure is on we tend to shut down mentally. Our thought patterns narrow and we fall back on instincts and well rehearsed routines. This is bad for remembering things you've learnt about broadly. 

They're less rehearsed than what you use frequently and will be further back in your mind exactly when you need them. In order to mitigate this we are going to make the information we just dug up more accessible by actively trying to come up with questions where the information is the answer.

In doing this we're trying to intercept questions we might later ask under pressure, in order to preemptively inject an answer. So with your Feynman-compressed answers above, try to answer the question "When would knowing this be useful?".

Try to envision as many scenarios as possible and write them down. Then try to answer the question "What could I do with this information?" and again envision and write down. Now you've both envisioned situations where something happens to you and where you make something happen with your newly minted nuggets of information.

Hopefully this will make the answer pop up when you need it.

Learn all the things?

My grandfathers were both hoarders of impressive magnitude. Anything would be put away in some storage, reasoning "it might come in handy one day". And a small percentage of everything stored did come in handy. But the rest did not.

There is no shortage of videos, courses, newsletters, hacker news-ish sites so you could spend your day doing nothing but learning broadly. But that way you'd get nothing done.

Information overload may not be a new thing, but the sources above have made it even more rampant.

My grandfathers would have loved bookmarks and incoming reading-list sites. I've applied a heuristic for years. It centers around the topic I find most interesting: myself. If I hear or read about something new I'll look at its relation to what I'm doing. If it's highly related I'll check it out as soon as possible with the process above. If it's weakly related I'll skip it unless I hear about it very frequently. This lessens the information overload, as I will mostly learn things that are interesting and relevant. I usually let the day-to-day toil drive the incoming things to learn. I simply trust that if it's important enough it will surface somewhere somehow.

Use idle time

When does this learning happen? It might sound like a lot of work to go through the steps above. At first it'll be awkward and time-consuming, but once you get the hang of it it can be done very quickly. There have to be some moments in your day when you're idling: compiling, waiting for Bob's code review, a dull meeting, commuting or picking your nose. I'll usually get something like 15 minutes of learning broadly per workday in those moments where I'm idling anyway. Over time that adds up, even though my nose is less clean.
