
Comprehension Debt — the hidden cost of AI-generated code.

emrox · 1 hour ago · Hamburg, Germany

Pikimov, free motion and video app, now $6 standalone for Mac and Windows


Pikimov 5.1.0 is now available as an offline app, bringing all the browser-based video editing and motion graphics capabilities of the tool to a standalone app for Mac and Windows — no internet connection required. The browser version is free forever; for offline use, you pay $6 a month (no commitment). Pikimov is not free as in […]

The post Pikimov, free motion and video app, now $6 standalone for Mac and Windows appeared first on CDM Create Digital Music.

emrox · 4 hours ago · Hamburg, Germany

The best demos of 2025 from the demoscene


The demoscene is a computer subculture focused on realtime graphics and technical creativity. Demos are programs that generate visuals and music in realtime, often under extreme constraints such as very small file sizes or old hardware.

Many software engineers I know are familiar with the demoscene, but very few follow it actively. Every year, around 2,000 productions are released. Once in a while, a masterpiece like .kkrieger or Elevated becomes famous, but most of the best work stays inside the scene.

To filter the noise, juries of demosceners nominate the best productions of the year for the Meteoriks awards. The 2026 nominees were just announced, so it’s a good excuse to watch a few demos.

I’ll highlight the demos that particularly impressed me this year. Note that I have a bias toward modern platforms and size-coding, so don’t hesitate to look at the full list.

Tension (Digital Dynamite x Aenima)

download

This is a 64kB remake of a prerendered video made in 2002. In a 64k intro, you usually design the aesthetics around what’s easy to generate procedurally. Here, the team did the opposite: they set out to recreate, shot for shot, a 23-year-old video, and each shot matches the original very closely. The amount of work is mind-blowing, because everything is high-quality: textures, models, animations, etc.

The trick to fitting all of this in 64kB is heavy use of procedural generation. If you’re curious how these 64kB demos are made, I wrote a series of articles with Zavie that explains many of the techniques: A dive into the making of Immersion.
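
To give a feel for why procedural generation compresses so well, here is a toy illustration (in Python, rather than the GPU code a real intro would use; all names are invented for the sketch): a handful of lines of code stand in for what would otherwise be kilobytes of stored texture data.

```python
# Procedural brick texture: the pattern is computed from (x, y) on the fly,
# so nothing needs to be stored in the executable.

def brick_texture(x, y, brick_w=8, brick_h=4):
    """Return 0 for mortar, 1 for brick, with alternate rows staggered."""
    row = y // brick_h
    x_off = brick_w // 2 if row % 2 else 0   # stagger every other row
    in_mortar = (y % brick_h == 0) or ((x + x_off) % brick_w == 0)
    return 0 if in_mortar else 1

# Render a 16x8 patch as ASCII art:
tex = [[brick_texture(x, y) for x in range(16)] for y in range(8)]
for line in tex:
    print("".join(".#"[v] for v in line))
```

Real 64k intros apply the same idea at much larger scale: textures, meshes, animations, and music are all described by code and parameters instead of stored data.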

To see how the demo compares with the original video, see this side-by-side comparison:

Dune (Alcatraz)

download

This won the 8k compo at Revision. I was in the same competition with my own demo, The Sheep and the Biker (which took 2nd place). While I focused on narrative, Dune focused on atmosphere, and superbly so: each shot is beautiful, with impressive atmospheric lighting. I was particularly impressed by the soundtrack: it’s very high quality and matches the scale of the visuals perfectly.

If you’re curious to see the techniques typically used for making 8k intros, check out my article: How we made an animated movie in 8kB.

No-CPU Challenge (Demostue Allstars)

download

This one is for the Amiga purists. The challenge was to make a demo using zero CPU cycles. Everything is handled by the Copper, Blitter, and DMA. It’s a pure hardware hack production that inspired other people to try the challenge.

I know this is fairly technical; if you’re interested in the details, check out github.com/askeksa/NoCpuChallenge for more information.

Brute Concrete (United Force & Digital Dynamite)

download

Brute Concrete uses a deliberate stop-motion aesthetic with smooth camera paths. The intro is so elegant that you forget it’s just 64kB. I love the aesthetics of this demo: brutalist models, good postprocessing, and a very distinctive style.

Hexer (LJ)

download

Made by the same coder as Dune mentioned above, this is a 4kB demo. It’s difficult to do storytelling in just 4kB and most 4k intros focus on abstract geometry. Here, LJ instead went for a shaky-cam, independent-horror-film vibe with hectic cuts and a thick grain effect. It’s a very clever concept with a very strong direction.

Breach (mfx)

download

The PC demo competition is simply about making a good demo, without the limitations of the other categories. Built with Cables.gl, Breach is a superb, cinematic, realtime artwork. The demo has a high-quality renderer, with lots of interesting effects and an artsy vibe. The soundtrack is a perfect match.

Nine (lft)

download

This is a demo for the Commodore 64 that features 9 moving sprites. That might not sound like much, until you learn the technical details.

C64 hardware has an 8-sprite-per-scanline limit. Presented as a magic show where a magician pulls sprites out of a hat, this demo breaks that limit in a way that looks effortless. If you’re interested in oldschool computers and magic tricks, the making-of video is fascinating:

Wunderlust (Gray Marchers)

download

I’ll finish this list with the PC demo that took 1st place at Assembly 2025. Although PC demos are allowed to use hundreds of megabytes of data, this one is just a 500kB HTML/JS file with an mp3 on the side.

Instead of traditional polygon rendering, this demo uses raymarching, so most of the rendering happens in shaders. It has a beautiful renderer, with nice reflections and light effects. The demo has lots of scenes with nice transitions, a very good sense of flow, feel-good music, and references to demoscene tropes.
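
For readers unfamiliar with raymarching, the core loop can be sketched in a few lines. This is a generic sphere-tracing illustration in Python, not code from the demo, and every name in it is invented; a real demo runs this per-pixel in a GPU fragment shader.

```python
# Sphere tracing: step along a ray by the scene's signed distance until
# we land close enough to a surface.

def sdf_sphere(p, center=(0.0, 0.0, 3.0), radius=1.0):
    """Signed distance from point p to a sphere: negative inside, zero on it."""
    dx, dy, dz = (p[i] - center[i] for i in range(3))
    return (dx * dx + dy * dy + dz * dz) ** 0.5 - radius

def raymarch(origin, direction, max_steps=100, eps=1e-4, max_dist=50.0):
    """March along a normalized ray; return hit distance, or None on a miss."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(origin[i] + t * direction[i] for i in range(3))
        d = sdf_sphere(p)
        if d < eps:
            return t      # close enough to the surface: hit
        t += d            # safe step: no surface is nearer than d
        if t > max_dist:
            break         # wandered off into empty space
    return None

# A ray along +z from the origin hits the unit sphere centered at z = 3
# on its near side:
print(raymarch((0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))  # → 2.0
```

Because the whole scene is just a distance function, swapping `sdf_sphere` for a more elaborate function changes the geometry without adding any mesh data, which is why the technique is so popular in size-coded demos.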

These demos show what the demoscene still does best: pushing hardware, tools, and creativity far beyond what most people expect. The Meteoriks winners will be announced at Revision, the world’s largest demoparty, which takes place on April 3 in Saarbrücken, Germany.

emrox · 3 days ago · Hamburg, Germany

The Productivity Paradox: Why Technology Makes the Economy More Efficient But Most People No Richer


Every decade, technology makes us dramatically more productive. Every decade, GDP growth slows.

These two facts should not coexist. And understanding why they do reveals the defining economic tension of our era.


The Question That Doesn’t Get Asked Enough

Computers cost 92% less than they did in 2000. A phone call across the world is free. You can stream virtually every movie ever made for $15 a month. Amazon can deliver anything to your door in 24 hours.

By every measure of efficiency, we are living in an extraordinary era of technological progress.

So why has average real GDP growth slowed from 4.5% per year in the 1960s to 2.4% in the 2010s and 2020s? Why does the median American household feel squeezed rather than enriched? Where, exactly, do the gains go?

The answer is genuinely counterintuitive — and it starts with a simple observation about consumption.


Part 1: The Consumption Ceiling

Consumer spending as a share of US GDP moved from roughly 61% in 1980 to about 68% today. That’s a modest rise over four decades — and it has essentially plateaued since 2010.

This matters because it tells us something important: technology is not meaningfully expanding the total amount humans consume. It’s redistributing how we consume, and who profits from it.

There are real biological and physical reasons for this ceiling. You can only eat so much. You can only watch one screen at a time. You live in one house. The same 24 hours constrain everyone.

When Netflix replaced cable, the typical household didn’t spend more on entertainment — they spent roughly the same, just with a different company capturing the margin. Uber didn’t add new travel; it displaced taxis. Spotify didn’t make people listen to more music; it replaced album purchases.

Technology redistributes the existing pie. It doesn’t reliably grow it.

This creates the core dynamic: efficiency gains in the “middle layers” of the economy — distribution, logistics, retail, media — don’t expand total spending. They determine who captures the existing spending.


Part 2: Following the Money

The shift becomes visible when you follow a single dollar through one industry.

A physical bookstore in 2000 took in $100 from a book sale and distributed it roughly like this: about 60% went to labor (store staff, publisher employees, authors), 30% went to capital (owner profit, rent), and 10% covered other costs. The money circulated locally through wages.

Amazon today takes in that same $100. The distribution looks fundamentally different: warehouse and tech labor receives roughly 25%, Amazon’s infrastructure and profit captures around 55%, and the remainder flows to publishers and authors. Labor’s share of that transaction dropped by more than half.

This isn’t an Amazon-specific story. It’s visible in the aggregate data.

The labor share of US GDP fell from approximately 64% in 1980 to around 58% today — a 6-percentage-point shift. Applied to a $28 trillion economy, that gap represents roughly $1.7 trillion per year that once flowed to workers but now flows to capital.

Per worker, across 160 million employed Americans: about $10,500 annually.

For a household with two earners, that’s roughly $21,000 a year.

That’s where the productivity gains went: back to capital.
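
The arithmetic behind these figures, as a back-of-the-envelope check using the text’s rough inputs (not official series):

```python
# Labor-share gap applied to current GDP, per the figures above.
gdp = 28e12                    # US GDP, dollars
labor_share_gap = 0.64 - 0.58  # labor share of GDP, 1980 vs. today
workers = 160e6                # employed Americans

gap = gdp * labor_share_gap
print(f"${gap / 1e12:.2f} trillion per year")                 # $1.68 trillion
print(f"${gap / workers:,.0f} per worker")                    # $10,500
print(f"${2 * gap / workers:,.0f} per two-earner household")  # $21,000
```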


Part 3: Why We Don’t See Deflation

If technology dramatically reduces the cost of producing and distributing goods, prices should fall. That’s basic economics.

But they mostly haven’t. Four mechanisms explain why.

Mechanism 1: Monopoly Power

Competitive markets pass savings to consumers through price competition. But tech markets are highly concentrated.

Amazon controls roughly 40% of US e-commerce. Google holds about 89% of global search. When one or two players dominate a market, there’s no competitive pressure to lower prices. Efficiency gains become margin expansion instead.

Average net profit margins for S&P 500 companies have roughly doubled since 2000. That’s not innovation creating new value — that’s market structure allowing companies to keep the gains rather than pass them on.

Mechanism 2: Services Don’t Deflate — They Inflate

This one is structural, not conspiratorial. Economists call it Baumol’s Cost Disease.

Goods that can be automated get cheaper. Services requiring human labor get more expensive.

Between 2000 and 2024 (BLS CPI data):

  • Computer prices fell 92%
  • Consumer electronics broadly collapsed in price
  • Used vehicle prices softened

But over the same period:

  • Healthcare costs rose 123%
  • College tuition rose 177%
  • Median home prices went from $172,900 to $419,300 — up 142%

The problem: services make up roughly 60-70% of what households actually spend money on. The things getting cheaper are ones you buy occasionally. The things getting more expensive are what you pay for every month.

A doctor can’t see 10x more patients by using better software. A teacher can’t teach 10x more students. Their wages must still compete with the broader economy — so costs rise even without productivity gains.

The arithmetic in 2024: goods categories (vehicles, electronics) were in outright deflation; services ran at +4% annually. Overall CPI: 2.9%. Productivity gains in goods are swamped by service-sector inflation.
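
One rough way to check that arithmetic: treat headline CPI as a weighted average of services and goods, and solve for the goods inflation the numbers imply. The ~65% services weight is an assumption for illustration, not an official BLS figure.

```python
# Solve headline = w * services + (1 - w) * goods for the goods term.
headline = 0.029        # overall CPI, 2024
services_infl = 0.04    # services inflation
services_w = 0.65       # assumed services weight in the basket

goods_infl = (headline - services_w * services_infl) / (1 - services_w)
print(f"implied goods inflation: {goods_infl:+.1%}")  # about +0.9%
```

Goods overall come out roughly flat (some categories outright deflating), while services alone push headline CPI near 3% — exactly the Baumol dynamic described above.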

Mechanism 3: Gains Flow Into Assets, Not Prices

Corporate profits don’t evaporate — they flow into financial assets.

The S&P 500 went from roughly 1,426 in January 2000 to about 5,980 in January 2025 — a 319% gain. Median home prices more than doubled. But CPI — which measures consumer goods and services, not assets — rose about 86% over the same period.
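
The arithmetic behind that comparison, using the endpoints from the text (dividends are ignored, so the real figure understates total return):

```python
# Nominal S&P 500 gain vs. CPI over the same 25-year window.
sp_gain = (5980 - 1426) / 1426   # Jan 2000 to Jan 2025
cpi_gain = 0.86                  # CPI rise over the same period

real_gain = (1 + sp_gain) / (1 + cpi_gain) - 1
print(f"S&P nominal: +{sp_gain:.0%}")   # +319%
print(f"S&P real:    +{real_gain:.0%}") # +125%
```

Even after inflation, the index more than doubled in real terms — a gain that accrued to asset holders, not to consumer prices or paychecks.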

The gains are real. They’re just concentrated in balance sheets, not paychecks.

Mechanism 4: The Fed Offsets Deflation With Money Creation

M2 money supply grew from $4.7 trillion in 2000 to a peak of $21.6 trillion in early 2022 — a 360% increase, while the economy grew roughly 180% over the same period.

When technology creates deflationary pressure, the Fed’s instinct is to expand the money supply to keep inflation positive. Sustained deflation causes people to delay purchases, which can spiral into recession.

The result: productivity-driven price reductions get neutralized by monetary expansion. Mild consumer inflation persists. The deflationary gains disappear into the ether.


Part 4: The Capital Lock-Up Problem

Here’s where the paradox tightens into a trap.

Productivity gains flow to capital. Capital accumulates in corporate treasuries and shareholder accounts. But there’s a limit to how much capital can find productive outlets in a saturated consumer economy.

So instead of flowing into wages, new industries, or broad employment, capital recycles into:

  • Stock buybacks — enriching shareholders, not employing workers
  • Data center infrastructure — Amazon, Alphabet, Microsoft, and Meta plan to spend over $350 billion combined on capex in 2025 alone (from their earnings reports)
  • Cash reserves parked in government bonds

The economic multiplier — where a dollar paid in wages becomes spending at a restaurant, which pays a cook, who buys groceries — breaks down when capital doesn’t circulate through wages.

Manufacturing-era investment created jobs at roughly $200,000 per position. Modern tech infrastructure creates far fewer jobs per dollar deployed, and in many cases eliminates more than it creates.

The IMF estimates roughly 40% of jobs globally have significant exposure to AI-driven automation. The World Economic Forum’s 2025 Future of Jobs Report estimates 170 million new roles created by 2030, but 92 million displaced — and the distribution of winners is heavily skewed by income, education, and geography.


Part 5: The Secular Stagnation Trap

The pieces now form a self-reinforcing cycle.

Technology improves productivity
    ↓
Companies capture gains as profit, not wages
    ↓
Workers have less to spend
    ↓
Consumer demand grows slowly
    ↓
Capital has fewer productive investment opportunities
    ↓
Capital flows into financial assets
    ↓
Inequality rises, economy runs at 2% growth
    ↓
(Repeat)

GDP growth averaged roughly 2.4% in the 2010s and 2020s, compared to 4.4-4.5% in the 1950s-60s. This isn’t because we’ve run out of ideas. It’s because the mechanism for converting productivity into broad-based prosperity is misfiring.

This is what economists call secular stagnation — not a recession, not a crisis, but a structurally lower gear that persists despite technological brilliance.


What This Isn’t Saying

This analysis has real limits worth naming.

Absolute living standards have genuinely improved. Longer lifespans, better medicines, access to information that would have cost thousands of dollars in library fees — these are real gains not fully captured in wage statistics.

Some workers, particularly in technology, have done extraordinarily well. The aggregate decline in labor share masks wide variation across sectors and skill levels.

Free services — Google Search, Wikipedia, WhatsApp — create enormous value that doesn’t show up in GDP at all. The consumption ceiling argument partially breaks down for digital goods with near-zero marginal cost.

And this is backward-looking. Policy changes, genuinely new industries, and different approaches to AI deployment could alter the pattern. There is nothing inevitable about the current distribution.


The Core Problem, Simply Stated

Technology is making distribution dramatically more efficient.

But efficiency gains are being captured by whoever controls the bottleneck — the platform, the marketplace, the search engine — rather than distributed to the workers who enable production or the consumers who fund it.

Without wages, workers can’t consume. Without consumption, capital has nowhere productive to go. So it piles up in buybacks and data centers. GDP growth slows. And we wonder why a world of genuine technological marvels feels economically stagnant for most people.

That’s the paradox.

As AI accelerates the substitution of capital for labor, the dynamics described here are likely to intensify rather than resolve. The question isn’t whether the technology works — it clearly does. The question is whether the institutions and incentive structures around it will evolve fast enough to distribute what it creates.

That’s the harder problem. And it’s not a technology problem at all.


Data sources: Bureau of Economic Analysis, Bureau of Labor Statistics CPI data, Federal Reserve FRED database, Census Bureau/HUD median home prices, Stanford HAI AI Index 2025, WEF Future of Jobs Report 2025, IMF Staff Discussion Note on Generative AI and the Future of Work (2024), Statcounter global search share data, eMarketer US e-commerce market share, company SEC filings (Amazon, Alphabet, Microsoft, Meta capex figures).

emrox · 6 days ago · Hamburg, Germany

I Have 30 Years of Career Left. AI Made Me Rethink All of Them.


I’m turning 40 this year. That means, if I’m lucky, I have roughly 30 more working years ahead of me. Thirty years of building things, making career decisions, and trying to stay relevant in an industry that reinvents itself every five to seven years.

Until recently, that felt manageable. I’ve been in software engineering for over 20 years. I’ve survived the transition from monoliths to microservices, the mobile revolution, the cloud migration wave, the DevOps transformation. Each one felt significant at the time. Each one changed what we built or how we built it. But none of them changed whether we were needed.

AI does. And that’s a fundamentally different kind of shift.

Every previous technology wave I’ve lived through followed the same pattern: new tools arrived, the work changed shape, and engineers adapted. You learned new frameworks, new paradigms, new infrastructure patterns. The underlying deal stayed the same. Companies needed people to build software, and if you kept your skills current, you’d be fine.

What makes AI different isn’t that it changes the tools. It’s that it changes the leverage. When one engineer with AI can do the work that used to require three, the math changes at the org level. Companies don’t just need different engineers. They need fewer of them.

I watched this play out in real time. Teams getting restructured not because the work disappeared, but because the same work now required fewer hands. Job postings that quietly raised the bar, expecting senior-level output at mid-level headcount. Entire categories of tasks (boilerplate code, documentation drafts, test generation) moving from “junior engineer’s job” to “AI’s job” almost overnight.

And the hype makes everything worse. AI is genuinely transformative, but somewhere between “this is a useful tool” and “this will replace all engineers within five years,” the conversation went off the rails. The loudest voices in the room (often the ones furthest from the actual work) started treating AI capabilities as a foregone conclusion rather than a trajectory. CEOs read a blog post about AI agents replacing entire engineering teams and suddenly that’s the planning assumption. Headcount gets cut not because AI actually replaced those people, but because someone in leadership bought the narrative that it will.

That’s the part that keeps me up at night. Not AI itself, but the decisions being made on the back of AI hype by people who don’t understand what software engineering actually involves. The gap between what AI can do today and what executives think it can do today is enormous, and real careers are getting caught in that gap.

I sat down one evening and tried to project what my career looks like in 2035, and for the first time in two decades, I had no credible model for it. Not because the technology scared me, but because I couldn’t predict which version of the story the industry would choose to believe. Not a pessimistic model, not an optimistic one. Just a blank space where the plan used to be.

That blank space is what got me moving.

What AI can’t do (at least not yet, and I’d argue not for a long time) is exercise judgment in context.

Here’s what made it click for me. I’ve been using Claude Code lately, and it’s good. Not “neat party trick” good. Actually good. The kind of good where I ask it to build something and the code that comes back is clean, well-structured, and works on the first run more often than I’d like to admit. A year ago I could dismiss AI-generated code as a rough draft that needed heavy editing. Now? Now it writes code that looks like something I’d write. Sometimes better.

That realization forced a question I’d been avoiding: if the code itself is no longer the hard part, what am I actually being paid for?

The answer, I think, is judgment. Knowing which thing to build. Understanding why one technically correct approach is wrong for this particular team, this codebase, this set of business constraints. Seeing the second and third-order consequences of a technical decision before they show up in production. That’s where experience lives, in the space between “this works” and “this is right for the situation.”

So I’m doubling down there. On understanding business context. On learning domains deeply. On being the person who can evaluate what AI produces and say “this looks right but it’s wrong, and here’s why.” That instinct doesn’t come from tutorials or certifications. It comes from watching systems succeed and fail in production for 20 years, from understanding not just how things work but why they were built that way.

But here’s the thing about that kind of judgment: it doesn’t develop in a vacuum. It develops through building things. Which is why I still code, even though my current role doesn’t require it.

I’m working as a developer relations manager focused on content now (which is both terrifying and exciting in equal measure), so I’m not writing code all day anymore. Most of my work is writing, and I use AI to help with it. But here’s what’s interesting: AI can help me find the right words, tighten a paragraph, suggest a better structure. What it can’t do is decide what’s worth writing about, or know which angle will resonate with a senior engineer who’s been through three rewrites of the same system, or recognize when a piece of technical content is subtly misleading in ways that only someone with domain experience would catch. I bring the judgment. AI helps with the execution.

And the exact same thing applies to coding. I still code because it’s fun, but also because I’ve realized the relationship with AI works the same way there. AI can write the code. It can’t architect the system. It can’t decide which tradeoffs to make, or know that the elegant solution it just generated will fall apart at scale, or understand why the team chose a boring technology stack on purpose. The person guiding the work, deciding what to build and what not to build, evaluating whether the output actually solves the problem, that’s where experience lives.

In both cases, you learn the same thing: how to decompose a vague problem into concrete steps, how to hold a complex system in your head and reason about its edges, how to develop an instinct for where things are likely to break. It’s not a coding skill or a writing skill. It’s a thinking skill. And if you don’t have it, you can’t meaningfully evaluate what AI gives you. You can look at the output and think “that seems fine.” But you can’t see the subtle N+1 query hiding in the data access pattern, or the race condition that only shows up under load, or the security assumption baked into a convenience method.
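
For readers who haven’t been bitten by one, here is a minimal sketch of the kind of N+1 query pattern mentioned above, using an in-memory SQLite database; every table and name is invented for illustration.

```python
import sqlite3

# In-memory stand-in for a real database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE books (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO books VALUES (1, 1, 'A'), (2, 1, 'B'), (3, 2, 'C');
""")

def books_n_plus_one():
    """The trap: 1 query for the authors, then 1 more query per author."""
    queries, out = 1, {}
    for author_id, name in conn.execute("SELECT id, name FROM authors"):
        rows = conn.execute(
            "SELECT title FROM books WHERE author_id = ?", (author_id,)
        )
        out[name] = [title for (title,) in rows]
        queries += 1
    return out, queries

def books_joined():
    """The fix: one JOIN fetches the same data in a single round trip."""
    out = {}
    query = ("SELECT a.name, b.title FROM authors a "
             "JOIN books b ON b.author_id = a.id")
    for name, title in conn.execute(query):
        out.setdefault(name, []).append(title)
    return out, 1

print(books_n_plus_one()[1])  # 3 queries for just 2 authors
print(books_joined()[1])      # 1 query for the same data
```

Both versions return identical data and look equally “fine” in review; the difference only shows up as query count under load, which is exactly why it takes experience to spot.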

Learn to code. Keep coding. Not because you’ll write every line yourself for the next 30 years, but because it trains the kind of thinking that makes everything else you do more valuable.

I used to pour everything into my employer. My professional identity, my network, my reputation, my growth, all of it lived inside one company’s walls. That felt normal. It’s what everyone around me was doing.

Then I watched a round of layoffs hit people I respected. People with deep expertise and years of institutional knowledge. And yes, their skills transferred, their experience was real, their ability to do the work hadn’t changed overnight. But something else had. The ground they were standing on vanished. The internal reputation, the relationships with leadership, the security of knowing where you fit, all of that evaporated in a single meeting. And suddenly they were competing in a market that had gotten significantly more crowded, against people with similar resumes and similar experience, in a hiring landscape where being talented wasn’t enough anymore. You had to be visible. You had to be connected. You had to be someone the market already knew, not someone it had to discover from a cold application.

That’s when I started thinking about professional gravity differently. Not as something your employer gives you, but as something you build that exists independent of any single company.

I’ve always been a writer. Blog posts, technical articles, documentation, the kind of writing that lives inside a company’s content strategy and serves someone else’s goals. But I’d stopped writing for myself. So I picked it back up, this time with a different purpose. Not as a hobby, not as a creative outlet, but as a deliberate investment. A newsletter about the things I think about anyway: engineering careers, leadership in the age of AI, the unspoken tensions of navigating a rapidly changing industry with decades of runway still ahead of you. Published thinking that shows people how I reason, not just what I’ve done. A network of people who know my perspective because they’ve read it, not because we happened to work on the same Jira board.

That same logic extends to money. Income diversification is the area where I’ve historically been the worst. One paycheck, one employer, one industry. I never seriously thought about what happens if that stream dries up, because it never did. I just wasn’t wired to think about money strategically, and I suspect a lot of engineers are the same. We talk about total comp and RSU vesting schedules, but we rarely talk about income resilience.

So I’m learning (slowly, awkwardly) how to diversify. Talks and workshops where two decades of experience becomes a product instead of just a resume line. A professional network that creates optionality for consulting if I ever need it. None of these produce meaningful income right now. That’s fine. I have 30 years. The goal isn’t to replace my salary tomorrow. It’s to make sure that if something changes suddenly, I don’t get caught with no options and no runway to react.

I want to be clear about the limits of what I’m sharing here, because I think the unfinished thinking is more useful than pretending I have a polished playbook.

I don’t know how to plan a technical career when the half-life of technical skills is shrinking this fast. I don’t know what engineering leadership looks like in five years, whether managers become AI-team leads or the role gets compressed because there are fewer humans to manage. I don’t know if 30 years from now, the career I’ve built will look anything like what I imagined when I started.

That used to scare me. It doesn’t anymore, and here’s why.

Every major technology shift in my career has created more opportunity than it destroyed. Not immediately, and not for everyone, but eventually and overwhelmingly. The web didn’t kill software. Mobile didn’t kill the web. Cloud didn’t kill infrastructure. Each wave created entirely new categories of work that nobody predicted from the inside.

I believe AI will do the same. The possibilities opening up right now are extraordinary. We’re going to build things in the next decade that we can barely imagine today. Entirely new categories of work will emerge, just like they always have. That’s not a threat. That’s what makes this the most exciting time to be working in technology.

But exciting doesn’t mean safe. The opportunities will be there. They just won’t show up automatically at your door.

I don’t know what the future will bring. But I know what I’ll keep doing: coding, teaching, explaining, exploring, and building. Those are the things that got me here, and they’re the things that still make me want to sit down at my desk every morning. I hope I get to keep doing them as a profession for the next 30 years. I think I will. But in the meantime, I’m making sure that if the rules change, I’m not standing still wondering what happened.

That’s the bet. I’m genuinely excited about it. I’ll let you know how it goes.

emrox · 7 days ago · Hamburg, Germany

(comic) Work on Your Emotional Intelligence

emrox · 9 days ago · Hamburg, Germany