Generative UI Notes

Mar. 26th, 2026 02:59 pm
[syndicated profile] csstricks_feed

Posted by Geoff Graham

I’m really interested in this emerging idea that the future of web design is Generative UI Design. We see hints of this already in products, like Figma Sites, that tout being able to create websites on the fly with prompts.

Putting aside the clear downsides of shipping half-baked technology as a production-ready product (which is hard to do), the angle I’m particularly looking at is research aimed at using Generative AI (or GenAI) to output personalized interfaces. It’s wild because it completely flips the way we think about UI design. Rather than anticipating user needs and designing around them, GenAI sees a user’s needs and produces an interface custom-tailored to them. In a sense, a website becomes a snowflake where no two experiences with it are the same.

Again, it’s wild. I’m not here to speculate, opine, or preach on Generative UI Design (let’s call it GenUI for now). Just loose notes that I’ll update as I continue learning about it.

Defining GenUI

Google Research (PDF):

Generative UI is a new modality where the AI model generates not only content, but the entire user experience. This results in custom interactive experiences, including rich formatting, images, maps, audio and even simulations and games, in response to any prompt (instead of the widely adopted “walls-of-text”).

NN/Group:

generative UI (genUI) is a user interface that is dynamically generated in real time by artificial intelligence to provide an experience customized to fit the user’s needs and context.

UX Collective:

A Generative User Interface (GenUI) is an interface that adapts to, or processes, context such as inputs, instructions, behaviors, and preferences through the use of generative AI models (e.g. LLMs) in order to enhance the user experience.

Put simply, a GenUI interface displays different components, information, layouts, or styles, based on who’s using it and what they need at that moment.
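To make that idea concrete, here’s a deliberately tiny sketch in JavaScript. Everything in it (the context fields, the component names, the rules) is hypothetical; in a real GenUI system a generative model would do this work, and would generate the components themselves rather than pick from a fixed list:

```javascript
// Hypothetical sketch: choose UI components from user context.
// Simple hand-written rules stand in for what an AI model would do.
function generateUI(context) {
  const components = ["header"];
  if (context.prefersSummaries) {
    components.push("summary-card");
  } else {
    components.push("full-article");
  }
  if (context.usesScreenReader) {
    components.push("skip-links");
  }
  return components;
}

console.log(generateUI({ prefersSummaries: true, usesScreenReader: false }));
// → ["header", "summary-card"]
```

Two users with different contexts get two different component lists, which is the “snowflake” quality in miniature.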

Tree diagram showing three users, followed by inputs, instructions, behaviors, and preferences, which output different webpage layouts.
Credit: UX Collective

Generative vs. Predictive AI

It’s easy to dump “AI” into one big bucket, but it’s often distinguished as two different types: predictive and generative.

Predictive AI:
  • Inputs: Uses smaller, more targeted datasets as input data. (Smashing Magazine)
  • Outputs: Forecasts future events and outcomes. (IBM)
  • Examples: ChatGPT, Claude

Generative AI:
  • Inputs: Trained on large datasets containing millions of content samples. (U.S. Congress, PDF)
  • Outputs: Creates new content, including audio, code, images, text, simulations, and videos. (McKinsey)
  • Examples: Sora, Suno, Cursor

So, when we’re talking about GenAI, we’re talking about the ability to create new materials trained on existing materials. And when we’re talking specifically about GenUI, it’s about generating a user interface based on what the AI knows about the user.

Accessibility

And I should note that what I’m talking about here is not strictly GenUI as we’ve defined it so far (UI output that adapts to the individual user), but rather generated interfaces as a development tool. These so-called AI website builders do not adapt to the individual user, though it’s easy to see the technology heading in that direction.

The thing I’m most interested in — concerned with, frankly — is to what extent GenUI can reliably output experiences that cater to all users, regardless of impairment, be it aural, visual, physical, etc. There are a lot of different inputs to consider here, and we’ve seen just how awful the early results have been.

That last link is a big poke at Figma Sites. They’re easy to poke because they made the largest commercial push into GenUI-based web development. To their credit (perhaps?), they took the severe pushback seriously and decided to do something about it, announcing updates and publishing a guide for improving accessibility on Figma-generated sites. But even those have limitations that make the effort and advice seem less useful and more about saving face.

Anyway. There are plenty of other players jumping into the game, notably WordPress, but also others like Vercel, Squarespace, Wix, GoDaddy, Lovable, and Reeady.

Some folks are more optimistic than others that GenUI is not only capable of producing accessible experiences, but will replace accessibility practitioners altogether as the technology evolves. Jakob Nielsen famously made that claim in 2024, which drew fierce criticism from the community. Nielsen walked that back a year later, but not by much.

I’m not even remotely qualified to offer best practices, opine on the future of accessibility practice, or speculate on future developments and capabilities. But as I look at Google’s People + AI Guidebook, I see no mention of accessibility at all, despite it dripping with “human-centered” design principles.

Accessibility lags behind the hype, at least from where I sit. That has to change if GenUI is truly the “future” of web design and development.

Examples & Resources

Google has a repository of examples showing how user input can be used to render a variety of interfaces. Going a step further is Google’s Project Genie that bills itself as creating “interactive worlds” that are “generated in real-time.” I couldn’t get an invite to try it out, but maybe you can.

In addition to that, Google has a GenUI SDK designed to integrate into Flutter apps. So, yeah. Connect to your LLM provider and let it rip to create adaptive interfaces.

Thesys is another one in the adaptive GenUI space. Copilot, too.



Generative UI Notes originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

github: shadowy octopus with the head of a robot, emblazoned with the Dreamwidth swirl (Default)
[personal profile] github posting in [site community profile] changelog

Branch: refs/heads/dependabot/npm_and_yarn/api/yaml-2.8.3
Home: https://github.com/dreamwidth/dreamwidth
Commit: ba81ca40ea6866fc08c604efafc1ac579a489072
https://github.com/dreamwidth/dreamwidth/commit/ba81ca40ea6866fc08c604efafc1ac579a489072
Author: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Date: 2026-03-26 (Thu, 26 Mar 2026)

Changed paths: M api/package-lock.json M api/package.json

Log Message:


Bump yaml from 2.2.2 to 2.8.3 in /api

Bumps yaml from 2.2.2 to 2.8.3.

[syndicated profile] adactio_feed

People of Brighton, mark your calendars: Saturday, April 4th. That’s when Salter Cane will be playing in The Hope And Ruin.

It’s not just Salter Cane though. We’ll be joined by Skyscrapers from Lewes, and The Equatorial Group from Eastbourne. We’ve played with them before, and they’re superb!

Tickets are available now. They’re £8 in advance. It’ll be £10 on the door. So please get your ticket in advance!

Doors are at 7:30pm. Skyscrapers will be on stage at 8pm, The Equatorial Group at 9pm, and Salter Cane at 10pm.

I’m really, really looking forward to rocking out playing songs from our newest album and I would love it if you could make it.

See you there!

[syndicated profile] hacks_mozilla_feed

Posted by Bastien Orivel

In January, we introduced our Nightly package for RPM-based Linux distributions. Today, we are thrilled to announce it is now available for Firefox Beta!

Firefox Beta is great for testing your sites in a version of Firefox that will reach regular users in the coming weeks. If you find any issues, please file them on Bugzilla.

Switching to Mozilla’s RPM repository allows Firefox Beta to be installed and updated like any other application, using your favorite package manager. It also provides a number of improvements:

  • Better performance thanks to our advanced compiler-based optimizations,
  • Updates as fast as possible, because the .rpm management is integrated into Firefox’s release process,
  • Hardened binaries with all security flags enabled during compilation,
  • No need to create your own .desktop file.

If you have Mozilla’s RPM repository already set up, you can simply install Firefox Beta with your package manager. Otherwise, follow the setup steps below.


If you are on Fedora (41+), or any other distribution using dnf5 as the package manager

 

sudo dnf config-manager addrepo --id=mozilla --set=baseurl=https://packages.mozilla.org/rpm/firefox --set=gpgkey=https://packages.mozilla.org/rpm/firefox/signing-key.gpg --set=gpgcheck=1 --set=repo_gpgcheck=0
sudo dnf makecache --refresh
sudo dnf install firefox-beta

Note: repo_gpgcheck=0 disables GPG signature verification of the repository metadata. That is safeguarded instead by HTTPS and by package signatures (gpgcheck=1).

If you are on openSUSE or any other distribution using zypper as the package manager

sudo rpm --import https://packages.mozilla.org/rpm/firefox/signing-key.gpg
sudo zypper ar --gpgcheck-allow-unsigned-repo https://packages.mozilla.org/rpm/firefox mozilla
sudo zypper refresh
sudo zypper install firefox-beta

For other RPM-based distributions (RHEL, CentOS, Rocky Linux, older Fedora versions)

sudo tee /etc/yum.repos.d/mozilla.repo >  /dev/null << EOF
[mozilla]
name=Mozilla Packages
baseurl=https://packages.mozilla.org/rpm/firefox
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://packages.mozilla.org/rpm/firefox/signing-key.gpg
EOF
# For dnf users
sudo dnf makecache --refresh
sudo dnf install firefox-beta
# For zypper users
sudo zypper refresh
sudo zypper install firefox-beta

The firefox-beta package will not conflict with your distribution’s Firefox package if you have it installed; you can have both at the same time!

Adding language packs

If your distribution language is set to a supported language, language packs for it should automatically be installed. You can also install them manually with the following command (replace fr with the language code of your choice):

sudo dnf install firefox-beta-l10n-fr

You can list the available languages with the following command:

dnf search firefox-beta-l10n

Don’t hesitate to report any problem you encounter to help us make your experience better.

The post Firefox Developer Edition and Beta: Try out Mozilla’s .rpm package! appeared first on Mozilla Hacks - the Web developer blog.

Project Hail Mary by Andy Weir

Mar. 24th, 2026 03:27 pm
[syndicated profile] adactio_feed

I was in the library the weekend before last when I spotted something on the shelf of recently-returned books. Project Hail Mary by Andy Weir.

I knew the film adaptation was coming out later that week. Ideally, I’d like to read the book before seeing the film. It would be a race against time! The film would be out in days, and the book is over 450 pages long. Could this nerdy white guy rise to the challenge and overcome the odds?

As it turned out, it wasn’t all that arduous. Project Hail Mary is a real page-turner, just like Andy Weir’s previous book, The Martian.

But his books are worryingly regressive. The so-called golden age of science fiction featured plenty of plucky white science guys saving the day with their brainpower in books written by white science guys. Andy Weir’s books have a similar outlook.

On the other hand, they’re undeniably fun. And who knows? Maybe his next book will feature a protagonist that isn’t an aw-shucks white guy.

(Update: multiple people have pointed out that I completely missed that Andy Weir’s other book, Artemis, features a refreshingly different kind of protagonist—phew!)

Project Hail Mary is packed with plenty of plausible-sounding science. Perhaps too much. After a while it felt like elements were being added to the story to showcase the author’s smarts rather than to propel the plot.

Overall, the book is good entertaining fun, but a bit baggy, and could’ve been edited down somewhat.

I was interested to see how the film would translate the science from the written page to the screen. Very commendably, as it turns out.

The film does a great job of avoiding expositional blackboard sequences or explanatory dialogue. Wherever possible, it shows rather than tells. It helps that it doesn’t underestimate what the audience can handle.

Above all, it’s entertaining. Popcorn was invented for this kind of film. Ryan Gosling does his usual entertaining shtick, though I kept thinking that Sam Rockwell would’ve really delivered the goods.

The film trims the book down to its essentials. I didn’t miss any of the elements they chose to cut. I did spot one glaring mistake, but that was a continuity error rather than anything to do with the science.

Project Hail Mary the film is better than Project Hail Mary the book. Go see it. And if it leaves you wishing for more, then you can always read the book.

Buy this book

[syndicated profile] csstricks_feed

Posted by Daniel Schwarz

Over the last few years, there’s been a lot of talk about and experimentation with scroll-driven animations. It’s a very shiny feature for sure, and as soon as it’s supported in Firefox (without a flag), it’ll be Baseline. It’s part of Interop 2026, so that should be relatively soon. Essentially, scroll-driven animations tie an animation timeline’s position to a scroll position, so if you were 50% scrolled, you’d also be 50% into the animation. They’re surprisingly easy to set up, too.

I’ve been seeing significant interest in the new CSS corner-shape property as well, even though it only works in Chrome for now. This enables us to create corners that aren’t as rounded, or aren’t even rounded at all, allowing for some intriguing shapes that take little-to-no effort to create. What’s even more intriguing though is that corner-shape is mathematical, so it’s easily animated.

Hence, say hello to scroll-driven corner-shape animations (requires Chrome 139+ to work fully):

corner-shape in a nutshell

Real quick — the different values for corner-shape:

corner-shape keyword → superellipse() equivalent:
  • square → superellipse(infinity)
  • squircle → superellipse(2)
  • round → superellipse(1)
  • bevel → superellipse(0)
  • scoop → superellipse(-1)
  • notch → superellipse(-infinity)
Showing the same magenta-colored rectangle with the six different CSS corner-shape property values applied to it in a three-by-three grid.

But what’s this superellipse() function all about? Well, basically, these keyword values are the result of this function. For example, superellipse(2) creates corners that aren’t quite squared but aren’t quite rounded either (the “squircle”). Whether you use a keyword or the superellipse() function directly, a mathematical equation is used either way, which is what makes it animatable. With that in mind, let’s dive into that demo above.
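If you’re curious about the math behind it: my understanding (treat the exact mapping as an assumption on my part, taken from the CSS Borders Level 4 draft) is that superellipse(s) traces the curve |x|^k + |y|^k = 1 with k = 2^s, which is why s = 1 gives a circular round corner and s = 0 a straight bevel. A quick JavaScript sketch:

```javascript
// Sketch: a point on the superellipse |x|^k + |y|^k = 1 in the
// first quadrant, where k = 2 ** s and s is the (assumed) CSS
// superellipse() parameter.
function superellipsePoint(s, x) {
  const k = 2 ** s;
  const y = (1 - Math.abs(x) ** k) ** (1 / k);
  return { x, y };
}

// s = 1 (round): a circle, so x = 0.6 gives y = 0.8 (3-4-5 triangle)
console.log(superellipsePoint(1, 0.6).y.toFixed(2)); // "0.80"

// s = 0 (bevel): the straight line x + y = 1
console.log(superellipsePoint(0, 0.25).y); // 0.75
```

Because the shape is a continuous function of s, interpolating s is all it takes to animate between corner styles.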

Animating corner-shape

The demo isn’t too complicated, so I’ll start off by dropping the CSS here, and then I’ll explain how it works line-by-line:

@keyframes bend-it-like-beckham {
  from {
    corner-shape: notch;
    /* or */
    corner-shape: superellipse(-infinity);
  }

  to {
    corner-shape: square;
    /* or */
    corner-shape: superellipse(infinity);
  }
}

body::before {
  /* Fill viewport */
  content: "";
  position: fixed;
  inset: 0;

  /* Enable click-through */
  pointer-events: none;

  /* Invert underlying layer */
  mix-blend-mode: difference;
  background: white;

  /* Don’t forget this! */
  border-bottom-left-radius: 100%;

  /* Animation settings */
  animation: bend-it-like-beckham;
  animation-timeline: scroll();
}

/* Added to cards */
.no-filter {
  isolation: isolate;
}

In the code snippet above, body::before combined with content: "" creates a pseudo-element of the <body> with no content that is then fixed to every edge of the viewport. Also, since this animating shape will be on top of the content, pointer-events: none ensures that we can still interact with said content.

For the shape’s color I’m using mix-blend-mode: difference with background: white, which inverts the underlying layer, a trendy effect that (to some degree, at least) maintains the level of color contrast. You won’t want to apply this effect to everything, so here’s a utility class to exclude the effect as needed:

/* Added to cards */
.no-filter {
  isolation: isolate;
}

A comparison:

Side-by-side comparison showing blend mode applied on the left and excluded from cards placed in the layout on the right, preventing the card backgrounds from changing.
Left: Full application of blend mode. Right: Blend mode excluded from cards.

You’ll need to combine corner-shape with border-radius, which defaults to corner-shape: round under the hood. Yes, that’s right: border-radius doesn’t actually round corners; corner-shape: round does. Rather, border-radius handles the x-axis and y-axis coordinates to draw from:

/* Syntax */
border-bottom-left-radius: <x-axis-coord> <y-axis-coord>;

/* Usage */
border-bottom-left-radius: 50% 50%;
/* Or */
border-bottom-left-radius: 50%;
Diagramming the shape showing border-radius applied to the bottom-left corner. The rounded corner is 50% on the y-axis and 50% on the x-axis.

In our case, we’re using border-bottom-left-radius: 100% to slide those coordinates to the opposite ends of their respective axes. However, we’ll be overwriting the implied corner-shape: round in our @keyframes animation, which we refer to with animation: bend-it-like-beckham. There’s no need to specify a duration because it’s a scroll-driven animation, as defined by animation-timeline: scroll().

In the @keyframes animation, we’re animating from corner-shape: notch, which is like an inset square. This is equivalent to corner-shape: superellipse(-infinity), so it’s not actually squared, but it’s so aggressively sharp that it looks squared. This animates to corner-shape: square (an outset square), or corner-shape: superellipse(infinity).

Animating corner-shape, revisited

The demo above is actually a bit different to the one that I originally shared in the intro. It has one minor flaw, and I’ll show you how to fix it, but more importantly, you’ll learn more about an intricate detail of corner-shape.

The flaw: at the beginning and end of the animation, the curvature looks quite harsh because we’re animating between notch and square, right? It also looks like the shape is being sucked into the corners. Finally, the shape being stuck to the sides of the viewport makes the whole thing feel too contained.

The solution is simple:

/* Change this... */
inset: 0;

/* ...to this */
inset: -1rem;

This stretches the shape beyond the viewport, and even though this makes the animation appear to start late and finish early, we can fix that by not animating from/to -infinity/infinity:

@keyframes bend-it-like-beckham {
  from {
    corner-shape: superellipse(-6);
  }

  to {
    corner-shape: superellipse(6);
  }
}

Sure, this means that part of the shape is always visible, but we can fiddle with the superellipse() value to ensure that it stays outside of the viewport. Here’s a side-by-side comparison:

Two versions of the same magenta colored rectangle side-by-side. The left shows the top-right corner more rounded than the right which is equally rounded.

And the original demo (which is where we’re at now):

Adding more scroll features

Scroll-driven animations work very well with other scroll features, including scroll snapping, scroll buttons, scroll markers, simple text fragments, and simple JavaScript methods such as scrollTo()/scroll(), scrollBy(), and scrollIntoView().

For example, we only have to add the following CSS snippet to introduce scroll snapping that works right alongside the scroll-driven corner-shape animation that we’ve already set up:

:root {
  /* Snap vertically */
  scroll-snap-type: y;

  section {
    /* Snap to section start */
    scroll-snap-align: start;
  }
}

“Masking” with corner-shape

In the example below, I’ve essentially created a border around the viewport and then a notched shape (corner-shape: notch) on top of it that’s the same color as the background (background: inherit). This shape completely covers the border at first, but then animates to reveal it (or in this case, the four corners of it):

If I make the shape a bit more visible, it’s easier to see what’s happening here, which is that I’m rotating this shape as well (rotate: 5deg), making the shape even more interesting.

A large gray cross shape overlaid on top of a pinkish background. The shape is rotated slightly to the right and extends beyond the boundaries of the background.

This time around we’re animating border-radius, not corner-shape. When we animate to border-radius: 20vw / 20vh, the 20vw and 20vh refer to the x-axis and y-axis of each corner, respectively, meaning that 20% of the border is revealed as we scroll.

The only other thing worth mentioning here is that we need to mess around with z-index to ensure that the content is higher up in the stacking context than the border and shape. Other than that, this example simply demonstrates another fun way to use corner-shape:

@keyframes tech-corners {
  from {
    border-radius: 0;
  }

  to {
    border-radius: 20vw / 20vh;
  }
}

/* Border */
body::before {
  /* Fill (- 1rem) */
  content: "";
  position: fixed;
  inset: 1rem;
  border: 1rem solid black;
}

/* Notch */
body::after {
  /* Fill (+ 3rem) */
  content: "";
  position: fixed;
  inset: -3rem;

  /* Rotated shape */
  background: inherit;
  rotate: 5deg;
  corner-shape: notch;

  /* Animation settings */
  animation: tech-corners;
  animation-timeline: scroll();
}

main {
  /* Stacking fix */
  position: relative;
  z-index: 1;
}

Animating multiple corner-shape elements

In this example, we have multiple nested diamond shapes thanks to corner-shape: bevel, all leveraging the same scroll-driven animation where the diamonds increase in size, using padding:

<div id="diamonds">
  <div>
    <div>
      <div>
        <div>
          <div>
            <div>
              <div>
                <div>
                  <div>
                    <div></div>
                  </div>
                </div>
              </div>
            </div>
          </div>
        </div>
      </div>
    </div>
  </div>
</div>

<main>
  <!-- Content -->
</main>

@keyframes diamonds-are-forever {
  from {
    padding: 7rem;
  }

  to {
    padding: 14rem;
  }
}

#diamonds {
  /* Center them */
  position: fixed;
  inset: 50% auto auto 50%;
  translate: -50% -50%;

  /* #diamonds, the <div>s within */
  &, div {
    corner-shape: bevel;
    border-radius: 100%;
    animation: diamonds-are-forever;
    animation-timeline: scroll();
    border: 0.0625rem solid #00000030;
  }
}

main {
  /* Stacking fix */
  position: relative;
  z-index: 1;
}

That’s a wrap

We just explored animating from one custom superellipse() value to another, using corner-shape as a mask to create new shapes (again, while animating it), and animating multiple corner-shape elements at once. There are so many ways to animate corner-shape other than from one keyword to another, and if we make them scroll-driven animations, we can create some really interesting effects (although, they’d also look awesome if they were static).


Experimenting With Scroll-Driven corner-shape Animations originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

Early-bird tickets for UX London

Mar. 19th, 2026 02:53 pm
[syndicated profile] adactio_feed

You should come to UX London in the first week of June. Why? Because it’s going to be awesome, that’s why!

You probably knew that already. You probably already decided to get a ticket because you’re smart like that.

But don’t dilly-dally! Early-bird tickets are available now but in just over one week, they won’t be.

So get your ticket by Friday, March 27th. If you get your ticket now, it’s a win for everyone. You get a cheaper ticket. We know for sure that you’re coming.

Every time someone buys a conference ticket in plenty of time, the conference organiser sleeps a little better at night.

If you need to convince your boss, you can give them these reasons to attend. I even made an email template you can use as a starting point for making the case.

You could come for all three days of UX London, or you can pick just one day.

Tuesday, June 2nd is discovery day, with a focus on user research. You’ll hear from great speakers like Melin Edomwonyi and Maria Isachenko, and there will be workshops from Natasha den Dekker and Feyikemi Akinwolemiwa.

Wednesday, June 3rd is design day where it’s all about the nitty-gritty details. Not only will there be great talks from Andrea Grigsby, Julia Petretta, and Hidde de Vries, there’s going to be the best-named workshop ever from my colleague Chris How: Yippee IA!

Thursday, June 4th is delivery day, with a focus on design systems and collaboration. Alex Edwards, Lucy Blackwell, Rachel Ilan Simpson and Ben Callahan will all be giving talks (and Ben’s doing a workshop too).

That’s not even close to the final line-up. I’m confirming more speakers right now and getting very, very excited about how it’s all shaping up.

You know you don’t want to miss this one. So get your early-bird ticket now while you still can.

Portraits of the UX London speakers.

[syndicated profile] csstricks_feed

Posted by Mat Marquis

Editor’s note: Mat Marquis and Andy Bell have released JavaScript for Everyone, an online course offered exclusively at Piccalilli. This post is an excerpt from the course taken specifically from a chapter all about JavaScript destructuring. We’re publishing it here because we believe in this material and want to encourage folks like yourself to sign up for the course. So, please enjoy this break from our regular broadcasting to get a small taste of what you can expect from enrolling in the full JavaScript for Everyone course.

I’ve been writing about JavaScript for long enough that I wouldn’t rule out a hubris-related curse of some kind. I wrote JavaScript for Web Designers more than a decade ago now, back in the era when packs of feral var still roamed the Earth. The fundamentals are sound, but the advice is a little dated now, for sure. Still, despite being a web development antique, one part of the book has aged particularly well, to my constant frustration.

An entire programming language seemed like too much to ever fully understand, and I was certain that I wasn’t tuned for it. I was a developer, sure, but I wasn’t a developer-developer. I didn’t have the requisite robot brain; I just put borders on things for a living.

JavaScript for Web Designers

I still hear this sentiment from incredibly talented designers and highly technical CSS experts that somehow can’t fathom calling themselves “JavaScript developers,” as though they were tragically born without whatever gland produces the chemicals that make a person innately understand the concept of variable hoisting and could never possibly qualify — this despite the fact that many of them write JavaScript as part of their day-to-day work. While I may not stand by the use of alert() in some of my examples (again, long time ago), the spirit of JavaScript for Web Designers holds every bit as true today as it did back then: type a semicolon and you’re writing JavaScript. Write JavaScript and you’re a JavaScript developer, full stop.

Now, sooner or later, you do run into the catch: nobody is born thinking like JavaScript, but to get really good at JavaScript, you will need to learn how. In order to know why JavaScript works the way it does, why sometimes things that feel like they should work don’t, and why things that feel like they shouldn’t work sometimes do, you need to go one step beyond the code you’re writing or even the result of running it — you need to get inside JavaScript’s head. You need to learn to interact with the language on its own terms.

That deep-magic knowledge is the goal of JavaScript for Everyone, a course designed to help you get from junior- to senior developer. In JavaScript for Everyone, my aim is to help you make sense of the more arcane rules of JavaScript as-it-is-played — not just teach you the how but the why, using the syntaxes you’re most likely to encounter in your day-to-day work. If you’re brand new to the language, you’ll walk away from this course with a foundational understanding of JavaScript worth hundreds of hours of trial-and-error; if you’re a junior developer, you’ll finish this course with a depth of knowledge to rival any senior.

Thanks to our friends here at CSS-Tricks, I’m able to share the entire lesson on destructuring assignment. These are some of my favorite JavaScript syntaxes, which I’m sure we can all agree are normal and in fact very cool things to have. These syntaxes are as powerful as they are terse, all of them doing a lot of work with only a few characters. The downside of that terseness is that it makes these syntaxes a little more opaque than most, especially when you’re armed only with a browser tab open to MDN and a gleam in your eye. We got this, though — by the time you’ve reached the end of this lesson, you’ll be unpacking complex nested data structures with the best of them.

And if you missed it before, there’s another excerpt from the JavaScript for Everyone course covering JavaScript Expressions available here on CSS-Tricks.

Destructuring Assignment

When you’re working with a data structure like an array or object literal, you’ll frequently find yourself in a situation where you want to grab some or all of the values that structure contains and use them to initialize discrete variables. That makes those values easier to work with, but historically speaking, it can lead to pretty wordy code:

const theArray = [ false, true, false ];
const firstElement = theArray[0];
const secondElement = theArray[1];
const thirdElement = theArray[2];

This is fine! I mean, it works; it has for thirty years now. But as of 2015’s ES6, we’ve had a much more elegant option: destructuring.

Destructuring allows you to extract individual values from an array or object and assign them to a set of identifiers without needing to access the keys and/or values one at a time. In its most simple form — called binding pattern destructuring — each value is unpacked from the array or object literal and assigned to a corresponding identifier, all of which are declared with a single let or const (or var, technically, yes, fine). Brace yourself, because this is a strange one:

const theArray = [ false, true, false ];
const [ firstElement, secondElement, thirdElement ] = theArray;

console.log( firstElement );
// Result: false

console.log( secondElement );
// Result: true

console.log( thirdElement );
// Result: false

That’s the good stuff, even if it is a little weird to see brackets on that side of an assignment operator. That one binding covers all the same territory as the much more verbose snippet above it.

When working with an array, the individual identifiers are wrapped in a pair of array-style brackets, and each comma-separated identifier you specify within those brackets will be initialized with the value of the corresponding element in the source array. You’ll sometimes see destructuring referred to as unpacking a data structure, but despite how that and “destructuring” both sound, the original array or object isn’t modified by the process.
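To underline that last point, here’s a quick check (the variable names are my own) showing that the source array is left intact after destructuring:

```javascript
const sourceArray = [ "a", "b", "c" ];
const [ firstLetter, secondLetter ] = sourceArray;

console.log( firstLetter, secondLetter );
// Result: a b

// Destructuring didn’t modify the source:
console.log( sourceArray.length );
// Result: 3
```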

Elements can be skipped over by omitting an identifier between commas, the way you’d leave out a value when creating a sparse array:

const theArray = [ true, false, true ];
const [ firstElement, , thirdElement ] = theArray;

console.log( firstElement );
// Result: true

console.log( thirdElement );
// Result: true

There are a couple of differences in how you destructure an object using binding pattern destructuring. The identifiers are wrapped in a pair of curly braces rather than brackets; sensible enough, considering we’re dealing with objects. In the simplest version of this syntax, the identifiers you use have to correspond to the property keys:

const theObject = {
  "theProperty" : true,
  "theOtherProperty" : false
};
const { theProperty, theOtherProperty } = theObject;

console.log( theProperty );
// result: true

console.log( theOtherProperty );
// result: false

An array is an indexed collection, and indexed collections are intended to be used in ways where the specific iteration order matters — for example, with destructuring here, where we can assume that the identifiers we specify will correspond to the elements in the array, in sequential order.

That’s not the case with an object, which is a keyed collection — in strict technical terms, just a big ol’ pile of properties that are intended to be defined and accessed in whatever order, based on their keys. No big deal in practice, though; odds are, you’d want to use the property keys’ identifier names (or something very similar) as your identifiers anyway. Simple and effective, but the drawback is that it assumes a given… well, structure to the object being destructured.

This brings us to the alternate syntax, which looks absolutely wild, at least to me. The syntax is object literal shaped, but very, very different — so before you look at this, briefly forget everything you know about object literals:

const theObject = {
  "theProperty" : true,
  "theOtherProperty" : false
};
const { theProperty : theIdentifier, theOtherProperty : theOtherIdentifier } = theObject;

console.log( theIdentifier );
// result: true

console.log( theOtherIdentifier );
// result: false

You’re still not thinking about object literal notation, right? Because if you were, wow would that syntax look strange. I mean, a reference to the property to be destructured where a key would be and identifiers where the values would be?

Fortunately, we’re not thinking about object literal notation even a little bit right now, so I don’t have to write that previous paragraph in the first place. Instead, we can frame it like this: within the parentheses-wrapped curly braces, zero or more comma-separated instances of the property key with the value we want, followed by a colon, followed by the identifier we want that property’s value assigned to. After the curly braces, an assignment operator (=) and the object to be destructured. That’s all a lot in print, I know, but you’ll get a feel for it after using it a few times.

The second approach to destructuring is assignment pattern destructuring. With assignment patterns, the value of each destructured property is assigned to a specific target — like a variable we declared with let (or, technically, var), a property of another object, or an element in an array.

When working with arrays and variables declared with let, assignment pattern destructuring really just adds a step where you declare the variables that will end up containing the destructured values:

const theArray = [ true, false ];
let theFirstIdentifier;
let theSecondIdentifier;

[ theFirstIdentifier, theSecondIdentifier ] = theArray;

console.log( theFirstIdentifier );
// true

console.log( theSecondIdentifier );
// false

This gives you the same end result as you’d get using binding pattern destructuring, like so:

const theArray = [ true, false ];

let [ theFirstIdentifier, theSecondIdentifier ] = theArray;

console.log( theFirstIdentifier );
// true

console.log( theSecondIdentifier );
// false

Binding pattern destructuring will allow you to use const from the jump, though:

const theArray = [ true, false ];

const [ theFirstIdentifier, theSecondIdentifier ] = theArray;

console.log( theFirstIdentifier );
// true

console.log( theSecondIdentifier );
// false

Now, if you wanted to use those destructured values to populate another array or the properties of an object, you would hit a predictable double-declaration wall when using binding pattern destructuring:

// Error
const theArray = [ true, false ];
let theResultArray = [];

let [ theResultArray[1], theResultArray[0] ] = theArray;
// Uncaught SyntaxError: redeclaration of let theResultArray

We can’t make let/const/var do anything but create variables; that’s their entire deal. In the example above, the first part of the line is interpreted as let theResultArray, and we get an error: theResultArray was already declared.

No such issue when we’re using assignment pattern destructuring:

const theArray = [ true, false ];
let theResultArray = [];

[ theResultArray[1], theResultArray[0] ] = theArray;

console.log( theResultArray );
// result: Array [ false, true ]
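Assignment pattern destructuring also enables a classic trick that binding patterns can't: swapping two existing variables without a temporary third. This isn't from the examples above, just a well-worn idiom built on the same syntax:

```javascript
let theFirstIdentifier = "A string.";
let theSecondIdentifier = 100;

// Both targets already exist, so this is pure assignment; no declaration needed:
[ theFirstIdentifier, theSecondIdentifier ] = [ theSecondIdentifier, theFirstIdentifier ];

console.log( theFirstIdentifier );
// Result: 100

console.log( theSecondIdentifier );
// Result: A string.
```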

Once again, this syntax applies to objects as well, with a few little catches:

const theObject = {
  "theProperty" : true,
  "theOtherProperty" : false
};
let theProperty;
let theOtherProperty;

({ theProperty, theOtherProperty } = theObject );

console.log( theProperty );
// true

console.log( theOtherProperty );
// false

You’ll notice a pair of disambiguating parentheses around the line where we’re doing the destructuring. You’ve seen this before: without the grouping operator, a pair of curly braces in a context where a statement is expected is assumed to be a block statement, and you get a syntax error:

// Error
const theObject = {
  "theProperty" : true,
  "theOtherProperty" : false
};
let theProperty;
let theOtherProperty;

{ theProperty, theOtherProperty } = theObject;
// Uncaught SyntaxError: expected expression, got '='

So far this isn’t doing anything that binding pattern destructuring couldn’t. We’re using identifiers that match the property keys, but any identifier will do, if we use the alternate object destructuring syntax:

const theObject = {
  "theProperty" : true,
  "theOtherProperty" : false
};
let theFirstIdentifier;
let theSecondIdentifier;

({ theProperty: theFirstIdentifier, theOtherProperty: theSecondIdentifier } = theObject );

console.log( theFirstIdentifier );
// true

console.log( theSecondIdentifier );
// false

Once again, nothing binding pattern destructuring couldn’t do. But unlike binding pattern destructuring, any kind of assignment target will work with assignment pattern destructuring:

const theObject = {
  "theProperty" : true,
  "theOtherProperty" : false
};
let resultObject = {};

({ theProperty : resultObject.resultProp, theOtherProperty : resultObject.otherResultProp } = theObject );

console.log( resultObject );
// result: Object { resultProp: true, otherResultProp: false }

With either syntax, you can set "default" values that will be used if an element or property isn't present at all, or if it contains an explicit undefined value:

const theArray = [ true, undefined ];
const [ firstElement, secondElement = "A string.", thirdElement = 100 ] = theArray;

console.log( firstElement );
// Result: true

console.log( secondElement );
// Result: A string.

console.log( thirdElement );
// Result: 100

const theObject = {
  "theProperty" : true,
  "theOtherProperty" : undefined
};
const { theProperty, theOtherProperty = "A string.", aThirdProperty = 100 } = theObject;

console.log( theProperty );
// Result: true

console.log( theOtherProperty );
// Result: A string.

console.log( aThirdProperty );
// Result: 100
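For what it's worth, defaults compose with the renaming syntax we saw earlier: key, colon, identifier, equals sign, fallback. A sketch of my own, just to show the two features stacking:

```javascript
const theObject = {
  "theProperty" : undefined
};

// Rename `theProperty` with a fallback value; the absent `aThirdProperty`
// gets renamed and defaulted in the same breath:
const { theProperty : theIdentifier = "A string.", aThirdProperty : theOtherIdentifier = 100 } = theObject;

console.log( theIdentifier );
// Result: A string.

console.log( theOtherIdentifier );
// Result: 100
```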

Snazzy stuff for sure, but where this syntax really shines is when you’re unpacking nested arrays and objects. Naturally, there’s nothing stopping you from unpacking an object that contains an object as a property value, then unpacking that inner object separately:

const theObject = {
  "theProperty" : true,
  "theNestedObject" : {
    "anotherProperty" : true,
    "stillOneMoreProp" : "A string."
  }
};

const { theProperty, theNestedObject } = theObject;
const { anotherProperty, stillOneMoreProp = "Default string." } = theNestedObject;

console.log( stillOneMoreProp );
// Result: A string.

But we can make this way more concise. We don’t have to unpack the nested object separately — we can unpack it as part of the same binding:

const theObject = {
  "theProperty" : true,
  "theNestedObject" : {
    "anotherProperty" : true,
    "stillOneMoreProp" : "A string."
  }
};
const { theProperty, theNestedObject : { anotherProperty, stillOneMoreProp } } = theObject;

console.log( stillOneMoreProp );
// Result: A string.

From an object within an object to three easy-to-use constants in a single line of code.
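One detail hiding in that line: theNestedObject itself never gets bound; only the identifiers in the innermost braces do. If you want both the nested object and its unpacked contents, the same key can legally appear twice in the pattern (a sketch of my own):

```javascript
const theObject = {
  "theNestedObject" : {
    "anotherProperty" : true
  }
};

// `theNestedObject` appears twice: once as a plain binding, once as a pattern:
const { theNestedObject, theNestedObject : { anotherProperty } } = theObject;

console.log( theNestedObject );
// Result: Object { anotherProperty: true }

console.log( anotherProperty );
// Result: true
```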

We can unpack mixed data structures just as succinctly:

const theObject = [{
  "aProperty" : true,
},{
  "anotherProperty" : "A string."
}];
const [{ aProperty }, { anotherProperty }] = theObject;

console.log( anotherProperty );
// Result: A string.

A dense syntax, there’s no question of that — bordering on “opaque,” even. It might take a little experimentation to get the hang of this one, but once it clicks, destructuring assignment gives you an incredibly quick and convenient way to break down complex data structures without spinning up a bunch of intermediate data structures and values.
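As a closing aside that the examples above don't cover: these same binding patterns work anywhere bindings are created, including function parameter lists. A sketch, with names of my own invention:

```javascript
// The parameter list destructures the argument object on the way in:
function describePost( { data : { title }, body } ) {
  return `${ title } (${ body.length } characters)`;
}

const thePost = {
  "body" : "A string.",
  "data" : {
    "title" : "The title"
  }
};

console.log( describePost( thePost ) );
// Result: The title (9 characters)
```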

Rest Properties

In all the examples above we’ve been working with known quantities: “turn these X properties or elements into Y variables.” That doesn’t match the reality of breaking down a huge, tangled object, jam-packed array, or both.

In the context of a destructuring assignment, an ellipsis (that's three periods, ..., not the single … character, for my fellow Unicode enthusiasts) followed by an identifier (to the tune of ...theIdentifier) represents a rest property — an identifier that will represent the rest of the array or object being unpacked. This rest property will contain all the remaining elements or properties beyond the ones we've explicitly unpacked to their own identifiers, all bundled up in the same kind of data structure as the one we're unpacking:

const theArray = [ false, true, false, true, true, false ];
const [ firstElement, secondElement, ...remainingElements ] = theArray;

console.log( remainingElements );
// Result: Array(4) [ false, true, true, false ]
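One rule to keep in mind: a rest property means "everything else," so it can only appear in the final position of the pattern. A quick sketch:

```javascript
const theArray = [ false, true, false ];

// const [ ...restElements, lastElement ] = theArray;
// SyntaxError (exact wording varies by engine): a rest element
// can only appear in the final position.

const [ firstElement, ...remainingElements ] = theArray;

console.log( remainingElements );
// Result: Array(2) [ true, false ]
```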

Generally I try to avoid examples that veer too close to real-world use, since they can get a little convoluted and I don't want to distract from the core ideas. In this case, though, "convoluted" is exactly what we're looking to work around. So let's use an object near and dear to my heart: (part of) the data representing the very first newsletter I sent out back when I started writing this course.

const firstPost = {
  "id": "mat-update-1.md",
  "slug": "mat-update-1",
  "body": "Hey, great to meet you, everybody. I'm Mat — \"Wilto\" is good too — and I'm here to teach you JavaScript. Not just what JavaScript is or what JavaScript does, but the *how* and the *why* of JavaScript. The weird stuff. The *deep magic_.\n\nWell, okay, I'm not *currently* here to teach you JavaScript, but I will be soon. Right now I'm just getting things in order for the course — planning, outlining, polishing the fancy semicolons that I only take out when I'm having company over, writing like 5,000 words about `this` as a warm-up that completely got away from me, that kind of thing.",
  "collection": "emails",
  "data": {
    "title": "Meet your Instructor",
    "pubDate": "2025-05-08T09:55:00.630Z",
    "headingSize": "large",
    "showUnsubscribeLink": true,
    "stream": "javascript-for-everyone"
  }
};

Quite a bit going on in there. For purposes of this exercise, assume this is coming in from an external API the way it is over on my website — this isn’t an object we control. Sure, we can work with that object directly, but that’s a little unwieldy when all we need is, for example, the newsletter title and body:

const firstPost = {
  "id": "mat-update-1.md",
  "slug": "mat-update-1",
  "body": "Hey, great to meet you, everybody. I'm Mat — \"Wilto\" is good too — and I'm here to teach you JavaScript. Not just what JavaScript is or what JavaScript does, but the *how* and the *why* of JavaScript. The weird stuff. The *deep magic_.\n\nWell, okay, I'm not *currently* here to teach you JavaScript, but I will be soon. Right now I'm just getting things in order for the course — planning, outlining, polishing the fancy semicolons that I only take out when I'm having company over, writing like 5,000 words about `this` as a warm-up that completely got away from me, that kind of thing.",
  "data": {
    "title": "Meet your Instructor",
    "pubDate": "2025-05-08T09:55:00.630Z",
    "headingSize": "large",
    "showUnsubscribeLink": true,
    "stream": "javascript-for-everyone"
  }
};

const { data : { title }, body } = firstPost;

console.log( title );
// Result: Meet your Instructor

console.log( body );
/* Result:
Hey, great to meet you, everybody. I'm Mat — "Wilto" is good too — and I'm here to teach you JavaScript. Not just what JavaScript is or what JavaScript does, but the *how* and the *why* of JavaScript. The weird stuff. The *deep magic_.

Well, okay, I'm not *currently* here to teach you JavaScript, but I will be soon. Right now I'm just getting things in order for the course — planning, outlining, polishing the fancy semicolons that I only take out when I'm having company over, writing like 5,000 words about `this` as a warm-up that completely got away from me, that kind of thing.
*/

That’s tidy; a couple dozen characters and we have exactly what we need from that tangle. I know I’m not going to need those id or slug properties to publish it on my own website, so I omit those altogether — but that inner data object has a conspicuous ring to it, like maybe one could expect it to contain other properties associated with future posts. I don’t know what those properties will be, but I know I’ll want them all packaged up in a way where I can easily make use of them. I want the firstPost.data.title property in isolation, but I also want an object containing all the rest of the firstPost.data properties, whatever they end up being:

const firstPost = {
  "id": "mat-update-1.md",
  "slug": "mat-update-1",
  "body": "Hey, great to meet you, everybody. I'm Mat — \"Wilto\" is good too — and I'm here to teach you JavaScript. Not just what JavaScript is or what JavaScript does, but the *how* and the *why* of JavaScript. The weird stuff. The *deep magic_.\n\nWell, okay, I'm not *currently* here to teach you JavaScript, but I will be soon. Right now I'm just getting things in order for the course — planning, outlining, polishing the fancy semicolons that I only take out when I'm having company over, writing like 5,000 words about `this` as a warm-up that completely got away from me, that kind of thing.",
  "data": {
    "title": "Meet your Instructor",
    "pubDate": "2025-05-08T09:55:00.630Z",
    "headingSize": "large",
    "showUnsubscribeLink": true,
    "stream": "javascript-for-everyone"
  }
};

const { data : { title, ...metaData }, body } = firstPost;

console.log( title );
// Result: Meet your Instructor

console.log( metaData );
// Result: Object { pubDate: "2025-05-08T09:55:00.630Z", headingSize: "large", showUnsubscribeLink: true, stream: "javascript-for-everyone" }

Now we’re talking. Now we have a metaData object containing anything and everything else in the data property of the object we’ve been handed.

Listen. If you’re anything like me, even if you haven’t quite gotten your head around the syntax itself, you’ll find that there’s something viscerally satisfying about the binding in the snippet above. All that work done in a single line of code. It’s terse, it’s elegant — it takes the complex and makes it simple. That’s the good stuff.

And yet: maybe you can hear it too, ever-so-faintly? A quiet voice, way down in the back of your mind, that asks “I wonder if there’s an even better way.” For what we’re doing here, in isolation, this solution is about as good as it gets — but as far as the wide world of JavaScript goes: there’s always a better way. If you can’t hear it just yet, I bet you will by the end of the course.

Anyone who writes JavaScript is a JavaScript developer; there are no two ways about that. But the satisfaction of creating order from chaos in just a few keystrokes, and the drive to find even better ways to do it? Those are the makings of a JavaScript developer to be reckoned with.


You can do more than just “get by” with JavaScript; I know you can. You can understand JavaScript, all the way down to the mechanisms that power the language — the gears and springs that move the entire “interactive” layer of the web. To really understand JavaScript is to understand the boundaries of how users interact with the things we’re building, and broadening our understanding of the medium we work with every day sharpens all of our skills, from layout to accessibility to front-end performance to typography. Understanding JavaScript means less “I wonder if it’s possible to…” and “I guess we have to…” in your day-to-day decision making, even if you’re not the one tasked with writing it. Expanding our skillsets will always make us better — and more valued, professionally — no matter our roles.

JavaScript is a tricky thing to learn; I know that all too well — that’s why I wrote JavaScript for Everyone. You can do this, and I’m here to help.

I hope to see you there.


JavaScript for Everyone: Destructuring originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

[syndicated profile] adactio_feed

When I was summing up my reading habits in 2022 I said:

I think the lesson this year is: you can’t go wrong with Octavia E. Butler or Ursula K. Le Guin.

I stand by that. But maybe I’d recommend some Ursula K. Le Guin books more than others.

A Fisherman Of The Inland Sea is a good collection of short stories. But it’s not a great collection of short stories. If you’re looking for a great collection of short stories, read The Unreal and the Real.

When it comes to Ursula K. Le Guin, the standard is always going to be high so even when the stories aren’t her best, they’re still better than the output of most other sci-fi writers.

My slight disappointment with A Fisherman Of The Inland Sea isn’t so much with the stories themselves but with the collection.

To begin with, there are four unconnected short stories. That’s fine. It’s a short story collection after all.

But then after that there are three interconnected short stories from the Hainish cycle. They’re the best part of this book. That just makes the preceding stories look like filler.

If those three stories had been released as little collection, it would be a miniature classic. As it stands, you get more of a mixed bag.

But still, it’s worth reading this collection for those three stories alone.

Buy this book

That was Web Day Out

Mar. 17th, 2026 11:22 am
[syndicated profile] adactio_feed

On March 12th, 1989, Tim Berners-Lee submitted Information Management: A Proposal. This would form the basis of what became the World Wide Web.

On March 12th, 2026, Web Day Out happened in Brighton.

Coincidence?

Yes. Yes, it is a coincidence. But it’s a pretty nice coincidence, you must admit.

It was a day dedicated to the World Wide Web. Not just the foundational languages of the web—HTML, CSS, and JavaScript—but also the foundational ideas of the web.

“Share what you know!” That was the original motto of the World Wide Web project. That was the motto of Web Day Out too.

Look, I’m biased because I put the line-up together but honestly, all of the speakers were superb! So much knowledge delivered in such entertaining fashion.

I had a blast. And I’ll give myself a little pat on the back for how I grouped the talks into rhyming couplets:

Browsers: Jemima talked about what you can do with just HTML and CSS these days, and Rachel followed up with how to come up with your own browser support strategy.

Performance: Aleth made the case for multi-page progressive web apps that work under any network conditions, and Harry followed up with an impassioned rant about how much time and energy has been wasted on over-engineered single-page apps that ignore what browsers can do.

Styling: Manuel walked us through a whole new approach to writing modern CSS, and Rich followed up with a whirlwind tour of all the great typographic possibilities in CSS.

Standards: Jake took us on the standards journey to customisable select elements, including anchor positioning and popovers, and then Lola showed us exactly what it takes to add a new feature to a web browser.

Everything flowed together really nicely.

I was a little apprehensive going into Web Day Out that it would just be preaching to the converted. And sure, there were plenty of veteran devs there who already knew the value of progressive enhancement and making the most of web standards. But I was gratified to also see lots of younger faces in the crowd.

I was talking to one young developer afterwards and she told me what an eye-opening experience it was. Whereas before she would have defaulted to a framework-driven single-page app for everything, now she’s got the knowledge to make an appropriate architectural choice.

Mission accomplished!

If you couldn’t make it to Web Day Out and you want to experience some RAMO, here’s the chatter on Bluesky and Mastodon, lovely photos by Marc, a post by Dave, and a lovely post by Amber.

Thank you so much to everyone who came. I think you’ll agree it was a most excellent day out.

[syndicated profile] csstricks_feed

Posted by Daniel Schwarz

For this issue of What’s !important, we have a healthy balance of old CSS that you might’ve missed and new CSS that you don’t want to miss. This includes random(), random-item(), folded corners using clip-path, backdrop-filter, font-variant-numeric: tabular-nums, the Popover API, anchored container queries, anchor positioning in general, DOOM in CSS, customizable <select>, :open, scroll-triggered animations, <toolbar>, and somehow, more.

Let’s dig in.

Understanding random() and random-item()

Alvaro Montoro explains how the random() and random-item() CSS functions work. As it turns out, they’re actually quite complex:

width: random(--w element-shared, 1rem, 2rem);
color: random-item(--c, red, orange, yellow, darkkhaki);

Creating folded corners using clip-path

My first solution to folded corners involved actual images. Not a great solution, but that was the way to do it in the noughties. Since then we’ve been able to do it with box-shadow, but Kitty Giraudel has come up with a CSS clip-path solution that clips a custom shape (hover the kitty to see it in action):

Revisiting backdrop-filter and font-variant-numeric: tabular-nums

Stuart Robson talks about backdrop-filter. It’s not a new CSS property, but it’s very useful and hardly ever talked about. In fact, up until now, I thought that it was for the ::backdrop pseudo-element, but we can actually use it to create all kinds of background effects for all kinds of elements, like this:

font-variant-numeric: tabular-nums is another one. This property and value prevents layout shift when numbers change dynamically, as they do with live clocks, counters, timers, financial tables, and so on. Amit Merchant walks you through it with this demo:

Getting started with the Popover API

Godstime Aburu does a deep dive on the Popover API, a new(ish) but everyday web platform feature that simplifies tooltip and tooltip-like UI patterns, but isn’t without its nuances.

Unraveling yet another anchor positioning quirk

Just another anchor positioning quirk, this time from Chris Coyier. These quirks have been piling up for a while now. We’ve talked about them time and time again, but the thing is, they’re not bugs. Anchor positioning works in a way that isn’t commonly understood, so Chris’ article is definitely worth a read, as are the articles that he references.

Building dynamic toggletips using anchored container queries

In this walkthrough, I demonstrate how to build dynamic toggletips using anchored container queries. Also, I ran into an anchor positioning quirk, so if you’re looking to solidify your understanding of all that, I think the walkthrough will help with that too.

Demo (full effect requires Chrome 143+):

DOOM in CSS

DOOM in CSS. DOOM. In CSS.

DOOM fully rendered in CSS. Every surface is a <div> that has a background image, with a clipping path with 3D transforms applied. Of course CSS does not have a movable camera, so we rotate and translate the scene around the user.

— Niels Leenheer (@html5test.com) Mar 13, 2026 at 20:32

Safari updates, Chrome updates, and Quick Hits you missed

In addition, Chrome will ship every two weeks starting September.

From the Quick Hits reel, you might’ve missed that Font Awesome launched a Kickstarter campaign to transform Eleventy into Build Awesome, cancelled it because their emails failed to send (despite meeting their goal!), and vowed to try again. You can subscribe to the relaunch notification.

Also, <toolbar> is coming along according to Luke Warlow. This is akin to <focusgroup>, which we can actually test in Chrome 146 with the “Experimental Web Platform features” flag enabled.

Right, I’m off to slay some demons in DOOM. Until next time!

P.S. Congratulations to Kevin Powell for making it to 1 million YouTube subs!


What’s !important #7: random(), Folded Corners, Anchored Container Queries, and More originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

[syndicated profile] csstricks_feed

Posted by Zell Liew

When I talk about layouts, I’m referring to how you place items on a page. The CSS properties that are widely used here include:

  • display — often grid or flex nowadays
  • margin
  • padding
  • width
  • height
  • position
  • top, left, bottom, right

I often include border-width as a minor item in this list as well.

At this point, there’s only one thing I’d like to say.

Tailwind is really great for making layouts.

There are many reasons why.

First: Layout styles are highly dependent on the HTML structure

When we shift layouts into CSS, we lose the mental structure and it takes effort to re-establish it. Imagine the following three-column grid in HTML and CSS:

<div class="grid">
  <div class="grid-item"></div>
  <div class="grid-item"></div>
</div>
.grid {
  display: grid;
  grid-template-columns: 2fr 1fr;

  .grid-item:first-child {
    grid-column: span 2
  }

  .grid-item:last-child {
    grid-column: span 1
  }
}
Two blue rectangles side-by-side illustrating a two-column layout where the left column is twice the width of the right column.

Now cover the HTML structure and just read the CSS. As you do that, notice you need to exert effort to imagine the HTML structure that this applies to.

Now imagine the same, but built with Tailwind utilities:

<div class="grid grid-cols-3">
  <div class="col-span-2"></div>
  <div class="col-span-1"></div>
</div>

You might almost begin to see the layout manifest in your eyes without seeing the actual output. It’s pretty clear: A three-column grid, first item spans two columns while the second one spans one column.

But grid-cols-3 and col-span-2 are kinda weird and foreign-looking because we’re trying to parse Tailwind’s method of writing CSS.

Now, watch what happens when we shift the syntax out of the way and use CSS variables to define the layout instead. The layout becomes crystal clear immediately:

<div class="grid-simple [--cols:3]">
  <div class="[--span:2]"> ... </div>
  <div class="[--span:1]"> ... </div>
</div>
Two blue rectangles side-by-side illustrating a two-column layout where the left column is twice the width of the right column.

Same three-column layout.

But it makes the layout much easier to write, read, and visualize. It also has other benefits, but I’ll let you explore its documentation instead of explaining it here.

For now, let’s move on.

Why not use 2fr 1fr?

It makes sense to write 2fr 1fr for a three-column grid, doesn’t it?

.grid {
  display: grid;
  grid-template-columns: 2fr 1fr;
}

Unfortunately, it won’t work. This is because fr is calculated based on the available space after subtracting away the grid’s gutters (or gap).

Since 2fr 1fr only contains two columns, the output from 2fr 1fr will be different from a standard three-column grid.

Three examples of multi-column layouts stacked. The first is an equal three-column layout, the second and third are two columns where the left column is double the width of the right column.
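To put rough numbers on that, here's a back-of-the-envelope sketch. The dimensions are made up purely for illustration; the point is only that the two formulas diverge once a gap is involved:

```javascript
// Hypothetical numbers: a 620px-wide grid with a 20px gap.
const width = 620;
const gap = 20;

// `grid-template-columns: 2fr 1fr`: two tracks, so one gap.
const available2 = width - 1 * gap;               // 600px, split across 3fr
const twoFrTrack = ( available2 / 3 ) * 2;        // 400px

// A true three-column grid: three tracks, two gaps. The first item
// spans two tracks plus the gap between them.
const available3 = width - 2 * gap;               // 580px, split across 3 tracks
const spanTwoItem = ( available3 / 3 ) * 2 + gap; // ≈406.67px

console.log( twoFrTrack, spanTwoItem );
// ≈400 vs. ≈406.67: close, but not the same layout
```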

Alright. Let’s continue with the reasons that make Tailwind great for building layouts.

Second: No need to name layouts

I think layouts are the hardest things to name. I rarely come up with better names than:

  • Number + Columns, e.g. .two-columns
  • Semantic names, e.g. .content-sidebar

But these names don’t do the layout justice. You can’t really tell what’s going on, even if you see .two-columns, because .two-columns can mean a variety of things:

  • Two equal columns
  • Two columns with 1fr auto
  • Two columns with auto 1fr
  • Two columns that span a total of 7 "columns," where the first object takes up 4 columns and the second takes up 3…

You can already see me tripping up when I try to explain that last one there…

Instead of forcing ourselves to name the layout, we can let the numbers do the talking — then the whole structure becomes very clear.

<div class="grid-simple [--cols:7]">
  <div class="[--span:4]"> ... </div>
  <div class="[--span:3]"> ... </div>
</div>

The variables paint a picture.

Example of a seven-column layout above a two-column layout with equally-sized columns.

Third: Layout requirements can change depending on context

A “two-column” layout might have different properties when used in different contexts. Here’s an example.

Two two-by-two layouts next to each other. In both cases, the third item wraps to the second line, followed by the fourth item.

In this example, you can see that:

  • A larger gap is used between the I and J groups.
  • A smaller gap is used within the I and J groups.

The difference in gap sizes is subtle, but it's there to show that the items belong to separate groups.

Here’s an example where this concept is used in a real project. You can see the difference between the gap used within the newsletter container and the gap used between the newsletter and quote containers.

A two-column layout for a newsletter signup component with the form as the left column that is wider than the width of the right column, containing content.

If this sort of layout is only used in one place, we don’t have to create a modifier class just to change the gap value. We can change it directly.

<div class="grid-simple [--cols:2] gap-8">
  <div class="grid-simple gap-4 [--cols:2]"> ... </div>
  <div class="grid-simple gap-4 [--cols:2]"> ... </div>
</div>

Another common example

Let’s say you have a heading for a marketing section. The heading would look nicer if you are able to vary its max-width so the text isn’t orphaned.

text-balance might work here, but this is often nicer with manual positioning.

Without Tailwind, you might write an inline style for it.

<h2 class="h2" style="max-width: 12em;">
  Your subscription has been confirmed
</h2>

With Tailwind, you can specify the max-width in a more terse way:

<h2 class="h2 max-w-[12em]">
  Your subscription has been confirmed
</h2>
A centered heading in black that says Your subscription has been confirmed.

Fourth: Responsive variants can be created on the fly

“At which breakpoint would you change your layouts?” is another factor you’d want to consider when designing your layouts. I shall term this the responsive factor for this section.

Most likely, similar layouts should have the same responsive factor. In that case, it makes sense to group the layouts together into a named layout.

.two-column {
  @apply grid-simple;
  /* --cols: 1 is the default */

  @media (width >= 800px) {
    --cols: 2;
  }
}
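Usage is then just the named class. A minimal sketch, assuming the same sort of grid children as in the earlier examples:

```html
<!-- One column by default; two columns from 800px up -->
<div class="two-column">
  <div> ... </div>
  <div> ... </div>
</div>
```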

However, you may have layouts where you want two-column grids on mobile and a much larger column count on tablets and desktops. This layout style is commonly used in a site footer component.

Since the footer grid is unique, we can add Tailwind’s responsive variants and change the layout on the fly.

<div class="grid-simple [--cols:2] md:[--cols:5]">
  <!-- span defaults to 1, so there's no need to specify it -->
  <div> ... </div>
  <div> ... </div>
  <div> ... </div>
  <div> ... </div>
  <div> ... </div>
  <div> ... </div>
</div>
Example of a footer that adapts to the screen size. It goes from a two-column layout on small screens to a five-column layout on wider screens.

Again, we get to create a new layout on the fly without creating an additional modifier class — this keeps our CSS clean and focused.

How to best use Tailwind

This article is a sample lesson from my course, Unorthodox Tailwind, where I show you how to use Tailwind and CSS synergistically.

Personally, I think the best way to use Tailwind is not to litter your HTML with Tailwind utilities, but to build your own utilities that let you compose layouts and styles easily.

I cover much more of that in the course if you’re interested in finding out more!


4 Reasons That Make Tailwind Great for Building Layouts originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

mark: A photo of Mark kneeling on top of the Taal Volcano in the Philippines. It was a long hike. (Default)
[staff profile] mark posting in [site community profile] dw_maintenance

Happy Saturday!

I'm going to be doing a little maintenance today. It will likely cause a tiny interruption of service (specifically for www.dreamwidth.org) on the order of 2-3 minutes while some settings propagate. If you're on a journal page, that should still work throughout!

If it doesn't work, the rollback plan is pretty quick, I'm just toggling a setting on how traffic gets to the site. I'll update this post if something goes wrong, but don't anticipate any interruption to be longer than 10 minutes even in a rollback situation.

github: shadowy octopus with the head of a robot, emblazoned with the Dreamwidth swirl (Default)
[personal profile] github posting in [site community profile] changelog

Branch: refs/heads/main
Home: https://github.com/dreamwidth/dreamwidth
Commit: badf5eae7a944fed8e8381ee3dff2238633191c6
  https://github.com/dreamwidth/dreamwidth/commit/badf5eae7a944fed8e8381ee3dff2238633191c6
Author: Mark Smith <mark@dreamwidth.org>
Date: 2026-03-13 (Fri, 13 Mar 2026)

Changed paths:
  M etc/docker/web22/Dockerfile
  M etc/docker/web22/config/etc/varnish/dreamwidth.vcl
  M etc/docker/web22/scripts/startup-prod.sh

Log Message:


Replace Apache with Starman behind Varnish on web22

Varnish now forwards to Starman on port 8080 instead of Apache on port 80. This removes Apache from the web22 request path entirely, with Varnish's caching layer helping absorb health check traffic that previously queued behind busy Starman workers.
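A minimal sketch of what the backend change could look like in VCL (an assumption for illustration — the actual dreamwidth.vcl isn’t shown here, and the host is assumed to be local):

```vcl
# Point Varnish at Starman instead of Apache.
backend default {
    .host = "127.0.0.1";   # assumed local backend
    .port = "8080";        # Starman; previously "80" for Apache
}
```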

Co-Authored-By: Claude Opus 4.6 noreply@anthropic.com

To unsubscribe from these emails, change your notification settings at https://github.com/dreamwidth/dreamwidth/settings/notifications

github: shadowy octopus with the head of a robot, emblazoned with the Dreamwidth swirl (Default)
[personal profile] github posting in [site community profile] changelog

Branch: refs/heads/main
Home: https://github.com/dreamwidth/dreamwidth
Commit: 4b5bcf8ad5cda83928da05e87508127b1fdd3a46
  https://github.com/dreamwidth/dreamwidth/commit/4b5bcf8ad5cda83928da05e87508127b1fdd3a46
Author: Mark Smith <mark@dreamwidth.org>
Date: 2026-03-12 (Thu, 12 Mar 2026)

Changed paths:
  M app.psgi
  A cgi-bin/Plack/Middleware/DW/WriteTimeout.pm
  A t/plack-write-timeout.t

Log Message:


Add SO_SNDTIMEO middleware to prevent Starman workers blocking on dead connections

When the ALB closes a connection before Starman finishes writing a response (e.g. due to idle timeout), the worker's write() blocks for 15-30 minutes waiting for TCP retransmits to exhaust. With 10 workers, this quickly deadlocks the entire server.

The new DW::WriteTimeout middleware sets SO_SNDTIMEO on the client socket via psgix.io so that blocked writes fail in seconds instead of minutes, freeing the worker to handle new requests.
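A rough sketch of the idea (an assumption for illustration, not the actual DW::WriteTimeout code — the package name and 5-second timeout here are made up): grab the raw client socket from psgix.io and set a send timeout on it before passing the request along.

```perl
package Plack::Middleware::SendTimeoutSketch;
use strict;
use warnings;
use parent 'Plack::Middleware';
use Socket qw(SOL_SOCKET SO_SNDTIMEO);

sub call {
    my ($self, $env) = @_;

    # psgix.io exposes the raw client socket when the server supports it.
    if (my $io = $env->{'psgix.io'}) {
        # struct timeval: 5 seconds, 0 microseconds (packing is platform-dependent)
        $io->setsockopt(SOL_SOCKET, SO_SNDTIMEO, pack('l!l!', 5, 0))
            or warn "setsockopt(SO_SNDTIMEO) failed: $!";
    }

    return $self->app->($env);
}

1;
```

With the timeout set, a write() to a dead connection fails after seconds rather than blocking until TCP retransmits are exhausted.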

Co-Authored-By: Claude Opus 4.6 noreply@anthropic.com


Page generated Mar. 27th, 2026 03:42 am
Powered by Dreamwidth Studios