Monday 31 May 2021

Serverless Functions: The Secret to Ultra-Productive Front-End Teams

Modern apps place high demands on front-end developers. Web apps require complex functionality, and the lion’s share of that work is falling to front-end devs:

  • building modern, accessible user interfaces
  • creating interactive elements and complex animations
  • managing complex application state
  • meta-programming: build scripts, transpilers, bundlers, linters, etc.
  • reading from REST, GraphQL, and other APIs
  • middle-tier programming: proxies, redirects, routing, middleware, auth, etc.

This list is daunting on its own, but it gets really rough if your tech stack doesn’t optimize for simplicity. A complex infrastructure introduces hidden responsibilities that create risk, slowdowns, and frustration.

Depending on the infrastructure we choose, we may also inadvertently add server configuration, release management, and other DevOps duties to a front-end developer’s plate.

Software architecture has a direct impact on team productivity. Choose tools that avoid hidden complexity to help your teams accomplish more and feel less overloaded.

The sneaky middle tier — where front-end tasks can balloon in complexity

Let’s look at a task I’ve seen assigned to multiple front-end teams: create a simple REST API to combine data from a few services into a single request for the frontend. If you just yelled at your computer, “But that’s not a frontend task!” — I agree! But who am I to let facts hinder the backlog?

An API that’s only needed by the frontend falls into middle-tier programming. For example, if the front end combines the data from several backend services and derives a few additional fields, a common approach is to add a proxy API so the frontend isn’t making multiple API calls and doing a bunch of business logic on the client side.

There’s no clear answer as to which back-end team should own an API like this. Getting it onto another team’s backlog — and getting updates made in the future — can be a bureaucratic nightmare, so the front-end team ends up with the responsibility.

This is a story that ends differently depending on the architectural choices we make. Let’s look at two common approaches to handling this task:

  • Build an Express app on Node to create the REST API
  • Use serverless functions to create the REST API

Express + Node comes with a surprising amount of hidden complexity and overhead. Serverless lets front-end developers deploy and scale the API quickly so they can get back to their other front-end tasks.

Solution 1: Build and deploy the API using Node and Express (and Docker and Kubernetes)

Earlier in my career, the standard operating procedure was to use Node and Express to stand up a REST API. On the surface, this seems relatively straightforward. We can create the whole REST API in a file called server.js:

const express = require('express');

const PORT = 8080;
const HOST = '0.0.0.0';

const app = express();

app.use(express.static('site'));

// simple REST API to load movies by slug
const movies = require('./data.json');

app.get('/api/movies/:slug', (req, res) => {
  const { slug } = req.params;
  const movie = movies.find((m) => m.slug === slug);

  res.json(movie);
});

app.listen(PORT, HOST, () => {
  console.log(`app running on http://${HOST}:${PORT}`);
});

This code isn’t too far removed from front-end JavaScript. There’s a decent amount of boilerplate in here that will trip up a front-end dev if they’ve never seen it before, but it’s manageable.

If we run node server.js, we can visit http://localhost:8080/api/movies/some-movie and see a JSON object with details for the movie with the slug some-movie (assuming you’ve defined that in data.json).
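For reference, data.json is just an array of movie objects; the exact shape doesn’t matter, but something illustrative like this works:

[
  { "slug": "some-movie", "title": "Some Movie", "year": 2021 },
  { "slug": "booper", "title": "Booper", "year": 2020 }
]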

Deployment introduces a ton of extra overhead

Building the API is only the beginning, however. We need to get this API deployed in a way that can handle a decent amount of traffic without falling down. Suddenly, things get a lot more complicated.

We need several more tools:

  • somewhere to deploy this (e.g. DigitalOcean, Google Cloud Platform, AWS)
  • a container to keep local dev and production consistent (i.e. Docker)
  • a way to make sure the deployment stays live and can handle traffic spikes (i.e. Kubernetes)

At this point, we’re way outside front-end territory. I’ve done this kind of work before, but my solution was to copy-paste from a tutorial or Stack Overflow answer.

The Docker config is somewhat comprehensible, but I have no idea if it’s secure or optimized:

FROM node:14
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD [ "node", "server.js" ]

Next, we need to figure out how to deploy the Docker container into Kubernetes. Why? I’m not really sure, but that’s what the back end teams at the company use, so we should follow best practices.

This requires more configuration (all copy-and-pasted). We entrust our fate to Google and come up with Docker’s instructions for deploying a container to Kubernetes.

Our initial task of “stand up a quick Node API” has ballooned into a suite of tasks that don’t line up with our core skill set. The first time I got handed a task like this, I lost several days getting things configured and waiting on feedback from the backend teams to make sure I wasn’t causing more problems than I was solving.

Some companies have a DevOps team to check this work and make sure it doesn’t do anything terrible. Others end up trusting the hivemind of Stack Overflow and hoping for the best.

With this approach, things start out manageable with some Node code, but quickly spiral out into multiple layers of config spanning areas of expertise that are well beyond what we should expect a frontend developer to know.

Solution 2: Build the same REST API using serverless functions

If we choose serverless functions, the story can be dramatically different. Serverless is a great companion to Jamstack web apps, giving front-end developers the ability to handle middle-tier programming without the unnecessary complexity of figuring out how to deploy and scale a server.

There are multiple frameworks and platforms that make deploying serverless functions painless. My preferred solution is to use Netlify since it enables automated continuous delivery of both the front end and serverless functions. For this example, we’ll use Netlify Functions to manage our serverless API.

Using Functions as a Service (a fancy way of describing platforms that handle the infrastructure and scaling for serverless functions) means that we can focus only on the business logic and know that our middle tier service can handle huge amounts of traffic without falling down. We don’t need to deal with Docker containers or Kubernetes or even the boilerplate of a Node server — it Just Works™ so we can ship a solution and move on to our next task.

First, we can define our REST API in a serverless function at netlify/functions/movie-by-slug.js:

const movies = require('./data.json');

exports.handler = async (event) => {
  const slug = event.path.replace('/api/movies/', '');
  const movie = movies.find((m) => m.slug === slug);

  return {
    statusCode: 200,
    body: JSON.stringify(movie),
  };
};

To add the proper routing, we can create a netlify.toml at the root of the project:

[[redirects]]
  from = "/api/movies/*"
  to = "/.netlify/functions/movie-by-slug"
  status = 200

This is significantly less configuration than we’d need for the Node/Express approach. What I prefer about this approach is that the config here is stripped down to only what we care about: the specific paths our API should handle. The rest — build commands, ports, and so on — is handled for us with good defaults.

If we have the Netlify CLI installed, we can run this locally right away with the command ntl dev, which knows to look for serverless functions in the netlify/functions directory.

Visiting http://localhost:8888/api/movies/booper will show a JSON object containing details about the “booper” movie.

So far, this doesn’t feel too different from the Node and Express setup. However, when we go to deploy, the difference is huge. Here’s what it takes to deploy this site to production:

  1. Commit the serverless function and netlify.toml to your repo and push it up to GitHub, Bitbucket, or GitLab
  2. Use the Netlify CLI to create a new site connected to your git repo: ntl init

That’s it! The API is now deployed and capable of scaling on demand to millions of hits. Changes will be automatically deployed whenever they’re pushed to the main repo branch.

You can see this in action at https://serverless-rest-api.netlify.app and check out the source code on GitHub.

Serverless unlocks a huge amount of potential for front-end developers

Serverless functions are not a replacement for all back-ends, but they’re an extremely powerful option for handling middle-tier development. Serverless avoids the unintentional complexity that can cause organizational bottlenecks and severe efficiency problems.

Using serverless functions allows front-end developers to complete middle-tier programming tasks without taking on the additional boilerplate and DevOps overhead that creates risk and decreases productivity.

If our goal is to empower frontend teams to quickly and confidently ship software, choosing serverless functions bakes productivity into the infrastructure. Since adopting this approach as my default Jamstack starter, I’ve been able to ship faster than ever, whether I’m working alone, with other front-end devs, or cross-functionally with teams across a company.





from CSS-Tricks https://ift.tt/3i6z1CI
via IFTTT

Local: Always Getting Better

I’ve been using Local for ages. Four years ago, I wrote about how I got all my WordPress sites running locally on it. I just wanted to give it another high five because it’s still here and still great. In fact, it’s much greater than it was back then.

Disclosure: Flywheel, the makers of Local, sponsor this site, but this post isn’t sponsored. I just wanted to talk about a tool I use. It’s not the only player in town. Even old school MAMP PRO has gotten a lot better and many devs seem to like it. People who live on the command line tend to love Laravel Valet. There is another WordPress host getting in on the game here: DevKinsta.

The core of Local is still very much the same. It’s an app you run locally (Windows, Mac, or Linux) and it helps you spin up WordPress sites incredibly easily. Just a few choices and clicks and it’s going. This is particularly useful because WordPress has dependencies that make it run (PHP, MySQL, a web server, etc) and while you can absolutely do that by hand or with other tools, Local does it in a containerized way that doesn’t mess with your machine and can help you run locally with settings that are close to or entirely match your production site.

That stuff has always been true. Here are things that are new, compared to my post from four years ago!

  • Sites start up nearly instantaneously. Maybe a year or so ago, Local had a beta build they dubbed Local “Lightning” because it was something of a re-write that made it way faster. Now it’s just how Local works, and it’s fast as heck.
  • You can pull and push sites to production (and/or staging) very easily. Back then, you could pull, I think, but not push. I still wire up my own deployment because I usually want it to be Git-based, but the pulling is awfully handy. Like, you sit down to work on a site, and first thing, you can just yank down a copy of production so you’re working with exactly what is live. That’s how I work anyway. I know that many people work other ways. You could have your local or staging environment be the source of truth and do a lot more pushing than pulling.
  • Instant reload. This is refreshing for my little WordPress sites where I didn’t even bother to spin up a build process or Sass or anything. Usually, those build processes also help with live reloading, so it’s tempting to reach for them just for that, but no longer needed here. When I do need a build process, I’ll often wire up Gulp, but also CodeKit still works great and its server can proxy Local’s server just fine.
  • One-click admin login. This is actually the feature that inspired me to write this post. Such a tiny quality of life thing. There is a button that says Admin. You can click that and, rather than just taking you to the login screen, it auto-logs you in as a particular admin user. SO NICE.
  • There is a plugin system. My back-end friends got me on TablePlus, so I love that there is an extension that allows me to one-click open my WordPress DBs in TablePlus. There is also an image optimizer plugin, which scans the whole site for images it can make smaller. I just used that the other day because might as well.

That’s not comprehensive of course, it’s just a smattering of features that demonstrate how this product started good and keeps getting better.

Bonus: I think it’s classy how they shout out to the open source shoulders they stand on:





from CSS-Tricks https://ift.tt/3wKG9Zp
via IFTTT

8 Ways to Champion Animals in Your Local Business Marketing Strategy

The majority of American households now include pets, and recent global quarantines have only brought us closer to them. In today’s blog, Miriam outlines eight ways you can honor the relationships your customers have with their animals, plus tips for weaving those efforts into your local business marketing strategy.



from The Moz Blog https://ift.tt/3uFUDIt
via IFTTT

Friday 28 May 2021

Dynamic Favicons for WordPress

Typically, a single favicon is used across a whole domain. But there are times you wanna step it up with different favicons depending on context. A website might change the favicon to match the content being viewed. Or a site might allow users to personalize their theme colors, and those preferences are reflected in the favicon. Maybe you’ve seen favicons that attempt to alert the user of some event.

Multiple favicons can technically be managed by hand — Chris has shown us how he uses two different favicons for development and production. But when you reach the scale of dozens or hundreds of variations, it’s time to dynamically generate them.

This was the situation I encountered on a recent WordPress project for a directory of colleges and universities. (I previously wrote about querying nearby locations for the same project.) When viewing a school’s profile, we wanted the favicon to use a school’s colors rather than our default blue, to give it that extra touch.

With over 200 schools in the directory and climbing, we needed to go dynamic. Fortunately, we already had custom meta fields storing data on each school’s visual identity. This included school colors, which were being used in the stylesheet. We just needed a way to apply these custom meta colors to a dynamic favicon.

In this article, I’ll walk you through our approach and some things to watch out for. You can see the results in action by viewing different schools.

Each favicon is a different color in the tabs based on the school that is selected.

SVG is key

Thanks to improved browser support for SVG favicons, implementing dynamic favicons is much easier than in days past. In contrast to PNG (or the antiquated ICO format), SVG relies on markup to define vector shapes. This makes them lightweight, scalable, and, best of all, receptive to all kinds of fun.

The first step is to create your favicon in SVG format. It doesn’t hurt to also run it through an SVG optimizer to get rid of the cruft afterwards. This is what we used in the school directory:

Hooking into WordPress

Next, we want to add the favicon link markup in the HTML head. How to do this is totally up to you. In the case of WordPress, it could be added to the header template of a child theme or echoed through a wp_head() action.

function ca_favicon() {
  if ( is_singular( 'school' ) ) {
    $post_id = get_the_ID();
    $color = get_post_meta( $post_id, 'color', true );

    if ( isset( $color ) ) {
      $color = ltrim( $color, '#' ); // remove the hash
      echo '<link rel="icon" href="' . plugins_url( 'images/favicon.php?color=' . $color, __FILE__ ) . '" type="image/svg+xml" sizes="any">';
    }
  }
}
add_action( 'wp_head', 'ca_favicon' );

Here we’re checking that the post type is school, and grabbing the school’s color metadata we’ve previously stored using get_post_meta(). If we do have a color, we pass it into favicon.php through the query string.

From PHP to SVG

In a favicon.php file, we start by setting the content type to SVG. Next, we save the color value that’s been passed in, or use the default color if there isn’t one.

Then we echo the large, multiline chunk of SVG markup using PHP’s heredoc syntax (useful for templating). Variables such as $color are expanded when using this syntax.

Finally, we make a couple modifications to the SVG markup. First, classes are assigned to the color-changing elements. Second, a style element is added just inside the SVG element, declaring the appropriate CSS rules and echo-ing the $color.

Instead of a <style> element, we could alternatively replace the default color with $color wherever it appears if it’s not used in too many places.

<?php
header( 'Content-Type: image/svg+xml' );

$color = $_GET[ 'color' ] ?? '065281';
$color = sanitize_hex_color_no_hash( $color );

echo <<<EOL
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" width="1000" height="1000">
  <style type="text/css">
  <![CDATA[
  .primary {
    fill: #$color;
  }
  .shield {
    stroke: #$color;
  }
  ]]>
  </style>
  <defs>
    <path id="a" d="M0 34L318 0l316 34v417.196a97 97 0 01-14.433 50.909C483.553 722.702 382.697 833 317 833S150.447 722.702 14.433 502.105A97 97 0 010 451.196V34z"/>
  </defs>
  <g fill="none" fill-rule="evenodd">
    <g transform="translate(183 65)">
      <mask id="b" fill="#fff">
        <use xlink:href="#a"/>
      </mask>
      <use fill="#FFF" xlink:href="#a"/>
      <path class="primary" mask="url(#b)" d="M317-37h317v871H317z"/>
      <path class="primary" mask="url(#b)" d="M0 480l317 30 317-30v157l-317-90L0 517z"/>
      <path fill="#FFF" mask="url(#b)" d="M317 510l317-30v37l-317 30z"/>
    </g>
    <g fill-rule="nonzero">
      <path class="primary" d="M358.2 455.2c11.9 0 22.633-.992 32.2-2.975 9.567-1.983 18.375-4.9 26.425-8.75 8.05-3.85 15.458-8.458 22.225-13.825 6.767-5.367 13.3-11.433 19.6-18.2l-34.3-34.65c-9.567 8.867-19.192 15.867-28.875 21-9.683 5.133-21.525 7.7-35.525 7.7-10.5 0-20.125-2.042-28.875-6.125s-16.217-9.625-22.4-16.625-11.025-15.167-14.525-24.5-5.25-19.25-5.25-29.75v-.7c0-10.5 1.75-20.358 5.25-29.575 3.5-9.217 8.4-17.325 14.7-24.325 6.3-7 13.825-12.483 22.575-16.45 8.75-3.967 18.258-5.95 28.525-5.95 12.367 0 23.508 2.45 33.425 7.35 9.917 4.9 19.658 11.667 29.225 20.3l34.3-39.55a144.285 144.285 0 00-18.2-15.4c-6.533-4.667-13.65-8.633-21.35-11.9-7.7-3.267-16.275-5.833-25.725-7.7-9.45-1.867-19.892-2.8-31.325-2.8-18.9 0-36.167 3.325-51.8 9.975-15.633 6.65-29.05 15.75-40.25 27.3s-19.95 24.967-26.25 40.25c-6.3 15.283-9.45 31.675-9.45 49.175v.7c0 17.5 3.15 33.95 9.45 49.35 6.3 15.4 15.05 28.758 26.25 40.075 11.2 11.317 24.5 20.242 39.9 26.775 15.4 6.533 32.083 9.8 50.05 9.8z"/>
      <path fill="#FFF" d="M582.35 451l22.4-54.95h103.6l22.4 54.95h56.35l-105-246.75h-49.7L527.4 451h54.95zM689.1 348.45H624L656.55 269l32.55 79.45z"/>
    </g>
    <path class="shield" stroke-width="30" d="M183 99l318-34 316 34v417.196a97 97 0 01-14.433 50.909C666.553 787.702 565.697 898 500 898S333.447 787.702 197.433 567.105A97 97 0 01183 516.196V99h0z"/>
  </g>
</svg>
EOL;
?>

With that, you’ve got a dynamic favicon working on your site.

Security considerations

Of course, blindly echo-ing URL parameters opens you up to hacks. To mitigate these, we should sanitize all of our inputs.

In this case, we’re only interested in values that match the 3-digit or 6-digit hex color format. We can include a function like WordPress’s own sanitize_hex_color_no_hash() to ensure only colors are passed in.

function sanitize_hex_color( $color ) {
  if ( '' === $color ) {
    return '';
  }

  // 3 or 6 hex digits, or the empty string.
  if ( preg_match( '|^#([A-Fa-f0-9]{3}){1,2}$|', $color ) ) {
    return $color;
  }
}

function sanitize_hex_color_no_hash( $color ) {
  $color = ltrim( $color, '#' );

  if ( '' === $color ) {
    return '';
  }

  return sanitize_hex_color( '#' . $color ) ? $color : null;
}

You’ll want to add your own checks based on the values you want passed in.

Caching for better performance

Browsers cache SVGs, but this benefit is lost for PHP files by default. This means each time the favicon is loaded, your server’s being hit.

To reduce server strain and improve performance, it’s essential that you explicitly cache this file. You can configure your server, set up a page rule through your CDN, or add a cache control header to the very top of favicon.php like so:

header( 'Cache-Control: public, max-age=604800' );  // 604,800 seconds or 1 week

In our tests, with no caching, our 1.5 KB SVG file took about 300 milliseconds to process on the first load, and about 100 milliseconds on subsequent loads. That’s pretty lousy. But with caching, we brought this down to 25 ms from CDN on first load, and 1 ms from browser cache on later loads — as good as a plain old SVG.

Browser support

If we were done there, this wouldn’t be web development. We still have to talk browser support.

As mentioned before, modern browser support for SVG favicons is solid, and fully-supported in current versions of Chrome, Firefox, and Edge.

SVG favicons have arrived… except in Safari.

One caveat is that Firefox requires the attribute type="image/svg+xml" in the favicon declaration for it to work. The other browsers are more forgiving, but it’s just good practice. You should include sizes="any" while you’re at it.

<link rel="icon" href="path/to/favicon.svg" type="image/svg+xml" sizes="any">

Safari

Safari doesn’t support SVG favicons as of yet, outside of the mask icon feature intended for pinned tabs. In my experimentation, this was buggy and inconsistent. It didn’t handle complex shapes or colors well, and cached the same icon across the whole domain. Ultimately we decided not to bother and just use a fallback with the default blue fill until Safari improves support.

Fallbacks

As solid as SVG favicon support is, it’s still not 100%. So be sure to add fallbacks. We can set an additional favicon for when SVG icons aren’t supported with the rel="alternate icon" attribute:

<link rel="icon" href="path/to/favicon.svg" type="image/svg+xml" sizes="any">
<link rel="alternate icon" href="path/to/favicon.png" type="image/png">

To make the site even more bulletproof, you can also drop the eternal favicon.ico in your root.

Going further

We took a relatively simple favicon and swapped one color for another. But taking this dynamic approach opens the door to so much more: modifying multiple colors, other properties like position, entirely different shapes, and animations.

For instance, here’s a demo I’ve dubbed Favicoin. It plots cryptocurrency prices in the favicon as a sparkline.

Implementing dynamic favicons like this isn’t limited to WordPress or PHP; whatever your stack, the same principles apply. Heck, you could even achieve this client-side with data URLs and JavaScript… not that I recommend it for production.
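If you’re curious, a minimal client-side sketch of that last idea (the markup and values here are purely illustrative, and this isn’t the approach used above) could look something like this:

const color = '#065281'; // could come from user preferences, an API, etc.
const svg = `<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 16 16">
  <circle cx="8" cy="8" r="8" fill="${color}"/>
</svg>`;

// Point the favicon link at the generated SVG as a data URL
const link = document.querySelector('link[rel="icon"]') || document.createElement('link');
link.rel = 'icon';
link.type = 'image/svg+xml';
link.href = `data:image/svg+xml,${encodeURIComponent(svg)}`;
document.head.appendChild(link);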

But one thing’s for sure: we’re bound to see creative applications of SVG favicons in the future. Have you seen or created your own dynamic favicons? Do you have any tips or tricks to share?





from CSS-Tricks https://ift.tt/2SCLW4F
via IFTTT

SEO and Accessibility: Technical SEO [Series Part 3]

We hope you’ve enjoyed this series on SEO and accessibility. In the final installment, Cooper shows you how the technical SEO strategies you implement across your site can help make it more perceivable, operable, understandable, and robust.



from The Moz Blog https://ift.tt/34poSZJ
via IFTTT

Thursday 27 May 2021

To $ or Not to $: Displaying Terminal Code Snippets

It’s very popular to put a $ on lines that are intended to be a command in code documentation that involves the terminal (i.e. the command line).

Like this:

$ brew install somepackage

The point of that is that it mimics the prompt that you (may) see on your command line. Here’s mine:

So the dollar sign ($) is a little technique that people use to indicate this line of code is supposed to be run on the command line.

Minor trouble

The trouble with that is that I (and I’ll wager most other people too) will copy and paste commands like that from that documentation.

If I run that command above in my terminal exactly as it’s written…

…it doesn’t work. $ is not a command. How do you deal with this? You just have to know. You just need to have had this problem before and somehow learned that what the documentation is actually telling you is to run the command brew install somepackage (without the dollar sign) at the command line.

I say minor trouble as there are all sorts of stuff like this in every job in the world. When I put something like font-size: 2.2rem in a blog post, I don’t also say, “Put that declaration in a ruleset in a CSS file that your HTML file links to.” You just have to know those things.

Fixing it with CSS

The fact that it’s only minor trouble and that tech is laden with things you just need to know doesn’t mean that we can’t try to fix this and do a little better.

The idea for this post came from this tweet that got way more likes than I thought it would:

To expand on that, I’d expect you’re probably marking up your docs something like:

<p>Install package like:</p>

<pre><code class="command">brew install package</code></pre>

Now you can insert the $ as a pseudo-element rather than as actual text:

code.command::before {
  content: "$ ";
}

Now you aren’t just saving yourself a character in the HTML; the $ cannot be selected, because that’s how pseudo-elements work. So now you’re a bit better in the UX department. Even if the user double-clicks the line or tries to select all of it, they won’t get the $ screwing up the copy-paste.

Hopefully they aren’t equally frustrated by not being able to copy the $. 😬

So, anyway, something like this incredible design by me:

Fixing it with text

A lot of documentation for code-things is on a public git repo place like GitHub. You don’t have access to CSS to style what GitHub looks like, so while there is trickery available, you can’t just plop a line of CSS in there to style things.

We might have to (gasp) use our words:

<p>
  Install the package by entering this 
  command at your terminal:
</p>

<kbd class="command">brew install package</kbd>

Other thoughts

  • You probably wouldn’t bother syntax highlighting it at all. I don’t think I’ve ever seen a terminal that syntax highlights commands as you enter them.
  • Eric Meyer suggested the <kbd> element which is the Keyboard Input element. I like that. I’ve long used <code> but I think <kbd> is more appropriate here.
  • Tim Chase suggested using a <span> and including the prompt in the HTML so you can style it uniquely if you want, including making it not selectable with user-select: none;.
  • Justin Searls has a dotfiles trick where if you accidentally copy/paste the $, it just ignores it and runs everything after it.
  • Jackson Bates suggests being very careful about what you copy and paste to a terminal.
  • I learned that $ is also a way of denoting “unprivileged” commands, while # is for root commands. Part of the reasoning is that if you copy-paste a root command with its # prefix, it won’t run, because the shell treats the line as a comment.




from CSS-Tricks https://ift.tt/3usgW4u
via IFTTT

How to Show Images on Click

Most images on the web are superfluous. If I might be a jerk for a bit, 99% of them aren’t even that helpful at all (although there are rare exceptions). That’s because images don’t often complement the text they’re supposed to support and instead hurt users, taking forever to load and blowing up data caps like some sort of performance tax.

Thankfully, this is mostly a design problem today because making images performant and more user-friendly is so much easier than it once was. We have better image formats like WebP (and soon, perhaps, JPEG XL). We have the magic of responsive images of course. And there are tons of great tools out there, like ImageOptim, as well as resources such as Addy Osmani’s new book.

Although perhaps my favorite way to improve image performance is with lazy loading:

<img src="image.webp" alt="Image description" loading="lazy">

This image will only load when a user scrolls down the page so it’s visible to the user — which removes it from the initial page load and that’s just great! Making that initial load of a webpage lightning fast is a big deal.

But maybe there are images that should never load at all. Perhaps there are situations where it’d be better if a person could opt-into seeing it. Here’s one example: take the text-only version of NPR and click around for a bit. Isn’t it… just so great?! It’s readable! There’s no junk all over the place, it respects me as a user and — sweet heavens — is it fast.

Did I just show you an image in a blog post that insults the very concept of images? Yep! Sue me.

So! What if we could show images on a website but only once they are clicked or tapped? Wouldn’t it be neat if we could show a placeholder and swap it out for the real image on click? Something like this:

Well, I had two thoughts here as to how to build this chap (the golden rule is that there’s never one way to build anything on the web).

Method #1: Using <img> without a src attribute

We can remove the src attribute of an <img> tag to hide an image. We could then put the image file in an attribute, like data-src or something, just like this:

<img data-src="image.jpg" src="" alt="Photograph of hot air balloons by Musab Al Rawahi. 144kb">

By default, most browsers will show a broken image icon that you’re probably familiar with:

Okay, so it’s sort of accessible. I guess? You can see the alt tag rendered on screen automatically, but with a light dash of JavaScript, we can then swap out the src with that attribute:

document.querySelectorAll("img").forEach((item) => {
  item.addEventListener("click", (event) => {
    const image = event.target.getAttribute("data-src");
    event.target.setAttribute("src", image);
  });
});

Now we can add some styles and ugh, oh no:

Ugh. In some browsers there’ll be a tiny broken image icon in the bottom when the image hasn’t loaded. The problem here is that browsers don’t give you a way to remove the broken image icon with CSS (and we probably shouldn’t be allowed to anyway). It’s a bit annoying to style the alt text, too. But if we remove the alt attribute altogether, then the broken image icon is gone, although this makes the <img> unusable without JavaScript. So removing that alt text is a no-go.

As I said: Ugh. I don’t think there’s a way to make this method work (although please prove me wrong!).

Method #2: Using links to create an image

The other option we have is to start with the humble hyperlink, like this:

<a class="load-image" href="image.jpg">Photograph of hot air balloons by Musab Al Rawahi. 144kb</a>

Which, yep, nothing smart happening yet — this will just render a link to an image:

That works accessibility-wise, right? If we don’t have any JavaScript, then all we have is just a link that folks can optionally click. Performance-wise, it can’t get much faster than plain text!

But from here, we can reach for JavaScript to stop the link from loading on click, grab the href attribute within that link, create an image element and, finally, throw that image on the page and remove the old link once we’re done:

document.querySelectorAll(".load-image").forEach((item) => {
  item.addEventListener("click", (event) => {
    const href = event.target.getAttribute("href");
    const newImage = document.createElement("img");
    event.preventDefault();
    newImage.setAttribute("src", href);
    document.body.insertBefore(newImage, event.target);
    event.target.remove();
  });
});

We could probably style this placeholder link to make it look a bit nicer than what I have below. But this is just an example. Go ahead and click the link to load the image again:

And there you have it! It isn’t groundbreaking or anything, and I’m sure someone’s done this before at some point or another. But if we wanted to be extremely radical about performance beyond the first paint and initial load, then I think this is an okay-ish solution. If we’re making a text-only website then I think this is definitely the way to go.

Perhaps we could also make a web component out of this, or even detect if someone has prefers-reduced-data and then only load images if someone has enough data or something. What do you think?
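As a rough sketch of that last idea, assuming the experimental prefers-reduced-data media feature (which has very limited browser support at the time of writing), something like this could decide up front whether to swap the links for real images:

// Only auto-load images when the user hasn't asked for reduced data usage.
// prefers-reduced-data is still experimental, so most browsers report no preference.
const reduceData = window.matchMedia('(prefers-reduced-data: reduce)').matches;

if (!reduceData) {
  document.querySelectorAll('.load-image').forEach((link) => {
    const img = document.createElement('img');
    img.src = link.getAttribute('href');
    img.alt = link.textContent;
    link.replaceWith(img);
  });
}
// Otherwise, keep the click-to-load behavior from earlier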





from CSS-Tricks https://ift.tt/2RPEG5h
via IFTTT

Rethinking Postgres in a Post-Server World

Serverless architectures have brought engineering teams a great number of benefits. We get simpler deployments, automatic and infinite scale, better concurrency, and a stateless API surface. It’s hard to imagine going back to the world of managed services, broken local environments, and SSHing into servers. When I started doing web development, moving from servers in a closet to rackspace was a revolution.

It’s not just hosting and how we deploy applications that have changed under this new paradigm. These advantages of serverless have presented challenges to the traditional MVC architecture that has been so ubiquitous. Thinking back to those early days, frameworks like Zend, Laravel, Django, and Rails were incredible productivity boosters. They not only influenced the way we built applications, but also the way we think about solving problems with the web. These were your “majestic monoliths” and they solidified the MVC pattern as the defacto standard for most of the web applications we use today.

In many ways, the rise of microservices and with it the idea of hexagonal architectures (aka ports and adaptors) led naturally to this new serverless world. It started as creating and hosting standalone APIs organized by a shared context that were still backed by the classic frameworks we already knew and loved.

The popularity of NodeJS led to the express framework where we now had a less rigid microframework enabling us to more flexibly organize our code. The final metamorphosis of this pattern is individual pieces of business logic that can be executed on demand in the cloud. We no longer manage machines or even multiple API servers. Each piece of business logic only exists when it is needed, for as long as it’s needed and no longer. They are lightweight, and you only pay for the individual parts of your application that get used.

Today, we hardly realize there is a server at all–even the terminology, serverless, is a misnomer designed to highlight just how far we’ve come from the days of setting up XAMPP or VirtualBox and Vagrant. The benefits are clear: the hours saved, the headaches avoided, and the freedom to just solve business problems with code bring building software closer than ever to the simple act of writing prose.

The Problem

The classic MVC frameworks codified not only the pattern of working in three distinct tiers (data, application, and presentation) but also the technology for each. You were able to choose some options at each layer, for instance Postgres or MySQL as the data layer, but the general idea is these decisions are made for you. You implicitly adopt the idea of convention over configuration.

Postgres as a data layer solution makes a lot of sense. It’s robust, fast, supports ACID transactions, and has over thirty years of development behind it. It is also open-source, can be hosted almost anywhere, and is likely to be around for another thirty years. You could do much worse than stake your company’s future on Postgres. Add to that all the work put into integrating it into these equally battle-tested frameworks and the story for choosing Postgres becomes very strong.

However, when we enter a serverless context, this type of architecture presents a number of challenges particularly when it comes to handling our data.

Common issues include:

  1. Maintaining stateful connections: when each user is a new connection to Postgres this can max out the number of connections Postgres can handle quickly.
  2. Provisioned scale: with Postgres we must be sure to provision the right size database for our application ahead of time, which is not ideal when our application layer can automatically scale to any workload.
  3. Traditional security model: this model does not allow for any client-side use and is vulnerable to SQL injection attacks.
  4. Data centralization: while our application may be deployed globally, this is of little use when our database is stuck in a single location potentially thousands of miles from where the data needs to be.
  5. High operational overhead: serverless promises to free us from complexity and remove barriers to solving business problems. With Postgres we return to needing to manage a service ourselves, dealing with sharding, scaling, distribution, and backups.

Traditional systems like Postgres were never designed for this purpose. To start, Postgres operates on the assumption of a stateful connection. What this means is that Postgres will hold open a connection with a server in order to optimize the response time. In a traditional monolithic application, if your server had to open a new connection every single time it requested data, this would be quite inefficient. The actual network request would, in many cases, be the primary bottleneck. By keeping this connection cached, Postgres removes this bottleneck. As you scale your application you will likely have multiple machines running, and a single Postgres database can handle many such connections, but this number isn’t infinite. In fact, in many cases, you have to set this number at the time of provisioning the database.

In a serverless context, each request is effectively a new machine and a new connection to the database. As Postgres attempts to hold open these connections we can quickly run up against our connection limit and the memory limits of the machine. This also introduces another issue with the traditional Postgres use case, which is provisioned resources. 
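To make that concrete, here is a minimal sketch of a Node serverless handler using the pg client (the handler shape and environment variable name are hypothetical). Every invocation opens and tears down its own connection:

const { Client } = require('pg');

exports.handler = async () => {
  // A brand-new TCP connection on every invocation; this is the pattern
  // that quickly exhausts Postgres connection limits under load.
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();

  const { rows } = await client.query('SELECT now()');

  await client.end(); // torn down immediately; nothing is reused
  return { statusCode: 200, body: JSON.stringify(rows[0]) };
};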

With Postgres we have to decide the size of the database, the capacity of the machine it runs on, where that machine is located, and the connection limit at the time of creation. This puts us in a situation where our application can scale automatically but we must watch our database closely and scale it ourselves. This can be even trickier when we are dealing with spikes in traffic that are not consistent in both time and location. Ultimately by moving to serverless we have reduced the operational overhead of our application layer, but created some increased operational overhead in our database. Would it not be better if both our application and our data layer could scale together without us having to manage it?

The complexity required to make traditional systems like Postgres work in a serverless environment can often be enough to abandon the architecture all together. Serverless requires on-demand, stateless execution of business logic. This allows us to create lighter, more scalable programs but does not allow us to preserve things like network connections and can be slowed down by additional dependencies like ORMs and middleware. 

The Ideal Solution

It’s time we begin thinking about a new type of database, one more in line with the spirit of serverless and one that embraces iterative development and more unified tooling. We want this database to have the same automatic, on-demand scale as the rest of our application, as well as the global distribution that is a hallmark of the serverless promise. The ideal solution should:

  1. Support stateless connections with no limits.
  2. Auto-scale, in both the size of the machine and the size of the database itself.
  3. Be securely accessible from both the client and the server to support both serverless APIs and Jamstack use cases.
  4. Be globally distributed so data is always closest to where it is needed.
  5. Be free of operational overhead so we don’t add complexity managing things like sharding, distribution, and backups.

If we are truly to embrace the serverless architecture, we need to ensure that our database scales along with the rest of the application. In this case, we have a variety of solutions some of which involve sticking with Postgres. Amazon Aurora is one example of a Postgres cloud solution that gives us automatic scalability and backups, and gives us some global distribution of data. However, Amazon Aurora is hardly easy to set up and doesn’t free us from all operational overhead. We also are not able to securely access our data from the client without building an accompanying API as it still follows the traditional Postgres security model.

Another option here is a service like Hasura, which allows us to leverage Postgres but access our data by way of a GraphQL API. This solves our security issues when accessing data from the client and gets us much closer to the ease of use we have with many of our other serverless services. However, we are left to manage our database ourselves and this merely adds another layer on top of our database to manage the security. While the Hasura application layer is distributed, our database is not, so we don’t get true global distribution with this system.

I think at this point we should turn toward some additional solutions that really hit all the points above. When looking at solutions outside of Postgres we have to add two additional requirements that put the solutions on par with the power of Postgres:

  1. Support for robust, distributed, ACID transactions.
  2. Support for relational modeling such that we can easily perform join operations on normalized data.

When we typically step outside of relational database systems into the world of schemaless solutions, ACID transactions and relational, normalized data are often things we sacrifice. So we want to make sure that when we optimize for serverless we are not losing the features that have made Postgres such a strong contender for so long.

Azure’s CosmosDB supports a variety of databases (both SQL and NoSQL) under the hood. CosmosDB also provides us with libraries that can work on both the client and server freeing us from an additional dependency like Hasura. We get some global distribution as well as automatic scale. However, we are still left with a lot of choices to make and are not free entirely from database management. We still have to manage our database size effectively and choose from many database options that all have their pros and cons.

What we really want is a fully managed solution where the operational overhead of choosing database size and the type of database can be abstracted away. In a general sense having to research many types of databases and estimate scale would be things that matter a lot less if we have all of the features we need. Fauna is a solution where we don’t have to worry about the size of the database nor do we have to select the type of database under the hood. We get the support of ACID transactions, global distribution, and no data loss without having to figure out the best underlying technology to achieve that. We also can freely access our database on the client or the server with full support for serverless functions. This allows us to flexibly create different types of applications in a variety of architectures such as JAMstack clients, serverless APIs, traditional long-running backends, or combinations of these styles.

When it comes to schemaless databases, we gain flexibility but are forced to think differently about our data model to most efficiently query our data. When moving away from Postgres this is often a subtle but large point of friction. With Fauna, we have to move into a schemaless design as you cannot opt into another database type. However, Fauna makes use of a unique document-relational database model. This allows us to utilize relational database knowledge and principles when modeling our data into collections and indexes. This is why I think it’s worth considering for people used to Postgres as the mental overhead is not the same as with other NoSql options.
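For a feel of what that looks like in practice, here is a rough sketch using Fauna’s JavaScript driver (the index name and secret are illustrative, not taken from this article):

const faunadb = require('faunadb');
const q = faunadb.query;

// The same client works from a browser, a serverless function, or a
// long-running server; there is no connection pool to manage.
const client = new faunadb.Client({ secret: process.env.FAUNA_SECRET });

// Look up a document through an index, much like querying by a column
client
  .query(q.Get(q.Match(q.Index('movies_by_slug'), 'some-movie')))
  .then((movie) => console.log(movie.data));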

Conclusion

Systems like Postgres have been powerful allies for building applications for over thirty years. The rise of agile and iterative development cycles led us to the serverless revolution before us today. We are able to build increasingly more complex applications with less operational overhead than ever before. That power requires us to think about databases differently and demand a similar level of flexibility and ease of management. We want to preserve the best qualities of Postgres, like ACID transactions, but ditch the more unsavory aspects of working with the database, like connection pooling, provisioning resources, security, distribution, and managing scale, availability, and reliability.

Solutions such as Amazon’s Aurora Serverless v2 bring Postgres into this new serverless world. There are also solutions like Hasura that sit on top of this to further fulfill the promise of serverless. We also have solutions like Cosmos DB and Fauna that are not based in Postgres but are built for serverless while supporting important Postgres functionality.

While Cosmos DB gives us a lot of flexibility in terms of what database we use, it still leaves us with many decisions and is not completely free of operational overhead. Fauna has made it so you don’t have to compromise on ACID transactions, relational modeling or normalized data — while still alleviating all the operational overhead of database management. Fauna is a complete rethinking of a database that is truly free of operational overhead. By combining the best of the past with the needs of the future Fauna has built a solution that behaves more like a data API and feels natural in a serverless future.


Follow Michael Rispoli on Twitter





from CSS-Tricks https://ift.tt/3foPBMc
via IFTTT

Wednesday 26 May 2021

The MozCon Virtual 2021 Final Agenda

MozCon Virtual 2021 is right around the corner! Read on to see what this year has in store, and don't forget to purchase your ticket!



from The Moz Blog https://ift.tt/3yF2olo
via IFTTT

Awesome Standalone (Web Components)

In his last An Event Apart talk, Dave made a point that it’s really only just about right now that Web Components are becoming a practical choice for production web development. For example, it has only been about a year since Edge went Chromium. Before that, Edge didn’t support any Web Component stuff. If you were shipping them long ago, you were doing so with fairly big polyfills, or in a progressive-enhancement style, where if they failed, they did so gracefully or in a controlled environment, say, an intranet where everyone has the same computer (or in something like Electron).

In my opinion, Web Components still have a ways to go to be compelling to most developers, but they are getting there. One thing that I think will push their adoption along is the incredibly easy DX of pre-built components thanks to, in part, ES Modules and how easy it is to import JavaScript.

I’ve mentioned this one before: look how silly-easy it is to use Nolan Lawson’s emoji picker:
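The gist of it, assuming Nolan’s emoji-picker-element package and its emoji-click event (the CDN URL below is illustrative), is roughly this:

<script type="module">
  // Registers the <emoji-picker> custom element
  import 'https://cdn.jsdelivr.net/npm/emoji-picker-element@1/index.js';
</script>

<emoji-picker></emoji-picker>

<script>
  // event.detail is a JSON-friendly object describing the selected emoji
  document.querySelector('emoji-picker')
    .addEventListener('emoji-click', (event) => console.log(event.detail));
</script>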

That’s one line of JavaScript and one line of HTML to get it working, and another one line of JavaScript to wire it up and return a JSON response of a selection.

Compelling, indeed. DX, you might call it.

Web Components like that aren’t alone, hence the title of this post. Dave put together a list of Awesome Standalones. That is, Web Components that aren’t a part of some bigger, more complex system¹, but are just little drop-in doodads that are useful on their own, just like the emoji picker. Dave’s repo lists about 20 of them.

Take this one from GitHub (the company), a copy-to-clipboard Web Component:
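Its usage, assuming GitHub’s @github/clipboard-copy-element package, is roughly this:

<script type="module">
  // Registers the <clipboard-copy> custom element
  // (the bare specifier assumes a bundler; a CDN build works too)
  import '@github/clipboard-copy-element';
</script>

<!-- Copies the value attribute to the clipboard when clicked -->
<clipboard-copy value="https://css-tricks.com">Copy URL</clipboard-copy>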

Pretty sweet for something that comes across the wire at ~3KB. The production story is whatever you want it to be. Use it off the CDN. Bundle it up with your stuff. Self-host it or leave it as a one-off. Whatever. It’s darn easy to use. In the case of this standalone, there isn’t even any Shadow DOM to deal with.

No shade on Shadow DOM, that’s perhaps the most useful feature of Web Components (and cannot be replicated by a library since it’s a native browser feature), but the options for styling it aren’t my favorite. And if you used three different standalone components with three different opinions on how to style through the Shadow DOM, that’s going to get annoying.

What I picture is developers dipping their toes into stuff like this, seeing the benefits, and using more and more of them in what they are building, and even building their own. Building a design system from Web Components seems like a real sweet spot to me, like many big names² already do.

The dream is for people to actually consolidate common UI patterns. Like, even if we never get native HTML “tabs” it’s possible that a Web Component could provide them, get the UI, UX, and accessibility perfect, yet leave them style-able such that literally any website could use them. But first, that needs to exist.


  1. That’s a cool way to use Web Components, too, but easy gets attention, and that matters.
  2. People always mention Lightning Design System as a Web Components-based design system, but I’m not seeing it. For example, this accordion looks like semantic HTML with class names, not Web Components. What am I missing?




from CSS-Tricks https://ift.tt/2RNywCH
via IFTTT

Links on Web Components

  • How we use Web Components at GitHub — Kristján Oddsson talks about how GitHub is using web components. I remember they were very early adopters, and it says here they released a <relative-time> component in 2014! Now they’ve got a whole bunch of open source components. So easy to use! Awesome! I wanted to poke around their HTML and see them in action, so I View’d Source and used the RegEx (<\w+-[\w|-|]+.*>) (thanks, Andrew) to look for them. Seven on the logged-in homepage, so they ain’t blowin’ smoke.
  • Using web components to encapsulate CSS and resolve design system conflicts — Tyler Williams says the encapsulation (Shadow DOM) of web components meant avoiding styling conflicts with an older CSS system. He also proves that companies that make sites for Git repos love web components.
  • Container Queries in Web Components — Max Böck shares that the :host of a web component can be the @container which is extremely great and is absolutely how all web components should be written.
  • Faster Integration with Web Components — Jason Grigsby does client work and says that web components don’t make integration fast or easy, they make integration fast and easy.
  • FicusJS — I remember being told once that native web components weren’t really meant to be used “raw” but meant to be low-level such that tooling could be built on top of them. We see that in competition amongst renderers, like lit-html vs htm. Then, in layers of tooling on top of that, like Ficus here, that adds a bunch of fancy stuff like state, methods, and events.
  • Shadow DOM and Its Effect on the Unofficial Styling API — Jim Nielsen expands on the idea I poked at on ShopTalk that the DOM is the styling API. It’s self-documenting, in a way. “As an author, you have to spend time and effort thinking about, architecting, and then documenting a styling API for your component. And as a consumer, you have to read, understand, and implement that API.” Yes. That’s why, to me, it feels like a good idea to have an option to “reach into the Shadow DOM from outside CSS” in an unencumbered way.
  • Awesome Standalones — I think Dave’s list here is exactly the kind of thing that gets developers feet wet and thinking about web components as actually useful.

From two years ago, but it still holds true:





from CSS-Tricks https://ift.tt/3yIE41M
via IFTTT

A Thorough Analysis of CSS-in-JS

Wondering what’s even more challenging than choosing a JavaScript framework? You guessed it: choosing a CSS-in-JS solution. Why? Because there are more than 50 libraries out there, each of them offering a unique set of features.

We tested 10 different libraries, which are listed here in no particular order: Styled JSX, styled-components, Emotion, Treat, TypeStyle, Fela, Stitches, JSS, Goober, and Compiled. We found that, although each library provides a diverse set of features, many of those features are actually commonly shared between most other libraries.

So instead of reviewing each library individually, we’ll analyse the features that stand out the most. This will help us to better understand which one fits best for a specific use case.

Note: We assume that if you’re here, you’re already familiar with CSS-in-JS. If you’re looking for a more elementary post, you can check out “An Introduction to CSS-in-JS.”

Common CSS-in-JS features

Most actively maintained libraries that tackle CSS-in-JS support all the following features, so we can consider them de facto.

Scoped CSS

All CSS-in-JS libraries generate unique CSS class names, a technique pioneered by CSS modules. All styles are scoped to their respective component, providing encapsulation without affecting any styling defined outside the component.

With this feature built-in, we never have to worry about CSS class name collisions, specificity wars, or wasted time spent coming up with unique class names across the entire codebase.

This feature is invaluable for component-based development.
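To make it concrete, here is a rough sketch using Emotion’s framework-agnostic package (the generated class name shown in the comment is illustrative):

import { css } from '@emotion/css';

// The library hashes the style definition into a unique class name,
// something like "css-x7kp2", so it can never collide with styles
// defined elsewhere in the codebase.
const heading = css`
  font-size: 2em;
  color: hotpink;
`;

document.querySelector('h1').className = heading;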

SSR (Server-Side Rendering)

When considering Single Page Apps (SPAs) — where the HTTP server only delivers an initial empty HTML page and all rendering is performed in the browser — Server-Side Rendering (SSR) might not be very useful. However, any website or application that needs to be parsed and indexed by search engines must render its pages on the server, and styles need to be generated on the server as well.

The same principle applies to Static Site Generators (SSG), where pages along with any CSS code are pre-generated as static HTML files at build time.

The good news is that all libraries we’ve tested support SSR, making them eligible for basically any type of project.
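The exact API differs per library, but the shape is usually similar. As a sketch, using Emotion’s server package with React:

import { renderToString } from 'react-dom/server';
import { renderStylesToString } from '@emotion/server';

// <App /> is whatever root component you're rendering (assumes a JSX build step).
// The generated <style> tags are inlined next to the markup that uses them,
// so the HTML the server sends already contains the critical CSS.
const html = renderStylesToString(renderToString(<App />));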

Automatic vendor prefixes

Because of the complex CSS standardization process, it might take years for any new CSS feature to become available in all popular browsers. One approach aimed at providing early access to experimental features is to ship non-standard CSS syntax under a vendor prefix:

/* WebKit browsers: Chrome, Safari, most iOS browsers, etc */
-webkit-transition: all 1s ease;

/* Firefox */
-moz-transition: all 1s ease;

/* Internet Explorer and Microsoft Edge */
-ms-transition: all 1s ease;

/* old pre-WebKit versions of Opera */
-o-transition: all 1s ease;

/* standard */
transition: all 1s ease; 

However, it turns out that vendor prefixes are problematic and the CSS Working Group intends to stop using them in the future. If we want to fully support older browsers that don’t implement the standard specification, we’ll need to know which features require a vendor prefix.

Fortunately, there are tools that allow us to use the standard syntax in our source code, generating the required vendor prefixed CSS properties automatically. All CSS-in-JS libraries also provide this feature out-of-the-box.
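In other words, we author only the standard property and let the library expand it. Roughly, using the same generic css API as the examples later in this post:

// We write just the standard declarations...
const box = css`
  transition: all 1s ease;
  user-select: none;
`;

// ...and the generated stylesheet includes the vendor-prefixed variants
// (e.g. -webkit-transition, -ms-user-select) for the browsers being targeted.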

No inline styles

There are some CSS-in-JS libraries, like Radium or Glamor, that output all style definitions as inline styles. This technique has a huge limitation: it's impossible to define pseudo-classes, pseudo-elements, or media queries using inline styles. So, these libraries had to hack these features by adding DOM event listeners and triggering style updates from JavaScript, essentially reinventing native CSS features like :hover, :focus, and many more.

It's also generally accepted that inline styles are less performant than class names, and using them as the primary approach for styling components is usually discouraged.

All current CSS-in-JS libraries have stepped away from using inline styles, adopting CSS class names to apply style definitions.

Full CSS support

A consequence of using CSS classes instead of inline styles is that there’s no limitation regarding what CSS properties we can and can’t use. During our analysis we were specifically interested in:

  • pseudo-classes and pseudo-elements;
  • media queries;
  • keyframe animations.

All the libraries we’ve analyzed offer full support for all CSS properties.
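
For example, with a generic css API and object styles, pseudo-classes and media queries can typically be nested inside the style definition (the exact nesting syntax differs slightly from library to library, and keyframes usually have a dedicated helper):

// consider "css" being the API of a generic CSS-in-JS library
const card = css({
  padding: "1em",
  // pseudo-class, scoped to this element
  "&:hover": {
    backgroundColor: "lightgray",
  },
  // media query
  "@media (min-width: 600px)": {
    padding: "2em",
  },
});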

Differentiating features

This is where it gets even more interesting. Almost every library offers a unique set of features that can strongly influence our decision when choosing the appropriate solution for a particular project. Some libraries pioneered a specific feature, while others chose to borrow or even improve certain features.

React-specific or framework-agnostic?

It’s not a secret that CSS-in-JS is more prevalent within the React ecosystem. That’s why some libraries are specifically built for React: Styled JSX, styled-components, and Stitches.

However, there are plenty of libraries that are framework-agnostic, making them applicable to any project: Emotion, Treat, TypeStyle, Fela, JSS or Goober.

If we need to support vanilla JavaScript code or frameworks other than React, the decision is simple: we should choose a framework-agnostic library. But when dealing with a React application, we have a much wider range of options which ultimately makes the decision more difficult. So let’s explore other criteria.

Styles/Component co-location

The ability to define styles along with their components is a very convenient feature, removing the need to switch back-and-forth between two different files: the .css or .less/.scss file containing the styles and the component file containing the markup and behavior.

React Native StyleSheets, Vue.js SFCs, or Angular Components support co-location of styles by default, which proves to be a real benefit during both the development and the maintenance phases. We always have the option to extract the styles into a separate file, in case we feel that they’re obscuring the rest of the code.

Almost all CSS-in-JS libraries support co-location of styles. The only exception we encountered was Treat, which requires us to define the styles in a separate .treat.ts file, similarly to how CSS Modules work.
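
For reference, here's a minimal sketch of Treat's approach, with the styles defined in their own file and exported as references that the component can apply (the file name is illustrative):

// Heading.treat.ts — styles live in a dedicated file, similar to CSS Modules
import { style } from "treat";

export const heading = style({
  color: "blue",
});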

Styles definition syntax

There are two different methods we can use to define our styles. Some libraries support only one method, while others are quite flexible and support both of them.

Tagged Templates syntax

The Tagged Templates syntax allows us to define styles as a string of plain CSS code inside a standard ES Template Literal:

// consider "css" being the API of a generic CSS-in-JS library
const heading = css`
  font-size: 2em;
  color: ${myTheme.color};
`;

We can see that:

  • CSS properties are written in kebab case just like regular CSS;
  • JavaScript values can be interpolated;
  • we can easily migrate existing CSS code without rewriting it.

Some things to keep in mind:

  • In order to get syntax highlighting and code suggestions, an additional editor plugin is required; this plugin is usually available for popular editors like VSCode, WebStorm, and others.
  • Since the final code must eventually be executed in JavaScript, the style definitions need to be parsed and converted to JavaScript code. This can happen either at runtime or at build time, incurring a small overhead in bundle size or computation.

Object Styles syntax

The Object Styles syntax allows us to define styles as regular JavaScript Objects:

// consider "css" being the API of a generic CSS-in-JS library
const heading = css({
  fontSize: "2em",
  color: myTheme.color,
});

We can see that:

  • CSS properties are written in camelCase and string values must be wrapped in quotes;
  • JavaScript values can be referenced as expected;
  • it doesn't feel like writing CSS, since we define styles using a slightly different syntax, but with the same property names and values available in CSS (don't feel intimidated by this, you'll get used to it in no time);
  • migrating existing CSS would require a rewrite in this new syntax.

Some things to keep in mind:

  • Syntax highlighting comes out-of-the-box, because we’re actually writing JavaScript code.
  • To get code completion, the library must ship CSS type definitions, most of them extending the popular CSSType package.
  • Since the styles are already written in JavaScript, there’s no additional parsing or conversion required.

Library              Tagged template    Object styles
styled-components    ✅                  ✅
Emotion              ✅                  ✅
Goober               ✅                  ✅
Compiled             ✅                  ✅
Fela                 🟠                  ✅
JSS                  🟠                  ✅
Treat                ❌                  ✅
TypeStyle            ❌                  ✅
Stitches             ❌                  ✅
Styled JSX           ✅                  ❌

✅  Full support          🟠  Requires plugin          ❌  Unsupported

Styles applying method

Now that we know what options are available for style definition, let’s have a look at how to apply them to our components and elements.

Using a class attribute / className prop

The easiest and most intuitive way to apply the styles is to simply attach them as class names. Libraries that support this approach provide an API that returns a string containing the generated unique class name(s):

// consider "css" being the API of a generic CSS-in-JS library
const heading_style = css({
  color: "blue"
});

Next, we can take the heading_style, which contains a string of generated CSS class names, and apply it to our HTML element:

// Vanilla DOM usage
const heading = `<h1 class="${heading_style}">Title</h1>`;

// React-specific JSX usage
function Heading() {
  return <h1 className={heading_style}>Title</h1>;
}

As we can see, this method pretty much resembles traditional styling: first we define the styles, then we attach them where we need them. The learning curve is low for anyone who has written CSS before.

Using a <Styled /> component

Another popular method, first introduced by the styled-components library (and commonly named after it), takes a different approach.

// consider "styled" being the API for a generic CSS-in-JS library
const Heading = styled("h1")({
  color: "blue"
});

Instead of defining the styles separately and attaching them to existing components or HTML elements, we use a dedicated API, specifying what type of element we want to create and the styles we want to attach to it.

The API returns a new component, with the class name(s) already applied, that we can render like any other component in our application. This essentially removes the explicit mapping between the component and its styles.
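
Rendering it is then no different from rendering any other component (a minimal sketch):

// React-specific JSX usage
function Page() {
  return <Heading>Title</Heading>;
}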

Using the css prop

A newer method, introduced by Emotion, allows us to pass the styles to a special prop, usually named css. This API is available only for JSX-based syntax.

// React-specific JSX syntax
function Heading() {
  return <h1 css={{ color: "blue" }}>Title</h1>;
}

This approach has a certain ergonomic benefit, because we don’t have to import and use any special API from the library itself. We can simply pass the styles to this css prop, similarly to how we would use inline styles.

Note that this custom css prop is not a standard HTML attribute, and needs to be enabled and supported via a separate Babel plugin provided by the library.
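
With Emotion, for instance, one way to enable it is a JSX pragma comment at the top of the file, instead of a project-wide Babel configuration (a minimal sketch):

/** @jsxImportSource @emotion/react */

// React-specific JSX syntax
function Heading() {
  return <h1 css={{ color: "blue" }}>Title</h1>;
}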

Library              className    <Styled />    css prop
styled-components    ❌            ✅             ✅
Emotion              ✅            ✅             ✅
Goober               ✅            ✅             🟠 2
Compiled             🟠 1          ✅             ✅
Fela                 ✅            ❌             ❌
JSS                  ✅            🟠 2           ❌
Treat                ✅            ❌             ❌
TypeStyle            ✅            ❌             ❌
Stitches             ✅            ✅             🟠 1
Styled JSX           ❌            ❌             ❌

✅  Full support          🟠 1  Limited support          🟠 2  Requires plugin          ❌  Unsupported

Styles output

There are two mutually exclusive methods to generate and ship styles to the browser. Both methods have benefits and downsides, so let’s analyze them in detail.

<style>-injected DOM styles

Most CSS-in-JS libraries inject styles into the DOM at runtime, using either one or more <style> tags, or using the CSSStyleSheet API to manage styles directly within the CSSOM. During SSR, styles are always appended as a <style> tag inside the <head> of the rendered HTML page.
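
For illustration, the server-rendered output ends up looking roughly like this (the class name and the attribute on the style tag are made up; each library uses its own markers):

<!-- simplified SSR output with runtime-injected styles -->
<head>
  <style data-css-in-js>
    ._heading_x1y2z3 { color: blue; }
  </style>
</head>
<body>
  <h1 class="_heading_x1y2z3">Title</h1>
</body>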

There are a few key advantages and preferred use cases for this approach:

  1. Inlining the styles during SSR improves page loading performance metrics such as FCP (First Contentful Paint), because rendering is not blocked by fetching a separate .css file from the server.
  2. It provides out-of-the-box critical CSS extraction during SSR by inlining only the styles required to render the initial HTML page. It also removes any dynamic styles, thus further improving loading time by downloading less code.
  3. Dynamic styling is usually easier to implement, as this approach appears to be more suited for highly interactive user interfaces and Single-Page Applications (SPA), where most components are client-side rendered.

The drawbacks are generally related to the total bundle size:

  • an additional runtime library is required for handling dynamic styling in the browser;
  • the inlined SSR styles won’t be cached out-of-the-box and they will need to be shipped to the browser upon each request since they’re part of the .html file rendered by the server;
  • the SSR styles that are inlined in the .html page will be sent to the browser again as JavaScript resources during the rehydration process.

Static .css file extraction

There’s a very small number of libraries that take a totally different approach. Instead of injecting the styles in the DOM, they generate static .css files. From a loading performance point of view, we get the same advantages and drawbacks that come with writing plain CSS files.

  1. The total amount of shipped code is much smaller, since there is no need for additional runtime code or rehydration overhead.
  2. Static .css files benefit from out-of-the-box caching inside the browser, so subsequent requests to the same page won’t fetch the styles again.
  3. This approach seems to be more appealing when dealing with SSR pages or statically generated (SSG) pages, since they benefit from default caching mechanisms.

However, there are some important drawbacks we need to take note of:

  • The first visit to a page, with an empty cache, will usually have a longer FCP when using this method compared to the one mentioned previously; so deciding if we want to optimize for first-time users or returning visitors could play a crucial role when choosing this approach.
  • All dynamic styles that can be used on the page will be included in the pre-generated bundle, potentially leading to larger .css resources that need to be loaded up front.

Almost all the libraries that we tested implement the first method, injecting the styles into the DOM. The only tested library which supports static .css file extraction is Treat. There are other libraries that support this feature, like Astroturf, Linaria, and style9, which were not included in our final analysis.

Atomic CSS

Some libraries took optimizations one step further, implementing a technique called atomic CSS-in-JS, inspired by frameworks such as Tachyons or Tailwind.

Instead of generating CSS classes containing all the properties that were defined for a specific element, they generate a unique CSS class for each unique CSS property/value combination.

/* classic, non-atomic CSS class */
._wqdGktJ {
  color: blue;
  display: block;
  padding: 1em 2em;
}

/* atomic CSS classes */
._ktJqdG { color: blue; }
._garIHZ { display: block; }
/* shorthand properties are usually expanded */
._kZbibd { padding-right: 2em; }
._jcgYzk { padding-left: 2em; }
._ekAjen { padding-bottom: 1em; }
._ibmHGN { padding-top: 1em; }

This enables a high degree of reusability because each of these individual CSS classes can be reused anywhere in the code base.

In theory, this works really well for large applications. Why? Because there's a finite number of unique CSS properties needed for an entire application. Thus, the scaling factor is not linear but logarithmic, resulting in less CSS output compared to non-atomic CSS.

But there is a catch: individual class names must be applied to each element that requires them, resulting in slightly larger HTML files:

<!-- with classic, non-atomic CSS classes -->
<h1 class="_wqdGktJ">...</h1>

<!-- with atomic CSS classes -->
<h1 class="_ktJqdG _garIHZ _kZbibd _jcgYzk _ekAjen _ibmHGN">...</h1>

So basically, we’re moving code from CSS to HTML. The resulting difference in size depends on too many aspects for us to draw a definite conclusion, but generally speaking, it should decrease the total amount of bytes shipped to the browser.

Conclusion

CSS-in-JS will dramatically change the way we author CSS, providing many benefits and improving our overall development experience.

However, choosing which library to adopt is not straightforward and all choices come with many technical compromises. In order to identify the library that is best suited for our needs, we have to understand the project requirements and its use cases:

  • Are we using React or not? React applications have a wider range of options, while non-React solutions have to use a framework agnostic library.
  • Are we dealing with a highly interactive application, with client-side rendering? In this case, we're probably not very concerned about the overhead of rehydration, nor do we care that much about extracting static .css files.
  • Are we building a dynamic website with SSR pages? Then extracting static .css files is probably the better option, as it would allow us to benefit from caching.
  • Do we need to migrate existing CSS code? Using a library that supports Tagged Templates would make the migration easier and faster.
  • Do we want to optimize for first-time users or returning visitors? Static .css files offer the best experience for returning visitors by caching the resources, but the first visit requires an additional HTTP request that blocks page rendering.
  • Do we update our styles frequently? Cached .css files are worthless if we update our styles frequently, since every update invalidates the cache.
  • Do we re-use a lot of styles and components? Atomic CSS shines if we reuse a lot of CSS properties in our codebase.

Answering the above questions will help us decide what features to look for when choosing a CSS-in-JS solution, allowing us to make more educated decisions.



Passkeys: What the Heck and Why?

These things called passkeys sure are making the rounds these days. They were a main attraction at W3C TPAC 2022, gained support in Saf...