Ahmad Shadeed nails it again with “Defensive CSS.” The idea is that you should write CSS to be ready for issues caused by dynamic content.
More items than you thought would be there? No problem, the area can expand or scroll. Title too long? No problem, it either wraps or truncates, and won’t bump into anything weird because margins or gaps are set up. Image comes in at an unexpected size? No worries, the layout is designed to make sure the dedicated area is filled with the image, and will handle the sizing and cropping accordingly.
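As a sketch of what that looks like in practice (the class names here are my own, illustrative ones, not from the article):

```css
/* Long titles truncate instead of overflowing their container */
.card__title {
  overflow: hidden;
  white-space: nowrap;
  text-overflow: ellipsis;
}

/* A gap keeps siblings from bumping into each other */
.card__header {
  display: flex;
  gap: 1rem;
}

/* Images fill their dedicated area, whatever size they arrive at */
.card__image {
  width: 100%;
  height: 100%;
  object-fit: cover;
}
```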
There is no such thing as being a good CSS developer and not coding defensively. This is what being a CSS developer is, especially when you factor in progressive enhancement concepts and cross-browser/device unknowns.
I’m actually working on a talk (whew! been a while! kinda feels good!) about just how good the world of building websites has gotten. I plan to cover a wide swath of web tech, on purpose, because I feel like things have gotten good all around. CSS is doing great, but so is nearly everything else involved in making websites, especially if we take care in what we’re doing.
It also strikes me that updates to the web platform and the ecosystem around it are generally additive. If you feel like the web used to be simpler, well, perhaps it was—but it also still is. Whatever you could do then, you can do now, if you want to. Although it’s a fair point that if you’re job searching, the expectations to get hired can involve a wheelbarrow of complicated tech.
This idea of the web getting better feels like it’s in the water a bit…
Simeon Griggs in “There’s never been a better time to build websites” has a totally different take on what is great on the web these days than mine, but I appreciate that. The options around building websites have also widened, meaning there are approaches to things that just feel better to people who think and work in different ways.
While there’s absolutely a learning curve to getting started, once you’ve got momentum, modern web development feels like having rocket boosters. The distance between idea and execution is as short as it’s ever been.
When you’re about to start a new website, what do you think of first? Do you start with a library or framework you know, like React or Vue, or a meta-framework on top of that, like Next or Nuxt? Do you pull up a speedy build tool like Vite, or configure webpack?
There’s a great tweet by Phil Hawksworth that I bookmarked a few years back and still love to this day:

“Your websites start fast until you add too much to make them slow.”

Do you need any framework at all? Could you do what you want natively in the browser? Would doing it without a framework make your site lighter, or actually heavier in the long run as you re-create or re-optimize what others have already done?
I personally love the idea of shipping less code to ultimately ship more value to the browser. Understanding browser APIs and what comes “for free” could actually lead to less reinventing the wheel, and potentially more accessibility as you use the tools provided.
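As a tiny illustration (my example, not the article’s), the built-in Intl API covers a whole class of tasks that once required a formatting library:

```javascript
// Formatting currency with the platform's built-in Intl API
// instead of pulling in a number-formatting library.
const formatUSD = new Intl.NumberFormat('en-US', {
  style: 'currency',
  currency: 'USD',
});

console.log(formatUSD.format(1234.5)); // "$1,234.50"
```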
Instead of pulling in a library for every single task you want to do, try to look under the hood at what they are doing. For example, in a project I was maintaining, I noticed that we had a React component imported that was shipping an entire npm package for a small (less than 10-line) component with some CSS sprinkled on top (that we were overriding with our own design system). When we re-wrote that component from scratch, our bundle size was smaller, we were able to customize it more, and we didn’t have to work around someone else’s decisions.
Now, I’m not saying you shouldn’t use any libraries or frameworks or components out there. Open source exists for a reason! What I am saying is to be discerning about what you bring into your projects. Let the power of the browser work for you, and use less stuff!
High-velocity, online businesses produce multiple digital assets like banners, images, videos, PDFs, etc., to promote their businesses online. For such businesses, Digital Asset Management (DAM) solutions are essential. These solutions help centrally store, manage, organize, search and track digital assets. Having a central repository of assets helps in the faster execution of campaigns and improves cross-functional collaboration.
But, for an organization operating at scale and dealing with millions of digital assets flowing in from multiple sources, certain parts of your asset management workflow cannot be done manually using a UI. For example, how do you upload thousands of images in the correct folders every day? Or integrate an internal CMS to add product SKU IDs as a tag on the product image in the DAM?
This is why leading DAM solutions come with APIs to allow you to integrate them into your existing workflows and get the benefits of a DAM system at scale. Let’s first understand what an API is before getting to some common examples and use cases you can solve with them.
What is an API?
API stands for Application Programming Interface. It allows two pieces of software, or applications, to communicate using a common definition.
An analogy in the physical world is when you order a dish in a restaurant, the chef understands what you ordered and prepares it. Here, the menu with the dish’s name serves as the common language for you (one of the parties) to communicate with the chef (the other party).
Let’s look at an example of an API in an e-commerce application. To check the delivery time to your location, you enter your pin code, and in a second or two, the time appears on your mobile screen. Here, your app (one piece of software) is talking to the server (another piece of software), asking it for delivery times for a pin code (the definition, or common language, between the two). The delivery time that the server returns is the API’s “response.”
What is a DAM API?
Continuing our explanation above, DAM APIs allow you to communicate with the Digital Asset Management system using a defined language. These APIs allow you to use all or most of the features of a DAM system, but instead of doing it via the user interface in a browser, you would be able to use them from a software program.
For example, a DAM’s user interface lets you drag and drop an image to upload it. However, the same DAM system could offer an API to upload images from your user’s Android app. Here, the Android app is one piece of software, the DAM system itself is the other, and the upload API communicates what and how to upload to the DAM system. Once completed, the API responds with information about the uploaded image.
What’s ImageKit? What’s its DAM offering?
ImageKit is a leading Digital Asset Management solution. It comes with standard DAM features like storage, management, AI tagging, custom metadata, and advanced search. It also has optimized asset delivery integrated into the system.
While ImageKit’s DAM system comes with a user-friendly UI, like all leading players in this space, it also offers media APIs to use all of its features programmatically.
Use cases you can solve with DAM APIs
Before jumping into the APIs themselves, here are some of the ways you can put a DAM system’s APIs to use.
If you have an app or website where users can upload images, videos, or other content, you can use the DAM API to upload them directly to the DAM system.
Suppose you build a product that offers integrated media storage to its users. Instead of exposing your users to the DAM system directly, you would want to integrate it into your product natively (or white-label it). You can use a combination of DAM upload APIs, list and search APIs, and get image detail APIs to build this asset library for the users of your product.
Suppose your team uses an existing CMS or any other system to manage internal data. You can use the DAM as the underlying file storage and use its advanced management and search features via its APIs. Your team never has to leave their existing CMS while still leveraging all the features of the DAM system.
If you require it, and your DAM solution supports it, you can use real-time image and video optimization APIs to deliver the assets to your users on different platforms. ImageKit is one such DAM that supports file delivery for any asset uploaded to its media library.
Common Digital Asset Management APIs
Let’s look at some of the standard APIs that most DAM systems offer. For demonstration and examples, we’ll be using ImageKit’s DAM APIs.
1. API for uploading a file
This is the most basic API of all — before you use the DAM system, you need to upload files to it.
ImageKit’s Upload API allows you to upload an actual file from your file system or a web URL. You can use this API on a front-end application, like a mobile app, or a back-end application, like your application server. Here is an example of uploading the image from a back-end application.
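A sketch of that request using ImageKit’s upload API (the file path, file name, and folder here are placeholders; check ImageKit’s docs for the full field list):

```shell
curl -X POST "https://upload.imagekit.io/api/v1/files/upload" \
  -u your_private_api_key: \
  -F "file=@/path/to/image.jpg" \
  -F "fileName=image.jpg" \
  -F "folder=/products"
```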
You would get some information about the uploaded file in the API response. For example, you would usually get a unique ID for your file, which would be super valuable for subsequent APIs, along with other information like the file’s format, size, upload time, etc.
2. APIs for deleting and moving files

After uploading a file to the DAM system, you might want to remove it or move it around to different folders. This also can be done programmatically via APIs.
For example, in ImageKit, to move a file from one folder to the other, you need to give the file’s path (sourceFilePath) and the destination folder path (destinationPath) in the API.
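For instance, a move request might look like this (the folder paths are illustrative):

```shell
curl -X POST "https://api.imagekit.io/v1/files/move" \
  -u your_private_api_key: \
  -H "Content-Type: application/json" \
  -d '{
    "sourceFilePath": "/summer-campaign/shoe.jpg",
    "destinationPath": "/archive/2021"
  }'
```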
3. APIs for tagging and custom metadata

File nomenclature and a correct folder structure are often insufficient for organizing and finding content in a growing repository of digital assets.
Associating custom metadata or tags with an asset helps build another layer of organization for your content. For example, you could assign values to fields such as “Product Category” (Shoe, Shirt, Jeans, etc.), “Platform” (Facebook, Instagram, etc.), “Sale Name” (Thanksgiving, Black Friday, etc.) to the files in your DAM system, to build a more business-specific organization.
Taking advantage of AI through services like Google Cloud Vision can help speed up asset tagging workflows and reduce errors. In addition, good DAM systems provide APIs to associate tags with your assets.
For example, ImageKit allows you to add AI-inferred tags, using Google Cloud Vision, to your asset in the code below.
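A sketch of that request (the file ID is a placeholder, and the extension fields are based on ImageKit’s google-auto-tagging extension; verify the exact shape against the current docs):

```shell
curl -X PATCH "https://api.imagekit.io/v1/files/your_file_id/details" \
  -u your_private_api_key: \
  -H "Content-Type: application/json" \
  -d '{
    "extensions": [
      {
        "name": "google-auto-tagging",
        "maxTags": 10,
        "minConfidence": 80
      }
    ]
  }'
```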
While the above API adds tags to an existing file, you can also do this when the file is first uploaded.
4. Searching for a file using search APIs
The most significant advantage of using a DAM is searching for the exact asset amongst thousands of them. Therefore, a good search API is necessary for any DAM system. It should allow searching on all the possible parameters associated with an asset, including custom tags and metadata that we add to create a business-specific organization for ourselves.
ImageKit provides a very flexible search API that lets you construct complex search queries to pinpoint the exact resource you need. The example below finds all assets you created more than seven days ago with a size of more than 2MB.
curl -X GET "https://api.imagekit.io/v1/files" \
-G --data-urlencode "searchQuery=createdAt >= \"7d\" AND size > \"2mb\"" \
-u your_private_api_key:
5. The image and video delivery API
Once your team starts managing and collaborating on assets in the DAM, the next obvious step is using those assets on the web: sharing them via their URLs on your website, in apps, in emails, and so on.
Leading DAM solutions like ImageKit provide ready-to-use URLs for any file stored with them. ImageKit API also has in-built automatic optimizations and real-time manipulations for images and videos that ensure optimized asset delivery every time.
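For example, a delivery URL with a resizing transformation looks something like this (the ImageKit ID and file name are illustrative):

```
https://ik.imagekit.io/your_imagekit_id/sale-banner.jpg?tr=w-200,h-200
```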
The above example resizes the original image to a 200×200 square thumbnail while compressing it and optimizing its format. And, of course, you can do the same for videos using a similar URL-based API. Read more about ImageKit’s media APIs.
Conclusion
Apart from the basic APIs explained above, all DAM solutions offer several other APIs that allow you to manage folders, get file details, control the shareability of assets, and more. The possibilities are endless for integrating these APIs to simplify and automate your existing workflows. Using a DAM solution like ImageKit, with its extensive media management APIs given here, will bring your marketing, creative, and technology teams on the same page and help them execute campaigns faster. Sign up today on ImageKit’s forever free DAM plan and start optimizing your media workflows.
Animation on the web is often a contentious topic. I think, in part, it’s because bad animation is blindingly obvious, whereas well-executed animation fades seamlessly into the background. When handled well, animation can really elevate a website, whether it’s just adding a bit of personality or providing visual hints and lessening cognitive load. Unfortunately, it often feels like there are two camps, accessibility vs. animation. This is such a shame because we can have it all! All it requires is a little consideration.
Here’s a couple of important questions to ask when you’re creating animations.
Does this animation serve a purpose?
This sounds serious, but don’t worry — the site’s purpose is key. If you’re building a personal portfolio, go wild! However, if someone’s trying to file a tax return, whimsical loading animations aren’t likely to be well-received. On the other hand, an animated progress bar could be a nice touch while providing visual feedback on the user’s action.
Is it diverting focus from important information?
It’s all too easy to get caught up in the excitement of whizzing things around, but remember that the web is primarily an information system. When people are trying to read, animating text or looping animations that play nearby can be hugely distracting, especially for people with ADD or ADHD. Great animation aids focus; it doesn’t disrupt it.
So! Your animation’s passed the test, what next? Here are a few thoughts…
Did we allow users to opt-out?
It’s important that our animations are safe for people with motion sensitivities. Those with vestibular (inner ear) disorders can experience dizziness, headaches, or even nausea from animated content.
Luckily, we can tap into operating system settings with the prefers-reduced-motion media query. This media query detects whether the user has requested the operating system to minimize the amount of animation or motion it uses.
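The global opt-out looks something like this widely used pattern (a sketch; tune it to your site):

```css
@media (prefers-reduced-motion: reduce) {
  *,
  *::before,
  *::after {
    animation-duration: 0.01ms !important;
    animation-iteration-count: 1 !important;
    transition-duration: 0.01ms !important;
    scroll-behavior: auto !important;
  }
}
```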
This snippet taps into that user setting and, if enabled, it gets rid of all your CSS animations and transitions. It’s a bit of a sledgehammer approach though — remember, the key word in this media query is reduced. Make sure functionality isn’t breaking and that users aren’t losing important context by opting out of the animation. I prefer tailoring reduced motion options for those users. Think simple opacity fades instead of zooming or panning effects.
What about JavaScript, though?
Glad you asked! We can make use of the reduced motion media query in JavaScript land, too!
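Here’s a sketch: matchMedia gives us the same query in JavaScript, and pickAnimation is my own illustrative helper (not from the article) that swaps a movement-heavy entrance for a simple fade:

```javascript
// Swap a movement-heavy entrance for a simple fade when the
// user prefers reduced motion. Helper name and values are illustrative.
function pickAnimation(prefersReducedMotion) {
  if (prefersReducedMotion) {
    // A simple opacity fade
    return { keyframes: { opacity: [0, 1] }, duration: 300 };
  }
  // The full entrance, with movement
  return {
    keyframes: { opacity: [0, 1], transform: ['translateY(40px)', 'none'] },
    duration: 600,
  };
}

// In the browser, feed it the media query result:
// const mq = window.matchMedia('(prefers-reduced-motion: reduce)');
// const settings = pickAnimation(mq.matches);
// element.animate(settings.keyframes, settings.duration);
```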
Tapping into system preferences isn’t bulletproof. After all, there’s no guarantee that everyone affected by motion knows how to change their settings. To be extra safe, it’s possible to add a reduced motion toggle in the UI and put the power back in the user’s hands to decide. We {the collective} has a really nice implementation on their site.
Here’s a straightforward example:
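Below is a minimal sketch of that idea; the function name, storage key, and data attribute are my own, hypothetical ones:

```javascript
// A hypothetical reduced-motion toggle: an explicit user choice wins,
// otherwise we fall back to the OS-level preference.
function resolveMotionPreference(systemPrefersReduced, userOverride) {
  if (userOverride === 'reduce' || userOverride === 'allow') {
    return userOverride;
  }
  return systemPrefersReduced ? 'reduce' : 'allow';
}

// In the browser, wire it up to a checkbox and persist the choice:
// const mq = window.matchMedia('(prefers-reduced-motion: reduce)');
// toggle.addEventListener('change', function () {
//   const choice = toggle.checked ? 'reduce' : 'allow';
//   localStorage.setItem('motion-preference', choice);
//   document.documentElement.dataset.motion =
//     resolveMotionPreference(mq.matches, choice);
// });

console.log(resolveMotionPreference(true, null)); // "reduce"
```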
Scroll animations
One of my favorite things about animating on the web is hooking into user interactions. It opens up a world of creative possibilities and really allows you to engage with visitors. But it’s important to remember that not all interactions are opt-in — some (like scrolling) are inherently tied to how someone navigates around your site.
The Nielsen Norman Group has done some great research on scroll interactions. One particular finding really stuck out to me: a lot of task-focused users couldn’t tell the difference between slow load times and scroll-triggered entrance animations. All they noticed was a frustrating delay in the interface’s response time. I can relate to this; it’s annoying when you’re trying to scan a website for some information and you have to wait for the page to slowly ease and fade into view.
If you’re using GreenSock’s ScrollTrigger plugin for your animations, you’re in luck. We’ve added a cool little property to help avoid this frustration: fastScrollEnd.
fastScrollEnd detects the user’s scroll velocity. When the user scrolls super fast, like they’re in a hurry, ScrollTrigger skips the entrance animations to their end state. Check it out!
There’s also a super easy way to make your scroll animations reduced-motion-friendly with ScrollTrigger.matchMedia():
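A sketch of how that can look with ScrollTrigger.matchMedia() in GSAP 3 (the selectors and values are illustrative; newer GSAP versions favor gsap.matchMedia(), so check the docs for your version):

```javascript
ScrollTrigger.matchMedia({
  // Full entrance animation for users who haven't asked for reduced motion
  "(prefers-reduced-motion: no-preference)": function () {
    gsap.from(".card", {
      y: 60,
      opacity: 0,
      scrollTrigger: { trigger: ".card", start: "top 80%" },
    });
  },
  // A simple fade for everyone else
  "(prefers-reduced-motion: reduce)": function () {
    gsap.from(".card", {
      opacity: 0,
      scrollTrigger: { trigger: ".card", start: "top 80%" },
    });
  },
});
```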
I hope these snippets and insights help. Remember, consider the purpose, lead with empathy, and use your animation powers responsibly!
Lea Verou made a Web Component for processing Markdown. Looks like there were a couple of others out there already, but I agree with Lea that this is a good use case for the light DOM (as opposed to the shadow DOM that is normally quite useful for web components), and that’s what Lea’s uses. The output is HTML, so you can style it on the page like any other markup rather than having to deal with the shadow DOM. I still feel like the styling stories for shadow DOM all kinda suck.
The story of how it came to be is funny and highly relatable. You just want to build one simple thing and it turns out you have to do 15 other things and it takes the better part of a week.
One of the best things you can do for your website in 2022 is add a service worker, if you don’t have one in place already. Service workers give your website super powers. Today, I want to show you some of the amazing things that they can do, and give you a paint-by-numbers boilerplate that you can use to start using them on your site right away.
What are service workers?
A service worker is a special type of JavaScript file that acts like middleware for your site. Any request that comes from the site, and any response it gets back, first goes through the service worker file. Service workers also have access to a special cache where they can save responses and assets locally.
Together, these features allow you to…
Serve frequently accessed assets from your local cache instead of the network, reducing data usage and improving performance.
Provide access to critical information (or even your entire site or app) when the visitor goes offline.
Prefetch important assets and API responses so they’re ready when the user needs them.
Provide fallback assets in response to HTTP errors.
In short, service workers allow you to build faster and more resilient web experiences.
Unlike regular JavaScript files, service workers do not have access to the DOM. They also run on their own thread, and as a result, don’t block other JavaScript from running. Service workers are designed to be fully asynchronous.
Security
Because service workers intercept every request and response for your site or app, they have some important security limitations.
Service workers follow a same-origin policy.
You can’t run your service worker from a CDN or third party. It has to be hosted on the same domain where it will run.
Service workers only work on sites with an installed SSL certificate.
Many web hosts provide SSL certificates at no cost or for a small fee. If you’re comfortable with the command line, you can also install one for free using Let’s Encrypt.
There is an exception to the SSL certificate requirement for localhost testing, but you can’t run your service worker from the file:// protocol. You need to have a local server running.
Adding a service worker to your site or web app
To use a service worker, the first thing we need to do is register it with the browser. You can register a service worker using the navigator.serviceWorker.register() method. Pass in the path to the service worker file as an argument.
navigator.serviceWorker.register('sw.js');
You can run this in an external JavaScript file, but I prefer to run it directly in a script element inline in my HTML so that it runs as soon as possible.
Unlike other types of JavaScript files, service workers only work for the directory in which they exist (and any of its sub-directories). A service worker file located at /js/sw.js would only work for files in the /js directory. As a result, you should place your service worker file inside the root directory of your site.
While service workers have fantastic browser support, it’s a good idea to make sure the browser supports them before running your registration script.
if (navigator && navigator.serviceWorker) {
navigator.serviceWorker.register('sw.js');
}
After the service worker installs, the browser can activate it. Typically, this only happens when…
there is no service worker currently active, or
the user refreshes the page.
The service worker won’t run or intercept requests until it’s activated.
Listening for requests in a service worker
Once the service worker is active, it can start intercepting requests and running other tasks. We can listen for requests with self.addEventListener() and the fetch event.
// Listen for request events
self.addEventListener('fetch', function (event) {
// Do stuff...
});
Inside the event listener, the event.request property is the request object itself. For ease, we can save it to the request variable.
// Listen for request events
self.addEventListener('fetch', function (event) {
// Get the request
let request = event.request;
// Bug fix
// https://stackoverflow.com/a/49719964
if (event.request.cache === 'only-if-cached' && event.request.mode !== 'same-origin') return;
});
Once your service worker is active, every single request is sent through it, and will be intercepted with the fetch event.
Service worker strategies
Once your service worker is installed and activated, you can intercept requests and responses, and handle them in various ways. There are two primary strategies you can use in your service worker:
Network-first. With a network-first approach, you pass along requests to the network. If the request isn’t found, or there’s no network connectivity, you then look for the request in the service worker cache.
Offline-first. With an offline-first approach, you check for a requested asset in the service worker cache first. If it’s not found, you send the request to the network.
Network-first and offline-first approaches work in tandem. You will likely mix-and-match approaches depending on the type of asset being requested.
Offline-first is great for large assets that don’t change very often: CSS, JavaScript, images, and fonts. Network-first is a better fit for frequently updated assets like HTML and API requests.
Strategies for caching assets
How do you get assets into your browser’s cache? You’ll typically use two different approaches, depending on the types of assets.
Pre-cache on install. Every site and web app has a set of core assets that are used on almost every page: CSS, JavaScript, a logo, favicon, and fonts. You can pre-cache these during the install event, and serve them using an offline-first approach whenever they’re requested.
Cache as you browse. Your site or app likely has assets that won’t be accessed on every visit or by every visitor; things like blog posts and images that go with articles. For these assets, you may want to cache them in real-time as the visitor accesses them.
You can then serve those cached assets, either by default or as a fallback, depending on your approach.
Implementing network-first and offline-first strategies in your service worker
Inside a fetch event in your service worker, the request.headers.get('Accept') method returns the MIME type for the content. We can use that to determine what type of file the request is for. MDN has a list of common files and their MIME types. For example, HTML files have a MIME type of text/html.
We can pass the type of file we’re looking for into the String.includes() method as an argument, and use if statements to respond in different ways based on the file type.
// Listen for request events
self.addEventListener('fetch', function (event) {
// Get the request
let request = event.request;
// Bug fix
// https://stackoverflow.com/a/49719964
if (event.request.cache === 'only-if-cached' && event.request.mode !== 'same-origin') return;
// HTML files
// Network-first
if (request.headers.get('Accept').includes('text/html')) {
// Handle HTML files...
return;
}
// CSS & JavaScript
// Offline-first
if (request.headers.get('Accept').includes('text/css') || request.headers.get('Accept').includes('text/javascript')) {
// Handle CSS and JavaScript files...
return;
}
// Images
// Offline-first
if (request.headers.get('Accept').includes('image')) {
// Handle images...
}
});
Network-first
Inside each if statement, we use the event.respondWith() method to modify the response that’s sent back to the browser.
For assets that use a network-first approach, we use the fetch() method, passing in the request, to send the request for the HTML file along to the network. If it returns successfully, we return the response in our callback function. This is the same behavior as not having a service worker at all.
If there’s an error, we can use Promise.catch() to modify the response instead of showing the default browser error message. We can use the caches.match() method to look for that page, and return it instead of the network response.
// Send the request to the network first
// If it's not found, look in the cache
event.respondWith(
fetch(request).then(function (response) {
return response;
}).catch(function (error) {
return caches.match(request).then(function (response) {
return response;
});
})
);
Offline-first
For assets that use an offline-first approach, we’ll first check inside the browser cache using the caches.match() method. If a match is found, we’ll return it. Otherwise, we’ll use the fetch() method to pass the request along to the network.
// Check the cache first
// If it's not found, send the request to the network
event.respondWith(
caches.match(request).then(function (response) {
return response || fetch(request).then(function (response) {
return response;
});
})
);
Pre-caching core assets
Inside an install event listener in the service worker, we can use the caches.open() method to open a service worker cache. We pass in the name we want to use for the cache, app, as an argument.
The cache is scoped and restricted to your domain. Other sites can’t access it, and if they have a cache with the same name the contents are kept entirely separate.
The caches.open() method returns a Promise. If a cache already exists with this name, the Promise will resolve with it. If not, it will create the cache first, then resolve.
// Listen for the install event
self.addEventListener('install', function (event) {
event.waitUntil(caches.open('app'));
});
Next, we can chain a then() method to our caches.open() method with a callback function.
In order to add files to the cache, we need to request them, which we can do with the new Request() constructor. We can use the cache.add() method to add the file to the service worker cache. Then, we return the cache object.
We want the install event to wait until we’ve cached our file before completing, so let’s wrap our code in the event.waitUntil() method:
// Listen for the install event
self.addEventListener('install', function (event) {
// Cache the offline.html page
event.waitUntil(caches.open('app').then(function (cache) {
cache.add(new Request('offline.html'));
return cache;
}));
});
I find it helpful to create an array with the paths to all of my core files. Then, inside the install event listener, after I open my cache, I can loop through each item and add it.
let coreAssets = [
'/css/main.css',
'/js/main.js',
'/img/logo.svg',
'/img/favicon.ico'
];
// On install, cache some stuff
self.addEventListener('install', function (event) {
// Cache core assets
event.waitUntil(caches.open('app').then(function (cache) {
for (let asset of coreAssets) {
cache.add(new Request(asset));
}
return cache;
}));
});
Cache as you browse
Your site or app likely has assets that won’t be accessed on every visit or by every visitor; things like blog posts and images that go with articles. For these assets, you may want to cache them in real-time as the visitor accesses them. On subsequent visits, you can load them directly from cache (with an offline-first approach) or serve them as a fallback if the network fails (using a network-first approach).
When a fetch() method returns a successful response, we can use the Response.clone() method to create a copy of it.
Next, we can use the caches.open() method to open our cache. Then, we’ll use the cache.put() method to save the copied response to the cache, passing in the request and copy of the response as arguments. Because this is an asynchronous function, we’ll wrap our code in the event.waitUntil() method. This prevents the event from ending before we’ve saved our copy to cache. Once the copy is saved, we can return the response as normal.
Note: we use cache.put() instead of cache.add() because we already have a response. Using cache.add() would make another network call.
// HTML files
// Network-first
if (request.headers.get('Accept').includes('text/html')) {
event.respondWith(
fetch(request).then(function (response) {
// Create a copy of the response and save it to the cache
let copy = response.clone();
event.waitUntil(caches.open('app').then(function (cache) {
return cache.put(request, copy);
}));
// Return the response
return response;
}).catch(function (error) {
return caches.match(request).then(function (response) {
return response;
});
})
);
}
If you do nothing else, this will be a huge boost to your site in 2022.
But there’s so much more you can do with service workers. There are advanced caching strategies for APIs. You can provide an offline page with critical information if a visitor loses their network connection. You can clean up bloated caches as the user browses.
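As one example, cleaning up old caches can be sketched like this (my sketch; the versioned cache name is an assumption of this example):

```javascript
// Clear out old caches when a new service worker activates.
// The versioned cache name is an assumption of this sketch.
const CACHE_NAME = 'app-v2';

// Pure helper: everything except the current cache should go
function cachesToDelete(cacheNames, keep) {
  return cacheNames.filter(function (name) {
    return name !== keep;
  });
}

// In the service worker:
// self.addEventListener('activate', function (event) {
//   event.waitUntil(
//     caches.keys().then(function (names) {
//       return Promise.all(
//         cachesToDelete(names, CACHE_NAME).map(function (name) {
//           return caches.delete(name);
//         })
//       );
//     })
//   );
// });

console.log(cachesToDelete(['app-v1', 'app-v2'], CACHE_NAME)); // [ 'app-v1' ]
```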
Jeremy Keith’s book, Going Offline, is a great primer on service workers. If you want to take things to the next level and dig into progressive web apps, Jason Grigsby’s book dives into the various strategies you can use.
And for a pragmatic deep dive you can complete in about an hour, I also have a course and ebook on service workers with lots of code examples and a project you can work on.
I hadn’t heard of most of the Chrome extensions that Sarem Gizaw lists as 2021 favorites. Here are my hot takes on all of them, except the virtual learning specific ones that aren’t very relevant to me.
Loom: Oh, that’s neat how it records your screen and your camera, encouraging you to do little personal walkthroughs of a thing for someone. I actually had a guest author use this to pitch an idea and it was both above-and-beyond in a good way, and didn’t look overly difficult to do. I like how it’s a browser app, meaning that something like a Chromebook could use it even though it feels like an app that would otherwise be native.
Mote: Similar in spirit to Loom—leave feedback with your voice instead of writing it out by hand. This is clearly a much more personal way to do things. I wonder if it’s big in the education space where that student/teacher connection would be bolstered by this.
Wordtune: Looks like a competitor to Grammarly. I like to see that sort of competition. That said, I think Grammarly is good, though a bit heavy-handed sometimes. It’s nice to see them being pushed by another player in the same space.
Forest: I’m skeptical of productivity apps that ultimately feel like yet-another-app I need to use and learn. But maybe it’s worth it if the point is that it keeps you on task and away from distracting apps? I was very skeptical of Centered, but it really did seem to help the times I tried it.
Dark Reader: This forces dark mode on sites that otherwise don’t have one. I’ve heard it does a pretty decent job.
Tab Manager Plus: I’m as guilty as anyone for having too many tabs open… all the time. But now that Chrome has tab groups natively (as well as “pinning”), I just roll with that.
Nimbus: Interesting that this is in an entirely different category than Loom. I guess it’s more about screenshots than video. Note that you don’t need a browser plugin to do full-length screenshots—in DevTools you can do CMD+Shift+P then search for “Capture Full Size Screenshot.” I actually use that quite a bit. For other screenshots, I’m super hot on Cleanshot.
Stylus: I love that this exists and that it is still somewhat actively developed (a beta exists). Being able to slap some extra CSS onto sites of your own liking and have it persist is super cool, and a community of user-created styles for other websites is even cooler.
Rakuten: This is one of those things that automatically applies coupons during checkout to eCommerce stores. Skeeves me out for some reason, especially the “Cash Back” idea. I would think that if they are sending you money, it’s because they are making money off of you, meaning it’s from hidden-feeling affiliate partnerships or by selling your purchase data and personal details.
Browser extensions have come a long way toward being cross-browser compatible, so I’d think a lot of these are available for Safari and Firefox now—or at least could be without enormous amounts of work if the authors felt like doing it.
Notably, there are no ad blocker plugins in the list. Not a huge surprise there, even though I’m sure they are some of the most-downloaded and used. I use Ghostery, but I haven’t re-evaluated the landscape there in a while. I like how Ghostery makes it easy for me to toggle on-and-off individual scripts, both on individual sites and broadly across all sites. That means I could enable BuySellAds (something even Adblock Plus does by default) and Google Analytics scripts, but turn off A/B testers or gross ad networks.
Recently, I found out that I can combine both of these really cool things with CSS custom properties in such a way that a paint worklet’s appearance can be tailored to fit the user’s preferred color scheme!
Setting the stage
I’ve been overdue for a website overhaul, and I decided to go with a Final Fantasy II theme. My first order of business was to make a paint worklet that renders a randomly generated Final Fantasy-style landscape, which I named overworld.js:
It could use a bit more dressing up—and that’s certainly on the agenda—but this here is a damn good start!
After I finished the paint worklet, I went on to work on other parts of the website, such as a theme switcher for light and dark modes. It was then that I realized that the paint worklet wasn’t adapting to these preferences. This might normally be a huge pain, but with CSS custom properties, I realized I could adapt the paint worklet’s rendering logic to a user’s preferred color scheme with relative ease!
Setting up the custom properties for the paint worklet
The state of CSS these days is pretty dope, and CSS custom properties are one such example of aforementioned dopeness. To make sure both the Paint API and custom properties features are supported, you do a little feature check like this:
const paintAPISupported = "registerProperty" in window.CSS && "paintWorklet" in window.CSS;
The first step is to define your custom properties, which involves the CSS.registerProperty method. That looks something like this:
CSS.registerProperty({
name, // The name of the property
syntax, // The syntax (e.g., <number>, <color>, etc.)
inherits, // Whether the value can be inherited by other properties
initialValue // The default value
});
Custom properties are the best part of using the Paint API, as these values are specified in CSS, but readable in the paint worklet context. This gives developers a super convenient way to control how a paint worklet is rendered—entirely in CSS.
For the overworld.js paint worklet, the custom properties are used to define the colors for various parts of the randomly generated landscape—the grass and trees, the river, the river banks, and so on. Those color defaults are for the light mode color scheme.
The way I register these properties is to set up everything in an object that I call with Object.entries and then loop over the entries. In the case of my overworld.js paint worklet, that looked like this:
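A sketch of that pattern, assuming the property names that appear later in the stylesheet and the light-mode color defaults (the actual code in overworld.js may differ):

```javascript
// Each entry maps a custom property name to its light-mode default;
// the names match the properties the paint worklet reads.
const paintProps = {
  "--overworld-grass-green-color": "#58ab1d",
  "--overworld-dark-rock-color": "#a15d14",
  "--overworld-light-rock-color": "#eba640",
  "--overworld-river-blue-color": "#75b9fd",
  "--overworld-light-river-blue-color": "#c8e3fe"
};

// Loop over the entries and hand each definition to a register
// function (CSS.registerProperty in the browser).
function registerAll(register, props) {
  Object.entries(props).forEach(([name, initialValue]) => {
    register({ name, syntax: "<color>", inherits: false, initialValue });
  });
}

// Guarded so this is a no-op where the API is unsupported.
if (typeof CSS !== "undefined" && "registerProperty" in CSS) {
  registerAll(CSS.registerProperty.bind(CSS), paintProps);
}
```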
Because every property sets an initial value, you don’t have to specify any custom properties when you call the paint worklet later. However, because the default values for these properties can be overridden, they can be adjusted when users express a preference for a color scheme.
Adapting to a user’s preferred color scheme
The website refresh I’m working on has a settings menu that’s accessible from the site’s main navigation. From there, users can adjust a number of preferences, including their preferred color scheme:
The color scheme setting cycles through three options:
System
Light
Dark
“System” defaults to whatever the user has specified in their operating system’s settings. The last two options override the user’s operating system-level setting by setting a light or dark class on the <html> element; in the absence of an explicit site-level preference, the “System” setting relies on whatever is specified in the prefers-color-scheme media queries.
This override hinges on CSS variables:
/* Kicks in if the user's site-level setting is dark mode */
html.dark {
/* (I'm so good at naming colors) */
--pink: #cb86fc;
--firion-red: #bb4135;
--firion-blue: #5357fb;
--grass-green: #3a6b1a;
--light-rock: #ce9141;
--dark-rock: #784517;
--river-blue: #69a3dc;
--light-river-blue: #b1c7dd;
--menu-blue: #1c1f82;
--black: #000;
--white: #dedede;
--true-black: #000;
--grey: #959595;
}
/* Kicks in if the user's system setting is dark mode */
@media screen and (prefers-color-scheme: dark) {
html {
--pink: #cb86fc;
--firion-red: #bb4135;
--firion-blue: #5357fb;
--grass-green: #3a6b1a;
--light-rock: #ce9141;
--dark-rock: #784517;
--river-blue: #69a3dc;
--light-river-blue: #b1c7dd;
--menu-blue: #1c1f82;
--black: #000;
--white: #dedede;
--true-black: #000;
--grey: #959595;
}
}
/* Kicks in if the user's site-level setting is light mode */
html.light {
--pink: #fd7ed0;
--firion-red: #bb4135;
--firion-blue: #5357fb;
--grass-green: #58ab1d;
--dark-rock: #a15d14;
--light-rock: #eba640;
--river-blue: #75b9fd;
--light-river-blue: #c8e3fe;
--menu-blue: #252aad;
--black: #0d1b2a;
--white: #fff;
--true-black: #000;
--grey: #959595;
}
/* Kicks in if the user's system setting is light mode */
@media screen and (prefers-color-scheme: light) {
html {
--pink: #fd7ed0;
--firion-red: #bb4135;
--firion-blue: #5357fb;
--grass-green: #58ab1d;
--dark-rock: #a15d14;
--light-rock: #eba640;
--river-blue: #75b9fd;
--light-river-blue: #c8e3fe;
--menu-blue: #252aad;
--black: #0d1b2a;
--white: #fff;
--true-black: #000;
--grey: #959595;
}
}
It’s repetitive—and I’m sure someone out there knows a better way—but it gets the job done. Regardless of the user’s explicit site-level preference, or their underlying system preference, the page ends up being reliably rendered in the appropriate color scheme.
Setting custom properties on the paint worklet
If the Paint API is supported, a tiny inline script in the document <head> applies a paint-api class to the <html> element.
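That inline script might look something like this sketch (the worklet path and the exact shape of the check are assumptions):

```javascript
// True when both the Paint API and registerProperty are available;
// factored into a plain predicate so the check itself is testable.
function paintAPISupported(cssObj) {
  return !!cssObj && "paintWorklet" in cssObj && "registerProperty" in cssObj;
}

// Browser-only: flag the document so the CSS below can opt in,
// and load the worklet itself (assumed path).
if (typeof window !== "undefined" && paintAPISupported(window.CSS)) {
  document.documentElement.classList.add("paint-api");
  CSS.paintWorklet.addModule("/overworld.js");
}
```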
/* The main content backdrop rendered at a max-width of 64rem.
We don't want to waste CPU time if users can't see the
background behind the content area, so we only allow it to
render when the screen is 64rem (1024px) or wider. */
@media screen and (min-width: 64rem) {
.paint-api .backdrop {
background-image: paint(overworld);
position: fixed;
top: 0;
left: 0;
width: 100%;
height: 100%;
z-index: -1;
/* These oh-so-well-chosen property names refer to the
theme-driven CSS variables that vary according to
the user's preferred color scheme! */
--overworld-grass-green-color: var(--grass-green);
--overworld-dark-rock-color: var(--dark-rock);
--overworld-light-rock-color: var(--light-rock);
--overworld-river-blue-color: var(--river-blue);
--overworld-light-river-blue-color: var(--light-river-blue);
}
}
There’s some weirdness here for sure. For some reason that may change later on (but is at least the case as I write this), you can’t render a paint worklet’s output directly on the <body> element.
Plus, because some pages can be quite tall, I don’t want the entire page’s background to be filled with randomly generated (and thus potentially expensive) artwork. To get around this, I render the paint worklet in an element that uses fixed positioning that follows the user as they scroll down, and occupies the entire viewport.
All quirks aside, the magic here is that the custom properties for the paint worklet are based on the user’s system—or site-level—color scheme preference because the CSS variables align with that preference. In the case of the overworld paint worklet, that means I can adjust its output to align with the user’s preferred color scheme!
Not bad! But this isn’t even that inventive of a way to control how paint worklets render. If I wanted, I could add some extra details that would only appear in a specific color scheme, or do other things to radically change the rendering or add little easter eggs. While I learned a lot this year, I think this intersection of APIs was one of my favorites.
The first time I had my breath taken away by a humble scrollbar was on this very site. When CSS-Tricks v17 rolled out with its FAT CHONKY BOI, my jaw dropped.
I didn’t know you could do that on a professional site. And it would look… good?!
I appreciated so much about it—the gentle gradient, the reckless rounding, the blended background, the sheer satisfying CHONKINESS that dares you to click and wiggle it up and down just to marvel in its tactile heft. How bold! How avant-garde! What sheer, accessible, gracefully degrading delight!
Of course, because fun doesn’t last, the current CSS-Tricks scrollbar is more grown-up and muted, light gray on black. Still on brand, still flexing subtle gradient muscle, but not so distracting that it detracts from the reading experience. In our ultra-functional world of MVPs and 80/20 rules, maximizing efficiency and hacking productivity, a custom scrollbar evinces something about craftsmanship. It says with no words what you can’t in a hundred.
Thanks to some standardization (with more on the way), the API is simple: seven pseudo-elements and eleven pseudo-classes that target (almost) every imaginable component and state of the trusty (and often overlooked) scrollbar. Sounds like a lot, but you can go very far with just three of them:
body::-webkit-scrollbar {
/* required - the "base" of the bar - mostly for setting width */
}
body::-webkit-scrollbar-track {
/* the "track" of the bar - great for customizing "background" colors */
}
body::-webkit-scrollbar-thumb {
/* the actual draggable element, the star of the show! */
}
From here, it works like any other selected element, so bring your full bag of single-div CSS tricks! Media queries work! Background gradients work! Transparency works! Margins with all manner of CSS units work! (Not everything works… I’d love to style cursor on my scrollbars for that authentic GeoCities look.) I tried it out on my site with Lea Verou’s stash of CSS background gradients (my stash of stashes is here) and ended up with an atrocious combo of stripy barber pole (💈) for the thumb element and transparent hearts for the track. But it was most definitely mine—so much so that people have taken to calling it the “swyxbar” when I implemented a subtler version at work.
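As a sketch of how far those three selectors can go, here’s a made-up chunky, stripy thumb (every value is invented for illustration, not the actual “swyxbar”):

```css
/* All widths and colors here are invented for illustration. */
body::-webkit-scrollbar {
  width: 16px;
}
body::-webkit-scrollbar-track {
  background: transparent;
}
body::-webkit-scrollbar-thumb {
  border-radius: 8px;
  /* A stripy "barber pole" gradient for the draggable thumb */
  background: repeating-linear-gradient(
    45deg,
    #f06 0 8px,
    #fff 8px 16px
  );
}
```

(The standardized alternative, where supported, is the scrollbar-width and scrollbar-color properties, though they offer far fewer knobs to turn.)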
Every front-end developer should take this too far at least once in their careers. Live dangerously! Break the rules! Rage against the user agent! And maybe don’t ship scrollbars that break user expectations on a mass-market product (like Google Wave did back in the day)!
How do you make a great website? Everyone has an answer at the ready: Flashy animations! The latest punk-rock CSS trick! Gradients! Illustrations! Colors to pack a punch! Vite! And, sure, all these things might make a website better. But no matter how fancy the application is or how dazzling the technology will ever be under the hood, a great website will always require great text.
So, whenever I’m stuck pondering the question: “how do I make this website better?” I know the answer is always this:
care for the text.
Without great writing, a website is harder to read, extremely difficult to navigate, and impossible to remember. Without great writing, it’s hardly a website at all. But it’s tough to remember this day in and day out—especially when it’s not our job to care about the text—yet each and every <p> tag and <button> element is an opportunity for great writing. It’s a moment to inject some humor or add a considerate note that helps people.
So: care for the text. Got it. But there are so many ways to care! From commas and smart quotes, to labels in our forms, to typography, and even the placeholders in our inputs. It’s a dizzying amount of responsibility—but it’s worth every second of our time.
Here’s one example: a while ago, we needed to explain a new feature to our users and point to it in the UI. We could use our pop-up component to explain how our team just fixed something for a ton of folks—but!—I knew that no matter what the fancy new feature was, our customers would be annoyed by a pop-up.
After thinking about it for far too long I realized that this was an opportunity to acknowledge how annoying this popup was:
With this project, I could’ve just thrown some text in that button that says “Dismiss” but our little team of writers at Sentry constantly remind me that even the smallest, most boring block of text can be a playground. Each string has potential, even in this dumb example. It doesn’t change the world or anything, but it improves something that would otherwise be yawn-worthy, predictable.
Not every bit of text in a website needs to be passive-aggressive though. When you’re in the checkout ordering medicine, you likely don’t want to be reading a quirky story or a poem, and you don’t want to click a button that insults you. In this context, caring for the text means something entirely different. It’s about not getting in the way but being as efficient and empathetic as possible. And this is true of every link in the footer, every navigation item, every alt attribute, and subtitle—they all require care and attention. Because all of these details add up.
These are the details that make a good website great.
One thing people can do to make their websites better is to remember that you are not representative of all your users. Our life experiences and how we interact with the web are not indicative of how everyone interacts with the web.
We must care about accessibility.
Some users rely on assistive technology to navigate web pages.
We must care about multi-language support.
Layouts that make sense to me, as a native English speaker (a left-to-right language), don’t necessarily make sense in Arabic (a right-to-left language) and can’t simply be swapped from a content perspective.
We must care about common/familiar UX paradigms.
What may be obvious to you may not be obvious to other users.
Take the time to research your key user markets and understand how these users expect your website or product to function. Don’t forget accessibility. Don’t forget internationalization. And don’t forget that you are not the representation of all users.
Last year, we kicked out a roundup of published surveys, research, and other findings from around the web. There were some nice nuggets in there, like a general sentiment that the web needs more documentation, Tailwind CSS dun got big, TypeScript is the second most beloved language, and that the top one million sites are “dismal” when it comes to accessibility.
Among many other findings, of course.
Now, as 2021 winds to a close and many of us tend to reflect back on the past year, let’s do that once again. It is pretty interesting to not only see what trends are emerging in our industry (and those adjacent to it) but how those trends, you know, trend over time.
What it is: A study that looks at 8.2 million websites sourced from the Chrome UX Report that analyzes how the sites were made, breaking things up into sections that include page content, user experience, content publishing, and content distribution. The CSS chapter is written by Eric Meyer and Shuvam Manna, and reviewed by folks that include CSS-Tricks guest authors Adam Argyle and Lea Verou.
What it found: Last year, we saw CSS contributing more to overall page weight and that trend continued into this year with the median weight of a CSS file up 7.9% to around 70 KB. There’s so much great data in here to dig through, but here’s one eye-opening stat: this year set the record for most external stylesheets loaded by a page, coming in at a whopping 2,368 files, which is nearly double last year’s record total. It’s like someone is trying to win that sad race.
What it is: An annual look at CSS, surveying developers on the features they use, as well as their understanding of and satisfaction with them. Co-creator Sacha Greif has written about the survey here on CSS-Tricks in the past (including why CSS needs a survey at all). This year’s survey garnered 8,714 responses from developers around the world.
What it found: Welp, Tailwind CSS continues to explode (usage up from 26% to 39%). CSS variables were already mainstream in 2019 (59.6% usage) but are downright common (84.4%) these days. There are lots of little gems like this, but one more that specifically caught my eye is that the perception that CSS is “easy to learn” has subtly trended down between 2019 and 2021.
What it is: This is sort of GitHub’s internal review of activity, like the number of users, repos, languages, and whatnot. Those numbers sort of reveal interesting things about our work-life balance, communities, and general activity.
What it found: Last year’s findings were interesting because developer activity spiked on GitHub between February and March 2020, signaling that people were actually busier as a result of the global pandemic, whether it be from employers or perhaps side projects. This year continues to show a sea change in the way we work, with more than 86% of respondents expecting to work either fully remote or in some sort of hybrid arrangement in the next year.
Also worth noting is that a lack of documentation continues to be an issue. Oh, and this survey shows TypeScript usage absolutely skyrocketing—it’s become the fourth most-used language on GitHub since it was released in 2017, supplanting PHP, which has fallen to sixth since 2019 (perhaps due in part to WordPress continuing its transition to JavaScript).
What it is: A report that the search giant releases each year highlighting top search terms, breaking them down into categories, including news, people, actors, definitions, recipes, and more.
What it found: I only find this report interesting because it’s sorta like a glimpse into the collective mind of users and what they search for on the web. Last year, I described it like flipping through a high school yearbook, and that’s still exactly how it feels, even if it isn’t directly related to front-end web design or development. For example, look at searches that include “how to be” in the query. The top search was “how to be eligible for a stimulus check,” followed by “how to be attractive” and “how to be happy alone.” So, I guess we’re collectively searching for how to be a better-looking rich person who is on a quest for happiness in the absence of companionship. Generalizations, FTW!
What it is: A survey of 80,000 developers (up from 65,000) that looks at the technologies they use and how they use them.
What it found: This report confirms what GitHub’s State of the Octoverse already shows us—TypeScript is growing. More interesting is a question that asks developers what they do when they are “stuck” on something. If you have ever beaten yourself up for not knowing how to solve a particular thing, take solace in the fact that nearly 90% of developers are just like you and have to “Google it” too. If not that, then 80% head over to Stack Overflow for ideas.
What it is: Last year, we looked at Angular’s general developer survey. This year, they have one devoted entirely to CSS. It’s more of a status update than a survey, but still interesting to see what the framework is prioritizing when it comes to the namesake language of this very website.
What it found: Again, no findings here. But Angular reports it has dropped support for Internet Explorer 11, which has opened the floodgates for other CSS features to make their way into the framework—things like CSS grid, logical properties, calc() and more.
What it is: A survey of 31,743 developers (up from 20,000) by JetBrains, maker of the popular PhpStorm IDE.
What it found: The key takeaways are published right up front in this report. Like last year, JavaScript is the most popular language. But unlike last year, JavaScript is also the language most developers are studying, taking over Python’s spot at the top.
What it is: An evaluation of the accessibility of the home pages of the top 1 million websites, plus over 100,000 additional interior pages. What are those top million sites? They include ones from the Majestic Millions list with additional page analysis coming from the Open PageRank Initiative and Alexa Top Sites.
What it found: Some good news—the number of distinct accessibility errors found in this year’s batch of sites is down 15.6%! The bad news? The report still found 51,379,694 errors overall, and those are only the ones they could detect. And even though there’s a decrease in the number of sites that contain WCAG 2 errors, it’s still 97.4% of all sites that were scanned, which is a mind-blowing number. The leading issue? Low-contrast text, found on 86.4% of all home pages in the study. We have lots of work left to do in this space.
WebAIM Survey of Web Accessibility Practitioners #3
What it is: This is the third WebAIM survey that polls web accessibility practitioners. The last one was done in 2018. What I like about this survey is that it paints a fairly nice picture of what it looks like to work in an accessibility role and the expectations that come with it.
What it found: Accessibility training and education really caught my attention. There is very little formal schooling in web accessibility (12.5%). Most of it comes from online resources (91.3%) and on-the-job training/experiences (83.4%). That seems like a huge opportunity for academia to swoop in and help grow the field.
What it is: This is the ninth time WebAIM has surveyed people who use screen readers to browse the web. We often hear that knowing your audience is a good way to create better user experiences, and this survey is a nice broad look at an audience that often goes overlooked.
What it found: What’s the most cited disability? Blindness. Which screen reader is used the most? JAWS. How about on mobile? VoiceOver. How do most users find information on a page? Navigate through the headings. And, hey, 40% of respondents believe the web has gotten more accessible, but 60% believe it is either unchanged or has gotten worse (and they aren’t wrong based on the WebAIM Million report above).
What it is: A survey of 15,000 (down from 20,000) developers and HR professionals, covering learning, skills, languages, and demographics. It’s a little glimpse into the hiring that goes into development roles.
What it found: The report cites several findings in the summary, like that 48% of companies offer the possibility to work 100% remote.
What it is: A voluntary survey of 5,154 (down from 6,607) working professionals that evaluates their career priorities, challenges, and motivations. This isn’t exactly focused on the front end. But given that LinkedIn Learning is now a core part of LinkedIn itself, and its archive of front-end videos and courses is growing, it feels like it could start to produce some interesting insights over time about what we’re learning and how we learn it.
What it found: Again, this is all about people’s career motivations more than it is about anything on the front end. But as a card-carrying member of Gen X, I found it interesting that Gen Z learners watched 50% more hours of learning content in 2020 than they did in 2019. On more of a sour note, though, only 40% of folks say their managers are actively challenging them to learn new skills. Seems like that number should be a lot higher since the report also shows that 59% consider “upskilling” and “reskilling” their top priority.
What it is: Insights on innovation and hiring trends in and around the tech industry.
What it found: “With the acceleration of digital transformation during the pandemic, every company is now prioritizing one thing: innovation.” Exactly what you might expect from the opening line of a report that is focused on—cough—innovation. Anyway, there is one jarring finding that says only 47% of respondents use skills as the foundation for creating a tech job description. That raises the question: what on earth is being used to write tech job descriptions? I’m reminded that job titles in our industry are all over the place and that the interview process can be just as bad.
What it is: A study on the growth, evolvement and use of the Internet of Things (IoT), a term used to describe physical objects taking on Internet capabilities, say a watch, lightbulb, or whatever. The study polled about 3,000 people with a 20-minute online survey.
What it found: Mostly nice trivia for cocktail chatter. 90% of companies report adopting IoT strategies, which is consistent with last year’s 91%. Personally, when I hear “Internet of Things,” my mind goes straight to smart refrigerators and more HomeKit-supported toys. But what these findings show is that technologies—like artificial intelligence and edge computing—are being used to automate business operations, manufacturing, and logistics in such ways that improve quality, consistency, and efficiency. Wondering how the pandemic has impacted the IoT? A large chunk of companies (44%) say it’s accelerated their IoT initiatives.
Developer Nation 2021 State of the Developer Survey
What it is: A survey of 19,000+ developers across 169 countries who work on a range of tech projects, from 5G and IoT to machine learning and apps for third-party platforms. It looks at things like developer demographics, workplace behavior, and various industry trends.
What it found: One question asks developers what, if anything, would make them leave their current employer and you might not be surprised that the leading factor is… drumroll… money. I would’ve expected something like 75% of folks to say that, but the actual figure is 50%. Those who wouldn’t change their employer for anything? That would be 10%. I love subjective hypotheticals like this.
What it is: UpWork’s second annual survey that checks on the current state of freelancing, including the effect Covid has had on it, and what we might expect in the future.
What it found: The percentage of freelancers who provide skilled services is 53%, up from 50% in 2020 and 45% in 2019. Also, 56% of non-freelancers say they are likely to freelance in the future. Much of that likely has to do with the current “Great Resignation” of workers leaving their jobs post-pandemic as work becomes more remote and flexible. Oh, and 44% of freelancers say they make more money freelancing than what they believe they could get working for a “traditional” employer… so maybe that group of developers in the Developer Nation study who say money is the biggest factor for leaving a job ought to look into freelancing instead. 🤑
What it is: A survey of 3,359 designers to find out who they are, what they do, and what sort of tools they’re using this year to bridge the physical gaps left by the rise of working from home.
What it found: The first thing that stood out to me is that “product designer” is the leading job title (31%) that respondents use to identify themselves. That’s slightly ahead of “UX designer” (30%), but leaps past other job titles, like “UI designer” (10%), “web designer” (5%), and “graphic designer” (3%). There’s a lot less surprise as far as design tooling goes, with Figma vastly leading the pack (64%), followed by Sketch (12%). That said, it’s a little surprising to me that Adobe Illustrator and Photoshop combine for a minuscule 3%.
What it is: A survey of more than 28,000 developers (up from 13,500!) that measures who is developing with APIs, what sort of work they’re doing with them, and how APIs are evolving.
What it found: Postman users made 855 million API requests in the past year, which is up a massive 56%. And the trend should continue—67% of developers say they’ve adopted an API-first philosophy and 94% say they believe their companies will either invest more or the same in APIs in the next year. We’ll see when those results roll in next year!
What it is: A survey of 880 anonymous submissions commissioned by the Chrome team about the state of scrolling on the web. Of those submissions, 336 completed every answer. The questions were drawn from a 2019 MDN Web DNA Report that outlines the most commonly reported issues related to scrolling.
What it found: Chris actually covered this back in September and noted that nearly half of surveyed developers are dissatisfied with scrolling on the web. He somewhat lumped himself in that group noting that smooth scrolling leaves a lot to be desired when it comes to development, and how scroll snapping seems to attract the occasional browser bug.
Choice open-ended answer: “Scrolljacking should be considered a crime.”
What it is: Looks like this survey slipped under our radar last year because this is Sparkbox’s fourth edition looking at design systems, zeroing in on adoption, contributions, design and technical debt, and how organizations use design systems. This year’s results reflect the answers of 376 submissions.
What it found: Roughly 40% of folks consider their design system either successful (31%) or “very” successful (8%). When it comes to design system adoption, 57% said it was an individual who brought the idea of a design system to their organization, whereas 22% said it was leadership, and 3% said it came from a third-party recommendation. Interestingly enough, encouraging adoption is the top priority of those surveyed, but overcoming design and technical debt is the top challenge.
What it is: A survey of 4,072 developers working on the macOS platform with the goal of understanding the profile of this specific developer niche. And while it means nothing, I hereby crown this the most gorgeous report of the bunch. 🏆
What it found: A majority of those surveyed (54.8%) fall somewhere in the 30-44 age group, most of those (39%) are between 30-39. If this sample group is truly reflective of the Mac developer community, then it looks as though Mac development itself is on the rise with 48.8% bringing fewer than 10 years of experience into the job. (5.1% are OGs with 30+ years of experience.) Other than that, most developers say they write JavaScript the most (54.6%) while Swift is the top language (28.8%) they want to learn. And hey, CSS-Tricks is noted as a top learning resource! The feeling is mutual, as the Tower team wrote our recent Advanced Git series.
What it is: Another one we missed last year! HackerEarth’s second annual survey polls 25,431 developers across 171 countries, asking participants about their skills, workplace, learning methods, and tooling.
What it found: First off, I love that this survey has a section purely about student developers because it reveals what they’re interested in, including artificial intelligence (16.3%), general information technology (13.8%), data science (11.8%), and the Internet of Things (9%). Blockchain (4.7%) makes an appearance in there as well. An overwhelming majority (81.8%) start to learn coding between ages 15-21. The results also show a lot of interest in TypeScript, that Zoom fatigue is real, that LinkedIn is the leading way to find work, and that many (22%) take walks as a way to unwind (compared to 5.3% who either can’t or don’t take breaks at all). Interesting stuff!
What it is: An analysis of threats and risks on the web for the first six months of 2021. Unless I missed it somewhere, there isn’t a whole lot of detail on the methodology used here, but I suspect there’s scanning involved since the data shows that 7.3 million ransomware threats were detected.
What it found: That 7.3 million figure sounds BIG (and it is), but it also represents about half of what was detected in the first six months of 2020. It’s worth poking at this because it identifies a number of prominent attacks, most of which I was totally oblivious to. It’s crazy how sophisticated attacks have become—to the extent where one ransomware attack left half of the U.S. East Coast without fuel for a spell this year.
What it is: I think this is the only instance of a survey in this list that calls itself a “2020” survey because that’s when the data was collected—rather than 2021, the year it was published. 17,295 WordPress professionals and users submitted answers to questions that dig into how they use and work with WordPress.
What it found: Kudos to WordPress (disclaimer: they’re a sponsor of this site!) for not sugarcoating the fact that the data shows clear frustrations with some features, and that its Net Promoter Score (NPS, a metric for customer loyalty) has hit an all-time low (41% are either passive customers or detractors). 59% say they use WordPress because “it is what I know best,” which might be a little concerning as the platform wades deeper into the no-code waters of blocks and full-site editing capabilities—new terrain for most WordPress developers and users.
There’s so much data in here, from site customization trends, to how comfortable developers are writing JavaScript and PHP, to working with React, to level of WordPress experience, to the most used plugins, to… well, there’s just a lot.
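For context on that NPS figure: Net Promoter Score is commonly calculated by asking respondents to rate their likelihood of recommending a product on a 0-10 scale, then subtracting the percentage of detractors (0-6) from the percentage of promoters (9-10), with passives (7-8) counting toward the total but neither side. Here’s a minimal sketch of that standard formula—the sample numbers are purely illustrative, not from the WordPress survey:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Illustrative only: 59 promoters, 21 passives, 20 detractors out of 100
sample = [10] * 59 + [8] * 21 + [3] * 20
print(nps(sample))  # → 39
```

The score can range from -100 (all detractors) to +100 (all promoters), which is why even a modestly positive NPS can mask a large share of passive or unhappy users.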
An Informal Survey of Web Performance Tooling in 2021
What it is: Sia Karamalegos opened a survey to learn more about the web performance tooling folks are using to make sites fast. It’s a small sample size of 36 people, but interesting nonetheless.
What it found: WebPageTest and Chrome’s DevTools came in tied for most used performance tooling. Funny enough, second place was also a tie, but between PageSpeed Insights and simply spinning up a plain ol’ browser with JavaScript disabled. That’s what people say they use, but there’s also a similar question that asks what performance tools they want to use. Boomerang, Lighthouse Treemap, and Sitespeed.io top that list.
Well, that does it for another year of rounding up research! I know it probably goes without saying, but this is all just for fun. Very few of the surveys followed a scientific method and many of the sample sizes are too small to be a true, proven reflection of reality. But what fun it is to grok the results and put them up against our personal assumptions!
Know of a report I missed? Let me know and I’ll try to work it in. 🤠