I’d say “website” fits better than “mobile app” but I like this framing from Max Lynch:
Every production mobile app ultimately has a set of recurring tasks around integration, testing, deployment, and long term maintenance. These tasks often must be automated across a team of many developers and app projects. Building a process for these tasks can be incredibly time consuming and require specialized infrastructure experience, but is critical for the success of any serious app project.
They are talking about “Continuous Integration and Continuous Deployment,” or CI/CD.
Everybody is trying to get you on their CI/CD tooling, and it’s obvious why: it’s a form of lock-in. This stuff is hard, so if they can help make it easier, that’s great, but they tend to do it in their own special way, which means you can’t just up and leave without causing a bunch of work for yourself. I ain’t throwing shade, it’s just how it is.
So much CI/CD stuff crosses my attention:
Max was writing about AppFlow, which is a new CI/CD thing from Ionic. I haven’t used it, but hey, it looks nice. I haven’t used Semaphore either, but it looks nice too.
Heroku deserves a high five for teaching developers long ago that CI/CD should be Git-based and as simple as git push heroku master.
I’m probably missing at least 20 companies here. Like I say, everybody wants you on their system. They want you storing your secrets there. They want you configuring your permissions there.
Matt Hobbs says you should hire a front-end developer because…
“A front-end developer is the best person to champion accessibility best practices in product teams.”
“80-90% of the end-user response time is spent on the front end.”
“A front-end developer takes pressure off interaction designers.”
“If you do not have a front-end developer there is a high risk that the good work the rest of the team does will not be presented to users in the best way.”
I’ve been intrigued by the morphing effect ever since I was a little kid. There’s something about a shape-shifting animation that always captures my attention. The first time I saw morphing left me wondering, “Wow, how did they do that?” Since then, I’ve created demos and written an article about the effect.
There are a lot of options when it comes to animation libraries that support morphing. Many of them are good and provide numerous features. Lately, I’ve been absorbed by react-spring, a nifty physics-based animation library built for React. Adam Rackis recently posted a nice overview of it. Among its many features is SVG morphing. In fact, the beauty of react-spring lies in how it supports morphing: it lets you do it directly in the markup, right where you define your SVG path descriptors. This is good from a bookkeeping perspective, since the path descriptors are where you’d typically expect them to be.
Here’s a video of what we’re looking into in this article:
It’s a morphing effect in an onboarding sequence. Here, it’s used as a background effect, meant to complement the foreground animation and make it stand out a bit more rather than take over the scene.
Creating the SVG document
The first thing we want to do is to create the underlying model. Usually, once I have a clear picture of what I want to do, I create some kind of design. Most of my explorations start with a model and end with a demo. In most cases, it means creating an SVG document in my vector editor. I use Inkscape to draw my SVGs.
When I create SVG documents, I use exact proportions; I find it’s better to be precise. It helps my perception of what I want to create when the vector editor, the browser, and the code editor all share the same coordinate system. For example, let’s say you’re about to create a 24px ✕ 30px SVG icon, including padding. The best approach is to use the exact same size: an SVG document that is 24 pixels wide by 30 pixels tall. Should the proportions turn out to be wrong, they can always be adjusted later on. SVG is forgiving in that sense. It’s scalable, no matter what you do.
The SVG document we’re creating is 256 pixels wide and 464 pixels high.
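As a sketch, the root element of such a document might look like this (the attribute values match the dimensions above; the comment marks where the shapes go):

```html
<svg xmlns="http://www.w3.org/2000/svg"
     width="256" height="464" viewBox="0 0 256 464">
  <!-- paths for the morphing shapes go here -->
</svg>
```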
Drawing the models
When creating models, we need to think about where we place the nodes and how many nodes to use. This is important, because it’s where we lay the groundwork for the animation. Modeling is what morphing is all about: we have one set of nodes that transforms into another. First, the two sets need to have the exact same number of nodes. Second, the sets should somehow correlate.
If the relation between the vector shapes is not carefully thought through, the animation won’t be perfect. Each and every node will affect the animation. Their position and the curvature need to be just right. For more detailed information on how nodes in SVG paths are constructed, refer to the explanation of Bézier curves on MDN.
We also need to take both shapes into account. One of the vectors may contain parts that can’t be found in the other, or there may be other differences between the two models. In these cases, it can be a good idea to insert extra nodes in certain places. It’s best to form strategies: this corner goes there, this straight line bulges into a curve, and so on.
I’ve put together a pen to illustrate what it looks like when sets correlate badly versus when they’re carefully designed. In the morphing effect on the left, nodes are randomly placed; the nodes that make up the numbers one and two have no relation. In the example on the right, the placement of the nodes is more carefully planned, which leads to a more coherent experience.
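To make the idea of correlating sets concrete, here are two hypothetical path descriptors that share the same command structure (a move, two curve segments, a close), so every node has a counterpart to morph into; the coordinates are made up for illustration:

```javascript
// Two hypothetical descriptors with identical command structure (M, C, C, Z);
// only the coordinates differ, so every node has a counterpart to morph into.
const shapeOne = "M 20 80 C 40 10 65 10 95 80 C 110 115 65 140 50 120 Z";
const shapeTwo = "M 20 70 C 30 40 75 40 95 70 C 105 105 70 150 45 125 Z";

// A quick structural check: strip the numbers and compare command sequences
const commands = (d) => d.replace(/[^a-zA-Z]/g, "");
console.log(commands(shapeOne) === commands(shapeTwo)); // true
```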
The first model
The line tool is what we use to draw the first vector shape. As the model we’re creating is more abstract, it’s slightly more forgiving. We still need to think about the placement and the curvature, but it allows for more sloppiness.
As with sizing the SVG document, creating the models for morphing is an iterative process. If it’s not right the first time, we can always go back and adjust. It usually takes an iteration or two to make the animation shine. Here’s what the model looks like when completed.
The result is a smooth abstract SVG shape with eight nodes.
The second and third models
Once the first model is complete, it’s time to draw the second model (which we can also think of as a state). This is the shape the first set will morph into. It could be the end state, i.e., a single morphing effect, or a step along the way, like a keyframe. In the case we’re looking at, there are three steps. Again, each model must correlate with the previous one. One way to make sure the models match up is to create the second vector as a duplicate of the first. This way, we know the models have the same number of nodes and the same look and feel.
Be careful with the editor. Vector editors typically optimize for file size and format. It might very well make the models incompatible when saving changes. I’ve made a habit of inspecting the SVG code after saving the file. It also helps if you’re familiar with the path descriptor format. It is a bit cryptic if you’re not used to it. It could also be a good idea to disable optimization in the vector editor’s preferences.
Repeat the above process for the third shape. Copy, relocate all of the nodes, and set the third color.
Lights, camera… action!
Once the models are created, we’ve done most of the work. Now it’s time to look at the animation part. React-spring comes with a set of hooks that we can use for animation and morphing. useSpring is a perfect candidate for the effect in this example. It’s meant to be used for single animations like the one we’re creating. Here’s how to initiate animations with the useSpring hook.
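A minimal sketch of that initiation (hook-era API, where useSpring returns the animated props plus a set function; the exact shape may vary between react-spring versions):

```jsx
import { useSpring } from "react-spring";

// Inside a function component:
// x starts at 0 and is the single animated value the morph is built around
const [{ x }, set] = useSpring(() => ({ x: 0 }));
```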
The above gives us an animation property, x, to build our morphing effect around. A great thing about these animation properties is that we can alter them to create almost any kind of animation. If the value is off, we can change it through interpolation.
The second parameter, the set function, allows us to trigger updates. Below is a snippet of code showing its use. It updates the animation value x with a gesture handler useDrag from the react-use-gesture library. There are many ways in which we can trigger animations in react-spring.
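A sketch of that wiring (the movement destructuring follows react-use-gesture’s documented API; treat the details as illustrative):

```jsx
import { useDrag } from "react-use-gesture";

// Inside the component: update the animated value x as the user drags
const bind = useDrag(({ movement: [mx] }) => {
  set({ x: mx });
});

// ...then spread the handlers onto the element: <div {...bind()} />
```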
We now have everything set up to combine our models, the path descriptors, with the markup. By adding the animated keyword to the JSX code, we activate react-spring’s animation system. With the interpolation call to (previously named interpolate), we convert drag distances to morphed shapes. The output array contains the models already discussed. To get them in place, we simply copy the path descriptors from the SVG files into the markup. Three different descriptors, d, from three different path elements, copied from three different SVG files, are now combined into one. Here’s what the JSX node looks like when powered with a react-spring animation.
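A sketch of that node (pathOne, pathTwo, and pathThree stand in for the three copied descriptors; the range values are illustrative):

```jsx
import { animated, to } from "react-spring";

// The drag distance x is interpolated across the three path descriptors
<animated.path
  d={to(x, {
    range: [0, 0.5, 1],
    output: [pathOne, pathTwo, pathThree],
  })}
/>;
```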
Let’s look at the differences between a standard JSX path element and what we currently have. To get the morphing animation in place, we have:
added the animated keyword to make the JSX path element animate with React spring,
changed the descriptor d from a string to a React spring interpolation, and
converted the distance x to a keyframe animation between three path descriptors.
Development environment
I have yet to find the perfect development environment for working with SVG. Currently, I go back and forth between the vector editor, the IDE, and the browser. There’s a bit of copying and some redundancy involved. It’s not perfect, but it works. I have experimented a bit, in the past, with scripts that parse SVGs. I have still not found something that I can apply to all scenarios. Maybe it’s just me who got it wrong. If not, it would be great if web development with SVGs could be a bit more seamless.
But in that short amount of time, Chrome has a few new tricks up its sleeve. One of the features Umar covered was the ability to emulate certain browsing conditions including, among many, vision deficiencies like blurred vision.
Chrome 86 introduces new emulators!
Emulate missing local fonts (great for testing when a user’s device does not have an installed font)
Emulate prefers-reduced-data (to complement Chrome support for this new feature!)
Emulate inactive users (yay, no more multiple browser windows with different user accounts!)
The video game Doom famously would flash the screen red when you were hit. Chris Johnson not only took that idea, but incorporated a bunch of the UI from Doom into this tongue-in-cheek JavaScript library called Doom Scroller. Get it? Like, doom scrolling, but like, Doom scrolling. It’s funny, trust me.
I extracted bits from Chris’ cool project to focus on the damage animation itself. The red flash is done in HTML and CSS. First, we create a full screen overlay:
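A sketch of that overlay, assuming a div with a class of damage-overlay at the end of the body (the class names here are my own, not necessarily the project’s): a fixed element covering the viewport that starts invisible via opacity.

```css
/* A full-screen overlay that starts invisible (via opacity, not display) */
.damage-overlay {
  position: fixed;
  inset: 0;
  background: red;
  opacity: 0;
  pointer-events: none;
}
```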
Note that it’s not display: none. That would be much harder to animate, as display isn’t animatable; we’d have to wait until the animation completes before applying it. It’s doable, just annoying.
To flash it, we’ll apply a class that does it, but only temporarily.
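One way to sketch that: a class that runs a short flash animation, fading the overlay back out (the timing and opacity values here are illustrative):

```css
.damage-overlay.flash {
  animation: damage-flash 300ms ease-out;
}

@keyframes damage-flash {
  from { opacity: 0.8; }
  to   { opacity: 0; }
}
```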
The next bit calls the function that does the damage flash. Essentially, it tracks the current scroll position, and if it’s past nextDamagePosition, it flashes red and resets nextDamagePosition one full screen height away.
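A sketch of that logic (the names follow the description above; the project’s actual code differs in its details):

```javascript
let nextDamagePosition = window.innerHeight;

function takeDamage() {
  const overlay = document.querySelector(".damage-overlay");
  overlay.classList.add("flash");
  // Remove the class when the animation ends so it can flash again later
  overlay.addEventListener(
    "animationend",
    () => overlay.classList.remove("flash"),
    { once: true }
  );
}

window.addEventListener("scroll", () => {
  if (window.scrollY >= nextDamagePosition) {
    takeDamage();
    // Set the next threshold one full screen height away
    nextDamagePosition = window.scrollY + window.innerHeight;
  }
});
```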
If you want to see all that, I’ve abstracted it into this Pen:
Google Analytics works by putting a client-side bit of JavaScript on your site. Netlify Analytics works by parsing server logs server-side. They are not exactly apples to apples, feature-wise. Google Analytics is, I think it’s fair to say, far more robust. You can do things like track custom events which might be very important analytics data to a site. But they both have the basics. They both want to tell you how many pageviews your homepage got, for instance.
There are two huge things that affect these numbers:
Client-side JavaScript is blockable and tons of people use content blockers, particularly for third-party scripts from Google. Server-side logs are not blockable.
Netlify doesn’t filter things out of that log, meaning bots are counted in addition to regular people visiting.
So I’d say: Netlify probably has more accurate numbers, but a bit inflated from the bots.
Plugins are a common feature of libraries and frameworks, and for a good reason: they allow developers to add functionality, in a safe, scalable way. This makes the core project more valuable, and it builds a community — all without creating an additional maintenance burden. What a great deal!
So how do you go about building a plugin system? Let’s answer that question by building one of our own, in JavaScript.
I’m using the word “plugin” but these things are sometimes called other names, like “extensions,” “add-ons,” or “modules.” Whatever you call them, the concept (and benefit) is the same.
Let’s build a plugin system
Let’s start with an example project called BetaCalc. The goal for BetaCalc is to be a minimalist JavaScript calculator that other developers can add “buttons” to. Here’s some basic code to get us started:
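Here’s a sketch of that starting point, consistent with the description that follows:

```javascript
// BetaCalc: a minimalist calculator that "displays" results via console.log
const betaCalc = {
  currentValue: 0,

  setValue(newValue) {
    this.currentValue = newValue;
    console.log(this.currentValue);
  },

  plus(addend) {
    this.setValue(this.currentValue + addend);
  },

  minus(subtrahend) {
    this.setValue(this.currentValue - subtrahend);
  },
};

// Using the calculator
betaCalc.setValue(3); // logs 3
betaCalc.plus(3);     // logs 6
betaCalc.minus(2);    // logs 4
```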
We’re defining our calculator as an object-literal to keep things simple. The calculator works by printing its result via console.log.
Functionality is really limited right now. We have a setValue method, which takes a number and displays it on the “screen.” We also have plus and minus methods, which will perform an operation on the currently displayed value.
It’s time to add more functionality. Let’s start by creating a plugin system.
The world’s smallest plugin system
We’ll start by creating a register method that other developers can use to register a plugin with BetaCalc. The job of this method is simple: take the external plugin, grab its exec function, and attach it to our calculator as a new method:
```javascript
// The Calculator
const betaCalc = {
  // ...other calculator code up here

  register(plugin) {
    const { name, exec } = plugin;
    this[name] = exec;
  }
};
```
And here’s an example plugin, which gives our calculator a “squared” button:
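Here’s a sketch of such a plugin (it assumes the betaCalc object and register method from above):

```javascript
// A "squared" plugin: a name (metadata) plus an exec function (code)
const squaredPlugin = {
  name: "squared",
  exec() {
    this.setValue(this.currentValue * this.currentValue);
  },
};

betaCalc.register(squaredPlugin);
```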
In many plugin systems, it’s common for plugins to have two parts:
Code to be executed
Metadata (like a name, description, version number, dependencies, etc.)
In our plugin, the exec function contains our code, and the name is our metadata. When the plugin is registered, the exec function is attached directly to our betaCalc object as a method, giving it access to BetaCalc’s this.
So now, BetaCalc has a new “squared” button, which can be called directly:
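For example:

```javascript
betaCalc.setValue(3); // logs 3
betaCalc.squared();   // logs 9
betaCalc.squared();   // logs 81
```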
There’s a lot to like about this system. The plugin is a simple object-literal that can be passed into our function. This means that plugins can be downloaded via npm and imported as ES6 modules. Easy distribution is super important!
But our system has a few flaws.
By giving plugins access to BetaCalc’s this, they get read/write access to all of BetaCalc’s code. While this is useful for getting and setting the currentValue, it’s also dangerous. If a plugin were to redefine an internal function (like setValue), it could produce unexpected results for BetaCalc and other plugins. This violates the open-closed principle, which states that a software entity should be open for extension but closed for modification.
Also, the “squared” function works by producing side effects. That’s not uncommon in JavaScript, but it doesn’t feel great — especially when other plugins could be in there messing with the same internal state. A more functional approach would go a long way toward making our system safer and more predictable.
A better plugin architecture
Let’s take another pass at a better plugin architecture. This next example changes both the calculator and its plugin API:
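Here’s a sketch of that architecture, consistent with the description that follows (the exact names are my reading of it):

```javascript
// The Calculator
const betaCalc = {
  currentValue: 0,

  setValue(value) {
    this.currentValue = value;
    console.log(this.currentValue);
  },

  // "Core" buttons, kept separate from plugins
  core: {
    plus: (oldValue, addend) => oldValue + addend,
    minus: (oldValue, subtrahend) => oldValue - subtrahend,
  },

  plugins: {},

  // Look up a button by name, call it with the current value,
  // and use its return value as the new calculator value
  press(buttonName, newVal) {
    const func = this.core[buttonName] || this.plugins[buttonName];
    this.setValue(func(this.currentValue, newVal));
  },

  register(plugin) {
    const { name, exec } = plugin;
    this.plugins[name] = exec;
  },
};

// The "squared" plugin, now a pure function of the current value
const squaredPlugin = {
  name: "squared",
  exec(currentValue) {
    return currentValue * currentValue;
  },
};

betaCalc.register(squaredPlugin);

// Press the calculator's buttons by name
betaCalc.setValue(3);      // logs 3
betaCalc.press("plus", 2); // logs 5
betaCalc.press("squared"); // logs 25
```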
First, we’ve separated the plugins from “core” calculator methods (like plus and minus) by putting them in their own plugins object. Storing our plugins in a plugins object makes our system safer. Now plugins accessing this can’t see the BetaCalc properties; they can only see the properties of betaCalc.plugins.
Second, we’ve implemented a press method, which looks up the button’s function by name and then calls it. Now when we call a plugin’s exec function, we pass it the current calculator value (currentValue), and we expect it to return the new calculator value.
Essentially, this new press method converts all of our calculator buttons into pure functions. They take a value, perform an operation, and return the result. This has a lot of benefits:
It simplifies the API.
It makes testing easier (for both BetaCalc and the plugins themselves).
It reduces the dependencies of our system, making it more loosely coupled.
This new architecture is more limited than the first example, but in a good way. We’ve essentially put up guardrails for plugin authors, restricting them to only the kind of changes that we want them to make.
In fact, it might be too restrictive! Now our calculator plugins can only do operations on the currentValue. If a plugin author wanted to add advanced functionality like a “memory” button or a way to track history, they wouldn’t be able to.
Maybe that’s ok. The amount of power you give plugin authors is a delicate balance. Giving them too much power could impact the stability of your project. But giving them too little power makes it hard for them to solve their problems — in that case you might as well not have plugins.
What more could we do?
There’s a lot more we could do to improve our system.
We could add error handling to notify plugin authors if they forget to define a name or return a value. It’s good to think like a QA dev and imagine how our system could break so we can proactively handle those cases.
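As a sketch, that validation could look like this (the checks and error messages are illustrative):

```javascript
// Reject plugins that are missing a name or an exec function
function validatePlugin(plugin) {
  if (!plugin || typeof plugin.name !== "string") {
    throw new Error("Plugin must define a string name");
  }
  if (typeof plugin.exec !== "function") {
    throw new Error(`Plugin "${plugin.name}" must define an exec function`);
  }
}

// Inside register, we'd call validatePlugin(plugin) before attaching it
```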
We could expand the scope of what a plugin can do. Currently, a BetaCalc plugin can add a button. But what if it could also register callbacks for certain lifecycle events — like when the calculator is about to display a value? Or what if there was a dedicated place for it to store a piece of state across multiple interactions? Would that open up some new use cases?
We could also expand plugin registration. What if a plugin could be registered with some initial settings? Could that make the plugins more flexible? What if a plugin author wanted to register a whole suite of buttons instead of a single one — like a “BetaCalc Statistics Pack”? What changes would be needed to support that?
Your plugin system
Both BetaCalc and its plugin system are deliberately simple. If your project is larger, then you’ll want to explore some other plugin architectures.
One good place to start is to look at existing projects for examples of successful plugin systems. For JavaScript, that could mean jQuery, Gatsby, D3, CKEditor, or others.
You may also want to be familiar with various JavaScript design patterns. (Addy Osmani has a book on the subject.) Each pattern provides a different interface and degree of coupling, which gives you a lot of good plugin architecture options to choose from. Being aware of these options helps you better balance the needs of everyone who uses your project.
Besides the patterns themselves, there’s a lot of good software development principles you can draw on to make these kinds of decisions. I’ve mentioned a few along the way (like the open-closed principle and loose coupling), but some other relevant ones include the Law of Demeter and dependency injection.
I know it sounds like a lot, but you’ve gotta do your research. Nothing is more painful than making everyone rewrite their plugins because you needed to change the plugin architecture. It’s a quick way to lose trust and discourage people from contributing in the future.
Conclusion
Writing a good plugin architecture from scratch is difficult! You have to balance a lot of considerations to build a system that meets everyone’s needs. Is it simple enough? Powerful enough? Will it work long term?
It’s worth the effort though. Having a good plugin system helps everyone. Developers get the freedom to solve their problems. End users get a large number of opt-in features to choose from. And you get to grow an ecosystem and community around your project. It’s a win-win-win situation.