Thursday, 30 September 2021

Creating the Perfect Commit in Git

This article is part of our “Advanced Git” series. Be sure to follow Tower on Twitter or sign up for the Tower newsletter to hear about the next articles!

A commit in Git can be one of two things:

  • It can be a jumbled assortment of changes from all sorts of topics: some lines of code for a bugfix, a stab at rewriting an old module, and a couple of new files for a brand new feature.
  • Or, with a little bit of care, it can be something that helps us stay on top of things. It can be a container for related changes that belong to one and only one topic, and thereby make it easier for us to understand what happened.

In this post, we’re talking about what it takes to produce the latter type of commit or, in other words: the “perfect” commit.

Advanced Git series:

  • Part 1: Creating the Perfect Commit in Git
    You are here!
  • Part 2: Branching Strategies in Git
    Coming soon!
  • Part 3: Better Collaboration With Pull Requests
  • Part 4: Merge Conflicts
  • Part 5: Rebase vs. Merge
  • Part 6: Interactive Rebase
  • Part 7: Cherry-Picking Commits in Git
  • Part 8: Using the Reflog to Restore Lost Commits

Why clean and granular commits matter

Is it really necessary to compose commits in a careful, thoughtful way? Can’t we just treat Git as a boring backup system? Let’s revisit our example from above one more time.

If we follow the first path – where we just cram changes into commits whenever they happen – commits lose much of their value. The separation between one commit and the next becomes arbitrary: there seems to be no reason why changes were put into one and not the other commit. Looking at these commits later, e.g. when your colleagues try to make sense of what happened in that revision, is like going through the “everything drawer” that every household has: there’s everything in here that found no place elsewhere, from crayons to thumbtacks and cash slips. It’s terribly hard to find something in these drawers!

Following the second path – where we put only things (read: changes) that belong together into the same commit – requires a bit more planning and discipline. But in the end, you and your team are rewarded with something very valuable: a clean commit history! These commits help you understand what happened. They help explain the complex changes that were made in a digestible manner.

How do we go about creating better commits?

Composing better commits

One concept is central to composing better commits in Git: the Staging Area.

The Staging Area was made exactly for this purpose: to allow developers to select changes – in a very granular way – that should be part of the next commit. And, unlike other version control systems, Git forces you to make use of this Staging Area.

Unfortunately, however, it’s still easy to ignore the tidying effect of the Staging Area: a simple git add . will take all of our current local changes and mark them for the next commit.

It’s true that this can be a helpful and valid approach sometimes. But often, we would be better off stopping for a second and deciding whether all of our changes are really about the same topic, or whether two or three separate commits might be a much better choice.

In most cases, it makes a lot of sense to keep commits smaller rather than larger. Focused on an individual topic (instead of two, three, or four), they tend to be much more readable.

The Staging Area allows us to carefully pick each change that should go into the next commit: 

$ git add file1.ext file2.ext

This will only mark these two files for the next commit and leave the other changes for a future commit or further edits.

This simple act of pausing and deliberately choosing what should make it into the next commit goes a long way. But we can get even more precise than that. Because sometimes, even the changes in a single file belong to multiple topics.

Let’s look at a real-life example: the exact changes in our “index.html” file. We can either use the “git diff” command or a Git desktop GUI like Tower:

Now, we can add the -p option to git add:

$ git add -p index.html

We’re instructing Git to go through this file on a “patch” level: Git takes us by the hand and walks us through all of the changes in this file. And it asks us, for each chunk, if we want to add it to the Staging Area or not:

By typing [Y] (for “yes”) for the first chunk and [N] (for “no”) for the second chunk, we can include the first part of our changes in this file in the next commit, but leave the other changes for a later time or more edits.
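
For reference, the interactive session looks roughly like this (the diff output shown between prompts depends on your file, of course):

$ git add -p index.html
diff --git a/index.html b/index.html
[…diff output for the first chunk…]
Stage this hunk [y,n,q,a,d,s,e,?]? y
[…diff output for the second chunk…]
Stage this hunk [y,n,q,a,d,s,e,?]? n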

The result? A more granular, more precise commit that’s focused on a single topic.

Testing your code

Since we’re talking about “the perfect commit” here, we cannot ignore the topic of testing. How exactly you “test” your code can certainly vary, but the notion that tests are important isn’t new. In fact, many teams refuse to consider a piece of code completed if it’s not properly tested.

If you’re still on the fence about whether you should test your code or not, let’s debunk a couple of myths about testing:

  • “Tests are overrated”: The fact is that tests help you find bugs more quickly. Most importantly, they help you find them before something goes into production – which is when mistakes hurt the most. And finding bugs early is, without exaggeration, priceless!
  • “Tests cost valuable time”: After some time you will find that well-written tests make you write code faster. You waste less time hunting bugs and find that, more often, a well-structured test primes your thinking for the actual implementation, too.
  • “Testing is complicated”: While this might have been an argument a couple of years ago, this is now untrue. Most professional programming frameworks and languages come with extensive support for setting up, writing, and managing tests.

All in all, adding tests to your development habits is almost guaranteed to make your code base more robust. And, at the same time, they help you become a better programmer.

A valuable commit message

Version control with Git is not a fancy way of backing up your code. And, as we’ve already discussed, commits are not a dump of arbitrary changes. Commits exist to help you and your teammates understand what happened in a project. And a good commit message goes a long way to ensure this.

But what makes a good commit message?

  • A brief and concise subject line that summarizes the changes
  • A descriptive message body that explains the most important facts (and as concisely as possible)

Let’s start with the subject line: the goal is to get a brief summary of what happened. Brevity, of course, is a relative term; but the general rule of thumb is to (ideally) keep the subject under 50 characters. By the way, if you find yourself struggling to come up with something brief, this might be an indicator that the commit tackles too many topics! It could be worthwhile to take another look and see if you have to split it into multiple, separate ones.

If you close the subject with a line break and an additional empty line, Git understands that the following text is the message’s “body.” Here, you have more space to describe what happened. It helps to keep the following questions in mind, which your body text should aim to answer:

  • What changed in your project with this commit?
  • What was the reason for making this change?
  • Is there anything special to watch out for? Anything someone else should know about these changes?

If you keep these questions in mind when writing your commit message body, you are very likely to produce a helpful description of what happened. And this, ultimately, benefits your colleagues (and after some time: you) when trying to understand this commit.
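
Putting this together, a hypothetical commit message that follows these guidelines might look like this:

Fix crash when saving an empty user profile

The profile form allowed submitting without a name, which caused a
null reference in the save handler. Empty submissions are now rejected
with an inline validation message instead.

Note: profiles that were already saved without a name are not
migrated by this change.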

On top of the rules I just described about the content of commit messages, many teams also care about the format: agreeing on character limits, soft or hard line wraps, etc. all help to produce better commits within a team. 

To make it easier to stick by such rules, we recently added some features to Tower, the Git desktop GUI that we make: you can now, for example, configure character counts or automatic line wraps just as you like.

A great codebase consists of great commits

Any developer will admit that they want a great code base. But there’s only one way to achieve this lofty goal: by consistently producing great commits! I hope I was able to demonstrate that (a) it’s absolutely worth pursuing this goal and (b) it’s not that hard to achieve.

If you want to dive deeper into advanced Git tools, feel free to check out my (free!) “Advanced Git Kit”: it’s a collection of short videos about topics like branching strategies, Interactive Rebase, Reflog, Submodules and much more.

Have fun creating awesome commits!

Advanced Git series:

  • Part 1: Creating the Perfect Commit in Git
    You are here!
  • Part 2: Branching Strategies in Git
    Coming soon!
  • Part 3: Better Collaboration With Pull Requests
  • Part 4: Merge Conflicts
  • Part 5: Rebase vs. Merge
  • Part 6: Interactive Rebase
  • Part 7: Cherry-Picking Commits in Git
  • Part 8: Using the Reflog to Restore Lost Commits


Wednesday, 29 September 2021

Web Streams Everywhere (and Fetch for Node.js)

Chrome developer advocate Jake Archibald called 2016 “the year of web streams.” Clearly, his prediction was somewhat premature. The Streams Standard was announced back in 2014. It’s taken a while, but there’s now a consistent streaming API implemented in modern browsers (still waiting on Firefox…) and in Node (and Deno).

What are streams?

Streaming involves splitting a resource into smaller pieces called chunks and processing each chunk one at a time. Rather than needing to wait to complete the download of all the data, with streams you can process data progressively as soon as the first chunk is available.

There are three kinds of streams: readable streams, writable streams, and transform streams. Readable streams are where the chunks of data come from. The underlying data sources could be a file or HTTP connection, for example. The data can then (optionally) be modified by a transform stream. The chunks of data can then be piped to a writable stream.

Web streams everywhere

Node has always had its own type of streams, which are generally considered difficult to work with. The Web Hypertext Application Technology Working Group (WHATWG) web standard for streams came later and is largely considered an improvement. The Node docs call them “web streams,” which sounds a bit less cumbersome. The original Node streams aren’t being deprecated or removed, but they will now co-exist with the web standard stream API. This makes it easier to write cross-platform code and means developers only need to learn one way of doing things.

Deno, another attempt at server-side JavaScript by Node’s original creator, has always closely aligned with browser APIs and has full support for web streams. Cloudflare workers (which are a bit like service workers but running on CDN edge locations) and Deno Deploy (a serverless offering from Deno) also support streams.

fetch() response as a readable stream

There are multiple ways to create a readable stream, but calling fetch() is bound to be the most common. The response body of fetch() is a readable stream.

fetch('data.txt')
.then(response => console.log(response.body));

If you look at the console log you can see that a readable stream has several useful methods. As the spec says, “A readable stream can be piped directly to a writable stream, using its pipeTo() method, or it can be piped through one or more transform streams first, using its pipeThrough() method.”
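
As a minimal sketch of that chaining (someTransformStream and someWritableStream are just placeholders for streams you already have, and this runs inside a module or an async function):

const response = await fetch('data.txt');

await response.body
  .pipeThrough(someTransformStream) // optional transform step
  .pipeTo(someWritableStream);      // final destination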

Unlike browsers, Node core doesn’t currently implement fetch. node-fetch, a popular dependency that tries to match the API of the browser standard, returns a Node stream, not a WHATWG stream. Undici, an improved HTTP/1.1 client from the Node.js team, is a modern alternative to the Node.js core http.request (which things like node-fetch and Axios are built on top of). Undici has implemented fetch, and response.body does return a web stream. 🎉

Undici might end up in Node.js core eventually, and it looks set to become the recommended way to handle HTTP requests in Node. Once you npm install undici and import fetch, it works the same as in the browser. In the following example, we pipe the stream through a transform stream. Each chunk of the stream is a Uint8Array. Node core provides a TextDecoderStream to decode binary data.

import { fetch } from 'undici';
import { TextDecoderStream } from 'node:stream/web';

async function fetchStream() {
  const response = await fetch('https://example.com')
  const stream = response.body;
  const textStream = stream.pipeThrough(new TextDecoderStream());
}

response.body is synchronous so you don’t need to await it. In the browser, fetch and TextDecoderStream are available on the global object so you wouldn’t include any import statements. Other than that, the code is exactly the same for Node and web browsers. Deno also has built-in support for fetch and TextDecoderStream.

Async iteration

The for-await-of loop is an asynchronous version of the for-of loop. A regular for-of loop is used to loop over arrays and other iterables. A for-await-of loop can be used to iterate over an array of promises, for example.

const promiseArray = [Promise.resolve("thing 1"), Promise.resolve("thing 2")];
for await (const thing of promiseArray) { console.log(thing); }

Importantly for us, this can also be used to iterate streams.

async function fetchAndLogStream() {
  const response = await fetch('https://example.com')
  const stream = response.body;
  const textStream = stream.pipeThrough(new TextDecoderStream());

  for await (const chunk of textStream) {
    console.log(chunk);
  }
}

fetchAndLogStream();

Async iteration of streams works in Node and Deno. All modern browsers have shipped for-await-of loops but they don’t work on streams just yet.
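
Until that lands, a workaround in the browser is to pull chunks out of the stream manually with a reader. A minimal sketch, assuming textStream is the decoded stream from the example above and we are still inside the same async function:

const reader = textStream.getReader();

while (true) {
  // read() resolves with the next chunk, or with done: true once the stream ends
  const { value, done } = await reader.read();
  if (done) break;
  console.log(value);
}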

Some other ways to get a readable stream

Fetch will be one of the most common ways to get hold of a stream, but there are other ways. Blob and File both have a .stream() method that returns a readable stream. The following code works in modern browsers as well as in Node and in Deno — although, in Node, you will need to import { Blob } from 'buffer'; before you can use it:

const blobStream = new Blob(['Lorem ipsum'], { type: 'text/plain' }).stream();

Here is a front-end, browser-based example: if you have an <input type="file"> in your markup, it’s easy to get the user-selected file as a stream.

const fileStream = document.querySelector('input').files[0].stream();

Shipping in Node 17, the FileHandle object returned by the fs/promises open() function has a .readableWebStream() method.

import {
  open,
} from 'node:fs/promises';

const file = await open('./some/file/to/read');

for await (const chunk of file.readableWebStream())
  console.log(chunk);

await file.close();

Streams work nicely with promises

If you need to do something after the stream has completed, you can use promises.

someReadableStream
.pipeTo(someWritableStream)
.then(() => console.log("all data successfully written"))
.catch(error => console.error("something went wrong", error))

Or, you can optionally await the result:

await someReadableStream.pipeTo(someWritableStream)

Creating your own transform stream

We already saw TextDecoderStream (there’s also a TextEncoderStream). You can also create your own transform stream from scratch. The TransformStream constructor can accept an object. You can specify three methods in the object: start, transform and flush. They’re all optional, but transform is what actually does the transformation.

As an example, let’s pretend that TextDecoderStream() doesn’t exist and implement the same functionality (be sure to use TextDecoderStream in production though as the following is an over-simplified example):

const decoder = new TextDecoder();
const decodeStream = new TransformStream({
  transform(chunk, controller) {
    controller.enqueue(decoder.decode(chunk, {stream: true}));
  }
});

Each received chunk is modified and then forwarded on by the controller. In the above example, each chunk is some encoded text that gets decoded and then forwarded. Let’s take a quick look at the other two methods:

const transformStream = new TransformStream({
  start(controller) {
    // Called immediately when the TransformStream is created
  },

  flush(controller) {
    // Called when chunks are no longer being forwarded to the transformer
  }
});

A transform stream is a readable stream and a writable stream working together, usually to transform some data. Every object made with new TransformStream() has a property called readable, which is a ReadableStream, and a property called writable, which is a writable stream. Calling someReadableStream.pipeThrough() writes the data from someReadableStream to transformStream.writable, possibly transforms the data, then pushes the data to transformStream.readable.

Some people find it helpful to create a transform stream that doesn’t actually transform data. This is known as an “identity transform stream” — created by calling new TransformStream() without passing in any object argument, or by leaving off the transform method. It forwards all chunks written to its writable side to its readable side, without any changes. As a simple example of the concept, “hello” is logged by the following code:

const {readable, writable} = new TransformStream();
writable.getWriter().write('hello');
readable.getReader().read().then(({value, done}) => console.log(value))

Creating your own readable stream

It’s possible to create a custom stream and populate it with your own chunks. The new ReadableStream() constructor takes an object that can contain a start function, a pull function, and a cancel function. The start function is invoked immediately when the ReadableStream is created. Inside the start function, use controller.enqueue to add chunks to the stream.

Here’s a basic “hello world” example:

import { ReadableStream } from "node:stream/web";
const readable = new ReadableStream({
  start(controller) {
    controller.enqueue("hello");
    controller.enqueue("world");
    controller.close();
  },
});

const allChunks = [];
for await (const chunk of readable) {
  allChunks.push(chunk);
}
console.log(allChunks.join(" "));

Here’s a more real-world example, taken from the streams specification, that turns a WebSocket into a readable stream:

function makeReadableWebSocketStream(url, protocols) {
  let websocket = new WebSocket(url, protocols);
  websocket.binaryType = "arraybuffer";

  return new ReadableStream({
    start(controller) {
      websocket.onmessage = event => controller.enqueue(event.data);
      websocket.onclose = () => controller.close();
      websocket.onerror = () => controller.error(new Error("The WebSocket errored"));
    }
  });
}

Node streams interoperability

In Node, the old Node-specific way of working with streams isn’t being removed. The old node streams API and the web streams API will coexist. It might therefore sometimes be necessary to turn a Node stream into a web stream, and vice versa, using .fromWeb() and .toWeb() methods, which are being added in Node 17.

import {Readable} from 'node:stream';
import {fetch} from 'undici';

const response = await fetch(url);
const readableNodeStream = Readable.fromWeb(response.body);
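
Going the other direction is similar. As a rough sketch (the file path is just a placeholder), Readable.toWeb() wraps a classic Node stream in a web ReadableStream:

import { createReadStream } from 'node:fs';
import { Readable } from 'node:stream';

// Wrap an old-style Node stream in a WHATWG web stream
const webStream = Readable.toWeb(createReadStream('./some-file.txt'));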

Conclusion

ES modules, EventTarget, AbortController, URL parser, Web Crypto, Blob, TextEncoder/Decoder: increasingly more browser APIs are ending up in Node.js. The knowledge and skills are transferable. Fetch and streams are an important part of that convergence.

Domenic Denicola, a co-author of the streams spec, has written that the goal of the streams API is to provide an efficient abstraction and unifying primitive for I/O, like promises have become for asynchronicity. To become truly useful on the front end, more APIs need to actually support streams. At the moment a MediaStream, despite its name, is not a readable stream. If you’re working with video or audio (at least at the moment), a readable stream can’t be assigned to srcObject. Or let’s say you want to get an image and pass it through a transform stream, then insert it onto the page. At the time of writing, the code for using a stream as the src of an image element is somewhat verbose:

const response = await fetch('cute-cat.png');
const bodyStream = response.body;
const newResponse = new Response(bodyStream);
const blob = await newResponse.blob();
const url = URL.createObjectURL(blob);
document.querySelector('img').src = url;    

Over time, though, more APIs in both the browser and Node (and Deno) will make use of streams, so they’re worth learning about. There’s already a stream API for working with Web Sockets in Deno and Chrome, for example. Chrome has implemented Fetch request streams. Node and Chrome have implemented transferable streams to pipe data to and from a worker to process the chunks in a separate thread. People are already using streams to do interesting things for products in the real world: the creators of file-sharing web app Wormhole have open-sourced code to encrypt a stream, for example.

Perhaps 2022 will be the year of web streams…



Front-End Dev Shortcuts in iOS 15

I was pretty stoked when Chris shared a way to “View Source” on mobile. Sure, it’s not the same as a built-in feature but it allows iOS users like myself a way to peek at a site’s code the same way folks on Android can by prepending view-source: to a URL.

I was curious what sorts of dev-related tools might be baked right into my iPhone, so I dug around. It’s actually a perfect time to do that with iOS 15 fresh out of the oven and all.

The iOS Shortcuts app might be the most underrated app of all. It’s sorta like IFTTT or Zapier for iOS in that you get these hooks you can play around with to make one app respond to another and do things. Like dim the lights in my house when Marvin Gaye hits the HomePod. Useful stuff like that.

And guess what? Turns out there are a few pre-made Shortcut recipes that can be useful for front-end developers.

View source (for realzzz)

Of course, there’s a pre-fab shortcut to view the source of any webpage.

Now, when I’m on a webpage, I can ask Siri to view source, or open the Share Sheet to trigger the shortcut.

The option is added to the Share Sheet.
Notice the option to share the source.

This would be perfect if we had line numbers, syntax highlighting, a mono font (seriously!), zooming, and… OK, maybe it’s far from perfect.

Grab all the images on a page

This is another Shortcut gem. Open a webpage and ask Siri to “get images from page.” Now, I’m no fan of scraping assets off webpages, especially in bulk, but not all image collection has to be nefarious.

It’s also available in the Share Sheet.
Sorry, not sorry.

Wayback Machine

How fun is this?! Someone had the idea to create a shortcut that opens the current page in the Wayback Machine to see past versions.

So cool to see that name in a list of options.
Saved 7,599 times since 2007? That’s more than once per day!

Edit webpage

I actually think this one is legitimately useful. Say you want to test content in a design and want to see exactly how it looks on mobile. This shortcut basically drops contenteditable on every element on the page.


That’s all! Maybe there are others. Or maybe someone’s made a cool shortcut to share with the rest of us. 😉



Tuesday, 28 September 2021

So many little design helper sites!

I had one of those little single-serving designer helper sites bookmarked the other day: getwaves.io. Randomized SVG waves! Lots of cool options! Easy to customize! Easy to copy and paste! Well played, z creative labs.

But then I saw the little link at the top of the page, that it was part of something called Haikei. So I checked that out, and holy mackerel, it’s great! There are a dozen or more similar “generators” within one app, each just as well done as the SVG waves one.

Random scattering of vector stars.

Kind of reminds me of Omatsuri, which is a similar collection of useful little one-off tools.

But heck, those are just two apps, even if they are collections of mini apps in and of themselves. Around the same time, I became aware of tiny-helpers.dev, which is a roundup site of all sorts of these little one-off helper sites.

Haikei and Omatsuri are both on there, along with many hundreds more. Just the SVG area alone is super:

I’m sure y’all find these things just as useful as I do. They don’t make us lazy, they make us efficient. I know how to make a pattern. I know how to draw a curve with a Pen Tool. I know how to convert SVG into JSX. But using a dedicated tool makes me faster and better at it. And sometimes I don’t know how to do those things, but that doesn’t mean I can’t take advantage. Fake it ’til you make it, right?



iOS Browser Choice

Just last week I got one of those really?! 🤨 faces when this fact came up in conversation amongst smart and engaged fellow web developers: there is no browser choice on iOS. It’s all Safari. You can download apps that are named Chrome or Firefox, or anything else, but they are just veneer over Safari. If you’re viewing a website on iOS, it’s Safari.

I should probably call it what the App Store Review Guidelines call it: WebKit. I usually think it’s more clear to refer to browsers by their common names rather than the engines behind them, since each of the Big Three web browsers has a distinct engine (for now, anyway), but in this case, the engine is the important bit.

I’ll say how I feel: that sucks. I have this expensive computer in my pocket and it feels unfair that it is hamstrung in this very specific way of not allowing other browser engines. I also have an Apple laptop and it’s not hamstrung in that way, and I really hope it never is.

There is, of course, all sorts of nuance to this. My Apple laptop is hamstrung in that I can’t just install whatever OS I want on it unless I do it a sanctioned way. I also like the fact that there is some gatekeeping in iOS apps, and sometimes wish it was more strict. Like when I try to download simple games for my kid, and I end up downloading some game that is so laden with upsells, ads, and dark patterns that I think the developer should be in prison. I wish Apple just wouldn’t allow that garbage on the App Store at all. So that’s me wishing for more and less gatekeeping at the same time.

But what sucks about this lack of browser choice on iOS isn’t just the philosophy of gatekeeping, it’s that WebKit on iOS just isn’t that great. See Dave’s post for a rundown of just some of the problems from a day-to-day web developer perspective that I relate to. And because WebKit has literally zero competition on iOS, because Apple doesn’t allow competition, the incentive to make Safari better is much lighter than it could (should) be.

It’s not something like Google’s AMP, where if you really dislike it you can both not use it on your own sites and redirect yourself away from them on other sites. This choice is made for you.

My ability to talk intelligently about this is dwarfed by many others though, so what I really want to do is point out some of that recent writing. Allow me to pull a quote from a bunch of them…

iOS Engine Choice In Depth — Alex Russell

None of this is theoretical; needing to re-develop features through a straw, using less-secure, more poorly tested and analyzed mechanisms, has led to serious security issues in alternative iOS browsers. Apple’s policy, far from insulating responsible WebKit browsers from security issues, is a veritable bug farm for the projects wrenched between the impoverished feature set of Apple’s WebKit and the features they can securely deliver with high fidelity on every other platform.

This is, of course, a serious problem for Apple’s argument as to why it should be exclusively responsible for delivering updates to browser engines on iOS.

Chrome is the new Safari. And so are Edge and Firefox. — Niels Leenheer

The Safari and Chrome team both want to make the web safer and work hard to improve the web. But they do have different views on what the web should be.

Google is focussing on improving the web by making it more capable. To expand the relevance of the web, to go beyond what is possible today. And that also means allowing it to compete with native apps, with which the Android team surely does not always agree.

Safari seems to focus on improving the web as it currently is. To let it be a safer place, much faster and more beautiful. And if you want something more, you can use an app for that.

Browser choice on Apple’s iOS: privacy and security aspects — Stuart Langridge

Alternative browsers on iOS aren’t just restricted to WebKit, they’re restricted to the version of WebKit which is in the current version of Safari. Not even different or more modern versions of WebKit itself are allowed.

Even motivated users who work hard to get out of the browser choice they’re forced into don’t actually get a choice; if they choose a different browser, they still get the same one. If there’s a requirement from people for something, the market can’t provide it because competition is not permitted.

Briefing to the UK Competition and Markets Authority on Apple’s iOS browser monopoly and Progressive Web Apps — Bruce Lawson

[…] these people at Echo Pharmacy, not only have they got a really great website, but they also have to build an app for iOS just because they want to send push notifications. And, perhaps ironically, given Apple’s insistence that they do all of this for security and privacy, is that if I did choose to install this app, I would also be giving it permission to access my health and fitness data, my contact info, my identifiers sensitive info, financial info, user content, user data and diagnostics. Whereas, if I had push notifications and I were using a PWA, I’d be leaking none of this data.

So, we can see that despite Apple’s claims, I cannot recommend a PWA as being an equal experience an iOS simply here because of push notifications. But it’s not just hurting current business, it’s also holding back future business.


I’ve heard precious few arguments defending Apple’s choice to only allow Safari on iOS. Vague “Google can’t be trusted” sentiment is the bulk of it, whether privacy-focused, performance-focused, or both. All in all, nobody wants this complete lack of choice but Apple.

As far as I know, there isn’t any super clear language from Apple on why this requirement is in place. That would be nice to hear, because maybe then whatever the reasons are could be addressed.

We hear mind-blowing tech news all the time. I’d love to wake up one morning and have the news be “Apple now allows other browser engines on iOS.” You’ll hear a faint yesssssss in the air because I’ve screamed it so loud from my office in Bend, Oregon, you can hear it at your house.



Web Unleashed is Back! (Use Coupon Code “CSS-Tricks” for 50% Off)

(This is a sponsored post.)

Now in its tenth year (!!!), Web Unleashed is one of the top events for web devs. It’s coming up quick: October 20-22, 2021.

The lineup is amazing. You’ll hear from leaders in the industry, including Ethan Marcotte, Rachel Andrew, Harry Roberts, Addy Osmani, Tracy Lee, Ire Aderinokun, Suz Hinton, Shawn Wang, Jen Looper, and many more.

And don’t forget the coupon code CSS-Tricks as 50% off is a massively good deal! (Here’s where you apply it)

So many hot topics will be covered, including responsive design, optimizing images, open source projects, performance metrics, Gatsby, GraphQL, Applied ML, growing your career, and much much more.

But, seriously folks, take it from someone who has gone to this event a bunch of years… it’s really well done, you’re going to learn a lot, and the price right now is unbeatable. The entire conference clocks in at CA $179.16, which is CA $89.91 with the CSS-Tricks coupon code. That’s… just go.




Monday, 27 September 2021

Tonic (Component Framework)

I enjoy little frameworks like Tonic. It’s essentially syntactic sugar over <web-components /> to make them feel easier to use. Define a Class, template literal an HTML template, probably some other fancy helpers, and you’ve got a component that doesn’t feel terribly different to something like a React component, except you need no build process or other exotic tooling.

Here’s a Hello World + Counter example:
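
(The embedded demo doesn’t come through in this format. As a rough sketch of the hello-world half, based on my reading of Tonic’s documented API, a bare-bones component looks something like this. The import specifier is an assumption; check Tonic’s docs for the current package name.)

import Tonic from '@optoolco/tonic'; // assumed package name

class MyGreeting extends Tonic {
  render () {
    return this.html`<p>Hello, World!</p>`;
  }
}

// Registers the component for use as <my-greeting></my-greeting>
Tonic.add(MyGreeting);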

They have a whole bunch of examples (in a separate repo). You can snag and use them, and they are pretty nice! So that makes Tonic a bit like a design system as well as a web component framework.

To be fair, it’s not that different from Lit, which Google is behind and pushing pretty actively.

Here’s a Hello, World + Counter with Lit:
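
(That embed is missing here as well. A minimal, hedged sketch of a Lit counter, going off Lit 2’s API, might look like this:)

import { LitElement, html } from 'lit';

class MyCounter extends LitElement {
  // Declaring count as a reactive property makes Lit re-render when it changes
  static get properties() {
    return { count: { type: Number } };
  }

  constructor() {
    super();
    this.count = 0;
  }

  render() {
    return html`
      <button @click=${() => this.count++}>Count: ${this.count}</button>
    `;
  }
}

customElements.define('my-counter', MyCounter);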

And Dave was just showing me petite-vue the other day, so I figured I might as well do that one, too:
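
(Again, the embed is missing. A rough sketch of the petite-vue version, which really is mostly declarative HTML, might look like this:)

<!-- The init attribute tells petite-vue to auto-mount every v-scope region -->
<script src="https://unpkg.com/petite-vue" defer init></script>

<div v-scope="{ count: 0 }">
  <p>Hello, World!</p>
  <button @click="count++">Count: {{ count }}</button>
</div>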

I’d say the petite-vue example wins for just how easy that is to pull off in just declarative HTML. But of course, there are a bunch of other considerations, from specific features to syntax, philosophy, and size. Just looking at size, if I pop open the Network tab in DevTools and check the over-the-wire JavaScript for each demo…

  • Tonic = 5.1 KB
  • Lit = 12.6 KB
  • petite-vue = 8.1 KB

They are all basically the same: tiny.

I’ve never actually built anything real in any of them, so I’m not the best to judge one from the other. But they all seem pretty neat to me, particularly because they require no build step.



Collecting Email Signups With the Notion API

A lot of people these days are setting up their own newsletters. You’ve got the current big names like Substack and MailChimp, companies like Twitter are getting into it with Revue, and even Facebook is getting into the newsletter business. Some folks are trying to bring it closer to home with a self-managed WordPress solution via MailPoet.

Let’s talk about another possibility: collecting your own email subscribers the good ol’ fashioned way: a <form> that submits to an API and puts them into data storage.

The first hurdle in setting up a newsletter is a mechanism to collect emails. Most apps that help you with building and sending newsletters will offer some kind of copy-and-paste signup form example, but most of these options are paid and we are looking for something that is free and completely hosted by us.

Nothing is handier than setting up our own system which we can control. In this tutorial, we will set up our own system to collect emails for our newsletters based on the Jamstack architecture. We will implement an HTML form to collect emails for our newsletter, process those emails with Netlify Functions, and save them to a Notion database with the Notion API.

But before we begin…

We need the following things:

  • A Netlify account
  • A Notion account
  • A Mailgun account

All of these services are free (with limits). And as far as Mailgun goes, we could really use any email service provider that offers an API.

Get the entire code used in this tutorial over at GitHub.

First off, Notion

What is Notion, you ask? Let’s let them describe it:

Notion is a single space where you can think, write, and plan. Capture thoughts, manage projects, or even run an entire company — and do it exactly the way you want.


In other words, Notion is sort of like a database but with a nice visual interface. And each row in the database gets its own “page” where writing content is not unlike writing a blog post in the WordPress block editor.

And those databases can be visualized in a number of ways, from a simple spreadsheet to a Trello-like board, or a gallery, or a calendar, or a timeline, or a… you get the picture.

We are using Notion because we can set up tables that store our form responses, making it basically a data store. Notion also just so happened to recently release a full API for public use. We can take advantage of that API and interact with it using…

Netlify Functions

Netlify Functions are serverless API endpoints provided by, of course, Netlify’s hosting platform. We can write them in JavaScript or Go.


We could use Netlify forms here to collect form submissions. In fact, Matthew Ström already shared how that works. But on a free plan, we can only receive 100 submissions per month. So, that’s why we’re using Netlify Functions — they can be invoked 125,000 times a month, meaning we can collect up to 125,000 emails every month without paying a single penny.

Let’s set up the Notion database

The first thing we need to do is create a database in Notion. All that takes is creating a new page, then inserting a full-page table block by typing /table.

Typing a command in Notion, like /table, opens up a contextual menu to insert a block on a page.

Let’s give our database a name: Newsletter Emails.

We want to keep things pretty simple, so all we’re collecting is an email address when someone submits the form. So, really, all we need is a column in the table called Email.


But Notion is a little smarter than that and includes advanced properties that provide additional information. One of those is a “Created time” property which, true to its name, prints a timestamp for when a new submission is created.

Now, when a new “Email” entry is added to the table, the date it was added is displayed as well. That can be nice for other things, like calculating how long someone has been a subscriber.

We also need a Notion API Token

In order to interact with our Notion database, we need to create a Notion integration and get an API token. New integrations are made not in your Notion account, but on the Notion site while you’re logged into your account. We’ll give this integration a name that’s just as creative as the Notion table: Newsletter Signups.

Create a new Notion integration and associate it with the workspace where the Newsletter Emails table lives.

Once the integration has been created, we can grab the Internal Integration Token (API Token). Hold onto it because we’ll use it in just a bit.

Once the integration has been created, Notion provides a secret API token that’s used to authenticate connections.

By default, Notion Integrations are actually unable to access Notion pages or databases. We need to explicitly share the specific page or database with our integration in order for the integration to properly read and use information in the table. Click on the Share link at the top-right corner of the Notion database, use the selector to find the integration by its name, then click Invite.


Next is Netlify

Our database is all set to start receiving form submissions from Notion’s API. Now is the time to create a form and a serverless function for handling the form submissions.

Since we are setting up our website on Netlify, let’s install the Netlify CLI on our local machine. Open up Terminal and use this command:

npm install netlify-cli -g

There are a few different ways to authenticate with Netlify; you can follow Netlify’s amazing guide as a starting point. Once we have authenticated with Netlify, the next thing we have to do is create a Netlify project. Here are the commands to do that:

$ mkdir newsletter-subscription-notion-netlify-function
$ cd newsletter-subscription-notion-netlify-function
$ npm init

Follow along with this commit in the GitHub project.

Let’s write a Netlify Function

Before we get into writing our Netlify function, we need to do two things.

First, we need to install the @notionhq/client npm package (https://github.com/makenotion/notion-sdk-js), which is the official Notion JavaScript SDK for using the Notion API.

$ npm install @notionhq/client --save

Second, we need to create a .env file at the root of the project and add the following two environment variables:

NOTION_API_TOKEN=<your Notion API token>
NOTION_DATABASE_ID=<your Notion database>

The database ID can be found in its URL. A typical Notion page has a URL like https://www.notion.so/{workspace_name}/{database_id}?v={view_id} where {database_id} corresponds to the ID.

We are writing our Netlify functions in the functions directory. So, we need to create a netlify.toml file at the root of the project to tell Netlify that our functions need to be built from this directory.

[build]
functions = "functions"

Now, let’s create a functions directory at the root of the project. In the functions directory, we need to create an index.js file and add the following code. This function is available as an API endpoint at /.netlify/functions/index.

const { Client, LogLevel } = require('@notionhq/client');

const { NOTION_API_TOKEN, NOTION_DATABASE_ID } = process.env;

async function addEmail(email) {
  // Initialize Notion client
  const notion = new Client({
    auth: NOTION_API_TOKEN,
    logLevel: LogLevel.DEBUG,
  });

  await notion.pages.create({
    parent: {
      database_id: NOTION_DATABASE_ID,
    },
    properties: {
      Email: {
        title: [
          {
            text: {
              content: email,
            },
          },
        ],
      },
    },
  });
}

function validateEmail(email) {
  const re =
    /^(([^<>()[\]\\.,;:\s@"]+(\.[^<>()[\]\\.,;:\s@"]+)*)|(".+"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$/;
  return re.test(String(email).toLowerCase());
}

module.exports.handler = async function (event, context) {
  // Check the request method
  if (event.httpMethod != 'POST') {
    return { statusCode: 405, body: 'Method not allowed' };
  }

  // Get the body
  try {
    var body = JSON.parse(event.body);
  } catch (err) {
    return {
      statusCode: 400,
      body: err.toString(),
    };
  }

  // Validate the email
  const { email } = body;
  if (!validateEmail(email)) {
    return { statusCode: 400, body: 'Email is not valid' };
  }

  // Store email in Notion
  await addEmail(email);
  return { statusCode: 200, body: 'ok' };
};

Let’s break this down, block by block.

The addEmail function is used to add the email to our Notion database. It takes an email parameter which is a string.

We create a new client for authenticating with the Notion API by providing our NOTION_API_TOKEN. We have also set the log level parameter to get information related to the execution of commands while we are developing our project, which we can remove once we are ready to deploy the project to production.

In a Notion database, each entry is known as a page. So, we use the notion.pages.create method to create a new entry in our Notion database. The module.exports.handler function is what Netlify invokes to handle the requests made to our endpoint.

The first thing we check is the request method. If the request method is something other than POST, we send back a 405 - Method not allowed response.

Then we parse the payload (event.body) object sent by the front end. If we are unable to parse the payload object, we send back a 400 - Bad request response.

Once we have parsed the request payload, we check for the presence of an email address. We use the validateEmail function to check the validity of the email. If the email is invalid, we send back a 400 - Bad request response with a custom message saying the Email is not valid.

After we have completed all the checks, we call the addEmail function to store the email in our Notion database and send back a 200 - Success response that everything went successfully.

Follow along with the code in the GitHub project with commit ed607db.

Now for a basic HTML form

Our serverless back-end is ready and the last step is to create a form to collect email submissions for our newsletter.

Let’s create an index.html file at the root of the project and add the following HTML code to it. Note that the markup and classes are pulled from Bootstrap 5.

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta http-equiv="X-UA-Compatible" content="IE=edge">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Notion + Netlify Functions Newsletter</title>
  <link href="https://cdn.jsdelivr.net/npm/bootstrap@5.1.0/dist/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-KyZXEAg3QhqLMpG8r+8fhAXLRk2vvoC2f3B09zVXn8CA5QIVfZOJ3BCsw2P0p/We" crossorigin="anonymous">
</head>

<body>
  <div class="container py-5">
    <p>
      Get daily tips related to Jamstack in your inbox.
    </p>

    <div id="form" class="row">
      <div class="col-sm-8">
        <input name="email" type="email" class="form-control" placeholder="Your email" autocomplete="off">
        <div id="feedback-box" class="invalid-feedback">
            Please enter a valid email
        </div>
      </div>
      <div class="col-sm-4">
        <button class="btn btn-primary w-100" type="button" onclick="registerUser()">Submit form</button>
      </div>
    </div>

    <div id="success-box" class="alert alert-success d-none">
      Thanks for subscribing to our newsletter.
    </div>
    <div id="error-box" class="alert alert-danger d-none">
      There was some problem adding your email.
    </div>
  </div>
</body>
<script>
</script>
</html>

We have an input field for collecting user emails. We have a button that, when clicked, calls the registerUser function. We have two alerts that are shown, one for a successful submission and one for an unsuccessful submission.

Since we have installed Netlify CLI, we can use the netlify dev command to run our website on the localhost server:

$ netlify dev
Once the server is up and running, we can visit localhost:8888 and see our webpage with the form.

Next, we need to write some JavaScript that validates the email and sends an HTTP request to the Netlify function. Let’s add the following code in the <script> tag in the index.html file:

function validateEmail(email) {
  const re =
    /^(([^<>()[\]\\.,;:\s@"]+(\.[^<>()[\]\\.,;:\s@"]+)*)|(".+"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$/;
  return re.test(String(email).toLowerCase());
}

async function registerUser() {
  const form = document.getElementById('form');
  const successBox = document.getElementById('success-box');
  const errorBox = document.getElementById('error-box');
  const feedbackBox = document.getElementById('feedback-box');
  const email = document.getElementsByName('email')[0].value;

  if (!validateEmail(email)) {
    feedbackBox.classList.add('d-block');
    return;
  }

  const headers = new Headers();
  headers.append('Content-Type', 'application/json');
  
  const options = {
    method: 'POST',
    headers,
    body: JSON.stringify({email}),
  };

  const response = await fetch('/.netlify/functions/index', options);

  if (response.ok) {
    form.classList.add('d-none');
    successBox.classList.remove('d-none');
  }
  else {
    form.classList.add('d-none');
    errorBox.classList.remove('d-none');
  }
}

In the registerUser function, we check the validity of the email input using a RegEx function (validateEmail). If the email is invalid, we prompt the user to try again. Once we have validated the email, we call the fetch method to send a POST request to our Netlify function which is available at /.netlify/functions/index with a payload object containing the user’s email.

Based on the response, we show a success alert to the user; otherwise, we show an error alert.

Follow along with the code in the GitHub project with commit cd9791b.

Getting the Mailgun credentials

Getting emails in our database is good, but not enough, because we don’t know how we are going to use them. So, the final step is to send a confirmation email to a subscriber when they sign up.

We are not going to send a message immediately after a user signs up. Instead, we will set up a method to fetch the email signups in the last 30 minutes, then send the email. Adding a delay to the auto-generated email makes the welcome message feel a bit more personal, I think.

We’re using Mailgun to send, but again, we could really use any email service provider with an API. Let’s visit the Mailgun dashboard and, from the sidebar, go to the Settings → API Keys screen.

We want the “Private API key.”

Let’s copy the Private API key and add it to the .env file as:

MAILGUN_API_KEY=<your Mailgun api key>

Every email needs to come from a sending address, so one more thing we need is the Mailgun domain from which the emails are sent. To get our Mailgun domain, we need to go to the Sending → Domains screen.

Mailgun offers a sandbox domain for testing.

Let’s copy our Mailgun sandbox domain and add it to the .env file as:

MAILGUN_DOMAIN=<your Mailgun domain>

Sending a confirmation email

Before we get into writing our Netlify function for sending an email confirmation, we need to install the mailgun-js npm package (https://github.com/mailgun/mailgun-js), which is the official Mailgun JavaScript SDK for using the Mailgun API:

$ npm install mailgun-js --save

We are using a separate /.netlify/functions/welcome endpoint for sending welcome emails. So, in the functions directory, let’s create a new welcome.js file and add the following code:

const { Client, LogLevel } = require('@notionhq/client');
const mailgun = require('mailgun-js');

const {
  NOTION_API_TOKEN,
  NOTION_DATABASE_ID,
  MAILGUN_API_KEY,
  MAILGUN_DOMAIN,
} = process.env;

async function fetchNewSignups() {
  // Initialize notion client
  const notion = new Client({
    auth: NOTION_API_TOKEN,
    logLevel: LogLevel.DEBUG,
  });

  // Create a datetime that is 30 mins earlier than the current time
  let fetchAfterDate = new Date();
  fetchAfterDate.setMinutes(fetchAfterDate.getMinutes() - 30);

  // Query the database
  // and fetch only entries created in the last 30 mins
  const response = await notion.databases.query({
    database_id: NOTION_DATABASE_ID,
    filter: {
      or: [
        {
          property: 'Added On',
          date: {
            after: fetchAfterDate,
          },
        },
      ],
    },
  });

  const emails = response.results.map((entry) => entry.properties.Email.title[0].plain_text);

  return emails;
}

async function sendWelcomeEmail(to) {
  const mg = mailgun({ apiKey: MAILGUN_API_KEY, domain: MAILGUN_DOMAIN });

  const data = {
    from: `Ravgeet Dhillon <postmaster@${MAILGUN_DOMAIN}>`,
    to: to,
    subject: 'Thank you for subscribing',
    text: "Thank you for subscribing to my newsletter. I'll be sending daily tips related to JavScript.",
  };

  await mg.messages().send(data);
}

module.exports.handler = async function (event, context) {
  // Check the request method
  if (event.httpMethod != 'POST') {
    return { statusCode: 405, body: 'Method not allowed' };
  }

  const emails = await fetchNewSignups();

  // Wait for every welcome email to finish sending before returning a response
  await Promise.all(emails.map((email) => sendWelcomeEmail(email)));

  return { statusCode: 200, body: 'ok' };
};

The above code has two main functions we need to understand. First, there’s the fetchNewSignups function. We use the Notion SDK client to fetch the entries in our Notion database, with a filter that only returns emails added in the last 30 minutes. Once we get a response from the Notion API, we loop over the results and map each email into an emails array.

Second, we have a sendWelcomeEmail function. This function is used to send an email to the new subscriber via Mailgun API.

To bring everything together, the module.exports.handler function sends a welcome email to each new address, waiting for all of them to finish before responding.

This function is available at localhost:8888/.netlify/functions/welcome.

Follow along with the code in the GitHub project with commit b802f93.

Let’s test this out!

Now that our project is ready, the first thing we need to do is to test the form and see whether the email is stored in our Notion database. Let’s go to localhost:8888 and enter a valid email address.

Perfect! We got the success message. Now let’s head over to our Notion database to see if the email address was captured.

Awesome, it’s there! We can also see that the Added On field has also been auto-filled by Notion, which is exactly what we want.

Next, let’s send a POST request to localhost:8888/.netlify/functions/welcome from Postman and check our inbox for the subscription confirmation message.
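
(If you’d rather skip Postman, the same request can be made from the command line:)

$ curl -X POST http://localhost:8888/.netlify/functions/welcome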

Woah! We have received the welcome email for subscribing to the newsletter.

Not seeing the email in your inbox? Make sure to check your spam folder.

What’s next?

The next step from here would be to set up a script that runs every 30 minutes and calls the welcome endpoint. As of now, Netlify doesn’t provide any way to run CRON jobs. So, we can set up GitHub Actions to handle it.
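
As a rough sketch (the workflow file path and the site URL are placeholders for your own values), a scheduled workflow that calls the endpoint every 30 minutes might look something like this:

# .github/workflows/send-welcome-emails.yml
name: Send welcome emails

on:
  schedule:
    - cron: '*/30 * * * *' # every 30 minutes

jobs:
  call-welcome-endpoint:
    runs-on: ubuntu-latest
    steps:
      - name: Call the welcome function
        run: curl -X POST https://your-site.netlify.app/.netlify/functions/welcome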

Also, we can secure the welcome endpoint using security mechanisms like a simple password or a JWT token. This will prevent anyone on the Internet from calling the welcome endpoint and sending tons of spam to our subscribers.
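
For the simple-password route, a minimal sketch could guard the handler with a shared secret. (WELCOME_SECRET is a hypothetical environment variable you would add to the .env file and to your Netlify settings; it is not part of the code above.)

// At the top of functions/welcome.js
const { WELCOME_SECRET } = process.env;

module.exports.handler = async function (event, context) {
  // Reject any request that doesn't carry the shared secret
  if (event.headers.authorization !== `Bearer ${WELCOME_SECRET}`) {
    return { statusCode: 401, body: 'Unauthorized' };
  }

  // ...the rest of the welcome handler stays the same
};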

So, there we have it! We successfully implemented a Jamstack way to collect emails for a newsletter. The biggest advantage of doing something like this is that we can learn a lot about different technologies at once. We wired together services and APIs that collect email addresses, validate submissions, post to databases, and even send emails. It’s pretty cool that we can do so much with relatively so little.

I hope you enjoyed reading and learning something from this article as much as I enjoyed writing it for you. Let me know in the comments how you used this method for your own benefit. Happy coding!



Passkeys: What the Heck and Why?

These things called passkeys sure are making the rounds these days. They were a main attraction at W3C TPAC 2022, gained support in Saf...