Tuesday, 10 May 2022

Let’s Create a Tiny Programming Language

By now, you are probably familiar with one or more programming languages. But have you ever wondered how you could create your own programming language? And by that, I mean:

A programming language is any set of rules that convert strings to various kinds of machine code output.

In short, a programming language is just a set of predefined rules. To make those rules useful, you need something that understands them, like a compiler or an interpreter. So we can define some rules, then use any existing programming language to make a program that understands those rules, and that program will be our interpreter.

Compiler

A compiler converts code into machine code that the processor can execute (e.g. a C++ compiler).

Interpreter

An interpreter goes through the program line by line and executes each command.

Want to give it a try? Let’s create a super simple programming language together that outputs magenta-colored output in the console. We’ll call it Magenta.

Screenshot of terminal output in color magenta.
Our simple programming language creates a codes variable that contains text that gets printed to the console… in magenta, of course.

Setting up our programming language

I am going to use Node.js, but you can use any language to follow along; the concept will remain the same. Let me start by creating an index.js file and setting things up.

class Magenta {
  constructor(codes) {
    this.codes = codes
  }
  run() {
    console.log(this.codes)
  }
}

// For now, we are storing codes in a string variable called `codes`
// Later, we will read codes from a file
const codes = 
`print "hello world"
print "hello again"`
const magenta = new Magenta(codes)
magenta.run()

What we’re doing here is declaring a class called Magenta. That class defines and initializes an object that is responsible for logging text to the console with whatever text we provide it via a codes variable. And, for the time being, we’ve defined that codes variable directly in the file with a couple of “hello” messages.

Screenshot of terminal output.
If we were to run this code we would get the text stored in codes logged in the console.

OK, now we need to create what’s called a Lexer.

What is a Lexer?

OK, let’s talk about the English language for a second. Take the following phrase:

How are you?

Here, “How” is an adverb, “are” is a verb, and “you” is a pronoun. We also have a question mark (“?”) at the end. We can divide any sentence or phrase like this into many grammatical components. Another way we can distinguish these parts is to divide them into small tokens. The program that divides the text into tokens is our Lexer.

Diagram showing command going through a lexer.

Since our language is very tiny, it only has two types of tokens, each with a value:

  1. keyword
  2. string
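
For example, running the line print "hello world" through our Lexer should produce two tokens that look like this:

{ type: "keyword", value: "print" }
{ type: "string", value: "hello world" }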

We could’ve used a regular expression to extract tokens from the codes string, but performance would be very slow. A better approach is to loop through each character of the code string and grab tokens as we go. So, let’s create a tokenize method in our Magenta class, which will be our Lexer.

Full code
class Magenta {
  constructor(codes) {
    this.codes = codes
  }
  tokenize() {
    const length = this.codes.length
    // pos keeps track of current position/index
    let pos = 0
    let tokens = []
    const BUILT_IN_KEYWORDS = ["print"]
    // allowed characters for variable/keyword
    const varChars = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ_'
    while (pos < length) {
      let currentChar = this.codes[pos]
      // if current char is space or newline,  continue
      if (currentChar === " " || currentChar === "\n") {
        pos++
        continue
      } else if (currentChar === '"') {
        // if current char is " then we have a string
        let res = ""
        pos++
        // while next char is not " or \n and we are not at the end of the code
        while (this.codes[pos] !== '"' && this.codes[pos] !== '\n' && pos < length) {
          // adding the char to the string
          res += this.codes[pos]
          pos++
        }
        // if the loop ended because of the end of the code and we didn't find the closing "
        if (this.codes[pos] !== '"') {
          return {
            error: `Unterminated string`
          }
        }
        pos++
        // adding the string to the tokens
        tokens.push({
          type: "string",
          value: res
        })
      } else if (varChars.includes(currentChar)) {
        let res = currentChar
        pos++
        // while the next char is a valid variable/keyword character
        while (varChars.includes(this.codes[pos]) && pos < length) {
          // adding the char to the string
          res += this.codes[pos]
          pos++
        }
        // if the keyword is not a built in keyword
        if (!BUILT_IN_KEYWORDS.includes(res)) {
          return {
            error: `Unexpected token ${res}`
          }
        }
        // adding the keyword to the tokens
        tokens.push({
          type: "keyword",
          value: res
        })
      } else { // we have an invalid character in our code
        return {
          error: `Unexpected character ${this.codes[pos]}`
        }
      }
    }
    // returning the tokens
    return {
      error: false,
      tokens
    }
  }
  run() {
    const {
      tokens,
      error
    } = this.tokenize()
    if (error) {
      console.log(error)
      return
    }
    console.log(tokens)
  }
}

If we run this in a terminal with node index.js, we should see a list of tokens printed in the console.

Screenshot of code.
Great stuff!

Defining rules and syntaxes

We want to see if the order of our codes matches some sort of rule or syntax. But first we need to define what those rules and syntaxes are. Since our language is so tiny, it only has one simple syntax which is a print keyword followed by a string.

keyword:print string

So let’s create a parse method that loops through our tokens and sees if we have valid syntax. If so, it will take the necessary actions.

class Magenta {
  constructor(codes) {
    this.codes = codes
  }
  tokenize(){
    /* previous codes for tokenizer */
  }
  parse(tokens){
    const len = tokens.length
    let pos = 0
    while(pos < len) {
      const token = tokens[pos]
      // if token is a print keyword
      if(token.type === "keyword" && token.value === "print") {
        // if the next token doesn't exist
        if(!tokens[pos + 1]) {
          return console.log("Unexpected end of line, expected string")
        }
        // check if the next token is a string
        let isString = tokens[pos + 1].type === "string"
        // if the next token is not a string
        if(!isString) {
          return console.log(`Unexpected token ${tokens[pos + 1].type}, expected string`)
        }
        // if we reach this point, we have valid syntax
        // so we can print the string
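        // \x1b[35m is the ANSI escape code for magenta text; \x1b[0m resets the color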
        console.log('\x1b[35m%s\x1b[0m', tokens[pos + 1].value)
        // we add 2 because we also check the token after print keyword
        pos += 2
      } else{ // if we didn't match any rules
        return console.log(`Unexpected token ${token.type}`)
      }
    }
  }
  run(){
    const {tokens, error} = this.tokenize()
    if(error){
      console.log(error)
      return
    }
    this.parse(tokens)
  }
}

And would you look at that — we already have a working language!

Screenshot of terminal output.

Okay, but having code in a string variable is not that fun. So let’s put our Magenta code in a file called code.m. That way we can keep our Magenta code separate from the interpreter logic. We are using .m as the file extension to indicate that this file contains code for our language.

Let’s read the code from that file:

// importing file system module
const fs = require('fs')
//importing path module for convenient path joining
const path = require('path')
class Magenta{
  constructor(codes){
    this.codes = codes
  }
  tokenize(){
    /* previous codes for tokenizer */
 }
  parse(tokens){
    /* previous codes for parse method */
 }
  run(){
    /* previous codes for run method */
  }
}

// Reading code.m file
// Some text editors use \r\n for new line instead of \n, so we are removing \r
const codes = fs.readFileSync(path.join(__dirname, 'code.m'), 'utf8').toString().replace(/\r/g, "")
const magenta = new Magenta(codes)
magenta.run()
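
If you want something to drop into code.m, it can contain the same kind of statements we stored in the string earlier, for example:

print "hello world"
print "this is code from a file"

Running node index.js again should print those strings to the console in magenta.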

Go create a programming language!

And with that, we have successfully created a tiny programming language from scratch. See, a programming language can be as simple as something that accomplishes one specific thing. Sure, it’s unlikely that a language like Magenta here will ever be useful enough to be part of a popular framework or anything, but now you see what it takes to make one.

The sky is really the limit. If you want to dive in a little deeper, try following along with this video I made going over a more advanced example. In that video, I also show how you can add variables to your language.


Let’s Create a Tiny Programming Language originally published on CSS-Tricks. You should get the newsletter.



from CSS-Tricks https://ift.tt/7UO9shy
via IFTTT

Monday, 9 May 2022

Useful Tools for Creating AVIF Images

AVIF (AV1 Image File Format) is a modern image file format specification for storing images that offers significantly smaller file sizes when compared to other formats like JPEG, PNG, and WebP. Version 1.0.0 of the AVIF specification was finalized in February 2019 and released to the public by the Alliance for Open Media.

In this article, you will learn about some browser-based tools and command line tools for creating AVIF images.

Why use AVIF over JPG, PNG, WebP, and GIF?

  • Lossless compression and lossy compression
  • JPEG suffers from awful banding
  • WebP is much better, but there’s still noticeable blockiness compared to the AVIF
  • Multiple color spaces
  • 8-, 10-, and 12-bit color depth

Caveats

Jake Archibald wrote an article a few years back on this new image format that also helps us identify some disadvantages of compressing images. Normally, you should look out for these two things when compressing to AVIF:

  1. If a user looks at the image in the context of the page, and it strikes them as ugly due to compression, then that level of compression is not acceptable. But, one tiny notch above that boundary is fine.
  2. It’s okay for the image to lose noticeable detail compared to the original unless that detail is significant to the context of the image.

See also: Addy Osmani at Smashing Magazine goes in-depth on using AVIF and WebP.

Browser Solutions

Squoosh

Screenshot of Squoosh.

Squoosh is a popular image compression web app that allows you to convert images in numerous formats to other widely used compressed formats, including AVIF.

Features
  • File-size limit: 4MB
  • Image optimization settings (located on the right side)
  • Download controls – this includes seeing the size of the resulting file and the percentage reduction from the original image
  • Free to use

AVIF.io

Screenshot of AVIF.io.

AVIF.io is an image tool that doesn’t require any form of code. All you need to do is upload your selected images in PNG, JPG, GIF, etc., and it returns compressed versions of them.

Features
  • Conversion settings (located on the top-right of upload banner)
  • Free to use

You can find answers to your questions on the AVIF.io FAQ page.

Command Line Solutions

avif-cli

avif-cli by lovell lets you take images stored in a folder and converts them to AVIF images at your specified level of compression.

Here are the requirements and what you need to do:

  • Node.js 12.13.0+

Install the package:

npm install avif

Run the command in your terminal:

npx avif --input="./imgs/*" --output="./output/" --verbose
  • ./imgs/* – represents the location of all your image files
  • ./output/ – represents the location of your output folder
Features
  • Free to use
  • Speed of conversion can be set

You can find out about more commands via the avif-cli GitHub page.

sharp

sharp (also maintained by lovell) is another useful tool for converting large images in common formats to smaller, web-friendly AVIF images.

Here are the requirements and what you need to do:

  • Node.js 12.13.0+

Install the package:

npm install sharp

Create a JavaScript file named sharp-example.js and copy this code:

const sharp = require('sharp')

const convertToAVIF = () => {
    sharp('path_to_image')
    .toFormat('avif', {palette: true})
    .toFile(__dirname + 'path_to_output_image')
}

convertToAVIF()

Where path_to_image represents the path to your image with its name and extension, i.e.:

./imgs/example.jpg

And path_to_output_image represents the path you want your image to be stored with its name and new extension, i.e.:

/sharp-compressed/compressed-example.avif

Run the command in your terminal:

node sharp-example.js

And there! You should have a compressed AVIF file in your output location!

Features
  • Free to use
  • Images can be rotated, blurred, resized, cropped, scaled, and more using sharp
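
As a quick, minimal sketch of that last point (the paths, size, and quality value below are placeholders, not anything prescribed in this article), resizing and converting to AVIF can be chained together like this:

const sharp = require('sharp')

// Resize the source image and encode it as AVIF in one chain.
// All paths and option values are example placeholders.
const resizeAndConvert = async () => {
  await sharp('./imgs/example.jpg')
    .resize(800)            // scale down to 800px wide
    .avif({ quality: 50 })  // encode the result as AVIF
    .toFile('./sharp-compressed/resized-example.avif')
}

resizeAndConvert().catch(console.error)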

See also: Stanley Ulili’s article on How To Process Images in Node.js With Sharp.

Conclusion

AVIF is a technology that front-end developers should consider for their projects. These tools allow you to convert your existing JPEG and PNG images to AVIF format. But as with adopting any new tool in your workflow, the benefits and downsides will need to be properly evaluated in accordance with your particular use case.

I hope you enjoyed reading this article as much as I enjoyed writing it. Thank you so much for your time and I hope you have a great day ahead!


Useful Tools for Creating AVIF Images originally published on CSS-Tricks. You should get the newsletter.



from CSS-Tricks https://ift.tt/T4aXVN3
via IFTTT

Friday, 6 May 2022

How to Serve a Subdomain as a Subdirectory

Let’s say you have a website built on a platform that excels at design and it’s available at example.com. But that platform falls short at blogging. So you think to yourself, “What if I could use a different blogging platform and make it available at example.com/blog?”

Most people would tell you that goes against how DNS and websites are supposed to work and to use a subdomain instead. But there are benefits to keeping your content on the root domain that we just don’t get with subdomains.

There’s a way to serve two different platforms on the same URL. And I’m going to show you the secret sauce so that, by the end of this article, we’ll make blog.example.com serve as example.com/blog.

Why you’d want to do this

Because you’re here, you probably already know why this is a path to pursue. But I’d like to ensure you are here for the primary reason to do this: SEO. Check out these 14 case studies that show positive results when people move their subdomains over to subdirectories. You want your blog and your domain to share SEO value. Putting it on a subdomain would somewhat disconnect the two.

This was my reason, and I wound up merging two platforms, where the main domain was on WordPress and the subdomain was on Drupal. But this tutorial is platform agnostic — it’ll work with just about any platform.

That said, the Cloudflare approach we’re covering in this tutorial is incompatible with Shopify unless you pay for Cloudflare’s Enterprise plan. That’s because Shopify also uses Cloudflare and does not allow us to proxy the traffic on their free pricing tier.

Step 0 (Preview)

Before I jump in, I want to explain the high level of what’s going to happen. In short, we’ll have two websites: our main one (example.com) and the subdomain (blog.example.com). I use “blog” as an example, but in my case, I needed to drop in Drupal with a different type of content. But a blog is the typical use case.

This approach relies on using Cloudflare for DNS and a little extra something that’ll provide the magic. We’re going to tell Cloudflare that when someone visits example.com/blog, it should:

  1. intercept that request (because example.com/blog doesn’t really exist),
  2. request a different domain (blog.example.com/blog) behind the scenes, and
  3. deliver the results from that last step to the visitor masked through example.com/blog.

Okay, let’s dive into it in more detail!

Step 1: Using Cloudflare

Again, we’re using Cloudflare for the DNS. Pointing your domain’s DNS there is the first step to getting started.

The reason for Cloudflare is that it allows us to create Workers that are capable of running a bit of code anytime somebody visits certain URLs (called Routes which we’ll create in step 3). This code will be responsible for switching the websites behind the scenes.

Cloudflare has an excellent guide to getting started. The goal is to point your domain’s — wherever it is registered — to Cloudflare’s nameservers and confirm that Cloudflare is connected in your Cloudflare account.

Step 2: Create the Worker

This code will be responsible for switching the websites behind the scenes. Head over to Workers and click Create a Service.

Note the median CPU time! This process added about .7ms to the request (so basically nothing).

Name your service, then select “HTTP handler”:

Click Create Service and then Quick Edit.

Paste in the following code and replace the domain names with your own on line 16:

// Listen for every request and respond with our function.
// Note, this will only run on the routes configured in Cloudflare.
addEventListener('fetch', function(event) {
  event.respondWith(handleRequest(event.request))
})

// Our function to handle the response.
async function handleRequest(request) {
  // Only GET requests work with this proxy.
  if (request.method !== 'GET') return MethodNotAllowed(request);
  // The URL that is being requested.
  const url = new URL(request.url);
  // Request "origin URL" aka the real blog instead of what was requested.
  // This switches out the absolute URL leaving the relative path unchanged. 
  const originUrl = url.toString().replace('https://example.com', 'https://blog.example.com');
  // The contents of the origin page.
  const originPage = await fetch(originUrl);
  // Give the response our origin page.
  const newResponse = new Response(originPage.body, originPage);
  return newResponse;
}

// Hey! GET requests only 
function MethodNotAllowed(request) {
  return new Response(`Method ${request.method} not allowed.`, {
    status: 405,
    headers: { 'Allow': 'GET' }
  })
}

Lastly, click Save and Deploy.

Step 3: Add Routes

Now let’s inform Cloudflare which URLs (aka Routes) to run this code on. Head over to the website in Cloudflare, then click Workers.

There is the Workers section on the main screen of Cloudflare, where you edit the code, and then there is the Workers section on each website where you add the routes. They are two different places, and it’s confusing.

First off, click Add Route:

Because we are adding a blog that has many child pages, we’ll use https://example.com/blog*. Note that the asterisk acts as a wildcard for matching. This code will run on the blog page and every page that begins with blog.

This can have unintended consequences. Say, for example, you have a page that starts with “blog” but isn’t a part of the actual blog, like https://example.com/blogging-services. That would get picked up with this rule.

Then, select the Worker in the Service dropdown.

We have a lot of the work done, but there are more routes we need to add — the CSS, JavaScript, and other file paths that the blog is dependent on (unless all the files are hosted on a different URL, such as on a CDN). A good way to find these is by testing your route and checking the console.

Head over to your https://example.com/blog and make sure something is loading. It’ll look messed up because it’s missing the theme files. That’s fine for now, just as long as it’s not producing a 404 error. The important thing is to open up your browser’s DevTools, fire up the console, and make note of all the red URLs it can’t find or load (usually a 404 or 403) that are a part of your domain.

The resources in the orange boxes are the ones we need to target.

You’ll want to add those as routes… but only do the parent paths. So, if your red URL is https://example.com/wp-content/themes/file1.css, then do https://example.com/wp-content* as your route. You can add a child path, too, if you want to be more specific, but the idea is to use one route to catch most of the files.
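
For a WordPress subdomain, for example, the final list of routes might wind up looking something like this (the wp-* paths here are hypothetical; use whichever parent paths actually show up in your console):

https://example.com/blog*
https://example.com/wp-content*
https://example.com/wp-includes*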

Once you add those routes, check out your URL and see if it looks like your subdomain. If it doesn’t, check the previous steps. (Chances are you will need to add more routes.)

It’s best to do a quality check by navigating to multiple pages and seeing if anything is missing. I also recommend opening up DevTools and searching for your subdomain (blog.example.com). If that’s showing up, you either need to add routes to target those resources or do something with your platform to stop outputting those URLs. For example, my platform was outputting a canonical tag with my subdomain, so I found a plugin to modify the canonical URL to be my root domain.

Step 4: The secretest of sauces (noindex)

You might see that we have a problem: our content is available at two different URLs. Yeah, we can use the canonical attribute to inform Google which URL is our “main” one, but let’s not leave it up to Google to pick the right one.

First, set your entire subdomain as noindex (the way to do this will vary by platform). Then, in the Cloudflare Worker, we are going to add the following line of code, which basically says to remove noindex when the current URL is accessed through the proxy.

newResponse.headers.delete("x-robots-tag");

The full code solution is provided at the end of this article.

Step 5: Modify the sitemap

The last thing to do is to modify the subdomain’s sitemap so it doesn’t use the subdomain in it. The way to do this will vary by platform, but the goal is to modify the base/absolute domain in your sitemap so that it prints example.com/mypost instead of blog.example.com/mypost. Some plugins and modules will allow this without custom code.

That’s that! The solution should be working!

Limitations

This Cloudflare magic isn’t without its downsides. For example, it only accepts GET requests, meaning we can only get things from the server. We are unable to POST which is what forms use. So, if you need to have your visitors log in or submit forms, there will be more work on top of what we’ve already done. I discussed several solutions for this in another article.

As noted earlier, another limitation is that using this approach on Shopify requires subscribing to Cloudflare’s Enterprise pricing tier. Again, that’s because Shopify also uses Cloudflare and restricts the ability to proxy traffic on their other plans.

You also might get some issues if you’re trying to merge two instances of the same platforms together (e.g. both your top-level domain and subdomain use WordPress). But in a case like that you should be able to consolidate and use one instance of the platform.

Full solution

Here’s the code in all its glory:

// Listen for every request and respond with our function.
// Note, this will only run on the routes configured in Cloudflare.
addEventListener('fetch', function(event) {
  event.respondWith(handleRequest(event.request))
})
// Our function to handle the response.
async function handleRequest(request) {
  // Only GET requests work with this proxy.
  if (request.method !== 'GET') return MethodNotAllowed(request);
  // The URL that is being requested.
  const url = new URL(request.url);
  // Request "origin URL" aka the real blog instead of what was requested.
  // This switches out the absolute URL leaving the relative path unchanged. 
  const originUrl = url.toString().replace('https://example.com', 'https://blog.example.com');
  // The contents of the origin page.
  const originPage = await fetch(originUrl);
  // Give the response our origin page.
  const newResponse = new Response(originPage.body, originPage);
  // Remove "noindex" from the origin domain.
  newResponse.headers.delete("x-robots-tag");
  // Remove Cloudflare cache as it's meant for WordPress.
  // If you are using Cloudflare APO and your blog isn't WordPress (but
  // your main domain is), then stop APO from running on your origin URL.
  // newResponse.headers.set("cf-edge-cache", "no-cache");
  return newResponse;
}
// Hey! GET requests only 
function MethodNotAllowed(request) {
  return new Response(`Method ${request.method} not allowed.`, {
    status: 405,
    headers:
    { 'Allow': 'GET' }
  })
}

If you need help along the way, I welcome you to reach out to me through my website CreateToday.io or check out my YouTube for a video demonstration.


How to Serve a Subdomain as a Subdirectory originally published on CSS-Tricks. You should get the newsletter.



from CSS-Tricks https://ift.tt/YtPxrTM
via IFTTT

Wednesday, 4 May 2022

Syntax Highlighting (and More!) With Prism on a Static Site

So, you’ve decided to build a blog with Next.js. Like any dev blogger, you’d like to have code snippets in your posts that are formatted nicely with syntax highlighting. Perhaps you also want to display line numbers in the snippets, and maybe even have the ability to call out certain lines of code.

This post will show you how to get that set up, as well as some tips and tricks for getting these other features working. Some of it is tricker than you might expect.

Prerequisites

We’re using the Next.js blog starter as the base for our project, but the same principles should apply to other frameworks. That repo has clear (and simple) getting started instructions. Scaffold the blog, and let’s go!

Another thing we’re using here is Prism.js, a popular syntax highlighting library that’s even used right here on CSS-Tricks. The Next.js blog starter uses Remark to convert Markdown into markup, so we’ll use the remark-prism plugin for formatting our code snippets.

Basic Prism.js integration

Let’s start by integrating Prism.js into our Next.js starter. Since we already know we’re using the remark-prism plugin, the first thing to do is install it with your favorite package manager:

npm i remark-prism

Now go into the markdownToHtml file, in the /lib folder, and switch on remark-prism:

import remarkPrism from "remark-prism";

// later ...

.use(remarkPrism, { plugins: ["line-numbers"] })

Depending on which version of remark-html you’re using, you might also need to change its usage to .use(html, { sanitize: false }).

The whole module should now look like this:

import { remark } from "remark";
import html from "remark-html";
import remarkPrism from "remark-prism";

export default async function markdownToHtml(markdown) {
  const result = await remark()
    .use(html, { sanitize: false })
    .use(remarkPrism, { plugins: ["line-numbers"] })
    .process(markdown);

  return result.toString();
}

Adding Prism.js styles and theme

Now let’s import the CSS that Prism.js needs to style the code snippets. In the pages/_app.js file, import the main Prism.js stylesheet, and the stylesheet for whichever theme you’d like to use. I’m using Prism.js’s “Tomorrow Night” theme, so my imports look like this:

import "prismjs/themes/prism-tomorrow.css";
import "prismjs/plugins/line-numbers/prism-line-numbers.css";
import "../styles/prism-overrides.css";

Notice I’ve also started a prism-overrides.css stylesheet where we can tweak some defaults. This will become useful later. For now, it can remain empty.

And with that, we now have some basic styles. The following code in Markdown:

```js
class Shape {
  draw() {
    console.log("Uhhh maybe override me");
  }
}

class Circle {
  draw() {
    console.log("I'm a circle! :D");
  }
}
```

…should format nicely:

Adding line numbers

You might have noticed that the code snippet we generated does not display line numbers even though we imported the plugin that supports it when we imported remark-prism. The solution is hidden in plain sight in the remark-prism README:

Don’t forget to include the appropriate css in your stylesheets.

In other words, we need to force a .line-numbers CSS class onto the generated <pre> tag, which we can do like this:
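
The remark-prism plugin lets us pass extra attributes through the Markdown code fence itself. Going by the fence syntax that appears later in this post, that looks something like this:

```js[class="line-numbers"]
class Shape {
  draw() {
    console.log("Uhhh maybe override me");
  }
}
```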

And with that, we now have line numbers!

Note that, based on the version of Prism.js I have and the “Tomorrow Night” theme I chose, I needed to add this to the prism-overrides.css file we started above:

.line-numbers span.line-numbers-rows {
  margin-top: -1px;
}

You may not need that, but there you have it. We have line numbers!

Highlighting lines

Our next feature will be a bit more work. This is where we want the ability to highlight, or call out certain lines of code in the snippet.

There’s a Prism.js line-highlight plugin; unfortunately, it is not integrated with remark-prism. The plugin works by analyzing the formatted code’s position in the DOM, and manually highlights lines based on that information. That’s impossible with the remark-prism plugin since there is no DOM at the time the plugin runs. This is, after all, static site generation. Next.js is running our Markdown through a build step and generating HTML to render the blog. All of this Prism.js code runs during this static site generation, when there is no DOM.

But fear not! There’s a fun workaround that fits right in with CSS-Tricks: we can use plain CSS (and a dash of JavaScript) to highlight lines of code.

Let me be clear that this is a non-trivial amount of work. If you don’t need line highlighting, then feel free to skip to the next section. But if nothing else, it can be a fun demonstration of what’s possible.

Our base CSS

Let’s start by adding the following CSS to our prism-overrides.css stylesheet:

:root {
  --highlight-background: rgb(0 0 0 / 0);
  --highlight-width: 0;
}

.line-numbers span.line-numbers-rows > span {
  position: relative;
}

.line-numbers span.line-numbers-rows > span::after {
  content: " ";
  background: var(--highlight-background);
  width: var(--highlight-width);
  position: absolute;
  top: 0;
}

We’re defining some CSS custom properties up front: a background color and a highlight width. We’re setting them to empty values for now. Later, though, we’ll set meaningful values in JavaScript for the lines we want highlighted.

We’re then setting the line number <span> to position: relative, so that we can add a ::after pseudo element with absolute positioning. It’s this pseudo element that we’ll use to highlight our lines.

Declaring the highlighted lines

Now, let’s manually add a data attribute to the <pre> tag that’s generated, then read that in code, and use JavaScript to tweak the styles above to highlight specific lines of code. We can do this the same way that we added line numbers before:
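
Going by the same fence syntax we used for the line-numbers class, that looks something like this:

```js[class="line-numbers"][data-line="3,8-10"]
class Shape {
  draw() {
    console.log("Uhhh maybe override me");
  }
}

class Circle {
  draw() {
    console.log("I'm a circle! :D");
  }
}
```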

This will cause our <pre> element to be rendered with a data-line="3,8-10" attribute, where line 3 and lines 8-10 are highlighted in the code snippet. We can comma-separate line numbers, or provide ranges.

Let’s look at how we can parse that in JavaScript, and get highlighting working.

Reading the highlighted lines

Head over to components/post-body.tsx. If this file is JavaScript for you, feel free to either convert it to TypeScript (.tsx), or just ignore all my typings.

First, we’ll need some imports:

import { useEffect, useRef } from "react";

And we need to add a ref to this component:

const rootRef = useRef<HTMLDivElement>(null);

Then, we apply it to the root element:

<div ref={rootRef} className="max-w-2xl mx-auto">

The next piece of code is a little long, but it’s not doing anything crazy. I’ll show it, then walk through it.

useEffect(() => {
  const allPres = rootRef.current.querySelectorAll("pre");
  const cleanup: (() => void)[] = [];

  for (const pre of allPres) {
    const code = pre.firstElementChild;
    if (!code || !/code/i.test(code.tagName)) {
      continue;
    }

    const highlightRanges = pre.dataset.line;
    const lineNumbersContainer = pre.querySelector(".line-numbers-rows");

    if (!highlightRanges || !lineNumbersContainer) {
      continue;
    }

    const runHighlight = () =>
      highlightCode(pre, highlightRanges, lineNumbersContainer);
    runHighlight();

    const ro = new ResizeObserver(runHighlight);
    ro.observe(pre);

    cleanup.push(() => ro.disconnect());
  }

  return () => cleanup.forEach(f => f());
}, []);

We’re running an effect once, when the content has all been rendered to the screen. We’re using querySelectorAll to grab all the <pre> elements under this root element; in other words, whatever blog post the user is viewing.

For each one, we make sure there’s a <code> element under it, and we check for both the line numbers container and the data-line attribute. That’s what dataset.line checks. See the docs for more info.

If we make it past the second continue, then highlightRanges is the set of highlights we declared earlier which, in our case, is "3,8-10", where lineNumbersContainer is the container with the .line-numbers-rows CSS class.

Lastly, we declare a runHighlight function that calls a highlightCode function that I’m about to show you. Then, we set up a ResizeObserver to run that same function anytime our blog post changes size, i.e., if the user resizes the browser window.

The highlightCode function

Finally, let’s see our highlightCode function:

function highlightCode(pre, highlightRanges, lineNumberRowsContainer) {
  const ranges = highlightRanges.split(",").filter(val => val);
  const preWidth = pre.scrollWidth;

  for (const range of ranges) {
    let [start, end] = range.split("-");
    if (!start || !end) {
      start = range;
      end = range;
    }

    for (let i = +start; i <= +end; i++) {
      const lineNumberSpan: HTMLSpanElement = lineNumberRowsContainer.querySelector(
        `span:nth-child(${i})`
      );
      lineNumberSpan.style.setProperty(
        "--highlight-background",
        "rgb(100 100 100 / 0.5)"
      );
      lineNumberSpan.style.setProperty("--highlight-width", `${preWidth}px`);
    }
  }
}

We get each range and read the width of the <pre> element. Then we loop through each range, find the relevant line number <span>, and set the CSS custom property values for them. We set whatever highlight color we want, and we set the width to the total scrollWidth value of the <pre> element. I kept it simple and used "rgb(100 100 100 / 0.5)" but feel free to use whatever color you think looks best for your blog.

Here’s what it looks like:

Syntax highlighting for a block of Markdown code.

Line highlighting without line numbers

You may have noticed that all of this so far depends on line numbers being present. But what if we want to highlight lines, but without line numbers?

One way to implement this would be to keep everything the same and add a new option to simply hide those line numbers with CSS. First, we’ll add a new CSS class, .hide-numbers:

```js[class="line-numbers"][class="hide-numbers"][data-line="3,8-10"]
class Shape {
  draw() {
    console.log("Uhhh maybe override me");
  }
}

class Circle {
  draw() {
    console.log("I'm a circle! :D");
  }
}
```

Now let’s add CSS rules to hide the line numbers when the .hide-numbers class is applied:

.line-numbers.hide-numbers {
  padding: 1em !important;
}
.hide-numbers .line-numbers-rows {
  width: 0;
}
.hide-numbers .line-numbers-rows > span::before {
  content: " ";
}
.hide-numbers .line-numbers-rows > span {
  padding-left: 2.8em;
}

The first rule undoes the shift to the right from our base code in order to make room for the line numbers. By default, the padding of the Prism.js theme I chose is 1em. The line-numbers plugin increases it to 3.8em, then inserts the line numbers with absolute positioning. What we did reverts the padding back to the 1em default.

The second rule takes the container of line numbers, and squishes it to have no width. The third rule erases all of the line numbers themselves (they’re generated with ::before pseudo elements).

The last rule simply shifts the now-empty line number <span> elements back to where they would have been so that the highlighting can be positioned how we want it. Again, for my theme, the line-numbers plugin normally adds 3.8em worth of left padding, which we reverted back to the default 1em. These new styles add the other 2.8em so things are back to where they should be, but with the line numbers hidden. If you’re using different plugins, you might need slightly different values.

Here’s what the result looks like:

Syntax highlighting for a block of Markdown code.

Copy-to-Clipboard feature

Before we wrap up, let’s add one finishing touch: a button allowing our dear reader to copy the code from our snippet. It’s a nice little enhancement that spares people from having to manually select and copy the code snippets.

It’s actually somewhat straightforward. There’s a navigator.clipboard.writeText API for this. We pass that method the text we’d like to copy, and that’s that. We can inject a button next to every one of our <code> elements to send the code’s text to that API call to copy it. We’re already messing with those <code> elements in order to highlight lines, so let’s integrate our copy-to-clipboard button in the same place.

First, from the useEffect code above, let’s add one line:

useEffect(() => {
  const allPres = rootRef.current.querySelectorAll("pre");
  const cleanup: (() => void)[] = [];

  for (const pre of allPres) {
    const code = pre.firstElementChild;
    if (!code || !/code/i.test(code.tagName)) {
      continue;
    }

    pre.appendChild(createCopyButton(code));

Note the last line. We’re going to append our button right into the DOM underneath our <pre> element, which is already position: relative, allowing us to position the button more easily.

Let’s see what the createCopyButton function looks like:

function createCopyButton(codeEl) {
  const button = document.createElement("button");
  button.classList.add("prism-copy-button");
  button.textContent = "Copy";

  button.addEventListener("click", () => {
    if (button.textContent === "Copied") {
      return;
    }
    navigator.clipboard.writeText(codeEl.textContent || "");
    button.textContent = "Copied";
    button.disabled = true;
    setTimeout(() => {
      button.textContent = "Copy";
      button.disabled = false;
    }, 3000);
  });

  return button;
}

Lots of code, but it’s mostly boilerplate. We create our button then give it a CSS class and some text. And then, of course, we create a click handler to do the copying. After the copy is done, we change the button’s text and disable it for a few seconds to help give the user feedback that it worked.

The real work is on this line:

navigator.clipboard.writeText(codeEl.textContent || "");

We’re passing codeEl.textContent rather than innerHTML since we want only the actual text that’s rendered, rather than all the markup Prism.js adds in order to format our code nicely.

Now let’s see how we might style this button. I’m no designer, but this is what I came up with:

.prism-copy-button {
  position: absolute;
  top: 5px;
  right: 5px;
  width: 10ch;
  background-color: rgb(100 100 100 / 0.5);
  border-width: 0;
  color: rgb(0, 0, 0);
  cursor: pointer;
}

.prism-copy-button[disabled] {
  cursor: default;
}

Which looks like this:

Syntax highlighting for a block of Markdown code.

And it works! It copies our code, and even preserves the formatting (i.e. new lines and indentation)!

Wrapping up

I hope this has been useful to you. Prism.js is a wonderful library, but it wasn’t originally written for static sites. This post walked you through some tips and tricks for bridging that gap, and getting it to work well with a Next.js site.


Syntax Highlighting (and More!) With Prism on a Static Site originally published on CSS-Tricks. You should get the newsletter.



from CSS-Tricks https://ift.tt/fVhsRuP
via IFTTT

Tuesday, 3 May 2022

Adding Custom GitHub Badges to Your Repo

If you’ve spent time looking at open-source repos on GitHub, you’ve probably noticed that most of them use badges in their README files. Take the official React repository, for instance. There are GitHub badges all over the README file that communicate important dynamic info, like the latest released version and whether the current build is passing.

Showing the header of React's repo displaying GitHub badges.

Badges like these provide a nice way to highlight key information about a repository. You can even use your own custom assets as badges, like Next.js does in its repo.

Showing the Next.js repo header with GitHub badges.

But the most useful thing about GitHub badges by far is that they update by themselves. Instead of hardcoding values into your README, badges in GitHub can automatically pick up changes from a remote server.

Let’s discuss how to add dynamic GitHub badges to the README file of your own project. We’ll start by using an online generator called badgen.net to create some basic badges. Then we’ll make our badges dynamic by hooking them up to our own serverless function via Napkin. Finally, we’ll take things one step further by using our own custom SVG files.

Showing three examples of custom GitHub badges including Apprentice, Intermediate, and wizard skill levels.

First off: How do badges work?

Before we start building some badges in GitHub, let’s quickly go over how they are implemented. It’s actually very simple: badges are just images. README files are written in Markdown, and Markdown supports images like so:

![alt text](path or URL to image)

The fact that we can include a URL to an image means that a Markdown page will request the image data from a server when the page is rendered. So, if we control the server that has the image, we can change what image is sent back using whatever logic we want!

Thankfully, we have a couple options to deploy our own server logic without the whole “setting up the server” part. For basic use cases, we can create our GitHub badge images with badgen.net using its predefined templates. And again, Napkin will let us quickly code a serverless function in our browser and then deploy it as an endpoint that our GitHub badges can talk to.

Making badges with Badgen

Let’s start off with the simplest badge solution: a static badge via badgen.net. The Badgen API uses URL patterns to create templated badges on the fly. The URL pattern is as follows:

https://badgen.net/badge/:subject/:status/:color?icon=github

There’s a full list of the options you have for colors, icons, and more on badgen.net. For this example, let’s use these values:

  • :subject: Hello
  • :status: World
  • :color: red
  • :icon: twitter

Our final URL winds up looking like this:

https://badgen.net/badge/hello/world/red?icon=twitter

Adding a GitHub badge to the README file

Now we need to embed this badge in the README file of our GitHub repo. We can do that in Markdown using the syntax we looked at earlier:

![my badge](https://badgen.net/badge/hello/world/red?icon=twitter)

Badgen provides a ton of different options, so I encourage you to check out their site and play around! For instance, one of the templates lets you show the number of times a given GitHub repo has been starred. Here’s a star GitHub badge for the Next.js repo as an example:

https://badgen.net/github/stars/vercel/next.js

Pretty cool! But what if you want your badge to show some information that Badgen doesn’t natively support? Luckily, Badgen has a URL template for using your own HTTPS endpoints to get data:

https://badgen.net/https/url/to/your/endpoint

For example, let’s say we want our badge to show the current price of Bitcoin in USD. All we need is a custom endpoint that returns this data as JSON like this:

{
  "color": "blue",
  "status": "$39,333.7",
  "subject": "Bitcoin Price USD"
}

Assuming our endpoint is available at https://some-endpoint.example.com/bitcoin, we can pass its data to Badgen using the following URL scheme:

https://badgen.net/https/some-endpoint.example.com/bitcoin
GitHub badge. On the left is a gray label with white text. On the right is a blue label with white text showing the price of Bitcoin.
The data for the cost of Bitcoin is served right to the GitHub badge.

Even cooler now! But we still have to actually create the endpoint that provides the data for the GitHub badge. 🤔 Which brings us to…

Badgen + Napkin

There’s plenty of ways to get your own HTTPS endpoint. You could spin up a server with DigitalOcean or AWS EC2, or you could use a serverless option like Google Cloud Functions or AWS Lambda; however, those can all still become a bit complex and tedious for our simple use case. That’s why I’m suggesting Napkin’s in-browser function editor to code and deploy an endpoint without any installs or configuration.

Head over to Napkin’s Bitcoin badge example to see an example endpoint. You can see the code to retrieve the current Bitcoin price and return it as JSON in the editor. You can run the code yourself from the editor or directly use the endpoint.

To use the endpoint with Badgen, work with the same URL scheme from above, only this time with the Napkin endpoint:

https://badgen.net/https/napkin-examples.npkn.net/bitcoin-badge

More ways to customize GitHub badges

Next, let’s fork this function so we can add in our own custom code to it. Click the “Fork” button in the top-right to do so. You’ll be prompted to make an account with Napkin if you’re not already signed in.

Once we’ve successfully forked the function, we can add whatever code we want, using any npm modules we want. Let’s add the Moment.js npm package and update the endpoint response to show the time that the price of Bitcoin was last updated directly in our GitHub badge:

import fetch from 'node-fetch'
import moment from 'moment'

const bitcoinPrice = async () => {
  const res = await fetch("https://blockchain.info/ticker")
  const json = await res.json()
  const lastPrice = json.USD.last+""

  const [ints, decimals] = lastPrice.split(".")
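  // Below, a comma is inserted before the last three digits of the whole-dollar amount (e.g. 39333.7 -> 39,333.7)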

  return ints.slice(0, -3) + "," + ints.slice(-3) + "." + decimals
}

export default async (req, res) => {
  const btc = await bitcoinPrice()

  res.json({
    icon: 'bitcoin',
    subject: `Bitcoin Price USD (${moment().format('h:mma')})`,
    color: 'blue',
    status: `$${btc}`
  })
}
Deploy the function, update your URL, and now we get this.

You might notice that the badge takes some time to refresh the next time you load up the README file over at GitHub. That’s because GitHub uses a proxy mechanism to serve badge images.

GitHub serves the badge images this way to prevent abuse, like high request volume or JavaScript code injection. We can’t control GitHub’s proxy, but fortunately, it doesn’t cache too aggressively (or else that would kind of defeat the purpose of badges). In my experience, the TTL is around 5-10 minutes.

OK, final boss time.

Custom SVG badges with Napkin

For our final trick, let’s use Napkin to send back a completely new SVG, so we can use custom images like we saw on the Next.js repo.

A common use case for GitHub badges is showing the current status for a website. Let’s do that. Here are the two states our badge will support:

Badgen doesn’t support custom SVGs, so instead, we’ll have our badge talk directly to our Napkin endpoint. Let’s create a new Napkin function for this called site-status-badge.

The code in this function makes a request to example.com. If the request status is 200, it returns the green badge as an SVG file; otherwise, it returns the red badge. You can check out the function, but I’ll also include the code here for reference:

import fetch from 'node-fetch'

const site_url = "https://example.com"

// full SVGs at https://napkin.io/examples/site-status-badge
const customUpBadge = ''
const customDownBadge = ''

const isSiteUp = async () => {
  const res = await fetch(site_url)
  return res.ok
}

export default async (req, res) => {
  const forceFail = req.path?.endsWith('/400')

  const healthy = await isSiteUp()
  res.set('content-type', 'image/svg+xml')
  if (healthy && !forceFail) {
    res.send(Buffer.from(customUpBadge).toString('base64'))
  } else {
    res.send(Buffer.from(customDownBadge).toString('base64'))
  }
}

Odds are pretty low that the example.com site will ever go down, so I added the forceFail case to simulate that scenario. Now we can add a /400 after the Napkin endpoint URL to try it:

![status up](https://napkin-examples.npkn.net/site-status-badge/)
![status down](https://napkin-examples.npkn.net/site-status-badge/400)

Very nice 😎


And there we have it! Your GitHub badge training is complete. But the journey is far from over. There’s a million different things where badges like this are super helpful. Have fun experimenting and go make that README sparkle! ✨


Adding Custom GitHub Badges to Your Repo originally published on CSS-Tricks. You should get the newsletter.



from CSS-Tricks https://ift.tt/XFId2EJ
via IFTTT

Monday, 2 May 2022

Creating Realistic Reflections With CSS

In design, reflections are stylized mirror images of objects. Even though they are not as popular as shadows, they have their moments — just think about the first time you explored the different font formats in MS Word or PowerPoint: I bet reflection was your second-most-used style, next to shadow, foregoing others like outline and glow. Or perhaps you remember when reflections were all the rage back when Apple used them on just about everything.

Old iPod Touch product image showing the front, back, and profile of the device with subtle reflections beneath each.

Reflections are still cool! And unlike years past, we can actually make reflections with CSS! Here’s what we’ll be making in this article:

There are two steps to a reflection design:

  1. Create a copy of the original design.
  2. Style that copy.

The most authentic and standardized way to get a mirror image in CSS now would be to use the element() function. But it’s still in its experimental phase and is only supported in Firefox, at the time of writing this article. If you’re curious, you can check out this article I wrote that experiments with it.

So, rather than element(), I’m going to add two of the same designs and use one as the reflection in my examples. You can code this part to be dynamic using JavaScript, or use pseudo-elements, but in my demos, I’m using a pair of identical elements per design.

<div class="units">
  <div>trinket</div>
  <div>trinket</div>
</div>
.units > * {
  background-image: url('image.jpeg');
  background-clip: text;
  color: transparent;
  /* etc. */
}

The original design is a knock-out text graphic created from the combination of a background image, transparent text color, and the background-clip property with its text value.

The bottom element of the pair is then turned upside-down and moved closer to the original design using transform. This is the reflection:

.units > :last-child {
  transform: rotatex(180deg) translatey(15px); 
}

The now upturned bottom element will take on some styles to create fades and other graphic effects on the reflection. A gradual fading of reflection can be achieved with a linear gradient image used as a mask layer on the upturned element.

.units > :last-child {
  transform: rotatex(180deg) translatey(15px); 
  mask-image: linear-gradient(transparent 50%, white 90%);
}

By default, the mask-mode of the mask-image property is alpha. That means the transparent parts of an image, when the image is used as a mask layer for an element, turn their corresponding areas of the element transparent as well. That’s why a linear-gradient with transparent gradation at the top fades out the upside-down reflection at the end.

We can also try other gradient styles, with or without combining them. Take this one with stripes, for example. I added the pattern along with the fade effect from before.

.units > :last-child {
  /* ... */
  mask-image: 
    repeating-linear-gradient(transparent, transparent 3px, white 3px, white 4px),
    linear-gradient( transparent 50%, white 90%);
}

Or this one with radial-gradient:

.units > :last-child {
  /* ... */
  mask-image: radial-gradient(circle at center, white, transparent 50%);
}

Another idea is to morph the mirror image by adding skew() to the transform property. This gives some movement to the reflection.

.units > :last-child {
  /* ... */
  transform: rotatex(180deg) translatey(15px) skew(135deg) translatex(30px);
}

When you need the reflection to be subtle and more like a shadow, then blurring it out, brightening it, or reducing its opacity, can do the trick.

.units > :last-child {
  /* ... */
  filter: blur(4px) brightness(1.5);
}

Sometimes a reflection can also be shadowy itself, so, instead of using the background image (from the original design) or a block color for the text, I tried giving the reflection a series of translucent shadows of red, blue and green colors that go well with the original design.

.units > :last-child {
  /* ... */
  text-shadow: 
    0 0 8px rgb(255 0 0 / .4),
    -2px -2px 6px rgb(0 255 0 / .4),
    2px 2px 4px rgb(0 255 255 / .4);
}

Do those rgb() values look weird? That’s a new syntax that’s part of some exciting new CSS color features.

Let’s bring all of these approaches together in one big demo:

Wrapping up

The key to a good reflection is to go with effects that are subtler than the main object, but not so subtle that it’s difficult to notice. Then there are other considerations, including the reflection’s color, direction, and shape.

I hope you got some inspirations from this! Sure, all we looked at here was text, but reflections can work well for any striking element in a design that has a sensible enough space around it and can benefit from a reflection to elevate itself on the page.


Creating Realistic Reflections With CSS originally published on CSS-Tricks. You should get the newsletter.



from CSS-Tricks https://ift.tt/as5GyXi
via IFTTT
