Monday 26 July 2021

How I Built a Cross-Platform Desktop Application with Svelte, Redis, and Rust

At Cloudflare, we have a great product called Workers KV which is a key-value storage layer that replicates globally. It can handle millions of keys, each of which is accessible from within a Worker script at exceptionally low latencies, no matter where in the world a request is received. Workers KV is amazing — and so is its pricing, which includes a generous free tier.

However, as a long-time user of the Cloudflare lineup, I have found one thing missing: local introspection. With thousands, and sometimes hundreds of thousands, of keys in my applications, I'd often wish there were a way to query all my data, sort it, or just take a look to see what's actually there.

Well, recently, I was lucky enough to join Cloudflare! Even luckier, I joined just before the quarter's "Quick Wins Week" — aka, their week-long hackathon. And given that I hadn't been around long enough to accumulate a backlog (yet), you best believe I jumped on the opportunity to fulfill my own wish.

So, with the intro out of the way, let me tell you how I built Workers KV GUI, a cross-platform desktop application using Svelte, Redis, and Rust.

The front-end application

As a web developer, this was the familiar part. I’m tempted to call this the “easy part” but, given that you can use any and all HTML, CSS, and JavaScript frameworks, libraries, or patterns, choice paralysis can easily set in… which might be familiar, too. If you have a favorite front-end stack, great, use that! For this application, I chose to use Svelte because, for me, it certainly makes and keeps things easy.

Also, as web developers, we expect to bring all our tooling with us. You certainly can! Again, this phase of the project is no different than your typical web application development cycle. You can expect to run yarn dev (or some variant) as your main command and feel at home. Keeping with an “easy” theme, I’ve elected to use SvelteKit, which is Svelte’s official framework and toolkit for building applications. It includes an optimized build system, a great developer experience (including HMR!), a filesystem-based router, and all that Svelte itself has to offer.

As a framework, especially one that takes care of its own tooling, SvelteKit allowed me to purely think about my application and its requirements. In fact, as far as configuration is concerned, the only thing I had to do was tell SvelteKit that I wanted to build a single-page application (SPA) that only runs in the client. In other words, I had to explicitly opt out of SvelteKit’s assumption that I wanted a server, which is actually a fair assumption to make since most applications can benefit from server-side rendering. This was as easy as attaching the @sveltejs/adapter-static package, which is a configuration preset made exactly for this purpose. After installing, this was my entire configuration file:

// svelte.config.js
import preprocess from 'svelte-preprocess';
import adapter from '@sveltejs/adapter-static';

/** @type {import('@sveltejs/kit').Config} */
const config = {
  preprocess: preprocess(),

  kit: {
    adapter: adapter({
      fallback: 'index.html'
    }),
    files: {
      template: 'src/index.html'
    }
  },
};

export default config;

The index.html changes are a personal preference. SvelteKit uses app.html as a default base template, but old habits die hard.

It’s only been a few minutes, and my toolchain already knows it’s building a SPA, that there’s a router in place, and a development server is at the ready. Plus, TypeScript, PostCSS, and/or Sass support is there if I want it (and I do), thanks to svelte-preprocess. Ready to rumble!

The application needed two views:

  1. a screen to enter connection details (the default/welcome/home page)
  2. a screen to actually view your data

In the SvelteKit world, this translates to two “routes” and SvelteKit dictates that these should exist as src/routes/index.svelte for the home page and src/routes/viewer.svelte for the data viewer page. In a true web application, this second route would map to the /viewer URL. While this is still the case, I know that my desktop application won’t have a navigation bar, which means that the URL won’t be visible… which means that it doesn’t matter what I call this route, as long as it makes sense to me.

The contents of these files are mostly irrelevant, at least for this article. For those curious, the entire project is open source and if you’re looking for a Svelte or SvelteKit example, I welcome you to take a look. At the risk of sounding like a broken record, the point here is that I’m building a regular web app.

At this time, I’m just designing my views and throwing around fake, hard-coded data until I have something that seems to work. I hung out here for about two days, until everything looked nice and all interactivity (button clicks, form submissions, etc.) got fleshed out. I’d call this a “working” app, or a mockup.

Desktop application tooling

At this point, a fully functional SPA exists. It operates — and was developed — in a web browser. Perhaps counterintuitively, this makes it the perfect candidate to become a desktop application! But how?

You may have heard of Electron. It's the most well-known tool for building cross-platform desktop applications with web technologies. There are a number of massively popular and successful applications built with it: Visual Studio Code, WhatsApp, Atom, and Slack, to name a few. It works by bundling your web assets with its own Chromium installation and its own Node.js runtime. In other words, when you're installing an Electron-based application, it's coming with an extra Chrome browser and an entire programming language (Node.js). These are embedded within the application contents and there's no avoiding them, as these are dependencies for the application, guaranteeing that it runs consistently everywhere. As you might imagine, there's a bit of a trade-off with this approach — applications are fairly massive (often more than 100MB) and use lots of system resources to operate. In order to use the application, an entirely new/separate Chrome is running in the background — not quite the same as opening a new tab.

Luckily, there are a few alternatives — I evaluated Svelte NodeGui and Tauri. Both choices offered significant savings in application size and resource usage by relying on the native renderers the operating system already offers, instead of embedding a copy of Chrome to do the same work. NodeGui does this by relying on Qt, another desktop/GUI application framework that compiles to native views. However, in order to do this, NodeGui requires some adjustments to your application code so that it can translate your components into Qt components. While I'm sure this certainly would have worked, I wasn't interested in this solution because I wanted to use exactly what I already knew, without requiring any adjustments to my Svelte files. By contrast, Tauri achieves its savings by wrapping the operating system's native webviewer — for example, Cocoa/WebKit on macOS, webkit2gtk on Linux, and WebView2 (Edge) on Windows. Webviewers are effectively browsers, which Tauri uses because they already exist on your system, and this means that our applications can remain pure web development products.

With these savings, the bare minimum Tauri application is less than 4MB, with average applications weighing less than 20MB. In my testing, the bare minimum NodeGui application weighed about 16MB. A bare minimum Electron app is easily 120MB.

Needless to say, I went with Tauri. By following the Tauri Integration guide, I added the @tauri-apps/cli package to my devDependencies and initialized the project:

yarn add --dev @tauri-apps/cli
yarn tauri init

This creates a src-tauri directory alongside the src directory (where the Svelte application lives). This is where all Tauri-specific files live, which is nice for organization.

I had never built a Tauri application before, but after looking at its configuration documentation, I was able to keep most of the defaults — aside from items like the package.productName and windows.title values, of course. Really, the only changes I needed to make were to the build config, which had to align with SvelteKit for development and output information:

// src-tauri/tauri.conf.json
{
  "package": {
    "version": "0.0.0",
    "productName": "Workers KV"
  },
  "build": {
    "distDir": "../build",
    "devPath": "http://localhost:3000",
    "beforeDevCommand": "yarn svelte-kit dev",
    "beforeBuildCommand": "yarn svelte-kit build"
  },
  // ...
}

The distDir relates to where the built production-ready assets are located. This value is resolved from the tauri.conf.json file location, hence the ../ prefix.

The devPath is the URL to proxy during development. By default, SvelteKit spawns a devserver on port 3000 (configurable, of course). I had been visiting the localhost:3000 address in my browser during the first phase, so this is no different.

Finally, Tauri has its own dev and build commands. In order to avoid the hassle of juggling multiple commands or build scripts, Tauri provides the beforeDevCommand and beforeBuildCommand hooks which allow you to run any command before the tauri command runs. This is a subtle but strong convenience!

The SvelteKit CLI is accessed through the svelte-kit binary name. Writing yarn svelte-kit build, for example, tells yarn to fetch its local svelte-kit binary, which was installed via a devDependency, and then tells SvelteKit to run its build command.

With this in place, my root-level package.json contained the following scripts:

{
  "private": true,
  "type": "module",
  "scripts": {
    "dev": "tauri dev",
    "build": "tauri build",
    "prebuild": "premove build",
    "preview": "svelte-kit preview",
    "tauri": "tauri"
  },
  // ...
  "devDependencies": {
    "@sveltejs/adapter-static": "1.0.0-next.9",
    "@sveltejs/kit": "1.0.0-next.109",
    "@tauri-apps/api": "1.0.0-beta.1",
    "@tauri-apps/cli": "1.0.0-beta.2",
    "premove": "3.0.1",
    "svelte": "3.38.2",
    "svelte-preprocess": "4.7.3",
    "tslib": "2.2.0",
    "typescript": "4.2.4"
  }
}

After integration, my production command was still yarn build, which invokes tauri build to actually bundle the desktop application, but only after yarn svelte-kit build has completed successfully (via the beforeBuildCommand option). And my development command remained yarn dev, which spawns the tauri dev and yarn svelte-kit dev commands to run in parallel. The development workflow is entirely within the Tauri application, which is now proxying localhost:3000, allowing me to still reap the benefits of an HMR development server.

Important: Tauri is still in beta at the time of this writing. That said, it feels very stable and well-planned. I have no affiliation with the project, but it seems like Tauri 1.0 may enter a stable release sooner rather than later. I found the Tauri Discord to be very active and helpful, including replies from the Tauri maintainers! They even entertained some of my noob Rust questions throughout the process. :)

Connecting to Redis

At this point, it’s Wednesday afternoon of Quick Wins week, and — to be honest — I’m starting to get nervous about finishing before the team presentation on Friday. Why? Because I’m already halfway through the week, and even though I have a good-looking SPA inside a working desktop application, it still doesn’t do anything. I’ve been looking at the same fake data all week.

You may be thinking that because I have access to a webview, I can use fetch() to make some authenticated REST API calls for the Workers KV data I want and dump it all into localStorage or an IndexedDB table… You’re 100% right! However, that’s not exactly what I had in mind for my desktop application’s use case.

Saving all the data into some kind of in-browser storage is totally viable, but it saves it locally to your machine. This means that if you have team members trying to do the same thing, everyone will have to fetch and save all the data on their own machines, too. Ideally, this Workers KV application should have the option to connect to and sync with an external database. That way, when working in team settings, everyone can tune into the same data cache to save time — and a couple bucks. This starts to matter when dealing with millions of keys which, as mentioned, is not uncommon with Workers KV.

Having thought about it for a bit, I decided to use Redis as my backing store because it also is a key-value store. This was great because Redis already treats keys as a first-class citizen and offers the sorting and filtering behaviors I wanted (aka, I can pass along the work instead of implementing it myself!). And then, of course, Redis is easy to install and run either locally or in a container, and there are many hosted-Redis-as-service providers out there if someone chooses to go that route.

But, how do I connect to it? My app is basically a browser tab running Svelte, right? Yes — but also so much more than that.

You see, part of Electron’s success is that, yes, it guarantees a web app is presented well on every operating system, but it also brings along a Node.js runtime. As a web developer, this was a lot like including a back-end API directly inside my client. Basically the “…but it works on my machine” problem went away because all of the users were (unknowingly) running the exact same localhost setup. Through the Node.js layer, you could interact with the filesystem, run servers on multiple ports, or include a bunch of node_modules to — and I’m just spit-balling here — connect to a Redis instance. Powerful stuff.

We don’t lose this superpower because we’re using Tauri! It’s the same, but slightly different.

Instead of including a Node.js runtime, Tauri applications are built with Rust, a low-level systems language. This is how Tauri itself interacts with the operating system and “borrows” its native webviewer. All of the Tauri toolkit is compiled (via Rust), which allows the built application to remain small and efficient. However, this also means that we, the application developers, can include any additional crates — the “npm module” equivalent — into the built application. And, of course, there’s an aptly named redis crate that, as a Redis client driver, allows the Workers KV GUI to connect to any Redis instance.

In Rust, the Cargo.toml file is similar to our package.json file. This is where dependencies and metadata are defined. In a Tauri setting, this is located at src-tauri/Cargo.toml because, again, everything related to Tauri is found in this directory. Cargo also has a concept of "feature flags" defined at the dependency level. (The closest analogy I can come up with is using npm to access a module's internals or import a named submodule, though it's still not quite the same, since Rust feature flags affect how the package is compiled.)

# src-tauri/Cargo.toml
[dependencies]
serde_json = "1.0"
serde = { version = "1.0", features = ["derive"] }
tauri = { version = "1.0.0-beta.1", features = ["api-all", "menu"] }
redis = { version = "0.20", features = ["tokio-native-tls-comp"] }

The above defines the redis crate as a dependency and opts into the "tokio-native-tls-comp" feature, which the documentation says is required for TLS support.
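
To make that tangible, here is a minimal, synchronous sketch of what the redis crate offers on its own, before any Tauri wiring. The connection URL and key are placeholder values, and the real application manages its connection differently:

// A standalone sketch of using the redis crate (not the app's actual code)
use redis::Commands;

fn main() -> redis::RedisResult<()> {
  // Placeholder URL: a local, unauthenticated Redis instance
  let client = redis::Client::open("redis://127.0.0.1:6379/")?;
  let mut conn = client.get_connection()?;

  // Write a key-value pair, then read it back
  let _: () = conn.set("example:key", "example value")?;
  let value: String = conn.get("example:key")?;
  println!("example:key = {}", value);

  Ok(())
}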

Okay, so I finally had everything I needed. Before Wednesday ended, I had to get my Svelte to talk to my Redis. After poking around a bit, I noticed that all the important stuff seemed to be happening inside the src-tauri/src/main.rs file. I took note of the #[command] macro, which I knew I had seen before in a Tauri example earlier in the day, so I studied (read: copied) the example file in sections, watching which errors came and went according to the Rust compiler.

Eventually, the Tauri application was able to run again, and I learned that the #[command] macro wraps the underlying function so that it can receive "context" values, if you choose to use them, and receive pre-parsed argument values. Tauri also deserializes and checks those incoming arguments against the types the Rust function declares. For example:

use tauri::{command};

#[command]
fn greet(name: String, age: u8) {
  println!("Hello {}, {} year-old human!", name, age);
}

This creates a greet command which, when run, expects two arguments: name and age. As defined, the name value must be a string and age must be a u8, aka an unsigned 8-bit integer. However, if either is missing, Tauri throws an error because the command definition does not mark anything as optional.

To actually connect a Tauri command to the application, it has to be defined as part of the tauri::Builder composition, found within the main function.

use tauri::{command};

#[command]
fn greet(name: String, age: u8) {
  println!("Hello {}, {} year-old human!", name, age);
}

fn main() {
  // start composing a new Builder chain
  tauri::Builder::default()
    // assign our generated "handler" to the chain
    .invoke_handler(
      // piece together application logic
      tauri::generate_handler![
        greet, // attach the command
      ]
    )
    // start/initialize the application
    .run(
      // put it all together
      tauri::generate_context!()
    )
    // print <message> if error while running
    .expect("error while running tauri application");
}

The Tauri application compiles and is aware of the fact that it owns a “greet” command. It’s also already controlling a webview (which we’ve discussed) but in doing so, it acts as a bridge between the front end (the webview contents) and the back end, which consists of the Tauri APIs and any additional code we’ve written, like the greet command. Tauri allows us to send messages across this bridge so that the two worlds can communicate with one another.

Figure: a component diagram of a basic Tauri application. The developer is responsible for the webview contents and may optionally include custom Rust modules and/or define custom commands. Tauri controls the webviewer and the event bridge, including all message serialization and deserialization.

This “bridge” can be accessed by the front end by importing functionality from any of the (already included) @tauri-apps packages, or by relying on the window.__TAURI__ global, which is available to the entire client-side application. Specifically, we’re interested in the invoke command, which takes a command name and a set of arguments. If there are any arguments, they must be defined as an object where the keys match the parameter names our Rust function expects.

In the Svelte layer, this means that we can do something like this in order to call the greet command, defined in the Rust layer:

<!-- Greeter.svelte -->
<script>
  function onclick() {
    __TAURI__.invoke('greet', {
      name: 'Alice',
      age: 32
    });
  }
</script>

<button on:click={onclick}>Click Me</button>

When this button is clicked, our terminal window (wherever the tauri dev command is running) prints:

Hello Alice, 32 year-old human!

Again, this happens because of the println! macro (effectively Rust's console.log) that the greet command used. It appears in the terminal's console window — not the browser console — because this code still runs on the Rust/system side of things.

It’s also possible to send something back to the client from a Tauri command, so let’s change greet quickly:

use tauri::{command};

#[command]
fn greet(name: String, age: u8) {
  // implicit return, because no semicolon!
  format!("Hello {}, {} year-old human!", name, age)
}

// OR

#[command]
fn greet(name: String, age: u8) {
  // explicit `return` statement, must have semicolon
  return format!("Hello {}, {} year-old human!", name, age);
}

Realizing that I’d be calling invoke many times, and being a bit lazy, I extracted a light client-side helper to consolidate things:

// @types/global.d.ts
/// <reference types="@sveltejs/kit" />

type Dict<T> = Record<string, T>;

declare const __TAURI__: {
  invoke: typeof import('@tauri-apps/api/tauri').invoke;
}

// src/lib/tauri.ts
export function dispatch(command: string, args: Dict<string|number>) {
  return __TAURI__.invoke(command, args);
}

The previous Greeter.svelte was then refactored into:

<!-- Greeter.svelte -->
<script lang="ts">
  import { dispatch } from '$lib/tauri';

  async function onclick() {
    let output = await dispatch('greet', {
      name: 'Alice',
      age: 32
    });
    console.log('~>', output);
    //=> "~> Hello Alice, 32 year-old human!"
  }
</script>

<button on:click={onclick}>Click Me</button>

Great! So now it’s Thursday and I still haven’t written any Redis code, but at least I know how to connect the two halves of my application’s brain together. It was time to comb back through the client-side code and replace all TODOs inside event handlers and connect them to the real deal.

I will spare you the nitty gritty here, as it’s very application-specific from here on out — and is mostly a story of the Rust compiler giving me a beat down. Plus, spelunking for nitty gritty is exactly why the project is open source!

At a high level, once a Redis connection is established using the given details, a SYNC button is accessible in the /viewer route. When this button is clicked (and only then, because of costs), a JavaScript function is called, which is responsible for connecting to the Cloudflare REST API and dispatching a "redis_set" command for each key. This redis_set command is defined in the Rust layer — as are all Redis-based commands — and is responsible for actually writing the key-value pair to Redis.
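
For a rough sense of its shape, here is a hedged sketch rather than the project's actual implementation; the per-call connection and the argument names are assumptions made for illustration. A redis_set-style command could look something like this:

// src-tauri/src/main.rs (illustrative sketch, not the app's real code)
use redis::Commands;
use tauri::{command};

#[command]
fn redis_set(url: String, key: String, value: String) -> Result<(), String> {
  // A real implementation would reuse a managed, long-lived connection;
  // opening a client per call just keeps this sketch self-contained.
  let client = redis::Client::open(url.as_str()).map_err(|e| e.to_string())?;
  let mut conn = client.get_connection().map_err(|e| e.to_string())?;

  // SET <key> <value>, mapping any Redis error into a string Tauri can serialize
  let _: () = conn.set(key, value).map_err(|e| e.to_string())?;
  Ok(())
}

Returning a Result here means the front end's invoke call can resolve or reject accordingly, so failures can be surfaced in the UI.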

Reading data out of Redis is a very similar process, just inverted. For example, when the /viewer route starts up, all the keys should be listed and ready to go. In Svelte terms, that means I need to dispatch a Tauri command when the /viewer component mounts, which is almost verbatim what the project does. Additionally, clicking on a key name in the sidebar reveals additional "details" about the key, including its expiration (if any), its metadata (if any), and its actual value (if known). Optimizing for cost and network load, we decided that a key's value should only be fetched on command. This introduces a REFRESH button that, when clicked, interacts with the REST API once again, then dispatches a command so that the Redis client can update that key individually.
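
The read side follows the same pattern. Here is an equally hedged sketch; the command name, argument, and KEYS-based listing are illustrative choices, not necessarily what the real app does:

// src-tauri/src/main.rs (illustrative sketch, not the app's real code)
use redis::Commands;
use tauri::{command};

#[command]
fn redis_keys(url: String) -> Result<Vec<String>, String> {
  let client = redis::Client::open(url.as_str()).map_err(|e| e.to_string())?;
  let mut conn = client.get_connection().map_err(|e| e.to_string())?;

  // List every key; a production version would iterate with SCAN instead
  // of KEYS to avoid blocking Redis on very large datasets.
  let keys: Vec<String> = conn.keys("*").map_err(|e| e.to_string())?;
  Ok(keys)
}

The Svelte side could then call dispatch('redis_keys', { url }) from an onMount handler and render whatever array comes back.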

I don’t mean to bring things to a rushed ending, but once you’ve seen one successful interaction between your JavaScript and Rust code, you’ve seen them all! The rest of my Thursday and Friday morning was just defining new request-reply pairs, which felt a lot like sending PING and PONG messages to myself.

Conclusion

For me — and I imagine many other JavaScript developers — the challenge this past week was learning Rust. I’m sure you’ve heard this before and you’ll undoubtedly hear it again. Ownership rules, borrow-checking, and the meanings of single-character syntax markers (which are not easy to search for, by the way) are just a few of the roadblocks that I bumped into. Again, a massive thank-you to the Tauri Discord for their help and kindness!

This is also to say that using Tauri was not a challenge — it was a massive relief. I definitely plan to use Tauri again in the future, especially knowing that I can use just the webviewer if I want to. Digging into and/or adding Rust parts was “bonus material” and is only required if my app requires it.

For those wondering, because I couldn’t find another place to mention it: on macOS, the Workers KV GUI application weighs in at less than 13 MB. I am so thrilled with that!

And, of course, SvelteKit certainly made this timeline possible. Not only did it save me a half-day slog configuring my toolbelt, but the instant HMR development server probably saved me a few hours of manually refreshing the browser — and then the Tauri viewer.

If you’ve made it this far — that’s impressive! Thank you so much for your time and attention. A reminder that the project is available on GitHub and the latest, pre-compiled binaries are always available through its releases page.

