A clever use of CSS grid from Jay Freestone to accomplish a particular variation of the media object design pattern (where the image is centered with the title) without any magic numbers or anything that isn’t flexible and resilient.
The trick is to use an “extra” row above and below the title:
The image goes on the first three rows in the first column, and the content goes in the last three rows in the second column using named grid areas:
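/* A sketch of the idea with invented class names; see Jay's post for his exact, fully resilient code */
.media {
  display: grid;
  grid-template-columns: max-content 1fr;
  /* the two 1fr rows are the "extra" rows above and below the title */
  grid-template-rows: 1fr auto 1fr auto;
  grid-template-areas:
    "img  ....."
    "img  title"
    "img  body"
    "....  body";
}
.media__img   { grid-area: img; }
.media__title { grid-area: title; }
.media__body  { grid-area: body; }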
Read Jay’s post for a little more trickery required to make it entirely resilient.
I love the kind of post that zeroes in on the mental model behind CSS grid like this. It’s like… how can I slice up this design with arbitrary columns and rows, knowing that I can place things on arbitrary rectangular combinations of cells with any type of alignment, to best suit this design?
Nolan Lawson has a little emoji-picker-element that is awfully handy and incredibly easy to use. But considering you’d probably be using it within your own app, it should be style-able so it can be incorporated nicely anywhere. How to allow that styling isn’t exactly obvious:
What wasn’t obvious to me, though, was how to allow users to style it. What if they wanted a different background color? What if they wanted the emoji to be bigger? What if they wanted a different font for the input field?
Nolan lists four possibilities (I’ll rename them a bit in a way that helps me understand them).
CSS Custom Properties: Style things like background: var(--background, white);. Custom properties penetrate the Shadow DOM, so you’re essentially adding styling hooks.
Pre-built variations: You can add a class attribute to the custom element, which is easy to access within CSS inside the Shadow DOM thanks to the :host() pseudo-class, as in :host(.dark) { background: black; }.
Shadow parts: You add attributes to things you want to be style-able, like <span part="foo">, then CSS from the outside can reach in like custom-component::part(foo) { }.
User forced: Despite the nothing in/nothing out vibe of the Shadow DOM, you can always reach the element.shadowRoot and inject a <style>, so there is always a way to get styles in.
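To make the first three concrete, here’s roughly what each hook looks like from the consumer’s side (a sketch: the property, class, and part names are invented for illustration, not taken from emoji-picker-element’s documented API):

/* 1. Custom property: passes through the shadow boundary */
emoji-picker {
  --background: #1c1c1c;
}

/* 2. Pre-built variation: a class the component checks for via :host() */
/* <emoji-picker class="dark"></emoji-picker> */

/* 3. Shadow part: style a specific exposed element from outside */
emoji-picker::part(search) {
  font-family: Georgia, serif;
}

And the fourth, from JavaScript:

// 4. "User forced": inject a <style> straight into the (open) shadow root
const picker = document.querySelector('emoji-picker');
const sheet = document.createElement('style');
sheet.textContent = 'li { font-size: 2em; }';
picker.shadowRoot.appendChild(sheet);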
This is such a funky problem. I like the Shadow DOM because it’s the closest thing we have on the web platform to scoped styles, which are definitely a good idea. But I don’t love any of those styling solutions. They all seem to force me into deciding what kind of styling API I want to offer, and into documenting it, while not encouraging any particular consistency across components.
To me, the DOM already is a styling API. I like the scoped protection, but there should be an easy way to reach in there and style things if I want to. Seems like there should be a very simple CSS-only way to reach inside and still use the cascade and such. Maybe the dash-separated custom-element name is enough? my-custom-element li { }. Or maybe it’s more explicit, like @shadow my-custom-element li { }. I just think it should be easier. Constructable Stylesheets don’t seem like a step toward making it easier, either.
Last time I was thinking about styling web components, I was just trying to figure out how it works in the first place, not considering how to expose styling options to consumers of the component.
Does this actually come up as a problem in day-to-day work? Sure does.
I don’t see any particularly good options in that thread (yet) for the styling approach. If I were Dave, I’d be tempted to just do nothing. Offer minimal styling, and if people wanna style it, they can do it however they want from their copy of the component. Or they can “force” the styles in, meaning they have complete freedom.
If you need to get backlinks and generate brand awareness for clients, a great way to start is by creating original research and then pitching that research to writers. But the promotion of your work is probably the trickiest part, and a lot of it comes down to the pitch email you send to a writer.
To make this task a bit less daunting, in this episode of Whiteboard Friday, Amanda Milligan of Fractl walks you through a real pitch email that resulted in coverage of one of their stories.
Video Transcription
Hi, everyone. Welcome to another edition of Whiteboard Friday. My name is Amanda Milligan. I'm the Marketing Director at Fractl. Today I'm going to talk to you about the anatomy of the perfect pitch email.
This has to do with the digital PR space. The way that we get backlinks and brand awareness for our clients is by creating original research, new studies and surveys, and then pitching those things to writers. Now the pitching and the promotion are the trickiest parts, and a lot of it comes down to this — the email you send to a writer.
So what I've done here is literally write out a real pitch email that was sent to a writer that resulted in a publication and coverage of the story. I'll shout out to Skylar who wrote this one. What I'm going to do is walk through each piece of it, each element that we think is extremely important and that we include in all of our emails.
Human connection
So to start, I actually use this email because it didn't delve too much into the personalization. I wanted to show an example of what happens if you can't personalize as well. But personalization or any kind of human connection is extremely important, and it should be the lead into the body of your email.
So in this case, it's a little more general. It says, "We all remember the horror flicks that left us sleeping with the lights on." So that's a more general human experience. I know I slept with the lights on when I saw "The Ring" for the first time. That's just some way to connect with the person who's reading it, to have them think of a memory.
However, if you actually have a chance to personalize an email (for example, if the writer has written something recently that resonates with you, or you follow them on Twitter or LinkedIn and liked something they shared), you can connect with them: you went to the same school, you have the same love of animals. We’ve actually had a lot of people pitching us this year include pictures of their animals and talk about how much they love dogs or cats.
Anything that is genuine can do really well. But remember that there's a human being on the other side of the email that you're sending, and just humanize this a little bit. So that should be about a sentence or two. As you can see here, it ends about here. So you don't want to go into a whole life story, but touch on that a little bit.
Top-level project description
The next segment is a top-level project description. So the next sentence here says, "Could you imagine if one of those characters occupied the room next to yours?" So now we're bridging the kind of anecdote to the actual project. "To explore this further, my team asked over 1,000 TV and movie fans about their most and least desirable fictional roommates."
So right there you know exactly what the project is about. It's about a survey we did asking people which fictional characters in all kinds of media they would like to live with. So it's very fun. It's a light piece. It's a fun piece. However, the structure is still the same when creating these pitch emails. No matter if it's hard news or something a little more lighthearted, this is a really effective way to go.
Main takeaways
So that covers human connection and top-level project description. This next piece is arguably the most important. These, as you can see over here, are the main takeaways, the biggest, most interesting, new insights from the study that you did. You don't want the writer to be sifting through your content trying to figure out why they care or why any of their readers are going to care.
It's your job to pull the two or three most interesting takeaways, literally create a bulleted list for them so that they can see it very quickly. So in this case, Skylar literally said, "Here's what we found: The Beetlejuice home ranks as one of the most identifiable movie houses. However, Beetlejuice was the least desirable fictional roommate."
Understandably. The reason I can assume she called this out is that, in this particular pitch, she was pitching a home publication, so she's talking about houses. The reason I highlight that is that you shouldn't even send the same pitch body to everyone. It depends on who their readers are and the topics, and subtopics, that they cover.
Even if you know they're relevant and you're pitching them in the first place, make sure to tailor every aspect of the email to them specifically. You might have a list of 10 to 15 interesting takeaways, and you piece together which ones make the most sense per writer. So then some other facts: "Movie fans agree Norman Bates would have been just as undesirable a roommate as the Hulk."
Which is just fun. "Despite appearing in your dreams while you're fast asleep, Freddy Krueger ranks as less desirable than Hannibal Lecter." So the fun thing about this project and something I didn't mention at the top is that we were pitching it around Halloween. So it makes a little more sense. You have that timeliness factor also.
This is fun, but they're basically writing these bullet points thinking like, "What can the writer's headline be? What are they going to say is the most interesting part of this project, and why do they think it's going to be fun or funny or entertaining or useful or informative?" So that covers this section. It's extremely important. Honestly, as you're creating content, you should be thinking of these things, hypothesizing what these could look like.
Link to the content
If you can't even imagine what little bullet points you're going to be able to create after you do something, it might not be interesting enough, or you might not be on the right track. So then this is important. It's small, but it's important. "Here's a link to the full study." Linked. Some people do the tactic of kind of asking, "Oh, do you want to see the rest? We can send it to you."
We don't recommend doing that because you don't want to add an extra step. You don't want writers to have to work for anything. You want to give them everything they need to make a decision. So you're making it easy for them by calling out the bullet points that are the most relevant. Then you're saying, "But listen, look at the whole study if you want. If this is intriguing to you, here it is. You can view the whole thing and make a decision as to whether it's a good fit for your audience." So be sure to do that.
Direct ask
Then Skylar did a good job by saying, "It's the very first day of October," which it was at the time, "and your readers are gearing up for Halloween." So she's tying it back to the relevancy of the project to their readers, which is what you always have to think about. The writer only cares about whether something is going to resonate with their readership. That means that they're doing a good job. So she kind of ties that up. "Any interest in sharing this exclusive study with [the publication]?" So I highlighted here a direct ask. Come out and say, "So do you want to cover this?" In this case, we were pitching it as an exclusive, meaning nobody else had covered it yet, which makes it a little more appealing.
You're saying, "You're going to be the first ones to talk about this study." You can say it's exclusive, and you can highlight that in the email as well. But even if you're not doing that, if you're pitching it to a bunch of people or somebody has already covered it and you're still pitching it, just make sure you directly ask, "Are you interested in covering this?" Don't assume that they even know how to respond. So those are the four main components of a pitch email.
Conclusion
Now there's a lot that goes into making this work. This is just one piece of a greater puzzle. Your content has to be fantastic, because, as I say, no fantastic pitch can salvage a terrible project. You just can't pitch your way out of it. But also you need to be targeting the right people.
So sometimes we have fantastic pitch emails go out, or anybody has fantastic pitch emails go out, but the person, for whatever reason, can't cover the content. That happens. It certainly happens. Sometimes people have full editorial calendars, or they just wrote about something recently similar. But you want to avoid the situation where they say, "Cool pitch, but this isn't my niche."
This happens all the time in the industry. We surveyed 500 publishers last year, in 2019, depending on when you're listening to this, and they said that their number one pet peeve is being pitched content that does not match their niche. So they're being pitched things that they don't typically write about. So this is a fantastic way to increase the chances of your pitch being successful, but that doesn't mean that it's foolproof if you haven't done all these other steps.
If you're interested in learning about those things, check out my other content on Moz. I've talked about what makes great content. I've talked about some things to look at when it comes to who to pitch. All these things fit together. But I did want to break down for you exactly what that pitch can look like. So best of luck out there. I know it's tough.
High five to the GreenSock gang for the ScrollTrigger release. The point of this new plugin is triggering animation when a page scrolls to certain positions, as well as when certain elements are in the viewport. Anything you’d want configurable about it, is. There have been plenty of scroll-position libraries over the years, but GreenSock has a knack for getting the APIs and performance just right — not to mention that, because what you want is to trigger animations, you’ve got GSAP at your fingertips, making sure you’re in good hands. It’s tightly integrated with all the other animation possibilities of GSAP (e.g. animating a timeline based on scroll position).
They’ve got docs and a bunch of examples. I particularly like how they have a mistakes section with ways you can screw it up. Every project should do that.
CodePen is full of examples too, so I’ll take the opportunity to drop some here for your viewing pleasure. Note that while this is a paid plugin, you can play with it on CodePen for free (search for it).
If you’re worried about too much motion, that’s something that you can do responsibly through prefers-reduced-motion, which is available both as a CSS media query and in JavaScript.
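To give you a flavor of the API, here’s a minimal sketch (placeholder selectors and values of my own, not code from the docs) that also bails out for reduced-motion users:

gsap.registerPlugin(ScrollTrigger);

// Skip the scroll-driven movement entirely for users who prefer reduced motion
if (!window.matchMedia('(prefers-reduced-motion: reduce)').matches) {
  gsap.to('.box', {
    x: 400,
    scrollTrigger: {
      trigger: '.box',
      start: 'top center', // when the top of .box hits the center of the viewport
      end: 'bottom top',
      scrub: true          // tie the animation's progress to the scroll position
    }
  });
}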
I can’t stop thinking about this site. It looks like pretty standard fare: a website with links to different pages. Nothing to write home about, except that… the whole website is contained within a single HTML file.
What about clicking the navigation links, you ask? Each link merely shows and hides certain parts of the HTML.
<section id="home">
  <!-- home content goes here -->
</section>

<section id="about">
  <!-- about page goes here -->
</section>
Each <section> is hidden with CSS:
section { display: none; }
Each link in the main navigation points to an anchor on the page:
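<!-- a sketch; the ids match the sections above -->
<nav>
  <a href="#home">Home</a>
  <a href="#about">About</a>
</nav>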
And once you click a link, the <section> for that particular link is displayed via:
section:target { display: block; }
See that :target pseudo selector? That’s the magic! Sure, it’s been around for years, but this is a clever way to use it for sure. Most times, it’s used to highlight the anchor on the page once an anchor link to it has been clicked. That’s a handy way to help the user know where they’ve just jumped to.
Anyway, using :target like this is super smart stuff! It ends up looking like just a regular website when you click around:
Building a website in 2021? I’m guessing you’re going to take a component-driven approach. It’s all the chatter these days. React and Vue are everywhere (is Angular still a thing?), while other emerging frameworks continue to attempt a push into the spotlight.
Over the last decade or so, we’ve seen an explosion of frameworks and tools that help us build sites systematically using components. Early frameworks like AngularJS helped shape the generic concept of web components. Web components are reusable bits of HTML that are written in JavaScript and made functional by the browser. They are client-side components.
But components, in a more generic sense, have actually been around much longer. In fact, they go back to the early days of the web. They just haven’t typically been called components, though they still function as such. Server components are also reusable bits of code, but are compiled into HTML before the browser sees them. They are server-side components, and they are still very much a thing today.
Even in a world in which all it seems like we hear is “React, React, React,” both types of components are still relevant and can help us build super awesome websites. Let’s explore how client and server components differ from one another. That will give us a clearer picture of where we came from. And then we’ll have the information we need to dream about the future.
Rendering
Perhaps the biggest difference between client-side and server-side components is what makes them what they are. That is the thing that is responsible for rendering them.
Server components are rendered by — you guessed it! — the server. They aren’t typically referred to as components. They’re often called partials, includes, snippets, or templates, depending on the framework in which they are used.
Server components come in two flavors. The first is the classic approach, which is to render components in real time based on a request from the client.
The second flavor is the Jamstack approach. In this case, the entire site is compiled during a build process, and static HTML is already available when requested by the client.
In both cases, the client (i.e. your browser) never sees the distinction between your components. It simply receives a bunch of HTML from the server.
Client components, on the other hand, are rendered by — you are two-for-two and on a ROLL! — the client. They are written in JavaScript and rendered by the client (your browser). Because the server is the server and it knows all, it can know about your client components, but whether it cares enough to do anything with them depends on the framework you’re using.
Like server components, there are also two flavors of client components. The first is the more official web component, which makes use of the shadow DOM. The shadow DOM helps with encapsulating styles and other functionality (we’ll talk more about this later). Frameworks like Polymer and Stencil make use of the shadow DOM.
The more popular frameworks, like React and Vue, represent the second flavor of component, which handles DOM manipulation and scoping on their own.
Interactivity
Because server components are just HTML when they are sent to the client, if they are to be interactive on the front end, the application must load JavaScript code separately.
Consider a countdown timer. Its presentation is determined by HTML and CSS (we’ll come back to the CSS part). But if it is to do its thing (count), it also needs some JavaScript. That means not just bringing in that JavaScript, but also having a means by which the JavaScript can attach itself to the countdown’s HTML element(s), which must either be done manually or with (yet) another framework.
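As a sketch of that manual wiring (the markup and ids here are my own, not from any particular framework), the separately loaded script has to find the server-rendered element and attach the behavior itself:

// Server-rendered HTML: <span id="countdown" data-deadline="2021-12-31T00:00:00Z"></span>
const el = document.querySelector('#countdown');
const deadline = new Date(el.dataset.deadline);

const tick = () => {
  const secondsLeft = Math.max(0, Math.round((deadline - Date.now()) / 1000));
  el.textContent = `${secondsLeft} seconds remaining`;
  if (secondsLeft > 0) setTimeout(tick, 1000);
};
tick();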
Though this may feel unnecessarily tedious (especially if you’ve been around long enough to have been forced into this approach), there are benefits to it. It is a clear separation of concerns, where server-side code lives in one place, while the functionality lives in another. And it brings only the code it needs for the interactivity (theoretically), which can lessen the burden on the browser.
With client components, the markup and interactivity tend to be tightly coupled, often in the same file or directory. While this can quickly become a mess if you’re not diligent about staying organized, one major benefit to client components is that they already have access to the client. And because they are written in JavaScript, their functionality can ship right alongside their markup (and styles).
Performance
In a one-to-one comparison, server-side components tend to perform better. When the page that a browser receives contains everything it needs for presentation, it’s going to be able to deliver that presentation to the user much quicker.
Because client-side components require JavaScript, the browser must download or process additional information (often in separate files) to be able to render the component.
That said, client-side components are often used within the context of a larger framework. React has Gatsby and Next, while Vue has Nuxt. These frameworks have mechanisms for creating a superior in-app experience. What I mean is that, while they may be slower to load the first page you visit on a site, they can then focus their energy on delivering subsequent views extremely fast — often faster than a server-side rendered site can deliver its content.
If you’re thinking, Yeah, but what about pre-rendering and…
Yes, you’re right. We’ll get there. Also, no more spoilers, please. The rest of us are along for the ride.
Languages
Server components can be written in (almost) any server-side language. This enables you to write your templates in the same language as your application’s logic. For example, applications written with Ruby on Rails use ERB templating by default, which is a form of Ruby. Thus, a Rails app uses the same language for the application itself as it does for its components.
The reason client components are written in JavaScript is because that’s the language browsers parse for interactivity on a website. However, JavaScript also has server-based runtimes, the most popular of which is Node.js. That means code for client components could be written in the same language as the application, as long as the application is written with Node (or similar).
Styling (CSS)
When it comes to styling components, server-side components run into the same trouble they face with JavaScript. The styles are typically detached from the components, and require a bit of extra effort to tie styles to the elements on the page.
However, there are frameworks like Tailwind CSS that are working to make this process less painful.
Many client-side component libraries come with CSS support (or at least a pattern for styling) right out of the box. That often means including the styles in the same file as the markup and logic, which can get messy. But typically, with a little effort, you can adjust that approach to your liking.
Welcome to the (hybrid) future
Neither type of component is the answer by itself. Server-side components require additional effort in styling and interactivity that feels unnecessary when we look at the offerings of client components. But then client components have a tendency to take away from performance on the front end. And because the success of a website often depends on user engagement, a lack of performance can hurt the end result and be enough not to want to use client components.
What does that mean for a future that demands both performance and a good developer experience? More than likely, a hybrid approach.
Components are going to have to be rendered on the server side. They just are. That’s how we optimize performance, and good performance is going to continue to be an attribute of successful websites. But, now that we’ve seen the ease of front-end logic and interactivity using frameworks like React and Vue, those frameworks are here to stay (at least for a while).
So where are we going?
I think we’re going to see these components come together in three ways in the very near future.
1. Advancement of JavaScript framework frameworks
Remember when you thought up that spoiler about pre-rendering? Well, let’s talk about it now.
Frameworks like Gatsby, Next, and Nuxt act as front-end engines built on top of component frameworks, like React and Vue. They bring together tooling to build a comprehensive front-end experience using their preferred framework. One such feature is pre-rendering, which means these engines will introspect components and then write static HTML on the page while the site is being built. Then, when users view that page, it’s actually already there. They don’t need JavaScript to view it.
However, JavaScript comes into play through a process called hydration. After the page loads and your user sees all the (static) content, that’s when JavaScript goes to work. It takes over the components to make them interactive. This provides the opportunity to build a client-side, component-based website with some of the benefits of the server, namely performance and SEO.
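In React, for instance, that handoff is a one-liner at the entry point (a sketch; it assumes the server has already rendered <App /> into #root):

import React from 'react';
import ReactDOM from 'react-dom';
import App from './App';

// The markup for <App /> already exists in #root as static HTML;
// hydrate() attaches event listeners to it instead of re-rendering from scratch.
ReactDOM.hydrate(<App />, document.getElementById('root'));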
These tools have gotten super popular because of this approach, and I suspect we’ll see them continue to advance.
2. Baked-in client-side pre-rendering
That’s a lot of compound words.
What I’ve been thinking about a lot over the last couple of years is: why doesn’t React (or Vue) take on server-side rendering? They do; it’s just not super easy to understand or implement without another framework to help.
On one hand, I understand the single-responsibility principle, and that these component frameworks are just ways to build client-side components. But it felt like a huge miss to delegate server-side rendering to bigger, more complex tools like Gatsby and Next (among others).
I think we’re going to see a lot more development as these traditionally client-side-focused tools solve for server-side rendering. I suspect that also means we’ll hear a little more from Svelte in the future, which seems like it’s ahead of the game in this regard.
That may also lead to the development of more competitors to bulkier tools like Gatsby and Next. For example, look at what Netlify is doing with their website. It’s an Eleventy project that pulls in Vue components and renders them for use on the server. What it’s missing is the hydration and interactivity piece. I expect that to come together in the very near future.
3. Server-side component interactivity
And still, we can’t discount the continued use of server-side components. The one side effect of the other two advancements is that they’re still using JavaScript frameworks, which can feel unnecessary when you only need a little interactivity.
There must be a simpler way to add just a little JavaScript to make a server-side component that is written in a server-side language more interactive.
Solving that problem seems to be the approach from the folks at Basecamp, who just released Hotwire, which is a means to bring some of the gains of client components to the server, using (almost) any server-side language.
I don’t know if that means we’re going to see competition to Hotwire emerge right away. But I do think Hotwire is going to get some attention. And that might just bring folks back to working with full-stack monolithic frameworks like Rails. (Personally, I love that Rails hasn’t become obsolete in this JavaScript-focused world. The more competition we have, the better the web gets.)
Where do you think all this component business is going? Let’s talk about it.
In this article, we explain how to build an integrated, interactive data visualization layer into an application with Cumul.io. To do so, we’ve built a demo application that visualizes Spotify playlist analytics! We use Cumul.io for our interactive dashboards, as it makes integration super easy and provides functionality that allows interaction between the dashboard and the application (i.e. custom events). The app is a simple JavaScript web app with a Node.js server, although you can, if you want, achieve the same with Angular, React, or React Native while using Cumul.io dashboards too.
Here, we build dashboards that display data from the Kaggle dataset “Spotify Dataset 1921–2020, 160k+ Tracks”, as well as data fetched via the Spotify Web API when a user logs in. The dashboards give insight into playlist and song characteristics. We’ve added some Cumul.io custom events that allow any end user visiting these dashboards to select songs from a chart and add them to one of their own Spotify playlists. They can also select a song to display more info on it, and play it from within the application. The code for the full application is also publicly available in an open repository.
Here’s a sneak peek at what the end result for the full version looks like.
What are Cumul.io custom events and their capabilities?
Simply put, Cumul.io custom events are a way to trigger events from a dashboard, to be used in the application that the dashboard is integrated in. You can add custom events into selected charts in a dashboard, and have the application listen for these events.
Why? The cool thing about this tool is how it allows you to reuse data from an analytics dashboard (a BI tool) within the application it’s built into. It gives you the freedom to define actions based on data that can be triggered straight from within an integrated dashboard, while keeping the dashboard (the analytics layer) a completely separate entity from the application, one that can be managed separately.
What they contain: Cumul.io custom events are attached to charts rather than dashboards as a whole, so the information an event carries is limited to the information the chart has.
An event is, simply put, a JSON object. This object contains fields such as the ID of the dashboard that triggered it, the name of the event, and a number of other fields depending on the type of chart the event was triggered from. For example, if the event was triggered from a scatter plot, you will receive the x-axis and y-axis values of the point it was triggered from. If it was triggered from a table, you would receive column values instead. Here are examples of what these events look like from different charts:
// 'Add to Playlist' custom event from a row in a table
{
  "type": "customEvent",
  "dashboard": "xxxx",
  "name": "xxxx",
  "object": "xxxx",
  "data": {
    "language": "en",
    "columns": [
      {"id": "Ensueno", "value": "Ensueno", "label": "Name"},
      {"id": "Vibrasphere", "value": "Vibrasphere", "label": "Artist"},
      {"value": 0.406, "formattedValue": "0.41", "label": "Danceability"},
      {"value": 0.495, "formattedValue": "0.49", "label": "Energy"},
      {"value": 180.05, "formattedValue": "180.05", "label": "Tempo (bpm)"},
      {"value": 0.568, "formattedValue": "0.5680", "label": "Accousticness"},
      {"id": "2007-01-01T00:00:00.000Z", "value": "2007", "label": "Release Date (Yr)"}
    ],
    "event": "add_to_playlist"
  }
}
// 'Song Info' custom event from a point in a scatter plot
{
  "type": "customEvent",
  "dashboard": "xxxx",
  "name": "xxxx",
  "object": "xxxx",
  "data": {
    "language": "en",
    "x-axis": {"id": 0.601, "value": "0.601", "label": "Danceability"},
    "y-axis": {"id": 0.532, "value": "0.532", "label": "Energy"},
    "name": {"id": "xxxx", "value": "xxx", "label": "Name"},
    "event": "song_info"
  }
}
The possibilities with this functionality are virtually limitless. Granted, depending on what you want to do, you may have to write a couple more lines of code, but it is unarguably quite a powerful tool!
The dashboard
We won’t actually go through the dashboard creation process here; we’ll focus on the interactivity bit once the dashboard is integrated into the application. The dashboards integrated in this walkthrough have already been created and have custom events enabled. You can, of course, create your own dashboards and integrate those instead of the ones we’ve pre-built (you can create an account with a free trial). But first, some background info on Cumul.io dashboards:
Cumul.io offers you a way to create dashboards from within the platform, or via its API. In either case, dashboards will be available within the platform, decoupled from the application you want to integrate them into, so they can be maintained completely separately.
On your landing page you’ll see your dashboards and can create a new one. You can open one and drag and drop any chart you want. You can then connect data, which you can drag and drop into those charts.
And that data can be one of a number of things, like a pre-existing database you connect to Cumul.io, a dataset from a data warehouse you use, a custom-built plugin, etc.
Enabling custom events
We have already enabled these custom events on the scatter plot and table in the dashboard used in this demo, which we will be integrating in the next section. If you want to go through this step, feel free to create your own dashboards too!
The first thing you need to do is add custom events to a chart. To do this, select a chart in your dashboard you’d like to add an event to. In the chart settings, select Interactivity and turn Custom Events on.
To add an event, click edit and define its Event Name and Label. The Event Name is what your application will receive, and the Label is what shows up on your dashboard. In our case, we’ve added two events: ‘Add to Playlist’ and ‘Song Info’.
This is all the setup you need for your dashboard to trigger an event on a chart level. Before you leave the editor, take note of your dashboard ID; you will need it to integrate the dashboard later. You can find it in the Settings tab of your dashboard. The rest of the work happens at the application level, where we define what we actually want to do once we receive any of these events.
Takeaway points
Events work on a chart level and will include information within the limits of the information on the chart
To add an event, go to the chart settings on the chart you want to add them to
Define name and label of event. And you’re done!
(Don’t forget to take note of the dashboard ID for integration)
Using custom events in your own platform
Now that you’ve added some events to the dashboard, the next step is to use them. The key point here is that, once you click an event in your dashboard, your application that integrates the dashboard receives an event. The Integration API provides a function to listen to these events, and then it’s up to you to define what you do with them. For more information on the API and code examples for your SDK, you can also check out the relevant developer docs.
For this section, we’re also providing an open GitHub repository (separate to the repository for the main application) that you can use as a starting project to add custom events to.
The cumulio-spotify-datatalks repository is structured so that you can check out the commit called skeleton to start from the beginning. Each of the following commits represents a step we go through here. It’s a boiled-down version of the full application, focusing on the main parts of the app that demonstrate custom events. I’ll be skipping some steps, such as the Spotify API calls (which are in src/spotify.js), so as to keep this tutorial to the theme of adding and using custom events.
Let’s have a look at what happens in our case. We created two events: add_to_playlist and song_info. We want visitors of our dashboard to be able to add a song to a playlist of their choice in their own Spotify account. In order to do so, we take the following steps:
First, we need to add a dashboard to our application. Here we use the Cumul.io Spotify Playlist dashboard as the main dashboard and the Song Info dashboard as the drill-through dashboard (meaning a dashboard that pops up within the main one when we trigger an event). If you have checked out the commit called skeleton and run npm run start, the application should currently just open an empty ‘Cumul.io Favorites’ tab with a Login button at the top right. (For instructions on how to run the project locally, see the bottom of the article.)
To integrate a dashboard, we will need to use the Cumulio.addDashboard() function. This function expects an object with dashboard options. Here’s what we do to add the dashboard:
In src/app.js, we create an object that stores the dashboard IDs for the main dashboard and the drill-through dashboard that displays song info, alongside a dashboardOptions object:
// create dashboards object with the dashboard ids and dashboardOptions object
// !!!change these IDs if you want to use your own dashboards!!!
const dashboards = {
  playlist: 'f3555bce-a874-4924-8d08-136169855807',
  songInfo: 'e92c869c-2a94-406f-b18f-d691fd627d34',
};

const dashboardOptions = {
  dashboardId: dashboards.playlist,
  container: '#dashboard-container',
  loader: {
    background: '#111b31',
    spinnerColor: '#f44069',
    spinnerBackground: '#0d1425',
    fontColor: '#ffffff'
  }
};
We create a loadDashboard() function that calls Cumulio.addDashboard(). This function optionally receives a container, and it modifies the dashboardOptions object before adding the dashboard to the application.
// create a loadDashboard() function that expects a dashboard ID and container
const loadDashboard = (id, container) => {
  dashboardOptions.dashboardId = id;
  dashboardOptions.container = container || '#dashboard-container';
  Cumulio.addDashboard(dashboardOptions);
};
Finally, we use this function to add our playlist dashboard when we load the Cumul.io Favorites tab:
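// e.g. when the Cumul.io Favorites tab is opened (a sketch; the repo wires
// this into its own tab-loading code)
loadDashboard(dashboards.playlist);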
At this point, we’ve integrated the playlist dashboard and when we click on a point in the Energy/Danceability by Song scatter plot, we get two options with the custom events we added earlier. However, we’re not doing anything with them yet.
Listen to incoming events
Now that we’ve integrated the dashboard, we can tell our app to do stuff when it receives an event. The two charts that have the ‘Add to Playlist’ and ‘Song Info’ events here are the Energy/Danceability by Song scatter plot and the table of songs.
First, we need to set up our code to listen for incoming events. To do so, we use the Cumulio.onCustomEvent() function. Here, we chose to wrap it in a listenToEvents() function that can be called when we load the Cumul.io Favorites tab. We then use if statements to check which event we’ve received (the full code is shown a little further below).
This is the point after which things are up to your needs and creativity. For example, you could simply print a line to your console, or design your own behavior around the data you receive from the event. Or you could use some of the helper functions we’ve created, which display a playlist selector for adding a song to a playlist and integrate the Song Info dashboard. Here’s how we did it.
Add song to playlist
Here, we make use of the addToPlaylistSelector() function in src/ui.js. This function expects a song name and ID, and displays a window with all the available playlists of the logged-in user. It then posts a Spotify API request to add the song to the selected playlist. As the Spotify Web API requires the ID of a song in order to add it, we’ve created a derived Name & ID field to be used in the scatter plot.
An example event we receive on add_to_playlist will include the following for the scatter plot:
"name":{"id":"So Far To Go&id=3R8CATui5dGU42Ddbc2ixE","value":"So Far To Go&id=3R8CATui5dGU42Ddbc2ixE","label":"Name & ID"}
We extract the Name and ID of the song from the event via the getSong() function, then call the ui.addToPlaylistSelector() function:
/*********** LISTEN TO CUSTOM EVENTS AND ADD EXTRAS ************/
const getSong = (event) => {
  let songName;
  let songArtist;
  let songId;
  if (event.data.columns === undefined) {
    songName = event.data.name.id.split('&id=')[0];
    songId = event.data.name.id.split('&id=')[1];
  }
  else {
    songName = event.data.columns[0].value;
    songArtist = event.data.columns[1].value;
    songId = event.data.columns[event.data.columns.length - 1].value;
  }
  return { id: songId, name: songName, artist: songArtist };
};

const listenToEvents = () => {
  Cumulio.onCustomEvent(async (event) => {
    const song = getSong(event);
    console.log(JSON.stringify(event));
    if (event.data.event === 'add_to_playlist') {
      await ui.addToPlaylistSelector(song.name, song.id);
    }
    else if (event.data.event === 'song_info') {
      // DO SOMETHING
    }
  });
};
Now, the ‘Add to Playlist’ event will display a window with the available playlists that a logged-in user can add the song to.
Display more song info
The final thing we want to do is make the ‘Song Info’ event display another dashboard when clicked. It will display further information on the selected song, and include an option to play the song. This is also the step where we get into some more complicated use cases of the API, which may need some background knowledge. Specifically, we make use of parameterizable filters. The idea is to create a parameter on your dashboard whose value can be defined while creating an authorization token. We include the parameter as metadata when creating the authorization token.
For this step, we have created a songId parameter that is used in a filter on the Song Info dashboard.
Then, we create a getDashboardAuthorizationToken() function. This expects metadata which it then posts to the /authorization endpoint of our server in server/server.js:
const getDashboardAuthorizationToken = async (metadata) => {
  try {
    const body = {};
    if (metadata && typeof metadata === 'object') {
      Object.keys(metadata).forEach(key => {
        body[key] = metadata[key];
      });
    }
    /*
      Make the call to the backend API, using the platform user access credentials
      in the header to retrieve a dashboard authorization token for this user
    */
    const response = await fetch('/authorization', {
      method: 'post',
      body: JSON.stringify(body),
      headers: { 'Content-Type': 'application/json' }
    });
    // Fetch the JSON result with the Cumul.io Authorization key & token
    const responseData = await response.json();
    return responseData;
  }
  catch (e) {
    return { error: 'Could not retrieve dashboard authorization token.' };
  }
};
Finally, we load the songInfo dashboard when the song_info event is triggered. In order to do this, we create a new authorization token using the song ID:
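// A sketch (the helper and field names here are assumptions, not the repo's
// literal code): request a token that carries the songId as metadata, then
// load the Song Info dashboard with that key/token pair.
const loadSongInfo = async (song) => {
  const authorization = await getDashboardAuthorizationToken({ songId: song.id });
  dashboardOptions.dashboardId = dashboards.songInfo;
  dashboardOptions.key = authorization.id;      // authorization key
  dashboardOptions.token = authorization.token; // authorization token
  Cumulio.addDashboard(dashboardOptions);
};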
And voilà! We are done! In this demo we used a lot of helper functions I haven’t gone through in detail, but you are free to clone the demo repository and play around with them. You can even disregard them and build your own functionality around the custom events.
Conclusion
For anyone intending to add a layer of data visualization and analytics to their application, Cumul.io provides a pretty easy way of achieving it, as I’ve tried to demonstrate throughout this demo. The dashboards remain decoupled entities that can be managed separately from the application. This becomes quite an advantage if, say, you’re using integrated analytics in a business setting and you’d rather not have developers going back and fiddling with dashboards all the time.
The events you can trigger from dashboards, and listen for in their host applications, allow you to define implementations based on the information in those decoupled dashboards. This can be anything from playing a song, in our case, to triggering a specific email to be sent. The world is your oyster in this sense: you decide what to do with the data you have from your analytics layer. In other words, you get to reuse the data from your dashboards; it doesn’t have to just stay there in its dashboard and analytics world 🙂
At Google’s Search On event in October last year, Prabhakar Raghavan explained that 15% of daily queries are ones that have never been searched before. If we take the latest figures from Internet Live Stats, which state 3.5 billion queries are searched every day, that means that 525 million of those queries are brand new.
That is a huge number of opportunities waiting to be identified and worked into strategies, optimization, and content plans. The trouble is, all of the usual keyword research tools are, at best, a month behind with the data they can provide. Even then, the volumes they report need to be taken with a grain of salt – you’re telling me there are only 140 searches per month for “women’s discount designer clothing”? – and if you work in B2B industries, those searches are generally much smaller volumes to begin with.
So, we know there are huge amounts of searches available, with more and more being added every day, but without the data to see volumes, how do we know what we should be working into strategies? And how do we find these opportunities in the first place?
Finding the opportunities
The usual tools we turn to aren’t going to be much use for keywords and topics that haven’t been searched in volume previously. So, we need to get a little creative — both in where we look, and in how we identify the potential of queries in order to start prioritizing and working them into strategies. This means doing things like:
Mining People Also Ask
Scraping autosuggest
Drilling into related keyword themes
Mining People Also Ask
People Also Ask is a great place to start looking for new keywords, and tends to be more up to date than the various tools you would normally use for research. The trap most marketers fall into is looking at this data on a small scale, realizing that (being longer-tail terms) they don’t have much volume, and discounting them from approaches. But when you follow a larger-scale process, you can get much more information about the themes and topics that users are searching for and can start plotting this over time to see emerging topics faster than you would from standard tools.
1. Start with a seed list of keywords or topics you want to explore.
2. Use SerpAPI to run your keywords through the API call (there’s a rough code sketch after this list) – you can also try it yourself through their demo interface.
3. Export the “related questions” features returned in the API call and map them to overall topics using a spreadsheet.
4. Export the “related search boxes” and map these to overall topics as well.
5. Look for consistent themes in the topics being returned across related questions and searches.
6. Add these overall themes to your preferred research tool to identify additional related opportunities. For example, we can see coffee + health is a consistent topic area, so you can add that as an overall theme to explore further through advanced search parameters and modifiers.
7. Add these as seed terms to your preferred research tool to pull out related queries, using broad match (+coffee health) and phrase match (“coffee health”) modifiers to return more relevant queries.
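To make step 2 concrete, here’s a rough sketch of pulling People Also Ask questions out of SerpAPI from Node (the helper is hypothetical; check SerpAPI’s docs for the full response shape):

// Fetch Google results for a seed keyword via SerpAPI and extract the
// People Also Ask ("related_questions") entries.
// SERPAPI_KEY is a placeholder for your own API key.
const fetch = require('node-fetch');

const getRelatedQuestions = async (seedKeyword) => {
  const params = new URLSearchParams({
    engine: 'google',
    q: seedKeyword,
    api_key: process.env.SERPAPI_KEY
  });
  const response = await fetch(`https://serpapi.com/search.json?${params}`);
  const data = await response.json();
  return (data.related_questions || []).map(item => item.question);
};

getRelatedQuestions('coffee health').then(console.log);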
This then gives you a set of additional “suggested queries” to broaden your search (e.g. coffee benefits) as well as related keyword ideas you can explore further.
This is also a great place to start for identifying differences in search queries by location, like if you want to see different topics people are searching for in the UK vs. the US, then SerpAPI allows you to do that at a larger scale.
If you’re looking to do this on a smaller scale, or without the need to set up an API, you can also use this really handy tool from Candour – Also Asked – which pulls out the related questions for a broad topic and allows you to save the data as a .csv or an image for quick review.
Once you’ve identified all of the topics people are searching for, you can start drilling into new keyword opportunities around them and assess how they change over time. Many of these opportunities don’t have swathes of historical data reported in the usual research tools, but we know that people are searching for them and can use them to inform future content topics as well as immediate keyword opportunities.
You can also track these People Also Ask features to identify when your competitors are appearing in them, and get a better idea of how they’re changing their strategies over time and what kind of content and keywords they might also be targeting. At Found, we use our bespoke SERP Real Estate tool to do just that (and much more) so we can spot these opportunities quickly and work them into our approaches.
Scraping autosuggest
This one doesn’t need an API, but you’ll need to be careful with how frequently you use it, so you don’t start triggering the dreaded captchas.
Similar to People Also Ask, you can scrape the autosuggest queries from Google to quickly identify related searches people are entering. This tends to work better on a small scale, just because of the manual process behind it. You can try setting up a crawl with various parameters entered and a custom extraction, but Google will be pretty quick to pick up on what you’re doing.
To scrape autosuggest, you use a very simple URL query string:
https://ift.tt/3aep6p5
Okay, it doesn’t look that simple, but it’s essentially a search query that outputs all of the suggested queries for your seed query.
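Since the link above is shortened, for reference the underlying endpoint is typically of this form (an assumption on my part, based on Google’s long-standing suggest API and the gl=uk parameter referenced below):

http://suggestqueries.google.com/complete/search?output=toolbar&gl=uk&q={your query}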
So, if you were to enter “cyber security” after the “q=”, you would get back the most common suggested queries for that seed term. Not only is this a goldmine for identifying additional queries, but it can surface some of the newer queries that have started trending, as well as information related to those queries that the usual tools won’t provide data for.
For example, if you want to know what people are searching for related to COVID-19, you can’t get that data in Keyword Planner or most tools that utilize the platform, because of the advertising restrictions around it. But if you add it to the suggest queries string, you can see the related searches people are actually entering.
This can give you a starting point for new queries to cover without relying on historical volume. And it doesn’t just give you suggestions for broad topics – you can add whatever query you want and see what related suggestions are returned.
If you want to take this to another level, you can change the location settings in the query string: instead of “gl=uk”, you can use “gl=us” and see the suggested queries from the US. This then opens up another opportunity to look for differences in search behavior across different locations, and to start identifying differences in the type of content you should be focusing on in different regions — particularly if you’re working on international websites or targeting international audiences.
Refining topic research
Although the usual tools won’t give you that much information on brand new queries, they can be a goldmine for identifying additional opportunities around a topic. So, if you have mined the PAA feature, scraped autosuggest, and grouped all of your new opportunities into topics and themes, you can enter these identified “topics” as seed terms to most keyword tools.
Google Ads Keyword Planner
Currently in beta, Google Ads now offers a “Refine keywords” feature as part of their Keyword Ideas tool, which is great for identifying keywords related to an overarching topic.
Below is an example of the types of keywords returned for a “coffee” search:
Here we can see the keyword ideas have been grouped into:
Brand or Non-Brand – keywords relating to specific companies
Drink – types of coffee, e.g. espresso, iced coffee, brewed coffee
Product – capsules, pods, instant, ground
Method – e.g. cold brew, French press, drip coffee
These topic groupings are fantastic for finding additional areas to explore. You can either:
Start here with an overarching topic to identify related terms and then go through the PAA/autosuggest identification process.
Start with the PAA/autosuggest identification process and put your new topics into Keyword Planner.
Whichever way you go about it, I’d recommend doing a few runs so you can get as many new ideas as possible. Once you’ve identified the topics, run them through the refine keywords beta to pull out more related topics, then run those through the PAA/autosuggest process to get more topics, and repeat a few times depending on how many areas you want to explore or how in-depth you need your research to be.
Google Trends
Trends data is one of the most up-to-date sets you can look at for topics and specific queries. However, it is worth noting that for some topics, it doesn’t hold any data, so you might run into problems with more niche areas.
Using “travel ban” as an example, we can see the trends in searches as well as related topics and specific related queries.
Now, for new opportunities, you aren’t going to find a huge amount of data, but if you’ve grouped your opportunities into overarching topics and themes, you’ll be able to find some additional opportunities from the “Related topics” and “Related queries” sections.
In the example above we see these sections include specific locations and specific mentions of coronavirus – something that Keyword Planner won’t provide data on as you can’t bid on it.
Drilling into the different related topics and queries here will give you a bit more insight into additional areas to explore that you may not have otherwise been able to identify (or validate) through other Google platforms.
Moz Keyword Explorer
The Moz interface is a great starting point for validating keyword opportunities, as well as identifying what’s currently appearing in the SERPs for those terms. For example, a search for “london theatre” returns a breakdown of metrics like monthly volume and difficulty, along with an analysis of the current SERP.
From here, you can drill into the keyword suggestions and start grouping them into themes, as well as review the current SERP and see what kind of content is appearing. This is particularly useful for understanding the intent behind the terms, to make sure you’re looking at the opportunities from the right angle: if a lot more ticket sellers are showing than news and guides, for example, then you want to focus these opportunities on commercial pages rather than informational content.
Other tools
There are a variety of other tools you can use to further refine your keyword topics and identify new related ideas, including the likes of SEMRush, AHREFS, Answer The Public, Ubersuggest, and Sistrix, all offering relatively similar methods of refinement.
The key is identifying the opportunities you want to explore further, looking through the PAA and autosuggest queries, grouping them into themes, and then drilling into those themes.
Keyword research is an ever-evolving process, and the ways in which you can find opportunities are always changing, so how do you then start planning these new opportunities into strategies?
Forming a plan
Once you’ve got all of the data, you need to be able to formalize it into a plan to know when to start creating content, when to optimize pages, and when to put them on the back burner for a later date.
A quick (and consistent) way you can easily plot these new opportunities into your existing plans and strategies is to follow this process:
Identify new searches and group into themes
Monitor changes in new searches. Run the exercise once a month to see how much they change over time
Plot trends in changes alongside industry developments. Was there an event that changed what people were searching for?
Group the opportunities into actions: create, update, optimize.
Group the opportunities into time-based categories: topical, interest, evergreen, growing, etc.
Plot timeframes around the content pieces. Anything topical gets moved to the top of the list, growing themes can be plotted in around them, interest-based can be slotted in throughout the year, and evergreen pieces can be turned into more hero-style content.
Then you end up with a plan that covers:
All of your planned content.
All of your existing content and any updates you might want to make to include the new opportunities.
A revised optimization approach to work in new keywords on existing landing pages.
A revised FAQ structure to answer queries people are searching for (before your competitors do).
Developing themes of content for hubs and category page expansion.
Conclusion
Finding new keyword opportunities is imperative to staying ahead of the competition. New keywords mean new ways of searching, new information your audience needs, and new requirements to meet. With the processes outlined above, you’ll be able to keep on top of these emerging topics to plan your strategies and priorities around them. The world of search will always change, but the needs of your audience — and what they are searching for — should always be at the center of your plans.