Thursday, 30 November 2017

Font of the Month Club

Every month for the past year, David Jonathan Ross has been publishing a new font to his Font of the Month Club. It’s only $6 for a monthly subscription and it provides early access to some of his work. I’d highly recommend signing up because each design is weird and intriguing in a very good way:

Join the Font of the Month Club and get a fresh new font delivered to your inbox every single month! Each font is lovingly designed and produced by me, David Jonathan Ross.

Fonts of the month are not available anywhere else, and will include my distinctive display faces, experimental designs, and exclusive previews of upcoming retail typeface families.



Font of the Month Club is a post from CSS-Tricks



from CSS-Tricks http://ift.tt/2rioutG
via IFTTT


Wednesday, 29 November 2017

Localisation and Translation on the Web

The other day, Chris wrote about how the CodePen team added lang='en' to the html element in all pens for accessibility reasons. I thought it was pretty interesting, and it made me want to learn more about that attribute, since I’ve never designed a website in any language other than English and it might be useful in the future.

As if by magic, Ire Aderinokun published this piece on Localisation and Translation on the Web just a couple of days later, and thankfully it answers all those questions I had:

Coming from the English-speaking world, it can be easy to maintain the bubble that is the English-speaking World Wide Web. But in fact, more than half of web pages are written in languages other than English.

Since starting work at eyeo, I’ve had to think a lot more about localisation and translations because most of our websites are translated into several languages, something I previously didn’t have to really consider before. Once you decide to translate a web page, there are many things to take into account, and a lot of them I've found are useful even if your website is written in only one language.

I had no idea about the experimental, and currently unsupported, translate attribute or the mysterious margin-inline-start CSS property. Handy stuff!
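For a rough idea of how these fit together, here’s a minimal sketch; the markup and copy are made up, and support for both features was still patchy at the time:

    <html lang="en">
      <head>
        <style>
          /* Logical property: acts like margin-left in left-to-right scripts
             and like margin-right in right-to-left scripts */
          blockquote { margin-inline-start: 1em; }
        </style>
      </head>
      <body>
        <!-- translate="no" asks translation tools to leave the brand name untouched -->
        <p>Our editor, <span translate="no">CodePen</span>, works in any language.</p>
        <blockquote>Indented from the start edge, whichever side that is.</blockquote>
      </body>
    </html>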



Localisation and Translation on the Web is a post from CSS-Tricks



from CSS-Tricks http://ift.tt/2nhgY1F
via IFTTT

Fontastic Web Performance

In this talk, Monica Dinculescu takes a deep dive into webfonts and how the font-display CSS property lets us control the way those fonts are rendered. She argues that there are all sorts of huge performance gains to be had if we just spend a little bit of time thinking about the total number of fonts we load and how they’re loaded.

Also, Monica made a handy demo that gives an even more detailed series of examples of how the font-display property works:

This depends a lot on how you are using your webfont, and whether rendering the text in a fallback font makes sense. For example, if you're rendering the main body text on a site, you should use font-display:optional. On browsers that implement it, like Chrome, the experience will be much nicer: your users will get fast content, and if the web font download takes too long, they won't get a page relayout halfway through reading your article.

If you're using a web font for icons, there is no acceptable fallback font you can render these icons in (unless you're using emoji or something), so your only option is to completely block until the font is ready, with font-display:block.
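For reference, font-display is set inside the @font-face rule. Here’s a minimal sketch along the lines of Monica’s advice, with placeholder font names and paths:

@font-face {
  font-family: 'Body Font';
  src: url('/fonts/body-font.woff2') format('woff2');
  /* Body copy: render the fallback right away and only swap in the webfont
     if it arrives very quickly (or is already cached), avoiding a relayout */
  font-display: optional;
}

@font-face {
  font-family: 'Icon Font';
  src: url('/fonts/icon-font.woff2') format('woff2');
  /* Icons have no sensible fallback, so hold rendering until the font is ready */
  font-display: block;
}

body {
  font-family: 'Body Font', Georgia, serif;
}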



Fontastic Web Performance is a post from CSS-Tricks



from CSS-Tricks http://ift.tt/2neI9ug
via IFTTT

WordPress + React

I posted just 2 months ago about Foxhound and how I found it pretty cool, but also curious that it was one of very few themes around that combine the WordPress JSON API and React, even though they seem like a perfect natural fit. Like a headless CMS, almost.

Since then, a few more things have crossed my desk of people doing more with this idea and combination.

Maxime Laboissonniere wrote Strapping React.js on a WordPress Backend: WP REST API Example:

I'll use WordPress as a backend, and WordPress REST API to feed data into a simple React e-commerce SPA:

  • Creating products with the WP Advanced Custom Fields plugin
  • Mapping custom fields to JSON payload
  • Consuming the JSON REST API with React
  • Rendering products in our store
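As a rough illustration of the “consuming the JSON REST API with React” step, here’s a minimal sketch; the domain is a placeholder, and a real store would hit a custom post type (and its ACF fields) rather than the default posts endpoint:

import React from 'react';

class ProductList extends React.Component {
  constructor(props) {
    super(props);
    this.state = { products: [] };
  }

  componentDidMount() {
    // Standard WP REST API route; swap in your own post type's endpoint
    fetch('https://example.com/wp-json/wp/v2/posts')
      .then(response => response.json())
      .then(products => this.setState({ products }));
  }

  render() {
    return (
      <ul className="products">
        {this.state.products.map(product => (
          <li key={product.id}>{product.title.rendered}</li>
        ))}
      </ul>
    );
  }
}

export default ProductList;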

Perhaps more directly usable, Postlight has put out a Starter Kit. Gina Trapani:

People who publish on the web love WordPress. Engineers love React. With some research, configuration, and trial and error, you can have both — but we’d like to save you the work.

Here's that repo.


WordPress + React is a post from CSS-Tricks



from CSS-Tricks http://ift.tt/2neuSSh
via IFTTT

The Complete Guide to Direct Traffic in Google Analytics

Posted by tombennet

When it comes to direct traffic in Analytics, there are two deeply entrenched misconceptions.

The first is that it’s caused almost exclusively by users typing an address into their browser (or clicking on a bookmark). The second is that it’s a Bad Thing, not because it has any overt negative impact on your site’s performance, but rather because it’s somehow immune to further analysis. The prevailing attitude amongst digital marketers is that direct traffic is an unavoidable inconvenience; as a result, discussion of direct is typically limited to ways of attributing it to other channels, or side-stepping the issues associated with it.

In this article, we’ll be taking a fresh look at direct traffic in modern Google Analytics. As well as exploring the myriad ways in which referrer data can be lost, we’ll look at some tools and tactics you can start using immediately to reduce levels of direct traffic in your reports. Finally, we’ll discover how advanced analysis and segmentation can unlock the mysteries of direct traffic and shed light on what might actually be your most valuable users.

What is direct traffic?

In short, Google Analytics will report a traffic source of "direct" when it has no data on how the session arrived at your website, or when the referring source has been configured to be ignored. You can think of direct as GA’s fall-back option for when its processing logic has failed to attribute a session to a particular source.

To properly understand the causes and fixes for direct traffic, it’s important to understand exactly how GA processes traffic sources. The following flow-chart illustrates how sessions are bucketed — note that direct sits right at the end as a final "catch-all" group.

Broadly speaking, and disregarding user-configured overrides, GA’s processing follows this sequence of checks:

AdWords parameters > Campaign overrides > UTM campaign parameters > Referred by a search engine > Referred by another website > Previous campaign within timeout period > Direct

Note the penultimate processing step (previous campaign within timeout), which has a significant impact on the direct channel. Consider a user who discovers your site via organic search, then returns via direct a week later. Both sessions would be attributed to organic search. In fact, campaign data persists for up to six months by default. The key point here is that Google Analytics is already trying to minimize the impact of direct traffic for you.
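If it helps to see that waterfall as code, here’s a rough sketch; the field names and helper are invented for illustration, and this is not Google’s actual implementation:

// Simplified attribution waterfall (illustrative only)
function attributeSession(hit, previousCampaign) {
  const searchEngines = ['google.', 'bing.', 'yahoo.', 'duckduckgo.'];
  const isSearchEngine = referrer => searchEngines.some(engine => referrer.includes(engine));

  if (hit.gclid) return 'AdWords';                   // auto-tagged ad click
  if (hit.campaignOverride) return 'Campaign override';
  if (hit.utmSource) return 'UTM campaign';
  if (hit.referrer && isSearchEngine(hit.referrer)) return 'Organic search';
  if (hit.referrer) return 'Referral';
  if (previousCampaign && !previousCampaign.expired) return previousCampaign.source; // within timeout
  return 'Direct';                                   // the catch-all
}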

What causes direct traffic?

Contrary to popular belief, there are actually many reasons why a session might be missing campaign and traffic source data. Here we will run through some of the most common.

1. Manual address entry and bookmarks

The classic direct-traffic scenario, this one is largely unavoidable. If a user types a URL into their browser’s address bar or clicks on a browser bookmark, that session will appear as direct traffic.

Simple as that.

2. HTTPS > HTTP

When a user follows a link on a secure (HTTPS) page to a non-secure (HTTP) page, no referrer data is passed, meaning the session appears as direct traffic instead of as a referral. Note that this is intended behavior. It’s part of how the secure protocol was designed, and it does not affect other scenarios: HTTP to HTTP, HTTPS to HTTPS, and even HTTP to HTTPS all pass referrer data.

So, if your referral traffic has tanked but direct has spiked, it could be that one of your major referrers has migrated to HTTPS. The inverse is also true: If you’ve migrated to HTTPS and are linking to HTTP websites, the traffic you’re driving to them will appear in their Analytics as direct.

If your referrers have moved to HTTPS and you’re stuck on HTTP, you really ought to consider migrating to HTTPS. Doing so (and updating your backlinks to point to HTTPS URLs) will bring back any referrer data which is being stripped from cross-protocol traffic. SSL certificates can now be obtained for free thanks to automated authorities like Let's Encrypt, but that’s not to say you should neglect the potentially significant SEO implications of a site migration. Remember, HTTPS and HTTP/2 are the future of the web.

If, on the other hand, you’ve already migrated to HTTPS and are concerned about your users appearing to partner websites as direct traffic, you can implement the meta referrer tag. Cyrus Shepard has written about this on Moz before, so I won’t delve into it now. Suffice to say, it’s a way of telling browsers to pass some referrer data to non-secure sites, and can be implemented as a <meta> element or HTTP header.
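As a quick illustration, the tag version might look something like this (pick the policy value that suits your privacy requirements; the same policy can also be sent as a Referrer-Policy HTTP response header):

<meta name="referrer" content="origin-when-cross-origin">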

3. Missing or broken tracking code

Let’s say you’ve launched a new landing page template and forgotten to include the GA tracking code. Or, to use a scenario I’m encountering more and more frequently, imagine your GTM container is a horrible mess of poorly configured triggers, and your tracking code is simply failing to fire.

Users land on this page without tracking code. They click on a link to a deeper page which does have tracking code. From GA’s perspective, the first hit of the session is the second page visited, meaning that the referrer appears as your own website (i.e. a self-referral). If your domain is on the referral exclusion list (as per default configuration), the session is bucketed as direct. This will happen even if the first URL is tagged with UTM campaign parameters.

As a short-term fix, you can try to repair the damage by simply adding the missing tracking code. To prevent it happening again, carry out a thorough Analytics audit, move to a GTM-based tracking implementation, and promote a culture of data-driven marketing.

4. Improper redirection

This is an easy one. Don’t use meta refreshes or JavaScript-based redirects — these can wipe or replace referrer data, leading to direct traffic in Analytics. You should also be meticulous with your server-side redirects, and — as is often recommended by SEOs — audit your redirect file frequently. Complex chains are more likely to result in a loss of referrer data, and you run the risk of UTM parameters getting stripped out.

Once again, control what you can: use carefully mapped (i.e. non-chained) code 301 server-side redirects to preserve referrer data wherever possible.
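For example, a carefully mapped, single-hop redirect in nginx might look like this (the paths are placeholders; Apache’s Redirect 301 achieves the same thing):

# One hop; $is_args$args carries the query string (including any UTM parameters) across the redirect
location = /old-page/ {
    return 301 /new-page/$is_args$args;
}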

5. Non-web documents

Links in Microsoft Word documents, slide decks, or PDFs do not pass referrer information. By default, users who click these links will appear in your reports as direct traffic. Clicks from native mobile apps (particularly those with embedded "in-app" browsers) are similarly prone to stripping out referrer data.

To a degree, this is unavoidable. Much like so-called “dark social” visits (discussed in detail below), non-web links will inevitably result in some quantity of direct traffic. However, you also have an opportunity here to control the controllables.

If you publish whitepapers or offer downloadable PDF guides, for example, you should be tagging the embedded hyperlinks with UTM campaign parameters. You’d never even contemplate launching an email marketing campaign without campaign tracking (I hope), so why would you distribute any other kind of freebie without similarly tracking its success? In some ways this is even more important, since these kinds of downloadables often have a longevity not seen in a single email campaign. Here’s an example of a properly tagged URL which we would embed as a link:

http://ift.tt/2ifQkVi?..._medium=offline_document&utm_campaign=201711_utm_whitepaper

The same goes for URLs in your offline marketing materials. For major campaigns it’s common practice to select a short, memorable URL (e.g. moz.com/tv/) and design an entirely new landing page. It’s possible to bypass page creation altogether: simply redirect the vanity URL to an existing page URL which is properly tagged with UTM parameters.

So, whether you tag your URLs directly, use redirected vanity URLs, or — if you think UTM parameters are ugly — opt for some crazy-ass hash-fragment solution with GTM (read more here), the takeaway is the same: use campaign parameters wherever it’s appropriate to do so.

6. “Dark social”

This is a big one, and probably the least well understood by marketers.

The term “dark social” was first coined back in 2012 by Alexis Madrigal in an article for The Atlantic. Essentially it refers to methods of social sharing which cannot easily be attributed to a particular source, like email, instant messaging, Skype, WhatsApp, and Facebook Messenger.

Recent studies have found that upwards of 80% of consumers’ outbound sharing from publishers’ and marketers’ websites now occurs via these private channels. In terms of numbers of active users, messaging apps are outpacing social networking apps. All the activity driven by these thriving platforms is typically bucketed as direct traffic by web analytics software.

People who use the ambiguous phrase “social media marketing” are typically referring to advertising: you broadcast your message and hope people will listen. Even if you overcome consumer indifference with a well-targeted campaign, any subsequent interactions are affected by their very public nature. The privacy of dark social, by contrast, represents a potential goldmine of intimate, targeted, and relevant interactions with high conversion potential. Nebulous and difficult to track though it may be, dark social has the potential to let marketers tap into the elusive power of word of mouth.

So, how can we minimize the amount of dark social traffic which is bucketed under direct? The unfortunate truth is that there is no magic bullet: proper attribution of dark social requires rigorous campaign tracking. The optimal approach will vary greatly based on your industry, audience, proposition, and so on. For many websites, however, a good first step is to provide convenient and properly configured sharing buttons for private platforms like email, WhatsApp, and Slack, thereby ensuring that users share URLs appended with UTM parameters (or vanity/shortened URLs which redirect to the same). This will go some way towards shining a light on part of your dark social traffic.
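As one hypothetical example, an email share button could pre-fill a message whose URL already carries campaign parameters (every name below is made up):

    <!-- The shared link is pre-tagged, so resulting sessions are attributed to the campaign
         instead of being lumped into direct -->
    <a href="mailto:?subject=Worth%20a%20read&amp;body=https%3A%2F%2Fexample.com%2Fguide%3Futm_source%3Demail%26utm_medium%3Ddark_social%26utm_campaign%3Dguide_share">
      Share by email
    </a>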

Checklist: Minimizing direct traffic

To summarize what we’ve already discussed, here are the steps you can take to minimize the level of unnecessary direct traffic in your reports:

  1. Migrate to HTTPS: Not only is the secure protocol your gateway to HTTP/2 and the future of the web, it will also have an enormously positive effect on your ability to track referral traffic.
  2. Manage your use of redirects: Avoid chains and eliminate client-side redirection in favour of carefully-mapped, single-hop, server-side 301s. If you use vanity URLs to redirect to pages with UTM parameters, be meticulous.
  3. Get really good at campaign tagging: Even amongst data-driven marketers I encounter the belief that UTM begins and ends with switching on automatic tagging in your email marketing software. Others go to the other extreme, doing silly things like tagging internal links. Control what you can, and your ability to carry out meaningful attribution will markedly improve.
  4. Conduct an Analytics audit: Data integrity is vital, so consider this essential when assessing the success of your marketing. It’s not simply a case of checking for missing tracking code: good audits involve a review of your measurement plan and rigorous testing at page and property level.

Adhere to these principles, and it’s often possible to achieve a dramatic reduction in the level of direct traffic reported in Analytics. The following example involved an HTTPS migration, GTM migration (as part of an Analytics review), and an overhaul of internal campaign tracking processes over the course of about 6 months:

But the saga of direct traffic doesn’t end there! Once this channel is “clean” — that is, once you’ve minimized the number of avoidable pollutants — what remains might actually be one of your most valuable traffic segments.

Analyze! Or: why direct traffic can actually be pretty cool

For reasons we’ve already discussed, traffic from bookmarks and dark social is an enormously valuable segment to analyze. These are likely to be some of your most loyal and engaged users, and it’s not uncommon to see a notably higher conversion rate for a clean direct channel compared to the site average. You should make the effort to get to know them.

The number of potential avenues to explore is infinite, but here are some good starting points:

  • Build meaningful custom segments, defining a subset of your direct traffic based on their landing page, location, device, repeat visit or purchase behavior, or even enhanced e-commerce interactions.
  • Track meaningful engagement metrics using modern GTM triggers such as element visibility and native scroll tracking. Measure how your direct users are using and viewing your content.
  • Watch for correlations with your other marketing activities, and use it as an opportunity to refine your tagging practices and segment definitions. Create a custom alert which watches for spikes in direct traffic.
  • Familiarize yourself with flow reports to get an understanding of how your direct traffic is converting. By using Goal Flow and Behavior Flow reports with segmentation, it’s often possible to glean actionable insights which can be applied to the site as a whole.
  • Ask your users for help! If you’ve isolated a valuable segment of traffic which eludes deeper analysis, add a button to the page offering visitors a free downloadable ebook if they tell you how they discovered your page.
  • Start thinking about lifetime value, if you haven’t already — overhauling your attribution model or implementing User ID are good steps towards overcoming the indifference or frustration felt by marketers towards direct traffic.

I hope this guide has been useful. With any luck, you arrived looking for ways to reduce the level of direct traffic in your reports, and left with some new ideas for how to better analyze this valuable segment of users.

Thanks for reading!





from The Moz Blog http://ift.tt/2AjjidM
via IFTTT

Tuesday, 28 November 2017

V6: Typography and Proportions

Here’s a good ol’ fashioned blog post by Rob Weychert where he looks into the new design system he implemented on his personal website, and specifically the typographic system that ties everything together:

According to the OED, a scale is “a graduated range of values forming a standard system for measuring or grading something.” A piece of music using a particular scale—a limited selection of notes with a shared mathematic relationship—can effect a certain emotional tenor. Want to write a sad song? Use a minor scale. Changed your mind? Switch to a major scale and suddenly that same song is in a much better mood.

Spatial relationships can likewise achieve a certain visual harmony using similar principles, and the constraints a scale provides take a lot of the arbitrary guesswork out of the process of arranging elements in space. Most of what I design that incorporates type has a typographic scale as its foundation, which informs the typeface choices and layout proportions. The process of creating that scale begins by asking what the type needs to do, and what role contrasting sizes will play in that.
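To make the idea concrete, here’s a hypothetical scale built on a 1.25 ratio; the numbers are purely illustrative, not Rob’s actual values:

/* Each step is the previous size multiplied by 1.25 */
html { font-size: 100%; }        /* 16px base            */
h3   { font-size: 1.25rem; }     /* 16 × 1.25 = 20px     */
h2   { font-size: 1.5625rem; }   /* 20 × 1.25 = 25px     */
h1   { font-size: 1.953125rem; } /* 25 × 1.25 = 31.25px  */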



V6: Typography and Proportions is a post from CSS-Tricks



from CSS-Tricks http://ift.tt/2ncIYn1
via IFTTT


Introducing minmax()

It’s relatively easy to get lost in all the new features of CSS Grid because there’s just so much to learn and familiarize ourselves with; it’s much easier to learn it chunk by chunk in my opinion.

And so you might already be familiar with Rachel Andrew’s Grid By Example, which contains a whole bunch of tutorials with new layout tips and tricks for CSS Grid. But the minmax() tutorial is one small chunk of Grid that you can learn today, and thankfully Rachel has made a rather handy two-minute video that dives straight into it.
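If you want a taste before watching, here’s a minimal sketch of the kind of thing minmax() enables (the class name is made up):

/* Columns are never narrower than 200px and share any leftover space equally */
.grid {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(200px, 1fr));
  grid-gap: 1rem;
}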

In fact, it’s pretty darn impressive how many opportunities just one new CSS feature can give us.



Introducing minmax() is a post from CSS-Tricks



from CSS-Tricks http://ift.tt/2AIiRui
via IFTTT

How to Write Marketing Case Studies That Convert

Posted by kerryjones

In my last post, I discussed why your top funnel content shouldn’t be all about your brand. Today I’m making a 180-degree turn and covering the value of content at the opposite end of the spectrum: content that’s directly about your business and offers proof of your effectiveness.

Specifically, I’m talking about case studies.

I’m a big believer in investing in case studies because I’ve seen firsthand what happened once we started doing so at Fractl. Case studies were a huge game changer for our B2B marketing efforts. For one, our case studies portfolio page brings in a lot of traffic – it’s the second most-visited page on our site, aside from our home page. It also brings in a significant volume of organic traffic, being our fourth most-visited page from organic searches. Most importantly, our case studies are highly effective at converting visitors to leads – about half of our leads view at least one of our case studies before contacting us.

Assuming anyone who reads the Moz Blog is performing some type of marketing function, I’m zeroing in on how to write a compelling marketing case study that differentiates your service offering and pulls prospects down the sales funnel. However, what I’m sharing can be used as a framework for creating case studies in any industry.

Get your client on board with a case study

Marketers shy away from creating case studies for a few reasons:

  1. They’re too busy “in the weeds” with deliverables.
  2. They don’t think their results are impressive enough.
  3. They don’t have clients’ permission to create case studies.

While I can’t help you with #1 and #2 (it’s up to you to make the time and to get the results deserving of a case study!), I do have some advice on #3.

In a perfect world, clients would encourage you to share every little detail of your time working together. In reality, most clients expect you to remain tight-lipped about the work you’ve done for them.


Understandably, this might discourage you from creating any case studies. But it shouldn’t.

With some compromising, chances are your client will be game for a case study. We’ve noticed the following two objections are common regarding case studies.

Client objection 1: “We don’t want to share specific numbers.”

At first, you may think “Why bother?” if a client tells you this, but don’t let it hold you back. (Truth is, the majority of your clients will probably feel this way.)

In this instance, you’ll want your case study to focus on highlighting the strategy and describing projects, while steering away from showing specific numbers regarding short and long-term results. Believe it or not, the solution part of the case study can be just as, or more, compelling than the results. (I’ll get to that shortly.)

And don’t worry, you don’t have to completely leave out the results. One way to get around not sharing actual numbers but still showing results is to use growth percentages.

Specific numbers: “Grew organic traffic from 5,000 to 7,500 visitors per month”

Growth percentage: “Increased organic traffic by 50%”

We do this for most of our case studies at Fractl, and our clients are totally fine with it.

Client objection 2: “We don’t want to reveal our marketing strategy to competitors.”

A fear of giving away too much intel to competitors is especially common in highly competitive niches.

So how do you get around this?

Keep it anonymous. Don’t reveal who the client is and keep it vague about what niche they’re in. This can be as ambiguous as referring to the client as “Client A” or slightly more specific (“our client in the auto industry”). Instead, the case study will focus on the process and results – this is what your prospects care about, anyway.

Gather different perspectives

Unless you were directly working with the client who you are writing the case study about, you will need to conduct a few interviews to get a full picture of the who, what, how, and why of the engagement. At Fractl, our marketing team puts together case studies based on interviews with clients and the internal team who worked on the client’s account.

The client

Arrange an interview with the client, either on a call or via email. If you have multiple contacts within the client’s team, interview the main point of contact who has been the most involved in the engagement.

What to ask:

  • What challenge were you facing that you hired us to help with?
  • Had you previously tried to solve this challenge (working with another vendor, using internal resources, etc.)?
  • What were your goals for the engagement?
  • How did you benefit from the engagement (short-term and long-term results, unexpected wins, etc.)?

You’ll also want to run the case study draft by the client before publishing it, which offers another chance for their feedback.

The project team

Who was responsible for this client’s account? Speak with the team behind the strategy and execution.

What to ask:

  • How was the strategy formed? Were strategic decisions made based on your experience and expertise, competitive research, etc.?
  • What project(s) were launched as part of the strategy? What was the most successful project?
  • Were there any unexpected issues that you overcame?
  • Did you refine the strategy to improve results?
  • How did you and the client work together? Was there a lot of collaboration or was the client more hands-off? (Many prospective clients are curious about what their level of involvement in your process would look like.)
  • What did you learn during the engagement? Any takeaways?

Include the three crucial elements of a case study

There’s more than one way to package case studies, but the most convincing ones all have something in common: great storytelling. To ensure you’re telling a proper narrative, your case study should include the conflict, the resolution, and the happy ending (but not necessarily in this order).

We find a case study is most compelling when you get straight to the point, rather than making someone read the entire case study before seeing the results. To grab readers’ attention, we begin with a quick overview of conflict-resolution-happy ending right in the introduction.

For example, in our Fanatics case study, we summarized the most pertinent details in the first three paragraphs. The rest of the case study focused on the resolution and examples of specific projects.


Let’s take a look at what the conflict, resolution, and happy ending of your case study should include.

The Conflict: What goal did the client want to accomplish?

Typically serving as the introduction of the case study, “the conflict” should briefly describe the client’s business, the problem they hired you to work on, and what was keeping them from fixing this problem (ex. lack of internal resources or internal expertise). This helps readers identify with the problem the client faced and empathize with them – which can help them envision coming to you for help with this problem, too.

Here are a few examples of “conflicts” from our case studies:

  • “Movoto engaged Fractl to showcase its authority on local markets by increasing brand recognition, driving traffic to its website, and earning links back to on-site content.”
  • “Alexa came to us looking to increase awareness – not just around the Alexa name but also its resources. Many people had known Alexa as the site-ranking destination; however, Alexa also provides SEO tools that are invaluable to marketers.”
  • “While they already had strong brand recognition within the link building and SEO communities, Buzzstream came to Fractl for help with launching large-scale campaigns that would position them as thought leaders and provide long-term value for their brand.”

The Resolution: How did you solve the conflict?

Case studies are obviously great for showing proof of results you’ve achieved for clients. But perhaps more importantly, case studies give prospective clients a glimpse into your processes and how you approach problems. A great case study paints a picture of what it’s like to work with you.

For this reason, the bulk of your case study should detail the resolution, sharing as much specific information as you and your client are comfortable with; the more you’re able to share, the more you can highlight your strategic thinking and problem solving abilities.

The following snippets from our case studies are examples of details you may want to include as part of your solution section:

What our strategy encompassed:

“Mixing evergreen content and timely content helped usher new and existing audience members to the We Are Fanatics blog in record numbers. We focused on presenting interesting data through evergreen content that appealed to a variety of sports fans as well as content that capitalized on current interest around major sporting events.” - from Fanatics case study

How strategy was decided:

“We began by forming our ideation process around Movoto’s key real estate themes. Buying, selling, or renting a home is an inherently emotional experience, so we turned to our research on viral emotions to figure out how to identify with and engage the audience and Movoto’s prospective clients. Based on this, we decided to build on the high-arousal feelings of curiosity, interest, and trust that would be part of the experience of moving.

We tapped into familiar cultural references and topics that would pique interest in the regions consumers were considering. Comic book characters served us well in this regard, as did combining publicly available data (such as high school graduation rates or IQ averages) with our own original research.” - from Movoto case study

Why strategy was changed based on initial results:

“After analyzing the initial campaigns, we determined the most effective strategy included a combination of the following content types designed to achieve different goals [case study then lists the three types of content and goals]...

This strategy yielded even better results, with some campaigns achieving up to 4 times the amount of featured stories and social engagement that we achieved in earlier campaigns.” - from BuzzStream case study

How our approach was tailored to the client’s niche:

“In general, when our promotions team starts its outreach, they’ll email writers and editors who they think would be a good fit for the content. If the writer or editor responds, they often ask for more information or say they’re going to do a write-up that incorporates our project. From there, the story is up to publishers – they pick and choose which visual assets they want to incorporate in their post, and they shape the narrative.

What we discovered was that, in the marketing niche, publishers preferred to feature other experts’ opinions in the form of guest posts rather than using our assets in a piece they were already working on. We had suspected this (as our Fractl marketing team often contributes guest columns to marketing publications), but we confirmed that guest posts were going to make up the majority of our outreach efforts after performing outreach for Alexa’s campaigns.” - from Alexa case study

Who worked on the project:

Since the interviews you conduct with your internal team will inform the solution section of the case study, you may want to give individuals credit via quotes or anecdotes as a means to humanize the people behind the work. In the example below, one of our case studies featured a Q&A section with one of the project leads.

The Happy Ending: What did your resolution achieve?

Obviously, this is the part where you share your results. As I mentioned previously, we like to feature the results at the beginning of the case study, rather than buried at the end.

In our Superdrug Online Doctor case study, we summarized the overall results our campaigns achieved over 16 months:

But the happy ending isn’t finished here.

A lot of case studies fail to answer an important question: What impact did the results have on the client’s business? Be sure to tie in how the results you achieved had a bottom-line impact.

In the case of Superdrug Online Doctor, the results from our campaigns led to a 238% increase in organic traffic. This type of outcome has tangible value for the client.

You can also share secondary benefits in addition to the primary goals the client hired you for.

In the case of our client Busbud, who hired us for SEO-oriented goals, we included examples of secondary results.

Busbud saw positive impacts beyond SEO, though, including the following:

  • Increased blog traffic
  • New partnerships as a result of more brands reaching out to work with the site
  • Brand recognition at large industry events
  • An uptick in hiring
  • Featured as a “best practice” case study at an SEO conference

Similarly, in our Fractl brand marketing case study, which focused on lead generation, we listed all of the additional benefits resulting from our strategy.

How to get the most out of your case studies

You’ve published your case study, now what should you do with it?

Build a case study page on your site

Once you've created several case studies, I recommend housing them all on the same page. This makes it easy to show off your results in a single snapshot and saves visitors from searching through your blog or clicking on a category tag to find all of your case studies in one place. Make this page easy to find through your site navigation and internal links.

While it probably goes without saying, make sure to optimize this page for search. When we initially created our case study portfolio page, we underestimated its potential to bring in search traffic and assumed it would mostly be accessed from our site navigation. Because of this, we were previously using a generic URL to house our case study portfolio. Since updating the URL from “frac.tl/our-work” to “http://ift.tt/2AgWZDi,” we’ve jumped from page 2 to the top #1–3 positions for a specific phrase we wanted to rank for (“content marketing case studies”), which attracts highly relevant search traffic.

Use case studies as concrete proof in blog posts and off-site content

Case studies can serve as tangible examples that back up your claims. Did you state that creating original content for six months can double your organic traffic? On its own, this assertion may not be believable to some, but a case study showing these results will make your claim credible.

In a post on the Curata blog, my colleague Andrea Lehr used our BuzzStream case study to back up her assertion that in order to attract links, social shares, and traffic, your off-site content should appeal to an audience beyond your target customer. Showing the results this strategy earned for a client gives a lot more weight to her advice.

On the same note, case studies have high linking potential. Not only do they make a credible citation for your own off-site content, they can also be cited by others writing about your service/product vertical. Making industry publishers aware that you publish case studies by reaching out when you’ve released a new case study can lead to links down the road.

Repurpose your case studies into multiple content formats

Creating a case study takes a lot of time, but fortunately it can be reused again and again in various applications.

Long-form case studies

While a case study featured on your site may only be a few hundred words, creating a more in-depth version is a chance to reveal more details. If you want to get your case study featured on other sites, consider writing a long-form version as a guest post.

Most of the case studies you’ll find on the Moz Blog are extremely detailed:

Video

HubSpot has hundreds of case studies on its site, dozens of which also feature supplemental video case studies, such as the one below for Eyeota.

Don’t feel like you have to create flashy videos with impressive production value; even no-frills videos can work. Within its short case study summaries, PR That Converts embeds videos of clients talking about its service. These videos are simple and short, featuring the client speaking to their webcam for a few minutes.

Speaking engagements

Marketing conferences love case studies. Look on any conference agenda, and you’re sure to notice at least a handful of speaker presentations focused on case studies. If you’re looking to secure more speaking gigs, including case studies in your speaking pitch can give you a leg up over other submissions – after all, your case studies are original data no one else can offer.

My colleague Kelsey Libert centered her MozCon presentation a few years ago around some of our viral campaign case studies.

Sales collateral

As I mentioned at the beginning of this post, many of our leads view the case studies on our site right before contacting us about working together. Once that initial contact is made, we don’t stop showing off our case studies.

We keep a running “best of” list of stats from our case studies, which allows us to quickly pull compelling stats to share in written and verbal conversations. Our pitch and proposal decks feature bite-sized versions of our case studies.

Consider how you can incorporate case studies into various touch points throughout your sales process and make sure the case studies you share align with the industry and goals of whoever you're speaking with.

I’ve shared a few of my favorite ways to repurpose case studies here but there are at least a dozen other applications, from email marketing to webinars to gated content to printed marketing materials. I even link to our case studies page in my email signature.


My last bit of advice: Don’t expect immediate results. Case studies typically pay off over time. The good news is it’s worth the wait, because case studies retain their value – we’re still seeing leads come in and getting links to case studies we created three or more years ago. By extending their lifespan through repurposing, the case studies you create today can remain an essential part of your marketing strategy for years to come.





from The Moz Blog http://ift.tt/2k5MiiQ
via IFTTT

Monday, 27 November 2017

Animating Layouts with the FLIP Technique

User interfaces are most effective when they are intuitive and easily understandable to the user. Animation plays a major role in this: as Nick Babich said, animation brings user interfaces to life. However, adding meaningful transitions and micro-interactions is often an afterthought, or something that is “nice to have” if time permits. All too often, we experience web apps that simply “jump” from view to view without giving the user time to process what just happened in the current context.

This leads to unintuitive user experiences, but we can do better by avoiding “jump cuts” and “teleportation” when creating UIs. After all, what’s more natural than real life, where nothing teleports (except maybe car keys), and everything you interact with moves with natural motion?

In this article, we’ll explore a technique called “FLIP” that can be used to animate the positions and dimensions of any DOM element in a performant manner, regardless of how their layout is calculated or rendered (e.g., height, width, floats, absolute positioning, transform, flexbox, grid, etc.)

Why the FLIP technique?

Have you ever tried to animate height, width, top, left, or any other properties besides transform and opacity? You might have noticed that the animations look a bit janky, and there's a reason for that. When any layout-triggering property (such as height) changes, the browser has to recursively check whether any other element's layout has changed as a result, and that can be expensive. If that calculation takes longer than one animation frame (around 16.7 milliseconds), the frame is skipped and the result is "jank," since the frame wasn't rendered in time. In his article "Pixels are Expensive", Paul Lewis goes into more depth on how pixels are rendered and the various performance costs.

In short, our goal is to be short: we want to calculate the fewest style changes necessary, as quickly as possible. The key to this is animating only transform and opacity, and FLIP explains how we can simulate layout changes using only transform.

What is FLIP?

FLIP is a mnemonic device and technique first coined by Paul Lewis, which stands for First, Last, Invert, Play. His article contains an excellent explanation of the technique, but I’ll outline it here:

  • First: before anything happens, record the current (i.e., first) position and dimensions of the element that will transition. You can use getBoundingClientRect() for this, as will be shown below.
  • Last: execute the code that causes the transition to instantaneously happen, and record the final (i.e., last) position and dimensions of the element.
  • Invert: since the element is in the last position, we want to create the illusion that it’s in the first position, by using transform to modify its position and dimensions. This takes a little math, but it’s not too difficult.
  • Play: with the element inverted (and pretending to be in the first position), we can move it back to its last position by setting its transform to none.

Below is how these steps can be implemented:

const elm = document.querySelector('.some-element');

// First: get the current bounds
const first = elm.getBoundingClientRect();

// execute the script that causes layout change
doSomething();

// Last: get the final bounds
const last = elm.getBoundingClientRect();

// Invert: determine the delta between the 
// first and last bounds to invert the element
const deltaX = first.left - last.left;
const deltaY = first.top - last.top;
const deltaW = first.width / last.width;
const deltaH = first.height / last.height;

// Play: animate the final element from its first bounds
// to its last bounds (which is no transform)
elm.animate([{
  transformOrigin: 'top left',
  transform: `
    translate(${deltaX}px, ${deltaY}px)
    scale(${deltaW}, ${deltaH})
  `
}, {
  transformOrigin: 'top left',
  transform: 'none'
}], {
  duration: 300,
  easing: 'ease-in-out',
  fill: 'both'
});

See the Pen How the FLIP technique works by David Khourshid (@davidkpiano) on CodePen.

There are two important things to note:

  1. If the element’s size changed, you can use a scale() transform in order to “resize” it with no performance penalty; however, make sure to set transformOrigin to 'top left' since that’s where we based our delta calculations.
  2. We’re using the Web Animations API to animate the element here, but you’re free to use any other animation engine, such as GSAP, Anime, Velocity, Just-Animate, Mo.js and more.

Shared Element Transitions

One common use-case for transitioning an element between app views and states is that the final element might not be the same DOM element as the initial element. This is similar to Android’s shared element transitions, except that on the web the element isn’t “recycled” from view to view in the DOM the way it is on Android.

Nevertheless, we can still achieve the FLIP transition with a little magic illusion:

const firstElm = document.querySelector('.first-element');

// First: get the bounds and then hide the element (if necessary)
const first = firstElm.getBoundingClientRect();
firstElm.style.setProperty('visibility', 'hidden');

// execute the script that causes view change
doSomething();

// Last: get the bounds of the element that just appeared
const lastElm = document.querySelector('.last-element');
const last = lastElm.getBoundingClientRect();

// continue with the other steps, just as before.
// remember: you're animating the lastElm, not the firstElm.

Below is an example of how two completely disparate elements can appear to be the same element using shared element transitions. Click one of the pictures to see the effect.

See the Pen FLIP example with WAAPI by David Khourshid (@davidkpiano) on CodePen.

Parent-Child Transitions

With the previous implementations, the element bounds are based on the window. For most use cases, this is fine, but consider this scenario:

  • An element changes position and needs to transition.
  • That element contains a child element, which itself needs to transition to a different position inside the parent.

Since the previously calculated bounds are relative to the window, our calculations for the child element are going to be off. To solve this, we need to ensure that the bounds are calculated relative to the parent element instead:

const parentElm = document.querySelector('.parent');
const childElm = document.querySelector('.parent > .child');

// First: parent and child
const parentFirst = parentElm.getBoundingClientRect();
const childFirst = childElm.getBoundingClientRect();

doSomething();

// Last: parent and child
const parentLast = parentElm.getBoundingClientRect();
const childLast = childElm.getBoundingClientRect();

// Invert: parent
const parentDeltaX = parentFirst.left - parentLast.left;
const parentDeltaY = parentFirst.top - parentLast.top;

// Invert: child relative to parent
const childDeltaX = (childFirst.left - parentFirst.left)
  - (childLast.left - parentLast.left);
const childDeltaY = (childFirst.top - parentFirst.top)
  - (childLast.top - parentLast.top);
  
// Play: using the WAAPI
parentElm.animate([
  { transform: `translate(${parentDeltaX}px, ${parentDeltaY}px)` },
  { transform: 'none' }
], { duration: 300, easing: 'ease-in-out' });

childElm.animate([
  { transform: `translate(${childDeltaX}px, ${childDeltaY}px)` },
  { transform: 'none' }
], { duration: 300, easing: 'ease-in-out' });

A few things to note here, as well:

  1. The timing options for the parent and child (duration, easing, etc.) do not necessarily need to match with this technique. Feel free to be creative!
  2. Changing dimensions in parent and/or child (width, height) was purposefully omitted in this example, since it is an advanced and complex topic. Let’s save that for another tutorial.
  3. You can combine the shared element and parent-child techniques for greater flexibility.

Using Flipping.js for Full Flexibility

The above techniques might seem straightforward, but they can get quite tedious to code once you have to keep track of multiple elements transitioning. Android eases this burden by:

  • baking shared element transitions into the core SDK
  • allowing developers to identify which elements are shared by using a common android:transitionName XML attribute

I’ve created a small library called Flipping.js with the same idea in mind. By adding a data-flip-key="..." attribute to HTML elements, it’s possible to predictably and efficiently keep track of elements that might change position and dimensions from state to state.

For example, consider this initial view:

    <section class="gallery">
      <div class="photo-1" data-flip-key="photo-1">
        <img src="/photo-1"/>
      </div>
      <div class="photo-2" data-flip-key="photo-2">
        <img src="/photo-2"/>
      </div>
      <div class="photo-3" data-flip-key="photo-3">
        <img src="/photo-3"/>
      </div>
    </section>

And this separate detail view:

    <section class="details">
      <div class="photo" data-flip-key="photo-1">
        <img src="/photo-1"/>
      </div>
      <p class="description">
        Lorem ipsum dolor sit amet...
      </p>
    </section>

Notice in the above example that there are 2 elements with the same data-flip-key="photo-1". Flipping.js tracks the “active” element by choosing the first element that meets these criteria:

  • The element exists in the DOM (i.e., it hasn’t been removed or detached)
  • The element is not hidden (hint: elm.getBoundingClientRect() will have { width: 0, height: 0 } for hidden elements)
  • Any custom logic specified in the selectActive option.

Getting Started with Flipping.js

There are a few different packages for Flipping, depending on your needs:

  • flipping.js: tiny and low-level; only emits events when element bounds change
  • flipping.web.js: uses WAAPI to animate transitions
  • flipping.gsap.js: uses GSAP to animate transitions
  • More adapters coming soon!

You can grab the minified code directly from unpkg:
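(The exact path below is an assumption based on the adapter names; check the package on unpkg for the current build filenames.)

<script src="https://unpkg.com/flipping/dist/flipping.web.js"></script>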

Or you can npm install flipping --save and import it into your projects:

// import not necessary when including the unpkg scripts in a <script src="..."> tag
import Flipping from 'flipping/adapters/web';

const flipping = new Flipping();

// First: let Flipping read all initial bounds
flipping.read();

// execute the change that causes any elements to change bounds
doSomething();

// Last, Invert, Play: the flip() method does it all
flipping.flip();

Handling FLIP transitions as a result of a function call is such a common pattern that the .wrap(fn) method transparently wraps (or “decorates”) the given function by first calling .read(), then getting the return value of the function, then calling .flip(), then returning the return value. This leads to much less code:

const flipping = new Flipping();

const flippingDoSomething = flipping.wrap(doSomething);

// anytime this is called, FLIP will animate changed elements
flippingDoSomething();

Here is an example of using flipping.wrap() to easily achieve the shifting letters effect. Click anywhere to see the effect.

See the Pen Flipping Birthstones #Codevember by David Khourshid (@davidkpiano) on CodePen.

Adding Flipping.js to Existing Projects

In another article, we created a simple React gallery app using finite state machines. It works just as expected, but the UI could use some smooth transitions between states to prevent “jumping” and improve the user experience. Let’s add Flipping.js into our React app to accomplish this. (Keep in mind, Flipping.js is framework-agnostic.)

Step 1: Initialize Flipping.js

The Flipping instance will live on the React component itself, so that it’s isolated to only changes that occur within that component. Initialize Flipping.js by setting it up in the componentDidMount lifecycle hook:

  componentDidMount() {
    const { node } = this;
    if (!node) return;
    
    this.flipping = new Flipping({
      parentElement: node
    });
    
    // initialize flipping with the initial bounds
    this.flipping.read();
  }

By specifying parentElement: node, we’re telling Flipping to only look for elements with a data-flip-key in the rendered App, instead of the entire document.

Then, modify the HTML elements with the data-flip-key attribute (similar to React’s key prop) to identify unique and “shared” elements:

  renderGallery(state) {
    return (
      <section className="ui-items" data-state={state}>
        {this.state.items.map((item, i) =>
          <img
            src={item.media.m}
            className="ui-item"
            style={{ '--i': i }}
            key={item.link}
            onClick={() => this.transition({
              type: 'SELECT_PHOTO', item
            })}
            data-flip-key={item.link}
          />
        )}
      </section>
    );
  }
  renderPhoto(state) {
    if (state !== 'photo') return;
    
    return (
      <section
        className="ui-photo-detail"
        onClick={() => this.transition({ type: 'EXIT_PHOTO' })}>
        <img
          src={this.state.photo.media.m}
          className="ui-photo"
          data-flip-key={this.state.photo.link}
        />
      </section>
    )
  }

Notice how the img.ui-item and img.ui-photo are represented by data-flip-key={item.link} and data-flip-key={this.state.photo.link} respectively: when the user clicks on an img.ui-item, that item is set to this.state.photo, so the .link values will be equal.

And since they are equal, Flipping will smoothly transition from the img.ui-item thumbnail to the larger img.ui-photo.

Now we need to do two more things:

  1. call this.flipping.read() whenever the component will update
  2. call this.flipping.flip() whenever the component did update

Some of you might have already guessed where these method calls are going to occur: componentWillUpdate and componentDidUpdate, respectively:

  componentWillUpdate() {
    this.flipping.read();
  }
  
  componentDidUpdate() {
    this.flipping.flip();
  }

And, just like that, if you’re using a Flipping adapter (such as flipping.web.js or flipping.gsap.js), Flipping will keep track of all elements with a [data-flip-key] and smoothly transition them to their new bounds whenever they change. Here is the final result:

See the Pen FLIPping Gallery App by David Khourshid (@davidkpiano) on CodePen.

If you would rather implement custom animations yourself, you can use flipping.js as a simple event emitter. Read the documentation for more advanced use-cases.

Flipping.js and its adapters handle the shared element and parent-child transitions by default, as well as:

  • interrupted transitions (in adapters)
  • enter/move/leave states
  • support for plugins such as mirror, which allows newly entered elements to “mirror” another element’s movement
  • and more planned in the future!

Resources

Similar libraries include:

  • FlipJS by Paul Lewis himself, which handles simple single-element FLIP transitions
  • React-Flip-Move, a useful React library by Josh Comeau
  • BarbaJS, not necessarily a FLIP library, but one that allows you to add smooth transitions between different URLs, without page jumps.



Animating Layouts with the FLIP Technique is a post from CSS-Tricks



from CSS-Tricks http://ift.tt/2k2Vp3P
via IFTTT

Netflix functions without client-side React, and it’s a good thing

Recently, Netflix removed client-side React from their landing page, which caused a bit of a stir. Jake Archibald investigated why the team did that and why it’s actually a good thing for the React community in the long term:

When the PS4 was released in 2013, one of its advertised features was progressive downloading – allowing gamers to start playing a game while it's downloading. Although this was a breakthrough for consoles, the web has been doing this for 20 years. The HTML spec (warning: 8mb document), despite its size, starts rendering once ~20k is fetched.

Unfortunately, it's a feature we often engineer-away with single page apps, by channelling everything through a medium that isn't streaming-friendly, such as a large JS bundle.

I like the whole vibe of this post because it suggests that we should be careful when we pick our tools: we should pick the right tool for the right job, instead of treating every issue as if it needs a giant hammer made of JavaScript.



Netflix functions without client-side React, and it’s a good thing is a post from CSS-Tricks



from CSS-Tricks http://ift.tt/2yZoUcG
via IFTTT


Knowledge Graph Eats Featured Snippets, Jumps +30%

Posted by Dr-Pete

Over the past two years, we've seen a steady and substantial increase in Featured Snippets on Google SERPs. In our 10,000-keyword daily tracking set, Featured Snippets have gone from about 5.5% of queries in November 2015 to a recent high of just over 16% (roughly tripling). Other data sets, with longer tail searches, have shown even higher prevalence.

Near the end of October (far-right of the graph), we saw our first significant dip (spotted by Brian Patterson on SEL). This dip occurred over about a 4-day period, and represents roughly a 10% drop in searches with Featured Snippets. Here's an enhanced, 2-week view (note: Y-axis is expanded to show the day-over-day changes more clearly):

Given the up-and-to-the-right history of Featured Snippets and the investments people have been making optimizing for these results, a 10% drop is worthy of our attention.

What happened, exactly?

To be honest, when we investigate changes like this, the best we can usually do is produce a list of keywords that lost Featured Snippets. Usually, we focus on high-volume keywords, which tend to be more interesting. Here's a list of keywords that lost Featured Snippets during that time period:

  • CRM
  • ERP
  • MBA
  • buddhism
  • web design
  • anger management
  • hosting
  • DSL
  • ActiveX
  • ovulation

From an explanatory standpoint, this list isn't usually very helpful – what exactly do "web design", "buddhism", and "ovulation" have in common (please, don't answer that)? In this case, though, there was a clear and interesting pattern. Almost all of the queries that lost Featured Snippets gained Knowledge Panels that look something like this one:

These new panels account for the vast majority of the lost Featured Snippets I've spot-checked, and all of them are general Knowledge Panels coming directly from Wikipedia. In some cases, Google is using a more generic Knowledge Graph entry. For example, "HDMI cables", which used to show a Featured Snippet (dominated by Amazon, last I checked), now shows no snippet and a generic panel for "HDMI":

In very rare cases, a SERP added the new Knowledge Panel but retained the Featured Snippet, such as the top of this search for "credit score":

These situations seemed to be the exceptions to the rule.

What about other SERPs?

The SERPs that lost Featured Snippets were only one part of this story. Over the same time period, we saw an explosion (about +30%) in Knowledge Panels:

This Y-axis has not been magnified – the jump in Knowledge Panels is clearly visible even at normal scale. Other tracking sites saw similar, dramatic increases, including this data from RankRanger. This jump appears to be a similar type of descriptive panel, ranging from commercial keywords, like "wedding dresses" and "Halloween costumes"...

...to brand keywords, like "Ray-Ban"...

Unlike definition boxes, many of these new panels show up on words and phrases that are common knowledge and add little value. Here's a panel on "job search"...

I suspect that most people searching for "job search" or "job hunting" don't need it defined. Likewise, people searching for "travel" probably weren't confused about what travel actually is...

Thanks for clearing that up, Google. I've decided to spare you all and leave out a screenshot for "toilet" (go ahead and Google it). Almost all of these new panels appear to be driven by Wikipedia (or Wikidata), and most of them are single-paragraph definitions of terms.

Were there other changes?

During the exact same period, we also noticed a drop in SERPs with inline image results. Here's a graph of the same 2-week period reported for the other features:

This drop almost exactly mirrors the increase in Knowledge Panels. In cases where the new panels were added, those panels almost always contain a block of images at the top. This block seems to have replaced inline image results. It's interesting to note that, because image blocks in the left-hand column consume an organic position, this change freed up an organic spot on the first page of results for those terms.

Why did Google do this?

It's likely that Google is trying to standardize answers for common terms, and perhaps they were seeing quality or consistency issues in Featured Snippets. In some cases, like "HDMI cables", Featured Snippets were often coming from top e-commerce sites, which are trying to sell products. These aren't always a good fit for unbiased definitions. It's also likely that Google would like to beef up the Knowledge Graph and rely less, where possible, on outside sites for answers.

Unfortunately, this also means that the answers are coming from a much less diverse pool (and, from what we've seen, almost entirely from Wikipedia), and it reduces the organic opportunity for sites that were previously ranking for or trying to compete for Featured Snippets. In many cases, these new panels also seem to add very little. Someone searching for "ERP" might be helped by a brief definition, but someone searching for "travel" is unlikely looking to have it explained to them.

As always, there's not much we can do but monitor the situation and adapt. Featured Snippets are still at historically high levels and represent a legitimate organic opportunity. There's also a win-win here, since efforts invested in winning Featured Snippets tend to improve organic rankings and, done right, can produce a better user experience for both search and website visitors.


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!



from The Moz Blog http://ift.tt/2Bqh5LA
via IFTTT

Friday, 24 November 2017

Editing the W3C HTML5 spec

Bruce Lawson has been tapped to co-edit the W3C HTML5 spec and, in his announcement post, clarified the difference between that and the WHATWG spec:

The WHATWG spec is a future-facing document; lots of ideas are incubated there. The W3C spec is a snapshot of what works interoperably – authors who don’t care much about what may or may not be round the corner, but who need solid advice on what works now may find this spec easier to use.

I was honestly unfamiliar with the WHATWG spec and now I find it super interesting to know there are two working groups pushing HTML forward in distinct but (somewhat) cooperative ways.

Kudos to you, Bruce! And, yes, Vive open standards!

Direct Link to ArticlePermalink


Editing the W3C HTML5 spec is a post from CSS-Tricks



from CSS-Tricks http://ift.tt/2myBVon
via IFTTT

On the Growing Popularity of Atomic CSS

Which of My Competitor's Keywords Should (& Shouldn't) I Target? - Whiteboard Friday

Posted by randfish

You don't want to try to rank for every one of your competitors' keywords. Like most things with SEO, it's important to be strategic and intentional with your decisions. In today's Whiteboard Friday, Rand shares his recommended process for understanding your funnel, identifying the right competitors to track, and prioritizing which of their keywords you ought to target.

Which of my competitor's keywords should I target?

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. So this week we're chatting about your competitors' keywords and which of those competitive keywords you might want to actually target versus not.

Many folks use tools, like SEMrush and Ahrefs and KeywordSpy and Spyfu and Moz's Keyword Explorer, which now has this feature too, where they look at: What are the keywords that my competitors rank for, that I may be interested in? This is actually a pretty smart way to do keyword research. Not the only way, but a smart way to do it. But the challenge comes in when you start looking at your competitors' keywords and then trying to figure out which of these you should actually go after, and in what priority order. In the world of competitive keywords, there's actually a little bit of a difference from classic keyword research.

So here I've plugged in Hammer and Heels, which is a small, online furniture store that has some cool designer furniture, and Dania Furniture, which is a competitor of theirs — they're local in the Seattle area, but carry sort of modern, Scandinavian furniture — and IndustrialHome.com, similar space. So all three of these in a similar space, and you can see sort of keywords that return that several of these, one or more of these rank for. I put together difficulty, volume, and organic click-through rate, which are some of the metrics that you'll find. You'll find these metrics actually in most of the tools that I just mentioned.

Process:

So when I'm looking at this list, which ones do I want to actually go after and not, and how do I choose? Well, this is the process I would recommend.

I. Try and make sure you first understand your keyword-to-conversion funnel.

So if you've got a classic sort of funnel, you have people buying down here — this is a purchase — and you have people who search for particular keywords up here, and if you understand which people you lose and which people actually make it through the buying process, that's going to be very helpful in knowing which of these terms and phrases and which types of these terms and phrases to actually go after, because in general, when you're prioritizing competitive keywords, you probably don't want to be going after these keywords that send traffic but don't turn into conversions, unless that's actually your goal. If your goal is raw traffic only, maybe because you serve advertising or other things, or because you know that you can capture a lot of folks very well through retargeting, for example maybe Hammer and Heels says, "Hey, the biggest traffic funnel we can get because we know, with our retargeting campaigns, even if a keyword brings us someone who doesn't convert, we can convert them later very successfully," fine. Go ahead.

II. Choose competitors that tend to target the same audience(s).

So the people you plug in here should tend to be competitors that tend to target the same audiences. Otherwise, your relevance and your conversion get really hard. For example, I could have used West Elm, which does generally modern furniture as well, but they're very, very broad. They target just about everyone. I could have done Ethan Allen, which is sort of a very classic, old-school furniture maker. Probably a really different audience than these three websites. I could have done IKEA, which is sort of a low market brand for everybody. Again, not kind of the match. So when you are targeting conversion heavy, assuming that these folks were going after mostly conversion focused or retargeting focused rather than raw traffic, my suggestion would be strongly to go after sites with the same audience as you.

If you're having trouble figuring out who those people are, one suggestion is to check out a tool called SimilarWeb. It's expensive, but very powerful. You can plug in a domain and see what other domains people are likely to visit in that same space and what has audience overlap.

III. The keyword selection process should follow some of these rules:

A. Are easiest first.

So I would go after the ones that tend to be, that I think are going to be most likely for me to be able to rank for easiest. Why do I recommend that? Because it's tough in SEO with a lot of campaigns to get budget and buy-in unless you can show progress early. So any time you can choose the easiest ones first, you're going to be more successful. That's low difficulty, high odds of success, high odds that you actually have the team needed to make the content necessary to rank. I wouldn't go after competitive brands here.

B. Are similar to keywords you target that convert well now.

So if you understand this funnel well, you can use your AdWords campaign particularly well for this. So you look at your paid keywords and which ones send you highly converting traffic, boom. If you see that lighting is really successful for our furniture brand, "Oh, well look, glass globe chandelier, that's got some nice volume. Let's go after that because lighting already works for us."

Of course, you want ones that fit your existing site structure. So if you say, "Oh, we're going to have to make a blog for this, oh we need a news section, oh we need a different type of UI or UX experience before we can successfully target the content for this keyword," I'd push that down a little further.

C. High volume, low difficulty, high organic click-through rate, or SERP features you can reach.

So basically, when you look at difficulty, that's telling you how hard is it for me to rank for this potential keyword. If I look in here and I see some 50s and 60s, but I actually see a good number in the 30s and 40s, I would think that glass globe chandelier, S-shaped couch, industrial home furniture, these are pretty approachable. That's impressive stuff.

Volume, I want as high as I can get, but oftentimes high volume leads to very high difficulty.
Organic click-through rate percentage, this is essentially saying what percent of people click on the 10 blue link style, organic search results. Classic SEO will help get me there. However, if you see low numbers, like a 55% for this type of chair, you might take a look at those search results and see that a lot of images are taking up the other organic click-through, and you might say, "Hey, let's go after image SEO as well." So it's not just organic click-through rate. You can also target SERP features.

D. Are brands you carry/serve, generally not competitor's brand names.

Then last, but not least, I would urge you to go after brands when you carry and serve them, but not when you don't. So if this Ekornes chair is something that your furniture store, that Hammer and Heels actually carries, great. But if it's something that's exclusive to Dania, I wouldn't go after it. I would generally not go after competitors' brand names or branded product names with an exception, and I actually used this site to highlight this. Industrial Home Furniture is both a branded term, because it's the name of this website — Industrial Home Furniture is their brand — and it's also a generic. So in those cases, I would tell you, yes, it probably makes sense to go after a category like that.

If you follow these rules, you can generally use competitive intel on keywords to build up a really nice portfolio of targetable, high potential keywords that can bring you some serious SEO returns.

Look forward to your comments and we'll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!



from The Moz Blog http://ift.tt/2BaEIqe
via IFTTT

Wednesday, 22 November 2017

Declining Complexity in CSS

The fourth edition of Eric Meyer and Estelle Weyl's CSS: The Definitive Guide was recently released. The new book weighs in at 1,016 pages, which is up drastically from 447 in the third edition, which was up slightly from 436 in the second edition.

Despite the appearance of CSS needing more pages to capture more complicated concepts, Eric suggests that CSS is actually easier to grasp than ever before and its complexity has actually declined between editions:

But the core principles and mechanisms are no more complicated than they were a decade or even two decades ago. If anything, they’re easier to grasp now, because we don’t have to clutter our minds with float behaviors or inline layout just to try to lay out a page. Flexbox and Grid (chapters 12 and 13, by the way) make layout so much simpler than ever before, while simultaneously providing far more capability than ever before.

In short, yes, lots of new concepts have been introduced since 2007 when the third edition was released, but they're solving the need to use layout, err, tricks to make properties bend in ways they were never intended:

It’s still an apparent upward trend, but think about all the new features that have come out since the 3rd Edition, or are coming out right now: gradients, multiple backgrounds, sticky positioning, flexbox, Grid, blending, filters, transforms, animation, and media queries, among others. A lot of really substantial capabilities. They don’t make CSS more convoluted, but instead extend it into new territories.

Hear, hear! Onward and upward, no matter how many pages it takes.

Direct Link to ArticlePermalink


Declining Complexity in CSS is a post from CSS-Tricks



from CSS-Tricks http://ift.tt/2j9d4mb
via IFTTT

HTML Email and Accessibility

Tuesday, 21 November 2017

Stylable

The Wix dev team throws their hat into the CSS preprocessor ring:

Stylable is a CSS preprocessor that enables you to write reusable, highly-performant, styled components. Each component exposes a style API that maps its internal parts so you can reuse components across teams without sacrificing stylability.

  • Scopes styles to components so they don’t “leak” and clash with other styles.
  • Enables custom pseudo-classes and pseudo-elements that abstract the internal structure of a component. These can then be styled externally.
  • Uses themes so you can apply different look and feel across your web application.

At build time, the preprocessor converts the Stylable CSS into flat, static, valid, vanilla CSS that works cross-browser.

Looks like Sass luminary Chris Eppstein is getting in on the game of scoped styles with the not-yet-released CSS Blocks. And think of Vue's support for <style scoped>, and the popularity of utility libraries. I think scoped styles might be the hottest CSS topic in 2018.

Direct Link to ArticlePermalink


Stylable is a post from CSS-Tricks



from CSS-Tricks http://ift.tt/2z8DviH
via IFTTT

Advocating for Accessible UI Design

AMP-lify Your Digital Marketing in 2018

Posted by EricEnge

Should you AMP-lify your site in 2018?

This is a question on the mind of many publishers. To help answer it, this post is going to dive into case studies and examples showing results different companies had with AMP.

If you’re not familiar with Accelerated Mobile Pages (AMP), it’s an open-source project aimed at allowing mobile website content to render nearly instantly. The initiative has Google as a sponsor, but it is not a program owned by Google, and it’s also supported by Bing, Baidu, Twitter, Pinterest, and many other parties.


Some initial background

Since its inception in 2015, AMP has come a long way. When it first hit the scene, AMP was laser-focused on media sites. The reason those types of publishers wanted to participate in AMP was clear: It would make their mobile sites much faster, AND Google was offering a great deal of incremental exposure in Google Search through the “Top Stories news carousel.”

Basically, you can only get in the Top Stories carousel on a mobile device if your page is implemented in AMP, and that made AMP a big deal for news sites. But if you’re not a news site, what’s in it for you? Simple: providing a better user experience online can lead to more positive website metrics and revenue.

We know that fast-loading websites are better for the user. But what you may not be aware of is how speed can impact the bottom line. Google-sponsored research shows that AMP leads to an average of a 2X increase in time spent on page (details can be seen here). The data also shows e-commerce sites experience an average 20 percent increase in sales conversions compared to non-AMP web pages.

Stepping outside the world of AMP for a moment, data from Amazon, Walmart, and Yahoo show a compelling impact of page load time on metrics like traffic, conversion and sales:

You can see that for Amazon, a mere one-tenth of a second increase in page load time (so one-tenth of a second slower) would drive a $1.3 billion drop in sales. So, page speed can have a direct impact on revenue. That should count for something.

What do users say about AMP? 9to5Google.com recently conducted a poll where they asked users: “Are you more inclined to click on an AMP link than a regular one?” The majority of people (51.14 percent) said yes to that question. Here are the detailed results:

This poll suggests that even for non-news sites, there is a very compelling reason to do AMP for SEO. Not because it increases your rankings, per se, but because you may get more click-throughs (more traffic) from the organic search results. Getting more traffic from organic search, after all, is the goal of SEO. In addition, you’re likely to get more time on site and more conversions.


How the actual implementation of AMP impacts your results

Before adopting any new technology, you need to understand what you’re getting into.

At Stone Temple Consulting, we performed a research study that included 10 different types of websites that adopted AMP to see what results they had and what challenges they ran into. (Go here to see more details from the study.)

Let’s get right to the results. One site, Thrillist, converted 90 percent of their web pages over a four-week period of time. They saw a 70 percent lift in organic search traffic to their site — 50 percent of that growth came from AMP.

One anonymous participant in the study, another large media publisher, converted 95 percent of their web pages to AMP, and once again the development effort was approximately four weeks. They saw a 67 percent lift in organic search traffic on one of their sites, and a 30 percent lift on another site.

So, media sites do well, but we knew that would be the case. What about e-commerce sites? Consider the case of Myntra, a company that is the largest fashion retailer in India. Their implementation took about 11 days of effort.

This implementation covered all of their main landing pages from Google, which account for between 85% and 90% of their organic search traffic. For their remaining pages (such as the individual product pages) they implemented a Progressive Web App, which helps those pages perform better as well. They saw a 40% reduction in bounce rate on their pages, as well as a lift in their overall e-commerce results. You can see detailed results here.

Then there is the case of Event Tickets Center. They implemented 99.9% of their pages in AMP, and opted to create an AMP-immersive experience. Page load times on their site dropped from five or six seconds to one second.

They saw improvements in user engagement metrics, with a 10% drop in bounce rate, a 6% increase in pages per session, and a 13% increase in session duration. But the stunning stat is that they report a whopping 100% increase in e-commerce conversions. You can see the full case study here.

But AMP adopters won’t always see a huge lift in results. When they don’t, there’s likely one culprit: not taking the time to implement AMP thoroughly. A big key to AMP is not to simply use a plugin, set it, and forget it.

To get good results, you’ll need to invest the time to make the AMP version of your pages substantially similar (if not identical) to your normal responsive mobile pages, and with today’s AMP, for the majority of publishers, that is absolutely possible to do. In addition to this being critical to the performance of AMP pages, on November 16, 2017, Google announced that they will exclude pages from the AMP carousel if the content on your AMP page is not substantially similar to that of your mobile responsive page.

This typically means creating brand-new templates for the major landing pages of your site, or if you are using a plugin, using their custom styling options (most of them allow this). If you’re going to take on AMP, it’s imperative that you take the time to get this right.

From our research, you can see in the slide below the results from the 10 sites that adopted AMP. Eight of those sites are colored in green, and those are the sites that saw strong results from their AMP implementation.

Then there are two listed in yellow. Those are the sites that have not yet seen good results. In both of those cases, there were implementation problems. One of the sites (the Lead Gen site above) launched pages with a broken hamburger menu, and a UI that was not up to par with the responsive mobile pages, and their metrics are weak.

We’ve been working with them to fix that and their metrics are steadily improving. The first round of fixes brought the user engagement metrics much closer to that of the mobile responsive pages, but there is still more work to do.

The other site (the retail site in yellow above) launched AMP pages without their normal faceted navigation and without a main menu; they saw really bad results and pulled the pages back down. They’re working on a better AMP implementation now, and hope to relaunch soon.

So, when you think about implementing AMP, you have to go all the way with it and invest the time to do a complete job. That will make it harder, for sure, but that’s OK — you’ll be far better off in the end.


How we did it at Stone Temple (and what we found)

Here at Stone Temple Consulting, we experimented with AMP ourselves, using an AMP plugin versus a hand-coded AMP web page. I’ll share the results of that next.

Experiment No. 1: WordPress AMP plugin

Our site is on WordPress, and there are plugins that make the task of doing AMP easier if you have a WordPress site — however, that doesn’t mean install the plugin, turn it on, and you’re done.

Below you can see a comparison of the standard StoneTemple.com mobile page on the left contrasted with the default StoneTemple.com page that comes out of the AMP plugin we used on the site, AMP by Automattic.

You’ll see that the look and feel is dramatically different between the two, but to be fair to the plugin, we did what I just said you shouldn’t do. We turned it on, did no customization, and thought we were done.

As a result, there’s no hamburger menu. The logo is gone. It turns out that by default, the link at the top (“Stone Temple”) goes to StoneTemple.com/amp, but there’s no page for that, so it returns a 404 error, and the list of problems goes on. As noted, we had not used the customization options available in the plugin, which can be used to rectify most (if not all) of these problems, and the pages can be customized to look a lot better. As part of an ongoing project, we’re working on that.

It’s a lot faster, yes… but is it a better user experience? Looking at the data, we can see the impact of this broken implementation of AMP. The metrics are not good.

Looking at the middle line highlighted in orange, you’ll see the standard mobile page metrics. On the top line, you’ll see the AMP page metrics — and they’re all worse: higher bounce rate, fewer pages per session, and lower average session time.

Looking back to the image of the two web pages, you can see why. We were offering an inferior user interface because we weren’t giving the user any opportunities to interact. Therefore, we got predictable results.

Experiment No. 2: Hand-coded AMP web page

One of the common myths about AMP is that an AMP page needs to be a stripped-down version of your site to succeed. To explore whether or not that was true, we took the time at Stone Temple Consulting to hand-code a version of one of our article pages for AMP. Here is a look at how that came out:

As you can see from the screenshots above, we created a version of the page that looked nearly identical to the original. We also added a bit of extra functionality with a toggle sidebar feature. With that, we felt we made something that had even better usability than the original page.

The result of these changes? The engagement metrics for the AMP pages on StoneTemple.com went up dramatically. For the record, here are our metrics including the handcrafted AMP pages:

As you can see, the metrics have improved dramatically. We still have more that we can do with the handcrafted page as well, and we believe we can get these metrics to be better than that of the standard mobile responsive page. At this point in time, total effort on the handcrafted page template was about 40 hours.

Note: We do believe that we can get engagement on the AMP by Automattic plugin version to go way up, too. One of the reasons we did the hand-coded version was to get hands-on experience with AMP coding. We’re working on a better custom implementation of the AMP by Automattic pages in parallel.


Bonus challenge: AMP analytics

Aside from the actual implementation of AMP, there is a second major issue to be concerned about if you want to be successful: the tracking. The default tracking in Google Analytics for AMP pages is broken, and you’ll need to patch it.

Just to explain what the issue is, let’s look at the following illustration:

The way AMP works (and one of the things that helps with speeding up your web pages) is that your content is served out of a cache on Google. When a user clicks on the AMP link in the search results, that page lives in Google’s cache (on Google.com). That’s the web page that gets sent to the user.

The problem occurs when a user is viewing your web page on Google’s cache, and then clicks on a link within that page (say, to the home page of your site). This action means they leave the Google.com page and get the next page delivered from your server (in the example above, I’m using the StoneTemple.com server.)

From a web analytics point of view, those are two different websites. The analytics for StoneTemple.com is going to view that person who clicked on the AMP page in the Google cache as a visitor from a third-party website, and not a visitor from search. In other words, the analytics for StoneTemple.com won’t record it as a continuation of the same session; it’ll be tracked as a new session.

You can (and should) set up analytics for your AMP pages (the ones running on Google.com), but those are normally going to run as a separate set of analytics. Nearly every action on your pages in the Google cache will result in the user leaving the Google cache, and that will be seen as leaving the site that the AMP analytics is tracking. The result is that in the analytics for your AMP pages running on Google.com:

  • Your pages per session will be about one
  • Bounce rate will be very high (greater than 90 percent)
  • Session times will be very short

Then, for the analytics on your own domain, your number of visitors will not reflect any of the people who arrive on an AMP page first, and will only include those who view a second page on the site (on your main domain). If you try fixing this by adding your AMP analytics visit count to your main site analytics count, you’ll be double counting people that click through from one to the other.

There is a fix for this, and it’s referred to as “session stitching.” This is a really important fix to implement, and Google has provided it by creating an API that allows you to share the client ID information from AMP analytics with your regular website analytics. As a result, the analytics can piece together that it’s a continuation of the same session.

For more, you can see how to implement the fix to remedy both basic and advanced metrics tracking in my article on session stitching here.
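The full setup is in that article, but the heart of the fix is Google’s AMP Client ID API: on your regular (non-AMP) pages you opt your tracker into sharing a client ID with AMP. Assuming the classic analytics.js snippet (the property ID below is just a placeholder), the opt-in looks roughly like this:

  // Regular (non-AMP) pages: opt in to the Google AMP Client ID API so a
  // session that starts on a cached AMP page can be stitched to the session
  // on your own domain. 'UA-XXXXX-Y' is a placeholder property ID.
  ga('create', 'UA-XXXXX-Y', 'auto', { useAmpClientId: true });
  ga('send', 'pageview');

Your AMP pages still need their own amp-analytics configuration pointed at the same property; the opt-in simply lets the two report as one continuous session.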


Wrapping up

AMP can offer some really powerful benefits — improved site speed, better user experience, and more revenue — but only for those publishers that take the time to implement the AMP version of their site thoroughly, and that also address the tracking issue in analytics so they can see the true results.


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!



from The Moz Blog http://ift.tt/2mLsmTc
via IFTTT

Passkeys: What the Heck and Why?

These things called  passkeys  sure are making the rounds these days. They were a main attraction at  W3C TPAC 2022 , gained support in  Saf...