Friday 30 November 2018

Blue Beanie Day 2018

Another year!

I feel the same this year as I have in the past. Web standards, as an overall idea, has entirely taken hold and won the day. That's worth celebrating, as the web would be kind of a joke without them. So now, our job is to uphold them. We need to cry foul when we see a browser go rogue and ship an API outside the standards process. That version of competition is what could lead the web back to a dark place where we're creating browser-specific versions. That becomes painful, we stop doing it, and slowly, the web loses.


The post Blue Beanie Day 2018 appeared first on CSS-Tricks.



from CSS-Tricks https://ift.tt/2PcN2N0
via IFTTT

Nesting Components in Figma

Embed a Blog Onto Any Website With DropInBlog

Local Search Ranking Factors 2018: Local Today, Key Takeaways, and the Future

Posted by Whitespark

In the past year, local SEO has run at a startling and near-constant pace of change. From an explosion of new Google My Business features to an ever-increasing emphasis on the importance of reviews, it's almost too much to keep up with. In today's Whiteboard Friday, we welcome our friend Darren Shaw to explain what local is like today, dive into the key takeaways from his 2018 Local Search Ranking Factors survey, and offer us a glimpse into the future according to the local SEO experts.


Video Transcription

Howdy, Moz fans. I'm Darren Shaw from Whitespark, and today I want to talk to you about the local search ranking factors. So this is a survey that David Mihm has run for the past like 10 years. Last year, I took it over, and it's a survey of the top local search practitioners, about 40 of them. They all contribute their answers, and I aggregate the data and say what's driving local search. So this is what the opinion of the local search practitioners is, and I'll kind of break it down for you.

Local search today

So these are the results of this year's survey. We had Google My Business factors at about 25%. That was the biggest piece of the pie. We have review factors at 15%, links at 16%, on-page factors at 14%, behavioral at 10%, citations at 11%, personalization and social at 6% and 3%. So that's basically the makeup of the local search algorithm today, based on the opinions of the people that participated in the survey.

The big story this year is Google My Business. Google My Business factors are way up compared to last year — a 32% increase in Google My Business signals. I'll talk about that a little bit more over in the takeaways. Review signals are also up, so more emphasis on reviews this year from the practitioners. Citation signals are down again, and that makes sense. They continue to decline, I think, for a number of reasons. They used to be the go-to factor for local search: you just built out as many citations as you could. Now the local search algorithm is so much more complicated, and there's so much more to it, that citations are being diluted by all of the other factors. Plus, they used to be a real competitive difference-maker, and now they're not, because everyone is pretty much getting citations. They're considered table stakes now. Seeing a drop here doesn't mean you should stop doing them — they're just not the competitive difference-maker they used to be. You still need to get listed on all of the important sites.

Key takeaways

All right, so let's talk about the key takeaways.

1. Google My Business

The real story this year was Google My Business, Google My Business, Google My Business. Everyone in the comments was talking about the benefits they're seeing from investing in a lot of these new features that Google has been adding.

Google has been adding a ton of new features lately — services, descriptions, Google Posts, Google Q&A. There's a ton of stuff going on in Google My Business now that allows you to populate Google My Business with a ton of extra data. So this was a big one.

✓ Take advantage of Google Posts

Everyone talked about Google Posts, how they're seeing Google Posts driving rankings. There are a couple of things there. One is the semantic content that you're providing Google in a Google post is definitely helping Google associate those keywords with your business. Engagement with Google Posts as well could be driving rankings up, and maybe just being an active business user continuing to post stuff and logging in to your account is also helping to lift your business entity and improve your rankings. So definitely, if you're not on Google Posts, get on it now.

If you search for your category, you'll see a ton of businesses are not doing it. So it's also a great competitive difference-maker right now.

✓ Seed your Google Q&A

Google Q&A — a lot of businesses are not even aware this exists. There's a Q&A section now, and your customers are often asking questions that are being answered by other people, not you. So it's valuable for you to get in there, make sure you're answering those questions, and also seed the Q&A with your own questions. Add all of your own content: if you have a frequently asked questions section on your website, take that content and put it into Google Q&A. Now you're giving lots more content to Google.

✓ Post photos and videos

Photos and videos, continually post photos and videos, maybe even encourage your customers to do that. All of that activity is helpful. A lot of people don't know that you can now post videos to Google My Business. So get on that if you have any videos for your business.

✓ Fill out every field

There are so many new fields in Google My Business. If you haven't edited your listing in a couple of years, there's a lot more stuff in there that you can now populate and give Google more data about your business. All of that really leads to engagement. All of these extra engagement signals that you're now feeding Google, from being a business owner that's engaged with your listing and adding stuff and from users, you're giving them more stuff to look at, click on, and dwell on your listing for a longer time, all that helps with your rankings.

2. Reviews

✓ Get more Google reviews

Reviews continue to increase in importance in local search, so, obviously, getting more Google reviews. It used to be a bit more of a competitive difference-maker. It's becoming more and more table stakes, because everybody seems to be having lots of reviews. So you definitely want to make sure that you are competing with your competition on review count and lots of high-quality reviews.

✓ Keywords in reviews

Getting keywords in reviews: rather than just asking for a review, it's useful to ask your customers to mention which service you provided for them, so you can get those keywords into your reviews.

✓ Respond to reviews (users get notified now!)

Responding to reviews. Google recently started notifying users when the owner responds to their review — you'll get an email. So all of that is really great, and those responses are another signal to Google that you're an engaged business.

✓ Diversify beyond Google My Business for reviews

Diversify. Don't just focus on Google My Business. Look at other prominent review sites in your industry. You can find them by searching Google for your own business name plus "reviews" — you'll see the sites that Google considers important for your particular business.

You can also find out which sites your competitors are getting reviews on. Then if you do a search like keyword plus city — like "lawyers + Denver" — you might find sites that are important for your industry as well that you should be listed on. So check out a couple of your keywords and make sure you're getting reviews on more sites than just Google.

3. Links

Then links, of course, links continue to drive local search. A lot of people in the comments talked about how a handful of local links have been really valuable. This is a great competitive difference-maker, because a lot of businesses don't have any links other than citations. So when you get a few of these, it can really have an impact.

✓ From local industry sites and sponsorships

They really talk about focusing on local-specific sites and industry-specific sites. You can get a lot of those from sponsorships; they're kind of the go-to tactic. If you do a search using the intitle: operator — something like intitle:sponsors plus your city name — you're going to find a lot of sites that list their sponsors, and those are opportunities for you: you could sponsor that event or organization in your city as well and get a link.

The future!

All right. So I also asked in the survey: Where do you see Google going in the future? We got a lot of great responses, and I tried to summarize that into three main themes here for you.

1. Keeping users on Google

This is a really big one. Google does not want to send its users to your website to get the answer. Google wants to have the answer right on Google so that they don't have to click. It's this zero-click search result. So you see Rand Fishkin talking about this. This has been happening in local for a long time, and it's really amplified with all of these new features Google has been adding. They want to have all of your data so that they don't have to send users to find it somewhere else. Then that means in the future less traffic to your website.

So Mike Blumenthal and David Mihm also talk about Google as your new homepage, and this concept is like branded search.

  • What does your branded search look like?
  • So what sites are you getting reviews on?
  • What does your knowledge panel look like?

Make that all look really good, because Google doesn't want to send people to your website.

2. More emphasis on behavioral signals

David Mihm is a strong voice in this. He talks about how Google is trying to diversify how they rank businesses based on what's happening in the real world. They're looking for real-world signals that actual humans care about this business and they're engaging with this business.

So there's a number of things that they can do to track that -- so branded search, how many people are searching for your brand name, how many people are clicking to call your business, driving directions. This stuff is all kind of hard to manipulate, whereas you can add more links, you can get more reviews. But this stuff, this is a great signal for Google to rely on.

Engagement with your listing, engagement with your website, and actual humans in your business. If you've looked at the knowledge panel for a brick-and-mortar business, sometimes it will show popular times — they know when people are actually at your business and have counts of how many people are going into it. So that's a great signal for them to use to understand the prominence of your business: is this a busy business compared to all the other ones in the city?

3. Google will monetize everything

Then, of course, a trend to monetize as much as they can. Google is a publicly traded company. They want to make as much money as possible. They're on a constant growth path. So there are a few things that we see coming down the pipeline.

Local service ads are expanding across the country and globally and in different industries. So this is like a paid program. You have to apply to get into it, and then Google takes a cut of leads. So if you are a member of this, then Google will send leads to you. But you have to be verified to be in there, and you have to pay to be in there.

Then there's taking a cut from bookings — you can now book directly on Google for a lot of different businesses. If you think about Google Flights and Google Hotels, Google is looking for ways to monetize all of this local search opportunity. That's why they're investing heavily in local search: so they can make money from it. So seeing more of these kinds of features roll out in the future is definitely coming. The same goes for transactions on other things — if I book something through Google, Google will take a cut of it.

So that's the future. That's sort of the news of the local search ranking factors this year. I hope it's been helpful. If you have any questions, just leave some comments and I'll make sure to respond to them all. Thanks, everybody.

Video transcription by Speechpad.com


If you missed our recent webinar on the Local Search Ranking Factors survey with Darren Shaw and Dr. Pete, don't worry! You can still catch the recording here:

Check out the webinar

You'll be in for a jam-packed hour of deeper insights and takeaways from the survey, as well as some great audience-contributed Q&A.


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!



from The Moz Blog https://ift.tt/2r8VDH0
via IFTTT

Thursday 29 November 2018

DevTools for Designers

Preventing Content Reflow From Lazy-Loaded Images

You know the concept of lazy loading images. It prevents the browser from loading images until those images are in (or nearly in) the browser's viewport.

There are a plethora of JavaScript-based lazy loading solutions. GitHub has over 3,400 different lazy load repos, and those are just the ones with "lazy load" in a searchable string! Most of them rely on the same trick: Instead of putting an image's URL in the src attribute, you put it in data-src — which is the same pattern for responsive images:

  • JavaScript watches the user scroll down the page
  • When the user encounters an image, JavaScript moves the data-src value into src where it belongs
  • The browser requests the image and it loads into view
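
Stripped of any particular library's specifics, that swap step might look something like this — a simplified sketch using IntersectionObserver, where the rootMargin value is just an arbitrary choice, not a recommendation from any of those repos:

const lazyImages = document.querySelectorAll('img[data-src]');

const observer = new IntersectionObserver((entries, obs) => {
  entries.forEach(entry => {
    if (!entry.isIntersecting) return;
    const img = entry.target;
    img.src = img.dataset.src;       // move data-src into src so the browser requests the image
    img.removeAttribute('data-src');
    obs.unobserve(img);              // no need to keep watching once it has loaded
  });
}, { rootMargin: '200px' });         // start loading a little before the image scrolls into view

lazyImages.forEach(img => observer.observe(img));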

The result is the browser loading fewer images up front so that the page loads faster. Additionally, if the user never scrolls far enough to see an image, that image is never loaded. That equals faster page loads and less data the user needs to spend.

"This is amazing!" you may be thinking. And, you’re right... it is amazing!

That said, it does indeed introduce a noticeable problem: images not containing the src attribute (including when it’s empty or invalid) have no height. This means that they're not the right size in the page layout until they're lazy-loaded.

When a user scrolls and images are lazy-loaded, those img elements go from a height of 0 pixels to whatever they need to be. This causes reflow, where the content below or around the image gets pushed to make room for the freshly loaded image. Reflow is a problem because it's a user-blocking operation. It slows down the browser by forcing it to recalculate the layout of any elements that are affected by that image's shape. The CSS scroll-behavior property may help here at some point, but its support needs to improve before it’s a viable option.

Lazy loading doesn't guarantee that the image will fully load before it enters the viewport. The result is a perceived janky experience, even if it’s a big performance win.

There are other issues with lazy loading images that are worth mentioning but are outside the scope of this post. For example, if JavaScript fails to run at all, then no images will load on the page. That's a common concern for any JavaScript-based solution, but this article is only concerned with solving the problems introduced by reflow.

If we could force pre-loaded images to maintain their normal width and height (i.e. their aspect ratio), we could prevent reflow problems while still lazy loading them. This is something I recently had to solve building a progressive web app at DockYard where I work.

For future reference, there's an HTML attribute called intrinsicsize that's designed to preserve the aspect ratio, but right now, that's just experimental in Chrome.

Here’s how we did it.

Maintaining aspect ratio

There are many ways to go about maintaining aspect ratios. Chris once rounded up an exhaustive list of options, but here's what we're looking at for image-specific options.

The image itself

The image src provides a natural aspect ratio. Even when an image is resized responsively, its natural dimensions still apply. Here's a pretty common bit of responsive image CSS:

img {
  max-width: 100%;
  height: auto;
}

That CSS is telling images not to exceed the width of the element that contains them, but to scale the height properly so that there's no "stretching" or "squishing" as the image is resized. Even if the image has inline height and width attributes, this CSS will keep them behaving nicely on small viewports.

However, that "natural aspect ratio" behavior breaks down if there's no src yet. Browsers don't care about data-src and don't do anything with it, so it’s not really a viable solution for lazy loading reflow; but it is important to help understand the "normal" way images are laid out once they've loaded.

A pseudo-element

Many developers — including myself — have been frustrated trying to use pseudo-elements (e.g. ::before and ::after) to add decorations to img elements. Browsers don't render an image’s pseudo-elements because img is a replaced element, meaning its layout is controlled by an external resource.

However, there is an exception to that rule: If an image’s src attribute is invalid, browsers will render its pseudo-elements. So, if we store the src for an image in data-src and the src is empty, then we can use a pseudo-element to set an aspect ratio:

[data-src]::before {
  content: '';
  display: block;
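  /* padding-top percentages are relative to the element's width, so 56.25% (9 ÷ 16) creates a 16:9 box */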
  padding-top: 56.25%;
}

That'll set a 16:9 aspect ratio on ::before for any element with a data-src attribute. As soon as the data-src becomes the src, the browser stops rendering ::before and the image's natural aspect ratio takes over.

Here’s a demo:

See the Pen Image Aspect Ratio: ::before padding by James Steinbach (@jdsteinbach) on CodePen.

There are a couple drawbacks to this solution, however. First, it relies on CSS and HTML working together. Your stylesheet needs to have a declaration for each image aspect ratio you need to support. It would be much better if the template could insert an image without needing CSS edits.

Second, it doesn't work in Safari 12 and below, or Edge, at the time of writing. That's a pretty big swath of traffic to send poor layouts to. To be fair, maintaining the aspect ratio is a bit of a progressive enhancement — there's nothing "broken" about the final rendered page. Still, it's much better to solve the reflow problem and have images render as expected.

Data URI (Base64) PNGs

Another way we attempted to preserve the aspect ratio was to inline a PNG data URI in the src. Using png-pixel.com will help with the lift of base64-encoding a PNG with any dimensions and color. This can go straight into the image's src attribute in the HTML:

<img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAMAAAACCAQAAAA3fa6RAAAADklEQVR42mNkAANGCAUAACMAA2w/AMgAAAAASUVORK5CYII=" data-src="//picsum.photos/900/600" alt="Lazy loading test image" />

The inline PNG there has a 3:2 aspect ratio (the same aspect ratio as the final image). When src is replaced with the data-src value, the image will maintain its aspect ratio exactly like we want!

Here’s a demo:

See the Pen Image Aspect Ratio: inline base64 PNG by James Steinbach (@jdsteinbach) on CodePen.

And, yes, this approach also comes with some drawbacks. Although the browser support is much better, it's complicated to maintain. We need to generate a base64 string for each new image size, then make that object of strings available to whatever templating tool that’s being used. It's also not the most efficient way to represent this data.

I kept exploring and found a smaller way.

Combine SVG with base64

After exploring the inline PNG option, I wondered if SVG might be a smaller format for inline images and here's what I found: An SVG with a viewBox declaration is a placeholder image with an easily editable native aspect ratio.

First, I tried base64-encoding an SVG. Here's an example of what that looked like in my HTML:

<img src="data:image/svg+xml;base64,PHN2ZyB4bWxucz0naHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmcnIHZpZXdCb3g9JzAgMCAzIDInPjwvc3ZnPg==" data-src="//picsum.photos/900/600" alt="Lazy loading test image">

On small, simple aspect ratios, this is roughly equivalent in size to the base64 PNGs. A 1:1 ratio would be 114 bytes with base64 PNG and 106 bytes with base64 SVG. A 2:3 ratio is 118 bytes with base64 PNG and 106 bytes with base64 SVG.

However, base64 SVGs stay small even for larger, more complex ratios, which is a real winner in file size. A 16:9 ratio is 122 bytes in base64 PNG and 110 bytes in base64 SVG. A 923:742 ratio is 3,100 bytes in base64 PNG but only 114 bytes in base64 SVG! (That's not a common aspect ratio, but I needed to test with custom dimensions for my client's use case.)

Here’s a table to see those comparisons more clearly:

Aspect Ratio | base64 PNG  | base64 SVG
1:1          | 114 bytes   | 106 bytes
2:3          | 118 bytes   | 106 bytes
16:9         | 122 bytes   | 110 bytes
923:742      | 3,100 bytes | 114 bytes

The differences are negligible with simple ratios, but you can see how extremely well SVG scales as ratios become complex.

We've got much better browser support now. This technique is supported by all the big players, including Chrome, Firefox, Safari, Opera, IE11, and Edge, but also has great support in mobile browsers, including Safari iOS, Chrome for Android, and Samsung for Android (from 4.4 up).

Here's a demo:

See the Pen Image Aspect Ratio: inline base64 SVG by James Steinbach (@jdsteinbach) on CodePen.

🏆 We have a winner!

Yes, we do, but stick with me as we improve this approach even more! I remembered Chris suggesting that we should not use base64 encoding with SVG inlined in CSS background-images and thought that advice might apply here, too.

In this case, instead of base64-encoding the SVGs, I used the "Optimized URL-encoded" technique from that post. Here's the markup:

<img src="data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 3 2'%3E%3C/svg%3E" data-src="//picsum.photos/900/600" alt="Lazy loading test image" />

This is just a tad smaller than base64 SVG. The 1:1 ratio is 106 bytes in base64 and 92 bytes URL-encoded. The 16:9 ratio is 110 bytes in base64 and 97 bytes URL-encoded.

If you're interested in more data size by file and encoding format, this demo compares different byte sizes between all of these techniques.

However, the real benefits that make the URL-encoded SVG a clear winner are that its format is human-readable, easily template-able, and infinitely customizable!

You don't need to create a CSS block or generate a base64 string to get a perfect placeholder for images where the dimensions are unknown! For example, here's a little React component that uses this technique:

const placeholderSrc = (width, height) => `data:image/svg+xml,%3Csvg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 ${width} ${height}"%3E%3C/svg%3E`

const LazyImage = ({url, width, height, alt}) => {
  return (
    <img
      src={placeholderSrc(width, height)}
      data-src={url}
      alt={alt} />
  )
}

See the Pen React LazyLoad Image with Stable Aspect Ratio by James Steinbach (@jdsteinbach) on CodePen.

Or, if you prefer Vue:

See the Pen Vue LazyLoad Image with Stable Aspect Ratio by James Steinbach (@jdsteinbach) on CodePen.

I'm happy to report that browser support hasn't changed with this improvement — we've still got the same full support as base64 SVG!

Conclusion

We've explored several techniques to prevent content reflow by preserving the aspect ratio of a lazy-loaded image before the swap happens. The best technique I was able to find is inlined and optimized URL-encoded SVG with image dimensions defined in the viewBox attribute. That can be scripted with a function like this:

const placeholderSrc = (width, height) => `data:image/svg+xml,%3Csvg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 ${width} ${height}"%3E%3C/svg%3E`
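
For example — with hypothetical dimensions — calling it for a 900×600 image produces a data URI like this:

placeholderSrc(900, 600)
// => 'data:image/svg+xml,%3Csvg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 900 600"%3E%3C/svg%3E'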

There are several benefits to this technique:

  • Solid browser support across desktop and mobile
  • Smallest byte size
  • Human-readable format
  • Easily templated without run-time encoding calls
  • Infinitely extensible

What do you think of this approach? Have you used something similar or have a completely different way of handling reflow? Let me know!

The post Preventing Content Reflow From Lazy-Loaded Images appeared first on CSS-Tricks.



from CSS-Tricks https://ift.tt/2RnBgkG
via IFTTT

Wednesday 28 November 2018

The State of Local SEO: Industry Insights for a Successful 2019

Posted by MiriamEllis

A thousand thanks to the 1,411 respondents who gave of their time and knowledge in contributing to this major survey! You’ve created a vivid image of what real-life, everyday local search marketers and local business owners are observing on a day-to-day basis, what strategies are working for them right now, and where some frankly stunning opportunities for improvement reside. Now, we’re ready to share your insights into:

  • Google Updates
  • Citations
  • Reviews
  • Company infrastructure
  • Tool usage
  • And a great deal more...

This survey pooled the observations of everyone from people working to market a single small business, to agency marketers with large local business clients:

Respondents who self-selected as not marketing a local business were filtered from further survey results.

Thanks to you, this free report is a window into the industry. Bring these statistics to teammates and clients to earn the buy-in you need to effectively reach local consumers in 2019.

Get the full report

There are so many stories here worthy of your time

Let’s pick just one, to give a sense of the industry intelligence you’ll access in this report. Likely you’ve now seen the Local Search Ranking Factors 2018 Survey, undertaken by Whitespark in conjunction with Moz. In that poll of experts, we saw Google My Business signals being cited as the most influential local ranking component. But what was #2? Link building.

You might come away from that excellent survey believing that, since link building is so important, all local businesses must be doing it. But not so. The State of the Local SEO Industry Report reveals that:

When asked what’s working best for them as a method for earning links, 35% of local businesses and their marketers admitted to having no link building strategy in place at all:

And that, Moz friends, is what opportunity looks like. Get your meaningful local link building strategy in place in the new year, and prepare to leave ⅓ of your competitors behind, wondering how you surpassed them in the local and organic results.

The full report contains 30+ findings like this one. Rivet the attention of decision-makers at your agency, quote persuasive statistics to hesitant clients, and share this report with teammates who need to be brought up to industry speed. When read in tandem with the Local Search Ranking Factors survey, this report will help your business or agency understand both what experts are saying and what practitioners are experiencing.

Sometimes, local search marketing can be a lonely road to travel. You may find yourself wondering, “Does anyone understand what I do? Is anyone else struggling with this task? How do I benchmark myself?” You’ll find both confirmation and affirmation today, and Moz’s best hope is that you’ll come away a better, bolder, more effective local marketer. Let’s begin!

Download the report


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!



from The Moz Blog https://ift.tt/2SkHdiA
via IFTTT

What If?

Harry Roberts writes about working on a project with a rather nasty design flaw: the website was entirely dependent on images loading before rendering any of the content. He digs into why this is bad for accessibility and performance, but goes further to describe how this can ripple into other problems:

While ever you build under the assumption that things will always work smoothly, you’re leaving yourself completely ill-equipped to handle the scenario that they don’t. Remember the fallacies; think about resilience.

Harry then suggests that we should always ask ourselves a key question when developing a website: what if this image doesn’t load? For example, if the user is on a low-end device, using a flakey network, using an obscure browser, looking at the site without a crucial API or feature available... you get the idea.

While we're on this note, we asked what makes a good front-end developer a little while back and I think this is the best answer to that question after reading Harry's post: a good front-end developer is constantly asking themselves, "What if?"


The post What If? appeared first on CSS-Tricks.



from CSS-Tricks https://ift.tt/2RaxYBs
via IFTTT

Front-End Developers Have to Manage the Loading Experience

Web performance is a huge complicated topic. There are metrics like total requests, page weight, time to glass, time to interactive, first input delay, etc. There are things to think about like asynchronous requests, render blocking, and priority downloading. We often talk about performance budgets and performance culture.

How that first document comes down from the server is a hot topic. That is where most back-end related performance talk enters the picture. It gives rise to architectures like the JAMstack, where gosh, at least we don't have to worry about index.html being slow.

Images have a performance story all to themselves (formats! responsive images!). Fonts also (FOUT'n'friends!). CSS also (talk about render blocking!). Service workers can be involved at every level. And, of course, JavaScript is perhaps the most talked about villain of performance. All of this is balanced with perhaps the most important general performance concept: perceived performance. Front-end developers already have a ton of stuff we're responsible for regarding performance. 80% is the generally quoted number and that sounds about right to me.

For a moment, let's assume we're going to build a site and we're not going to server-side render it. Instead, we're going to load an empty document and kick off data API calls as quickly as we can, then render the site with that data. Not a terribly rare scenario these days. As you might imagine, we now have another major concern: handling the loading experience.

I mused about this the other day. Here's an example:

I'd say that loading experience is pretty janky, and I'm on about the best hardware and internet connection money can buy. It's not a disaster and surely many, many thousands of people use this particular site successfully every day. That said, it doesn't feel fast, smooth, or particularly nice like you'd think a high-budget website would in these Future Times.

Part of the reason is probably because that page isn't server-side rendered. For whatever reason (we can't really know from the outside), that's not the way they went. Could be developer efficiency, security, a temporary state during a re-write... who knows! (It probably isn't ignorance.)

What are we to do? Well, I think this is a somewhat new problem in front-end development. We've told the browser: "Hey, we got this. We're gonna load things all out of order depending on when our APIs cough stuff up to us and our front-end framework decides it's time to do so." I can see the perspective here where this isn't ideal and we've given up something that browsers are incredibly good at only to do it less well ourselves. But hey, like I've laid out a bit here, the world is complicated.

What is actually happening is that these front-end frameworks are aware of this issue and are doing things to help manage it. Back in April of this year, Dan Abramov introduced React Suspense. It seems like a tool for helping front-end devs like us manage the idea that we now need to deal with more loading state stuff than we ever have before:

At about 14 minutes, he gets into fetching data with placeholder components, caching and such. This issue isn't isolated to React, of course, but keeping in that theme, here's a conference talk by Andrew Clark that hit home with me even more quickly (but ultimately uses the same demo and such):

Just the idea of waiting to show spinners for a little bit can go a long way in de-jankifying loading.
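
As a rough illustration of that idea (this isn't code from either talk — the 300ms threshold and element handling are just assumptions), you can hold off on rendering the spinner until a request has been pending for a beat:

// Show a spinner only when loading takes longer than ~300ms
async function loadWithDelayedSpinner(url, spinnerEl) {
  const spinnerTimer = setTimeout(() => {
    spinnerEl.hidden = false;   // only genuinely slow responses ever see a spinner
  }, 300);

  try {
    const response = await fetch(url);
    return await response.json();
  } finally {
    clearTimeout(spinnerTimer); // fast responses never flash a spinner at all
    spinnerEl.hidden = true;
  }
}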

Mikael Ainalem puts a point on this in a recent article, A Brief History of Flickering Spinners. He explains more clearly what I was trying to say:

One reason behind this development is the change we’ve seen in asynchronous programming. Asynchronous programming is a lot easier than it used to be. Most modern languages have good support for loading data on the fly. Modern JavaScript has incorporated Promises and with ES7 comes the async and await keywords. With the async/await keywords one can easily fetch data and process it when needed. This means that we need to think a step further about how we show users that data is loading.

Plus, he offers some solutions!

See the Pen Flickering spinners by Mikael Ainalem (@ainalem) on CodePen.

We've got to get better at this.

The post Front-End Developers Have to Manage the Loading Experience appeared first on CSS-Tricks.



from CSS-Tricks https://ift.tt/2E283sz
via IFTTT

Using a New Correlation Model to Predict Future Rankings with Page Authority

Posted by rjonesx.

Correlation studies have been a staple of the search engine optimization community for many years. Each time a new study is released, a chorus of naysayers seem to come magically out of the woodwork to remind us of the one thing they remember from high school statistics — that "correlation doesn't mean causation." They are, of course, right in their protestations and, to their credit, an unfortunate number of times it seems that those conducting the correlation studies have forgotten this simple aphorism.

We collect a search result. We then order the results based on different metrics like the number of links. Finally, we compare the orders of the original search results with those produced by the different metrics. The closer they are, the higher the correlation between the two.

That being said, correlation studies are not altogether fruitless simply because they don't necessarily uncover causal relationships (ie: actual ranking factors). What correlation studies discover or confirm are correlates.

Correlates are simply measurements that share some relationship with the independent variable (in this case, the order of search results on a page). For example, we know that backlink counts are correlates of rank order. We also know that social shares are correlates of rank order.

Correlation studies also provide us with direction of the relationship. For example, ice cream sales are positive correlates with temperature and winter jackets are negative correlates with temperature — that is to say, when the temperature goes up, ice cream sales go up but winter jacket sales go down.

Finally, correlation studies can help us rule out proposed ranking factors. This is often overlooked, but it is an incredibly important part of correlation studies. Research that provides a negative result is often just as valuable as research that yields a positive result. We've been able to rule out many types of potential factors — like keyword density and the meta keywords tag — using correlation studies.

Unfortunately, the value of correlation studies tends to end there. In particular, we still want to know whether a correlate causes the rankings or is spurious. Spurious is just a fancy-sounding word for "false" or "fake." A good example of a spurious relationship would be that ice cream sales cause an increase in drownings. In reality, the heat of the summer increases both ice cream sales and the number of people who go for a swim, and all that swimming can cause drownings. So while ice cream sales are a correlate of drowning, the relationship is spurious — ice cream does not cause the drowning.

How might we go about teasing out the difference between causal and spurious relationships? One thing we know is that a cause happens before its effect, which means that a causal variable should predict a future change.

An alternative model for correlation studies

I propose an alternate methodology for conducting correlation studies. Rather than measure the correlation between a factor (like links or shares) and a SERP, we can measure the correlation between a factor and changes in the SERP over time.

The process works like this:

  1. Collect a SERP on day 1
  2. Collect the link counts for each of the URLs in that SERP
  3. Look for any URLs that are out of order with respect to links; for example, if position 2 has fewer links than position 3
  4. Record that anomaly
  5. Collect the same SERP in 14 days
  6. Record if the anomaly has been corrected (ie: position 3 now out-ranks position 2)
  7. Repeat across ten thousand keywords and test a variety of factors (backlinks, social shares, etc.)
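
As a rough sketch of steps 3 through 6 — the data shapes and field names here are hypothetical, not Moz's actual tooling — the flip check might look something like this:

// serpDay1 and serpDay14 are arrays of { url, links } ordered by ranking position
function findCorrectedAnomalies(serpDay1, serpDay14) {
  const corrected = [];
  for (let i = 0; i < serpDay1.length - 1; i++) {
    const upper = serpDay1[i];
    const lower = serpDay1[i + 1];
    // Steps 3–4: note any adjacent pair where the lower-ranked URL has more links
    if (lower.links > upper.links) {
      // Steps 5–6: did the lower URL out-rank the upper URL two weeks later?
      const upperPos = serpDay14.findIndex(result => result.url === upper.url);
      const lowerPos = serpDay14.findIndex(result => result.url === lower.url);
      if (upperPos !== -1 && lowerPos !== -1 && lowerPos < upperPos) {
        corrected.push([upper.url, lower.url]);
      }
    }
  }
  return corrected;
}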

So what are the benefits of this methodology? By looking at change over time, we can see whether the ranking factor (correlate) is a leading or lagging feature. A lagging feature can automatically be ruled out as causal. A leading factor has the potential to be a causal factor.

We collect a search result. We record where the search result differs from the expected predictions of a particular variable (like links or social shares). We then collect the same search result 2 weeks later to see if the search engine has corrected the out-of-order results.

Following this methodology, we tested 3 different common correlates produced by ranking factors studies: Facebook shares, number of root linking domains, and Page Authority. The first step involved collecting 10,000 SERPs from randomly selected keywords in our Keyword Explorer corpus. We then recorded Facebook Shares, Root Linking Domains, and Page Authority for every URL. We noted every example where 2 adjacent URLs (like positions 2 and 3 or 7 and 8) were flipped with respect to the expected order predicted by the correlating factor. For example, if the #2 position had 30 shares while the #3 position had 50 shares, we noted that pair. Finally, 2 weeks later, we captured the same SERPs and identified the percent of times that Google rearranged the pair of URLs to match the expected correlation. We also randomly selected pairs of URLs to get a baseline percent likelihood that any 2 adjacent URLs would switch positions. Here were the results...

The outcome

It's important to note that it is incredibly rare to expect a leading factor to show up strongly in an analysis like this. While the experimental method is sound, it's not as simple as a factor predicting the future — it assumes that in some cases we will know about a factor before Google does. The underlying assumption is that in some cases we have seen a ranking factor (like an increase in links or social shares) before Googlebot has, and that in the 2-week period, Google will catch up and correct the incorrectly ordered results. As you can expect, this is a rare occasion. However, with a sufficient number of observations, we should be able to see a statistically significant difference between lagging and leading results. Keep in mind that the methodology only detects cases where a factor is both leading and Moz Link Explorer discovered the relevant factor before Google.

Factor                            | Percent Corrected | P-Value | 95% Min  | 95% Max
Control                           | 18.93%            | 0       |          |
Facebook Shares Controlled for PA | 18.31%            | 0.00001 | -0.6849  | -0.5551
Root Linking Domains              | 20.58%            | 0.00001 | 0.016268 | 0.016732
Page Authority                    | 20.98%            | 0.00001 | 0.026202 | 0.026398

Control:

In order to create a control, we randomly selected adjacent URL pairs in the first SERP collection and determined the likelihood that the second will outrank the first in the final SERP collection. Approximately 18.93% of the time the worse ranking URL would overtake the better ranking URL. By setting this control, we can determine if any of the potential correlates are leading factors - that is to say that they are potential causes of improved rankings.

Facebook Shares:

Facebook Shares performed the worst of the three tested variables. Facebook Shares actually performed worse than random (18.31% vs 18.93%), meaning that randomly selected pairs would be more likely to switch than those where shares of the second were higher than the first. This is not altogether surprising as it is the general industry consensus that social signals are lagging factors — that is to say the traffic from higher rankings drives higher social shares, not social shares drive higher rankings. Subsequently, we would expect to see the ranking change first before we would see the increase in social shares.

RLDs

Raw root linking domain counts performed substantially better than shares at ~20.5%. As I indicated before, this type of analysis is incredibly subtle because it only detects when a factor is both leading and Moz Link Explorer discovered the relevant factor before Google. Nevertheless, this result was statistically significant with a P value <0.0001 and a 95% confidence interval that RLDs will predict future ranking changes around 1.5% greater than random.

Page Authority

By far, the highest performing factor was Page Authority. At 21.5%, PA correctly predicted changes in SERPs 2.6% better than random. This is a strong indication of a leading factor, greatly outperforming social shares and outperforming the best predictive raw metric, root linking domains. This is not surprising: Page Authority is built to predict rankings, so we should expect it to outperform raw metrics in identifying when a shift in rankings might occur. Now, this is not to say that Google uses Moz Page Authority to rank sites, but rather that Moz Page Authority is a relatively good approximation of whatever link metrics Google is using to rank sites.

Concluding thoughts

There are so many different experimental designs we can use to help improve our research industry-wide, and this is just one of the methods that can help us tease out the differences between causal ranking factors and lagging correlates. Experimental design does not need to be elaborate and the statistics to determine reliability do not need to be cutting edge. While machine learning offers much promise for improving our predictive models, simple statistics can do the trick when we're establishing the fundamentals.

Now, get out there and do some great research!


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!



from The Moz Blog https://ift.tt/2FIOSpt
via IFTTT

Tuesday 27 November 2018

SEO Services

TRCREATIVE are experts in all areas of SEO.

SEO — or Search Engine Optimisation, to give it its full name — is the process of optimising your website to appear in Google results for targeted keywords.

If you're looking for local SEO services, look no further than TRCREATIVE.

Front-end development is not a problem to be solved

HTML and CSS are often seen as a burden.

This is a feeling I’ve noticed from engineers and designers I’ve worked with in the past, and it’s a sentiment that’s a lot more transparent with the broader web community at large. You can hear it in Medium posts and on indie blogs, whether in conversations about CSS, web performance, or design tools.

The sentiment is that front-end development is a problem to be solved: “if we just have the right tools and frameworks, then we might never have to write another line of HTML or CSS ever again!” And oh boy what a dream that would be, right?

Well, no, actually. I certainly don’t think that front-end development is a problem at all.

What's behind this feeling? Well, designers want tools that let them draw pictures and export a batch of CSS and HTML files like Dreamweaver promised back in the day. On the other end, engineers don’t want to sweat accessibility, web performance or focus states among many, many other things. There’s simply too many edge cases, too many devices, and too many browsers to worry about. The work is just too much.

Consequently, I empathize with these feelings as a designer/developer myself, but I can't help but get a little upset when I read about someone's relationship with Bootstrap or design systems, frameworks or CSS-in-JS solutions — and even design tools like Sketch or Figma. It appears that we treat front-end development as a burden, or something we want to avoid altogether by abstracting it away with layers of tools.

We should see front-end development as a unique skillset that is critical to the success of any project.

I believe that’s why frameworks and tools like Bootstrap are so popular; not necessarily because they’re a collection of helpful components, but a global solution that corrects an inherent issue. And when I begin to see “Bootstrap” in multiple resumés for front-end applications, I immediately assume that we're going to be at odds with our approaches to design and development.

Bootstrap isn’t a skill though — front-end development is.

And this isn’t me just being a curmudgeon... I hope. I genuinely want tools that help us make better decisions, that help us build accessible, faster, and more beautiful websites in a way that pushes the web forward. That said, I believe the communities built up around these tools encourage designing and developing in a way that's ignorant of front-end skills and standards.

What’s the point in learning about vanilla HTML, CSS and JavaScript if they wind up becoming transpiled by other tools and languages?

Don’t get me wrong — I don’t think there’s anything wrong with Bootstrap, or CSS-in-JS, or CSS Modules, or fancy design tools. But building our careers around the limitations of these tools is a minor tragedy. Front-end development is complex because design is complex. Transpiling our spoken language into HTML and CSS requires vim and nuance, and always will. That’s not going to be resolved by a tool but by diligent work over a long period of time.

I reckon HTML and CSS deserve better than to be processed, compiled, and spat out into the browser, whether that’s through some build process, app export, or gigantic framework library of stuff that we half understand. HTML and CSS are two languages that deserve our care and attention to detail. Writing them is a skill.

I know I’m standing on a metaphorical soapbox here, and perhaps I’m being a tad melodramatic, but front-end development is not a problem to be solved. It’s a cornerstone of the web, and it’s not going away any time soon.

Is it?

The post Front-end development is not a problem to be solved appeared first on CSS-Tricks.



from CSS-Tricks https://ift.tt/2SdW4v2
via IFTTT

FUIF: Responsive Images by Design

Jon Sneyers:

One of the main motivations for FUIF is to have an image format that is responsive by design, which means it’s no longer necessary to produce many variants of the same image: low-quality placeholders, thumbnails, many downscaled versions for many display resolutions. A single file, truncated at different offsets, can do the same thing.

FUIF isn't anywhere near ready to use, but it's a fascinating idea. I love that the format stores the image data in such a way that you can request just the first few kilobytes of the file and essentially get a low-quality version, then request more as needed. See this little demo from Eric Portis that shows it off somewhat via a Service Worker and a progressive JPG.
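
The truncation idea maps naturally onto HTTP Range requests. As a sketch only — the URL and byte count are made up, and the server has to support range requests — fetching a preview could be as simple as asking for the first chunk of the file:

// Request only the first 16 KB of the image to get a low-quality preview
fetch('/photo.fuif', { headers: { Range: 'bytes=0-16383' } })
  .then(response => response.arrayBuffer())
  .then(partialBytes => {
    // With a format like FUIF, these leading bytes would already decode
    // to a usable low-resolution version of the full image
  });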

If this idea ever does get legs and support in browsers, Cloudinary is super well suited to take advantage of that, as they serve the best image format for the current browser — and that is massive for image performance.


The post FUIF: Responsive Images by Design appeared first on CSS-Tricks.



from CSS-Tricks https://ift.tt/2DEELjF
via IFTTT

Monday 26 November 2018

What to Do with Your Old Blog Posts

Posted by -LaurelTaylor-

Around 2005 or so, corporate blogs became the thing to do. Big players in the business world touted that such platforms could “drive swarms of traffic to your main website, generate more product sales” and even “create an additional stream of advertising income” (Entrepreneur Magazine circa 2006). With promises like that, what marketer or exec wouldn’t jump on the blog bandwagon?

Unfortunately, initial forays into branded content did not always dwell on minor issues like “quality” or “entertainment,” instead focusing on sheer bulk and, of course, all the keywords. Now we have learned better, and many corporate blogs are less prolific and offer more value. But on some sites, behind many, many “next page” clicks, this old content can still be found lurking in the background.

This active company blog still features over 900 pages of posts dating back to 2006

This situation leaves current SEOs and content teams in a bit of a pickle. What should you do if your site has excessive quantities of old blog posts? Are they okay just sitting there? Do you need to do something about them?

Why bother addressing old blog posts?

On many sites, the sheer number of pages is the biggest reason to consider improving or scaling back old content. If past content managers chose quantity over quality, heaps of old posts eventually get buried, all evergreen topics have been written about before, and it becomes increasingly hard to keep inventory of your content.

From a technical perspective, depending on the scale of the old content you're dealing with, pruning back the number of pages that you put forward can help increase your crawl efficiency. If Google has to crawl 1,000 URLs to find 100 good pieces of content, they are going to take note and not spend as much time combing through your content in the future.

From a marketing perspective, your content represents your brand, and improving the set of content that you put forward helps shape the way customers see you as an authority in your space. Optimizing and curating your existing content can give your collection of content a clearer topical focus, make it more easily discoverable, and ensure that it provides value for users and the business.

Zooming out for a second to look at this from a higher level: If you've already decided that it's worth investing in blog content for your company, it’s worth getting the most from your existing resources and ensuring that they aren’t holding you back.

Decide what to keep: Inventory and assessment

Inventory

The first thing to do before assessing your blog posts is to make sure you know what you have. A full list of URLs and coordinating metadata is incredibly helpful in both evaluating and documenting.

Depending on the content management system that you use, obtaining this list can be as simple as exporting a database field. Alternatively, URLs can be gleaned from a combination of Google Analytics data, Webmaster Tools, and a comprehensive crawl with a tool such as Screaming Frog. This post gives a good outline of how to get the data you need from these sources.

Regardless of whether you have a list of URLs yet, it’s also good to do a full crawl of your blog to see what the linking structure looks like at this point, and how that may differ from what you see in the CMS.

Assessment

Once you know what you have, it’s time to assess the content and decide if it's worth holding on to. When I do this, I like to ask these 5 questions:

1. Is it beneficial for users?

Content that's beneficial for users is helpful, informative, or entertaining. It answers questions, helps them solve problems, or keeps them interested. This could be anything from a walkthrough for troubleshooting to a collection of inspirational photos.

Screenshots of old real estate articles: one is about how to select a location, and the other is about how to deal with the undead

These 5-year-old blog posts from different real estate blogs illustrate past content that still offers value to current users, and past content that may be less beneficial for a user

2. Is it beneficial for us?

Content that is beneficial to us is earning organic rankings, traffic, or backlinks, or is providing business value by helping drive conversions. Additionally, content that can help establish branding or effectively build topical authority is great to have on any site.

3. Is it good?

While this may be a bit of a subjective question to ask about any content, it’s obvious when you read content that isn’t good. This is about fundamental things such as if content doesn’t make sense, has tons of grammatical errors, is organized poorly, or doesn’t seem to have a point to it.

4. Is it relevant?

If content isn’t at least tangentially relevant to your site, industry, or customers, you should have a really good reason to keep it. If it doesn’t meet any of the former qualifications already, it probably isn’t worth holding on to.

These musings from a blog of a major hotel brand may not be the most relevant to their industry

5. Is it causing any issues?

Problematic content may include duplicate content, duplicate targeting, plagiarized text, content that is a legal liability, or any other number of issues that you probably don’t want to deal with on your site. I find that the assessment phase is a particularly good opportunity to identify posts that target the same topic, so that you can consolidate them.

Using these criteria, you can divide your old blog posts into buckets of “keep” and “don’t keep.” The “don’t keep” can be 301 redirected to either the most relevant related post or the blog homepage. Then it’s time to further address the others.

What to do with the posts you keep

So now you have a pile of “keep” posts to sort out! All the posts that made it this far have already been established to have value of some kind. Now we want to make the most of that value by improving, expanding, updating, and promoting the content.

Improve

When setting out to improve an old post that has good bones, it can be good to start with improvements on targeting and general writing and grammar. You want to make sure that your blog post has a clear point, is targeting a specific topic and terms, and is doing so in proper English (or whatever language your blog may be in).

Once the content itself is in good shape, make sure to add any technical improvements that the piece may need, such as relevant interlinking, alt text, or schema markup.

Then it’s time to make sure it’s pretty. Visual improvements such as adding line breaks, pull quotes, and imagery impact user experience and can keep people on the page longer.

Expand or update

Not all old blog posts are necessarily in poor shape, which can offer a great opportunity. Another way to get more value out of them is to repurpose or update the information that they contain to make old content fresh again. Data says that this is well worth the effort, with business bloggers that update older posts being 74% more likely to report strong results.

A few ways to expand or update a post are to explore a different take on the initial thesis, add newer data, or integrate more recent developments or changed opinions. Alternatively, you could expand on a piece of content by reinterpreting it in another medium, such as new imagery, engaging video, or even as audio content.

Promote

If you've invested resources in content creation and optimization, it only makes sense to try to get as many eyes as possible on the finished product. This can be done in a few different ways, such as sharing and re-sharing on branded social channels, resurfacing posts to the front page of your blog, or even a bit of external promotion through outreach.

The follow-up

Once your blog has been pruned and you’re working on getting the most value out of your existing content, an important final step is to keep tabs on the effect these changes are having.

The most significant measure of success is organic traffic; even if your blog is designed for lead generation or other specific goals, the number of eyes on the page should have a strong correlation to the content's success by other measures as well. For the best representation of traffic totals, I monitor organic sessions by landing page in Google Analytics.

I also like to keep an eye on organic rankings, as you can get an early glimpse of whether a piece is gaining traction around a particular topic before it's successful enough to earn organic traffic with those terms.

Remember that regardless of what changes you’ve made, it will usually take Google a few months to sort out the relevance and rankings of the updated content. So be patient, monitor, and keep expanding, updating, and promoting!


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!



from The Moz Blog https://ift.tt/2r1KFTE
via IFTTT

CSS Grid in IE: Duplicate area names now supported!

You might not need a loop

Ire Aderinokun has written a nifty piece on loops and when we might consider replacing them with other methods, like .map() and .filter(). I particularly like what she has to say here:

As I mentioned earlier, loops are a great tool for a lot of cases, and the existence of these new methods doesn't mean that loops shouldn't be used at all.

I think these methods are great because they provide code that is in a way self-documenting. When we use the filter() method instead of a for loop, it is easier to understand at first glance what the purpose of the logic is.

However, these methods have very specific use cases and may be overkill if their full value isn't being used. An example of this is the map() method, which can technically be used to replace almost any arbitrary loop. If in our first example, we only wanted to modify the original articles array and not create a new, modified, amazingArticles, using this method would be unnecessary. It's important to use the method that suits each scenario, to make sure that we aren't over- or under-performing.
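
To make that concrete, here is a small sketch of the kind of comparison being described. The articles and amazingArticles names echo the excerpt above, but the data shape and the published/amazing properties are made up for illustration.

  // Hypothetical data, just for illustration
  const articles = [
    { title: 'CSS Grid', published: true },
    { title: 'Unfinished draft', published: false },
  ];

  // A plain for loop that keeps only published articles and tags them
  const amazingArticles = [];
  for (let i = 0; i < articles.length; i++) {
    if (articles[i].published) {
      amazingArticles.push({ ...articles[i], amazing: true });
    }
  }

  // The same result with filter() and map(); the intent reads at a glance
  const amazingArticlesAgain = articles
    .filter((article) => article.published)
    .map((article) => ({ ...article, amazing: true }));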

If you’re interested in digging more into this subject, Adam Giese wrote a great post about the .filter() method a short while ago that’s definitely worth checking out. Oh, and speaking of lots of different ways to approach loops, Chris compiled a list of options for looping over querySelectorAll NodeLists where forEach is just one of many options.

Direct Link to ArticlePermalink

The post You might not need a loop appeared first on CSS-Tricks.



from CSS-Tricks https://ift.tt/2QSVQcA
via IFTTT

Friday 23 November 2018

The Current State of Styling Scrollbars

If you need to style your scrollbars right now, one option is to use a collection of ::-webkit-scrollbar pseudo-elements.
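
Those hooks look roughly like this. This is a minimal sketch: the .scrollable class name, colors, and sizes are placeholders, and these pseudo-elements only apply in WebKit/Blink browsers.

  /* .scrollable is a placeholder class name for any scrolling container */
  .scrollable {
    overflow-y: auto;
  }
  .scrollable::-webkit-scrollbar {
    width: 10px;            /* width of the vertical scrollbar */
  }
  .scrollable::-webkit-scrollbar-track {
    background: #f1f1f1;    /* the track the thumb slides along */
  }
  .scrollable::-webkit-scrollbar-thumb {
    background: #888;
    border-radius: 5px;     /* rounded, draggable thumb */
  }
  .scrollable::-webkit-scrollbar-thumb:hover {
    background: #555;
  }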

See the Pen CSS-Tricks Almanac: Scrollbars by Chris Coyier (@chriscoyier) on CodePen.

Sadly, that doesn't help out much for Firefox or Edge, or the ecosystem of browsers around those.

But if that's good enough for what you need, you can get rather classy with it:

See the Pen Custom Scrollbar styling by Devstreak (@devstreak) on CodePen.

There are loads of them on CodePen to browse. It's a nice thing to abstract with a Sass @mixin as well.

There is good news on this front! The standards bodies that be have moved toward standardizing methods to style scrollbars, starting with their gutter (or width). The main property will be scrollbar-gutter, and Geoff has written it up here. Hopefully Autoprefixer will help us as the spec is finalized and browsers start to implement it, so we can write the standardized version and get any prefixed versions from that.

But what if we need cross-browser support?

If styled scrollbars are a necessity (and I don't blame you), then you'll probably have to reach for a JavaScript solution. There must be dozens of libraries for that. I ran across simplebar and it looks like a pretty modern one with easy instantiation. Here's a demo of that:

See the Pen simple-bar by Chris Coyier (@chriscoyier) on CodePen.

Here's another one called simple-scrollbar:

See the Pen simple-scroll-bar by Chris Coyier (@chriscoyier) on CodePen.

Das Surma did a very fancy tutorial that creates a custom scrollbar that doesn't actually require any JavaScript when scrolling (good for perf), but does require some JavaScript setup code.

Custom scrollbars are extremely rare and that’s mostly due to the fact that scrollbars are one of the remaining bits on the web that are pretty much unstylable (I’m looking at you, date picker). You can use JavaScript to build your own, but that’s expensive, low fidelity and can feel laggy. In this article we will leverage some unconventional CSS matrices to build a custom scroller that doesn’t require any JavaScript while scrolling, just some setup code.

I'll embed a copy here:

See the Pen Custom Scrollbar Example from Google Chrome Labs by Chris Coyier (@chriscoyier) on CodePen.

The post The Current State of Styling Scrollbars appeared first on CSS-Tricks.



from CSS-Tricks https://ift.tt/2KuNdSU
via IFTTT

What SEOs Can Learn from AdWords - Whiteboard Friday

Posted by DiTomaso

Organic and paid search aren't always at odds; there are times when there's benefit in knowing how they work together. Taking the time to know the ins and outs of AdWords can improve your rankings and on-site experience. In today's edition of Whiteboard Friday, our fabulous guest host Dana DiTomaso explains how SEOs can improve their game by taking cues from paid search.

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Hi, my name is Dana DiTomaso. I'm President and Partner at Kick Point, and one of the things that we do at Kick Point is we do both SEO and paid. One of the things that's really useful is when SEO and paid work together. But what's even better is when SEOs can learn from paid to make their stuff better.

One of the things that is great about AdWords or Google Ads — whenever you're watching this, it may be called one thing or the other — is that you can learn a lot from what has a high click-through rate, what performs well in paid, and paid is way faster than waiting for Google to catch up to the awesome title tags you've written or the new link building that you've done to see how it's going to perform. So I'm going to talk about four things today that you can learn from AdWords, and really these are easy things to get into in AdWords.

Don't be intimidated by the interface. You can probably just get in there and look at it yourself, or talk to your AdWords person. I bet they'd be really excited that you know what a callout extension is. So we're going to start up here.

1. Negative keywords

The first thing is negative keywords. Negative keywords, obviously really important. You don't want to show up for things that you shouldn't be showing up for.

Often when we need to take over an AdWords account, there aren't a lot of negative keywords. But if it's a well-managed account, there are probably lots of negatives that have been added there over time. What you want to look at is if there's poor word association. So in your industry, cheap, free, jobs, and then things like reviews and coupons, if these are really popular search phrases, then maybe this is something you need to create content for or you need to think about how your service is presented in your industry.

Then what you can do to change that is to see if there's something different that you can do to present this kind of information. What are the kinds of things your business doesn't want? Are you definitely not saying these things in the content of your website? Or is there a way that you can present the opposite opinion to what people might be searching for, for example? So think about that from a content perspective.

2. Title tags and meta descriptions

Then the next thing are title tags and meta descriptions. Title tags and meta descriptions should never be a write it once and forget it kind of thing. If you're an on-it sort of SEO, you probably go in every once in a while and try to tweak those title tags and meta descriptions. But the problem is that sometimes there are just some that aren't performing. So go into Google Search Console, find the title tags that have low click-through rate and high rankings, and then think about what you can do to test out new ones.

Then run an AdWords campaign and test out those title tags in the title of the ad. Test out new ad copy — that would be your meta descriptions — and see what actually brings a higher click-through rate. Then whichever one does, ta-da, that's your new title tags and your meta descriptions. Then add those in and then watch your click-through rate increase or decrease.

Make sure to watch those rankings, because obviously title tag changes can have an impact on your rankings. But if it's something that's keyword rich, that's great. I personally like playing with meta descriptions, because I feel like meta descriptions have a bigger impact on that click-through rate than title tags do, and it's something really important to think about how are we making this unique so people want to click on us. The very best meta description I've ever seen in my life was for an SEO company, and they were ranking number one.

They were obviously very confident in this ranking, because it said, "The people above me paid. The people below me aren't as good as me. Hire me for your SEO." I'm like, "That's a good meta description." So what can you do to bring in especially that brand voice and your personality into those titles, into those meta descriptions and test it out with ads first and see what's going to resonate with your audience. Don't just think about click-through rate for these ads.

Make sure that you're thinking about conversion rate. If you have a really long sales cycle, make sure those leads that you're getting are good, because what you don't want to have happen is have an ad that people click on like crazy, they convert like crazy, and then the customers are just a total trash fire. You really want to make sure you're driving valuable business through this kind of testing. So this might be a bit more of a longer-term piece for you.

3. Word combinations

The third thing you can look at are word combinations.

So if you're not super familiar with AdWords, you may not be familiar with the idea of broad match modifier. So in AdWords we have broad phrases that you can search for, recipes, for example, and then anything related to the word "recipe" will show up. But you could put in a phrase in quotes. You could say "chili recipes." Then if they say, "I would like a chili recipe," it would come up.

If it says "chili crockpot recipes," it would not come up. Now if you had + chili + recipes, then anything with the phrase "chili recipes" would come up, which can be really useful. If you have a lot of different keyword combinations and you don't have time for that, you can use broad match modifier to capture a lot of them. But then you have to have a good negative keyword list, speaking as an AdWords person for a second.

Now one of the things that can really come out of broad match modifier are a lot of great, new content ideas. If you look at the keywords that people had impressions from or clicks from as a result of these broad match modifier keywords, you can find the strangest phrasing that people come up with. There are lots of crazy things that people type into Google. We all know this, especially if it's voice search and it's obviously voice search.

One of the fun things to do is look and see if anybody has "okay Google" and then the search phrase, because they said "okay Google" twice and then Google searched "okay Google" plus the phrase. That's always fun to pick up. But you can also pick up lots of different content ideas, and this can help you modify poorly performing content for example. Maybe you're just not saying the thing in the way in which your audience is saying it.

AdWords gives you totally accurate data on what your customers are thinking and feeling and saying and searching. So why not use that kind of data? So definitely check out broad match modifier stuff and see what you can do to make that better.

4. Extensions

Then the fourth thing is extensions. So extensions are those little snippets that can show up under an ad.

You should always have all of the extensions loaded in, and then maybe Google picks some, maybe they won't, but at least they're there as an option. Now one thing that's great are callout extensions. Those are the little site links that are like free trial, and people click on those, or find out more information or menu or whatever it might be. Now testing language in those callout extensions can help you with your call-to-action buttons.

Especially if you're thinking about things like people want to download a white paper, well, what's the best way to phrase that? What do you want to say for things like a submit button for your newsletter or for a contact form? Those little, tiny pieces, that are called micro-copy, what can you do by taking your highest performing callout extensions and then using those as your call-to-action copy on your website?

This is really going to improve your lead click-through rate. You're going to improve the way people feel about you, and you're going to have that really nice consistency between the language that you see in your advertising and the language that you have on your website, because one thing you really want to avoid as an SEO is to get into that silo where this is SEO and this is AdWords and the two of you aren't talking to each other at all and the copy just feels completely disjointed between the paid side and the organic side.

It should all be working together. So by taking the time to understand AdWords a little bit, getting to know it, getting to know what you can do with it, and then using some of that information in your SEO work, you can improve your on-site experience as well as rankings, and your paid person is probably going to appreciate that you talked to them for a little bit.

Thanks.

Video transcription by Speechpad.com


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!



from The Moz Blog https://ift.tt/2QhZHCQ
via IFTTT

Thursday 22 November 2018

State of Houdini (Chrome Dev Summit 2018)

Here’s a great talk by Das Surma where he looks into what Houdini is and how much of it is implemented in browsers. If you’re unfamiliar with that, Houdini is a series of technologies and APIs that gives developers low level access to how CSS properties work in a fundamental way. Check out Ana Tudor's deep dive into its impact on animations for some incredible examples of it in practice.

What I particularly like about this video is the way Das mentions the CSS Paint API, which lets you do a bunch of bizarre things with CSS, such as creating "squircle" shapes and changing how borders work. It looks wonderfully robust and it should give us super powers in the near future. Ruth John wrote up this extensive overview on the topic earlier this year and it's worth a read as well.
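
For a taste of what the CSS Paint API looks like in practice, here is a minimal sketch. The file names, the demo-checker paint name, and the checkerboard drawing are my own placeholders, not something taken from the talk.

  // paint-worklet.js (hypothetical file): runs inside the paint worklet scope
  registerPaint('demo-checker', class {
    paint(ctx, size) {
      // Draw a simple checkerboard across the element's paint area
      const tile = 16;
      for (let y = 0; y * tile < size.height; y++) {
        for (let x = 0; x * tile < size.width; x++) {
          if ((x + y) % 2 === 0) {
            ctx.fillStyle = '#222';
            ctx.fillRect(x * tile, y * tile, tile, tile);
          }
        }
      }
    }
  });

  // main.js: load the worklet, feature-detecting first
  if ('paintWorklet' in CSS) {
    CSS.paintWorklet.addModule('paint-worklet.js');
  }

  /* CSS usage: .box { background-image: paint(demo-checker); } */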

Direct Link to ArticlePermalink

The post State of Houdini (Chrome Dev Summit 2018) appeared first on CSS-Tricks.



from CSS-Tricks https://www.youtube.com/watch?v=lK3OiJvwgSc
via IFTTT

Add Instant awesomeness to your interfaces with this insanely large icon set

Tuesday 20 November 2018

Announcing the 2018 Local Search Ranking Factors Survey

Posted by Whitespark

It has been another year (and a half) since the last publication of the Local Search Ranking Factors, and local search continues to see significant growth and change. The biggest shift this year is happening in Google My Business signals, but we’re also seeing an increase in the importance of reviews and continued decreases in the importance of citations.

Check out the full survey!

Huge growth in Google My Business

Google has been adding features to GMB at an accelerated rate. They see the revenue potential in local, and now that they have properly divorced Google My Business from Google+, they have a clear runway to develop (and monetize) local. Here are just some of the major GMB features that have been released since the publication of the 2017 Local Search Ranking Factors:

  • Google Posts available to all GMB users
  • Google Q&A
  • Website builder
  • Services
  • Messaging
  • Videos
  • Videos in Google Posts

These features are creating shifts in the importance of factors that are driving local search today. This year has seen the most explosive growth in GMB-specific factors in the history of the survey. GMB signals now make up 25% of the local pack/finder pie chart.

GMB-specific features like Google Posts, Google Q&A, and image/video uploads are frequently mentioned as ranking drivers in the commentary. Many businesses are not yet investing in these aspects of local search, so these features are currently a competitive advantage. You should get on these before everyone is doing it.

Here’s your to do list:

  1. Start using Google posts NOW. At least once per week, but preferably a few times per week. Are you already pushing out posts to Facebook, Instagram, or Twitter? Just use the same, lightly edited, content on Google Posts. Also, use calls to action in your posts to drive direct conversions.
  2. Seed the Google Q&A with your own questions and answers. Feed that hyper-relevant, semantically rich content to Google. Relevance FTW.
  3. Regularly upload photos and videos. (Did you know that you can upload videos to GMB now?)
  4. Make sure your profile is 100% complete. If there is an empty field in GMB, fill it. If you haven’t logged into your GMB account in a while, you might be surprised to see all the new data points you can add to your listing.

Why spend your time on these activities? Besides the potential relevance boost you’ll get from the additional content, you’re also sending valuable engagement signals. Regularly logging into your listing and providing content shows Google that you’re an active and engaged business owner that cares about your listing, and the local search experts are speculating that this is also providing ranking benefits. There’s another engagement angle here too: user engagement. Provide more content for users to engage with and they’ll spend more time on your listing clicking around and sending those helpful behavioral signals to Google.

Reviews on the rise

Review signals have also seen continued growth in importance over last year.

Review signals were 10.8% in 2015, so over the past 3 years, we’ve seen a 43% increase in the importance of review signals.

Many practitioners talked about the benefits they’re seeing from investing in reviews. I found David Mihm’s comments on reviews particularly noteworthy. When asked “What are some strategies/tactics that are working particularly well for you at the moment?”, he responded with:

“In the search results I look at regularly, I continue to see reviews playing a larger and larger role. Much as citations became table stakes over the last couple of years, reviews now appear to be on their way to becoming table stakes as well. In mid-to-large metro areas, even industries where ranking in the 3-pack used to be possible with a handful of reviews or no reviews, now feature businesses with dozens of reviews at a minimum — and many within the last few months, which speaks to the importance of a steady stream of feedback.
Whether the increased ranking is due to review volume, keywords in review content, or the increased clickthrough rate those gold stars yield, I doubt we'll ever know for sure. I just know that for most businesses, it's the area of local SEO I'd invest the most time and effort into getting right -- and done well, should also have a much more important flywheel effect of helping you build a better business, as the guys at GatherUp have been talking about for years.”

Getting keywords in your reviews is a factor that has also risen. In the 2017 survey, this factor ranked #26 in the local pack/finder factors. It is now coming in at #14.

I know this is the Local Search Ranking Factors, and we’re talking about what drives rankings, but you know what’s better than rankings? Conversions. Yes, reviews will boost your rankings, but reviews are so much more valuable than that because a ton of positive reviews will get people to pick up the phone and call your business, and really, that’s the goal. So, if you’re not making the most of reviews yet, get on it!

A quick to do list for reviews would be:

  1. Work on getting more Google reviews (obviously). Ask every customer.
  2. Encourage keywords in the reviews by asking customers to mention the specific service or product in their review.
  3. Respond to every review. (Did you know that Google now notifies the reviewer when the owner responds?)
  4. Don’t only focus on reviews. Actively solicit direct customer feedback as well so you can mark it up in schema/JSON and get stars in the search results (a sketch of that markup follows this list).
  5. Once you’re killing it on Google, diversify and get reviews on the other important review sites for your industry (but also continue to send customers to Google).
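
Here is the kind of first-party review markup point 4 is getting at. It is a minimal sketch: the business name and numbers are placeholders, and the rating data would come from feedback you have collected and published yourself.

  <!-- Hypothetical example: name and rating values are placeholders -->
  <script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Plumbing Co.",
    "aggregateRating": {
      "@type": "AggregateRating",
      "ratingValue": "4.8",
      "reviewCount": "32"
    }
  }
  </script>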

For a more in-depth discussion of review strategy, please see the blog post version of my 2018 MozCon presentation, “How to Convert Local Searchers Into Customers with Reviews.”

Meh, links

To quote Gyi Tsakalakis: “Meh, links.” All other things being equal, links continue to be a key differentiator in local search. It makes sense. Once you have a complete and active GMB listing, your citations squared away, a steady stream of reviews coming in, and solid content on your website, the next step is links. The trouble is, links are hard, but that’s also what makes them such a valuable competitive differentiator. They ARE hard, so when you get quality links they can really help to move the needle.

When asked, “What are some strategies/tactics that are working particularly well for you at the moment?” Gyi responded with:

“Meh, links. In other words, topically and locally relevant links continue to work particularly well. Not only do these links tend to improve visibility in both local packs and traditional results, they're also particularly effective for improving targeted traffic, leads, and customers. Find ways to earn links on the sites your local audience uses. These typically include local news, community, and blog sites.”

Citations?

Let’s make something clear: citations are still very valuable and very important.

Ok, with that out of the way, let's look at what's been happening with citations over the past few surveys.

I think this decline is related to two things:

  1. As local search gets more complex, additional signals are being factored into the algorithm and this dilutes the value that citations used to provide. There are just more things to optimize for in local search these days.
  2. As local search gains more widespread adoption, more businesses are getting their citations consistent and built out, and so citations become less of a competitive difference maker than they were in the past.

Yes, we are seeing citations dropping in significance year after year, but that doesn’t mean you don’t need them. Quite the opposite, really. If you don’t get them, you’re going to have a bad time. Google looks to your citations to help understand how prominent your business is. A well established and popular business should be present on the most important business directories in their industry, and if it’s not, that can be a signal of lower prominence to Google.

The good news is that citations are one of the easiest items to check off your local search to do list. There are dozens of services and tools out there to help you get your business listed and accurate for only a few hundred dollars. Here’s what I recommend:

  1. Ensure your business is listed, accurate, complete, and duplicate-free on the top 10-15 most important sites in your industry (including the primary data aggregators and industry/city-specific sites).
  2. Build citations (but don’t worry about duplicates and inconsistencies) on the next top 30 to 50 sites.

Google has gotten much smarter about citation consistency than they were in the past. People worry about it much more than they need to. An incorrect or duplicate listing on an insignificant business listing site is not going to negatively impact your ability to rank.

You could keep building more citations beyond the top 50, and it won’t hurt, but the law of diminishing returns applies here. As you get deeper into the available pool of citation sites, the quality of these sites decreases, and the impact they have on your local search decreases with it. That said, I have heard from dozens of agencies that swear that “maxing out” all available citation opportunities seems to have a positive impact on their local search, so your mileage may vary. ¯\_(ツ)_/¯

The future of local search

One of my favorite questions in the commentary section is “Comments about where you see Google is headed in the future?” The answers here, from some of the best minds in local search, are illuminating. The three common themes I pulled from the responses are:

  1. Google will continue providing features and content so that they can provide the answers to most queries right in the search results and send less clicks to websites. Expect to see your traffic from local results to your website decline, but don’t fret. You want those calls, messages, and driving directions more than you want website traffic anyway.
  2. Google will increase their focus on behavioral signals for rankings. What better way is there to assess the real-world popularity of a business than by using signals sent by people in the real world. We can speculate that Google is using some of the following signals right now, and will continue to emphasize and evolve behavioral ranking methods:
    1. Searches for your brand name.
    2. Clicks to call your business.
    3. Requests for driving directions.
    4. Engagement with your listing.
    5. Engagement with your website.
    6. Credit card transactions.
    7. Actual human foot traffic in brick-and-mortar businesses.
  3. Google will continue monetizing local in new ways. Local Services Ads are rolling out to more and more industries and cities, ads are appearing right in local panels, and you can book appointments right from local packs. Google isn’t investing so many resources into local out of the goodness of their hearts. They want to build the ultimate resource for instant information on local services and products, and they want to use their dominant market position to take a cut of the sales.

And that does it for my summary of the survey results. A huge thank you to each of the brilliant contributors for giving their time and sharing their knowledge. Our understanding of local search is what it is because of your excellent work and contributions to our industry.

There is much more to read and learn in the actual resource itself, especially in all the comments from the contributors, so go dig into it:

Click here for the full results!


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!



from The Moz Blog https://ift.tt/2FzySWz
via IFTTT

Passkeys: What the Heck and Why?

These things called passkeys sure are making the rounds these days. They were a main attraction at W3C TPAC 2022, gained support in Saf...