Friday, 31 August 2018

New mobile Chrome feature would disable scripts on slow connections

This is a possible upcoming feature for mobile Chrome:

If a Data Saver user is on a 2G-speed or slower network according to the NetInfo API, Chrome disables scripts and sends an intervention header on every resource request. Users are shown a UI at the bottom of the screen indicating the page has been modified to save data. Users can enable scripts on the page by tapping “Show original” in the UI.
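Chrome's actual implementation is internal, but the gating condition described in that quote can be sketched against the NetInfo API's `effectiveType` value plus the Data Saver preference. A rough sketch (the function name and the pure-check shape are my own; in a real page these inputs would come from `navigator.connection.effectiveType` and `navigator.connection.saveData`):

```typescript
// Sketch of the intervention gate described above: scripts are disabled
// only when the user has opted into Data Saver AND the NetInfo API
// reports a 2G-or-slower connection.
type EffectiveType = "slow-2g" | "2g" | "3g" | "4g";

function shouldDisableScripts(
  dataSaverOn: boolean,
  effectiveType: EffectiveType
): boolean {
  const slowTypes: EffectiveType[] = ["slow-2g", "2g"];
  return dataSaverOn && slowTypes.indexOf(effectiveType) !== -1;
}
```

The real feature also sends an intervention header on every resource request and offers the "Show original" escape hatch; this sketch covers only the decision itself.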

And the people shout: progressive enhancement!

Jeremy Keith:

An excellent idea for people in low-bandwidth situations: automatically disable JavaScript. As long as the site is built with progressive enhancement, there’s no problem (and if not, the user is presented with the choice to enable scripts).

Power to the people!

George Burduli reports:

This is huge news for developing countries where mobile data packets may cost a lot and may not be affordable to all. Enabling NoScript by default will make sure that users don’t burn through their data without knowledge. The feature will probably be available in Chrome 69, which will also come with the new Material Design refresh.

The post New mobile Chrome feature would disable scripts on slow connections appeared first on CSS-Tricks.



from CSS-Tricks https://ift.tt/2PSgKsa
via IFTTT

JAMstack_conf

I love a good conference that exists because there is a rising tide in technology. JAMstack_conf:

Static site generators, serverless architectures, and powerful APIs are giving front-end teams fullstack capabilities — without the pain of owning infrastructure. It’s a new approach called the JAMstack.

I'll be speaking at it! I've been pretty interested in all this and trying to learn and document as much as I can.

Direct Link to ArticlePermalink

The post JAMstack_conf appeared first on CSS-Tricks.



from CSS-Tricks https://ift.tt/2wytPy8
via IFTTT

Props and PropTypes in React

Building Better Customer Experiences - Whiteboard Friday

Posted by DiTomaso

Are you mindful of your customer's experience after they become a lead? It's easy to fall into the same old rut of newsletters, invoices, and sales emails, but for a truly exceptional customer experience that improves their retention and love for your brand, you need to go above and beyond. In this week's episode of Whiteboard Friday, the ever-insightful Dana DiTomaso shares three big things you can start doing today that will immensely improve your customer experience and make earning those leads worthwhile.

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Hi, Moz fans. My name is Dana DiTomaso. I'm the President and partner of Kick Point, and today I'm going to talk to you about building better customer experiences. I know that in marketing a lot of our jobs revolve around getting leads and more leads and why can't we have all of the leads.

The typical customer experience:

But in reality, the other half of our job should be making sure that those leads are taken care of when they become customers. This is especially important if you don't have, say, a customer care department. If you do have a customer care department, really you should be interlocking with what they do, because typically what happens, when you're working with a customer, is that after the sale, they usually get surveys.

- Surveys

"How did we do? Please rate us on a scale of 1 to 10," which is an enormous scale and kind of useless. You're a 4, or you're an 8, or you're a 6. Like what actually differentiates that, and how are people choosing that?

- Invoices

Then invoices, like obviously important because you have to bill people, particularly if you have a big, expensive product or you're a SaaS business. But those invoices are sometimes kind of impersonal, weird, and maybe not great.

- Newsletters

Maybe you have a newsletter. That's awesome. But is the newsletter focused on sales? One of the things that we see a lot is, for example, if somebody clicks a link in the newsletter to get to your website, maybe you've written a blog post, and then they see a great big popup to sign up for our product. Well, you're already a customer, so you shouldn't be seeing that popup anymore.

We've seen other sites handle this well. Help Scout, for example, does a great job: they add a newsletter parameter to the end of any URLs they put in their newsletter, and then the popups are suppressed. Because you're already on the newsletter, you shouldn't see a popup encouraging you to sign up for or join the newsletter, which is kind of a crappy experience.
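That suppression pattern is easy to implement. A minimal sketch (the parameter name `utm_source=newsletter` is my assumption for illustration, not necessarily what Help Scout actually uses):

```typescript
// Newsletter links carry a marker query parameter; before showing a
// signup popup, the page checks for it. A visitor arriving from the
// newsletter is already subscribed, so the popup is suppressed.
function shouldShowSignupPopup(pageUrl: string): boolean {
  const params = new URL(pageUrl).searchParams;
  return params.get("utm_source") !== "newsletter";
}
```

The same check works for any "you're already a customer" signal, such as a logged-in cookie.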

- Sales emails

Then the last thing are sales emails. This is my personal favorite, and this can really be avoided if you go into account-based marketing automation instead of personal-based marketing automation.

We had a situation with a hosting company where I was the customer: the account was in my name, and we'd signed up all of our clients under it. Then one of our developers created a new account because she needed to access something, and immediately the sales emails started. They didn't realize we were on the same domain and already a customer, so they probably shouldn't have been doing the hard sell on her. We've had this happen again and again.

So just really make sure that you're not sending your customers or people who work at the same company as your customers sales emails. That's a really cruddy customer experience. It makes it look like you don't know what's going on. It really can destroy trust.

Tips for an improved customer experience

So instead, here are some extra things that you can do. I mean fix some of these things if maybe they're not working well. But here are some other things you can do to really make sure your customers know that you love them and you would like them to keep paying you money forever.

1. Follow them on social media

So the first thing is following them on social. So what I really like to do is use a tool such as FullContact. You can take everyone's email addresses, run them through FullContact, and it will come back to you and say, "Here are the social accounts that this person has." Then you go on Twitter and you follow all of these people for example. Or if you don't want to follow them, you can make a list, a hidden list with all of their social accounts in there.

Then you can see what they share. A tool like Nuzzel (N-U-Z-Z for Americans, zed zed for Canadians, N-U-Z-Z-E-L) is a great tool where you can say, "Tell me the things that the people I follow on social, or the people on this particular list, are sharing and engaging with." Then you can see what your customers are really interested in, which can give you a good sense of what kinds of things you should be talking about.

A company that does this really well is InVision, which is the app that allows you to share prototypes with clients, particularly design prototypes. So they have a blog, and a lot of that blog content is incredibly useful. They're clearly paying attention to their customers and the kinds of things they're sharing based on how they build their blog content. So then find out if you can help and really think about how I can help these customers through the things that they share, through the questions that they're asking.

Then make sure to watch unbranded mentions too. It's not particularly hard to monitor a specific list of people and see if they tweet things like, "I really hate my (insert what you are) right now," for example. Then you can head that off at the pass maybe because you know that this was this customer. "Oh, they just had a bad experience. Let's see what we can do to fix it," without being like, "Hey, we were watching your every move on Twitter. Here's something we can do to fix it."

Maybe not quite that creepy, but the idea is to follow these people and watch for those unbranded mentions so you can head off, at the pass, a potentially angry customer or a customer who is about to leave. It's way cheaper to keep an existing customer than to get a new one.

2. Post-sale monitoring

So the next thing is post-sale monitoring. What I would like you to do is create a fake customer. If you have lots of sales personas, create a fake customer for each of those personas, and then those customers should get all the emails, invoices, and everything else that a regular customer in that persona group would get.

Then take a look at those accounts. Are you awesome, or are you super annoying? Do you hear nothing for a year, except for invoices, and then, "Hey, do you want to renew?" How is that conversation going between you and that customer? So really try to pay attention to that. It depends on your organization if you want to tell people that this is what's happening, but you really want to make sure that that customer isn't receiving preferential treatment.

So you want to make sure that it's kind of not obvious to people that this is the fake customer so they're like, "Oh, well, we're going to be extra nice to the fake customer." They should be getting exactly the same stuff that any of your other customers get. This is extremely useful for you.

3. Better content

Then the third thing is better content. I think, in general, any organization should reward content differently than we do currently.

Right now, we have a huge focus on new content, new content, new content all the time, when in reality, some of your best-performing posts might be old content and maybe you should go back and update them. So what we like to tell people about is the Microsoft model of rewarding. They've used this to reward their employees, and part of it isn't just new stuff. It's old stuff too. So the way that it works is 33% is what they personally have produced.

So this would be new content, for example. Then 33% is what they've shared. So think about for example on Slack if somebody shares something really useful, that's great. They would be rewarded for that. But think about, for example, what you can share with your customers and how that can be rewarding, even if you didn't write it, or you can create a roundup, or you can put it in your newsletter.

Like what can you do to bring value to those customers? Then the last 33% is what they shared that others produced. So is there a way that you can amplify other voices in your organization and make sure that that content is getting out there? Certainly in marketing, and especially if you're in a large organization, maybe you're really siloed, maybe you're an SEO and you don't even talk to the paid people, there's cool stuff happening across the entire organization.

A lot of what you can bring is taking that stuff that others have produced, maybe you need to turn it into something that is easy to share on social media, or you need to turn it into a blog post or a video, like Whiteboard Friday, whatever is going to work for you, and think about how you can amplify that and get it out to your customers, because it isn't just marketing messages that customers should be seeing.

They should be seeing all kinds of messages from across your organization, because when a customer gives you money, it isn't just because your marketing message was great. It's because they believe in the thing that you are giving them. So by reinforcing that belief through the types of content you create, share, and amplify for your customers, you can certainly improve that relationship and turn your average, run-of-the-mill customer into an actual raving fan. Not only will they stay longer (it's so much cheaper to keep an existing customer than to get a new one), but they'll also refer people to you, which is a lot easier than buying a lot of ads or spending a ton of money and effort on SEO.

Thanks!

Video transcription by Speechpad.com


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!



from The Moz Blog https://ift.tt/2oqR3CB
via IFTTT

Thursday, 30 August 2018

CSS Shape Editors

The Ecological Impact of Browser Diversity

The Ultimate Guide to Headless CMS

Struggling to engage your customers with seamless omnichannel digital experiences?

Then headless CMS is the technology you’ve been waiting for. But with all the buzz around this new technology, you might be feeling a bit lost.

Download our free headless CMS guide and get all the information you need to understand headless CMS architecture and multichannel content management, learn how to future-proof your content against any upcoming technology, and see the benefits of being programming-language agnostic.

Grab your complimentary The Ultimate Guide to Headless CMS eBook.

The post The Ultimate Guide to Headless CMS appeared first on CSS-Tricks.



from CSS-Tricks https://ift.tt/2LG9Gf4
via IFTTT

Wednesday, 29 August 2018

“View Source” in DevTools

An Intro to Web Site Testing with Cypress

The Long-Term Link Acquisition Value of Content Marketing

Posted by KristinTynski

Recently, new internal analysis of our work here at Fractl has yielded a fascinating finding:

Content marketing that generates mainstream press is likely 2X as effective as originally thought. Additionally, the long-term ROI is potentially many times higher than previously reported.

I’ll caveat that by saying this applies only to content that can generate mainstream press attention. At Fractl, this is our primary focus as a content marketing agency. Our team, our process, and our research are all structured around figuring out ways to maximize the newsworthiness and promotional success of the content we create on behalf of our clients.

Though data-driven content marketing paired with digital PR is on the rise, there is still a general lack of understanding around the long-term value of any individual content execution. In this exploration, we sought to answer the question: What link value does a successful campaign drive over the long term? What we found was surprising and strongly reiterated our conviction that this style of data-driven content and digital PR yields some of the highest possible ROI for link building and SEO.

To better understand this full value, we wanted to look at the long-term accumulation of the two types of links on which we report:

  1. Direct links from publishers to our client’s content on their domain
  2. Secondary links that link to the story the publisher wrote about our client’s content

While direct links are most important, secondary links often provide significant additional pass-through authority and can often be reclaimed through additional outreach and converted into direct do-follow links (something we have a team dedicated to doing at Fractl).

Below is a visualization of the way our content promotion process works:

So how exactly do direct links and secondary links accumulate over time?

To understand this, we did a full audit of four successful campaigns from 2015 and 2016 through today. Having a few years of aggregation gave us an initial benchmark for how links accumulate over time for general interest content that is relatively evergreen.

We profiled four campaigns:

The first view we looked at was direct links, or links pointing directly to the client blog posts hosting the content we’ve created on their behalf.

There is a good deal of variability between campaigns, but we see a few interesting general trends that show up in all of the examples in the rest of this article:

  1. Both direct and secondary links will accumulate in a few predictable ways:
    1. A large initial spike with a smooth decline
    2. A buildup to a large spike with a smooth decline
    3. Multiple spikes of varying size
  2. Roughly 50% of the total volume of links that will be built will accumulate in the first 30 days. The other 50% will accumulate over the following two years and beyond.
  3. A small subset of direct links will generate their own large spikes of secondary links.

We'll now take a look at some specific results. Let’s start by looking at direct links (pickups that link directly back to our client’s site or landing page).

The typical result: A large initial spike with consistent accumulation over time

This campaign, featuring artistic imaginings of what bodies in video games might look like with normal BMI/body sizes, shows the most typical pattern we witnessed, with a very large initial spike and a relatively smooth decline in link acquisition over the first month.

After the first month, long-term new direct link acquisition continued for more than two years (and is still going today!).

The less common result: Slow draw up to a major spike

In this example, you can see that sometimes it takes a few days or even weeks to see the initial pickup spike and subsequent primary syndication. In the case of this campaign, we saw a slow buildup to the peak about a week after the first pickup (an exclusive), with a gradual decline over the following two weeks.


Zooming out to a month-over-month view, we can see resurgences in pickups happening at unpredictable intervals every few months or so. These spikes continued up until today with relative consistency. This happened as some of the stories written during the initial spike began to rank well in Google. These initial stories were then used as fodder or inspiration for stories written months later by other publications. For evergreen topics such as body image (as was the case in this campaign), you will also see writers and editors cycle in and out of writing about these topics as they trend in the public zeitgeist, leading to these unpredictable yet very welcomed resurgences in new links.

Least common result: Multiple spikes in the first few weeks

The third pattern we observed was seen on a campaign we executed examining hate speech on Twitter. In this case, we saw multiple spikes during this early period, corresponding to syndications on other mainstream publications that then sparked their own downstream syndications and individual virality.

Zooming out, we saw a similar result as the other examples, with multiple smaller spikes more within the first year and less frequently in the following two years. Each of these bumps is associated with the story resurfacing organically on new publications (usually a writer stumbling on coverage of the content during the initial phase of popularity).

Long-term resurgences

Finally, in our fourth example that looked at germs on water bottles, we saw a fascinating phenomenon happen beyond the first month where there was a very significant secondary spike.

This spike represents syndication across (all or most) of the iHeartRadio network. As this example demonstrates, it isn’t wholly unusual to see large-scale networks pick up content even a year or later that rival or even exceed the initial month’s result.

Aggregate trends


When we looked at direct links back to all four campaigns together, we saw the common progression of link acquisition over time. The chart below shows the distribution of new links acquired over two years. We saw a pretty classic long tail distribution here, where 50% of the total links acquired happened in the first month, and the other 50% were acquired in the following two to three years.

"If direct links are the cake, secondary links are the icing, and both accumulate substantially over time."

Links generated directly to the blog posts/landing pages of the content we’ve created on our clients’ behalf are only really a part of the story. When a campaign garners mainstream press attention, the press stories can often go mildly viral, generating large numbers of syndications and links to these stories themselves. We track these secondary links and reach out to the writers of these stories to try and get link attributions to the primary source (our clients’ blog posts or landing pages where the story/study/content lives).

These types of links also follow a similar pattern over time to direct links. Below are the publishing dates of these secondary links as they were found over time. Their over-time distribution follows the same pattern, with 50% of results being realized within the first month and the following 50% of the value coming over the next two to three years.

The value in the long tail

By looking at multi-year direct and secondary links built to successful content marketing campaigns, it becomes apparent that the total number of links acquired during the first month is really only about half the story.

For campaigns that garner initial mainstream pickups, there is often a multi-year long tail of links that are built organically without any additional or future promotions work beyond the first month. While this long-term value is not something we report on or charge our clients for explicitly, it is extremely important to understand as a part of a larger calculus when trying to decide if doing content marketing with the goal of press acquisition is right for your needs.

Cost-per-link (a typical way to measure the ROI of such campaigns) will halve if links built are measured over these longer periods, moving a project you perhaps considered a marginal success at one month to a major success at one year.
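To make the halving concrete with invented numbers: a campaign that costs $10,000 and earns 100 links in month one has a cost-per-link of $100; if the long tail doubles the total to 200 links over the following years at no additional spend, the effective cost-per-link falls to $50.

```typescript
// Cost-per-link arithmetic behind the claim above (all figures invented).
function costPerLink(campaignCost: number, linksAcquired: number): number {
  return campaignCost / linksAcquired;
}

const monthOneCPL = costPerLink(10000, 100); // at the one-month mark
const longTermCPL = costPerLink(10000, 200); // once the long tail doubles the link count
```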





from The Moz Blog https://ift.tt/2Pgs6VD
via IFTTT

Tuesday, 28 August 2018

A Quarter-Million Reasons to Use Moz's Link Intersect Tool

Posted by rjonesx.

Let me tell you a story.

It begins with me in a hotel room halfway across the country, trying to figure out how I'm going to land a contract from a fantastic new lead, worth $250,000 annually. We weren't in over our heads by any measure, but the potential client was definitely looking at what most would call "enterprise" solutions, and we weren't exactly "enterprise."

Could we meet their needs? Hell yes we could — better than our enterprise competitors — but there's a saying that "no one ever got fired for hiring IBM"; in other words, it's always safe to go with the big guys. We weren't an IBM, so I knew that by reputation alone we were in trouble. The RFP was dense, but like most SEO gigs, there wasn't much in the way of opportunity to really differentiate ourselves from our competitors. It would be another "anything they can do, we can do better" meeting where we grasp for reasons why we were better. In an industry where so many of our best clients require NDAs that prevent us from producing really good case studies, how could I prove we were up to the task?

In less than 12 hours we would be meeting with the potential client and I needed to prove to them that we could do something that our competitors couldn't. In the world of SEO, link building is street cred. Nothing gets the attention of a client faster than a great link. I knew what I needed to do. I needed to land a killer backlink, completely white-hat, with no new content strategy, no budget, and no time. I needed to walk in the door with more than just a proposal — I needed to walk in the door with proof.

I've been around the block a few times when it comes to link building, so I wasn't at a loss when it came to ideas or strategies we could pitch, but what strategy might actually land a link in the next few hours? I started running prospecting software left and right — all the tools of the trade I had at my disposal — but imagine my surprise when the perfect opportunity popped up right in little old Moz's Open Site Explorer Link Intersect tool. To be honest, I hadn't used the tool in ages. We had built our own prospecting software on APIs, but the perfect link just popped up after adding in a few of their competitors on the off chance that there might be an opportunity or two.

There it was:

  1. 3,800 root linking domains to the page itself
  2. The page was soliciting submissions
  3. Took pull requests for submissions on GitHub!

I immediately submitted a request and began the refresh game, hoping the repo was being actively monitored. By the next morning, we had ourselves a link! Not just any link, but despite the client having over 50,000 root linking domains, this was now the 15th best link to their site. You can imagine me anxiously awaiting the part of the meeting where we discussed the various reasons why our services were superior to that of our competitors, and then proceeded to demonstrate that superiority with an amazing white-hat backlink acquired just hours before.

The quarter-million-dollar contract was ours.

Link Intersect: An undervalued link building technique

Backlink intersect is one of the oldest link building techniques in our industry. The methodology is simple. Take a list of your competitors and identify the backlinks pointing to their sites. Compare those lists to find pages that overlap. Pages which link to two or more of your competitors are potentially resource pages that would be interested in linking to your site as well. You then examine these sites and do outreach to determine which ones are worth contacting to try and get a backlink.
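The overlap step reduces to a set-intersection count: tally, for every linking page, how many competitors it links to, and keep the pages that hit some threshold. A sketch of that core idea (data shapes and names are mine, not Moz's API):

```typescript
// Core of the link-intersect technique described above: count how many
// competitors each page links to, keep pages at or above the overlap
// threshold, and return them best-first.
function linkIntersect(
  backlinksByCompetitor: Map<string, Set<string>>,
  minOverlap: number = 2
): string[] {
  const counts = new Map<string, number>();
  backlinksByCompetitor.forEach((pages) => {
    pages.forEach((page) => {
      counts.set(page, (counts.get(page) || 0) + 1);
    });
  });
  const candidates: Array<[string, number]> = [];
  counts.forEach((n, page) => {
    if (n >= minOverlap) candidates.push([page, n]);
  });
  candidates.sort((a, b) => b[1] - a[1]); // most overlaps first
  return candidates.map((entry) => entry[0]);
}
```

The tool layers authority metrics and filtering on top, but this counting step is what surfaces likely resource pages.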

Let's walk through a simple example using Moz's Link Intersect tool.

Getting started

We start on the Link Intersect page of Moz's new Link Explorer. While we had Link Intersect in the old Open Site Explorer, you're going to want to use our new Link Intersect, which is built from our giant index of 30 trillion links and is far more powerful.

For our example here, I've chosen a random gardening company in Durham, North Carolina called Garden Environments. The website has a Domain Authority of 17 with 38 root linking domains.

We can go ahead and copy-paste the domain into "Discover Link Opportunities for this URL" at the top of the Link Intersect page. If you notice, we have the choice of "Root Domain, Subdomain, or Exact Page":

Choose between domain, subdomain or page

I almost always choose "root domain" because I tend to be promoting a site as a whole and am not interested in acquiring links to pages on the site from other sites that already link somewhere else on the site. That is to say, by choosing "root domain," any site that links to any page on your site will be excluded from the prospecting list. Of course, this might not be right for your situation. If you have a hosted blog on a subdomain or a hosted page on a site, you will want to choose subdomain or exact page to make sure you rule out the right backlinks.

You also have the ability to choose whether we report back to you root linking domains or backlinks. This is really important, and I'll explain why.

choose between page or domain

Depending on your link building campaign, you'll want to vary your choice here. Let's say you're looking for resource pages that you can list your website on. If that's the case, you will want to choose "pages." The Link Intersect tool will then prioritize pages that have links to multiple competitors on them, which are likely to be resource pages you can target for your campaign. Now, let's say you would rather find publishers that talk about your competitors and are less concerned about them linking from the same page. You want to find sites that have linked to multiple competitors, not pages. In that case, you would choose "domains." The system will then return the domains that have links to multiple competitors and give you example pages, but you won't be limited only to pages with multiple competitors on them.

In this example, I'm looking for resource pages, so I chose "pages" rather than domains.

Choosing your competitor sites

A common mistake made at this point is to choose exact competitors. Link builders will often copy and paste a list of their biggest competitors and cross their fingers for decent results. What you really want are the best link pages and domains in your industry — not necessarily your competitors.

In this example I chose the gardening page on a local university, a few North Carolina gardening and wildflower associations, and a popular page that lists nurseries. Notice that you can choose subdomain, domain, or exact page as well for each of these competitor URLs. I recommend choosing the broadest category (domain being broadest, exact page being narrowest) that is relevant to your industry. If the whole site is relevant, go ahead and choose "domain."

Analyzing your results

The results returned will prioritize pages that link to multiple competitors and have a high Domain Authority. Unlike some of our competitors' tools, if you put in a competitor that doesn't have many backlinks, it won't cause the whole report to fail. We list all the intersections of links, starting with the most and narrowing down to the fewest. Even though the nurseries website doesn't provide any intersections, we still get back great results!

analyze link results

Now we have some really great opportunities, but at this point you have two choices. If you really prefer, you can just export the opportunities to CSV like any other tool on the market, but I prefer to go ahead and move everything over into a Link Tracking List.

add to link list

By moving everything into a link list, we're going to be able to track link acquisition over time (once we begin reaching out to these sites for backlinks) and we can also sort by other metrics, leave notes, and easily remove opportunities that don't look fruitful.

What did we find?

Remember, we started off with a site that has barely any links, but we turned up dozens of easy opportunities for link acquisition. We turned up a simple resources page on forest resources, a potential backlink which could easily be earned via a piece of content on forest stewardship.

example opportunity

We turned up a great resource page on how to maintain healthy soil and yards on a town government website. A simple guide covering the same topics here could easily earn a link from this resource page on an important website.

example opportunity 2

These were just two examples of easy link targets. From community gardening pages, websites dedicated to local creek, pond, and stream restoration, and general enthusiast sites, the Link Intersect tool turned up simple backlink gold. What is most interesting to me, though, was that these resource pages never included the words "resources" or "links" in the URLs. Common prospecting techniques would have just missed these opportunities altogether.

While it wasn't the focus of this particular campaign, I did also try the alternative of "show domains" rather than "pages" that link to the competitors. We found similarly useful results using this methodology.

example list of domains opportunity

For example, we found CarolinaCountry.com had linked to several of the competitor sites and, as it turns out, would be a perfect publication to pitch a story to as part of a PR campaign promoting the gardening site.

Takeaways

The new Link Intersect tool in Moz's Link Explorer combines the power of our new incredible link index with the complete features of a link prospecting tool. Competitor link intersect remains one of the most straightforward methods for finding link opportunities and landing great backlinks, and Moz's new tool coupled with Link Lists makes it easier than ever. Go ahead and give it a run yourself — you might just find the exact link you need right when you need it.

Find link opportunities now!


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!



from The Moz Blog https://ift.tt/2LvUvVC
via IFTTT

Super-Powered Grid Components with CSS Custom Properties

Reinvest Your Time With HelloSign API

(This is a sponsored post.)

G2 Crowd says HelloSign's API is 2x faster to implement than any other eSign provider. What are you going to do with all the time you save? Try it free today!

Direct Link to ArticlePermalink

The post Reinvest Your Time With HelloSign API appeared first on CSS-Tricks.



from CSS-Tricks https://synd.co/2nIEPEO
via IFTTT

Monday, 27 August 2018

A Tale of Two Buttons

I enjoy front-end developer thought progression articles like this one by James Nash. Say you have a button which needs to work in "normal" conditions (light backgrounds) and one with reverse-colors for going on dark backgrounds. Do you have a modifier class on the button itself? How about on the container? How can inheritance and the cascade help? How about custom properties?

I think embracing CSS’s cascade can be a great way to encourage consistency and simplicity in UIs. Rather than every new component being a free for all, it trains both designers and developers to think in terms of aligning with and re-using what they already have.

Direct Link to ArticlePermalink

The post A Tale of Two Buttons appeared first on CSS-Tricks.



from CSS-Tricks https://ift.tt/2BErrf4
via IFTTT

A native lazy load for the web platform

A new Chrome feature dubbed "Blink LazyLoad" is designed to dramatically improve performance by deferring the load of below-the-fold images and third-party <iframe>s.

The goals of this bold experiment are to improve the overall render speed of content that appears within a user’s viewport (also known as above-the-fold), as well as reduce network data and memory usage. ✨

👨‍🏫 How will it work?

It’s thought that temporarily delaying less important content will drastically improve overall perceived performance.

If this proposal is successful, automatic optimizations will be run during the load phase of a page:

  • Images and iFrames will be analysed to gauge importance.
  • If they’re seen to be non-essential, they will be deferred, or not loaded at all:
    • Deferred items will only be loaded if the user has scrolled to the area nearby.
    • A blank placeholder image will be used until an image is fetched.

The public proposal has a few interesting details:

  • LazyLoad is made up of two different mechanisms: LazyImages and LazyFrames.
  • Deferred images and iFrames will be loaded when a user has scrolled within a given number of pixels. The number of pixels will vary based on three factors:
  • Once the browser has established that an image is located below the fold, it will issue a range request to fetch the first few bytes of an image to establish its dimensions. The dimensions will then be used to create a placeholder.

The lazyload attribute will allow authors to specify which elements should or should not be lazy loaded. Here’s an example that indicates that this content is non-essential:

<iframe src="ads.html" lazyload="on"></iframe>

There are three options:

  • on - Indicates a strong preference to defer fetching until the content can be viewed.
  • off - Fetch this resource immediately, regardless of view-ability.
  • auto - Let the browser decide (has the same effect as not using the lazyload attribute at all).
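To make the scroll-distance idea concrete, here's a rough sketch of the underlying check in plain JavaScript. The function name and the 400px threshold are made up for illustration; Chrome derives its own distance from factors like connection speed.

```javascript
// Decide whether a below-the-fold resource is close enough to start loading.
// `thresholdPx` is a hypothetical preload distance; Chrome picks its own.
function shouldLoad(elementTop, scrollTop, viewportHeight, thresholdPx) {
  // Load once the element's top edge is within `thresholdPx` pixels
  // below the bottom of the viewport.
  return elementTop <= scrollTop + viewportHeight + thresholdPx;
}

shouldLoad(2000, 0, 800, 400);   // false, still too far away to fetch
shouldLoad(2000, 900, 800, 400); // true, within the preload distance
```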

🔒 Implementing a secure LazyLoad policy

Feature policy: LazyLoad will provide a mechanism that allows authors to force opting in or out of LazyLoad functionality on a per-domain basis (similar to how Content Security Policies work). There is a yet-to-be-merged pull request that describes how it might work.

🤔 What about backwards compatibility?

At this point, it is difficult to tell if these page optimizations could cause compatibility issues for existing sites.

Third party iFrames are used for a large number of purposes like ads, analytics or authentication. Delaying or not loading a crucial iFrame (because the user never scrolls that far) could have dramatic unforeseeable effects. Pages that rely on an image or iFrame having been loaded and present when onLoad fires could also face significant issues.

These automatic optimizations could silently and efficiently speed up Chrome’s rendering speed without any notable issues for users. The Google team behind the proposal are carefully measuring the performance characteristics of LazyLoad’s effects through metrics that Chrome records.

💻 Enabling LazyLoad

At the time of writing, LazyLoad is only available in Chrome Canary, behind two required flags:

  • chrome://flags/#enable-lazy-image-loading
  • chrome://flags/#enable-lazy-frame-loading

Flags can be enabled by navigating to chrome://flags in a Chrome browser.

📚 References and materials

👋 In closing

As we embark on welcoming the next billion users to the web, it’s humbling to know that we are only just getting started in understanding the complexity of browsers, connectivity, and user experience.

The post A native lazy load for the web platform appeared first on CSS-Tricks.



from CSS-Tricks https://ift.tt/2Nlb7B7
via IFTTT

Friday, 24 August 2018

Using CSS Clip Path to Create Interactive Effects, Part II

This is a follow up to my previous post looking into clip paths. Last time around, we dug into the fundamentals of clipping and how to get started. We looked at some ideas to exemplify what we can do with clipping. We’re going to take things a step further in this post and look at different examples, discuss alternative techniques, and consider how to approach our work to be cross-browser compatible.

One of the biggest drawbacks of CSS clipping, at the time of writing, is browser support. Not having 100% browser coverage means different experiences for viewers in different browsers. We, as developers, can’t control what browsers support — browser vendors are the ones who implement the spec and different vendors will have different agendas.

One thing we can do to overcome inconsistencies is use alternative technologies. The feature sets of CSS and SVG sometimes overlap. What works in one may work in the other and vice versa. As it happens, the concept of clipping exists in both CSS and SVG. The SVG clipping syntax is quite different, but it works the same. The good thing about SVG clipping compared to CSS is its maturity level. Support is good all the way back to old IE browsers. Most bugs are fixed by now (or at least one hopes they are).

This is what the SVG clipping support looks like:

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop

Chrome Opera Firefox IE Edge Safari
4 9 3 9 12 3.2

Mobile / Tablet

iOS Safari Opera Mobile Opera Mini Android Android Chrome Android Firefox
3.2 10 all 4.4 67 60

Clipping as a transition

A neat use case for clipping is transition effects. Take The Silhouette Slideshow demo on CodePen:

See the Pen Silhouette zoom slideshow by Mikael Ainalem (@ainalem) on CodePen.

A "regular" slideshow cycles through images. Here, to make it a bit more interesting, there's a clipping effect when switching images. The next image enters the screen through a silhouette of the previous image. This creates the illusion that the images are connected to one another, even if they are not.

The transitions follow this process:

  1. Identify the focal point (i.e., main subject) of the image
  2. Create a clipping path for that object
  3. Cut the next image with the path
  4. The cut image (silhouette) fades in
  5. Scale the clipping path until it's bigger than the viewport
  6. Complete the transition to display the next image
  7. Repeat!

Let’s break down the sequence, starting with the first image. We’ll split this up into multiple pens so we can isolate each step.

See the Pen Silhouette zoom slideshow explained I by Mikael Ainalem (@ainalem) on CodePen.

This is the basic structure of the SVG markup:

<svg>
  ...
  <image class="..." xlink:href="..." />
  ...
</svg>

For this image, we then want to create a mask of the focal point — in this case, the person’s silhouette. If you’re unsure how to go about creating a clip, check out my previous article for more details because, generally speaking, making cuts in CSS and SVG is fundamentally the same:

  1. Import an image into the SVG editor
  2. Draw a path around the object
  3. Convert the path to the syntax for SVG clip path. This is what goes in the SVG’s <defs> block.
  4. Paste the SVG markup into the HTML

If you’re handy with the editor, you can do most of the above in the editor. Most editors have good support for masks and clip paths. I like to have more control over the markup, so I usually do at least some of the work by hand. I find there’s a balance between working with an SVG editor vs. working with markup. For example, I like to organize the code, rename the classes and clean up any cruft the editor may have dropped in there.

Mozilla Developer Network does a fine job of documenting SVG clip paths. Here’s a stripped-down version of the markup used by the original demo to give you an idea of how a clip path fits in:

<svg>
  <defs>
    <clipPath id="clip"> <!-- Clipping defined -->
      <path class="clipPath clipPath2" d="..." />
    </clipPath>
  </defs>
  ...
  <path ... clip-path="url(#clip)"/> <!-- Clipping applied -->
</svg>

Let’s use a colored rectangle as a placeholder for the next image in the slideshow. This helps to clearly visualize the part that’s cut out and gives a clearer idea of the shape and its movement.

See the Pen Silhouette zoom slideshow explained II by Mikael Ainalem (@ainalem) on CodePen.

Now that we have the silhouette, let’s have a look at the actual transition. In essence, we’re looking at two parts of the transition that work together to create the effect:

  • First, the mask fades into view.
  • After a brief delay (200ms), the clip path scales up in size.

Note the translate value in the upscaling rule. It’s there to make sure the mask stays in the focal point as things scale up. This is the CSS for those transitions:

.clipPath {
  transition: transform 1200ms 500ms; /* Delayed transform transition */
  transform-origin: 50%;
}

.clipPath.active {
  transform: translateX(-30%) scale(15); /* Upscaling and centering mask */
}

.image {
  transition: opacity 1000ms; /* Fade-in, starts immediately */
  opacity: 0;
}

.image.active {
  opacity: 1;
}

Here’s what we get — an image that transitions to the rectangle!

See the Pen Silhouette zoom slideshow explained III by Mikael Ainalem (@ainalem) on CodePen.

Now let’s replace the rectangle with the next image to complete the transition:

See the Pen Silhouette zoom slideshow explained IV by Mikael Ainalem (@ainalem) on CodePen.

Repeating the above procedure for each image is how we get multiple slides.

The last thing we need is logic to cycle through the images. This is a matter of bookkeeping, determining which is the current image and which is the next, so on and so forth:

remove = (remove + 1) % images.length;
current = (current + 1) % images.length;
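Wrapped up as a tiny helper, that bookkeeping might look like this. This is a minimal sketch, assuming images is the array of slides; the helper name is made up, not taken from the original demo.

```javascript
// Track which slide fades out and which fades in, wrapping around at the end.
function makeCycler(images) {
  let current = 0;
  return function next() {
    const remove = current;                  // the slide fading out
    current = (current + 1) % images.length; // the slide fading in
    return { remove, current };
  };
}

const next = makeCycler(['a.jpg', 'b.jpg', 'c.jpg']);
next(); // { remove: 0, current: 1 }
next(); // { remove: 1, current: 2 }
next(); // { remove: 2, current: 0 } wraps back around to the first slide
```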

Note that this example is not supported by Firefox at the time of writing because it lacks support for scaling clip paths. I hope this is something that will be addressed in the near future.

Clipping to emerge foreground objects into the background

Another interesting use for clipping is revealing and hiding effects. We can create parts of the view where objects are either partly or completely hidden, making for a fun way to have background images interact with foreground content. For instance, we could have objects disappear behind elements in the background image, say a building or a mountain. It becomes even more interesting when we pair that idea up with animation or scrolling effects.

See the Pen Parallax clip by Mikael Ainalem (@ainalem) on CodePen.

This example uses a clipping path to create an effect where text submerges into the photo — specifically, floating behind mountains as a user scrolls down the page. To make it even more interesting, the text moves with a parallax effect. In other words, the different layers move at different speeds to enhance the perspective.

We start with a simple div and define a background image for it in the CSS:

See the Pen Parallax clip Explained I by Mikael Ainalem (@ainalem) on CodePen.

The key part of the photo is the line that separates the foreground layer from the background layers. Basically, we want to split the photo into two parts — a perfect use-case for clipping!

Let’s follow the same process we’ve covered before and cut elements out by following a line. In your photo editor, create a clipping path between those two layers. The way I did it was to draw a path following the line in the photo. To close off the path, I connected the line with the top corners.

Here’s a visual highlighting the background layers in blue:

See the Pen Parallax clip Explained II by Mikael Ainalem (@ainalem) on CodePen.

Any SVG content drawn below the blue area will be partly or completely hidden. This creates an illusion that content disappears behind the hill. For example, here’s a circle that’s drawn on top of the blue background when part of it overlaps with the foreground layer:

See the Pen Parallax clip Explained III by Mikael Ainalem (@ainalem) on CodePen.

Looks kind of like the moon poking out of the mountain top!

All that’s left to recreate my original demo is to change the circle to text and move it when the user scrolls. One way to do that is through a scroll event listener:

// logo and clip are references to the SVG text and clipPath nodes;
// html is document.documentElement
window.addEventListener('scroll', function() {
  logo.setAttribute('transform',`translate(0 ${html.scrollTop / 10 + 5})`);
  clip.setAttribute('transform',`translate(0 -${html.scrollTop / 10 + 5})`);
});

Don’t pay too much attention to the + 5 used when calculating the distance. It’s only there as a sloppy way to offset the element. The important part is the division by 10, which creates the parallax effect: scrolling a certain amount moves the element and the clip path proportionally, but more slowly than the scroll itself. Template literals convert the calculated value to a string that is used as the transform offset on the SVG nodes.
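Pulled out as a pure function, the offset calculation from the listener looks like this. The function name is made up for illustration.

```javascript
// Dividing the scroll distance by `factor` makes the layer move at a tenth
// of the scroll speed; `base` is the same rough positioning offset as in
// the original snippet.
function parallaxOffset(scrollTop, factor = 10, base = 5) {
  return scrollTop / factor + base;
}

parallaxOffset(0);   // 5, the resting position before any scrolling
parallaxOffset(100); // 15, so 100px of scrolling moves the layer only 10px
```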

Combining clipping and masking

Clipping and masking are two interesting concepts. One lets you cut out pieces of content whereas the other lets you do the opposite. Both techniques are useful by themselves, but there is no reason why we can't combine their powers!

When combining clipping and masking, you can split up objects to create different visual effects on different parts. For example:

See the Pen parallax logo blend by Mikael Ainalem (@ainalem) on CodePen.

I created this effect using both clipping and masking on a logo. The text, split into two parts, blends with the background image, which is a beautiful monochromatic image of the Statue of Liberty in New York. I use different colors and opacities on different parts of the text to make it stand out. This creates an interesting visual effect where the text blends in with the background when it overlaps with the statue — a splash of color in an otherwise grey image. There is, besides clipping and masking, a parallax effect here as well. The text moves at a different speed relative to the image when the user hovers or moves (touch) over the image.

To illustrate the behavior, here is what we get when the masked part is stripped out:

See the Pen parallax logo blend Explained I by Mikael Ainalem (@ainalem) on CodePen.

This is actually a neat feature in itself because the text appears to flow behind the statue. That’s a good use of clipping. But, we’re going to mix in some creative masking to let the text blend into the statue.

Here's the same demo, but with the mask applied and the clip disabled:

See the Pen parallax logo blend Explained II by Mikael Ainalem (@ainalem) on CodePen.

Notice how masking combines the text with the statue and uses the statue as the visual bounds for the text. Clipping allows us to display the full text while maintaining that blending. Again, the final result:

See the Pen parallax logo blend by Mikael Ainalem (@ainalem) on CodePen.

Wrapping up

Clipping is a fun way to create interactions and visual effects. It can enhance slide-shows or make objects stand out of images, among other things. Both SVG and CSS provide the ability to apply clip paths and masks to elements, though with different syntaxes. We can pretty much cut any web content nowadays. It is only your imagination that sets the limit.

If you happen to create anything cool with the things we covered here, please share them with me in the comments!

The post Using CSS Clip Path to Create Interactive Effects, Part II appeared first on CSS-Tricks.



from CSS-Tricks https://ift.tt/2BLDvLq
via IFTTT

SEO Negotiation: How to Ace the Business Side of SEO - Whiteboard Friday

Posted by BritneyMuller

SEO isn't all meta tags and content. A huge part of the success you'll see is tied up in the inevitable business negotiations. In this week's Whiteboard Friday, our resident expert Britney Muller walks us through a bevy of smart tips and considerations that will strengthen your SEO negotiation skills, whether you're a seasoned pro or a newbie to the practice.

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Hey, Moz fans. Welcome to another edition of Whiteboard Friday. So today we are going over all things SEO negotiation, so starting to get into some of the business side of SEO. As most of you know, negotiation is all about leverage.

It's what you have to offer and what the other side is looking to gain and leveraging that throughout the process. So something that you can go in and confidently talk about as SEOs is the fact that SEO has around 20% more opportunity than both mobile and desktop PPC combined.

This is a really, really big deal. It's something that you can showcase. These are the stats to back it up. We will also link to the research to this down below. Good to kind of have that in your back pocket. Aside from this, you will obviously have your audit. So potential client, you're looking to get this deal.

Get the most out of the SEO audit

☑ Highlight the opportunities, not the screw-ups

You're going to do an audit, and something that I have always suggested is that instead of highlighting the things that the potential client is doing wrong, or screwed up, is to really highlight those opportunities. Start to get them excited about what it is that their site is capable of and that you could help them with. I think that sheds a really positive light and moves you in the right direction.

☑ Explain their competitive advantage

I think this is really interesting in many spaces where you can sort of say, "Okay, your competitors are here, and you're currently here, and this is why," and to show them proof. That makes them feel as though you have a strong understanding of the landscape and can sort of help them get there.

☑ Emphasize quick wins

I almost didn't put this in here because I think quick wins is sort of a sketchy term. Essentially, you really do want to showcase what it is you can do quickly, but you want to...

☑ Under-promise, over-deliver

You don't want to lose trust or credibility with a potential client by overpromising something that you can't deliver. Get off to the right start. Under-promise, over-deliver.

Smart negotiation tactics

☑ Do your research

Know everything you can about this client. Perhaps what deals they've done in the past, or what agencies they've worked with. You can get all sorts of knowledge about that before going into negotiation that will really help you.

☑ Prioritize your terms

So all too often, people go into a negotiation thinking me, me, me, me, when really you also need to be thinking about, "Well, what am I willing to lose? What can I give up to reach a point that we can both agree on?" Really important to think about as you go in.

☑ Flinch!

This is a very old, funny negotiation tactic where when the other side counters, you flinch. You do this like flinch, and you go, "Oh, is that the best you can do?" It's super silly. It might be used against you, in which case you can just say, "Nice flinch." But it does tend to help you get better deals.

So take that with a grain of salt. But I look forward to your feedback down below. It's so funny.

☑ Use the words "fair" and "comfortable"

The words "fair" and "comfortable" do really well in negotiations. These words are inarguable. You can't argue with fair. "I want to do what is comfortable for us both. I want us both to reach terms that are fair."

You want to use these terms to put the other side at ease and to also help bridge that gap where you can come out with a win-win situation.

☑ Never be the key decision maker

I see this all too often when people go off on their own, and instantly on their business cards and in their head and email they're the CEO.

They are this. You don't have to be that, and you sort of lose leverage when you are. When I owned my agency for six years, I enjoyed not being CEO. I liked having a board of directors that I could reach out to during a negotiation and not being the sole decision maker. Even if you feel that you are the sole decision maker, I know that there are people that care about you and that are looking out for your business that you could contact as sort of a business mentor, and you could use that in negotiation. You can use that to help you. Something to think about.

Tips for negotiation newbies

So for the newbies, a lot of you are probably like, "I can never go on my own. I can never do these things." I'm from northern Minnesota. I have been super awkward about discussing money my whole life for any sort of business deal. If I could do it, I promise any one of you watching this can do it.

☑ Power pose!

I'm not kidding, promise. Some tips that I learned, when I had my agency, was to power pose before negotiations. So there's a great TED talk on this that we can link to down below. I do this before most of my big speaking gigs, thanks to my gramsy who told me to do this at SMX Advanced like three years ago.

Go ahead and power pose. Feel good. Feel confident. Amp yourself up.

☑ Walk the walk

You've got to walk the walk when it comes to some of these things and just feel comfortable in that space.

☑ Good > perfect

Know that good is better than perfect. A lot of us are perfectionists, and we just have to execute good. Trying to be perfect will kill us all.

☑ Screw imposter syndrome

Many of the speakers that I go on different conference circuits with all struggle with this. It's totally normal, but it's good to acknowledge that it's so silly. So to try to take that silly voice out of your head and start to feel good about the things that you are able to offer.

Take inspiration where you can find it

I highly suggest you check out Brian Tracy's old-school negotiation podcasts. He has some old videos. They're so good. But he talks about leverage all the time and has two really great examples that I love so much. One being jade merchants. So these jade merchants would take out pieces of jade and watch people's reactions piece by piece as each was brought out.

So they knew what piece interested this person the most, and that would be the higher price. It was brilliant. Then there are time constraints: he has an example of people doing business deals in China. When they landed, the Chinese would greet them and say, "Oh, can I see your return flight ticket? I just want to know when you're leaving."

They would not make a deal until that last second. The more you know about some of these leverage tactics, the more you can be aware of them if they were to be used against you or if you were to leverage something like that. Super interesting stuff.

Take the time to get to know their business

☑ Tie in ROI

Lastly, just really take the time to get to know someone's business. It just shows that you care, and you're able to prioritize what it is that you can deliver based on where they make the most money off of the products or services that they offer. That helps you tie in the ROI of the things that you can accomplish.

☑ Know the order of products/services that make them the most money

One real quick example was my previous company. We worked with plastic surgeons, and we really worked hard to understand that funnel of how people decide to get any sort of elective procedure. It came down to two things.

It was before and after photos and price. So we knew that we could optimize for those two things and do very well in their space. So showing that you care, going the extra mile, sort of tying all of these things together, I really hope this helps. I look forward to the feedback down below. I know this was a little bit different Whiteboard Friday, but I thought it would be a fun topic to cover.

So thank you so much for joining me on this edition of Whiteboard Friday. I will see you all soon. Bye.

Video transcription by Speechpad.com





from The Moz Blog https://ift.tt/2PxrYSF
via IFTTT

Thursday, 23 August 2018

::before vs :before

Note the double-colon ::before versus the single-colon :before. Which one is correct?

Technically, the correct answer is ::before. But that doesn't mean you should automatically use it.

The situation is that:

  • double-colon selectors are pseudo-elements.
  • single-colon selectors are pseudo-selectors.

::before is definitely a pseudo-element, so it should use the double colon.

The distinction between a pseudo-element and pseudo-selector is already confusing. Fortunately, ::after and ::before are fairly straightforward. They literally add something new to the page, an element.

But something like ::first-letter is also a pseudo-element. The way I reason that out in my brain is that it's selecting a part of something for which there is no existing HTML element. There is no <span> around that first letter you're targeting, so that first letter is almost like a new element you're adding to the page. That differs from pseudo-selectors, which select things that already exist, like the :nth-child(2) or whatever.
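To make the distinction concrete, here's a quick illustrative pair of rules (the selectors and declarations are arbitrary examples):

```css
/* Pseudo-elements (double colon): target something that isn't a real
   node in the HTML, like generated content or a fragment of text */
p::before { content: "→ "; }
p::first-letter { font-size: 2em; }

/* Pseudo-selectors (single colon): select elements that already exist */
li:nth-child(2) { font-weight: bold; }
a:hover { text-decoration: underline; }
```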

Even though ::before is a pseudo-element and a double-colon is the correct way to use pseudo-elements, should you?

There is an argument that perhaps you should use :before, which goes like this:

  1. Internet Explorer 8 and below only supported :before, not ::before
  2. All modern browsers support it both ways, since tons of sites use :before and browsers really value backwards compatibility.
  3. Hey it's one less character as a bonus.

I've heard people say that they have a CSS linter that requires (or automates) them to be single-colon. Personally, I'm OK with people doing that. Seems fine. I'd value consistency over which way you choose to go.

On the flip side, there's an argument for going with ::before that goes like this:

  1. Single-colon pseudo-elements were a mistake. There will never be any more pseudo-elements with a single-colon.
  2. If you have the distinction straight in your mind, might as well train your fingers to do it right.
  3. This is already confusing enough, so let's just follow the correctly specced way.

I've got my linter set up to force me to do double-colons. I don't support Internet Explorer 8 anyway and it feels good to be doing things the "right" way.

The post ::before vs :before appeared first on CSS-Tricks.



from CSS-Tricks https://ift.tt/2MrBcSG
via IFTTT

A Basic WooCommerce Setup to Sell T-Shirts

Tuesday, 21 August 2018

CSS Logical Properties

A property like margin-left seems fairly logical, but as Manuel Rego Casasnovas says:

Imagine that you have some right-to-left (RTL) content on your website, your left might probably be the physical right, so if you are usually setting margin-left: 100px for some elements, you might want to replace that with margin-right: 100px.

Direction, writing mode, and even flexbox all have the power to flip things around and make properties less logical and more difficult to maintain than you'd hope. Now we'll have margin-inline-start for that. The full list is:

  • margin-{block,inline}-{start,end}
  • padding-{block,inline}-{start,end}
  • border-{block,inline}-{start,end}-{width,style,color}

Manuel gets into all the browser support details.

Rachel Andrew also explains the logic:

... these values have moved away from the underlying assumption that content on the web maps to the physical dimensions of the screen, with the first word of a sentence being top left of the box it is in. The order of lines in grid-area makes complete sense if you had never encountered the existing way that we set these values in a shorthand.

Here are the logical properties and how they map to existing properties in a default left-to-right, nothing-else-happening sort of way.

Property Logical Property
margin-top margin-block-start
margin-left margin-inline-start
margin-right margin-inline-end
margin-bottom margin-block-end
Property Logical Property
padding-top padding-block-start
padding-left padding-inline-start
padding-right padding-inline-end
padding-bottom padding-block-end
Property Logical Property
border-top{-width|style|color} border-block-start{-width|style|color}
border-left{-width|style|color} border-inline-start{-width|style|color}
border-right{-width|style|color} border-inline-end{-width|style|color}
border-bottom{-width|style|color} border-block-end{-width|style|color}
Property Logical Property
top offset-block-start
left offset-inline-start
right offset-inline-end
bottom offset-block-end
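As a small sketch of what that buys you (the .card class is made up for the example), one logical rule adapts to the writing direction on its own:

```css
/* In a left-to-right page this pads and borders the left edge; in an
   RTL page (e.g. <html dir="rtl">) the same rule applies to the right. */
.card {
  padding-inline-start: 1rem;
  border-inline-start: 2px solid currentColor;
}
```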

Direct Link to ArticlePermalink

The post CSS Logical Properties appeared first on CSS-Tricks.



from CSS-Tricks https://ift.tt/2LgPO1X
via IFTTT

ABeamer: a frame-by-frame animation framework

“Old Guard”

Someone asked Chris Ferdinandi what his biggest challenge is as a web developer:

... the thing I struggle the most with right now is determining when something new is going to change the way our industry works for the better, and when it’s just a fad that will fade away in a year or three.

I try to avoid jumping from fad to fad, but I also don’t want to be that old guy who misses out on something that’s an important leap forward for us.

He goes on to explain a situation where, as a young buck developer, he was very progressive and even turned down a job where they weren't hip to responsive design. But now he worries that might happen to him:

I’ll never forget that moment, though. Because it was obvious to me that there was an old guard of developers who didn’t get it and couldn’t see the big shift that was coming in our industry.

Now that I’m part of the older guard, and I’ve been doing this a while, I’m always afraid that will happen to me.

I feel that.

I try to lean as new-fancy-progressive as I can to kinda compensate for old-guard-syndrome. I have over a decade of experience building websites professionally, which isn't going to evaporate (although some people feel otherwise). I'm hoping those things balance me out.

Direct Link to ArticlePermalink

The post “Old Guard” appeared first on CSS-Tricks.



from CSS-Tricks https://ift.tt/2MzMvVs
via IFTTT

NEW On-Demand Crawl: Quick Insights for Sales, Prospecting, & Competitive Analysis

Posted by Dr-Pete

In June of 2017, Moz launched our entirely rebuilt Site Crawl, helping you dive deep into crawl issues and technical SEO problems, fix those issues in your Moz Pro Campaigns (tracked websites), and monitor weekly for new issues. Many times, though, you need quick insights outside of a Campaign context, whether you're analyzing a prospect site before a sales call or trying to assess the competition.

For years, Moz had a lab tool called Crawl Test. The bad news is that Crawl Test never made it to prime-time and suffered from some neglect. The good news is that I'm happy to announce the full launch (as of August 2018) of On-Demand Crawl, an entirely new crawl tool built on the engine that powers Site Crawl, but with a UI designed around quick insights for prospecting and competitive analysis.

While you don’t need a Campaign to run a crawl, you do need to be logged into your Moz Pro subscription. If you don’t have a subscription, you can sign up for a free trial and give it a whirl.

How can you put On-Demand Crawl to work? Let's walk through a short example together.


All you need is a domain

Getting started is easy. From the "Moz Pro" menu, find "On-Demand Crawl" under "Research Tools":

Just enter a root domain or subdomain in the box at the top and click the blue button to kick off a crawl. While I don't want to pick on anyone, I've decided to use a real site. Our recent analysis of the August 1st Google update identified some sites that were hit hard, and I've picked one (lilluna.com) from that list.

Please note that Moz is not affiliated with Lil' Luna in any way. For the most part, it seems to be a decent site with reasonably good content. Let's pretend, just for this post, that you're looking to help this site out and determine if they'd be a good fit for your SEO services. You've got a call scheduled and need to spot-check for any major problems so that you can go into that call as informed as possible.

On-Demand Crawls aren't instantaneous (crawling is a big job), but they'll generally finish between a few minutes and an hour. We know these are time-sensitive situations. You'll soon receive an email that looks like this:

The email includes the number of URLs crawled (On-Demand will currently crawl up to 3,000 URLs), the total issues found, and a summary table of crawl issues by category. Click on the [View Report] link to dive into the full crawl data.


Assess critical issues quickly

We've designed On-Demand Crawl to assist your own human intelligence. You'll see some basic stats at the top, but then immediately move into a graph of your top issues by count. The graph only displays issues that occur at least once on your site – you can click "See More" to show all of the issues that On-Demand Crawl tracks (the top two bars have been truncated)...

Issues are also color-coded by category. Some items are warnings, and whether they matter depends a lot on context. Other issues, like "Critical Errors" (in red), almost always demand attention. So, let's check out those 404 errors. Scroll down and you'll see a list of "Pages Crawled" with filters. You're going to select "4xx" in the "Status Codes" dropdown...

You can then pretty easily spot-check these URLs and find out that they do, in fact, seem to be returning 404 errors. Some appear to be legitimate content that has either internal or external links (or both). So, within a few minutes, you've already found something useful.

Let's look at those yellow "Meta Noindex" errors next. This is a tricky one, because you can't easily determine intent. An intentional Meta Noindex may be fine. An unintentional one (or hundreds of unintentional ones) could be blocking crawlers and causing serious harm. Here, you'll filter by issue type...

Like the top graph, issues appear in order of prevalence. You can also filter by all pages that have issues (any issues) or pages that have no issues. Here's a sample of what you get back (the full table also includes status code, issue count, and an option to view all issues)...

Notice the "?s=" common to all of these URLs. Clicking on a few, you can see that these are internal search pages. These URLs have no particular SEO value, and the Meta Noindex is likely intentional. Good technical SEO is also about avoiding false alarms because you lack internal knowledge of a site. On-Demand Crawl helps you semi-automate and summarize insights to put your human intelligence to work quickly.
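That kind of triage is easy to semi-automate once you have a list of noindexed URLs. The sketch below splits them into "likely intentional" (internal search pages and similar) versus "needs review" — the URL patterns here are just illustrative examples, not anything built into the Moz export:

```python
import re

# Patterns that usually indicate intentional noindexing, e.g. internal
# search results. Extend this list based on what you know about the site.
LIKELY_INTENTIONAL = [
    re.compile(r"[?&]s="),      # WordPress-style internal search
    re.compile(r"/search/"),    # dedicated search paths
]

def triage_noindex(urls):
    """Split noindexed URLs into likely-intentional vs. needs-review buckets."""
    intentional, review = [], []
    for url in urls:
        if any(p.search(url) for p in LIKELY_INTENTIONAL):
            intentional.append(url)
        else:
            review.append(url)
    return intentional, review
```

Anything left in the review bucket is worth raising with the site owner, since an unintentional noindex is exactly the kind of silent, serious problem this tool is meant to surface.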


Dive deeper with exports

Let's go back to those 404s. Ideally, you'd like to know where those URLs are showing up. We can't fit everything into one screen, but if you scroll up to the "All Issues" graph you'll see an "Export CSV" option...

The export will honor any filters set in the page list, so let's re-apply that "4xx" filter and pull the data. Your export should download almost immediately. The full export contains a wealth of information, but I've zeroed in on just what's critical for this particular case...

Now, you know not only what pages are missing, but exactly where they link from internally, and can easily pass along suggested fixes to the customer or prospect. Some of these turn out to be link-heavy pages that could probably benefit from some clean-up or updating (if newer recipes are a good fit).

Let's try another one. You've got 8 duplicate content errors. Potentially thin content could fit theories about the August 1st update, so this is worth digging into. If you filter by "Duplicate Content" issues, you'll see the following message...

The 8 duplicate issues actually represent 18 pages, and the table returns all 18 affected pages. In some cases, the duplicates will be obvious from the title and/or URL, but in this case there's a bit of mystery, so let's pull that export file. In this case, there's a column called "Duplicate Content Group," and sorting by it reveals something like the following (there's a lot more data in the original export file)...

I've renamed "Duplicate Content Group" to just "Group" and included the word count ("Words"), which could be useful for verifying true duplicates. Look at group #7 – it turns out that these "Weekly Menu Plan" pages are very image heavy and have a common block of text before any unique text. While not 100% duplicated, these otherwise valuable pages could easily look like thin content to Google and represent a broader problem.


Real insights in real-time

Not counting the time spent writing the blog post, running this crawl and diving in took less than an hour, and even that small amount of time uncovered more potential issues than I could cover in this post. In less than an hour, you can walk into a client meeting or sales call with in-depth knowledge of any domain.

Keep in mind that many of these features also exist in our Site Crawl tool. If you're looking for long-term, campaign insights, use Site Crawl (if you just need to update your data, use our "Recrawl" feature). If you're looking for quick, one-time insights, check out On-Demand Crawl. Standard Pro users currently get 5 On-Demand Crawls per month (with limits increasing at higher tiers).

Your On-Demand Crawls are currently stored for 90 days. When you re-enter the feature, you'll see a table of all of your recent crawls (the image below has been truncated):

Click on any row to go back to see the crawl data for that domain. If you get the sale and decide to move forward, congratulations! You can port that domain directly into a Moz campaign.

We hope you'll try On-Demand Crawl out and let us know what you think. We'd love to hear your case studies, whether it's sales, competitive analysis, or just trying to solve the mysteries of a Google update.


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!



from The Moz Blog https://ift.tt/2OTfW53
via IFTTT

Passkeys: What the Heck and Why?

These things called  passkeys  sure are making the rounds these days. They were a main attraction at  W3C TPAC 2022 , gained support in  Saf...