Tuesday 31 October 2017

A Reasonable Approach for Getting Comfortable With Command Line

Considering how integral the command line is to a developer's workflow, it should not be thought of as overly difficult or tedious to learn.

At one time I avoided it myself, but one day I began teaching myself ways to make the difficult as easy as it should be. I got over the hurdle, and you can too. Investing the time to increase my command line comfort level was worth it, and in this post I am going to share a few tips and resources that I found helpful.

The intended reader is the type who typically avoids the command line, or perhaps uses it occasionally but not as a regular or essential tool.

Tip #1: Maintain a Pragmatic Mindset

The way to get more comfortable with the command line is this: practice. You practice, and you get better. There is no secret trick to this; study and repetition of skills will turn into understanding and mastery. There is no use in the mindset that you cannot do this; it will only keep you from your goal. You may as well discard such thoughts and get down to it.

Tip #2: Keep a Cheat Sheet

Don't be afraid to keep a cheat sheet. I find that a thin, spiral-bound notebook kept next to my keyboard is perfect; writing the command down helps commit it to memory; having it in a place where I can refer to it while I am typing is convenient to the process. Do not permit yourself merely to copy and paste; you will not learn this way. Until you know the command, make yourself type it out.

Tip #3: Peruse Languages Outside of the One(s) You Normally Use

  1. Spend time looking at commands in various languages, even if you don't immediately absorb, use, or remember them. It is worth investing a bit of time regularly to look at these commands; patterns will eventually emerge. Some of them may even come back to you at an unexpected time and give you an extra eureka moment!
  2. Skimming through books with lots of CLI commands can prove genuinely useful for recognizing patterns in commands. I even take this one step further by getting my favorites spiral-bound. I am a big fan of spiral binding; a place like FedEx offers coil binding services at a surprisingly low cost.

Tip #4: Practice... safely

When I am advising someone who is new to contributing to open source, they are inevitably a bit nervous about it. I think this is perfectly natural, if only to comfort myself that my own nervousness about it was perfectly natural. A good way to practice, though, is to set up your own repository for a project and regularly commit to it. Simply using common Git commands in a terminal window to commit inconsequential changes to a project of your own will establish the "muscle memory," so that when it does come time to commit code of consequence, you won't be held back by nervousness about the commands themselves.

These are the commands I have noticed are most commonly used in the practical day-to-day of development. It's perfectly reasonable to expect yourself to learn these and to be able to do any of them without a second thought. Do not lean on a GUI tool (they make weird merge choices); learn how to write these commands yourself.

  • Check status
  • Create a new branch and switch to it
  • Add files
    • Add all the changes
    • Just add one of the changes
  • Commit
  • Push to a remote branch
  • Get a list of your branches
  • Checkout a branch
  • Delete a branch
  • Delete a branch even if there are changes
  • Fetch and merge the changes to a branch
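To make that list concrete, here is one way those commands look in practice. The branch and file names are placeholders of my own choosing, and the scratch repository exists only so you can run everything safely:

```shell
# Set up a scratch repository to practice in -- safe to delete afterwards.
mkdir git-practice && cd git-practice
git init
git config user.name "Practice"              # identity for this repo only
git config user.email "practice@example.com"
echo "hello" > notes.txt

git status                     # check status: notes.txt shows as untracked
git add .                      # add all the changes (git add notes.txt adds just one)
git commit -m "Add notes"      # commit
git checkout -b my-feature     # create a new branch and switch to it
echo "more" >> notes.txt
git add notes.txt              # just add one of the changes
git commit -m "Update notes"
git branch                     # get a list of your branches (* marks the current one)
git checkout -                 # check out the previous branch
git branch -D my-feature       # delete a branch even with unmerged changes (-d refuses)

# With a remote named "origin" configured, you would also use:
#   git push origin my-feature     # push to a remote branch
#   git pull origin master         # fetch and merge the changes to a branch
```

Typing these out in a throwaway repository like this, rather than copying and pasting, is exactly the kind of cheap repetition that builds the muscle memory.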

Syncing a fork took me longer to learn, since I don't often spend my work hours writing code for a repository that I don't have access to. While contributing to open source software, however, I had to learn how to do this. The GitHub article about the topic is sufficient; even now I still have it bookmarked.
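For reference, that workflow boils down to three commands: register the original repository as a remote named `upstream`, fetch it, and merge. The sketch below uses a local directory as a stand-in for the original repository, since in real life that would be a GitHub URL (the `ORIGINAL_OWNER/ORIGINAL_REPO` names are placeholders):

```shell
# --- Setup: a local directory standing in for the repository you forked.
# --- In real life this is a URL like https://github.com/ORIGINAL_OWNER/ORIGINAL_REPO.git
git -c init.defaultBranch=master init --quiet upstream-repo
git -C upstream-repo -c user.name=Practice -c user.email=practice@example.com \
    commit --allow-empty -m "Upstream work"
git clone --quiet upstream-repo my-fork        # stands in for cloning your fork
cd my-fork
git -C ../upstream-repo -c user.name=Practice -c user.email=practice@example.com \
    commit --allow-empty -m "More upstream work"   # upstream moves ahead of you

# --- The fork-syncing workflow itself:
git remote add upstream ../upstream-repo       # one-time: register the original repo
git fetch upstream                             # grab the upstream branches
git merge upstream/master                      # merge its default branch into yours
```

On a real fork you would replace `../upstream-repo` with the original repository's URL; the add-remote, fetch, merge sequence is the same.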

Tip #5: Level Up!

I really enjoy using DigitalOcean to level up my skills. Their step-by-step guides are really useful, and at $5 USD per month, "Droplets" are a cost-effective way to practice on a real server.

Here's a suggested self-learning path (though feel free to choose your own adventure; there are over 1,700 tutorials in the DigitalOcean community):

  1. Create a Droplet with Ghost pre-installed. There is a little command line work to finalize the installation, which makes it a good candidate. It's not completely done for you but there's not so much to do that it's overwhelming. There's even an excellent tutorial already written by Melissa Anderson.
  2. Set up a GitHub repo and work on a little theming for Ghost, making small changes and practicing your command line work.

It would be remiss of me to write any guide without mentioning Ember, as the ember-cli is undoubtedly one of the strongest there is. Feel free to head over to the docs and read that list!

Conclusion

There may be some who find this brief guide too simplistic. However, as Silvanus Thompson famously said in Calculus Made Easy: "What one fool can do, other fools can do also." Don't let anyone make you think that using the command line is horribly difficult, or that they are a genius because they can use it. With practice, you will be able to do it too, and it will soon be a simple thing.


A Reasonable Approach for Getting Comfortable With Command Line is a post from CSS-Tricks



from CSS-Tricks http://ift.tt/2z3Aq6W
via IFTTT

SmashingConf 2018: Fetch Those Early-Bird Tickets! πŸ‡¬πŸ‡§ πŸ‡ΊπŸ‡Έ πŸ‡¨πŸ‡¦



Great conferences are all about learning new skills and making new connections. That’s why we’ve set up a couple of new adventures for SmashingConf 2018 — just practical sessions, new formats, new lightning talks, evening sessions and genuine, interesting conversations — with a dash of friendly networking! Taking place in London, San Francisco, and Toronto. Tickets? Glad you asked!

A queen cat welcoming you to a Smashing Conference in London, February 7–8, 2018
SmashingConf London: everything web performance. Feb 7–8.

Performance matters. Next year, we’re thrilled to venture to London for our brand new conference fully dedicated to everything front-end performance. Dealing with ads, third-party scripts, A/B testing, HTTP/2, debugging, JAM stack, PWA, web fonts loading, memory/CPU perf, service workers. Plus lightning community talks.

The post SmashingConf 2018: Fetch Those Early-Bird Tickets! πŸ‡¬πŸ‡§ πŸ‡ΊπŸ‡Έ πŸ‡¨πŸ‡¦ appeared first on Smashing Magazine.



from Smashing Magazine http://ift.tt/2gQD5pW
via IFTTT

Unfiltered: How to Show Up in Local Search Results

Posted by sherrybonelli

If you're having trouble getting your local business' website to show up in the Google local 3-pack or local search results in general, you're not alone. The first page of Google's search results seems to have gotten smaller over the years – the top and bottom of the page are often filled with ads, the local 7-pack was trimmed to a slim 3-pack, and online directories often take up the rest of page one. There is very little room for small local businesses to rank on the first page of Google.

To make matters worse, Google has a local "filter" that can strike a business, causing their listing to drop out of local search results for seemingly no reason – often, literally, overnight. Google's local filter has been around for a while, but it became more noticeable after the Possum algorithm update, which began filtering out even more businesses from local search results.

If you think about it, this filter is not much different than websites ranking organically in search results: In an ideal world, the best sites win the top spots. However, the Google filter can have a significantly negative impact on local businesses that often rely on showing up in local search results to get customers to their doors.

What causes a business to get filtered?

Just like the multitude of factors that go into ranking high organically, there are a variety of factors that go into ranking in the local 3-pack and the Local Finder.


Here are a few situations that might cause you to get filtered and what you can do if that happens.

Proximity matters

With mobile search becoming more and more popular, Google takes into consideration where the mobile searcher is physically located when they're performing a search. This means that local search results can also depend on where the business is physically located relative to the searcher when the search is being done.

A few years ago, if your business wasn't located in the large city in your area, you were at a significant disadvantage. It was difficult to rank when someone searched for "business category + large city" – simply because your business wasn't physically located in the "large city." Things have changed slightly in your favor – which is great for all the businesses who have a physical address in the suburbs.

According to Ben Fisher, Co-Founder of SteadyDemand.com and a Google Top Contributor, "Proximity and Google My Business data play an important role in the Possum filter. Before the Hawk Update, this was exaggerated and now the radius has been greatly reduced." This means there's hope for you to show up in the local search results – even if your business isn't located in a big city.

Google My Business categories

When you're selecting a Google My Business category for your listing, select the most specific category that's appropriate for your business.

However, if you see a competitor outranking you, find out what category they are using and select the same category for your business (but only if it makes sense). Then look at all the other things they are doing online to increase their organic ranking, and emulate and outdo them.

If your category selections don't work, it's possible you've selected too many categories. Too many categories can confuse Google to the point where it's not sure what your company's specialty is. Try deleting some of the less-specific categories and see if that helps you show up.

Your physical address

If you can help it, don't have the same physical address as your competitors. Yes, this means if you're located in an office building (or worse, a "virtual office" or a UPS Store address) and competing companies are also in your building, your listing may not show up in local search results.

When it comes to sharing an address with a competitor, Ben Fisher recommends, "Ensure that you do not have the same primary category as your competitor if you are in the same building. Their listing may have more trust by Google and you would have a higher chance of being filtered."

Also, many people think that simply adding a suite number to your address will differentiate your address enough from a competitor at the same location — it won't. This is one of the biggest myths in local SEO. According to Fisher, "Google doesn't factor in suite numbers."

Additionally, if competing businesses are located physically close to you, that, too, can impact whether you show up in local search results. So if you have a competitor a block or two down from your company, that can lead to one of you being filtered.

Practitioners

If you're a doctor, attorney, accountant or are in some other industry with multiple professionals working in the same office location, Google may filter out some of your practitioners' listings. Why? Google doesn't want one business dominating the first page of Google local search results. This means that all of the practitioners in your company are essentially competing with one another.

To offset this, each practitioner's Google My Business listing should have a different category (if possible) and should be directed to different URLs (either a page about the practitioner or a page about the specialty – they should not all point to the site's home page).

For instance, at a medical practice, one doctor could select the family practice category and another the pediatrician category. Ideally, you would want each doctor's landing page to reflect their category, too.

Another thing you can do to differentiate the practitioners and help curtail being filtered is to have unique local phone numbers for each of them.

Evaluate what your competitors are doing right

If your listing is getting filtered out, look at the businesses that are being displayed and see what they're doing right on Google Maps, Google+, Google My Business, on-site, off-site, and in any other areas you can think of. If possible, do an SEO site audit on their site to see what they're doing right that perhaps you should do to overtake them in the rankings.

When you're evaluating your competition, make sure you focus on the signals that help sites rank organically. Do they have a better Google+ description? Is their GMB listing completely filled out but yours is missing some information? Do they have more 5-star reviews? Do they have more backlinks? What is their business category? Start doing what they're doing – only better.

In general, Google wants to show the best businesses first. Compete toe-to-toe with the competitors that rank higher than you, with the goal of eventually taking over their highly coveted spot.

Other factors that can help you show up in local search results

As mentioned earlier, Google considers a variety of data points when it determines which local listings to display in search results and which ones to filter out. Here are a few other signals to pay attention to when optimizing for local search results:

Reviews

If everything else is equal, do you have more 5-star reviews than your competition? If so, you will probably show up in the local search results instead of your competitors. Google is one of the few review sites that encourages businesses to proactively ask customers to leave reviews. Take that as a clue to ask customers to give you great reviews not only on your Google My Business listing but also on third-party review sites like Facebook, Yelp, and others.

Posts

Are you interacting with your visitors by offering something special to those who see your business listing? Engaging with your potential customers by creating a Post lets Google know that you are paying attention and giving its users a special deal. Having more "transactions and interactions" with your potential customers is a good metric and can help you show up in local search results.

Google+

Despite what the critics say, Google+ is not dead. Whenever you make a Facebook or Twitter post, go ahead and post to Google+, too. Write semantic posts that are relevant to your business and relevant to your potential customers. Try to write Google+ posts that are approximately 300 words in length and be sure to keyword optimize the first 100 words of each post. You can often see some minor increases in rankings due to well-optimized Google+ posts, properly optimized Collections, and an engaged audience.

Here's one important thing to keep in mind: Google+ is not the place to post content just to try and rank higher in local search. (That's called spam and that is a no-no.) Ensure that any post you make to Google+ is valuable to your end-users.

Keep your Google My Business listing current

Adding photos, updating your business hours for holidays, utilizing the Q&A or booking features, etc. can help you show up in rankings. However, don't add content just to try and rank higher. (Your Google My Business listing is not the place for spammy content.) Make sure the content you add to your GMB listing is both timely and high-quality. By updating and adding content, Google knows that your information is likely accurate and that your business is engaged. Speaking of which...

Be engaged

Interacting with your customers online is not only beneficial for customer relations, but it can also be a signal to Google that can positively impact your local search ranking results. David Mihm, founder of Tidings, feels that by 2020, the difference-making local ranking factor will be engagement.


(Source: The Difference-Making Local Ranking Factor of 2020)

According to Mihm, "Engagement is simply a much more accurate signal of the quality of local businesses than the traditional ranking factors of links, directory citations, and even reviews." This means you need to start preparing now and begin interacting with potential customers: use GMB's Q&A and booking features, instant messaging, and Google+ posts; respond to Google and third-party reviews; and ensure your website's phone number is "click-to-call" enabled.

Consolidate any duplicate listings

Some business owners go overboard and create multiple Google My Business listings with the thought that more has to be better. This is one instance where having more can actually hurt you. If you discover that for whatever reason your business has more than one GMB listing, it's important that you properly consolidate your listings into one.

Other sources linking to your website

If verified data sources, like the Better Business Bureau, professional organizations and associations, chambers of commerce, online directories, etc. link to your website, that can have an impact on whether or not you show up on Google's radar. Make sure that your business is listed on as many high-quality and authoritative online directories as possible – and ensure that the information about your business – especially your company's Name, Address, and Phone Number (NAP) – is consistent and accurate.

So there you have it! Hopefully you found some ideas on what to do if your listing is being filtered out of Google's local search results.

What are some tips that you have for keeping your business "unfiltered"?


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!



from The Moz Blog http://ift.tt/2yguKqs
via IFTTT

Monday 30 October 2017

Make Like it Matters

(This is a sponsored post.)

Our sponsor Media Temple is holding a contest to give away a bunch of stuff, including a nice big monitor and gift cards. Entering is easy: you just drop them an image or URL of a project you're proud of. Do it quickly, though, as entries close on Tuesday. Then the top 20 will be publicly voted on. US residents only.

Direct Link to ArticlePermalink





from CSS-Tricks http://ift.tt/2xk3W3Z
via IFTTT

Emulating CSS Timing Functions with JavaScript

Variable Fonts from Adobe Originals

Sunday 29 October 2017

WordPress + PWAs

One of the sessions from the Chrome Dev Summit, hosted by Das Surma and Daniel Walmsley. It's not so much about WordPress as it is about CMS powered sites that aren't really "apps", if there is such a thing, and the possibility of turning that site into a Progressive Web AppSite.

I find the CMS + PWA combo interesting because:

  • If you aren't stoked about AMP, and let's face it, a lot of people are not stoked about AMP, but do like the idea of a super fast website, a PWA is likely of high interest. Whereas AMP feels like you're making an alternate version of your site, PWAs feel like you're making the website you have much better.
  • Some PWA work is generic and easy-ish (use HTTPS) and some PWA work is bespoke and hard (make the site work offline). For lack of a better way to explain it, CMSs know about themselves in such a way that they can provide tooling to make PWAs way easier. For example, Jetpack just doing it for you. It's the same kind of thing we saw with responsive images. It's not trivial to handle by hand, but a CMS can just do it for you.

If this topic doesn't trip your trigger, there is a playlist of all the sessions here. Dan Fabulich watched all 10 hours of it and summarizes it as:

Google wants you to build PWAs, reduce JavaScript file size, use Web Components, and configure autofill. They announced only a handful of features around payments, authentication, Android Trusted Web Activities, Chrome Dev Tools, and the Chrome User Experience Report.

Direct Link to ArticlePermalink





from CSS-Tricks https://www.youtube.com/watch?v=Di7RvMlk9io
via IFTTT

Sketching Interfaces

From the same team that worked on the incredibly wild idea of using React to make Sketch documents comes an even wilder idea:

Sketching seemed like the natural place to start. As interface designers, sketching is an intuitive method of expressing a concept. We wanted to see how it might look to skip a few steps in the product development lifecycle and instantly translate our sketches into a finished product.

In other words, a camera looks at the sketches, figures out what design patterns are being insinuated, and renders them in a browser.

I wouldn't doubt design tooling gets this sophisticated in coming years. Mostly I think: if your design team is this forward thinking and experimental, you've done a fantastic job putting a team together. Hopefully you can keep them happy designing travel websites, or somehow pivot to design tooling itself.

Direct Link to ArticlePermalink





from CSS-Tricks http://ift.tt/2iuMqaJ
via IFTTT

Friday 27 October 2017

Getting Around a Revoked Certificate in OSX

Houdini Experiments

These experiments by Vincent De Oliveira are just the kind of thing Houdini needs to get developers interested. Or maybe I should say designers, as these demos are particularly good at demonstrating how allowing low-level painting to the screen unlocks just about anything you want it to.

Direct Link to ArticlePermalink





from CSS-Tricks http://ift.tt/2i9g9Ci
via IFTTT

How to Use the "Keywords by Site" Data in Tools (Moz, SEMrush, Ahrefs, etc.) to Improve Your Keyword Research and Targeting - Whiteboard Friday

Posted by randfish

One of the most helpful functions of modern-day SEO software is the idea of a "keyword universe," a database of tens of millions of keywords that you can tap into and discover what your site is ranking for. Rankings data like this can be powerful, and having that kind of power at your fingertips can be intimidating. In today's Whiteboard Friday, Rand explains the concept of the "keyword universe" and shares his most useful tips to take advantage of this data in the most popular SEO tools.

How to use keywords by site

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we're going to chat about the Keywords by Site feature that exists now in Moz's toolset — we just launched it this week — and SEMrush and Ahrefs, who have had it for a little while, and there are some other tools out there that also do it, so places like KeyCompete and SpyFu and others.

In SEO software, there are two types of rankings data:

A) Keywords you've specifically chosen to track over time

Basically, the way you can think of this is, in SEO software, there are two kinds of keyword rankings data. There are keywords that you have specifically selected or your marketing manager or your SEO has specifically selected to track over time. So I've said I want to track X, Y and Z. I want to see how they rank in Google's results, maybe in a particular location or a particular country. I want to see the position, and I want to see the change over time. Great, that's your set that you've constructed and built and chosen.

B) A keyword "universe" that gives wide coverage of tens of millions of keywords

But then there's what's called a keyword universe, an entire universe of keywords that's maintained by a tool provider. So SEMrush has their particular database, their universe of keywords for a bunch of different languages, and Ahrefs has their keyword universe of keywords that each of those two companies have selected. Moz now has its keyword universe, a universe of, I think in our case, about 40 million keywords in English in the US that we track every two weeks, so we'll basically get rankings updates. SEMrush tracks their keywords monthly. I think Ahrefs also does monthly.

Depending on the degree of change, you might care or not care about the various updates. Usually, for keywords you've specifically chosen, it's every week. But in these cases, because it's tens of millions or hundreds of millions of keywords, they're usually updating them every couple of weeks or monthly.

So in this universe of keywords, you might only rank for some of them. It's not ones you've specifically selected. It's ones the tool provider has said, "Hey, this is a broad representation of all the keywords that we could find that have some real search volume that people might be interested in who's ranking in Google, and we're going to track this giant database." So you might see some of these your site ranks for. In this case, seven of these keywords your site ranks for, four of them your competitors rank for, and two of them both you and your competitors rank for.

Remarkable data can be extracted from a "keyword universe"

There's a bunch of cool data, very, very cool data that can be extracted from a keyword universe. Most of these tools that I mentioned do this.

Number of ranking keywords over time

So they'll show you how many keywords a given site ranks for over time. So you can see, oh, Moz.com is growing its presence in the keyword universe, or it's shrinking. Maybe it's ranking for fewer keywords this month than it was last month, which might be a telltale sign of something going wrong or poorly.

Degree of rankings overlap

You can see the degree of overlap between several websites' keyword rankings. So, for example, I can see here that Moz and Search Engine Land overlap here with all these keywords. In fact, in the Keywords by Site tool inside Moz and in SEMrush, you can see what those numbers look like. I think Moz actually visualizes it with a Venn diagram. Here's Distilled.net. They're a smaller website. They have less content. So it's no surprise that they overlap with both. There's some overlap with all three. I could see keywords that all three of them rank for, and I could see ones that only Distilled.net ranks for.

Estimated traffic from organic search

You can also grab estimated traffic. So you would be able to extract out — Moz does not offer this, but SEMrush does — you could see, given a keyword list and ranking positions and an estimated volume and estimated click-through rate, you could say we're going to guess, we're going to estimate that this site gets this much traffic from search. You can see lots of folks doing this and showing, "Hey, it looks like this site is growing its visits from search and this site is not." SISTRIX does this in Europe really nicely, and they have some great blog posts about it.

Most prominent sites for a given set of keywords

You can also extract out the most prominent sites given a set of keywords. So if you say, "Hey, here are a thousand keywords. Tell me who shows up most in this thousand-keyword set around the world of vegetarian recipes." The tool could extract out, "Okay, here's the small segment. Here's the galaxy of vegetarian recipe keywords in our giant keyword universe, and this is the set of sites that are most prominent in that particular vertical, in that little galaxy."

Recommended applications for SEOs and marketers

So some recommended applications, things that I think every SEO should probably be doing with this data. There are many, many more. I'm sure we can talk about them in the comments.

1. Identify important keywords by seeing what you rank for in the keyword universe

First and foremost, identify keywords that you probably should be tracking, that should be part of your reporting. It will make you look good, and it will also help you keep tabs on important keywords where if you lost rankings for them, you might cost yourself a lot of traffic.

Monthly granularity might not be good enough. You might want to say, "Hey, no, I want to track these keywords every week. I want to get reporting on them. I want to see which page is ranking. I want to see how I rank by geo. So I'm going to include them in my specific rank tracking features." You can do that in the Moz Keywords by Site, you'd go to Keyword Explorer, you'd select the root domain instead of the keyword, and you'd plug in your website, which maybe is Indie Hackers, a site that I've been reading a lot of lately and I like a lot.

You could see, "Oh, cool. I'm not tracking stock trading bot or ark servers, but those actually get some nice traffic. In this case, I'm ranking number 12. That's real close to page one. If I put in a little more effort on my ark servers page, maybe I could be on page one and I could be getting some of that sweet traffic, 4,000 to 6,000 searches a month. That's really significant." So great way to find additional keywords you should be adding to your tracking.

2. Discover potential keywords targets that your competitors rank for (but you don't)

Second, you can discover some new potential keyword targets when you're doing keyword research based on the queries your competition ranks for that you don't. So, in this case, I might plug in "First Round." First Round Capital has a great content play that they've been doing for many years. Indie Hackers might say, "Gosh, there's a lot of stuff that startups and tech founders are interested in that First Round writes about. Let me see what keywords they're ranking for that I'm not ranking for."

So you plug in those two to Moz's tool or other tools. You could see, "Aha, I'm right. Look at that. They're ranking for about 4,500 more keywords than I am." Then I could go get that full list, and I could sort it by volume and by difficulty. Then I could choose, okay, these keywords all look good, check, check, check. Add them to my list in Keyword Explorer or Excel or Google Docs if you're using those and go to work.

3. Explore keywords sets from large, content-focused media sites with similar audiences

Then the third one is you can explore keyword sets. I'm going to urge you to. I don't think this is something that many people do, but I think that it really should be, which is to look outside of your little galaxy of yourself and your competitors, direct competitors, to large content players that serve your audience.

So in this case, I might say, "Gosh, I'm Indie Hackers. I'm really competing maybe more directly with First Round. But you know what? HBR, Harvard Business Review, writes about a lot of stuff that my audience reads. I see people on Twitter that are in my audience share it a lot. I see people in our forums discussing it and linking out to their articles. Let me go see what they are doing in the content world."

In fact, when you look at the Venn diagram, which I just did in the Keywords by Site tool, I can see, "Oh my god, look there's almost no overlap, and there's this huge opportunity." So I might take HBR and I might click to see all their keywords and then start looking through and sort, again, probably by volume and maybe with a difficulty filter and say, "Which ones do I think I could create content around? Which ones do they have really old content that they haven't updated since 2010 or 2011?" Those types of content opportunities can be a golden chance for you to find an audience that is likely to be the right types of customers for your business. That's a pretty exciting thing.

So, in addition to these, there's a ton of other uses. I'm sure over the next few months we'll be talking more about them here on Whiteboard Friday and here on the Moz blog. But for now, I would love to hear your uses for tools like SEMrush and the Ahrefs keyword universe feature and Moz's keyword universe feature, which is called Keywords by Site. Hopefully, we'll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!



from The Moz Blog http://ift.tt/2zRX67A
via IFTTT

Thursday 26 October 2017

Quick Wins For Improving Performance And Security Of Your Website

The Output Element

Last night I was rooting around in the cellars of a particularly large codebase and stumbled upon our normalize.css which makes sure that all of our markup renders in a similar way across different browsers. I gave it a quick skim and found styles for a rather peculiar element called <output> that I'd never seen or even heard of before.

According to MDN, it "represents the result of a calculation or user action" typically used in forms. And rather embarrassingly for me, it isn't a new and fancy addition to the spec since Chris used it in a post all the way back in 2011.

But regardless! What does output do and how do we use it? Well, let's say we have an input with a type of range. Then we add an output element and correlate it to the input with its for attribute.

<input type="range" name="quantity" id="quantity" min="0" max="100">
<output for="quantity"></output>

See the Pen Input Output #2 by CSS-Tricks (@css-tricks) on CodePen.

It... doesn't really do anything. By default, output doesn't have any styles and doesn't render a box or anything in the browser. Also, nothing happens when we change the value of our input.

We'll have to tie everything together with JavaScript. No problem! First we need to find our input in the DOM with JavaScript, like so:

const rangeInput = document.querySelector('input');

Now we can append an event listener onto it so that whenever we edit the value (by sliding left or right on our input) we can detect a change:

const rangeInput = document.querySelector('input');

rangeInput.addEventListener('change', function() {
  console.log(this.value);
});

this.value will always refer to the value of the rangeInput because we're using it inside our event handler, and we can log that value to the console to make sure everything works. After that, we can find our output element in the DOM:

const rangeInput = document.querySelector('input');
const output = document.querySelector('output');

rangeInput.addEventListener('change', function() {
  console.log(this.value);
});

And then we edit our event listener to set the value of that output to change whenever we edit the value of the input:

const rangeInput = document.querySelector('input');
const output = document.querySelector('output');

rangeInput.addEventListener('change', function() {
  output.value = this.value;
});

And voilà! There we have it, well mostly anyway. Once you change the value of the input our output will now reflect that:

See the Pen Input Output #3 by Robin Rendle (@robinrendle) on CodePen.

We should probably improve this a bit by setting a default value for our output so that it's visible as soon as you load the page. We could do that in the HTML itself and set the value inside the output:

<output for="quantity">50</output>

But I reckon that's not particularly bulletproof. What happens when we want to change the min or max of our input? We'd always have to change our output, too. Let's set the state of our output in our script. Here's a new function called setDefaultState:

function setDefaultState() {
  output.value = rangeInput.value;
}

Then we fire that function once the DOM has finished loading:

document.addEventListener('DOMContentLoaded', function(){
  setDefaultState();
});

See the Pen Input Output #4 by Robin Rendle (@robinrendle) on CodePen.

Now we can style everything! But there's one more thing. The event listener change is great and all but it doesn't update the text immediately as you swipe left or right. Thankfully there's a new type of event listener called input with fairly decent browser support that we can use instead. Here's all our code with that addition in place:

const rangeInput = document.querySelector('input');
const output = document.querySelector('output');

function setDefaultState() {
  output.value = rangeInput.value;
}

rangeInput.addEventListener('input', function() {
  output.value = this.value;
});

document.addEventListener('DOMContentLoaded', function() {
  setDefaultState();
});

See the Pen Input Output #5 by Robin Rendle (@robinrendle) on CodePen.

And there we have it! An input, with an output.
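As an aside, for a simple case like this you can skip the separate script entirely: inside an inline `oninput` handler on a form, named form controls are in scope, so the wiring can live in the markup itself. A sketch of that variant, reusing the element names from above:

```html
<form oninput="result.value = quantity.value">
  <input type="range" name="quantity" id="quantity" min="0" max="100" value="50">
  <!-- the named output picks up the slider's value on every input event -->
  <output name="result" for="quantity">50</output>
</form>
```

It's less flexible than the scripted version (no place to format or transform the value), but handy for quick demos.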


The Output Element is a post from CSS-Tricks



from CSS-Tricks http://ift.tt/2z8CPgZ
via IFTTT


Wednesday 25 October 2017

Code Review Etiquette

Code reviews are a big part of writing software, especially when working within a team. It is important to have an agreed-upon etiquette for reviewing code within a team. A code review is a critique and a critique can often feel more personal than the code writing itself. A sloppy, under-researched, or insensitive code critique can cause difficulties between team members, reduce overall team productivity, and diminish code quality over time. This post will briefly define code reviews, describe some common mistakes, and provide some quick tips for improving a code review process.

What are code reviews?

Code reviews are the process of sharing code so that other engineers can evaluate it. Code reviews can happen verbally during pair programming sessions, or through reviewing code on websites like CodePen and GitHub. Mostly, though, code reviews happen in tools like GitHub when engineers submit pull requests.

Critiques are hugely beneficial. Convening engineers to discuss code, whether in person or by sharing comments, ensures that they're on the same page. A review can also catch small mistakes in code or comments, like spelling, and it can help newer or more junior coders learn the codebase. When done well, regular code reviews have nothing but benefits for all involved.

A common goal for code reviews is to make code changes as minimal and clear as possible so that the reviewer can easily understand what has changed and what is happening in the code. If code reviews are smaller, they're more frequent — potentially several a day — and more manageable.

Reviewing code should be a part of every developer's workflow. Senior reviewers are given the opportunity to teach/mentor, and even learn something new from time to time. Junior reviewers can grow and often help ensure code readability through the questions they ask. In fact, junior engineers are usually the best team members to ensure code readability.

For an engineer who works alone, asking for feedback from outsiders — at meet-ups, GitHub, Open Source Slack Channels, Reddit, Twitter, etc — can allow the solo coder the opportunity to participate in a code review process.

If we could all agree on an established process and language for reviewing code, then maintaining a positive environment for creative and productive engineering is easier. A code review etiquette benefits everyone — whether working alone or within a team.

Harsh code reviews can hurt feelings

Seeing bugs and issues continue to roll in and being mentally unable to address them has led to feelings of failure and depression. When looking at the moment project, I could only see the negatives. The bugs and misnomers and mistakes I had made. It led to a cycle of being too depressed to contribute, which led to being depressed because I wasn't contributing.

- Tim Wood, creator of Momentjs

There are many online comments, posts, and tweets by prolific engineers expressing that their feelings have been hurt by code reviews. This doesn't mean that reviewers are trying to be mean. Feeling defensive is a normal, quite human reaction to a critique or feedback. A reviewer should be aware of how the pitch, tone, or sentiment of their comments could be interpreted by the reviewee (see Occam's Razor).

Although reviewing code is very beneficial, a poor or sloppy review can have the opposite outcome. Avoid criticism without providing context. In other words, take the time to explain why something is wrong, where it went wrong, and how to avoid the mistake moving forward. Showing this level of respect for the reviewee strengthens the team, improves engineering awareness, and helps to provide agreed-upon technical definitions.

Quick tips for improving code review etiquette

Code is logical in nature. It is easy to pinpoint code that is incorrect or could be improved, just like it is easy to notice spelling mistakes. When looking at and discussing logical things like code, the human tendency is to disregard the feelings of other people. This causes feelings to get hurt and a loss of focus on learning and collaboration.

Improving code review etiquette, because of the human condition, is difficult! Here is a quick list of things that I've done, said, seen, or received, that are easy wins in the art of Code Review Etiquette.

Remove the person

Without realizing it, engineers can slide from insightful critique into personal criticism simply through the phrasing they use.

The lines below dissect a code review comment of a theoretical function where it is suggested that there is an opportunity to return out of the function early.

You and I: Using you or I is probably not intentionally offensive, so don't worry. Over time, though, involving the person can start to feel less positive, especially if vocal tones are added.

You should return out of this function early

We: Using we is inclusive and a safe way to say something more directly without making someone feel defensive. However, if the person speaking says we, and has not worked on the code at all, it may seem falsely inclusive and insensitive.

We should return out of this function early

No personal reference: Without a personal reference, the conversation or review communicates only the problem, idea, or issue.

Return out of this function early

Notice how communicating the same thing without personal references takes fewer words and has greater clarity. It also separates the code discussion from personal discussion.

Keep passionate conversations quiet

Passion is an important motivator for improvement, and criticism delivered with passion can be considerate and motivating. Feedback that is critical in nature is most useful when the person receiving the critique stays engaged. This sort of communication comes up a lot during architectural conversations or when discussing new products.


Imagine this comment when stated with exaggerated physical movement, more excited vocal tone, and higher volume.

There are 8 web fonts used in this mock which may affect page load speed or even certain tracking metrics that could be caused by new race conditions!

Then, imagine a similar comment, even terser but stated with a calm demeanor, slower delivery, and a normal vocal volume — followed by a question.

There are 8 web fonts used in this mock. This will affect page load speed and possible tracking metrics because of potential race conditions. How can this be improved?

Notice how the comments above are almost the same. The second comment is even more direct. It states a problem as a fact and then requests feedback.

An important thing to remember when being passionate is to take on a quieter tone. This is a physical decision, not a social one. The same passionate language can be perceived very differently depending on the communicator's delivery. If body language, vocal tone, pitch, and volume remain gentle, the audience is much more likely to remain engaged, even if the critique is critical in nature.

If the tone is aggressive in nature (exaggerated physical movement, more excited vocal tone, higher volume), the actual words used can be gentle in nature, but the audience can feel very differently. This communication can lead to embarrassment, a disengaged audience, and even loss of respect.

Aggressive delivery often accompanies passionate communication because we instinctively want to protect the ideas we're passionate about, so don't beat yourself up if you notice your audience disengaging when you discuss something you care about. The key is that gentle, measured delivery makes it far easier for your audience to remain engaged, even if they are not initially in agreement.

Don't review the author, review the code

Following from the conversation above, pointing, whether in written conversation or with actual body language, is rarely good for communication. It shifts the focal point of the conversation from its context to a person or a thing.

The response below pairs a code comment with a pointer elsewhere. The second comment takes the reader out of the context of the code review, which is confusing.

// Return out of this function earlier
// You need to learn about functional programming

The comment below provides a comment, and then a pseudo-code suggestion.

/* 
  return early like this:
*/
const calculateStuff = (stuff) => {
  if (!stuff) return
  // calculate stuff
  return calculatedStuff
}

In the two examples above, the first example causes the reader to go far beyond the issue. The conversation is more abstract—even existential. The second example refers directly to the issue and then provides a pseudo code snippet that relates directly to the comment.

It is best to only comment on contextually specific items when reviewing code. Broad comments lead to a loss of context. If broader discussions must happen, they should happen outside of code reviews. This keeps the code review clear and scoped to the code that is being reviewed.

Right and wrong can change

Developers almost always want to rewrite things. It is natural to break problems down into tasks in real time to address today's situation. However, the who's and why's of a product's history are important to understand, because they provide great context. 'History repeats itself' is an important phrase to remember when critiquing products or when a product you've written is critiqued. There is a great amount of knowledge to be gained from historical context.

JavaScript was made in a week, considered a hacky scripting language, and then became the most widely used programming language in the world. Scalable Vector Graphics (SVG) was supported as early as 1999, nearly forgotten, and now continues to gain popularity for the new opportunities it provides. Even the World Wide Web was meant for document sharing, with little anticipation of what it has become today. All of these technologies are important to remember when considering software and engineering: as logical and predictable as results are supposed to be, success is often derived from unexpected ones. Be open!


Conclusion

The list above includes general, high-level things that can help with positive engagement when talking about, reviewing, or reading about code—code review etiquette.

I am a hypocrite. I have made all the mistakes that I advise not to do in this article. I am human. My goal here is to help others avoid the mistakes that I have made, and to perhaps encourage some behavior standards for reviewing code so that engineers can more openly discuss code with less worry about being hurt or hurting others.


Code Review Etiquette is a post from CSS-Tricks



from CSS-Tricks http://ift.tt/2y6wg9y
via IFTTT

How to Do a Competitor Analysis for SEO

Posted by John.Reinesch

Competitive analysis is a key part of the beginning stages of an SEO campaign. Far too often, I see organizations skip this important step and get right into keyword mapping, optimizing content, or link building. But understanding who our competitors are and seeing where they stand can lead to a far more comprehensive understanding of what our goals should be and reveal gaps or blind spots.

By the end of this analysis, you will understand who is winning organic visibility in the industry, what keywords are valuable, and which backlink strategies are working best, all of which can then be utilized to gain and grow your own site’s organic traffic.

Why competitive analysis is important

SEO competitive analysis is critical because it gives data about which tactics are working in the industry we are in and what we will need to do to start improving our keyword rankings. The insights gained from this analysis help us understand which tasks we should prioritize and it shapes the way we build out our campaigns. By seeing where our competitors are strongest and weakest, we can determine how difficult it will be to outperform them and the amount of resources that it will take to do so.

Identify your competitors

The first step in this process is determining the top four competitors that we want to use for this analysis. I like to use a mixture of direct business competitors (typically provided by my clients) and online search competitors, which can differ from whom a business identifies as their main competitors. Usually, this discrepancy is due to local business competitors versus those who are paying for online search ads. While your client may be concerned about the similar business down the street, their actual online competitor may be a business from a neighboring town or another state.

To find search competitors, I simply enter my own domain name into SEMrush, scroll down to the “Organic Competitors” section, and click “View Full Report.”

The main metrics I use to help me choose competitors are common keywords and total traffic. Once I've chosen my competitors for analysis, I open up the Google Sheets Competitor Analysis Template to the “Audit Data” tab and fill in the names and URLs of my competitors in rows 2 and 3.

Use the Google Sheets Competitor Analysis Template

A clear, defined process is critical not only for getting repeatable results, but also for scaling efforts as you start doing this for multiple clients. We created our Competitor Analysis Template so that we can follow a strategic process and focus more on analyzing the results rather than figuring out what to look for anew each time.

In the Google Sheets Template, I've provided you with the data points that we'll be collecting, the tools you'll need to do so, and then bucketed the metrics based on similar themes. The data we're trying to collect relates to SEO metrics like domain authority, how much traffic the competition is getting, which keywords are driving that traffic, and the depth of competitors’ backlink profiles. I have built in a few heatmaps for key metrics to help you visualize who's the strongest at a glance.

This template is meant to serve as a base that you can alter depending on your client’s specific needs and which metrics you feel are the most actionable or relevant.

Backlink gap analysis

A backlink gap analysis aims to tell us which websites are linking to our competitors, but not to us. This is vital data because it allows us to close the gap between our competitors’ backlink profiles and start boosting our own ranking authority by getting links from websites that already link to competitors. Websites that link to multiple competitors (especially when it is more than three competitors) have a much higher success rate for us when we start reaching out to them and creating content for guest posts.
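Conceptually, the gap report is a set difference with an overlap count. Here's a minimal sketch of that logic in JavaScript, assuming each competitor's linking domains have already been exported into an array (the function name and domain names are my own placeholders, not part of any Moz tool):

```javascript
// Given arrays of linking root domains for each competitor and for our own
// site, find domains that link to at least `minOverlap` competitors but not
// to us -- these are the highest-value outreach targets.
function backlinkGap(competitorLists, ourDomains, minOverlap = 2) {
  const ours = new Set(ourDomains);
  const counts = new Map();

  // Count how many competitors each linking domain points to.
  for (const list of competitorLists) {
    for (const domain of new Set(list)) {
      counts.set(domain, (counts.get(domain) || 0) + 1);
    }
  }

  // Keep the domains we lack, sorted by how many competitors they link to.
  return [...counts.entries()]
    .filter(([domain, n]) => !ours.has(domain) && n >= minOverlap)
    .sort((a, b) => b[1] - a[1])
    .map(([domain, n]) => ({ domain, competitorsLinked: n }));
}
```

With real exports you'd parse each CSV into one of these arrays first; the spreadsheet template does the same set math with formulas.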

In order to generate this report, you need to head over to the Moz Open Site Explorer tool and input the first competitor’s domain name. Next, click “Linking Domains” on the left side navigation and then click “Request CSV” to get the needed data.

Next, head to the SEO Competitor Analysis Template, select the “Backlink Import - Competitor 1” tab, and paste in the content of the CSV file. It should look like this:

Repeat this process for competitors 2–4 and then for your own website in the corresponding tabs marked in red.

Once you have all your data in the correct import tabs, the “Backlink Gap Analysis” report tab will populate. The result is a highly actionable report that shows where your competitors are getting their backlinks from, which ones they share in common, and which ones you don’t currently have.

It’s also a good practice to hide all of the “Import” tabs marked in red after you paste the data into them, so the final report has a cleaner look. To do this, just right-click on the tabs and select “Hide Sheet,” so the report only shows the tabs marked in blue and green.

For our clients, we typically gain a few backlinks at the beginning of an SEO campaign just from this data alone. It also serves as a long-term guide for link building in the months to come as getting links from high-authority sites takes time and resources. The main benefit is that we have a starting point full of low-hanging fruit from which to base our initial outreach.

Keyword gap analysis

Keyword gap analysis is the process of determining which keywords your competitors rank well for that your own website does not. From there, we reverse-engineer why the competition is ranking well and then look at how we can also rank for those keywords. Often, it could be reworking metadata, adjusting site architecture, revamping an existing piece of content, creating a brand-new piece of content specific to a theme of keywords, or building links to your content containing these desirable keywords.
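The underlying logic is the same set-difference idea as the backlink report, with a difficulty cap and a volume sort layered on top. A minimal sketch, assuming keyword rows shaped roughly like an organic-positions export (the field names and numbers here are hypothetical):

```javascript
// Given a competitor's ranking keywords and the keywords our own site already
// ranks for, surface the gap: keywords we're missing, capped by difficulty
// and sorted by search volume.
function keywordGap(competitorRows, ourKeywords, maxDifficulty = 60) {
  const ours = new Set(ourKeywords.map((k) => k.toLowerCase()));
  return competitorRows
    .filter((row) => !ours.has(row.keyword.toLowerCase()))
    .filter((row) => row.difficulty <= maxDifficulty)
    .sort((a, b) => b.volume - a.volume);
}

// Hypothetical rows in the shape of an SEMrush organic-positions export.
const rows = [
  { keyword: 'seo audit', volume: 5400, difficulty: 55 },
  { keyword: 'seo tips', volume: 8100, difficulty: 72 },
  { keyword: 'link building', volume: 6600, difficulty: 48 },
];
const gap = keywordGap(rows, ['link building']);
// 'link building' drops out because we already rank for it,
// and 'seo tips' is filtered by the difficulty cap.
```

The difficulty cap mirrors the manual volume/difficulty sort described throughout this post; tune it to your site's authority.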

To create this report, we follow a process similar to the backlink gap analysis; only the data source changes. Go to SEMrush again and input your first competitor’s domain name. Then, click on the “Organic Research” positions report in the left-side navigation menu and click "Export" on the right.

Once you download the CSV file, paste the content into the “Keyword Import - Competitor 1” tab and then repeat the process for competitors 2–4 and your own website.

The final report will now populate on the “Keyword Gap Analysis” tab marked in green. It should look like the one below:

This data gives us a starting point to build out complex keyword mapping strategy documents that set the tone for our client campaigns. Rather than just starting keyword research by guessing what we think is relevant, we have hundreds of keywords to start with that we know are relevant to the industry. Our keyword research process then aims to dive deeper into these topics to determine the type of content needed to rank well.

This report also helps drive our editorial calendar, since we often find keywords and topics where we need to create new content to compete with our competitors. We take this a step further during our content planning process, analyzing the content the competitors have created that is already ranking well and using that as a base to figure out how we can do it better. We try to take some of the best ideas from all of the competitors ranking well to then make a more complete resource on the topic.

Using key insights from the audit to drive your SEO strategy

It is critically important to not just create this report, but also to start taking action based on the data that you have collected. On the first tab of the spreadsheet template, we write in insights from our analysis and then use those insights to drive our campaign strategy.

Some examples of typical insights from this document would be the average number of referring domains that our competitors have and how that relates to our own backlink profile. If we are ahead of our competitors regarding backlinks, content creation might be the focal point of the campaign. If we are behind our competitors in regards to backlinks, we know that we need to start a link building campaign as soon as possible.

Another insight we gain is which competitors are most aggressive in PPC and which keywords they are bidding on. Often, the keywords that they are bidding on have high commercial intent and would be great keywords to target organically and provide a lift to our conversions.

Start implementing competitive analyses into your workflow

Competitive analyses for SEO are not something that should be overlooked when planning a digital marketing strategy. This process can help you strategically build unique and complex SEO campaigns based on readily available data and the demand of your market. This analysis will instantly put you ahead of competitors who are following cookie-cutter SEO programs and not diving deep into their industry. Start implementing this process as soon as you can and adjust it based on what is important to your own business or client’s business.

Don’t forget to make a copy of the spreadsheet template here:

Get the Competitive Analysis Template





from The Moz Blog http://ift.tt/2z5akRd
via IFTTT

Tuesday 24 October 2017

Creating Vue.js Transitions & Animations

Tangential Content Earns More Links and Social Shares in Boring Industries [New Research]

Posted by kerryjones

Many companies still don’t see the benefit of creating content that isn’t directly about their products or brand. But unless you have a universally interesting brand, you’ll be hard-pressed to attract much of an audience if all you do is publish brand-centric content.

Content marketing is meant to solve this dilemma. By offering genuinely useful content to your target customers, rather than selling to them, you earn their attention and over time gain their trust.

And yet, I find myself explaining the value of non-branded content all too often. I frequently hear grumblings from fellow marketers that clients and bosses refuse to stray from sales-focused content. I see companies publishing what are essentially advertorials and calling it content marketing.

In addition to turning off customers, branded content can be extremely challenging for building links or earning PR mentions. If you’ve ever done outreach for branded content, you’ve probably gotten a lot of pushback from the editors and writers you’ve pitched. Why? Most publishers bristle at content that feels like a brand endorsement pretending not to be a brand endorsement (and expect you to pay big bucks for a sponsored content or native advertising spot).

Fortunately, there’s a type of content that can earn your target customers’ attention, build high-quality links, and increase brand awareness...

Tangential content: The cure for a boring niche

At Fractl, we refer to content on a topic that’s related to (but not directly about) the brand that created it as "tangential content."

Some hypothetical examples of tangential content would be:

  • A pool installation company creating content about summer safety tips and barbeque recipes.
  • A luggage retailer publishing country-specific travel guides.
  • An auto insurance broker offering car maintenance advice.

While there’s a time for branded content further down the sales funnel, tangential content might be right for you if you want to:

  1. Reach a wide audience and gain top-of-funnel awareness. Not a lot of raving fans in your “boring” brand niche? Tangential topics can get you in front of the masses.
  2. Target a greater number of publishers during outreach to increase your link building and PR mention potential. Tangential topics work well for outreach because you can expand your pool of publishers (larger niches vs. a small niche with only a few dedicated sites).
  3. Create more emotional content that resonates with your audience. In an analysis of more than 300 client campaigns, we found the content that received more than 200 media mentions was more likely than low-performing campaigns to have a strong emotional hook. If your brand niche doesn’t naturally tug on the heartstrings, tangential content is one way to create an emotional reaction.
  4. Build a more diverse content library and not be limited to creating content around one topic. If you’ve maxed out on publishing content about your niche, broadening your content repertoire to tangential topics can reinvigorate your content strategy (and your motivation).

Comparison of tangential vs. on-brand content performance

In our experience at Fractl, tangential content has been highly effective for link building campaigns, especially in narrow client niches that lack broad appeal. While we’ve assumed this is true based on our observations, we now have the data to back up our assumption.

We recently categorized 835 Fractl client campaigns as either “tangential” or “on-brand,” then compared the average number of pickups (links and press mentions) and number of social shares for each group. Our hunch was right: The tangential campaigns earned 30% more media mentions and 77% more social shares on average than the brand-focused campaigns.
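For anyone wanting to run the same comparison on their own campaign data, the math is just a pair of averages and a percentage lift. A small sketch with made-up numbers (Fractl's 835-campaign dataset is not public):

```javascript
// Compare average pickups and shares between tangential and on-brand
// campaigns. The campaign records below are invented for illustration.
function averageBy(campaigns, field) {
  return campaigns.reduce((sum, c) => sum + c[field], 0) / campaigns.length;
}

// Percentage lift of tangential over on-brand for a given metric.
function lift(campaigns, field) {
  const tangential = campaigns.filter((c) => c.type === 'tangential');
  const onBrand = campaigns.filter((c) => c.type === 'on-brand');
  const pct =
    (averageBy(tangential, field) / averageBy(onBrand, field) - 1) * 100;
  return Math.round(pct);
}

const campaigns = [
  { type: 'tangential', pickups: 620, shares: 67000 },
  { type: 'tangential', pickups: 300, shares: 40000 },
  { type: 'on-brand', pickups: 250, shares: 30000 },
  { type: 'on-brand', pickups: 150, shares: 25000 },
];
// lift(campaigns, 'pickups') -> 130 (tangential avg 460 vs on-brand avg 200)
```

Tagging each campaign as tangential or on-brand is the manual part; once that's done, the comparison is mechanical.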

So what exactly does a tangential campaign look like? Below are some real examples of our client campaigns that illustrate how tangential topics can yield stellar results.

Most Hateful/Most Politically Correct Places

  • Client niche: Apartment listing site
  • Campaign topic: Which states and cities use the most prejudiced/racist language based on geo-tagged Twitter data
  • Results: 67,000+ social shares and 620 media pickups, including features on CNET, Slate, Business Insider, AOL, Yahoo, Mic, The Daily Beast, and Adweek

Why it worked

After a string of on-brand campaigns for this client yielded average results, we knew capitalizing on a hot-button, current issue would attract tons of attention. This topic still ties back into the client’s main objective of helping people find a home since the community and location of that home are important factors in one’s decisions. Check out the full case study of this campaign for more insights into why it was successful.

Most Instagrammed Locations

  • Client niche: Bus fare comparison and booking tool
  • Campaign topic: Points of interest where people post the most Instagram photos in North America
  • Results: 40,000+ social shares and more than 300 pickups, including TIME, NBC News, Business Insider, Today, Yahoo!, AOL, Fast Company, and The Daily Mail

Why it worked

Our client’s niche, bus travel, had a limited audience, so we chose a topic that was of interest to anyone who enjoys traveling, regardless of the mode of transportation they use to get there. By incorporating data from a popular social network and using an idea with a strong geographic focus, we could target a lot of different groups — the campaign appealed to travel enthusiasts, Instagram users, and regional and city news outlets (including TV stations). For more details about our thought process behind this idea, see the campaign case study.

Most Attractive NFL Players and Teams


  • Client niche: Sports apparel retailer
  • Campaign topic: Survey that rates the most attractive NFL players
  • Results: 45,000+ social shares and 247 media pickups, including CBS Sports, USA Today, Fox Sports, and NFL.com

Why it worked

Since diehard fans want to show off that their favorite player is the best, even if it’s just in the looks department, we were confident this lighthearted campaign would pique fan interest. But fans weren’t the only ones hitting the share button — the campaign also grabbed the attention of the featured teams and players, with many sharing on their social media profiles, which helped drive exposure.

On-brand content works best in certain verticals

Tangential content isn’t always necessary for earning top-of-funnel awareness. So, how do you know if your brand-centric topics will garner lots of interest? A few things to consider:

  • Is your brand topic interesting or useful to the general population?
  • Are there multiple publishers that specifically cover your niche? Do these publishers have large readerships?
  • Are you already publishing on-brand content that is achieving your goals/expectations?

We’ve seen several industry verticals perform very well using branded content. When we broke down our campaign data by vertical, we found our top performing on-brand campaign topics were technology, drugs and alcohol, and marketing.

Some examples of our successful on-brand campaign topics include:

  • “Growth of SaaS” for a B2B software comparison website
  • “Influencers on Instagram” for an influencer marketplace
  • “Global Drug Treatment Trends” for an addiction recovery client
  • “The Tech Job Network” for a tech career website

Coming up with tangential content ideas

Once you free yourself from only brainstorming brand-centric ideas, you might find it easy to dream up tangential concepts. If you need a little help, here are a few tips to get you started:

Review your buyer personas.

In order to know which tangential topics to choose, you need to understand your target audience’s interests and where your niche intersects with those interests. The best way to find this information? Buyer personas. If you don’t already have detailed buyer personas built out, Mike King’s epic Moz post from a few years ago remains the bible on personas in my opinion.

Find topics your audience cares about with Facebook Audience Insights.

Using its arsenal of user data, this Facebook ads tool gives you a peek into the interests and lifestyles of your target audience. These insights can supplement and inform your buyer personas. See the incredibly actionable post “How to Create Buyer Personas on a Budget Using Facebook Audience Insights” for more help with leveraging this tool.

Consider how trending news topics are tangential to your brand.

Pay attention to themes that keep popping up in the news and how your brand relates back to these stories (this is how the most racist/bigoted states and cities campaign I mentioned earlier in this post came to be). Also anticipate seasonal or event-based topics that are tangential to your brand. For example, a tire manufacturer may want to create content on protecting your car from flooding and storm damage during hurricane season.

Test tangential concepts on social media.

Not sure if a tangential topic will go over well? Before moving forward with a big content initiative, test it out by sharing content related to the topic on your brand’s social media accounts. Does it get a good reaction? Pro tip: spend a little bit of money promoting these as sponsored posts to ensure they get in front of your followers.

Have you had success creating content outside of your brand niche? I'd love to hear about your tangential content examples and the results you achieved; please share in the comments!


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!



from The Moz Blog http://ift.tt/2itJIlR
via IFTTT

Monday 23 October 2017

NEW in Keyword Explorer: See Who Ranks & How Much with Keywords by Site

Posted by randfish

For many years now, Moz's customers and so, so many of my friends and colleagues in the SEO world have had one big feature request from our toolset: "GIVE ME KEYWORDS BY SITE!"

Today, we're answering that long-standing request with that precise data inside Keyword Explorer:

This data is likely familiar to folks who've used tools like SEMRush, KeywordSpy, Spyfu, or others, and we have a few areas we think are stronger than these competitors, and a few known areas of weakness (I'll get to both in a minute). For those who aren't familiar with this type of data, it offers a few big, valuable solutions for marketers and SEOs of all kinds. You can:

  1. Get a picture of how many (and which) keywords your site is currently ranking for, in which positions, even if you haven't been directly rank-tracking.
  2. See which keywords your competitors rank for as well, giving you new potential keyword targets.
  3. Run comparisons to see how many keywords any given set of websites share rankings for, or hold exclusively.
  4. Discover new keyword opportunities at the intersection of your own site's rankings with others, or the intersection of multiple sites in your space.
  5. Order keywords any site ranks for by volume, by ranking position, or by difficulty
  6. Build lists or add to your keyword lists right from the chart showing a site's ranking keywords
  7. Choose to see keywords by root domain (e.g. *.redfin.com including all subdomains), subdomain (e.g. just "www.redfin.com" or just "press.redfin.com"), or URL (e.g. just "http://ift.tt/2i73XFL")
  8. Export any list of ranking keywords to a CSV, along with the columns of volume, difficulty, and ranking data

Find your keywords by site

My top favorite features in this new release are:

#1 - The clear, useful comparison data between sites or pages

Comparing the volume of a site's ranking keywords is a really powerful way to show how, even when there's a strong site in a space (like Sleepopolis in the mattress reviews world), they are often losing out in the mid-long tail of rankings, possibly because they haven't targeted the quantity of keywords that their competitors have.

This type of crystal-clear interface (powerful enough to be used by experts, but easily understandable to anyone) really impressed me when I saw it. Bravo to Moz's UI folks for nailing it.

#2 - The killer Venn diagram showing keyword overlaps

Aww yeah! I love this interactive venn diagram of the ranking keywords, and the ability to see the quantity of keywords for each intersection at a glance. I know I'll be including screenshots like this in a lot of the analyses I do for friends, startups, and non-profits I help with SEO.

#3 - The accuracy & recency of the ranking, volume, & difficulty data

As you'll see in the comparison below, Moz's keyword universe is technically smaller than some others. But I love the trustworthiness of the data in this tool. We refresh not only rankings, but keyword volume data multiple times every month (no dig on competitors, but when volume or rankings data is out of date, it's incredibly frustrating, and lessens the tool's value for me). That means I can use and rely on the metrics and the keyword list — when I go to verify manually, the numbers and the rankings match. That's huge.

Caveat: Any rankings that are personalized or geo-biased tend to have some ranking position changes or differences. If you're doing a lot of geographically sensitive rankings research, it's still best to use a rank tracking solution like the one in Moz Pro Campaigns (or, at an enterprise level, a tool like STAT).


How does Moz's keyword universe stack up to the competition? We're certainly the newest player in this particular space, but we have some advantages over the other players (and, to be fair, some drawbacks too). Moz's Russ Jones put together this data to help compare:

Obviously, we've made the decision to be generally smaller, but fresher, than most of our competitors. We do this because:

  • A) We believe the most-trafficked keywords matter more when comparing the overlaps than getting too far into the long tail (this is particularly important because once you get into the longer tail of search demand, an unevenness in keyword representation is nearly unavoidable and can be very misleading)
  • B) Accuracy matters a lot with these types of analyses, and keyword rankings data that's more than 3–4 weeks out of date can create false impressions. It's also very tough to do useful comparisons when some keyword rankings have been recently refreshed and others are weeks or months behind.
  • C) We chose an evolving corpus that uses clickstream-fed data from Jumpshot to cycle in popular keywords and cycle out others that have lost popularity. In this fashion, we feel we can provide the truest, most representational form of the keyword universe being used by US searchers right now.

Over time, we hope to grow our corpus (so long as we can maintain accuracy and freshness, which provide the advantages above), and extend to other geographies as well.

If you're a Moz Pro subscriber and haven't tried out this feature yet, give it a spin. To explore keywords by site, simply enter a root domain, subdomain, or exact page into the universal search bar in Keyword Explorer. Use the drop-down if you need to modify your search (for example, researching a root domain as a keyword).

There's immense value to be had here, and a wealth of powerful, accurate, timely rankings data that can help boost your SEO targeting and competitive research efforts. I'm looking forward to your comments, questions, and feedback!




from The Moz Blog http://ift.tt/2yJxKtW
via IFTTT

Reboot, Resets, and Reasoning

I saw in an article by Nicholas Cerminara the other day (careful visiting that link, looks like they have some tracking scripts run wild) that Bootstrap 4 has a new CSS reset baked in they are calling Reboot:

Reboot, a collection of element-specific CSS changes in a single file, kickstart Bootstrap to provide an elegant, consistent, and simple baseline to build upon.

If you're new to CSS development, the whole idea of a CSS reset is to deal with styling inconsistencies across browsers. For example, just now I popped a <button> onto a page with no other styling whatsoever. Chrome applies padding: 2px 6px 3px; Firefox applies padding: 0 8px;. A CSS reset would apply new padding to that element, so that all browsers are consistent about what they apply. There are loads of examples like that.
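
To make that concrete, here's a minimal sketch of the kind of rule a reset might contain for that button example (the values are arbitrary; the point is that every browser then starts from the same baseline):

```css
/* Give <button> one explicit set of values so Chrome's default
   2px 6px 3px padding and Firefox's 0 8px no longer differ. */
button {
  padding: 0;
  border: none;
  background: none;
  font: inherit; /* buttons don't inherit font styles by default */
}
```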

By way of a bit of history...

In 2007 Jeff Starr rounded up a bunch of different CSS resets. The oldest one dated is Tantek Çelik's undohtml.css (that's a direct link to the source). We can see that the purpose behind it was to strip away default styling.

/* undohtml.css */
/* (CC) 2004 Tantek Celik. Some Rights Reserved.             */
/*   http://ift.tt/o655VX                   */
/* This style sheet is licensed under a Creative Commons License. */

/* Purpose: undo some of the default styling of common (X)HTML browsers */

By far, the most popular reset came shortly after: the Meyer reset. It has different stuff in it than Tantek's did (it has even been updated with some HTML5 elements) but the spirit is the same: remove default styling. You'll probably recognize this famous block of code, finding its way into your DevTools style panel everywhere:

html, body, div, span, applet, object, iframe,
h1, h2, h3, h4, h5, h6, p, blockquote, pre,
a, abbr, acronym, address, big, cite, code,
del, dfn, em, img, ins, kbd, q, s, samp,
small, strike, strong, sub, sup, tt, var,
b, u, i, center,
dl, dt, dd, ol, ul, li,
fieldset, form, label, legend,
table, caption, tbody, tfoot, thead, tr, th, td,
article, aside, canvas, details, embed, 
figure, figcaption, footer, header, hgroup, 
menu, nav, output, ruby, section, summary,
time, mark, audio, video {
        margin: 0;
        padding: 0;
        border: 0;
        font-size: 100%;
        font: inherit;
        vertical-align: baseline;
}

Start with a reset like this (at the top of your production stylesheet) and the styles you write afterward will be on a steady foundation.

Years later, as HTML5 became more real, resets like Richard Clark's HTML5 Reset gained popularity. It was still a modified version of the Meyer reset, and it retained that spirit.

article,aside,details,figcaption,figure,
footer,header,hgroup,menu,nav,section { 
    display:block;
}

Sprinkled all throughout this, there were plenty of developers who went minimal by just zapping margin and padding from everything and leaving it at that:

* {
  padding: 0;
  margin: 0;
}

Dumb trivia: the CSS-Tricks logo was inspired by the universal selector and that idea.

Along comes Normalize.css...

Normalize.css represents the first meaningful shift in spirit for what a CSS reset should do. This is what seemed so different about it to me:

  • It was a fresh evaluation of everything that could be styled differently across browsers, and it addressed all of it. Where older CSS resets were a handful of lines of code, the uncompressed and documented Normalize is 447.
  • It didn't remove any styling from elements that were already consistent across browsers (for the most part). For example, there isn't anything in Normalize for h2-h6 elements, just a fix for a weird h1 thing. That means you aren't zapping away header hierarchy; that default styling remains.
  • It was more accommodating to the idea of altering it, rather than just including it. For example, there is a section just for the <pre> tag, and one line of that sets its font-family. You could change that to the font-family you want, and it would be just as effective a reset.
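
As a sketch of that last point: Normalize's <pre> rule fixes the odd way browsers size monospace text, and you can fold your own preference into the same rule rather than overriding it later (the specific font stack below is just an illustrative choice):

```css
/* Normalize's rule: corrects inconsistent monospace font
   sizing across browsers. */
pre {
  font-family: monospace, monospace;
  font-size: 1em;
}

/* The same rule, edited in place to express a preference while
   keeping the fix intact: */
pre {
  font-family: "Source Code Pro", monospace, monospace;
  font-size: 1em;
}
```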

The code is satisfying to read, as it explains what it's doing without drowning in specifics:

/**
 * 1. Remove the bottom border in Chrome 57- and Firefox 39-.
 * 2. Add the correct text decoration in Chrome, Edge, IE, Opera, and Safari.
 */

abbr[title] {
  border-bottom: none; /* 1 */
  text-decoration: underline; /* 2 */
  text-decoration: underline dotted; /* 2 */
}

Today Normalize is at 7.0.0 and has going on 30,000 GitHub stars. It's wicked popular.

So... resets can be opinionated?

We've seen lots of different takes on CSS resets and we've seen fundamental shifts in the approach, so I think it's fair to say CSS resets can take an opinionated stance.

Let's consider some ways...

  • Does the reset touch every single possible element? Or a subset of elements? How does it decide which elements to touch and which not to?
  • What properties are changed? Only ones with cross-browser differences? Or some other criteria, like the similarity to other elements that needed changes? Is it OK to apply properties to elements that don't have cross-browser issues in the name of consistency and efficiency?
  • Do you try to preserve the spirit of the user agent stylesheet? Sensible defaults?
  • Do you apply any properties that don't have cross-browser issues but could be considered beneficial to "reset", like typographic defaults or box-sizing?
  • Do you include "toolbox" classes for common needs? Or leave that for other projects to handle?
  • Are you concerned about the size of it?
  • Do you use a preprocessor or any other tooling?

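One concrete answer to that box-sizing question is the widely used inheritance pattern, shown here as a sketch of a rule that has no cross-browser bug to fix but is often considered a beneficial "reset" anyway:

```css
/* Opt the whole page into border-box sizing, which most
   developers find more predictable than the default content-box. */
html {
  box-sizing: border-box;
}

/* Inherit rather than hard-code, so a component can set its own
   box-sizing and have its descendants follow along. */
*,
*::before,
*::after {
  box-sizing: inherit;
}
```
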
Take a look at Vanilla CSS Un-Reset. Loads of opinions here, starting with the idea that it's meant to re-style elements after you un-style them with a reset. It sets the body font size in pt, sets a very specific monospace font stack, includes an ol ol ol ol selector, a clearfix, and alignment helper classes. No judgment there. People make things to help with their own problems, and I'm sure this was helpful to the creator. But we can see the opinions shine through there.

Now look at MiniReset.css. Very different! It does wipe out type styles "so that using semantic markup doesn't affect the styling", but leaves some defaults in place on purpose "so that buttons and inputs keep their default layout", puts in some things that don't have cross-browser problems but are useful globally (box-sizing), and adds some minor responsive design helpers.

Totally different set of opinions there.

Jonathan Neal created a reset called sanitize.css that is very clear about its opinions. Search for the word "opinionated" in the source code and you'll see it 19 times. All of these are choices Jonathan made based on research and what seem to be modern best practices, no doubt sprinkled with his own needs and desires for what should be in a reset.

/*
 * Remove the text shadow on text selections (opinionated).
 * 1. Restore the coloring undone by defining the text shadow (opinionated).
 */

::-moz-selection {
        background-color: #b3d4fc; /* 1 */
        color: #000000; /* 1 */
        text-shadow: none;
}

::selection {
        background-color: #b3d4fc; /* 1 */
        color: #000000; /* 1 */
        text-shadow: none;
}

The word "reset"

Personally, I think it's useful to think of all of them under the same umbrella term and just be aware of the philosophical differences. But Normalize intentionally separates itself:

A modern, HTML5-ready alternative to CSS resets

Sanitize calls itself a CSS library and doesn't use the word "reset" anywhere except to cite the Meyer reset.

Reboot

Reboot is interesting as it's perhaps the newest player in this world. Its file history dates back to 2015, which is probably related to Bootstrap 4 taking a while to drop after Bootstrap 3. Reboot doesn't have its own repo; it's a part of Bootstrap. Here's the direct file and the docs.

The way they think about it is interesting:

Reboot builds upon Normalize, providing many HTML elements with somewhat opinionated styles using only element selectors. Additional styling is done only with classes. For example, we reboot some <table> styles for a simpler baseline and later provide .table, .table-bordered, and more.

You can have a class that does styling, but if you use a reset, you don't have to overload that class with reset styles that handle cross-browser consistency issues.

//
// Tables
//

table {
  border-collapse: collapse; // Prevent double borders
}

caption {
  padding-top: $table-cell-padding;
  padding-bottom: $table-cell-padding;
  color: $text-muted;
  text-align: left;
  caption-side: bottom;
}

th {
  // Matches default `<td>` alignment by inheriting from the `<body>`, or the
  // closest parent with a set `text-align`.
  text-align: inherit;
}

It's definitely opinionated, but in a way that rolls with Bootstrap nicely. The fact that it's buried within Bootstrap is pretty good signaling that this is designed for that world, not as a drop-in for any project. That said, I did my best to compile a straight CSS version of it here.

Tailoring a reset based on browser support

So long as we're talking about the past and future of resets, it's worth mentioning Browserslist, a standardized format for declaring which browsers/versions a project supports.

A reset could be built in such a way that each thing it includes knows why it is there: exactly which browser and version it exists to support. Then, if the Browserslist configuration says that particular browser isn't supported by the project anyway, that CSS could be removed.

That's what PostCSS Normalize does.
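
As a rough sketch of how that looks in practice, the supported-browser list typically lives in a .browserslistrc file (or a browserslist field in package.json); the queries below are chosen purely for illustration:

```
# .browserslistrc: the browsers this project supports.
# PostCSS Normalize reads this and keeps only the normalize.css
# rules those browsers actually need.
last 2 Chrome versions
Firefox ESR
not IE 11
```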


Reboot, Resets, and Reasoning is a post from CSS-Tricks



from CSS-Tricks http://ift.tt/2i0a1w7
via IFTTT

Passkeys: What the Heck and Why?

These things called  passkeys  sure are making the rounds these days. They were a main attraction at  W3C TPAC 2022 , gained support in  Saf...