Friday, 29 September 2017
CSS Grid PlayGround
Really great work by the Mozilla gang. Curious, as they already have MDN for CSS Grid, which isn't only a straight reference; they have plenty of "guides" there too. Not that I'm complaining; the design and learning flow of this are fantastic. And of course, I'm a fan of the "View on CodePen" links ;)
There are always lots of ways to learn something. I'm a huge fan of Rachel Andrew's totally free video series and our own guide. This also seems a bit more playground-like.
Direct Link to Article — Permalink
CSS Grid PlayGround is a post from CSS-Tricks
from CSS-Tricks http://ift.tt/2wiZfqL
via IFTTT
Thursday, 28 September 2017
A Poll About Pattern Libraries and Hiring
I was asked (by this fella on Twitter) a question about design patterns. It has an interesting twist though, related to hiring, which I hope makes for a good poll.
I'll let this run for a week or two. Then (probably) instead of writing a new post with the results, I'll update this one with the results. Feel free to comment with the reasoning for your vote.
Results!
At the time of this update (September 2017), the poll has been up for about 6 weeks.
61% of folks said they would be more likely to want a job somewhere that was actively using (or working toward) a pattern library.
That's a strong number, I'd say! Especially when 32% of folks responded that they don't care. So 93% of folks are either incentivized to work for you because of a pattern library or don't mind either way. So a pattern library is good not only for your codebase and business, but for attracting talent as well.
Only 7% of folks would be less likely to want to work there. Presumably, that's either because they enjoy that kind of work and it's already done, or because they find it limiting.
Read the comments below for some interesting further thoughts.
A Poll About Pattern Libraries and Hiring is a post from CSS-Tricks
from CSS-Tricks http://ift.tt/2vYM816
via IFTTT
Wednesday, 27 September 2017
How to Track Your Local SEO & SEM: A Guide
Posted by nickpierno
If you asked me, I’d tell you that proper tracking is the single most important element in your local business digital marketing stack. I’d also tell you that even if you didn’t ask, apparently.
A decent tracking setup allows you to answer the most important questions about your marketing efforts. What’s working and what isn’t?
Many digital marketing strategies today still focus on traffic. Lots of agencies/developers/marketers will slap an Analytics tracking code on your site and call it a day. For most local businesses, though, traffic isn’t all that meaningful of a metric. And in many cases (e.g. Adwords & Facebook), more traffic just means more spending, without any real relationship to results.
What you really need your tracking setup to tell you is how many leads (AKA conversions) you’re getting, and from where. It also needs to do so quickly and easily, without you having to log into multiple accounts to piece everything together.
If you’re spending money or energy on SEO, Adwords, Facebook, or any other kind of digital traffic stream and you’re not measuring how many leads you get from each source, stop what you’re doing right now and make setting up a solid tracking plan your next priority.
This guide is intended to fill you in on all the basic elements you’ll need to assemble a simple, yet flexible and robust tracking setup.
Google Analytics
Google Analytics is at the center of virtually every good web tracking setup. There are other supplemental ways to collect web analytics (like Heap, Hotjar, Facebook Pixels, etc), but Google Analytics is the free, powerful, and omnipresent tool that virtually every website should use. It will be the foundation of our approach in this guide.
Analytics setup tips
Analytics is super easy to set up. Create (or sign into) a Google account, add your Account and Property (website), and install the tracking code in your website’s template.
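If it helps to picture what that tracking code boils down to, here's a minimal sketch of the two analytics.js calls the standard snippet makes (the async loader that defines ga() comes from your property's tracking code screen, and UA-XXXXXXX-X is a placeholder for your own property ID):
// Assumes Google's standard analytics.js loader snippet has already defined ga().
ga('create', 'UA-XXXXXXX-X', 'auto'); // placeholder property ID
ga('send', 'pageview');               // record a pageview on each page load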
Whatever happens, don’t let your agency or developer set up your Analytics property on their own Account. Agencies and developers: STOP DOING THIS! Create a separate Google/Gmail account and let this be the "owner" of a new Analytics Account, then share permission with the agency/developer’s account, the client’s personal Google account, and so on.
The “All Website Data” view will be created by default for a new property. If you’re going to add filters or make any other advanced changes, be sure to create and use a separate View, keeping the default view clean and pure.
Also be sure to set the appropriate currency and time zone in the “View Settings.” If you ever use Adwords, using the wrong currency setting will result in a major disagreement between Adwords and Analytics.
Goals
Once your basic Analytics setup is in place, you should add some goals. This is where the magic happens. Ideally, every business objective your website can achieve should be represented as a goal conversion. Conversions can come in many forms, but here are some of the most common ones:
- Contact form submission
- Quote request form submission
- Phone call
- Text message
- Chat
- Appointment booking
- Newsletter signup
- E-commerce purchase
How you slice up your goals will vary with your needs, but I generally try to group similar “types” of conversions into a single goal. If I have several different contact forms on a site (like a quick contact form in the sidebar, and a heftier one on the contact page), I might group those as a single goal. You can always dig deeper to see the specific breakdown, but it’s nice to keep goals as neat and tidy as possible.
To create a goal in Analytics:
- Navigate to the Admin screen.
- Under the appropriate View, select Goals and then + New Goal.
- You can choose between a goal Template or Custom. Most goals are easiest to set up by choosing Custom.
- Give your goal a name (ex. Contact Form Submission) and choose a type. Most goals for local businesses will either be a Destination or an Event.
Pro tip: Analytics allows you to associate a dollar value to your goal conversions. If you can tie your goals to their actual value, it can be a powerful metric to measure performance with. A common way to determine the value of a goal is to take the average value of a sale and multiply it by the average closing rate of Internet leads. For example, if your average sale is worth $1,000, and you typically close 1/10 of leads, your goal value would be $100.
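That back-of-the-napkin math is easy to sanity-check with your own numbers (both inputs below are just the example figures from above):
// Example figures; replace with your own averages.
const averageSaleValue = 1000; // average revenue per sale, in dollars
const leadCloseRate = 0.1;     // roughly 1 in 10 internet leads closes
const goalValue = averageSaleValue * leadCloseRate;
console.log(goalValue); // 100, the value to assign to the goal in Analytics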
Form tracking
The simplest way to track form fills is to have the form redirect to a "Thank You" page upon submission. This is usually my preferred setup; it’s easy to configure, and I can use the Thank You page to recommend other services, articles, etc. on the site and potentially keep the user around. I also find a dedicated Thank You page to provide the best affirmation that the form submission actually went through.
Different forms can all use the same Thank You page, and pass along variables in the URL to distinguish themselves from each other so you don’t have to create a hundred different Thank You pages to track different forms or goals. Most decent form plugins for Wordpress are capable of this. My favorite is Gravityforms. Contact Form 7 and Ninja Forms are also very popular (and free).
Another option is using event tracking. Event tracking allows you to track the click of a button or link (the submit button, in the case of a web form). This would circumvent the need for a thank you page if you don’t want to (or can’t) send the user elsewhere when they submit a form. It’s also handy for other, more advanced forms of tracking.
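For reference, here's a bare-bones sketch of what firing such an event with analytics.js can look like (the category and action names are placeholders, and it assumes Google's standard ga() tag is already on the page):
// Hypothetical handler, wired to the form's submit button or your form plugin's success hook.
function onContactFormSubmit() {
  // Sends an Event hit that a "Contact Form Submission" goal can match on.
  ga('send', 'event', 'Contact Form', 'submit');
}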
Here’s a handy plugin for Gravityforms that makes setting up event tracking a snap.
Once you’ve got your form redirecting to a Thank You page or generating an event, you just need to create a goal in Analytics with the corresponding value.
You can use Thank You pages or events in a similar manner to track appointment booking, web chats, newsletter signups, etc.
Call tracking
Many businesses and marketers have adopted form tracking, since it’s easy and free. That’s great. But for most businesses, it leaves a huge volume of web conversions untracked.
If you’re spending cash to generate traffic to your site, you could be hemorrhaging budget if you’re not collecting and attributing the phone call conversions from your website.
There are several solutions and approaches to call tracking. I use and recommend CallRail, which also seems to have emerged as the darling of the digital marketing community over the past few years thanks to its ease of use, great support, fair pricing, and focus on integration. Another option (so I don’t come across as completely biased) is CallTrackingMetrics.
You’ll want to make sure your call tracking platform allows for integration with Google Analytics and offers something called "dynamic number insertion."
Dynamic number insertion uses JavaScript to detect your actual local phone number on your website and replace it with a tracking number when a user loads your page.
Dynamic insertion is especially important in the context of local SEO, since it allows you to keep your real, local number on your site, and maintain NAP consistency with the rest of your business’s citations. Assuming it’s implemented properly, Google will still see your real number when it crawls your site, but users will get a tracked number.
Basically, magic.
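If you're curious what's going on under the hood, here's a purely illustrative sketch of the idea (not CallRail's actual script; real platforms also handle session persistence, number formatting, and pool assignment):
// Placeholder mapping of traffic sources to tracking numbers.
const trackingNumbers = {
  organic: '(555) 010-0101',
  paid: '(555) 010-0102',
  facebook: '(555) 010-0103'
};

function insertTrackingNumber(source) {
  const trackingNumber = trackingNumbers[source];
  if (!trackingNumber) return; // unknown source: leave the real, local number in place
  // Assumes your phone number is wrapped in elements with this class.
  document.querySelectorAll('.phone-number').forEach(el => {
    el.textContent = trackingNumber;
  });
}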
There are a few ways to implement dynamic number insertion. For most businesses, one of these two approaches should fit the bill.
Number per source
With this approach, you'll create a tracking number for each source you wish to track calls for. These sources might be:
- Organic search traffic
- Paid search traffic
- Facebook referral traffic
- Yelp referral traffic
- Direct traffic
- Vanity URL traffic (for visitors coming from an offline TV or radio ad, for example)
When someone arrives at your website from one of these predefined sources, the corresponding number will show in place of your real number, wherever it’s visible. If someone calls that number, an event will be passed to Analytics along with the source.
This approach isn’t perfect, but it’s a solid solution if your site gets large amounts of traffic (5k+ visits/day) and you want to keep call tracking costs low. It will do a solid job of answering the basic questions of how many calls your site generates and where they came from, but it comes with a few minor caveats:
- Calls originating from sources you didn’t predefine will be missed.
- Events sent to Analytics will create artificial sessions not tied to actual user sessions.
- Call conversions coming from Adwords clicks won’t be attached to campaigns, ad groups, or keywords.
Some of these issues have more advanced workarounds. None of them are deal breakers… but you can avoid them completely with number pools — the awesomest call tracking method.
Number pools
“Keyword Pools,” as CallRail refers to them, are the killer app for call tracking. As long as your traffic doesn’t make this option prohibitively expensive (which won’t be a problem for most local business websites), this is the way to go.
In this approach, you create a pool with several numbers (8+ with CallRail). Each concurrent visitor on your site is assigned a different number, and if they call it, the conversion is attached to their session in Analytics, as well as their click in Adwords (if applicable). No more artificial sessions or disconnected conversions, and as long as you have enough numbers in your pool to cover your site’s traffic, you’ll capture all calls from your site, regardless of source. It’s also much quicker to set up than a number per source, and will even make you more attractive and better at sports!
You generally have to pay your call tracking provider for additional numbers, and you’ll need a number for each concurrent visitor to keep things running smoothly, so this is where massive amounts of traffic can start to get expensive. CallRail recommends you look at your average hourly traffic during peak times and include ¼ the tally as numbers in your pool. So if you have 30 visitors per hour on average, you might want ~8 numbers.
Implementation
Once you’ve got your call tracking platform configured, you’ll need to implement some code on your site to allow the dynamic number insertion to work its magic. Most platforms will provide you with a code snippet and instructions for installation. If you use CallRail and Wordpress, there’s a handy plugin to make things even simpler. Just install, connect, and go.
To get your calls recorded in Analytics, you’ll just need to enable that option from your call tracking service. With CallRail you simply enable the integration, add your domain, and calls will be sent to your Analytics account as Events. Just like with your form submissions, you can add these events as a goal. Usually it makes sense to add a single goal called “Phone Calls” and set your event conditions according to the output from your call tracking service. If you’re using CallRail, it will look like this:
Google Search Console
It’s easy to forget to set up Search Console (formerly Webmaster Tools), because most of the time it plays a backseat role in your digital marketing measurement. But miss it, and you’ll forego some fundamental technical SEO basics (country setting, XML sitemaps, robots.txt verification, crawl reports, etc.), and you’ll miss out on some handy keyword click data in the Search Analytics section. Search Console data can also be indispensable for diagnosing penalties and other problems down the road, should they ever pop up.
Make sure to connect your Search Console with your Analytics property, as well as your Adwords account.
With all the basics of your tracking setup in place, the next step is to bring your paid advertising data into the mix.
Google Adwords
Adwords is probably the single most convincing reason to get proper tracking in place. Without it, you can spend a lot of money on clicks without really knowing what you get out of it. Conversion data in Adwords is also absolutely critical in making informed optimizations to your campaign settings, ad text, keywords, and so on.
If you’d like some more of my rantings on conversions in Adwords and some other ways to get more out of your campaigns, check out this recent article :)
Getting your data flowing in all the right directions is simple, but often overlooked.
Linking with Analytics
First, make sure your Adwords and Analytics accounts are linked. Always make sure you have auto-tagging enabled on your Adwords account. Now all your Adwords data will show up in the Acquisition > Adwords area of Analytics. This is a good time to double-check that you have the currency correctly set in Analytics (Admin > View Settings); otherwise, your Adwords spend will be converted to the currency set in Analytics and record the wrong dollar values (and you can’t change data that’s already been imported).
Next, you’ll want to get those call and form conversions from Analytics into Adwords.
Importing conversions in Adwords
Some Adwords management companies/consultants might disagree, but I strongly advocate an Analytics-first approach to conversion tracking. You can get call and form conversions pulled directly into Adwords by installing a tracking code on your site. But don’t.
Instead, make sure all your conversions are set up as goals in Analytics, and then import them into Adwords. This allows Analytics to act as your one-stop-shop for reviewing your conversion data, while providing all the same access to that data inside Adwords.
Call extensions & call-only ads
This can throw some folks off. You will want to track call extensions natively within Adwords. These conversions are set up automatically when you create a call extension in Adwords and elect to use a Google call forwarding number with the default settings.
Don’t worry though, you can still get these conversions tracked in Analytics if you want to (I could make an argument either for or against). Simply create a single “offline” tracking number in your call tracking platform, and use that number as the destination for the Google forwarding number.
This also helps counteract one of the oddities of Google’s call forwarding system. Google will actually only start showing the forwarding number on desktop ads after they have received a certain (seemingly arbitrary) minimum number of clicks per week. As a result, some calls are tracked and some aren’t — especially on smaller campaigns. With this little trick, Analytics will show all the calls originating from your ads — not just ones that take place once you’ve paid Google enough each week.
Adwords might give you a hard time for using a number in your call extensions that isn’t on your website. If you encounter issues with getting your number verified for use as a call extension, just make sure you have linked your Search Console to your Adwords account (as indicated above).
Now you’ve got Analytics and Adwords all synced up, and your tracking regimen is looking pretty gnarly! There are a few other cool tools you can use to take full advantage of your sweet setup.
Google Tag Manager
If you’re finding yourself putting a lot of code snippets on your site (web chat, Analytics, call tracking, Adwords, Facebook Pixels, etc), Google Tag Manager is a fantastic tool for managing them all from one spot. You can also do all sorts of advanced slicing and dicing.
GTM is basically a container that you put all your snippets in, and then you put a single GTM snippet on your site. Once installed, you never need to go back to your site’s code to make changes to your snippets. You can manage them all from the GTM interface in a user-friendly, version-controlled environment.
Don’t bother if you just need Analytics on your site (and are using the CallRail plugin). But for more robust needs, it’s well worth considering for its sheer power and simplicity.
Here’s a great primer on making use of Google Tag Manager.
UTM tracking URLs & Google Campaign URL Builder
Once you’ve got conversion data occupying all your waking thoughts, you might want to take things a step further. Perhaps you want to track traffic and leads that come from an offline advertisement, a business card, an email signature, etc. You can build tracking URLs that include UTM parameters (campaign, source, and medium), so that when visitors come to your site from a certain place, you can tell where that place was!
Once you know how to build these URLs, you don’t really need a tool, but Google’s Campaign URL Builder makes quick enough work of it that it’s bound to earn a spot in your browser’s bookmarks bar.
Pro tip: Use a tracking URL on your Google My Business listing to help distinguish traffic/conversions coming in from your listing vs traffic coming in from the organic search results. I’d recommend using:
Source: google
Medium: organic
Campaign name: gmb-listing (or something)
This way your GMB traffic still shows up in Analytics as normal organic traffic, but you can drill down to the gmb-listing campaign to see its specific performance.
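For example, with example.com standing in for your own domain, the finished tracking URL would look something like this:
https://www.example.com/?utm_source=google&utm_medium=organic&utm_campaign=gmb-listing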
Bonus pro tip: Use a vanity domain or a short URL on print materials or offline ads, and point it to a tracking URL to measure their performance in Analytics.
Rank tracking
Whaaat? Rank tracking is a dirty word to conversion tracking purists, isn’t it?
Nah. It’s true that rank tracking is a poor primary metric for your digital marketing efforts, but it can be very helpful as a supplemental metric and for helping to diagnose changes in traffic, as Darren Shaw explored here.
For local businesses, we think our Local Rank Tracker is a pretty darn good tool for the job.
Google My Business Insights
Your GMB listing is a foundational piece of your local SEO infrastructure, and GMB Insights offer some meaningful data (impressions and clicks for your listing, mostly). It also tries to tell you how many calls your listing generates for you, but it comes up a bit short since it relies on "tel:" links instead of tracking numbers. It will tell you how many people clicked on your phone number, but not how many actually made the call. It also won’t give you any insights into calls coming from desktop users.
There’s a great workaround though! It just might freak you out a bit…
Fire up your call tracking platform once more, create an “offline” number, and use it as your “primary number” on your GMB listing. Don’t panic. You can preserve your NAP consistency by demoting your real local number to an “additional number” slot on your GMB listing.
I don’t consider this a necessary step, because you’re probably not pointing your paid clicks to your GMB listing. However, combined with a tracking URL pointing to your website, you can now fully measure the performance of Google My Business for your business!
Disclaimer: I believe that this method is totally safe, and I’m using it myself in several instances, but I can’t say with absolute certainty that it won’t impact your rankings. Whitespark is currently testing this out on a larger scale, and we’ll share our findings once they’re assembled!
Taking it all in
So now you’ve assembled a lean, mean tracking machine. You’re already feeling 10 years younger, and everyone pays attention when you enter the room. But what can you do with all this power?
Here are a few ways I like to soak up this beautiful data.
Pop into Analytics
Since we’ve centralized all our tracking in Analytics, we can answer pretty much any performance questions we have within a few simple clicks.
- How many calls and form fills did we get last month from our organic rankings?
- How does that compare to the month before? Last year?
- How many paid conversions are we getting? How much are we paying on average for them?
- Are we doing anything expensive that isn’t generating many leads?
- Does our Facebook page generate any leads on our website?
There are a billion and seven ways to look at your Analytics data, but I do most of my ogling from Acquisition > All Traffic > Channels. Here you get a great overview of your traffic and conversions sliced up by channels (Organic Search, Paid Search, Direct, Referral, etc). You can obviously adjust date ranges, compare to past date ranges, and view conversion metrics individually or as a whole. For me, this is Analytics home base.
Acquisition > All Traffic > Source/Medium can be equally interesting, especially if you’ve made good use of tracking URLs.
Make some sweet SEO reports
I can populate almost my entire standard SEO client report from the Acquisition section of Analytics. Making conversions the star of the show really helps to keep clients engaged in their monthly reporting.
Google Analytics dashboards
Google’s Dashboards inside Analytics provide a great way to put the most important metrics together on a single screen. They’re easy to use, but I’ve always found them a bit limiting. Fortunately for data junkies, Google has recently released its next generation data visualization product...
Google Data Studio
This is pretty awesome. It’s very flexible, powerful, and user-friendly. I’d recommend skipping the Analytics Dashboards and going straight to Data Studio.
It will allow you to beautifully dashboard-ify your data from Analytics, Adwords, YouTube, DoubleClick, and even custom databases or spreadsheets. All the data is “live” and dynamic. Users can even change data sources and date ranges on the fly! Bosses love it, clients love it, and marketers love it… provided everything is performing really well ;)
Supermetrics
If you want to get really fancy, and build your own fully custom dashboard, develop some truly bespoke analysis tools, or automate your reporting regimen, check out Supermetrics. It allows you to pull data from just about any source into Google Sheets or Excel. From there, your only limitation is your mastery of spreadsheet-fu and your imagination.
TL;DR
So that’s a lot of stuff. If you’d like to skip the more nuanced explanations, pro tips, and bad jokes, here’s the gist in point form:
- Tracking your digital marketing is super important.
- Don’t just track traffic. Tracking conversions is critical.
- Use Google Analytics. Don’t let your agency use their own account.
- Set up goals for every type of lead (forms, calls, chats, bookings, etc).
- Track forms with destinations (thank you pages) or events.
- Track your calls, probably using CallRail.
- Use "number per source" if you have a huge volume of traffic; otherwise, use number pools (AKA keyword pools). Pools are better.
- Set up Search Console and link it to your Analytics and Adwords accounts.
- Link Adwords with Analytics.
- Import Analytics conversions into Adwords instead of using Adwords’ native conversion tracking snippet...
- ...except for call extensions. Track those in Adwords AND in Analytics (if you want to) by using an “offline” tracking number as the destination for your Google forwarding numbers.
- Use Google Tag Manager if you have more than a couple third-party scripts to run on your site (web chat, Analytics, call tracking, Facebook Pixels etc).
- Use Google Campaign URL Builder to create tracked URLs for tracking visitors from various sources like offline advertising, email signatures, etc.
- Use a tracked URL on your GMB listing.
- Use a tracked number as your “primary” GMB listing number (if you do this, make sure you put your real local number as a “secondary” number). Note: We think this is safe, but we don’t have quite enough data to say so unequivocally. YMMV.
- Use vanity domains or short URLs that point to your tracking URLs to put on print materials, TV spots, etc.
- Track your rankings like a boss.
- Acquisition > All Traffic > Channels is your new Analytics home base.
- Consider making some Google Analytics Dashboards… and then don’t, because Google Data Studio is way better. So use that.
- Check out Supermetrics if you want to get really hardcore.
- Don’t let your dreams be dreams.
If you’re new to tracking your digital marketing, I hope this provides a helpful starting point, and helps cut through some of the confusion and uncertainty about how to best get set up.
If you’re a conversion veteran, I hope there are a few new or alternative ideas here that you can use to improve your setup.
If you’ve got anything to add, correct, or ask, leave a comment!
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
from The Moz Blog http://ift.tt/2wVCoRD
via IFTTT
Tuesday, 26 September 2017
How and Why to Do a Mobile/Desktop Parity Audit
Posted by Everett
Google still ranks webpages based on the content, code, and links they find with a desktop crawler. They’re working to update this old-school approach in favor of what their mobile crawlers find instead. Although the rollout will probably happen in phases over time, I’m calling the day this change goes live worldwide “D-day” in the post below. Mobilegeddon was already taken.
You don’t want to be in a situation on D-day where your mobile site has broken meta tags, unoptimized titles and headers, missing content, or is serving the wrong HTTP status code. This post will help you prepare so you can sleep well between now and then.
What is a mobile parity audit?
When two or more versions of a website are available on the same URL, a "parity audit" will crawl each version, compare the differences, and look for errors.
When do you need one?
You should do a parity audit if content is added, removed, hidden, or changed between devices without sending the user to a new URL.
This type of analysis is also useful for mobile sites on a separate URL, but that's another post.
What will it tell you? How will it help?
Is the mobile version of the website "optimized" and crawlable? Are all of the header response codes and tags set up properly, and in the same way, on both versions? Is important textual content missing from, or hidden on, the mobile version?
Why parity audits could save your butt
The last thing you want to do is scramble to diagnose a major traffic drop on D-day when things go mobile-first. Even if you don’t change anything now, cataloging the differences between site versions will help diagnose issues if/when the time comes.
It may also help you improve rankings right now.
I know an excellent team of SEOs for a major brand who, for several months, had missed the fact that the entire mobile site (millions of pages) had title tags that all read the same: "BrandName - Mobile Site." They found this error and contacted us to take a more complete look at the differences between the two sites. Here are some other things we found:
- One page type on the mobile site had an error at the template level that was causing rel=canonical tags to break, but only on mobile, and in a way that gave Google conflicting instructions, depending on whether they rendered the page as mobile or desktop. The same thing could have happened with any tag on the page, including robots meta directives. It could also happen with HTTP header responses.
- The mobile site has fewer than half the number of navigation links in the footer. How will this affect the flow of PageRank to key pages in a mobile-first world?
- The mobile site has far more related products on product detail pages. Again, how will this affect the flow of PageRank, or even crawl depth, when Google goes mobile-first?
- Important content was hidden on the mobile version. Google says this is OK as long as the user can drop down or tab over to read the content. But in this case, there was no way to do that. The content was in the code but hidden to mobile viewers, and there was no way of making it visible.
How to get started with a mobile/desktop parity audit
It sounds complicated, but really it boils down to a few simple steps:
- Crawl the site as a desktop user.
- Crawl the site as a mobile user.
- Combine the outputs (e.g. Mobile Title1, Desktop Title1, Mobile Canonical1, Desktop Canonical1)
- Look for errors and differences.
Screaming Frog provides the option to crawl the site as the Googlebot Mobile user-agent with a smartphone device. You may or may not need to render JavaScript.
You can run two crawls (mobile and desktop) with DeepCrawl as well. However, reports like "Mobile Word Count Mismatch" do not currently work on dynamic sites, even after two crawls.
The hack to get at the data you want is the same as with Screaming Frog: namely, running two crawls, exporting two reports, and using Vlookups in Excel to compare the columns side-by-side with URL being the unique identifier.
Here's a simplified example using an export from DeepCrawl:
As you can see in the screenshot above, blog category pages, like /category/cro/, are bigly different between device types, not just in how they appear, but also in what code and content gets delivered and rendered as source code. The bigliest difference is that post teasers disappear on mobile, which accounts for the word count disparity.
Word count is only one data point. You would want to look at many different things, discussed below, when performing a mobile/desktop parity audit.
For now, there does NOT appear to be an SEO tool on the market that crawls a dynamic site as both a desktop and mobile crawler, and then generates helpful reports about the differences between them.
But there's hope!
Our industry toolmakers are hot on the trail, and at this point I'd expect features to release in time for D-day.
Deep Crawl
We are working on Changed Metrics reports, which will automatically show you pages where the titles and descriptions have changed between crawls. This would serve to identify differences on dynamic sites when the user agent is changed. But for now, this can be done manually by downloading and merging the data from the two crawls and calculating the differences.
Moz Pro
Dr. Pete says they've talked about comparing desktop and mobile rankings to look for warning signs so Moz could alert customers of any potential issues. This would be a very helpful feature to augment the other analysis of on-page differences.
Sitebulb
When you select "mobile-friendly," Sitebulb is already crawling the whole site first, then choosing a sample of (up to) 100 pages, and then recrawling these with the JavaScript rendering crawler. This is what produces their "mobile-friendly" report.
They're thinking about doing the same to run these parity audit reports (mobile/desktop difference checker), which would be a big step forward for us SEOs. Because most of these disparity issues happen at the template/page type level, taking URLs from different crawl depths and sections of the site should allow this tool to alert SEOs of potential mismatches between content and page elements on those two versions of the single URL.
Screaming Frog
Aside from the oversensitive hash values, SF has no major advantage over DeepCrawl at the moment. In fact, DeepCrawl has some mobile difference finding features that, if they were to work on dynamic sites, would be leaps and bounds ahead of SF.
That said, the process shared below uses Screaming Frog because it's what I'm most familiar with.
Customizing the diff finders
One of my SEO heroes, David Sottimano, whipped out a customization of John Resig's Javascript Diff Algorithm to help automate some of the hard work involved in these desktop/mobile parity audits.
You can make a copy of it here. Follow the instructions in the Readme tab. Note: This is a work in progress and is an experimental tool, so have fun!
On using the hash values to quickly find disparities between crawls
As Lunametrics puts it in their excellent guide to Screaming Frog Tab Definitions, the hash value "is a count of the number of URLs that potentially contain duplicate content. This count filters for all duplicate pages found via the hash value. If two hash values match, the pages are exactly the same in content."
I tried doing this, but found it didn't work very well for my needs, for two reasons: I was unable to adjust the sensitivity, and if even one minor client-side JavaScript element changed, the page would get a new hash value.
When I asked DeepCrawl about it, I found out why:
The problem with using a hash to flag different content is that a lot of pages would be flagged as different, when they are essentially the same. A hash will be completely different if a single character changes.
Mobile parity audit process using Screaming Frog and Excel
Run two crawls
First, run two separate crawls. Settings for each are below. If you don't see a window or setting option, assume it was set to default.
1. Crawl 1: Desktop settings
Configurations ---> Spider
Your settings may vary (no pun intended), but here I was just looking for very basic things and wanted a fast crawl.
Configurations ---> HTTP Header ---> User-Agent
2. Start the first crawl
3. Save the crawl and run the exports
When finished, save it as desktop-crawl.seospider and run the Export All URLs report (big Export button, top left). Save the export as desktop-internal_all.csv.
4. Update user-agent settings for the second crawl
Hit the "Clear" button in Screaming Frog and change the User-Agent configuration to the following:
5. Start the second crawl
6. Save the crawl and run the exports
When finished, save it as mobile-crawl.seospider and run the Export All URLs report. Save the export as mobile-internal_all.csv.
Combine the exports in Excel
Import each CSV into a separate tab within a new Excel spreadsheet.
Create another tab and bring in the URLs from the Address column of each crawl tab. De-duplicate them.
Use Vlookups or other methods to pull in the respective data from each of the other tabs.
You'll end up with something like this:
A tab with a single row per URL, but with mobile and desktop columns for each datapoint. It helps with analysis if you can conditionally format/highlight instances where the desktop and mobile data does not match.
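If spreadsheets aren't your thing, the same side-by-side comparison can be scripted. Here's a rough sketch in JavaScript, assuming you've already parsed each CSV export into an array of row objects (with any CSV parser you like) and that the column names match your crawler's export:
// Flags URLs where the desktop and mobile crawls disagree on chosen fields.
function findMismatches(desktopRows, mobileRows, fields) {
  const mobileByUrl = new Map(mobileRows.map(row => [row['Address'], row]));
  const mismatches = [];
  for (const desktop of desktopRows) {
    const mobile = mobileByUrl.get(desktop['Address']);
    if (!mobile) continue; // URL only found in the desktop crawl
    for (const field of fields) {
      if (desktop[field] !== mobile[field]) {
        mismatches.push({
          url: desktop['Address'],
          field: field,
          desktop: desktop[field],
          mobile: mobile[field]
        });
      }
    }
  }
  return mismatches;
}

// Example call; the column names here are assumptions based on a typical export.
// findMismatches(desktopRows, mobileRows, ['Title 1', 'Meta Description 1', 'Canonical Link Element 1']);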
Errors & differences to look out for
Does the mobile site offer similar navigation options?
Believe it or not, you can usually fit the same amount of navigation links onto a mobile site without ruining the user experience when it's done right. Here are a ton of examples of major retail brands approaching it in different ways, from mega navs to sliders and hamburger menus (side note: now I’m craving White Castle).
HTTP Vary User-Agent response headers
This is one of those things that seems like it could produce more caching problems and headaches than solutions, but Google says to use it in cases where the content changes significantly between mobile and desktop versions on the same URL. My advice is to avoid using Vary User-Agent if the variations between versions of the site are minimal (e.g. simplified navigation, optimized images, streamlined layout, a few bells and whistles hidden). Only use it if entire paragraphs of content and other important elements are removed.
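For reference, here's a minimal sketch of where that header gets sent, using a plain Node server as a stand-in (your stack's equivalent will differ):
const http = require('http');

http.createServer((req, res) => {
  // Tells caches (and crawlers) that the response body varies by user agent.
  res.setHeader('Vary', 'User-Agent');
  res.setHeader('Content-Type', 'text/html');
  res.end('<html><!-- device-specific markup would be rendered here --></html>');
}).listen(3000);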
Internal linking disparities
If your desktop site has twenty footer links to top-selling products and categories using optimized anchor text, and your mobile site has five links going to pages like “Contact Us” and “About,” it would be good to document this so you know what to test should rankings drop after a mobile-first ranking algorithm shift.
Meta tags and directives
Do things like title tags, meta descriptions, robots meta directives, rel=canonical tags, and rel=next/prev tags match on both versions of the URL? Discovering this stuff now could avert disaster down the line.
Content length
There is no magic formula to how much content you should provide to each type of device, just as there is no magic formula for how much content you need to rank highly on Google (because all other things are never equal).
Imagine it's eight months from now and you're trying to diagnose what specific reasons are behind a post-mobile-first algorithm update traffic drop. Do the pages with less content on mobile correlate with lower rankings? Maybe. Maybe not, but I'd want to check on it.
Speed
Chances are, your mobile site will load faster. However, if this is not the case you definitely need to look into the issue. Lots of big client-side JavaScript changes could be the culprit.
Rendering
Sometimes JavaScript and other files necessary for the mobile render may be different from those needed for the desktop render. Thus, it's possible that one set of resources may be blocked in the robots.txt file while another is not. Make sure both versions fully render without any blocked resources.
Here’s what you need to do to be ready for a mobile-first world:
- Know IF there are major content, tag, and linking differences between the mobile and desktop versions of the site.
- If so, know WHAT those differences are, and spend time thinking about how that might affect rankings if mobile was the only version Google ever looked at.
- Fix any differences that need to be fixed immediately, such as broken or missing rel=canonicals, robots meta, or title tags.
- Keep everything else in mind for things to test after mobile-first arrives. If rankings drop, at least you’ll be prepared.
And here are some tools & links to help you get there:
- Mobile/Desktop Mismatch Reports by DeepCrawl
- Mobile-First Index Prep Tool by Merkle (Build on this please Merkle! Great start, but we need more!)
- Mobile-First Indexing, Google Webmaster Central
- Varvy Mobile Friendliness Tool
I suspect it won't be long before this type of audit is made unnecessary because we'll ONLY be worried about the mobile site. Until then, please comment below to share which differences you found, and how you chose to address them so we can all learn from each other.
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
from The Moz Blog http://ift.tt/2ftXbZM
via IFTTT
Monday, 25 September 2017
Moz's Brand-New SEO Learning Center Has Landed!
Posted by rachelgooodmanmoore
CHAPTER 1: A New Hope
A long time ago in a galaxy far, far away, marketers who wanted to learn about SEO were forced to mine deep into the caverns of Google search engine result pages to find the answers to even the most simple SEO questions.
Then, out of darkness came a new hope (with a mouthful of a name):
...the Learn SEO and Search Marketing hub!
The SEO and Search Marketing hub housed resources like the Beginner’s Guide to SEO and articles about popular SEO topics like meta descriptions, title tags, and robots.txt. Its purpose was to serve as a one-stop-shop for visitors looking to learn what SEO was all about and how to use it on their own sites.
The Learn SEO and Search marketing hub would go on to serve as a guiding light for searchers and site visitors looking to learn the ropes of SEO for many years to come.
CHAPTER 2: The Learning Hub Strikes Back
Since its inception in 2010, this hub happily served hundreds of thousands of Internet folk looking to learn the ropes of SEO and search marketing. But time took its toll on the hub. As marketing and search engine optimization grew increasingly complex, the Learning Hub lapsed into disrepair. While new content was periodically added, that content was hard to find and often intermingled with older, out-of-date resources. The Learning Hub became less of a hub and more of a list of resources… some of which were also lists of resources.
Offshoots like the Local Learning Center and Content Marketing Learning Center sprung up in an effort to tame the overgrown learning hub, but ‘twas all for naught: By autumn of 2016, Moz’s learning hub sites were a confusing nest of hard-to-navigate articles, guides, and 404s. Some articles were written for SEO experts and explained concepts in extensive, technical detail, while others were written for an audience with less extensive SEO knowledge. It was impossible to know which type of article you found yourself in before you wound up confused or discouraged.
What had once been a useful resource for marketers of all backgrounds was languishing in its age.
CHAPTER 3: The Return of the Learning Center
The vision behind the SEO and Search Marketing Hub had always been to educate SEOs and search marketers on the skills they needed to be successful in their jobs. While the site section continued to serve that purpose, somewhere along the way we started getting diminishing returns.
Our mission, then, was clear: Re-invent Moz’s learning resources with a new structure, new website, and new content.
As we set off on this mission, one thing was clear: The new Learning Center should serve as a home base for marketers and SEOs of all skill levels to learn what’s needed to excel in their work: from the fundamentals to expert-level content, from time-tested tenets of SEO success to cutting-edge tactics and tricks. If we weren’t able to accomplish this, our mission would all be for naught.
We also believed that a new Learning Center should make it easy for visitors of all skill levels and learning styles to find value: from those folks who want to read an article then dive into their work; to those who want to browse through libraries of focused SEO videos; to folks who want to learn from the experts in hands-on webinars.
So, that’s exactly what we built.
May we introduce to you the (drumroll, please) brand new, totally rebuilt SEO Learning Center!
Unlike the “list of lists” in the old Learn SEO and Search Marketing hub, the new Learning Center organizes content by topic.
Each topic has its own “topic hub.” There are eleven of these and they cover:
- Ranking and Visibility
- On-Site SEO
- Links and Link Building
- Local SEO
- Keywords and Keyword Research
- Crawling and Site Audits
- Analytics and Reporting
- Content Marketing
- Social Media and Influencer Marketing
- International SEO
- Mobile SEO
Each of the eleven topic hubs hosts a slew of hand-picked articles, videos, blog posts, webinars, Q&A posts, templates, and training classes designed to help you dive deeper into your chosen SEO topic.
All eleven of the hubs contain a “fundamentals” menu to help you wrap your brain around a topic, as well as a content feed with hundreds of resources to help you go even further. These feed resources are filterable by topic (for instance, content that’s about both ranking & visibility AND local SEO), SEO skill level (from beginner to advanced), and format.
And, if you’re brand new to a topic or not sure where to start, you can always find a link to the Beginner’s Guide to SEO right at the top of each page.
But we can only explain so much in words — check it out for yourself:
Visit the new SEO Learning Center!
CHAPTER 4: The Content Awakens
One of the main motivations behind rebuilding the Learning Center website was to make it easier for folks to find and move through a slew of educational content, be that a native Learning Center article, a blog post, a webinar, or otherwise. But it doesn’t do any good to make content easier to find if that content is totally out-of-date and unhelpful.
In addition to our mission to build a new Learning Center, we’ve also been quietly updating our existing articles to include the latest best practices, tactics, strategies, and resources. As part of this rewrite, we’ve also made an effort to keep each article as focused as possible around specifically one topic — a complete explanation of everything someone newer to the world of SEO needs to know about the given topic. What did that process look like in action? Check it out:
As of now we’ve updated 50+ articles, with more on the way!
Going forward, we’ll continue to iterate on the search experience within the new Learning Center. For example, while we always have our site search bar available, a Learning Center-specific search function would make finding articles even easier — and that’s just one of our plans for the future. Bigger projects include a complete update of the Beginner’s Guide to SEO (keep an eye on the blog for more news there, too), as well as our other introductory guides.
Help us, Moz-i Wan Community, you’re our only hope
We’ve already telekinetically moved mountains with this project, but the Learning Center is your resource — we’d love to hear what you’d like to see next, or if there’s anything really important you think we’ve missed. Head over, check it out, and tell us what you think in the comments!
Explore the new SEO Learning Center!
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
from The Moz Blog http://ift.tt/2yBZxdU
via IFTTT
5 things CSS developers wish they knew before they started
You can learn anything, but you can't learn everything 🙃
So accept that, and focus on what matters to you
— Una Kravets 👩🏻💻 (@Una) September 1, 2017
Una Kravets is absolutely right. In modern CSS development, there are so many things to learn. For someone starting out today, it's hard to know where to start.
Here is a list of things I wish I had known if I were to start all over again.
1. Don't underestimate CSS
It looks easy. After all, it's just a set of rules that selects an element and modifies it based on a set of properties and values.
CSS is that, but also so much more!
A successful CSS project requires the most impeccable architecture. Poorly written CSS is brittle and quickly becomes difficult to maintain. It's critical you learn how to organize your code in order to create maintainable structures with a long lifespan.
But even an excellent code base has to deal with the insane amount of devices, screen sizes, capabilities, and user preferences. Not to mention accessibility, internationalization, and browser support!
CSS is like a bear cub: cute and inoffensive but as he grows, he'll eat you alive.
- Learn to read code before writing and delivering code.
- It's your responsibility to stay up to date with best practices. MDN, W3C, A List Apart, and CSS-Tricks are your sources of truth.
- The web has no shape; each device is different. Embrace diversity and understand the environment we live in.
2. Share and participate
Sharing is so important! How I wish someone had told me that when I started. It took me ten years to understand the value of sharing; when I did, it completely changed how I viewed my work and how I collaborate with others.
You'll be a better developer if you surround yourself with good developers, so get involved in open source projects. The CSS community is full of kind and generous developers. The sooner the better.
Share everything you learn. The path is as important as the end result; even the tiniest things can make a difference to others.
- Learn Git. Git is the language of open source and you definitely want to be part of it.
- Get involved in an open source project.
- Share! Write a blog, documentation, or tweets; speak at meetups and conferences.
- Find an accountability partner, someone that will push you to share consistently.
3. Pick the right tools
Your code editor should be an extension of your mind.
It doesn't matter if you use Atom, VSCode or old school Vim; the better you shape your tool to your thought process, the better developer you'll become. You'll not only gain speed but also have an uninterrupted thought line that results in fluid ideas.
The terminal is your friend.
There is a lot more to being a CSS developer than actually writing CSS. Building your code, compiling, linting, formatting, and browser live refresh are only a small part of what you'll have to deal with on a daily basis.
- Research which IDE is best for you. There are high performance text editors like Vim or easier to use options like Atom or VSCode.
- Learn your way around the terminal and the CLI as soon as possible. The short book "Working the command line" is a great starting point.
4. Get to know the browser
The browser is not only your canvas, but also a powerful inspector to debug your code, test performance, and learn from others.
Learning how the browser renders your code is an eye-opening experience that will take your coding skills to the next level.
Every browser is different; get to know those differences and embrace them. Love them for what they are. (Yes, even IE.)
- Spend time looking around the inspector.
- You won't be able to own every single device, so get a BrowserStack or CrossBrowserTesting account; it's worth it.
- Install every browser you can and learn how each one of them renders your code.
5. Learn to write maintainable CSS
It'll probably take you years, but if there is just one single skill a CSS developer should have, it is to write maintainable structures.
This means knowing exactly how the cascade, the box model, and specificity work. Master CSS architecture models, learn their pros and cons, and understand how to implement them.
Remember that a modular architecture leads to independent modules, good performance, accessible structures, and responsive components (AKA: CSS happiness).
Learn about CSS architectures, keep up with the trends, and have an opinion!
Follow people that are paving the CSS roads like Harry Roberts, Una Kravets, Brad Frost, Ben Frain, Sara Soueidan, Chris Coyier, Eric Meyer, Jen Simmons, Rachel Andrew, and many many others.
The future looks bright
Modern CSS is amazing. Its future is even better. I love CSS and enjoy every second I spend coding.
If you need help, you can reach out to me or probably any of the CSS developers mentioned in this article. You might be surprised by how kind and generous the CSS community can be.
What do you think about my advice? What other advice would you give? Let me know what you think in the comments.
5 things CSS developers wish they knew before they started is a post from CSS-Tricks
from CSS-Tricks http://ift.tt/2fkFEiX
via IFTTT
Designing Websites for iPhone X
We've already covered "The Notch" and the options for dealing with it from an HTML and CSS perspective. There is a bit more detail available now, straight from the horse's mouth:
Safe area insets are not a replacement for margins.
... we want to specify that our padding should be the default padding or the safe area inset, whichever is greater. This can be achieved with the brand-new CSS functions min() and max() which will be available in a future Safari Technology Preview release.

@supports(padding: max(0px)) {
    .post {
        padding-left: max(12px, constant(safe-area-inset-left));
        padding-right: max(12px, constant(safe-area-inset-right));
    }
}
It is important to use @supports to feature-detect min and max, because they are not supported everywhere, and due to CSS’s treatment of invalid variables, to not specify a variable inside your @supports query.
Jeremy Keith's hot takes have been especially tasty, like:
You could add a bunch of proprietary CSS that Apple just pulled out of their ass.
Or you could make sure to set a background colour on your body element.

I recommend the latter.
And:
This could be a one-word article: don’t.
More specifically, don’t design websites for any specific device.
Although if this pushes support forward for min() and max() as generic functions, that's cool.
Direct Link to Article — Permalink
Designing Websites for iPhone X is a post from CSS-Tricks
from CSS-Tricks http://ift.tt/2wMqglP
via IFTTT
Sunday, 24 September 2017
Marvin Visions
Marvin Visions is a new typeface designed in the spirit of those letters you’d see in scruffy old 80's sci-fi books. This specimen site has a really beautiful layout that's worth exploring, and the write-up on the design process behind the work is worth reading.
Direct Link to Article — Permalink
Marvin Visions is a post from CSS-Tricks
from CSS-Tricks http://ift.tt/2xpCvsM
via IFTTT
Friday, 22 September 2017
The Importance Of JavaScript Abstractions When Working With Remote Data
Recently I had the experience of reviewing a project and assessing its scalability and maintainability. There were a few bad practices here and there, a few strange pieces of code with a lack of meaningful comments. Nothing uncommon for a relatively big (legacy) codebase, right?
However, there was something that I kept finding: a pattern that repeated itself throughout this codebase and a number of other projects I've looked through. They could all be summarized by a lack of abstraction. Ultimately, this was the cause of the maintenance difficulty.
In object-oriented programming, abstraction is one of the three central principles (along with encapsulation and inheritance). Abstraction is valuable for two key reasons:
- Abstraction hides certain details and only shows the essential features of the object. It tries to reduce and factor out details so that the developer can focus on a few concepts at a time. This approach improves the understandability as well as the maintainability of the code.
- Abstraction helps us to reduce code duplication. Abstraction provides ways of dealing with cross-cutting concerns and enables us to avoid tightly coupled code.
The lack of abstraction inevitably leads to problems with maintainability.
Often I've seen colleagues that want to take a step further towards more maintainable code, but they struggle to figure out and implement fundamental abstractions. Therefore, in this article, I'll share a few useful abstractions I use for the most common thing in the web world: working with remote data.
It's important to mention that, just like everything in the JavaScript world, there are tons of different approaches to implementing a similar concept. I'll share my approach, but feel free to upgrade it or tweak it based on your own needs. Or even better, improve it and share it in the comments below! ❤️
API Abstraction
I haven't had a project which doesn't use an external API to receive and send data in a while. That's usually one of the first and most fundamental abstractions I define. I try to store as much API-related configuration and settings there as possible, like:
- the API base url
- the request headers:
- the global error handling logic
const API = {
    /**
     * Simple service for generating different HTTP codes. Useful for
     * testing how your own scripts deal with varying responses.
     */
    url: 'http://httpstat.us/',

    /**
     * fetch() will only reject a promise if the user is offline,
     * or some unlikely networking error occurs, such as a DNS lookup failure.
     * However, there is a simple `ok` flag that indicates
     * whether an HTTP response's status code is in the successful range.
     */
    _handleError(_res) {
        return _res.ok ? _res : Promise.reject(_res.statusText);
    },

    /**
     * Get abstraction.
     * @return {Promise}
     */
    get(_endpoint) {
        return window.fetch(this.url + _endpoint, {
            method: 'GET',
            headers: new Headers({ 'Accept': 'application/json' })
        })
        .then(this._handleError)
        .catch( error => { throw new Error(error) });
    },

    /**
     * Post abstraction.
     * @return {Promise}
     */
    post(_endpoint, _body) {
        return window.fetch(this.url + _endpoint, {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: _body,
        })
        .then(this._handleError)
        .catch( error => { throw new Error(error) });
    }
};
In this module, we have two public methods, `get()` and `post()`, which both return a Promise. Everywhere we need to work with remote data, instead of directly calling the Fetch API via `window.fetch()`, we use our API module abstraction: `API.get()` or `API.post()`.
Therefore, the Fetch API is not tightly coupled with our code.
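For illustration, here is roughly how a call site could look with the abstraction in place. This is a minimal usage sketch against the initial version of the module above; the '200' endpoint is simply one of the status codes httpstat.us can serve.
// Somewhere in a feature module: no window.fetch() in sight.
API.get('200')
  .then(response => response.json())
  .then(data => console.log('Received:', data))
  .catch(error => console.error('Request failed:', error));

API.post('200', JSON.stringify({ hello: 'world' }))
  .then(response => console.log('Status:', response.status))
  .catch(error => console.error('Request failed:', error));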
Let's say down the road we read Zell Liew's comprehensive summary of using Fetch and we realize that our error handling is not as advanced as it could be. We want to check the content type before we proceed with our logic any further. No problem. We modify only our API module; the public methods `API.get()` and `API.post()` we use everywhere else keep working just fine.
const API = {
/* ... */
/**
* Check whether the content type is correct before you process it further.
*/
_handleContentType(_response) {
const contentType = _response.headers.get('content-type');
if (contentType && contentType.includes('application/json')) {
return _response.json();
}
return Promise.reject('Oops, we haven\'t got JSON!');
},
get(_endpoint) {
return window.fetch(this.url + _endpoint, {
method: 'GET',
headers: new Headers({
'Accept': 'application/json'
})
})
.then(this._handleError)
.then(this._handleContentType)
.catch( error => { throw new Error(error) })
},
post(_endpoint, _body) {
return window.fetch(this.url + _endpoint, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: _body
})
.then(this._handleError)
.then(this._handleContentType)
.catch( error => { throw new Error(error) })
}
};
Let's say we decide to switch to zlFetch, the library which Zell introduces that abstracts away the handling of the response (so you can skip straight to handling both your data and errors without worrying about the response itself). As long as our public methods return a Promise, no problem:
import zlFetch from 'zl-fetch';
const API = {
/* ... */
/**
* Get abstraction.
* @return {Promise}
*/
get(_endpoint) {
return zlFetch(this.url + _endpoint, {
method: 'GET'
})
.catch( error => { throw new Error(error) })
},
/**
* Post abstraction.
* @return {Promise}
*/
post(_endpoint, _body) {
return zlFetch(this.url + _endpoint, {
method: 'post',
body: _body
})
.catch( error => { throw new Error(error) });
}
};
Let's say down the road, for whatever reason, we decide to switch to jQuery Ajax for working with remote data. Not a huge deal once again, as long as our public methods return a Promise. As of jQuery 1.5, the jqXHR objects returned by `$.ajax()` implement the Promise interface, giving them all the properties, methods, and behavior of a Promise.
const API = {
/* ... */
/**
* Get abstraction.
* @return {Promise}
*/
get(_endpoint) {
return $.ajax({
method: 'GET',
url: this.url + _endpoint
});
},
/**
* Post abstraction.
* @return {Promise}
*/
post(_endpoint, _body) {
return $.ajax({
method: 'POST',
url: this.url + _endpoint,
data: _body
});
}
};
But even if jQuery's `$.ajax()` didn't return a Promise, you can always wrap anything in a `new Promise()`. All good. Maintainability++!
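As a minimal sketch of that last point, here is how a hypothetical callback-based client could be wrapped so the public method still returns a Promise. The `legacyClient` object is made up purely for illustration; it is not part of any library mentioned in this article.
const API = {
  /* ... */
  get(_endpoint) {
    // Wrap a callback-style client in a Promise so callers keep
    // working with the same .then()/.catch() interface as before.
    return new Promise((resolve, reject) => {
      legacyClient.request('GET', this.url + _endpoint, (error, response) => {
        if (error) {
          reject(error);
          return;
        }
        resolve(response);
      });
    });
  }
};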
Now let's abstract away the receiving and storing of the data locally.
Data Repository
Let's assume we need to show the current weather. The API returns the temperature, the feels-like temperature, the wind speed (m/s), the pressure (hPa), and the humidity (%). A common pattern, in order to keep the JSON response as slim as possible, is to compress attribute names down to their first letter. So here's what we receive from the server:
{
"t": 30,
"f": 32,
"w": 6.7,
"p": 1012,
"h": 38
}
We could go ahead and use `API.get('weather').t` and `API.get('weather').w` wherever we need them, but that doesn't look semantically awesome. I'm not a fan of the one-letter, not-much-context naming.
Additionally, let's say we don't use the humidity (`h`) and the feels-like temperature (`f`) anywhere. We don't need them. Actually, the server might return a lot of other information, but we might want to use only a couple of parameters. Not restricting what our weather module actually needs (and stores) could grow into a big overhead.
Enter repository-ish pattern abstraction!
import API from './api.js'; // Import it into your code however you like
const WeatherRepository = {
_normalizeData(currentWeather) {
// Take only what our app needs and nothing more.
const { t, w, p } = currentWeather;
return {
temperature: t,
windspeed: w,
pressure: p
};
},
/**
* Get current weather.
* @return {Promise}
*/
get(){
return API.get('/weather')
.then(this._normalizeData);
}
}
Now throughout our codebase we can use `WeatherRepository.get()` and access meaningful attributes like `.temperature` and `.windspeed`. Better!
Additionally, via `_normalizeData()` we expose only the parameters we need.
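A call site could then look something like this (a small usage sketch):
WeatherRepository.get()
  .then(weather => {
    console.log(`It is ${weather.temperature} degrees`);
    console.log(`Wind speed: ${weather.windspeed} m/s`);
  })
  .catch(error => console.error('Could not load the weather:', error));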
There is one more big benefit. Imagine we need to wire up our app with another weather API. Surprise, surprise, this one's response attribute names are different:
{
"temp": 30,
"feels": 32,
"wind": 6.7,
"press": 1012,
"hum": 38
}
No worries! Having our `WeatherRepository` abstraction, all we need to tweak is the `_normalizeData()` method! Not a single other module (or file).
const WeatherRepository = {
_normalizeData(currentWeather) {
// Take only what our app needs and nothing more.
const { temp, wind, press } = currentWeather;
return {
temperature: temp,
windspeed: wind,
pressure: press
};
},
/* ... */
};
The attribute names of the API response object are not tightly coupled with our codebase. Maintainability++!
Down the road, say we want to display the cached weather info if the currently fetched data is not older than 15 minutes. So, we choose to use `localStorage` to store the weather info, instead of doing an actual network request and calling the API each time `WeatherRepository.get()` is referenced.
As long as `WeatherRepository.get()` returns a Promise, we don't need to change the implementation in any other module. All the other modules which want to access the current weather don't (and shouldn't) care how the data is retrieved - whether it comes from the local storage, from an API request via the Fetch API, or via jQuery's `$.ajax()`. That's irrelevant. They only care about receiving it in the "agreed" format they implemented - a Promise which wraps the actual weather data.
So, we introduce two "private" methods: `_isDataUpToDate()`, to check if our data is older than 15 minutes or not, and `_storeData()`, to simply store our data in the browser storage.
const WeatherRepository = {
/* ... */
/**
 * Checks whether the data is up to date or not.
* @return {Boolean}
*/
_isDataUpToDate(_localStore) {
const isDataMissing =
_localStore === null || Object.keys(_localStore.data).length === 0;
if (isDataMissing) {
return false;
}
const { lastFetched } = _localStore;
const outOfDateAfter = 15 * 60 * 1000; // 15 minutes
const isDataUpToDate =
(new Date().valueOf() - lastFetched) < outOfDateAfter;
return isDataUpToDate;
},
_storeData(_weather) {
window.localStorage.setItem('weather', JSON.stringify({
lastFetched: new Date().valueOf(),
data: _weather
}));
// Return the data so the Promise chain in get() still resolves
// with the (normalized) weather data after it has been cached.
return _weather;
},
/**
* Get current weather.
* @return {Promise}
*/
get(){
const localData = JSON.parse( window.localStorage.getItem('weather') );
if (this._isDataUpToDate(localData)) {
// Resolve with the cached weather data itself, keeping the "agreed" format.
return new Promise(_resolve => _resolve(localData.data));
}
return API.get('/weather')
.then(this._normalizeData)
.then(this._storeData);
}
};
Finally, we tweak the `get()` method: in case the weather data is up to date, we wrap it in a Promise and return it. Otherwise, we issue an API call. Awesome!
There could be other use-cases, but I hope you got the idea. If a change requires you to tweak only one module - that's excellent! You designed the implementation in a maintainable way!
If you decide to use this repository-ish pattern, you might notice that it leads to some code and logic duplication, because all the data repositories (entities) you define in your project will probably have methods like `_isDataUpToDate()`, `_normalizeData()`, `_storeData()` and so on...
Since I use it heavily in my projects, I decided to create a library around this pattern that does exactly what I described in this article, and more!
Introducing SuperRepo
SuperRepo is a library that helps you implement best practices for working with and storing data on the client-side.
/**
* 1. Define where you want to store the data,
* in this example, in the LocalStorage.
*
* 2. Then - define a name of your data repository,
* it's used for the LocalStorage key.
*
* 3. Define when the data will get out of date.
*
* 4. Finally, define your data model, set custom attribute name
* for each response item, like we did above with `_normalizeData()`.
* In the example, server returns the params 't', 'w', 'p',
* we map them to 'temperature', 'windspeed', and 'pressure' instead.
*/
const WeatherRepository = new SuperRepo({
storage: 'LOCAL_STORAGE', // [1]
name: 'weather', // [2]
outOfDateAfter: 5 * 60 * 1000, // 5 min // [3]
request: () => API.get('weather'), // Function that returns a Promise
dataModel: { // [4]
temperature: 't',
windspeed: 'w',
pressure: 'p'
}
});
/**
* From here on, you can use the `.getData()` method to access your data.
 * It will first check if our data is outdated (based on the `outOfDateAfter`).
* If so - it will do a server request to get fresh data,
* otherwise - it will get it from the cache (Local Storage).
*/
WeatherRepository.getData().then( data => {
// Do something awesome.
console.log(`It is ${data.temperature} degrees`);
});
The library does the same things we implemented before:
- Gets data from the server (if it's missing or out of date on our side) or otherwise - gets it from the cache.
- Just like we did with `_normalizeData()`, the `dataModel` option applies a mapping to our raw data. This means:
  - Throughout our codebase, we will access meaningful and semantic attributes like `.temperature` and `.windspeed` instead of `.t` and `.w`.
  - Expose only the parameters you need and simply don't include any others.
  - If the response attribute names change (or you need to wire up another API with a different response structure), you only need to tweak it here - in only one place in your codebase.
Plus, a few additional improvements:
- Performance: if `WeatherRepository.getData()` is called multiple times from different parts of our app, only one server request is triggered.
- Scalability:
  - You can store the data in the `localStorage`, in the browser storage (if you're building a browser extension), or in a local variable (if you don't want to store data across browser sessions). See the options for the `storage` setting.
  - You can initiate an automatic data sync with `WeatherRepository.initSyncer()`. This will start a `setInterval` which counts down to the point when the data is out of date (based on the `outOfDateAfter` value) and then triggers a server request to get fresh data (see the sketch after this list). Sweet.
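Based on the behavior described above, background syncing could be wired up roughly like this. This is a sketch using only the methods mentioned in this article; consult the SuperRepo documentation for the exact API.
// Start the background sync: when the data goes out of date
// (per `outOfDateAfter`), fresh data is requested automatically.
WeatherRepository.initSyncer();

// Later reads resolve from the (now regularly refreshed) cache.
WeatherRepository.getData().then(data => {
  console.log(`Latest reading: ${data.temperature} degrees`);
});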
To use SuperRepo, install (or simply download) it with NPM or Bower:
npm install --save super-repo
Then, import it into your code via one of the 3 methods available:
- Static HTML:
<script src="/node_modules/super-repo/src/index.js"></script>
- Using ES6 Imports:
  // If transpiler is configured (Traceur Compiler, Babel, Rollup, Webpack)
  import SuperRepo from 'super-repo';
- … or using CommonJS Imports:
  // If module loader is configured (RequireJS, Browserify, Neuter)
  const SuperRepo = require('super-repo');
And finally, define your SuperRepositories :)
For advanced usage, read the documentation I wrote. Examples included!
Summary
The abstractions I described above could be one fundamental part of the architecture and software design of your app. As your experience grows, try to think about and apply similar concepts not only when working with remote data, but in other cases where they make sense, too.
When implementing a feature, always try to discuss change resilience, maintainability, and scalability with your team. Future you will thank you for that!
The Importance Of JavaScript Abstractions When Working With Remote Data is a post from CSS-Tricks
from CSS-Tricks http://ift.tt/2xtMCML
via IFTTT
10 Things that DO NOT (Directly) Affect Your Google Rankings - Whiteboard Friday
Posted by randfish
What do the age of your site, your headline H1/H2 preference, bounce rate, and shared hosting all have in common? You might've gotten a hint from the title: not a single one of them directly affects your Google rankings. In this rather comforting Whiteboard Friday, Rand lists out ten factors commonly thought to influence your rankings that Google simply doesn't care about.
Video Transcription
Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we're going to chat about things that do not affect your Google rankings.
So it turns out lots of people have this idea that anything and everything that you do with your website or on the web could have an impact. Well, some things have an indirect impact and maybe even a few of these do. I'll talk through those. But tons and tons of things that you do don't directly affect your Google rankings. So I'll try and walk through some of these that I've heard or seen questions about, especially in the recent past.
1. The age of your website.
First one, longstanding debate: the age of your website. Does Google care if you registered your site in 1998 or 2008 or 2016? No, they don't care at all. They only care the degree to which your content actually helps people and that you have links and authority signals and those kinds of things. Granted, it is true there's correlation going in this direction. If you started a site in 1998 and it's still going strong today, chances are good that you've built up lots of links and authority and equity and all these kinds of signals that Google does care about.
But maybe you've just had a very successful first two years, and you only registered your site in 2015, and you've built up all those same signals. Google is actually probably going to reward that site even more, because it's built up the same authority and influence in a very small period of time versus a much longer one.
2. Whether you do or don't use Google apps and services.
So people worry that, "Oh, wait a minute. Can't Google sort of monitor what's going on with my Google Analytics account and see all my data there and AdSense? What if they can look inside Gmail or Google Docs?"
Google, first off, the engineers who work on these products and the engineers who work on search, most of them would quit right that day if they discovered that Google was peering into your Gmail account to discover that you had been buying shady links or that you didn't look as authoritative as you really were on the web or these kinds of things. So don't fear the use of these or the decision not to use them will hurt or harm your rankings in Google web search in any way. It won't.
3. Likes, shares, plus-ones, tweet counts of your web pages.
So you have a Facebook counter on there, and it shows that you have 17,000 shares on that page. Wow, that's a lot of shares. Does Google care? No, they don't care at all. In fact, they're not even looking at that or using it. But what if it turns out that many of those people who shared it on Facebook also did other activities that resulted in lots of browser activity and search activity, click-through activity, increased branding, lower pogo-sticking rates, brand preference for you in the search results, and links? Well, Google does care about a lot of those things. So indirectly, this can have an impact. Directly, no. Should you buy 10,000 Facebook shares? No, you should not.
4. What about raw bounce rate or time on site?
Well, this is sort of an interesting one. Let's say you have a time on site of two minutes, and you look at your industry averages, your benchmarks, maybe via Google Analytics if you've opted in to sharing there, and you see that your industry benchmarks are actually lower than average. Is that going to hurt you in Google web search? Not necessarily. It could be the case that those visitors are coming from elsewhere. It could be the case that you are actually serving up a faster-loading site and you're getting people to the information that they need more quickly, and so their time on site is slightly lower or maybe even their bounce rate is higher.
But so long as pogo-sticking type of activity, people bouncing back to the search results and choosing a different result because you didn't actually answer their query, so long as that remains fine, you're not in trouble here. So raw bounce rate, raw time on site, I wouldn't worry too much about that.
5. The tech under your site's hood.
Are you using certain JavaScript libraries like Node or React, one is Facebook, one is Google. If you use Facebook's, does Google give you a hard time about it? No. Facebook might, due to patent issues, but anyway we won't worry about that. .NET or what if you're coding up things in raw HTML still? Just fine. It doesn't matter. If Google can crawl each of these URLs and see the unique content on there and the content that Google sees and the content visitors see is the same, they don't care what's being used under the hood to deliver that to the browser.
6. Having or not having a knowledge panel on the right-hand side of the search results.
Sometimes you get that knowledge panel, and it shows around the web and some information sometimes from Wikipedia. What about site links, where you search for your brand name and you get branded site links? The first few sets of results are all from your own website, and they're sort of indented. Does that impact your rankings? No, it does not. It doesn't impact your rankings for any other search query anyway.
It could be that showing up here and it probably is that showing up here means you're going to get a lot more of these clicks, a higher share of those clicks, and it's a good thing. But does this impact your rankings for some other totally unbranded query to your site? No, it doesn't at all. I wouldn't stress too much. Over time, sites tend to build up site links and knowledge panels as their brands become bigger and as they become better known and as they get more coverage around the web and online and offline. So this is not something to stress about.
7. What about using shared hosting or some of the inexpensive hosting options out there?
Well, directly, this is not going to affect you unless it hurts load speed or up time. If it doesn't hurt either of those things and they're just as good as they were before or as they would be if you were paying more or using solo hosting, you're just fine. Don't worry about it.
8. Use of defaults that Google already assumes.
So when Google crawls a site, when they come to a site, if you don't have a robots.txt file, or you have a robots.txt file but it doesn't include any exclusions, any disallows, or they reach a page and it has no meta robots tag, they're just going to assume that they get to crawl everything and that they should follow all the links.
Using things like the meta robots "index, follow" or using, on an individual link, a rel=follow inside the href tag, or in your robots.txt file specifying that Google can crawl everything, doesn't boost anything. They just assume all those things by default. Using them in these places, saying yes, you can do the default thing, doesn't give you any special benefit. It doesn't hurt you, but it gives you no benefit. Google just doesn't care.
9. Characters that you use as separators in your title element.
So the page title element sits in the header of a document, and it could be something like your brand name and then a separator and some words and phrases after it, or the other way around, words and phrases, separator, the brand name. Does it matter if that separator is the pipe bar or a hyphen or a colon or any other special character that you would like to use? No, Google does not care. You don't need to worry about it. This is a personal preference issue.
Now, maybe you've found that one of these characters has a slightly better click-through rate and preference than another one. If you've found that, great. We have not seen one broadly on the web. Some people will say they particularly like the pipe over the hyphen. I don't think it matters too much. I think it's up to you.
10. What about using headlines and the H1, H2, H3 tags?
Well, I've heard this said: If you put your headline inside an H2 rather than an H1, Google will consider it a little less important. No, that is definitely not true. In fact, I'm not even sure the degree to which Google cares at all whether you use H1s or H2s or H3s, or whether they just look at the content and they say, "Well, this one is big and at the top and bold. That must be the headline, and that's how we're going to treat it. This one is lower down and smaller. We're going to say that's probably a sub-header."
Whether you use an H5 or an H2 or an H3, that is your CSS on your site and up to you and your designers. It is still best practices in HTML to make sure that the headline, the biggest one is the H1. I would do that for design purposes and for having nice clean HTML and CSS, but I wouldn't stress about it from Google's perspective. If your designers tell you, "Hey, we can't get that headline in H1. We've got to use the H2 because of how our style sheets are formatted." Fine. No big deal. Don't stress.
Normally on Whiteboard Friday, we would end right here. But today, I'd like to ask. These 10 are only the tip of the iceberg. So if you have others that you've seen people say, "Oh, wait a minute, is this a Google ranking factor?" and you think to yourself, "Ah, jeez, no, that's not a ranking factor," go ahead and leave them in the comments. We'd love to see them there and chat through and list all the different non-Google ranking factors.
Thanks, everyone. See you again next week for another edition of Whiteboard Friday. Take care.
Video transcription by Speechpad.com
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
from The Moz Blog http://ift.tt/2xz3Bh4
via IFTTT
Thursday, 21 September 2017
Creating a Static API from a Repository
When I first started building websites, the proposition was quite basic: take content, which may or may not be stored in some form of database, and deliver it to people's browsers as HTML pages. Over the years, countless products used that simple model to offer all-in-one solutions for content management and delivery on the web.
Fast-forward a decade or so and developers are presented with a very different reality. With such a vast landscape of devices consuming digital content, it's now imperative to consider how content can be delivered not only to web browsers, but also to native mobile applications, IoT devices, and other mediums yet to come.
Even within the realms of the web browser, things have also changed: client-side applications are becoming more and more ubiquitous, with challenges to content delivery that didn't exist in traditional server-rendered pages.
The answer to these challenges almost invariably involves creating an API — a way of exposing data in such a way that it can be requested and manipulated by virtually any type of system, regardless of its underlying technology stack. Content represented in a universal format like JSON is fairly easy to pass around, from a mobile app to a server, from the server to a client-side application and pretty much anything else.
Embracing this API paradigm comes with its own set of challenges. Designing, building and deploying an API is not exactly straightforward, and can actually be a daunting task to less experienced developers or to front-enders that simply want to learn how to consume an API from their React/Angular/Vue/Etc applications without getting their hands dirty with database engines, authentication or data backups.
Back to Basics
I love the simplicity of static sites and I particularly like this new era of static site generators. The idea of a website using a group of flat files as a data store is also very appealing to me, which, combined with something like GitHub, means the possibility of having a dataset available as a public repository on a platform that allows anyone to easily contribute, with pull requests and issues being excellent tools for moderation and discussion.
Imagine having a site where people find a typo in an article and submit a pull request with the correction, or accepting submissions for new content with an open forum for discussion, where the community itself can filter and validate what ultimately gets published. To me, this is quite powerful.
I started toying with the idea of applying these principles to the process of building an API instead of a website — if programs like Jekyll or Hugo take a bunch of flat files and create HTML pages from them, could we build something to turn them into an API instead?
Static Data Stores
Let me show you two examples that I came across recently of GitHub repositories used as data stores, along with some thoughts on how they're structured.
The first example is the ESLint website, where every single ESLint rule is listed along with its options and associated examples of correct and incorrect code. Information for each rule is stored in a Markdown file annotated with a YAML front matter section. Storing the content in this human-friendly format makes it easy for people to author and maintain, but not very simple for other applications to consume programmatically.
The second example of a static data store is MDN's browser-compat-data, a compendium of browser compatibility information for CSS, JavaScript and other technologies. Data is stored as JSON files, which conversely to the ESLint case, are a breeze to consume programmatically but a pain for people to edit, as JSON is very strict and human errors can easily lead to malformed files.
There are also some limitations stemming from the way data is grouped together. ESLint has a file per rule, so there's no way to, say, get a list of all the rules specific to ES6, unless they chuck them all into the same file, which would be highly impractical. The same applies to the structure used by MDN.
A static site generator solves these two problems for normal websites — they take human-friendly files, like Markdown, and transform them into something tailored for other systems to consume, typically HTML. They also provide ways, through their template engines, to take the original files and group their rendered output in any way imaginable.
Similarly, the same concept applied to APIs — a static API generator? — would need to do the same, allowing developers to keep data in smaller files, using a format they're comfortable with for an easy editing process, and then process them in such a way that multiple endpoints with various levels of granularity can be created, transformed into a format like JSON.
Building a Static API Generator
Imagine an API with information about movies. Each title should have information about the runtime, budget, revenue, and popularity, and entries should be grouped by language, genre, and release year.
To represent this dataset as flat files, we could store each movie and its attributes as a text file, using YAML or any other data serialization language.
budget: 170000000
website: http://ift.tt/1nPjEaW
tmdbID: 118340
imdbID: tt2015381
popularity: 50.578093
revenue: 773328629
runtime: 121
tagline: All heroes start somewhere.
title: Guardians of the Galaxy
To group movies, we can store the files within language, genre and release year sub-directories, as shown below.
input/
├── english
│ ├── action
│ │ ├── 2014
│ │ │ └── guardians-of-the-galaxy.yaml
│ │ ├── 2015
│ │ │ ├── jurassic-world.yaml
│ │ │ └── mad-max-fury-road.yaml
│ │ ├── 2016
│ │ │ ├── deadpool.yaml
│ │ │ └── the-great-wall.yaml
│ │ └── 2017
│ │ ├── ghost-in-the-shell.yaml
│ │ ├── guardians-of-the-galaxy-vol-2.yaml
│ │ ├── king-arthur-legend-of-the-sword.yaml
│ │ ├── logan.yaml
│ │ └── the-fate-of-the-furious.yaml
│ └── horror
│ ├── 2016
│ │ └── split.yaml
│ └── 2017
│ ├── alien-covenant.yaml
│ └── get-out.yaml
└── portuguese
└── action
└── 2016
└── tropa-de-elite.yaml
Without writing a line of code, we can get something that is kind of an API (although not a very useful one) by simply serving the `input/` directory above using a web server. To get information about a movie, say, Guardians of the Galaxy, consumers would hit:
http://localhost/english/action/2014/guardians-of-the-galaxy.yaml
and get the contents of the YAML file.
Using this very crude concept as a starting point, we can build a tool — a static API generator — to process the data files in such a way that their output resembles the behavior and functionality of a typical API layer.
Format translation
The first issue with the solution above is that the format chosen to author the data files might not necessarily be the best format for the output. A human-friendly serialization format like YAML or TOML should make the authoring process easier and less error-prone, but the API consumers will probably expect something like XML or JSON.
Our static API generator can easily solve this by visiting each data file and transforming its contents to JSON, saving the result to a new file with the exact same path as the source, except for the parent directory (e.g. `output/` instead of `input/`), leaving the original untouched.
This results in a 1-to-1 mapping between source and output files. If we now served the `output/` directory, consumers could get data for Guardians of the Galaxy in JSON by hitting:
http://localhost/english/action/2014/guardians-of-the-galaxy.json
whilst still allowing editors to author files using YAML or other human-friendly formats.
{
"budget": 170000000,
"website": "http://ift.tt/1nPjEaW",
"tmdbID": 118340,
"imdbID": "tt2015381",
"popularity": 50.578093,
"revenue": 773328629,
"runtime": 121,
"tagline": "All heroes start somewhere.",
"title": "Guardians of the Galaxy"
}
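The transformation step itself doesn't need much code. Here's a rough sketch of what it could look like in Node.js, using the js-yaml package; the directory names and the module choice are assumptions for illustration, not the actual static-api-generator internals.
const fs = require('fs');
const path = require('path');
const yaml = require('js-yaml');

function convertTree(inputDir, outputDir) {
  fs.mkdirSync(outputDir, { recursive: true });

  fs.readdirSync(inputDir).forEach(entry => {
    const sourcePath = path.join(inputDir, entry);

    if (fs.statSync(sourcePath).isDirectory()) {
      // Recurse into sub-directories (language/genre/year).
      convertTree(sourcePath, path.join(outputDir, entry));
      return;
    }

    if (path.extname(entry) !== '.yaml') return;

    // Parse the human-friendly YAML and write machine-friendly JSON
    // to the same relative path under the output directory.
    const data = yaml.load(fs.readFileSync(sourcePath, 'utf8'));
    const targetPath = path.join(outputDir, `${path.basename(entry, '.yaml')}.json`);

    fs.writeFileSync(targetPath, JSON.stringify(data, null, 2));
  });
}

convertTree('input', 'output');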
Aggregating data
With consumers now able to consume entries in the best-suited format, let's look at creating endpoints where data from multiple entries are grouped together. For example, imagine an endpoint that lists all movies in a particular language and of a given genre.
The static API generator can generate this by visiting all subdirectories on the level being used to aggregate entries, and recursively saving their sub-trees to files placed at the root of said subdirectories. This would generate endpoints like:
http://localhost/english/action.json
which would allow consumers to list all action movies in English, or
http://localhost/english.json
to get all English movies.
{
"results": [
{
"budget": 150000000,
"website": "http://ift.tt/2am8UEO",
"tmdbID": 311324,
"imdbID": "tt2034800",
"popularity": 21.429666,
"revenue": 330642775,
"runtime": 103,
"tagline": "1700 years to build. 5500 miles long. What were they trying to keep out?",
"title": "The Great Wall"
},
{
"budget": 58000000,
"website": "http://ift.tt/19ZQQxf",
"tmdbID": 293660,
"imdbID": "tt1431045",
"popularity": 23.993667,
"revenue": 783112979,
"runtime": 108,
"tagline": "Witness the beginning of a happy ending",
"title": "Deadpool"
}
]
}
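An aggregation pass could be sketched along these lines: walk a directory that represents the aggregation level, collect every entry beneath it, and write the combined list next to it. Again, this is an illustrative sketch rather than the library's actual implementation.
const fs = require('fs');
const path = require('path');

// Recursively collect every parsed JSON entry under a directory.
function collectEntries(dir) {
  return fs.readdirSync(dir).reduce((entries, item) => {
    const itemPath = path.join(dir, item);

    if (fs.statSync(itemPath).isDirectory()) {
      return entries.concat(collectEntries(itemPath));
    }

    if (path.extname(item) === '.json') {
      entries.push(JSON.parse(fs.readFileSync(itemPath, 'utf8')));
    }

    return entries;
  }, []);
}

// Build an aggregate endpoint for each directory you want to expose,
// e.g. aggregateDirectories(['output/english', 'output/english/action'])
// writes output/english.json and output/english/action.json.
function aggregateDirectories(directories) {
  // Collect everything first, then write, so freshly written aggregate
  // files are never picked up as entries themselves.
  const aggregates = directories.map(dir => ({
    target: `${dir}.json`,
    payload: { results: collectEntries(dir) }
  }));

  aggregates.forEach(({ target, payload }) => {
    fs.writeFileSync(target, JSON.stringify(payload, null, 2));
  });
}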
To make things more interesting, we can also make it capable of generating an endpoint that aggregates entries from multiple diverging paths, like all movies released in a particular year. At first, it may seem like just another variation of the examples shown above, but it's not. The files corresponding to the movies released in any given year may be located at an indeterminate number of directories — for example, the movies from 2016 are located at `input/english/action/2016`, `input/english/horror/2016` and `input/portuguese/action/2016`.
We can make this possible by creating a snapshot of the data tree and manipulating it as necessary, changing the root of the tree depending on the aggregator level chosen, allowing us to have endpoints like http://localhost/2016.json.
Pagination
Just like with traditional APIs, it's important to have some control over the number of entries added to an endpoint — as our movie data grows, an endpoint listing all English movies would probably have thousands of entries, making the payload extremely large and consequently slow and expensive to transmit.
To fix that, we can define the maximum number of entries an endpoint can have, and every time the static API generator is about to write entries to a file, it divides them into batches and saves them to multiple files. If there are too many action movies in English to fit in:
http://localhost/english/action.json
we'd have
http://localhost/english/action-2.json
and so on.
For easier navigation, we can add a metadata block informing consumers of the total number of entries and pages, as well as the URL of the previous and next pages when applicable.
{
"results": [
{
"budget": 150000000,
"website": "http://ift.tt/2am8UEO",
"tmdbID": 311324,
"imdbID": "tt2034800",
"popularity": 21.429666,
"revenue": 330642775,
"runtime": 103,
"tagline": "1700 years to build. 5500 miles long. What were they trying to keep out?",
"title": "The Great Wall"
},
{
"budget": 58000000,
"website": "http://ift.tt/19ZQQxf",
"tmdbID": 293660,
"imdbID": "tt1431045",
"popularity": 23.993667,
"revenue": 783112979,
"runtime": 108,
"tagline": "Witness the beginning of a happy ending",
"title": "Deadpool"
}
],
"metadata": {
"itemsPerPage": 2,
"pages": 3,
"totalItems": 6,
"nextPage": "/english/action-3.json",
"previousPage": "/english/action.json"
}
}
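The pagination logic boils down to splitting the collected entries into fixed-size chunks and attaching the metadata block to each page. Here's a simplified sketch; the helper and its file-naming scheme are hypothetical, chosen to mirror the URLs above.
function paginate(results, itemsPerPage, basePath) {
  const pages = Math.ceil(results.length / itemsPerPage);

  return Array.from({ length: pages }, (_, index) => {
    const pageNumber = index + 1;
    // The first page keeps the plain name; later pages get a `-N` suffix.
    const pagePath = index === 0 ? `${basePath}.json` : `${basePath}-${pageNumber}.json`;

    return {
      path: pagePath,
      payload: {
        results: results.slice(index * itemsPerPage, (index + 1) * itemsPerPage),
        metadata: {
          itemsPerPage,
          pages,
          totalItems: results.length,
          nextPage: pageNumber < pages ? `${basePath}-${pageNumber + 1}.json` : null,
          previousPage: index === 0 ? null
            : index === 1 ? `${basePath}.json` : `${basePath}-${index}.json`
        }
      }
    };
  });
}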
Sorting
It's useful to be able to sort entries by any of their properties, like sorting movies by popularity in descending order. This is a trivial operation that takes place at the point of aggregating entries.
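In code, that's a single comparison applied to the aggregated entries before the page files are written (a sketch):
// Sort the aggregated entries by popularity, most popular first,
// before handing them to the pagination step.
const sortedResults = results.slice().sort((a, b) => b.popularity - a.popularity);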
Putting it all together
Having done all the specification, it was time to build the actual static API generator app. I decided to use Node.js and to publish it as an npm module so that anyone can take their data and get an API off the ground effortlessly. I called the module `static-api-generator` (original, right?).
To get started, create a new folder and place your data structure in a sub-directory (e.g. `input/` from earlier). Then initialize a blank project and install the dependencies.
npm init -y
npm install static-api-generator --save
The next step is to load the generator module and create an API. Start a blank file called `server.js` and add the following.
const API = require('static-api-generator')
const moviesApi = new API({
blueprint: 'source/:language/:genre/:year/:movie',
outputPath: 'output'
})
In the example above we start by defining the API blueprint, which is essentially naming the various levels so that the generator knows whether a directory represents a language or a genre just by looking at its depth. We also specify the directory where the generated files will be written to.
Next, we can start creating endpoints. For something basic, we can generate an endpoint for each movie. The following will give us endpoints like `/english/action/2016/deadpool.json`.
moviesApi.generate({
endpoints: ['movie']
})
We can aggregate data at any level. For example, we can generate additional endpoints for genres, like `/english/action.json`.
moviesApi.generate({
endpoints: ['genre', 'movie']
})
To aggregate entries from multiple diverging paths of the same parent, like all action movies regardless of their language, we can specify a new root for the data tree. This will give us endpoints like `/action.json`.
moviesApi.generate({
endpoints: ['genre', 'movie'],
root: 'genre'
})
By default, an endpoint for a given level will include information about all its sub-levels — for example, an endpoint for a genre will include information about languages, years and movies. But we can change that behavior and specify which levels to include and which ones to bypass.
The following will generate endpoints for genres with information about languages and movies, bypassing years altogether.
moviesApi.generate({
endpoints: ['genre'],
levels: ['language', 'movie'],
root: 'genre'
})
Finally, type `npm start` to generate the API and watch the files being written to the output directory. Your new API is ready to serve - enjoy!
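As an aside, `npm start` works here even without an explicit script, because npm falls back to running `node server.js` when no start script is defined. If you prefer to be explicit, you can add one to your `package.json`:
{
  "scripts": {
    "start": "node server.js"
  }
}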
Deployment
At this point, this API consists of a bunch of flat files on a local disk. How do we get it live? And how do we make the generation process described above part of the content management flow? Surely we can't ask editors to manually run this tool every time they want to make a change to the dataset.
GitHub Pages + Travis CI
If you're using a GitHub repository to host the data files, then GitHub Pages is a perfect contender to serve them. It works by taking all the files committed to a certain branch and making them accessible on a public URL, so if you take the API generated above and push the files to a `gh-pages` branch, you can access your API on http://ift.tt/2hkkcvl.
We can automate the process with a CI tool, like Travis. It can listen for changes on the branch where the source files will be kept (e.g. `master`), run the generator script and push the new set of files to `gh-pages`. This means that the API will automatically pick up any change to the dataset within a matter of seconds – not bad for a static API!
After signing up to Travis and connecting the repository, go to the Settings panel and scroll down to Environment Variables. Create a new variable called `GITHUB_TOKEN` and insert a GitHub Personal Access Token with write access to the repository – don't worry, the token will be safe.
Finally, create a file named `.travis.yml` on the root of the repository with the following.
language: node_js
node_js:
- "7"
script: npm start
deploy:
provider: pages
skip_cleanup: true
github_token: $GITHUB_TOKEN
on:
branch: master
local_dir: "output"
And that's it. To see if it works, commit a new file to the `master` branch and watch Travis build and publish your API. Ah, GitHub Pages has full support for CORS, so consuming the API from a front-end application using Ajax requests will work like a breeze.
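As a quick sketch of what consuming the published API could look like from the browser (the GitHub Pages URL below is a placeholder for your own username and repository):
// CORS is enabled on GitHub Pages, so a plain fetch() from any origin works.
fetch('https://your-username.github.io/your-repo/english/action.json')
  .then(response => response.json())
  .then(({ results }) => {
    results.forEach(movie => console.log(movie.title));
  })
  .catch(error => console.error('Could not load the movies:', error));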
You can check out the demo repository for my Movies API and see some of the endpoints in action:
- Movie endpoint (Deadpool)
- List of genres with languages and years
- List of languages and years by genre (Action)
- Full list of languages with genres, years and movies
Going full circle with Staticman
Perhaps the most blatant consequence of using a static API is that it's inherently read-only – we can't simply set up a POST endpoint to accept data for new movies if there's no logic on the server to process it. If this is a strong requirement for your API, that's a sign that a static approach probably isn't the best choice for your project, much in the same way that choosing Jekyll or Hugo for a site with high levels of user-generated content is probably not ideal.
But if you just need some basic form of accepting user data, or you're feeling wild and want to go full throttle on this static API adventure, there's something for you. Last year, I created a project called Staticman, which tries to solve the exact problem of adding user-generated content to static sites.
It consists of a server that receives POST requests, submitted from a plain form or sent as a JSON payload via Ajax, and pushes data as flat files to a GitHub repository. For every submission, a pull request will be created for your approval (or the files will be committed directly if you disable moderation).
You can configure the fields it accepts, add validation, spam protection and also choose the format of the generated files, like JSON or YAML.
This is perfect for our static API setup, as it allows us to create a user-facing form or a basic CMS interface where new genres or movies can be added. When a form is submitted with a new entry, we'll have:
- Staticman receives the data, writes it to a file and creates a pull request
- As the pull request is merged, the branch with the source files (`master`) will be updated
- Travis detects the update and triggers a new build of the API
- The updated files will be pushed to the public branch (`gh-pages`)
- The live API now reflects the submitted entry.
Parting thoughts
To be clear, this article does not attempt to revolutionize the way production APIs are built. More than anything, it takes the existing and ever-popular concept of statically-generated sites and translates them to the context of APIs, hopefully keeping the simplicity and robustness associated with the paradigm.
In times where APIs are such fundamental pieces of any modern digital product, I'm hoping this tool can democratize the process of designing, building and deploying them, and eliminate the entry barrier for less experienced developers.
The concept could be extended even further, introducing concepts like custom generated fields, which are automatically populated by the generator based on user-defined logic that takes into account not only the entry being created, but also the dataset as a whole – for example, imagine a `rank` field for movies where a numeric value is computed by comparing the `popularity` value of an entry against the global average.
If you decide to use this approach and have any feedback/issues to report, or even better, if you actually build something with it, I'd love to hear from you!
Creating a Static API from a Repository is a post from CSS-Tricks
from CSS-Tricks http://ift.tt/2hkAojz
via IFTTT