Thursday 31 May 2018

Forms, Auth and Serverless Functions on Gatsby and Netlify

Customize payment solutions with our enhanced platform

(This is a sponsored post.)

We’ve upped our game by using developers’ feedback to improve the Authorize.Net payment platform. Check out our new, streamlined API, better sample code and SDKs, and use them to provide your merchants with a secure, scalable payment solution. You’ll see that it’s a seamless and efficient way to make sure you and your merchants get paid!

Start playing

Direct Link to Article

The post Customize payment solutions with our enhanced platform appeared first on CSS-Tricks.



from CSS-Tricks https://synd.co/2KOQvzB
via IFTTT

Wednesday 30 May 2018

What does the ‘h’ stand for in Vue’s render method?

If you’ve been working with Vue for a while, you may have come across this way of rendering your app — this is the default in the latest version of the CLI, in main.js:

new Vue({
 render: h => h(App)
}).$mount('#app')

Or, if you’re using a render function, possibly to take advantage of JSX:

Vue.component('jsx-example', {
  render (h) {
    return <div id="foo">bar</div>
  }
})

You may be wondering: what does that h do? What does it stand for? The h stands for hyperscript. It’s a riff on HTML, which stands for Hypertext Markup Language: since we’re dealing with a script rather than markup, it’s become convention in virtual DOM implementations to use this substitution. The term is defined in the documentation of other frameworks as well. Here it is, for example, in Cycle.js.

In this issue, Evan You explains that:

Hyperscript itself stands for "script that generates HTML structures"

This is shortened to h because it’s easier to type. He also describes it a bit more in his Advanced Vue workshop on Frontend Masters.
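To make “script that generates HTML structures” concrete, here’s a toy sketch (this is not Vue’s implementation; the function name and string output are assumed purely for illustration):

```javascript
// A toy hyperscript function: turns (tag, attributes, children)
// into an HTML string. Vue's real h() builds virtual DOM nodes
// instead of strings, but the spirit is the same.
function h(tag, attrs = {}, ...children) {
  const attrString = Object.entries(attrs)
    .map(([key, value]) => ` ${key}="${value}"`)
    .join('');
  return `<${tag}${attrString}>${children.join('')}</${tag}>`;
}

// Nested calls produce nested markup:
const html = h('div', { id: 'foo' }, h('span', {}, 'bar'));
console.log(html); // <div id="foo"><span>bar</span></div>
```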

Really, you can think of it as being short for createElement. Here would be the long form:

render: function (createElement) {
  return createElement(App);
}

If we replace that with an h, then we first arrive at:

render: function (h) {
  return h(App);
}

...which can then be shortened with the use of ES6 to:

render: h => h(App)

The Vue version takes up to three arguments:

render(h) {
  return h('div', {}, [...])
}
  1. The first is the type of the element (here shown as div).
  2. The second is the data object. We nest some fields here, including: props, attrs, DOM props, class, and style.
  3. The third is an array of child nodes. We’ll then have nested calls and eventually return a tree of virtual DOM nodes.
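As a rough illustration of those three arguments, here’s a toy createElement that just captures them as a plain object (Vue’s real VNodes carry more fields; this shape is assumed only for intuition):

```javascript
// Toy createElement: records the three arguments as a plain object.
// Nested calls in the children array build up the tree.
function createElement(type, data = {}, children = []) {
  return { type, data, children };
}

const tree = createElement('div', { attrs: { id: 'app' } }, [
  createElement('h1', { class: 'title' }, ['Hello']),
  createElement('p', {}, ['World'])
]);

console.log(tree.type);                    // div
console.log(tree.children.length);         // 2
console.log(tree.children[0].children[0]); // Hello
```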

There’s more in-depth information in the Vue Guide here.

The name hyperscript may be confusing to some people, given that hyperscript is also the name of a library (one that isn’t updated much these days) with a small ecosystem of its own. In this case, we’re not talking about that particular implementation.

Hope that clears things up for those who are curious!

The post What does the ‘h’ stand for in Vue’s render method? appeared first on CSS-Tricks.



from CSS-Tricks https://ift.tt/2J4We3y
via IFTTT

Getting Real with Retail: An Agency’s Guide to Inspiring In-Store Excellence

Posted by MiriamEllis

A screenshot of a negative 1-star review citing poor customer service

No marketing agency staffer feels good when they see a retail client getting reviews like this on the web.

But we can find out why they’re happening, and if we’re committed to being honest with clients and armed with an actionable strategy for in-store improvement, we just might be able to catalyze a turnaround.

In this post, I’ll highlight some advice from an internal letter at Tesla that I feel is highly applicable to the retail sector. I’d also like to help your agency combat the retail blues headlining the news these days, with big brands downsizing, liquidating, and closing up shop. I’m going to share a printable infographic with statistics that are almost guaranteed to generate the client positivity so essential to making real change. And, for some further inspiration, I’d like to offer a couple of anecdotes involving an Igloo cooler, a monk, reindeer moss, and reviews.

The genuine pain of retail gone wrong: The elusive cooler, "Corporate," and the man who could hardly stand

“Hi there,” I greeted the staffer at the customer service counter of the big department store. “Where would I find a small cooler?”

“We don’t have any,” he mumbled.

“You don’t have any coolers? Like, an Igloo cooler to take on a picnic to keep things cold?”

“Maybe over there,” he waved his hand in unconcern.

And I stood there for a minute, expecting him to actually figure this out for me, maybe even guide me to the appropriate aisle, or ask a manager to assist my transaction, if necessary. But in his silence, I walked away.

“Hi there,” I tried with more specificity at the locally owned general store the next day. “Where would I find something like a small Igloo cooler to keep things cold on a picnic?”

“I don’t know,” the staffer replied.

“Oh…” I said, uncomfortably.

“It could be upstairs somewhere,” he hazarded, and left me to quest for the second floor, which appeared to be a possibly-non-code-compliant catch-all attic for random merchandise, where I applied to a second dimly illuminated employee who told me I should probably go downstairs and escalate my question to someone else.

And apparently escalation was necessary, for on the third try, a very tall man was able to lift his gaze to some coolers on a top shelf… within clear view of the checkout counter where the whole thing began.

Why do we all have experiences like this?

“Corporate tells us what to carry” is the almost defensive-sounding refrain I have now received from three employees at two different Whole Foods Markets when asking if they could special order items for me since the Amazon buyout.

Because, you know, before they were Amazon-Whole Foods, staffers would gladly offer to procure anything they didn’t have in stock. Now, if they stop carrying that Scandinavian vitamin D-3 made from the moss eaten by reindeer and I’ve got to have it because I don’t want the kind made by irradiating sheep wool, I’d have to special order an entire case of it to get my hands on a bottle. Because, you know, “Corporate.”

Why does the distance between corporate and customer make me feel like the store I’m standing in, and all of its employees, are powerless? Why am I, the customer, left feeling powerless?

So maybe my search for a cooler, my worries about access to reindeer moss, and the laughable customer service I’ve experienced don’t signal “genuine pain.” But this does:

Screenshot of a one-star review: "The pharmacy shows absolutely no concern for the sick, aged and disabled from what I see and experienced. There's 2 lines for drops and pick up, which is fine, but keep in mind those using the pharmacy are sick either acute or chronic. No one wants to be there. The lines are often long with the phone ringing off the hook, so very understaffed. There are no chairs near the line to sit even if someone is in so much pain they can hardly stand, waiting area not nearby. If you have to drop and pick you have to wait in 2 separate lines. They won't inform the other window even though they are just feet away from each other. I saw one poor man wait 4 people deep, leg bandaged, leaning on a cart to be able to stand, but he was in the wrong line and was told to go to the other line. They could have easily taken the script, asked him to wait in the waiting area, walk the script 5 feet, and call him when it was his turn, but this fella who could barely stand had to wait in another line, 4 people deep. I was in the correct line, pick up. I am a disabled senior with cancer and chronic pain. However, I had a new Rx insurance card, beginning of the year. I was told that to wait in the other line, too! I was in the correct line, but the staff was so poorly trained she couldn't enter a few new numbers. This stuff happens repeatedly there. I've written and called the home office who sound so concerned but nothing changes. I tried to talk to manager, who naturally was "unavailable" but his underling made it clear their process was more important than the customers. All they have to do to fix the problem is provide nearby sitting or ask the customer to wait in the waiting area where there are chairs and take care of the problem behind the counter, but they would rather treat the sick, injured and old like garbage than make a small change that would make a big difference to their customers. 
Although they are very close I am looking for a pharmacy who actually cares to transfer my scripts, because I feel they are so uncaring and disinterested although it's their job to help the sick."

This is genuine pain. When customer service fails to the point that badly treated patrons are further distressed by the sight of fellow shoppers meeting the same fate, the cause is likely built into company structure. And your marketing agency is looking at a bona fide reputation crisis that could presage lawsuits, lasting reputation damage, and even closure for your valuable clients.

When you encounter customer service disasters, it raises questions like:

  1. Could no one in my situation access a list of current store inventory, or, barring that, seek out merchandise with me instead of risking the loss of a sale?
  2. Could no one offer to let “corporate” know that I’m dissatisfied with a “customer service policy” that would require me to spend $225 to buy a whole case of vitamins? Why am I being treated like a warehouse instead of a person?
  3. Could no one at the pharmacy see a man with a leg wound about to fall over, grab a folding chair for him, and keep him safe, instead of risking a lawsuit?

I think a “no” answer to all three questions proceeds from definite causes. And I think Tesla CEO Elon Musk had such causes in mind when he recently penned a letter to his own employees.

“It must be okay for people to talk directly and just make the right thing happen.”

“Communication should travel via the shortest path necessary to get the job done, not through the 'chain of command.' Any manager who attempts to enforce chain of command communication will soon find themselves working elsewhere.

A major source of issues is poor communication between depts. The way to solve this is allow free flow of information between all levels. If, in order to get something done between depts, an individual contributor has to talk to their manager, who talks to a director, who talks to a VP, who talks to another VP, who talks to a director, who talks to a manager, who talks to someone doing the actual work, then super dumb things will happen. It must be ok for people to talk directly and just make the right thing happen.

In general, always pick common sense as your guide. If following a 'company rule' is obviously ridiculous in a particular situation, such that it would make for a great Dilbert cartoon, then the rule should change.”
- Elon Musk, CEO, Tesla

Let’s parlay this uncommon advice into retail. If it’s everyone’s job to access a free flow of information, use common sense, make the right thing happen, and change rules that don’t make sense, then:

  1. Inventory is known by all store staff, and my cooler can be promptly located by any employee, rather than workers appearing helpless.
  2. Employees have the power to push back and insist that, because customers still expect to be able to special order merchandise, a specific store location will maintain this service rather than disappoint consumers.
  3. Pharmacists can recognize that patrons are often quite ill and can immediately place some chairs near the pharmacy counter, rather than close their eyes to suffering.

“But wait,” retailers may say. “How can I trust that an employee’s idea of ‘common sense’ is reliable?”

Let’s ask a monk for the answer.

“He took the time...”

I recently had the pleasure of listening to a talk given by a monk who was defining what it meant to be a good leader. He hearkened back to his young days, and to the man who was then the leader of his community.

“He was a busy man, but he took the time to get to know each of us one-on-one, and to be sure that we knew him. He set an example for me, and I watched him,” the monk explained.

Most monasteries function within a set of established rules, many of which are centuries old. You can think of these guidelines as a sort of policy. In certain communities, it’s perfectly acceptable that some of the members live apart as hermits most of the year, only breaking their meditative existence by checking in with the larger group on important holidays to share what they’ve been working on solo. In others, every hour has its appointed task, from prayer, to farming, to feeding people, to engaging in social activism.

The point is that everyone within a given community knows the basic guidelines, because at some point, they’ve been well-communicated. Beyond that, it is up to the individual to see whether they can happily live out their personal expression within the policy.

It’s a lot like retail can be, when done right. And it hinges on the question:

“Has culture been well-enough communicated to every employee so that he or she can act like the CEO of the company would in a wide variety of circumstances?”

Or to put it another way, would Amazon owner Jeff Bezos be powerless to get me my vitamins?

The most accessible modern benchmark of good customer service — the online review — is what tells the public whether the CEO has “set the example.” Reviews tell whether time has been taken to acquaint every staffer with the business that employs them, preparing them to fit their own personal expression within the company’s vision of serving the public.

An employee who is able to recognize that an injured patron needs a seat while awaiting his prescription should be empowered to act immediately, knowing that the larger company supports treating people well. If poor training, burdensome chains of command, or failure to share brand culture are obstacles to common-sense personal initiative, the problem must be traced back to the CEO and corrected, starting from there.

And, of course, should a random staffer’s personal expression genuinely include an insurmountable disregard for other people, they can always be told it’s time to leave the monastery...

For marketing agencies, opportunity knocks

So your agency is auditing a valuable incoming client, and their negative reviews citing dirty premises, broken fixtures, food poisoning, slowness, rudeness, cluelessness, and lack of apparent concern make you say to yourself,

“Well, I was hoping we could clean up the bad data on the local business listings for this enterprise, but unless they clean up their customer service at 150 of their worst-rated locations, how much ROI are we really going to be able to deliver? What’s going on at these places?”

Let’s make no bones about this: Your honesty at this critical juncture could mean the difference between survival and closure for the brand.

You need to bring it home to the most senior-level person you can reach in the organization that no amount of honest marketing can cover up poor customer service in the era of online reviews. If the brand has fallen to the level of the pharmacy I’ve cited, structural change is an absolute necessity. Ask the tough questions, and ask for an explanation of the bad reviews.

“But I’m just a digital marketer,” you may think. “I’m not in charge of whatever happens offline.”

Think again.

Headlines in retail land are horrid right now.

If you were in a retail brand’s C-suite, swallowing these predictions of doom with your daily breakfast, wouldn’t you be looking for inspiration from anyone with genuine insight? And if a marketing agency made it its business to confront the truth while also bearing some better news, wouldn’t you be ready to listen?

What is the truth? That poor reviews are symptoms smart doctors can use for diagnosis of structural problems.
What is the better news? The retail scenario is not nearly as dire as it may seem.

Why let hierarchy and traditional roles hold your agency back? Tesla wouldn’t. Why not roll up your sleeves and step into the in-store world? Organize, and then translate, the narrative that negative reviews are telling about the structural problems behind the brand’s dangerously bad customer service. Then be prepared to counter corporate inertia born of fear with some eye-opening statistics.

Print and share some good retail tidings

Local SEO infographic

Print your own copy of this infographic to share with clients.

At Moz, we’re working with enterprises to get their basic location data into shape so that they are ready to win their share of the predicted $1.4 trillion in mobile-influenced local sales by 2021, and your agency can use these same numbers to combat indecision and apathy for your retail clients. Look at that second statistic again: 90% of purchases are still happening in physical stores. At Moz, we ask our customers if their data is ready for this. Your agency can ask its clients if their reputations are ready for this, if their employees have what they need to earn the brand’s piece of that 90% action. Great online data + great in-store service = table stakes for retail success.

While I won’t play down the unease that major brand retail closures are understandably causing, I hope I’ve given you the tools to fight the “retail disaster” narrative. 85% more mobile users are searching for things like “Where do I buy that reindeer moss vitamin D3?” than they were just three years ago. So long as retail staff are ready to deliver, I see no “apocalypse” here.

Investing time

So, your agency has put in the time to identify a reputation problem severe enough that it appears to be founded in structural deficiencies or policies. Perhaps you’ve used some ORM software to do review sentiment analysis to discover which of your client’s locations are hurting worst, or perhaps you’ve done an initial audit manually. You've communicated the bad news to the most senior-level person you can reach at the company, and you've also shared the statistics that make change seem very worthwhile, begging for a new commitment to in-store excellence. What happens next?

While there are going to be nuances specific to every brand, my bet is that the steps will look like this for most businesses:

  1. C-suites need to invest time in creating a policy which a) abundantly communicates company culture, b) expresses trust in employee initiative, and c) dispenses with needless “chain of command” steps, while d) ensuring that every public facing staffer receives full and ongoing training. A recent study says 62% of new retail hires receive less than 10 hours of training. I’d call even these worrisome numbers optimistic. I worked at 5 retail jobs in my early youth. I’d estimate that I received no more than 1 hour of training at any of them.
  2. Because a chain of command can’t realistically be completely dispensed with in a large organization, store managers must then be allowed the time to communicate the culture, encourage employees to use common sense, define what “common sense” does and doesn’t look like to the company, and, finally, offer essential training.
  3. Employees at every level must be given the time to observe how happy or unhappy customers appear to be at their location, and they must be taught that their observations are of inestimable value to the brand. If an employee suggests a solution to a common consumer complaint, this should be recognized and rewarded.
  4. Finally, customers must be given the time to air their grievances at the time of service, in-person, with accessible, responsive staff. The word “corporate” need never come into most of these conversations unless a major claim is involved. Given that it may cost as much as 7x more to replace an unhappy customer than to keep an existing one happy, employees should be empowered to do business graciously and resolve complaints, in most cases, without escalation.

Benjamin Franklin may or may not have said that “time is money.” While the adage rings true in business, reviews have taught me the flip side — that a lack of time equals less money. Every negative review that cites helpless employees and poor service sounds to my marketing ears like a pocketful of silver dollars rolling down a drain.

The monk says good leaders make the time to communicate culture one-on-one.

Tesla says rules should change if they’re ridiculous.

Chairs should be offered to sick people… where common sense is applied.

Reviews can read like this:

Screenshot of a positive 5-star review: "Had personal attention of three Tesla employees at the same time. They let me sit in both the model cars they had for quite some time and let me freely fiddle and figure out all the gizmos of the car. Super friendly and answered all my questions. The sales staff did not pressure me to buy or anything, but only casually mentioned the price and test drive opportunities, which is the perfect touch for a car company like Tesla."

And digital marketers have never known a time quite like this to have the ear of retail, maybe stepping beyond traditional boundaries into the fray of the real world. Maybe making a fundamental difference.


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!



from The Moz Blog https://ift.tt/2sqwV4W
via IFTTT

Tuesday 29 May 2018

Managing State in React With Unstated

As your application becomes more complex, the management of state can become tedious. A component's state is meant to be self-contained, which makes sharing state across multiple components a headache. Redux is usually the go-to library for managing state in React; however, depending on how complex your application is, you might not need it.

Unstated is an alternative that provides you with the functionality to manage state across multiple components with a Container class and Provider and Subscribe components. Let's see Unstated in action by creating a simple counter and then look at a more advanced to-do application.

Using Unstated to Create a Counter

The code for the counter we’re making is available on GitHub:

View Repo

You can add Unstated to your application with Yarn:

yarn add unstated

Container

The container extends Unstated's Container class. It is to be used only for state management. This is where the initial state will be initialized and the call to setState() will happen.

import { Container } from 'unstated'

class CounterContainer extends Container {
  state = {
    count: 0
  }

  increment = () => {
    this.setState({ count: this.state.count + 1 })
  }

  decrement = () => {
    this.setState({ count: this.state.count - 1 })
  }
}

export default CounterContainer

So far, we’ve defined the Container (CounterContainer), set its starting count state to zero, and defined methods for incrementing and decrementing that count by one.

You might be wondering why we haven’t imported React at this point. There is no need to import it into the Container since we will not be rendering JSX at all.

Event emitters will be used in order to call setState() and cause the components to re-render. The components that will make use of this container will have to subscribe to it.
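As a sketch of that subscription mechanism (a simplified imitation, not unstated's actual source), a container that notifies listeners on every setState might look like this:

```javascript
// Simplified sketch of an unstated-style Container: setState merges
// the new state in, then emits to every subscribed listener.
class Container {
  state = {};
  listeners = [];

  setState(update) {
    const patch = typeof update === 'function' ? update(this.state) : update;
    this.state = { ...this.state, ...patch };
    this.listeners.forEach(listener => listener());
  }

  subscribe(listener) {
    this.listeners.push(listener);
  }
}

class CounterContainer extends Container {
  state = { count: 0 };
  increment = () => this.setState({ count: this.state.count + 1 });
}

const counter = new CounterContainer();
let renders = 0;
counter.subscribe(() => renders++); // stands in for a React re-render
counter.increment();
counter.increment();
console.log(counter.state.count, renders); // 2 2
```

In the real library, the Subscribe component performs the `subscribe` call for you and re-renders its children when notified.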

Subscribe

The Subscribe component is used to plug the state into the components that need it. From here, we will be able to call the increment and decrement methods, which will update the state of the application and cause the subscribed component to re-render with the correct count. These methods will be triggered by a couple of buttons that contain events listeners to add or subtract to the count, respectively.

import React from 'react'
import { Subscribe } from 'unstated'

import CounterContainer from './containers/counter'

const Counter = () => {
  return (
    <Subscribe to={[CounterContainer]}>
      {counterContainer => (
        <div>
          <div>
            {/* The current count value */}
            Count: { counterContainer.state.count }
          </div>
          {/* This button will add to the count */}
          <button onClick={counterContainer.increment}>Increment</button>
          {/* This button will subtract from the count */}
          <button onClick={counterContainer.decrement}>Decrement</button>
        </div>
      )}
    </Subscribe>
  )
}

export default Counter

The Subscribe component receives CounterContainer in an array via its to prop. This means a single Subscribe component can subscribe to more than one container; each additional container is just another entry in that array.

The child of the Subscribe component is a function that receives an instance of each container passed to the to prop; here, counterContainer is our instance of CounterContainer.

With that, we can now access the state and the methods made available in the container.

Provider

We'll make use of the Provider component to store the container instances and allow the children to subscribe to it.

import React, { Component } from 'react';
import { Provider } from 'unstated'

import Counter from './Counter'

class App extends Component {
  render() {
    return (
      <Provider>
        <Counter />
      </Provider>
    );
  }
}

export default App;

With this, the Counter component can make use of our counterContainer.

Unstated allows you to make use of all the functionality that React's setState() provides. For example, if we want to increment the count three times with one click, incrementing the previous state by one each time, we can pass a function to setState() like this:

incrementBy3 = () => {
  this.setState((prevState) => ({ count: prevState.count + 1 }))
  this.setState((prevState) => ({ count: prevState.count + 1 }))
  this.setState((prevState) => ({ count: prevState.count + 1 }))
}

The idea is that setState() still works like it does in React, but this time the state is kept in a Container class. It becomes easy to expose the state to only the components that need it.
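Why the function form matters: when updates are batched, as React does inside event handlers, object updates computed from a stale this.state collapse into a single increment, while functional updaters compose. Here's a standalone sketch, with a hypothetical applyBatch helper standing in for the batching behavior:

```javascript
// Sketch of why functional updates survive batching. A batching
// setState queues updates and applies them together; object patches
// computed from one stale snapshot all write the same value.
function applyBatch(initialState, updates) {
  return updates.reduce((state, update) => {
    const patch = typeof update === 'function' ? update(state) : update;
    return { ...state, ...patch };
  }, initialState);
}

const snapshot = { count: 0 }; // what this.state looked like at click time

// Object form: all three patches were computed from the stale snapshot.
const objectResult = applyBatch(snapshot, [
  { count: snapshot.count + 1 },
  { count: snapshot.count + 1 },
  { count: snapshot.count + 1 }
]);
console.log(objectResult.count); // 1

// Function form: each updater sees the previous updater's result.
const fnResult = applyBatch(snapshot, [
  prev => ({ count: prev.count + 1 }),
  prev => ({ count: prev.count + 1 }),
  prev => ({ count: prev.count + 1 })
]);
console.log(fnResult.count); // 3
```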

Let’s Make a To-Do Application!

This is a slightly more advanced use of Unstated. Two components will subscribe to the container, which will manage all of the state and the methods for updating it. Again, the code is available on GitHub:

View Repo

The container will look like this:

import { Container } from 'unstated'

class TodoContainer extends Container {
  state = {
    todos: [
      'Mess around with unstated',
      'Start dance class'
    ],
    todo: ''
  };

  handleDeleteTodo = (todo) => {
    this.setState({
      todos: this.state.todos.filter(c => c !== todo)
    })
  }
 
  handleInputChange = (event) => {
    const todo = event.target.value
    this.setState({ todo });
  };

  handleAddTodo = (event) => {
    event.preventDefault()
    this.setState(({todos}) => ({
      todos: todos.concat(this.state.todo)
    }))
    this.setState({ todo: '' });
  }

}

export default TodoContainer

The container has an initial todos state which is an array with two items in it. To add to-do items, we have a todo state set to an empty string.

We’re going to need a CreateTodo component that will subscribe to the container. Each time a value is entered, the onChange event fires the handleInputChange() method defined in the container. Clicking the submit button triggers handleAddTodo(). The handleDeleteTodo() method receives a to-do and filters the matching item out of the todos array.

import React from 'react'
import { Subscribe } from 'unstated'

import TodoContainer from './containers/todoContainer'

const CreateTodo = () => {
  return (
    <div>
      <Subscribe to={[TodoContainer]}>
        {todos =>
          <div>
            <form onSubmit={todos.handleAddTodo}>
              <input
                type="text"
                value={todos.state.todo}
                onChange={todos.handleInputChange}
              />
              <button>Submit</button>
            </form>
          </div>
        }
      </Subscribe>
    </div>
  );
}

export default CreateTodo

When a new to-do is added, the todos state in the container is updated. The list of to-dos is then pulled from the container into the Todos component by subscribing that component to the container.

import React from 'react';
import { Subscribe } from 'unstated';

import TodoContainer from './containers/todoContainer'

const Todos = () => (
  <ul>
    <Subscribe to={[TodoContainer]}>
      {todos =>
        todos.state.todos.map(todo => (
          <li key={todo}>
            {todo} <button onClick={() => todos.handleDeleteTodo(todo)}>X</button>
          </li>
        ))
      }
    </Subscribe>
  </ul>
);

export default Todos

This component loops through the array of to-dos available in the container and renders them in a list.

Finally, we need to wrap the components that subscribe to the container in a provider like we did in the case of the counter. We do this in our App.js file exactly like we did in the counter example:

import React, { Component } from 'react';
import { Provider } from 'unstated'

import CreateTodo from './CreateTodo'
import Todos from './Todos'

class App extends Component {
  render() {
    return (
      <Provider>
        <CreateTodo />
        <Todos />
      </Provider>
    );
  }
}

export default App;

Wrapping Up

There are different ways of managing state in React depending on the complexity of your application and Unstated is a handy library that can make it easier. It’s worth reiterating the point that Redux, while awesome, is not always the best tool for the job, even though we often grab for it in these types of cases. Hopefully you now feel like you have a new tool in your belt.

The post Managing State in React With Unstated appeared first on CSS-Tricks.



from CSS-Tricks https://ift.tt/2xntXnJ
via IFTTT

Build a realtime polling web app with Next.js

(This is a sponsored post.)

Learn to build a web app that accepts user votes, using Next.js and Chart.js. Users can vote for their favorite pet, and the results are displayed in realtime on a graph in their browser using Pusher Channels.

Direct Link to Article

The post Build a realtime polling web app with Next.js appeared first on CSS-Tricks.



from CSS-Tricks https://synd.co/2kpl03M
via IFTTT

Tracking Your Link Prospecting Using Lists in Link Explorer

Posted by Dr-Pete

I'm a lazy marketer some days — I'll admit it. I don't do a lot of manual link prospecting, because it's a ton of work, outreach, and follow-up. There are plenty of times, though, where I've got a good piece of content (well, at least I hope it's good) and I want to know if it's getting attention from specific sites, whether they're in the search industry or the broader marketing or PR world. Luckily, we've made that question a lot easier to answer in Link Explorer, so today's post is for all of you curious but occasionally lazy marketers. Hop into the tool if you want to follow along:

Open Link Explorer

(1) Track your content the lazy way

When you first visit Link Explorer, you'll see that it defaults to "root domain":

Some days, you don't want to wade through your entire domain, but just want to target a single piece of content. Just enter or paste that URL, and select "exact page" (once you start typing a full path, we'll even auto-select that option for you):

Now I can see just the link data for that page (note: screenshots have been edited for size):

Good news — my Whiteboard Friday already has a decent link profile. That's already a fair amount to sort through, and as the link profile grows, it's only going to get tougher. So, how can I pinpoint just the sites I'm interested in and track those sites over time?

(2) Make a list of link prospects

This is the one part we can't automate for you. Make a list of prospects in whatever tool you please. Here's an imaginary list I created in Excel:

Obviously, this list is on the short side, but let's say I decide to pull a few of the usual suspects from the search marketing world, plus one from the broader marketing world, and a couple of aspirational sites (I'm probably not going to get that New York Times link, but let's dream big).

(3) Create a tracking list in Link Explorer

Obviously, I could individually search for these domains in my full list of inbound links, but even with six prospects, that's going to take some time. So, let's do this the lazy way. Back in Link Explorer, look at the very bottom of the left-hand navigation and you'll see "Link Targeting Lists":

Keep scrolling — I promise it's down there. Click on it, and you'll see something like this:

On the far-right, under the main header, click on "[+] Create new list." You'll get an overlay with a three-step form like the one below. Just give your list a name, provide a target URL (the page you want to track links to), and copy-and-paste in your list of prospects. Here's an example:

Click "Save," and you should immediately get back some data.

Alas, no link from the New York Times. The blue icons show me that the prospects are currently linking to Moz.com, but not to my target page. The green icon shows me that I've already got a head-start — Search Engine Land is apparently linking to this post (thanks, Barry!).

Click on any arrow in the "Notes" column, and you can add a note to that entry, like so:

Don't forget to hit "Save." Congratulations, you've created your first list! Well, I've created your first list for you. Geez, you really are lazy.

(4) Check in to track your progress

Of course, the real magic is that the list just keeps working for you. At any time, you can return to "Link Tracking Lists" on the Link Explorer menu, and now you'll see a master list of all your lists:

Just click on the list name you're interested in, and you can see your latest-and-greatest data. We can't build the links for you, but we can at least make keeping track of them a lot easier.

Bonus video: Now in electrifying Link-o-Vision!

Ok, it's just a regular video, although it does require electricity. If you're too lazy to read (in which case, let's be honest, you probably didn't get this far), I've put this whole workflow into an enchanting collection of words and sounds for you:

I hope you'll put your newfound powers to good. Let us know how you're using Tracking Lists (or how you plan to use them) in the comments, and where you'd like to see us take them next!


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!



from The Moz Blog https://ift.tt/2smZYpY
via IFTTT

Monday 28 May 2018

Solving Life’s Problems with CSS

How Much Data Is Missing from Analytics? And Other Analytics Black Holes

Posted by Tom.Capper

If you’ve ever compared two analytics implementations on the same site, or compared your analytics with what your business is reporting in sales, you’ve probably noticed that things don’t always match up. In this post, I’ll explain why data is missing from your web analytics platforms and how large the impact could be. Some of the issues I cover are actually quite easily addressed, and have a decent impact on traffic — there’s never been an easier way to hit your quarterly targets. ;)

I’m going to focus on GA (Google Analytics), as it's the most commonly used provider, but most on-page analytics platforms have the same issues. Platforms that rely on server logs do avoid some issues but are fairly rare, so I won’t cover them in any depth.

Side note: Our test setup (multiple trackers & customized GA)

On Distilled.net, we have a standard Google Analytics property running from an HTML tag in GTM (Google Tag Manager). In addition, for the last two years, I’ve been running three extra concurrent Google Analytics implementations, designed to measure discrepancies between different configurations.

(If you’re just interested in my findings, you can skip this section, but if you want to hear more about the methodology, continue reading. Similarly, don’t worry if you don’t understand some of the detail here — the results are easier to follow.)

Two of these extra implementations — one in Google Tag Manager and one on page — run locally hosted, renamed copies of the Google Analytics JavaScript file (e.g. www.distilled.net/static/js/au3.js, instead of www.google-analytics.com/analytics.js) to make them harder to spot for ad blockers. I also used renamed JavaScript functions (“tcap” and “buffoon,” rather than the standard “ga”) and renamed trackers (“FredTheUnblockable” and “AlbertTheImmutable”) to avoid having duplicate trackers (which can often cause issues).

This was originally inspired by 2016-era best practice on how to get your Google Analytics setup past ad blockers. I can’t find the original article now, but you can see a very similar one from 2017 here.

Lastly, we have “DianaTheIndefatigable,” which just has a renamed tracker, but uses the standard code otherwise and is implemented on-page. This is to complete the set of all combinations of modified and unmodified GTM and on-page trackers.
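To make this concrete, here's a hedged sketch of the renaming trick: the same command-queue shim that GA's standard snippet creates, just under a different global name. The "tcap" and "FredTheUnblockable" names come from the setup above; the property ID is a placeholder, and the window object is stubbed so the snippet runs outside a browser.

```javascript
// Sketch of the renamed-tracker trick described above. 'UA-XXXXX-Y' is a
// placeholder property ID; 'win' stands in for the browser's window object.
var win = {};

(function (w, fnName) {
  // Same command-queue shim the standard GA snippet sets up, renamed:
  w[fnName] = w[fnName] || function () {
    (w[fnName].q = w[fnName].q || []).push(arguments);
  };
})(win, 'tcap');

// Commands queue up until the (locally hosted) analytics script loads:
win.tcap('create', 'UA-XXXXX-Y', 'auto', 'FredTheUnblockable');
win.tcap('FredTheUnblockable.send', 'pageview');

console.log(win.tcap.q.length); // 2 queued commands
```

Because nothing on the page references the well-known `ga` function name or the google-analytics.com URL, simple blocklist rules that match on those strings miss it.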

Two of Distilled’s modified on-page trackers, as seen on https://www.distilled.net/

Overall, this table summarizes our setups:

| Tracker | Renamed function? | GTM or on-page? | Locally hosted JavaScript file? |
| --- | --- | --- | --- |
| Default | No | GTM HTML tag | No |
| FredTheUnblockable | Yes - “tcap” | GTM HTML tag | Yes |
| AlbertTheImmutable | Yes - “buffoon” | On page | Yes |
| DianaTheIndefatigable | No | On page | No |

I tested their functionality in various browser/ad-block environments by watching for the pageviews appearing in browser developer tools:

Reason 1: Ad Blockers

Ad blockers, primarily as browser extensions, have been growing in popularity for some time now. Mostly this has been to do with users looking for better performance and UX on ad-laden sites, but in recent years an increased emphasis on privacy has also crept in, hence the possibility of analytics blocking.

Effect of ad blockers

Some ad blockers block web analytics platforms by default, others can be configured to do so. I tested Distilled’s site with Adblock Plus and uBlock Origin, two of the most popular ad-blocking desktop browser addons, but it’s worth noting that ad blockers are increasingly prevalent on smartphones, too.

Here’s how Distilled’s setups fared:

(All numbers shown are from April 2018)

| Setup | Vs. Adblock | Vs. Adblock with “EasyPrivacy” enabled | Vs. uBlock Origin |
| --- | --- | --- | --- |
| GTM | Pass | Fail | Fail |
| On page | Pass | Fail | Fail |
| GTM + renamed script & function | Pass | Fail | Fail |
| On page + renamed script & function | Pass | Fail | Fail |

Seems like those tweaked setups didn’t do much!

Lost data due to ad blockers: ~10%

Ad blocker usage can be in the 15–25% range depending on region, but many of these installs will be default setups of AdBlock Plus, which as we’ve seen above, does not block tracking. Estimates of AdBlock Plus’s market share among ad blockers vary from 50–70%, with more recent reports tending more towards the former. So, if we assume that at most 50% of installed ad blockers block analytics, that leaves your exposure at around 10%.
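Plugged into numbers, the estimate above looks like this (both inputs are assumptions from the text, not measured values):

```javascript
// Back-of-envelope version of the estimate above.
var adBlockerUsage = 0.20;    // ad blocker usage, middle of the 15-25% range
var analyticsBlocking = 0.50; // assume at most ~50% of those installs block analytics

var exposure = adBlockerUsage * analyticsBlocking;
console.log(exposure); // 0.1 -> roughly 10% of traffic goes unrecorded
```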

Reason 2: Browser “do not track”

This is another privacy motivated feature, this time of browsers themselves. You can enable it in the settings of most current browsers. It’s not compulsory for sites or platforms to obey the “do not track” request, but Firefox offers a stronger feature under the same set of options, which I decided to test as well.

Effect of “do not track”

Most browsers now offer the option to send a “Do not track” message. I tested the latest releases of Firefox & Chrome for Windows 10.

| Setup | Chrome “do not track” | Firefox “do not track” | Firefox “tracking protection” |
| --- | --- | --- | --- |
| GTM | Pass | Pass | Fail |
| On page | Pass | Pass | Fail |
| GTM + renamed script & function | Pass | Pass | Fail |
| On page + renamed script & function | Pass | Pass | Fail |

Again, it doesn’t seem that the tweaked setups are doing much work for us here.

Lost data due to “do not track”: <1%

Only Firefox Quantum’s “Tracking Protection,” introduced in February, had any effect on our trackers. Firefox has a 5% market share, but Tracking Protection is not enabled by default. The launch of this feature had no effect on the trend for Firefox traffic on Distilled.net.

Reason 3: Filters

It’s a bit of an obvious one, but filters you’ve set up in your analytics might intentionally or unintentionally reduce your reported traffic levels.

For example, a filter excluding certain niche screen resolutions that you believe to be mostly bots, or internal traffic, will obviously cause your setup to underreport slightly.

Lost data due to filters: ???

Impact is hard to estimate, as setup will obviously vary on a site-by-site basis. I do recommend having a duplicate, unfiltered “master” view in case you realize too late you’ve lost something you didn’t intend to.

Reason 4: GTM vs. on-page vs. misplaced on-page

Google Tag Manager has become an increasingly popular way of implementing analytics in recent years, due to its increased flexibility and the ease of making changes. However, I’ve long noticed that it can tend to underreport vs. on-page setups.

I was also curious about what would happen if you didn’t follow Google’s guidelines in setting up on-page code.

By combining my numbers with numbers from my colleague Dom Woodman’s site (you’re welcome for the link, Dom), which happens to use a Drupal analytics add-on as well as GTM, I was able to see the difference between Google Tag Manager and misplaced on-page code (right at the bottom of the <body> tag). I then weighted this against my own Google Tag Manager data to get an overall picture of all 5 setups.

Effect of GTM and misplaced on-page code

Traffic as a percentage of baseline (standard Google Tag Manager implementation):


| Browser | Google Tag Manager | Modified & Google Tag Manager | On-Page Code In <head> | Modified & On-Page Code In <head> | On-Page Code Misplaced In <body> |
| --- | --- | --- | --- | --- | --- |
| Chrome | 100.00% | 98.75% | 100.77% | 99.80% | 94.75% |
| Safari | 100.00% | 99.42% | 100.55% | 102.08% | 82.69% |
| Firefox | 100.00% | 99.71% | 101.16% | 101.45% | 90.68% |
| Internet Explorer | 100.00% | 80.06% | 112.31% | 113.37% | 77.18% |

There are a few main takeaways here:

  • On-page code generally reports more traffic than GTM
  • Modified code is generally within a margin of error, apart from modified GTM code on Internet Explorer (see note below)
  • Misplaced analytics code will cost you up to a third of your traffic vs. properly implemented on-page code, depending on browser (!)
  • The customized setups, which are designed to get more traffic by evading ad blockers, are doing nothing of the sort.

It’s worth noting also that the customized implementations actually got less traffic than the standard ones. For the on-page code, this is within the margin of error, but for Google Tag Manager, there’s another reason — because I used unfiltered profiles for the comparison, there’s a lot of bot spam in the main profile, which primarily masquerades as Internet Explorer. Our main profile is by far the most spammed, and also acting as the baseline here, so the difference between on-page code and Google Tag Manager is probably somewhat larger than what I’m reporting.

I also split the data by mobile, out of curiosity:

Traffic as a percentage of baseline (standard Google Tag Manager implementation):


| Device | Google Tag Manager | Modified & Google Tag Manager | On-Page Code In <head> | Modified & On-Page Code In <head> | On-Page Code Misplaced In <body> |
| --- | --- | --- | --- | --- | --- |
| Desktop | 100.00% | 98.31% | 100.97% | 100.89% | 93.47% |
| Mobile | 100.00% | 97.00% | 103.78% | 100.42% | 89.87% |
| Tablet | 100.00% | 97.68% | 104.20% | 102.43% | 88.13% |

The further takeaway here seems to be that mobile browsers, like Internet Explorer, can struggle with Google Tag Manager.

Lost data due to GTM: 1–5%

Google Tag Manager seems to cost you a varying amount depending on what make-up of browsers and devices use your site. On Distilled.net, the difference is around 1.7%; however, we have an unusually desktop-heavy and tech-savvy audience (not much Internet Explorer!). Depending on vertical, this could easily swell to the 5% range.

Lost data due to misplaced on-page code: ~10%

On Teflsearch.com, the impact of misplaced on-page code was around 7.5%, vs Google Tag Manager. Keeping in mind that Google Tag Manager itself underreports, the total loss could easily be in the 10% range.

Bonus round: Missing data from channels

I’ve focused above on areas where you might be missing data altogether. However, there are also lots of ways in which data can be misrepresented, or detail can be missing. I’ll cover these more briefly, but the main issues are dark traffic and attribution.

Dark traffic

Dark traffic is direct traffic that didn’t really come via direct — which is generally becoming more and more common. Typical causes are:

  • Untagged campaigns in email
  • Untagged campaigns in apps (especially Facebook, Twitter, etc.)
  • Misrepresented organic
  • Data sent from botched tracking implementations (which can also appear as self-referrals)

It’s also worth noting the trend towards genuinely direct traffic that would historically have been organic. For example, due to increasingly sophisticated browser autocompletes, cross-device history, and so on, people end up “typing” a URL that they’d have searched for historically.

Attribution

I’ve written about this in more detail here, but in general, a session in Google Analytics (and any other platform) is a fairly arbitrary construct — you might think it’s obvious how a group of hits should be grouped into one or more sessions, but in fact, the process relies on a number of fairly questionable assumptions. In particular, it’s worth noting that Google Analytics generally attributes direct traffic (including dark traffic) to the previous non-direct source, if one exists.

Discussion

I was quite surprised by some of my own findings when researching this post, but I’m sure I didn’t get everything. Can you think of any other ways in which data can end up missing from analytics?


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!



from The Moz Blog https://ift.tt/2skU6gW
via IFTTT

Sunday 27 May 2018

Browser Extensions I Actually Use

I use around 10 at the moment, and they all provide functionality I find extremely important. Sometimes I use that functionality every day, all day. Sometimes it's once in a blue moon, but when you need it, you need it.

Direct Link to ArticlePermalink

The post Browser Extensions I Actually Use appeared first on CSS-Tricks.



from CSS-Tricks https://ift.tt/2KEyofq
via IFTTT

Thursday 24 May 2018

Learning Gutenberg: Setting up a Custom webpack Config

Gutenberg introduces the modern JavaScript stack into the WordPress ecosystem, which means some new tooling should be learned. Although tools like create-guten-block are incredibly useful, it’s also handy to know what’s going on under the hood.

Article Series:

  1. Series Introduction
  2. What is Gutenberg, Anyway?
  3. A Primer with create-guten-block
  4. Modern JavaScript Syntax
  5. React 101
  6. Setting up a Custom webpack (This Post)
  7. A Custom "Card" Block (Coming Soon!)

The files we will be configuring here should be familiar from what we covered in the Part 2 Primer with create-guten-block. If you’re like me (before reading Andy’s tutorial, that is!) and would rather not dive into the configuration part just yet, the scaffold created by create-guten-block matches what we are about to create here, so you can certainly use that as well.

Let’s jump in!

Getting started

Webpack takes the small, modular aspects of your front-end codebase and smooshes them down into one efficient file. It’s highly extendable and configurable and works as the beating heart of some of the most popular products and projects on the web. It’s very much a JavaScript tool, although it can be used for pretty much whatever you want. For this tutorial, though, its sole focus is JavaScript.

What we’re going to get webpack doing is watch for our changes on some custom block files and compile them with Babel to generate JavaScript files that can be read by most browsers. It’ll also merge any dependencies that we import.

But first, we need a place to store our actual webpack setup and front-end files. In Part 2, when we poked around the files generated by create-guten-block, we saw that it created an infrastructure for a WordPress plugin that enqueued our front-end files in a WordPress way, and enabled us to activate the plugin through WordPress. I’m going to take this next section to walk us through setting up the infrastructure for a WordPress plugin for our custom block.

Setting up a plugin

Hopefully you still have a local WordPress instance running from our primer in Part 2, but if not, you’ll need to have one installed to continue with what we’re about to do. In that install, navigate to wp-content/plugins and create a fresh directory called card-block (spoiler alert: we’re going to make a card block... who doesn’t like cards?).

Then, inside card-block, create a file called card-block.php. This will be the equivalent to plugin.php from create-guten-block. Next, drop in this chunk of comments to tell WordPress to acknowledge this directory as a plugin and display it in the Plugins page of the Dashboard:

<?php
   /*
   Plugin Name: Card Block
   */

Don’t forget the opening PHP tag, but you can leave the closing one off since we’ll be adding more to this file soon enough.

WordPress looks for these comments to register a plugin in the same way it looks for comments at the top of style.css in a theme. This is an abbreviated version of what you’ll find at the top of other plugins’ main files. If you were planning to release it on the WordPress plugin repository, you’d want to add a description and version number as well as license and author information.

Go ahead and activate the plugin through the WordPress Dashboard, and I’ll take the steering wheel back to take us through setting up our Webpack config!

Getting started with webpack

The first thing we’re going to do is initialize npm. Run the following at the root of your plugin folder (wp-content/plugins/card-block):

npm init

This will ask you a few questions about your project and ultimately generate you a package.json file, which lists dependencies and stores core information about your project.

Next, let’s install webpack:

npm install webpack --save-dev

You might have noticed that we’re installing webpack locally to our project. This is a good practice to get into with crucial packages that are prone to large, breaking changes. It also means you and your team are all singing from the same song sheet.

Then run this:

npm install extract-text-webpack-plugin@next --save-dev

Then these Sass and CSS dependencies:

npm install node-sass sass-loader css-loader --save-dev

Next, install npx globally, which lets us run our locally installed dependencies instead of any global ones:

npm install npx -g

Lastly, run this:

npm install webpack-cli --save-dev

That will install the webpack CLI for you.

Now that we have this installed, we should create our config file. Still in the root of your plugin, create a file called webpack.config.js and open that file so we can get coding.

With this webpack file, we’re going to be working with traditional ES5 code for maximum compatibility, because it runs with Node JS. You can use ES6 and Babel, but we’ll keep things as simple as possible for this tutorial.

Alright, let’s add some constants and imports. Add the following, right at the top of your webpack.config.js file:

var ExtractText = require('extract-text-webpack-plugin');
var debug = process.env.NODE_ENV !== 'production';
var webpack = require('webpack');

The debug var declares whether or not we’re in debug mode. This is our default mode, but it can be overridden by prepending our webpack commands with NODE_ENV=production. The debug boolean flag determines whether webpack generates sourcemaps and minifies the code.
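For example, here's the flag in isolation, simulating a command launched as NODE_ENV=production npx webpack:

```javascript
// Sketch: the same check our webpack.config.js uses. Prefixing the
// command with NODE_ENV=production is what sets this variable; here we
// assign it directly purely to simulate that.
process.env.NODE_ENV = 'production';

var debug = process.env.NODE_ENV !== 'production';
console.log(debug); // false -> production build: no sourcemaps, optimized output
```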

As you can see, we’re requiring some dependencies. We know that webpack is needed, so we’ll skip that. Let’s instead focus our attention on ExtractText. Essentially, ExtractText enables us to pull in files other than JavaScript into the mix. We’re going to need this for our Sass files. By default, webpack assumes everything is JavaScript, so ExtractText kind of *translates* the other types of files.

Let’s add some config now. Add the following after the webpack definition:

var extractEditorSCSS = new ExtractText({
  filename: './blocks.editor.build.css'
});

var extractBlockSCSS = new ExtractText({
  filename: './blocks.style.build.css'
});

What we’ve done there is instantiate two instances of ExtractText by passing a config object. All that we’ve set is the output of our two block stylesheets. We’ll come to these in the next series, but right now all you need to know is that this is the first step in getting our Sass compiling.

OK, after that last bit of code, add the following:

var plugins = [ extractEditorSCSS, extractBlockSCSS ];

Here we’ve got our array of plugins: the two ExtractText instances we just created. We don’t need to add separate minification plugins, because webpack’s mode option (set in the main config further down, driven by the same debug flag) switches on webpack’s built-in production optimizations for us.

Next up, add this SCSS config object:

var scssConfig = {
  use: [
    {
      loader: 'css-loader'
    },
    {
      loader: 'sass-loader',
      options: {
        outputStyle: 'compressed'
      }
    }
  ]
};

This object is going to tell our webpack instance how to behave when it comes across scss files. We’ve got it in a config object to keep things as DRY as possible.

Last up, the meat and taters of the config:

module.exports = {
  context: __dirname,
  devtool: debug ? 'inline-source-map' : false,
  mode: debug ? 'development' : 'production',
  entry: './blocks/src/blocks.js',
  output: {
    path: __dirname + '/blocks/dist/',
    filename: 'blocks.build.js'
  },
  module: {
    rules: [
      {
        test: /\.js$/,
        exclude: /node_modules/,
        use: [
          {
            loader: 'babel-loader'
          }
        ]
      },
      {
        test: /editor\.scss$/,
        exclude: /node_modules/,
        use: extractEditorSCSS.extract(scssConfig)
      },
      {
        test: /style\.scss$/,
        exclude: /node_modules/,
        use: extractBlockSCSS.extract(scssConfig)
      }
    ]
  },
  plugins: plugins
};

That’s our entire config, so let’s break it down.

The script starts with module.exports, which is essentially saying, "when you import or require me, this is what you’re getting." We can determine what we expose to whatever imports our code, which means we could run code above module.exports that does some heavy calculations, for example.

Next, let’s look at some of these following properties:

  • context is our base where paths will resolve from. We’ve passed __dirname, which is the directory our config file lives in.
  • devtool is where we define what sort of sourcemap we may or may not want. If we’re not in debug mode, we pass false with a ternary operator.
  • entry is where we tell webpack to start its journey of pack-ery. In our instance, this is the path to our blocks.js file.
  • output is what it says on the tin. We’re passing an object that defines the output path and the filename that we want to call it.

Next is module, which we’ll go into in a touch more detail. The module section can contain multiple rules. In our instance, we have rules looking for JavaScript and SCSS files. Each rule finds the right sort of files by testing file paths against the regular expression defined in its test property, then passes the matches into a loader (babel-loader, in the JavaScript rule’s case). As we learned in a previous tutorial in this series, Babel is what converts our modern ES6 code into more supported ES5 code.
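You can sanity-check which rule will catch a given file by running the same regular expressions by hand (the file paths are examples from our project structure):

```javascript
// Sketch: how webpack decides which rule applies, using the same
// `test` regular expressions from our config.
var jsTest = /\.js$/;
var editorTest = /editor\.scss$/;
var styleTest = /style\.scss$/;

console.log(jsTest.test('blocks/src/blocks.js'));             // true  -> babel-loader
console.log(editorTest.test('blocks/src/block/editor.scss')); // true  -> extractEditorSCSS
console.log(styleTest.test('blocks/src/block/editor.scss'));  // false -> skipped by this rule
```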

Lastly is our plugins section. Here, we pass our array of plugin instances: the two ExtractText instances from earlier. Minification and the other production optimizations are handled by webpack itself when mode is set to production, which makes sure our production code is optimized.

For reference, this is how your full config file should look:

var ExtractText = require('extract-text-webpack-plugin');
var debug = process.env.NODE_ENV !== 'production';
var webpack = require('webpack');

var extractEditorSCSS = new ExtractText({
  filename: './blocks.editor.build.css'
});

var extractBlockSCSS = new ExtractText({
  filename: './blocks.style.build.css'
});

var plugins = [extractEditorSCSS, extractBlockSCSS];

var scssConfig = {
  use: [
    {
      loader: 'css-loader'
    },
    {
      loader: 'sass-loader',
      options: {
        outputStyle: 'compressed'
      }
    }
  ]
};

module.exports = {
  context: __dirname,
  devtool: debug ? 'inline-source-map' : false,
  mode: debug ? 'development' : 'production',
  entry: './blocks/src/blocks.js',
  output: {
    path: __dirname + '/blocks/dist/',
    filename: 'blocks.build.js'
  },
  module: {
    rules: [
      {
        test: /\.js$/,
        exclude: /node_modules/,
        use: [
          {
            loader: 'babel-loader'
          }
        ]
      },
      {
        test: /editor\.scss$/,
        exclude: /node_modules/,
        use: extractEditorSCSS.extract(scssConfig)
      },
      {
        test: /style\.scss$/,
        exclude: /node_modules/,
        use: extractBlockSCSS.extract(scssConfig)
      }
    ]
  },
  plugins: plugins
};

That’s webpack configured. I’ll take the mic back to show us how to officially register our block with WordPress. This should feel pretty familiar to those of you who are used to wrangling with actions and filters in WordPress.

Registering our block

Back in card-block.php, our main task now is to enqueue the JavaScript and CSS files we will be building with webpack. In a theme, we would do this with call wp_enqueue_script and wp_enqueue_style inside an action added to wp_enqueue_scripts. We essentially do the same thing here, except instead we enqueue the scripts and styles with a function specific to blocks.

Drop this code below the opening comment in card-block.php:

function my_register_gutenberg_card_block() {

  // Register our block script with WordPress
  wp_register_script(
    'gutenberg-card-block',
    plugins_url('/blocks/dist/blocks.build.js', __FILE__),
    array('wp-blocks', 'wp-element')
  );

  // Register our block's base CSS  
  wp_register_style(
    'gutenberg-card-block-style',
    plugins_url( '/blocks/dist/blocks.style.build.css', __FILE__ ),
    array( 'wp-blocks' )
  );
  
  // Register our block's editor-specific CSS
  wp_register_style(
    'gutenberg-card-block-edit-style',
    plugins_url('/blocks/dist/blocks.editor.build.css', __FILE__),
    array( 'wp-edit-blocks' )
  );  
  
  // Enqueue the script in the editor
  register_block_type('card-block/main', array(
    'editor_script' => 'gutenberg-card-block',
    'editor_style' => 'gutenberg-card-block-edit-style',
    'style' => 'gutenberg-card-block-style'
  ));
}

add_action('init', 'my_register_gutenberg_card_block');

As the above comments indicate, we first register our script with WordPress using the handle gutenberg-card-block with two dependencies: wp-blocks and wp-element. This function only registers a script; it does not enqueue it. We do something similar for our main and editor-specific stylesheets.

Our final function, register_block_type, does the enqueuing for us. It also gives the block a name, card-block/main, which identifies this block as the main block within the namespace card-block, then identifies the script and styles we just registered as the main editor script, editor stylesheet, and primary stylesheet for the block.

If you are familiar with theme development, you’ve probably used get_template_directory() to handle file paths in hooks like the ones above. For plugin development, we use the function plugins_url(), which does pretty much the same thing, except instead of concatenating a path like this: get_template_directory() . '/script.js', plugins_url() accepts a string path as a parameter and does the concatenation for us. The second parameter, __FILE__, is one of PHP’s magic constants that equates to the full file path of the current file.

Finally, if you want to add more blocks to this plugin, you’d need a version of this function for each block. You could work out what’s variable and generate some sort of loop to keep it nice and DRY, further down the line. Now, I’ll walk us through getting Babel up and running.

Getting Babel running

Babel turns our ES6 code into better-supported ES5 code, so we need to install some dependencies. In the root of your plugin (wp-content/plugins/card-block), run the following:

npm install babel-core babel-loader babel-plugin-add-module-exports babel-plugin-transform-react-jsx babel-preset-env --save-dev

That big ol’ npm install adds all the Babel dependencies. Now we can add our .babelrc file which stores some settings for us. It prevents us from having to repeat them over and over in the command line.

While still in your theme folder, add the following file: .babelrc.

Now open it up and paste the following:

{
  "presets": ["env"],
  "plugins": [
    ["transform-react-jsx", {
      "pragma": "wp.element.createElement"
    }]
  ]
}

So, what we’ve got there are two things:

"presets": ["env"] is basically magic. It automatically determines which ES features to use to generate your ES5 code. We used to have to add different presets for all the different ES versions (e.g. ES2015), but it’s been simplified.

In the plugins, you’ll notice that there’s a React JSX transformer. That’s sorting out our JSX and turning it into proper JavaScript, but what we’re doing is telling it to generate WordPress elements, rather than React elements, which JSX is more commonly associated with.
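To make the pragma's effect concrete, here's a sketch of the call shape the transform emits. The wp object below is a minimal stand-in purely for illustration; the real wp.element.createElement ships with the wp-element script WordPress provides.

```javascript
// Sketch of what the JSX transform produces with our pragma setting.
// JSX in our source:   <div className="card">Hello</div>
// Compiled output:     wp.element.createElement('div', { className: 'card' }, 'Hello')

// Minimal stand-in for wp.element.createElement, just to show the call shape:
var wp = {
  element: {
    createElement: function (type, props) {
      return { type: type, props: props, children: [].slice.call(arguments, 2) };
    }
  }
};

var node = wp.element.createElement('div', { className: 'card' }, 'Hello');
console.log(node.type);        // 'div'
console.log(node.children[0]); // 'Hello'
```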

Generate stub files

The last thing we’re going to do is generate some stub files and test that our webpack and WordPress setup is all good.

Go into your plugin directory and create a folder called blocks and, within that, create two folders: one called src and one called dist.

Inside the src folder, create the following files. We’ve added the paths, too, so you put them in the right place:

  • blocks.js
  • common.scss
  • block/block.js
  • block/editor.scss
  • block/style.scss

OK, so now we’ve generated the minimum amount of things, let’s run webpack. Open up your terminal and move into your current plugin folder—then we can run the following, which will fire-up webpack:

npx webpack

Pretty dang straightforward, huh? If you go ahead and look in your dist folder, you should see some compiled goodies in there!

Wrapping up

A lot of setup has been done, but all of our ducks are in a row. We’ve set up webpack, Babel and WordPress to all work as a team to build our custom Gutenberg block (and future blocks). Hopefully you now feel more comfortable working with webpack and feel like you could dive in and make customizations to fit your projects.


Article Series:

  1. Series Introduction
  2. What is Gutenberg, Anyway?
  3. A Primer with create-guten-block
  4. Modern JavaScript Syntax
  5. React 101
  6. Setting up a Custom webpack (This Post)
  7. A Custom "Card" Block (Coming Soon!)

The post Learning Gutenberg: Setting up a Custom webpack Config appeared first on CSS-Tricks.



from CSS-Tricks https://ift.tt/2J6b2Cy
via IFTTT

​High Performance Hosting with No Billing Surprises

(This is a sponsored post.)

With DigitalOcean, you can spin up Droplet cloud servers with industry-leading price-performance and predictable costs. Our flexible configurations are sized for any application, and we save you up to 55% when compared to other cloud providers.

Get started today. Receive a free $100/60-day account credit good towards any DigitalOcean services.

Direct Link to ArticlePermalink

The post ​High Performance Hosting with No Billing Surprises appeared first on CSS-Tricks.



from CSS-Tricks https://synd.co/2IDHfxE
via IFTTT

Wednesday 23 May 2018

Learning Gutenberg: Modern JavaScript Syntax

One of the key changes that Gutenberg brings to the WordPress ecosystem is a heavy reliance on JavaScript. Helpfully, the WordPress team have really pushed their JavaScript framework into the present and future by leveraging the modern JavaScript stack, which is commonly referred to as ES6 in the community. It’s how we’ll refer to it in this series too, to avoid confusion.

Let’s dig into this ES6 world a bit, as it’s ultimately going to help us understand how to structure and build a custom Gutenberg block.

Article Series:

  1. Series Introduction
  2. What is Gutenberg, Anyway?
  3. A Primer with create-guten-block
  4. Modern JavaScript Syntax (This Post)
  5. React 101 (Coming Soon!)
  6. Setting up a Custom webpack (Coming Soon!)
  7. A Custom "Card" Block (Coming Soon!)

What is ES6?

ES6 is short for “ECMAScript 6,” which is the 6th edition of ECMAScript. Its official name is ES2015, which you may have also seen around. ECMAScript has since gone through many iterations, but modern JavaScript is still often referred to as ES6. As you probably guessed, the iterations have continued with ES2016, ES2017 and so on. I actually asked a question on the ShopTalk Show about what we could call modern JavaScript, and the conclusion was... ES6.

I’m going to run through some key features of ES6 that are useful in the context of Gutenberg.

Functions

Functions get a heck of an update with ES6. Two changes I want to focus on are Arrow Functions and Class Methods.

Inside a class you don’t actually need to write the word function anymore in ES6. This can be confusing, so check out this example:

class Foo { 
  // This is your 'bar' function
  bar() {
    return 'hello';
  }
}

You’d invoke bar() like this:

const fooInstance = new Foo();
const hi = fooInstance.bar();

This is commonplace in the land of modern JavaScript, so it’s good to clear it up.

Fun fact! ES6 Classes in JavaScript aren’t really “classes” in an object-oriented programming sense; under the hood, it’s the same old prototypal inheritance JavaScript has always had. Prior to ES6, the bar() method would be defined like so: Foo.prototype.bar = function() { ... }. React makes great use of ES6 classes, but it’s worth noting that ES6 classes are essentially syntactic sugar and hotly contested by some. If you’re interested in more details, check out the MDN docs and this article on 2ality.
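To make that fun fact concrete, here’s a rough sketch of what the Foo class from earlier desugars to. This is illustrative, not the exact output of any particular transpiler:

```javascript
// Roughly what the ES6 class desugars to: a constructor
// function with a method attached to its prototype
function Foo() {}

Foo.prototype.bar = function() {
  return 'hello';
};

// It's invoked exactly the same way as the class version
const fooInstance = new Foo();
console.log(fooInstance.bar()); // 'hello'
```

Either way you write it, instances look up bar() through the prototype chain at call time.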

Right, let’s move on to arrow functions. 🚀

An arrow function gives us a compact syntax that is often used for one-line expressions. It’s also handy for maintaining the value of this, because an arrow function doesn’t bind its own this the way a callback passed to setInterval or an event handler normally would.

An example of an arrow function as an expression is as follows:

// Define an array of fruit objects
const fruit = [
  {
    name: 'Apple',
    color: 'red'
  },
  {
    name: 'Banana',
    color: 'yellow'
  },
  {
    name: 'Pear',
    color: 'green'
  }
];

// Select only red fruit from that collection
const redFruit = fruit.filter(fruitItem => fruitItem.color === 'red');

// Output should be something like Object { name: "Apple", color: "red" }
console.log(redFruit[0]);

As you can see above, because there is a single parameter and the function body is a single expression, we can drop the parentheses, braces and return keyword. This really compacts our code and improves readability.
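To see exactly what gets dropped, here’s the same filter written both ways, using a trimmed-down copy of the fruit array:

```javascript
const fruit = [
  { name: 'Apple', color: 'red' },
  { name: 'Banana', color: 'yellow' }
];

// Compact arrow version: one parameter, so no parentheses,
// and a single expression, so no braces or return keyword
const redFruit = fruit.filter(fruitItem => fruitItem.color === 'red');

// The equivalent long-hand function expression
const redFruitLonghand = fruit.filter(function(fruitItem) {
  return fruitItem.color === 'red';
});

console.log(redFruit[0].name);         // 'Apple'
console.log(redFruitLonghand[0].name); // 'Apple'
```

Both produce identical results; the arrow form just strips away the ceremony.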

Let’s take a look at how we can use an arrow function as an event handler in our Foo class from before:

class Foo {
        
  // This is your 'bar' function
  bar() {
    let buttonInstance = document.querySelector('button');
    
    buttonInstance.addEventListener('click', evt => {
      console.log(this);
    });
  }
}

// Run the handler
const fooInstance = new Foo();
fooInstance.bar();

When the button is clicked, the output should be Foo { }, because this is the instance of Foo. If we were to replace that example with the following:

class Foo {
        
  // This is your 'bar' function
  bar() {
    let buttonInstance = document.querySelector('button');
    
    buttonInstance.addEventListener('click', function(evt) {
      console.log(this);
    });
  }
}

// Run the handler
const fooInstance = new Foo();
fooInstance.bar();

When the button is clicked, the output would be <button> because the function has bound this to be the <button> that was clicked.

You can read more about arrow functions with Wes Bos, who wrote an excellent article about them.

const, let, and var

You may have noticed that I’ve been using const and let in the above examples. These are also a part of ES6 and I’ll quickly explain what each one does.

If a value is absolutely constant and won’t change through re-assignment, or be re-declared, use a const. This would commonly be used when importing something or declaring non-changing properties such as a collection of DOM elements.
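Re-assigning a const is a runtime error, which a quick try/catch demonstrates (the variable name here is just for illustration):

```javascript
const maxFruit = 10;
let reassignError = null;

try {
  // Re-assigning a const throws at runtime
  maxFruit = 20;
} catch (err) {
  reassignError = err;
}

console.log(reassignError.name); // 'TypeError'
console.log(maxFruit);           // 10
```

One thing worth knowing: const only locks the binding itself, so an object or array assigned to a const can still be mutated.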

If you have a variable that you want to only be accessible in the block it was defined in, then use a let. This can be confusing to understand, so check out this little example:

function foo() {
  if (1 < 2) {
    let bar = 'always true';
    
    // Outputs: 'always true'
    console.log(bar);
  }
  
  // Throws 'ReferenceError: bar is not defined'
  console.log(bar);
}

// Run the function so we can see our logs
foo();

This is a great way to keep control of your variables and make them disposable, in a sense.

Lastly, var is the same old friend we know and love so well. Unfortunately, between const and let, our friend is becoming more and more redundant as time goes on. Using var is totally acceptable though, so don’t be disheartened—you just won’t see it much in the rest of this tutorial!

Destructuring assignment

Destructuring allows you to extract object keys at the point where you assign them to your local variable. So, say you’ve got this object:

const foo = {
  people: [
    {
      name: 'Bar',
      age: 30
    },
    {
      name: 'Baz',
      age: 28
    }
  ],
  anotherKey: 'some stuff',
  heyAFunction() {
    return 'Watermelons are really refreshing in the summer' 
  }
};

Traditionally, you’d extract people with foo.people. With destructuring, you can do this:

let { people } = foo;

That pulls the people array out of the foo object, so we can dump the foo. prefix and use it as it is: people. It also means that anotherKey and heyAFunction are ignored, because we don’t need them right now. This is great when you’re working with big, complex objects where being able to selectively pick stuff out is really useful.

You can also make use of destructuring to break up an object into local variables to increase code readability. Let’s update the above snippet:

let { people } = foo;
let { heyAFunction } = foo;

Now we’ve got those two separate elements from the same object, while still ignoring anotherKey. If you ran console.log(people), it’d show itself as an array, and if you ran console.log(heyAFunction), you guessed it, it’d show itself as a function.
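Those two statements can also be collapsed into one. Here’s a self-contained version, with the foo object from above trimmed down a little:

```javascript
const foo = {
  people: [
    { name: 'Bar', age: 30 },
    { name: 'Baz', age: 28 }
  ],
  anotherKey: 'some stuff',
  heyAFunction() {
    return 'Watermelons are really refreshing in the summer';
  }
};

// Pull both keys out in a single statement; anotherKey is still ignored
let { people, heyAFunction } = foo;

console.log(Array.isArray(people)); // true
console.log(typeof heyAFunction);   // 'function'
```

Same result, one line instead of two, and it reads as a clear list of exactly what you need from the object.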

JSX

Most commonly found in React JS contexts: JSX is an XML-like extension to JavaScript that is designed to be compiled by preprocessors into normal JavaScript code. Essentially, it enables us to write HTML(ish) code within JavaScript, as long as we’re preprocessing it. It’s usually associated with a framework like React JS, but it’s also used for Gutenberg block development.

Let’s kick off with an example:

const hello = <h1 className="heading">Hello, Pal</h1>;

Pretty cool, huh? No templating system or escaping or concatenating required. As long as you return a single element, which can have many children, you’re all good. So let’s show something a touch more complex, with a React render function:

class MyComponent extends React.Component {
  /* Other parts redacted for brevity */
  
  render() {
    return (
      <article>
        <h2 className="heading">{ this.props.heading }</h2>
        <p className="lead">{ this.props.summary }</p>
      </article>
    );
  }
};

You can see above that we can drop expressions in wherever we want. This is also the case with element attributes, so we can have something like this:

<h2 className={ this.props.headingClass }> 
  { this.props.heading }
</h2> 

You might be thinking, “What are these random braces doing?”

The answer is that this is an expression, which you will see a ton of in JSX. Essentially, it’s a little inline execution of JavaScript that behaves very much like a PHP echo does.

You’ll also probably notice that it says className instead of class. Although it looks like HTML/XML, it’s still JavaScript, so reserved words are naturally avoided. Attributes are camel-cased too, so keep an eye out for that. Here’s a useful answer to why it’s like this.

JSX is really powerful as you’ll see while this series progresses. It’s a great tool in our stack and really useful to understand in general.

I like to think of JSX as made-up tag names that are actually just function calls. Pick out any of the made-up tags you see in Gutenberg, let's use <InspectorControls /> for example, and do a "Find in Folder" for class InspectorControls and you’ll see something structured like Andy’s example here! If you don't find it, then the JSX must be registered as a functional component, and it should turn up by searching for function InspectorControls.
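That idea, that the made-up tags are really just function calls, can be sketched in plain JavaScript. The createElement below is a toy stand-in for what a JSX preprocessor actually targets (React has React.createElement, and Gutenberg has wp.element.createElement), so treat the object shape as illustrative:

```javascript
// A toy stand-in for the function a JSX compiler targets.
// It turns createElement(type, props, ...children) calls
// into plain objects describing the element tree.
function createElement(type, props, ...children) {
  return { type, props: props || {}, children };
}

// <h1 className="heading">Hello, Pal</h1> compiles to roughly:
const hello = createElement('h1', { className: 'heading' }, 'Hello, Pal');

console.log(hello.type);            // 'h1'
console.log(hello.props.className); // 'heading'
console.log(hello.children[0]);     // 'Hello, Pal'
```

Once you see JSX this way, the class/function components you hunt down in Gutenberg's source are just the things those calls end up invoking.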

Wrapping up

We’ve had a quick run through some of the useful features of ES6. There’s a ton more to learn, but I wanted to focus your attention on the stuff we’ll be using in this tutorial series. I’d strongly recommend you further your learning with Wes Bos’ courses, JavaScript30 and ES6.io.

Next up, we’re going to build a mini React component!


Article Series:

  1. Series Introduction
  2. What is Gutenberg, Anyway?
  3. A Primer with create-guten-block
  4. Modern JavaScript Syntax (This Post)
  5. React 101 (Coming Soon!)
  6. Setting up a Custom webpack (Coming Soon!)
  7. A Custom "Card" Block (Coming Soon!)

The post Learning Gutenberg: Modern JavaScript Syntax appeared first on CSS-Tricks.



from CSS-Tricks https://ift.tt/2GIbBgE
via IFTTT

The MozCon 2018 Final Agenda

Posted by Trevor-Klein

MozCon 2018 is just around the corner — just over six weeks away — and we're excited to share the final agenda with you today. There are some familiar faces, and some who'll be on the MozCon stage for the first time, with topics ranging from the evolution of searcher intent to the increasing importance of local SEO, and from navigating bureaucracy for buy-in to cutting the noise out of your reporting.

We're also thrilled to announce this year's winning pitches for our six MozCon Community Speaker slots! If you're not familiar, each year we hold several shorter speaking slots, asking you all to submit your best pitches for what you'd like to teach everyone at MozCon. The winners — all members of the Moz Community — are invited to the conference alongside all our other speakers, and are always some of the most impressive folks on the stage. Check out the details of their talks below, and congratulations to this year's roster!

Still need your tickets? We've got you covered, but act fast — they're over 70% sold!

Pick up your ticket to MozCon!

The Agenda


Monday, July 9


8:30–9:30 am

Breakfast and registration

Doors to the conference will open at 8:00 for those looking to avoid registration lines and grab a cup of coffee (or two) before breakfast, which will be available starting at 8:30.


9:30–9:45 am

Welcome to MozCon 2018!
Sarah Bird

Moz CEO Sarah Bird will kick things off by sharing everything you need to know about your time at MozCon 2018, including conference logistics and evening events.

She'll also set the tone for the show with an update on the state of the SEO industry, illustrating the fact that there's more opportunity in it now than there's ever been before.


9:50–10:20 am

The Democratization of SEO
Jono Alderson

How much time and money do we collectively burn by fixing the same kinds of basic, "binary," well-defined things over and over again (e.g., meta tags, 404s, URLs, etc.), when we could be teaching others throughout our organizations not to break them in the first place?

As long as we "own" technical SEO, there's no reason (for example) for the average developer to learn it or care — so they keep making the same mistakes. We proclaim that others are doing things wrong, but by doing so we only reinforce the line between our skills and theirs.

We need to start giving away bits of the SEO discipline, and technical SEO is probably the easiest thing for us to stop owning. We need more democratization, education, collaboration, and investment in open source projects so we can fix things once, rather than a million times.


10:20–10:50 am

Mobile-First Indexing or a Whole New Google
Cindy Krum

The emergence of voice search and Google Assistant is forcing Google to change its model in search, to favor its own entity understanding of the world, so that questions and queries can be answered in context. Many marketers are struggling to understand how their website and their job as an SEO or SEM will change, as searches focus more on entity understanding, context and action-oriented interaction. This shift can either provide massive opportunities, or create massive threats to your company and your job — the main determining factor is how you choose to prepare for the change.


10:50–11:20 am

AM Break


11:30–11:50 am

It Takes a Village:
2x Your Paid Search Revenue by Smashing Silos
Community speaker: Amy Hebdon

Your company's unfair advantage to skyrocketing paid search revenue is within your reach, but it's likely outside the control of your paid search team. Good keywords and ads are just a few cogs in the conversion machine. The truth is, the success of the entire channel depends on people who don't touch the campaigns, and may not even know how paid search works. We'll look at how design, analysis, UX, PM and other marketing roles can directly impact paid search performance, including the most common issues that arise, and how to immediately fix them to improve ROI and revenue growth.


11:50 am–12:10 pm

The #1 and Only Reason Your SEO Clients Keep Firing You
Community speaker: Meredith Oliver

You have a kick-ass keyword strategy. Seriously, it could launch a NASA rocket; it's that good. You have the best 1099 local and international talent on your SEO team that working from home and an unlimited amount of free beard wax can buy. You have a super-cool animal inspired company name like Sloth or Chinchilla that no one understands, but the logo is AMAZING. You have all of this, yet, your client turnover rate is higher than Snoop Dogg's audience on an HBO comedy special. Why? You don't talk to your clients. As in really communicate, teach them what you know, help them get it, really get it, talk to them. How do I know? I was you. In my agency's first five years we churned and burned through clients faster than Kim Kardashian could take selfies. My mastermind group suggested we *proactively* set up and insist upon a monthly review meeting with every single client. It was a game-changer, and we immediately adopted the practice. Ten years later we have a 90% client retention rate and more than 30 SEO clients on retainer.



12:10–12:30 pm

Why "Blog" Is a Misnomer for Our 2018 Content Strategy
Community speaker: Taylor Coil

At the end of 2017, we totally redesigned our company's blog. Why? Because it's not really a blog anymore - it's an evergreen collection of traffic and revenue-generating resources. The former design catered to a time-oriented strategy surfacing consistently new posts with short half-lives. That made sense when we started our blog in 2014. Today? Not so much. In her talk, Taylor will detail how to make the perspective shift from "blog" to "collection of resources," why that shift is relevant in 2018's content landscape, and what changes you can make to your blog's homepage, nav, and taxonomy that reflect this new perspective.


12:30–2:00 pm

Lunch


2:05–2:35 pm

Near Me or Far:
How Google May Be Deciding Your Local Intent For You
Rob Bucci

In August 2017, Google stated that local searches without the "near me" modifier had grown by 150% and that searchers were beginning to drop geo-modifiers — like zip code and neighborhood — from local queries altogether. But does Google still know what searchers are after?

For example: the query [best breakfast places] suggests that quality takes top priority; [breakfast places near me] indicates that close proximity is essential; and [breakfast places in Seattle] seems to cast a city-wide net; while [breakfast places] is largely ambiguous.

By comparing non-geo-modified keywords against those modified with the prepositional phrases "near me" and "in [city name]" and qualifiers like "best," we hope to understand how Google interprets different levels of local intent and uncover patterns in the types of SERPs produced.

With a better understanding of how local SERPs behave, SEOs can refine keyword lists, tailor content, and build targeted campaigns accordingly.


2:35–3:05 pm

None of Us Is as Smart as All of Us
Lisa Myers

Success in SEO, or in any discipline, is frequently reliant on people's ability to work together. Lisa Myers started Verve Search in 2009, and from the very beginning was convinced of the importance of building a diverse team, then developing and empowering them to find their own solutions.

In this session she'll share her experiences and offer actionable advice on how to attract, develop, and retain the right people in order to build a truly world-class team.


3:05–3:35 pm

PM Break


3:45–4:15 pm

Search-Driven Content Strategy
Stephanie Briggs

Google's improvements in understanding language and search intent have changed how and why content ranks. As a result, many SEOs are chasing rankings that Google has already decided are hopeless. Stephanie will cover how this should impact the way you write and optimize content for search, and will help you identify the right content opportunities. She'll teach you how to persuade organizations to invest in content, and will share examples of strategies and tactics she has used to grow content programs by millions of visits.

4:15–4:55 pm

Ranking Is a Promise: Can You Deliver?
Dr. Pete Meyers

In our rush to rank, we put ourselves first, neglecting what searchers (and our future customers) want. Google wants to reward sites that deliver on searcher intent, and SERP features are a window into that intent. Find out how to map keywords to intent, understand how intent informs the buyer funnel, and deliver on the promise of ranking to drive results that attract clicks and customers.


7:00–10:00 pm

Kickoff Party

Networking the Mozzy way! Join us for an evening of fun on the first night of the conference (stay tuned for all the details!).



Tuesday, July 10


8:30–9:30 am

Breakfast


9:35–10:15 am

Content Marketing Is Broken
and Only Your M.O.M. Can Save You
Oli Gardner

Traditional content marketing focuses on educational value at the expense of product value, which is a broken and outdated way of thinking. We all need to sell a product, and our visitors all need a product to improve their lives, but we're so afraid of being seen as salesy that somehow we got lost, and we forgot why our content even exists. We need our M.O.M.s! No, not your actual mother. Your Marketing Optimization Map — your guide to exploring the nuances of optimized content marketing through a product-focused lens.

In this session you'll learn data and lessons from Oli's biggest ever content marketing experiment, and how those lessons have changed his approach to content; a context-to-content-to-conversion strategy for big content that converts; advanced methods for creating "choose your own adventure" navigational experiences to build event-based behavioral profiles of your visitors (using GTM and GA); and innovative ways to productize and market the technology you already have, with use cases your customers had never considered.


10:15–10:45 am

Lies, Damned Lies, and Analytics
Russ Jones

Search engine optimization is a numbers game. We want some numbers to go up (links, rankings, traffic, and revenue), others to go down (bounce rate, load time, and budget). Underlying all these numbers are assumptions that can mislead, deceive, or downright ruin your campaigns. Russ will help uncover the hidden biases, distortions, and fabrications that underlie many of the metrics we have come to trust implicitly and from the ashes show you how to build metrics that make a difference.


10:45–11:15 am

AM Break


11:25–11:55 am

The Awkward State of Local
Mike Ramsey

You know it exists. You know what a citation is, and have a sense for the importance of accurate listings. But with personalization and localization playing an increasing role in every SERP, local can no longer be seen in its own silo — every search and social marketer should be honing their understanding. For that matter, it's also time for local search marketers to broaden the scope of their work.


11:55 am–12:25 pm

The SEO Cyborg:
Connecting Search Technology and Its Users
Alexis Sanders

SEO requires a delicate balance of working for the humans you're hoping to reach, and the machines that'll help you reach them. To make a difference in today's SERPs, you need to understand the engines, site configurations, and even some machine learning, in addition to the emotional, raw, authentic connections with people and their experiences. In this talk, Alexis will help marketers of all stripes walk that line.


12:25–1:55 pm

Lunch


2:00–2:30 pm

Email Unto Others:
The Golden Rules for Human-Centric Email Marketing
Justine Jordan

With the arrival of GDPR and the ease with which consumers can unsubscribe and report spam, it's more important than ever to treat people like people instead of just leads. To understand how email marketing is changing and to identify opportunities for brands, Litmus surveyed more than 3,000 marketers worldwide. Justine will cover the biggest trends and challenges facing email today and help you put the human back in marketing’s most personal — and effective — marketing channel.

2:30–3:00 pm

Your Red-Tape Toolkit:
How to Win Trust and Get Approval for Search Work
Heather Physioc

Are your search recommendations overlooked and misunderstood? Do you feel like you hit roadblocks at every turn? Are you worried that people don't understand the value of your work? Learn how to navigate corporate bureaucracy and cut through red tape to help clients and colleagues understand your search work — and actually get it implemented. From diagnosing client maturity to communicating where search fits into the big picture, these tools will equip you to overcome obstacles to doing your best work.


3:00–3:30 pm

PM Break


3:40–4:10 pm

The Problem with Content &
Other Things We Don't Want to Admit
Casie Gillette

Everyone thinks they need content but they don't think about why they need it or what they actually need to create. As a result, we are overwhelmed with poor quality content and marketers are struggling to prove the value. In this session, we'll look at some of the key challenges facing marketers and how a data-driven strategy can help us make better decisions.


4:10–4:50 pm

Excel Is for Rookies:
Why Every Search Marketer Needs to Get Strong in BI, ASAP
Wil Reynolds

The analysts are coming for your job, not AI (at least not yet). Analysts stopped using Excel years ago; they use Tableau, Power BI, Looker! They see more data than you, and that is what is going to make them a threat to your job. They might not know search, but they know data. I'll document my obsession with Power BI and the insights I can glean in seconds which is helping every single client at Seer at the speed of light. Search marketers must run to this opportunity, as analysts miss out on the insights because more often than not they use these tools to report. We use them to find insights.



Wednesday, July 11


8:30–9:30 am

Breakfast


9:35–10:15 am

Machine Learning for SEOs
Britney Muller

People generally react to machine learning in one of two ways: either with a combination of fascination and terror brought on by the possibilities that lie ahead, or with looks of utter confusion and slight embarrassment at not really knowing much about it. With the advent of RankBrain, not even higher-ups at Google can tell us exactly how some things rank above others, and the impact of machine learning on SEO is only going to increase from here. Fear not: Moz's own senior SEO scientist, Britney Muller, will talk you through what you need to know.


10:15–10:45 am

Shifting Toward Engagement and Reviews
Darren Shaw

With search results adding features and functionality all the time, and users increasingly finding what they need without ever leaving the SERP, we need to focus more on the forest and less on the trees. Engagement and behavioral optimization are key. In this talk, Darren will offer new data to show you just how tight the proximity radius around searchers really is, and how reviews can be your key competitive advantage, detailing new strategies and tactics to take your reviews to the next level.

10:45–11:15 am

AM Break


11:25–11:45 am

Location-Free Local SEO
Community speaker: Tom Capper

Let's talk about local SEO without physical premises. Not the Google My Business kind — the kind of local SEO that job boards, house listing sites, and national delivery services have to reckon with. Should they have landing pages, for example, for "flower delivery in London?"

This turns out to be a surprisingly nuanced issue: In some industries, businesses are ranking for local terms without a location-specific page, and in others local pages are absolutely essential. I've worked with clients across several industries on why these sorts of problems exist, and how to tackle them. How should you figure out whether you need these pages, how can you scale them and incorporate them in your site architecture, and how many should you have for what location types?


11:45 am–12:05 pm

SEO without Traffic:
Community speaker: Hannah Thorpe

Answer boxes, voice search, and a reduction in the number of results displayed sometimes all result in users spending more time in the SERPs and less on our websites. But does that mean we should stop investing in SEO?

This talk will cover what metrics we should now care about, and how strategies need to change, covering everything from measuring more than just traffic and rankings to expanding your keyword research beyond just keyword volumes.


12:05–12:25 pm

Tools Change, People Don't:
Empathy-Driven Online Marketing
Community speaker: Ashley Greene

When everyone else zags, the winners zig. As winners, while your 101+ competitors are trying to automate 'til the cows come home and split test their way to greatness, you're zigging. Whether you're B2B or B2C, you're marketing to humans. Real people. Homo sapiens. But where is the human element in the game plan? Quite simply, it has gone missing, which provides a window of opportunity for the smartest marketers.

In this talk, Ashley will provide a framework of simple user interview and survey techniques to build customer empathy and your "voice of customer" playbook. Using real examples from companies like Slack, Pinterest, Intercom, and Airbnb, this talk will help you uncover your customers' biggest problems and pain points; know what, when, and how your customers research (and Google!) a need you solve; and find new sources of information and influencers so you can unearth distribution channels and partnerships.


12:25–1:55 pm

Lunch


2:00–2:30 pm

You Don't Know SEO
Michael King

Or maybe, "SEO you don't know you don't know." We've all heard people throw jargon around in an effort to sound smart when they clearly don't know what it means, and our industry of SEO is no exception. There are aspects of search that are acknowledged as important, but seldom actually understood. Michael will save us from awkward moments, taking complex topics like the esoteric components of information retrieval and log-file analysis, pairing them with a detailed understanding of technical implementation of common SEO recommendations, and transforming them into tools and insights we wish we'd never neglected.

2:30–3:00 pm

What All Marketers Can Do about Site Speed
Emily Grossman

At this point, we should all have some idea of how important site speed is to our performance in search. The recently announced "speed update" underscored that fact yet again. It isn't always easy for marketers to know where to start improving their site's speed, though, and a lot of folks mistakenly believe that site speed should only be a developer's problem. Emily will clear that up with an actionable tour of just how much impact our own work can have on getting our sites to load quickly enough for today's standards.

3:00–3:30 pm

PM Break


3:40–4:10 pm

Traffic vs. Signal
Dana DiTomaso

With an ever-increasing slate of options in tools like Google Tag Manager and Google Data Studio, marketers of all stripes are falling prey to the habit of "I'll collect this data because maybe I'll need it eventually," when in reality it's creating a lot of noise for zero signal.

We're still approaching our metrics from the organization's perspective, and not from the customer's perspective. Why, for example, are we not reporting on (or even thinking about, really) how quickly a customer can do what they need to do? Why are we still fixated on pageviews? In this talk, Dana will focus our attention on what really matters.


4:10–4:50 pm

Why Nine out of Ten Marketing Launches Suck
(And How to Be the One that Doesn't)
Rand Fishkin

More than ever before, marketers are launching things — content, tools, resources, products — and being held responsible for how/whether they resonate with customers and earn the amplification required to perform. But this is hard. Really, really hard. Most of the projects that launch, fail. What separates the wheat from the chaff isn't just the quality of what's built, but the process behind it. In this presentation, Rand will present examples of dismal failures and skyrocketing successes, and dive into what separates the two. You'll learn how anyone can make a launch perform better, and benefit from the power of being "new."


7:00–11:30 pm

MozCon Bash

Join us at Garage Billiards to wrap up the conference with an evening of networking, billiards, bowling, and karaoke with MozCon friends new and old. Don't forget to bring your MozCon badge and US ID or passport.



Grab your ticket today!


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!



from The Moz Blog https://ift.tt/2s591vX
via IFTTT

Passkeys: What the Heck and Why?

These things called passkeys sure are making the rounds these days. They were a main attraction at W3C TPAC 2022, gained support in Saf...