Introducing the Skill-ometer

About a mile from where I live, there’s a soccer field. If you were to pass by, you would see elementary-school teams practicing in the traditional way. Coaches set out orange cones, and kids form lines and wait their turn to participate in various drills.

If you saw them, you might think: Seems like a lot of kids are just standing around.

A few blocks away stands a high school. If you were to pass by, you would see a math class. The class operates in the traditional way: students sit silently in their seats as the teacher gives her lecture.

If you saw them, you might think: Seems like a lot of students are zoning out.

You might think those thoughts. But you wouldn’t have a way to objectively measure the effectiveness of their learning. You wouldn’t have a yardstick.

And you should.

If science has taught us anything over the past few years, it’s that not all learning spaces are created equal. High-quality methods of practice are efficient because they are aligned with the ways our brains actually improve. Ineffective methods are inefficient because they are aligned with tradition, or emotion, or the teacher’s ego, or what looks good.

There are an infinite number of ways to screw up a learning session. But high-quality practice sessions share a few basic characteristics. Which means that it should be possible to create a simple metric to measure practice effectiveness. And since that yardstick doesn’t seem to exist, I thought I’d take a crack at creating one.

Please say hello to the Skill-ometer, an attempt at measuring practice effectiveness by measuring seven key elements.

Here’s how it works: Score your practice session by responding to each of the following statements on a scale of 1-5: 1 = strongly disagree; 2 = disagree; 3 = neutral; 4 = agree; 5 = strongly agree.

  • Intensity: We gave 100 percent effort and attention.
  • Engagement: We were emotionally immersed in the tasks we took on. We knew what we had to do, and it felt like a game.
  • Practicality: We practiced exactly the skill that we’ll be using later, in the same way that we’ll be using it in “game situations.”
  • Repetitions: We embraced the value of repetitions, especially for the most challenging skills.
  • Clarity: We understood the day’s goal, and where it fit in the larger picture.
  • Reachfulness: We were pushed to spend time on the edge of our abilities, struggling and reaching just past our current competence.
  • Fun: It was hard, but not miserable. There were moments of laughter and surprise.

Scoring (out of a maximum of 35):

  • 30-35: You are in the elite zone, hanging out with Peyton Manning and Yo-Yo Ma. Keep doing what you’re doing.
  • 25-29: This is a B-plus. You are highly effective, with a few things to work on.
  • 15-24: This is closer to a B-minus. You do a few things well, but have some clear weak spots that need addressing.
  • 7-14: You need to rethink your approach and design. Start by finding those in your field who score higher and study them.
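If you like to keep a log, the tally is easy to automate. Below is a minimal Python sketch, assuming one 1-5 rating per element; the element names and score bands are taken straight from the lists above, while the function names and the example session are just illustrative.

    # A minimal sketch of the Skill-ometer tally, assuming one 1-5 rating
    # per element. Element names and score bands come from the rubric above;
    # the function names and example numbers are only illustrative.

    ELEMENTS = [
        "Intensity",
        "Engagement",
        "Practicality",
        "Repetitions",
        "Clarity",
        "Reachfulness",
        "Fun",
    ]


    def skillometer_score(ratings: dict) -> int:
        """Sum the seven 1-5 ratings into a total out of 35."""
        for name in ELEMENTS:
            value = ratings[name]
            if not 1 <= value <= 5:
                raise ValueError(f"{name} must be rated 1-5, got {value}")
        return sum(ratings[name] for name in ELEMENTS)


    def interpret(total: int) -> str:
        """Map a total score to the bands described above."""
        if total >= 30:
            return "Elite zone: keep doing what you're doing."
        if total >= 25:
            return "B-plus: highly effective, with a few things to work on."
        if total >= 15:
            return "B-minus: some clear weak spots to address."
        return "Rethink your approach; study those who score higher."


    # Example: a session that was intense and fun but light on repetitions.
    session = {
        "Intensity": 5, "Engagement": 4, "Practicality": 4,
        "Repetitions": 2, "Clarity": 3, "Reachfulness": 4, "Fun": 5,
    }
    total = skillometer_score(session)
    print(total, "-", interpret(total))  # 27 - B-plus: highly effective...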

Now, this is just a rough first attempt, but it’s interesting that most of these elements are about design and communication: areas that 1) are controlled by the coach and 2) can be planned for in advance.

I think it underlines the fact that the most effective learning sessions don’t depend on what happens in the classroom or on the field, but rather on what happens in the days and hours before, when the teacher or coach is thinking, planning, and communicating.

So here’s my question: what other factors do you think should be included in this metric? What other characteristics mark your most effective learning sessions? I’d love to hear your suggestions and ideas.