Rating Speakers, Control, and Context

I recently read Scott Berkun’s Confessions of a Public Speaker, and it got me thinking about speaker feedback. It was a timely read, as I’m (with a number of co-organizers) in the middle of preparations for WordCamp Boston this January.

How can we be sure the speakers we’ve accepted will deliver? How can we ensure they get the feedback they deserve (positive or negative)? Would using a site like SpeakerRate improve the situation?

Scott Berkun's Confessions of a Public Speaker

Berkun’s book isn’t really in the “how to become a better speaker” genre, though I think committed speakers who read it will undoubtedly get better. I’d have subtitled it “What Makes Consistently Successful Public Speaking Nearly Impossible.” Berkun covers all kinds of interferences which prevent talks from being successful, only some of which are under the direct control of the speaker. Room configuration, human physiology and instincts, poor alignment between speaker and occasion, the disconnect between what organizers need and what audiences want, and the sheer difficulty of teaching anyone anything (let alone teaching several hundred people the same thing all at once) all figure into what makes talks go poorly.

(This not incidentally makes the book great reading for conference organizers, managers who send employees to seminars, and people who attend talks, as well as potential or frequent speakers.)

In the chapter on “things people say” Berkun has a whole section on “Why most speaker evaluations are useless.” He points out that:

Most organizers never bother to collect feedback from the attendees, and of those who do, often it doesn’t get passed on to the speakers. It’s a shame because it’s most appropriate for the organizers to share feedback with the speakers; after all, they invited them to speak, so technically the speakers work for the hosts. But being as busy as they are, the organizers don’t always communicate the data gathered back to the speakers. They ask the good speakers to come back and leave the rest to figure out life for themselves.

Even on occasions where feedback is gathered and shared with speakers, it’s still not very useful as it provides no context. Berkun shares a real example in which his talk was rated by attendees on a scale from “very dissatisfied” to “very satisfied.” But, he points out, just knowing the distribution of the attendees who bothered to fill out the form (129 out of 500 or so in this case) along that scale doesn’t do any good without a comparison:

But the single most valuable data point is how my scores compare to other speakers. Without it, this feedback is useless. Perhaps my scores are the worst of all scores in the history of presentations at this organization. Or perhaps they’re the best. There is no way to know.

Berkun points out the feedback speakers really need:

  • How did my presentation compare to the others?
  • What one change would have most improved my presentation?
  • What questions did you expect me to answer that went unanswered?
  • What annoyances did I let get in the way of giving you what you needed?

Giving speakers feedback based on this set of questions would be much more likely to improve their performance.

(Aside #1: something about the annoyances question still bugs me. When I was teaching I used to talk about “productive frustration” – which naturally comes from learning something new – as opposed to “non-productive frustration” – which comes from poorly written assignments, badly planned logistics, and other stuff not directly related to the hard work of learning. I think this is the sense Berkun’s after for “annoyances” here, but I don’t think it quite gets across. Maybe something more like “What did I do or fail to do that got in the way of you getting what you needed?”)

Which brings me to SpeakerRate. It’s a site which lets users give direct feedback on speakers and their talks. It feels to me mostly aimed at speakers themselves, though the site says:

SpeakerRate is a community site for event organizers, attendees, and speakers.

  • Event organizers can find speakers, learn about talks they’ve given in the past, and determine who would be a good match for the event they’re organizing.
  • Event attendees can provide constructive feedback to speakers, track the talks they’ve attended, and research upcoming talks that they might attend.
  • Event speakers can get valuable constructive feedback directly from attendees and find out how they can improve their content and delivery for their next talk. They can also establish a SpeakerRating, which will help them earn future speaking opportunities.

On the surface, this seems to me a great thing: let those who attend talks provide feedback directly to speakers, cutting out the need for event organizers to collect and manage feedback. Thinking about it as a speaker, frankly, it’s a bit frightening. In much the same way that I’m grateful there was no RateMyProfessors when I was teaching, I worry that this might encourage or facilitate the worst kinds of superficial feedback and speaker trashing. What if someone with an axe to grind starts leaving negative comments? Would other attendees come to the rescue of a speaker thus trashed?

SpeakerRate.com Homepage

(Aside #2: As a consultant working with companies that worry about what will be said about their products in social media, I find it quite easy to dismiss their concerns: the conversation will happen anyway, and you can’t be so invested in your belief that your products are superior that you ignore real feedback from real people. Funny how difficult it is to apply this same line of thought when the prospect of being rated on SpeakerRate arises. Why does this scare the crap out of me, while SlideShare doesn’t?)

Speakers (or, apparently, organizers) set up a page for each talk at SpeakerRate. Users of the site are then given the opportunity to rate that talk and leave comments.

SpeakerRate form showing Delivery and Content
SpeakerRate comment form

The problem is that I’m not sure how effective a simple rating on “Content” and “Delivery,” plus a box for unprompted free-text comments, is at conveying useful feedback. Wouldn’t it be better to offer a prompt other than “Leave a comment”? Maybe even allow speakers to ask specific questions of their own?

I suppose that the “speaker rating,” which comes from some aggregate measure across multiple events, would give you some rough sense of how different speakers compare.

But is the “Speaker Rating” (a single number, presented to two decimal places, implying a fair degree of precision if not accuracy) enough to really validate a speaker’s abilities?
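(To make the precision point concrete, here’s a back-of-the-envelope sketch in Python. I’m only guessing that the rating is a simple average of the 1–5 content and delivery scores; SpeakerRate doesn’t spell out its formula anywhere I’ve seen. But even under that assumption, a two-decimal number can rest on fewer than a dozen coarse-grained votes.)

    # Hypothetical aggregation only; this is my assumption, not SpeakerRate's
    # documented formula. Each attendee rates content and delivery on a 1-5 scale.
    def speaker_rating(ratings):
        """ratings: list of (content, delivery) pairs on a 1-5 scale."""
        per_rating = [(content + delivery) / 2 for content, delivery in ratings]
        return round(sum(per_rating) / len(per_rating), 2)

    # Nine attendees across a couple of talks:
    votes = [(5, 4), (4, 4), (5, 5), (3, 4), (4, 5), (5, 4), (4, 3), (5, 5), (4, 4)]
    print(speaker_rating(votes))  # 4.28: two decimals of "precision" from nine coarse votes

Even a perfectly honest average like this compresses away everything about the room, the audience, and the occasion.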

Isn’t much more context necessary to really understand what a speaker has to offer an audience? Speakers who might be great in one context (a highly technical demo or how-to in front of 30 experienced developers) might be horrible in another context (a keynote to an audience of varied levels of experience).

Have you used SpeakerRate? Have you found it useful as a speaker, an event organizer, or even a prospective event attendee? Would you recommend it to your neighborhood event organizer?

5 Comments

  1. John – thanks for taking a detailed look at SpeakerRate and for asking some good questions about the site. I’m one of the guys who helped build it, so I thought I’d share a few thoughts:

    1. Your first quote from Scott’s book is exactly the problem we hoped to solve with the site. Most speakers want constructive (read: not necessarily all positive, which is all you’ll get face-to-face after a talk) feedback on how they can improve. Our “last slide” goal is to have everyone include their SpeakerRate profile URL in their presentation where they’ll ask for direct feedback.

    2. The rating system (content & delivery, 1-5 scale) was something we discussed at length as we built the site. It’s not perfect, but we feel like it strikes a good balance with the direct comments. It’s a quick view on how you did, and rolls up nicely into an overall rating both for speakers and events.

    3. As for being frightening, we had the same concerns. Trolls posting bad ratings & comments unjustifiably aren’t good for anyone. To address this, we (a) ask people to verify their SpeakerRate profile by connecting it with their LinkedIn profile (an existing profile that they presumably care about). We’re still working out some kinks, but this seems to work pretty well. We also (b) allow anyone to flag comments as “non-constructive.” This way the community can help keep the comments appropriate.

    We’ve been excited to see the response to the site this year, and we’re looking forward to doing more with it in 2010. Feedback from folks like you (and your readers) is very helpful. Thanks again!

  2. Thanks for the comment, Brian – I hadn’t noticed the connection to LinkedIn, and I agree that using an identity people are invested in will greatly help reduce at least the highest-volume kinds of bad behavior.

    I guess ultimately it’s a challenge: listing myself on speakerrate and letting folks make public comment requires being comfortable with an open, transparent conversation – which feels (but isn’t logically) different than knowing people might tweet, blog, or otherwise discuss your presentations in public.

    In other words, I need to walk the walk and start listing my talks on speakerrate. ;)
