Building Evaluation Capacity

I recently attended the 27th Annual Visitor Studies Association conference in Albuquerque, NM. Given the theme was Building Capacity for Evaluation: Individuals, Institutions, the Field, it’s not surprising that “capacity building” was a common topic of discussion throughout the week. What do we mean by building capacity? Whose capacity are we building and why? Pulling together threads from throughout the conference, here are some of my thoughts:

Individual capacity building:

Any conference offers a chance to hear about developments in the field and to build your professional networks, which is a form of personal capacity-building. VSA in particular runs professional development workshops before and after the conference as an opportunity to sharpen your skills, be exposed to different approaches and learn new techniques. These are useful both for newcomers to the field and for more experienced researchers who might be interested in new ways of thinking, or new types of data collection and analysis.

A common thread I noticed was both the opportunities and challenges presented by technology – video and tracking software allow you to collect much more detailed data, and you can integrate different data types (audio, tracking data) into a single file. But technology’s no panacea, and good evaluation still boils down to having a well thought-through question you’re looking to investigate and the capacity to act on your findings.

Panel session at VSA 2014

Institutional capacity building:

There were a lot of discussions around how to increase the profile of Evaluation and Visitor Research within institutions. There seemed to be a general feeling that “buy-in” from other departments was often lacking: evaluation is poorly understood and therefore not valued by curators and others whose roles do not bring them into regular, direct contact with visitors. Some curators apparently come away with the impression that evaluators only ask visitors “what they don’t like”, or otherwise have a vested interest in exposing problems rather than celebrating successes[1]. Others believe they “already know” what happens on the exhibition floor, but without systematic observation may only be seeing what they want to see, or otherwise drawing conclusions about what works and what doesn’t based on their own assumptions, rather than evidence.

For many, the “aha!” moment comes when they become involved in the data collection process themselves. When people have an opportunity to observe and interview visitors, they start to appreciate where evaluation findings come from, and are subsequently more interested in the results. Several delegates described Damascene conversions of reluctant curators once they had participated in an evaluation. But others expressed reservations about this approach – does it give colleagues an oversimplified view of evaluation? Does it create the impression that “anyone can do evaluation”, therefore undermining our skills, knowledge and expertise? What about the impact on other functions of the museum: if curators, designers and others are spending time doing evaluation, what parts of their usual work will need to be sacrificed?

A counter to these reservations is that visitors are arguably the common denominator of *all* activities that take place in informal learning institutions, even if this isn’t obvious on a day-to-day basis in many roles. Participating in data collection acts as a reminder of this. Also, at its best, evaluation fosters more reflective practice generally. But the concerns are nonetheless valid.

Capacity building across the Field:

I found this part of the discussion harder to engage with, as it was (understandably) focused on the US experience and was difficult to extrapolate to the Australian context due to massive differences in scale. One obvious difference is the impact that the National Science Foundation has had on the American museum landscape. NSF is a major funder of the production and evaluation of informal science learning [2]. NSF-supported websites host literally hundreds of evaluation reports (which actually extend beyond the “science” remit their names imply – a resource worth checking out).

There are a considerable number of science centres and science museums across the US, and because of these institutions’ history of prototyping interactive exhibits, they tend to have a larger focus on evaluation and visitor research than (say) history museums. Indeed, most of the delegates at VSA seemed to represent science centres, zoos and aquariums, or were consultant evaluators for whom such institutions are their principal clients. There was also a reasonable art museum presence, and while there were a few representatives of historical sites, on the whole I got the impression that history museums were under-represented.

In any case, I came away with the impression that exhibition evaluation is more entrenched in museological practice in the US than it is here in Australia. It seems that front-end and formative research is commonly done as part of the exhibition development process, and conducting or commissioning summative evaluations of exhibitions is routine. In contrast, besides a handful of larger institutions, I don’t see a huge amount of evidence that exhibition evaluation is routinely happening in Australia. Perhaps this is just the availability heuristic at play – the US is much bigger so it’s easier to bring specific examples to mind. Or it could be that evaluation is happening in Australian museums, but as an internal process that is not being shared? Or something else?


[1] A lesson from this is that evaluation reports may read too much like troubleshooting documents and not give enough attention to what *is* working well.

[2] The Wellcome Trust plays a similar role in the UK, but as far as I’m aware there is nothing comparable (at least in scale) in Australia.

Museum Life Interview

I’m currently on my way back from Albuquerque, New Mexico, where I attended the Visitor Studies Association annual conference. It’s been a very thought-provoking conference and has been a chance for me to present some of the results from my PhD research (more on the conference later, once I’ve had a chance to digest it all).

Sometimes when you’re in a different time zone, interesting opportunities present themselves – this time, while in Albuquerque, I was a guest on Carol Bossert’s online radio program Museum Life. It streamed live but is also available online.

It’s an in-depth interview: the whole show goes for a little under an hour (so go grab a coffee now if you plan to listen...). I talk a little bit about how I came to museums, what led to me pursuing a PhD, an overview of some of my research findings, and how I think these findings might be applied to museum practice. I hope you find it interesting!

What do you want / need from an exhibition designer?

Exhibition design can be hard to pin down sometimes. It has been described as

“...a mode of communication that has meant different things at different times, continues to change and expand, and, in fact, is not even recognised universally as a discipline at all.” (Lorenc, Skolnick, & Berger, 2010, p. 12)

So if you’re commissioning an exhibition designer for the first time, it can be hard to know what you should be looking for. And it’s not a one-size-fits-all thing.

Many different types of specialists may lay claim to being able to design interpretive exhibitions. Such designers range from those with a grab-bag of soft skills that are hard to encapsulate in a few words, to people with clearly defined and quantifiable skill sets such as architects. And there’s a lot in between. In a tendering process, these apples and oranges may find themselves in direct competition with one another. If you’re the person letting and assessing tenders, on what basis should you choose?

I’ve been thinking through some of the issues I think clients should consider before commissioning a design team. This is what I’ve come up with so far:

Square pegs in round holes

It’s possible for a team to have the right skills, but deploy them in an inappropriate way. For instance, a big architectural firm may have ample experience in large, complex buildings and fit-outs such as office buildings or shopping malls. Such a track record can be reassuring. But – if they see a museum as just another fit-out along the same lines, they may try to shoehorn it into the same production processes and protocols. Such a work plan will underestimate the amount of time and iteration it can take to get an exhibition layout, graphics and other media all working together in harmony. Office blocks and shopping malls don’t need to worry about “storylines”, so don’t expect standard fit-out processes to be able to accommodate them.

Such shoehorning is more likely to happen when a client uses a modified version of a boilerplate construction tender to call for bids: it doesn’t take into account the specific variables and vagaries of an exhibition.

A question to ask yourself: Does the firm “get” exhibitions or do they see them as yet another fit-out?

The certainty of the cookie cutter

In any exhibition project, certainty and creativity will be in tension. Maximising certainty will lead to cookie-cutter outcomes. Meanwhile, creativity can only flourish in a situation where there is room to make mistakes. Innovation comes with risk. Any given project will need to decide where it wants the creativity-certainty balance to lie. You can’t have your creativity cake and have the certainty of eating it!

Because it’s generally framed in terms of minimising risk, competitive tendering tends to prioritise certainty over creativity. This is not necessarily a problem. But, if you want innovation, you need to ensure your procurement processes allow space for it to happen. A standard tender probably won’t.

A question to ask yourself: Are we making it clear how much certainty we want and how much risk we can tolerate, or is our procurement process sending a mixed message in that regard?

Loose briefs

More often than not, it’s not what the brief says that will make you come unstuck, it’s what it doesn’t say. I’ve learned this one from bitter experience! Writing a brief is a bit like playing the tappers and listeners game – we forget that what’s obvious to us, frequently isn’t to anyone else. Misunderstandings in interpreting the brief can also be a failure of imagination on the brief-writer’s part – a case of not spelling it out simply because you can’t envisage it being any other way.

Another weakness of briefs is that they are often expected to capture in words a very specific and detailed image we have in our mind’s eye. A written brief can only ever be the tip of the iceberg, and how someone will interpret it will vary hugely depending on their thinking style, prior experience, etc. Exhibitions are a visual medium. Sometimes it might be better to say it with a picture than leave it to words alone.

Things to try: Include visual materials such as mood boards as part of the brief. Also, make a “return brief” document an early-stage deliverable in the design project. This gives you and the designer a chance to make sure you’re on the same page and iron out any wildly different interpretations of what’s expected.

Being a “good client”

I’ve been both sides of the client / designer fence, and appreciate that it’s a two-way street. No amount of dedication, skill or experience on the part of the design team can rescue fundamental issues with the client team, such as:

  • not making decisions, particularly time-critical ones
  • one client representative saying x, another saying y
  • not respecting the fact that you’re paying for a process, not just a product. Just because nothing has been built yet, doesn’t mean costs haven’t been incurred. Yes, iterations are part of the process but they cannot be done indefinitely without it affecting the price
  • not giving clear direction and feedback beyond “I’ll know it when I see it”
  • not recognising the limitations of your budget and timeframe
  • protracted, complicated and time-consuming procurement processes that expect design concepts at the pitch stage. This is one of the biggest bugbears of the design industry, and could be a post in its own right.

What tips would you give to a person looking to commission an exhibition designer for the first time?

Update: I posted this piece on LinkedIn, where there were a few very useful comments. Briefly:

  • Price shouldn’t be a key consideration in choosing a designer – it’s more important to have someone who understands what you want and how you work.
  • Be an informed client – do your homework about what you like and what you don’t
  • Resist the temptation to squeeze ‘just one more thing’ into the exhibition – “decide what to say, say it, then shut up!”

Reference: Lorenc, J., Skolnick, L., & Berger, C. (2010). What is exhibition design? Mies, Switzerland: Rotovision.