I receive emails from the Chicago Museum Exhibitors Group (CMEG), and the organization recently announced a meeting dedicated to evaluating exhibits, entitled Everyone Evaluate. I’ve never attended one of their events, and this topic in particular intrigued me, so I decided to go. Evaluation of this sort does not factor heavily in my day-to-day work, so I welcomed the opportunity to learn more from the experts. Museums are known for exhibits, but more archives and libraries are organizing similar experiences in their own spaces. So perhaps I will be a part of exhibition design and planning in the future, and will therefore need to understand the basics of evaluation!
The newly rebranded and relocated Chicago Architecture Center (CAC) hosted the meeting. Dozens of folks from across Chicagoland attended, including people who work at cultural institutions, companies, and consulting firms. I felt out of my element in this crowd of exhibit and instruction designers, and I enjoyed sitting back and taking it all in. I recognized a few faces from the Art Institute of Chicago and the DuSable Museum of African American History.
First, Michael Wood, the Senior Director of Program Strategy, provided some information about CAC, its recent change of name, and its move to new facilities. After having spent half an hour or so wandering around the new exhibitions, it was interesting to hear about all the changes the organization has recently undergone. I have had the opportunity to visit their old space, attend some of their tours, and take advantage of the wonderful annual open house event they organize. The exhibitions and the new space seem to harmonize with the organization’s new identity, though Wood made it clear that the changes have not been without challenges. He also introduced the topic of evaluation and walked us through the agenda and format for the meeting.
Katherine Gean of Katherine Gean Consulting was the first to present on the topic. She provided a high-level view of what evaluation looks like in the context of cultural heritage exhibits. She first stressed the importance of figuring out research questions: what do I need to know, what do I want to learn, and how do I want to study it? Gathering information and data becomes much more straightforward when there are clear parameters around the goals of the investigation. Gean then explained the difference between quantitative (numeric counts, generalizable) and qualitative (descriptive exploration, not as generalizable) data gathering, and how combining the two through a mixed methods approach sometimes works best. In fact, she said that the pursuit of answering one question via one method (quantitative or qualitative) often results in more questions arising, and different methods needing to be employed in order to answer those questions. The process is therefore often iterative. She also provided some examples of methods within each category (a small sketch after this list illustrates the two kinds of data):
Quantitative: surveys, timing and tracking
Qualitative: interviews, focus groups, follow-alongs, observation, cognitive interviews
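To make the distinction concrete, here is a minimal sketch of how the two kinds of data might be summarized side by side. The ratings and interview codes are invented for illustration, not examples from Gean’s talk:

```python
from collections import Counter
from statistics import mean

# Hypothetical evaluation data: numeric survey ratings (quantitative)
# and thematic codes assigned to interview transcripts (qualitative).
survey_ratings = [5, 4, 4, 3, 5, 2, 4]  # e.g., "rate your visit, 1-5"
interview_codes = ["wayfinding", "labels", "wayfinding",
                   "hands-on", "labels", "wayfinding"]

# Quantitative: a generalizable summary statistic.
print(f"Mean visit rating: {mean(survey_ratings):.2f} (n={len(survey_ratings)})")

# Qualitative: descriptive themes, counted to spot patterns worth
# exploring further (often prompting new questions and new methods).
for theme, count in Counter(interview_codes).most_common():
    print(f"{theme}: mentioned in {count} interviews")
```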
Jana Greenslit, who works at the Museum of Science and Industry (MSI), was next to speak. She described their efforts to measure awe using in situ evaluation. Evaluation was embedded within the exhibit and overall museum experience, so they did not rely on post-visit surveys or interviews. Observing visitors within gallery experiences was difficult given the nature of the research question: measuring awe as a passive observer is challenging, since feelings aren’t always visible or apparent. Instead, Greenslit opted for a combination of experience sampling and eye tracking to help determine how awe-inspiring the museum experience is. For the experience sampling, the museum lent inexpensive cell phones to select visitors, or had visitors opt in with their own phones. One staff member was then tasked with texting these devices to ask visitors to rate their experience on a numerical scale as they were having it. Greenslit also used eye tracking glasses to help determine what visitors were looking at, what they spent the most time with, and what they were saying as they moved through the spaces. Essentially, this technology allowed for observation in both qualitative and quantitative ways (through analyzing and encoding the footage) without a museum staff representative needing to be present. It sounds as though they have reached some conclusions with regard to their original query, and hopefully the findings will be published on their website soon.
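As a thought experiment, here is roughly what reducing those experience-sampling pings to an awe profile could look like. The data format, rating scale, and time windows are all my assumptions, not details from MSI’s study:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical experience-sampling pings: (visitor_id, minutes into
# visit, self-reported awe rating on an assumed 1-7 scale).
pings = [
    ("v1", 10, 3), ("v1", 30, 6), ("v1", 55, 5),
    ("v2", 12, 4), ("v2", 35, 7), ("v2", 60, 6),
]

# Bucket ratings into 20-minute windows and average across visitors,
# giving a rough picture of how awe evolves over the course of a visit.
buckets = defaultdict(list)
for visitor, minutes, rating in pings:
    buckets[minutes // 20 * 20].append(rating)

for start in sorted(buckets):
    print(f"{start}-{start + 19} min: mean awe {mean(buckets[start]):.1f}")
```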
Rosie May, who works at the Museum of Contemporary Art Chicago (MCA), presented last. May started by recommending the article ‘The Museum is Watching You’ in the Wall Street Journal, as it provides a helpful overview of evaluation and its value in museums. She then talked about how she began working at MCA in interpretation, and how evaluation played an important role in better understanding their visitors and their needs. Some of the initial questions she posed included:
What are visitors’ behaviors in the gallery spaces?
What interpretation tools (introductory text, videos, wall labels) do visitors use?
How do visitors construct meaning, and how do they learn?
What is the value of the exhibit experience for visitors?
May opted to employ two methods to gather quantitative and qualitative data to help answer these questions. First, staff interviewed visitors after they experienced the galleries; a standard evaluation form was used to note the answers, and the interviews were recorded as well. A team of in-gallery observers was also used to measure timing and tracking in front of interpretation tools and the collection objects on display. May was able to see how visitors behaved in the space - how they moved and where they spent their time. From this mixed methods approach, the museum learned that visitors struggle to navigate through exhibits, and that they want to know how long to expect to spend in an exhibit. Visitors expect wall labels to be next to every object on display, and they appreciate when these labels are concise and provide tools to help them look at and interpret objects. The data also revealed that visitors want active learning activities, since art museum exhibits tend to be fairly passive experiences. From all this information, the exhibitions team made concrete changes to improve visitors’ experiences in exhibits:
Since visitors spent on average less than a minute in front of labels, interpretive text has been edited so that it can be read in that amount of time (see the back-of-the-envelope sketch after this list).
Since standard tombstone information at the top of labels (donor information, identifying accession number) was found to be confusing, some of this information has been moved to the bottom of the label.
Locating labels next to their corresponding objects has been prioritized in exhibition installation.
Clearer wayfinding signage was produced and installed throughout exhibition spaces.
More exhibits are featuring rooms in which visitors can actively respond to ideas presented.
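Out of curiosity, here is a back-of-the-envelope sketch of the arithmetic behind that first label change. The dwell times and the 200-words-per-minute reading speed are my own illustrative assumptions, not figures from May’s talk:

```python
from statistics import median

# Hypothetical timing-and-tracking observations: seconds each tracked
# visitor spent in front of a given wall label.
label_dwell_seconds = [22, 48, 35, 60, 15, 41, 55, 30]

# A common rule-of-thumb adult reading speed; a real study would
# measure this rather than assume it.
WORDS_PER_MINUTE = 200

dwell = median(label_dwell_seconds)
word_budget = int(dwell / 60 * WORDS_PER_MINUTE)

print(f"Median dwell time: {dwell:.0f} seconds")
print(f"Implied label length: about {word_budget} words")
```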
Everyone attending the event then had the chance to put some of these practices and methodologies to use. Wood provided us with a prompt from his institution in the hopes that the group could come up with concrete research questions and methods to help CAC. In short, since the move and the rebranding, many visitors seem confused about what they can do there and what they should expect. We broke into teams to discuss and explore.

My group came up with the following questions: what is CAC, and what do people think CAC is? What do visitors want, and are they interested in the exhibit experience? Interestingly, folks identified the first two questions as aligning more closely with market research than audience research. Both sets of questions are important for the organization to answer in order to clarify its services and better provide for its visitors. For the identity component, we decided that organizing focus groups for staff, visitors, and non-visitors would be helpful in order to gather some qualitative data about mission and services; segmenting the audience this way can provide additional insights. The group also thought eye tracking might provide useful data, especially given the apparent visitor confusion inside the spaces. Finally, tracking and tallying specific visitor questions could help reveal perceptions or misunderstandings among visitors. The group decided that the identity questions should be a top priority for CAC, but that concept testing could help with the visitor experience questions in the future. The goal of pursuing these questions and gathering data through these methods would be to find ways of changing for the better: improved messaging in advertising, clearer signage on the exterior of the building explaining CAC to visitors, and better communication across all services provided by the organization.
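For the question-tallying idea in particular, even a very simple tool would do. Something along these lines (with invented categories) could summarize a shift’s worth of hand-coded front-desk questions:

```python
from collections import Counter

# Hypothetical log of questions fielded by front-desk staff, each
# hand-coded into a category at the end of a shift.
logged_questions = [
    "what_is_cac", "tour_times", "what_is_cac", "where_are_exhibits",
    "tour_times", "what_is_cac", "membership",
]

# Tallying the categories surfaces the most common points of confusion,
# which can then guide signage and messaging changes.
for category, count in Counter(logged_questions).most_common():
    print(f"{category}: {count}")
```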
This event was an amazing learning opportunity for me. I walked in with some basic understanding of how evaluation works, and left feeling much more confident about the process and specific methodologies. And crucially, I now more fully understand what the aims of evaluation are (answering specific research questions) and what the overall goals should be (improving the experience and services offered). Cultural heritage exhibitions are important experiences that help connect the public with information and ideas through the display of objects, visuals, interpretive text, and hands-on activities. It’s exciting to think about the ways in which these experiences can be improved through strategic and iterative evaluation. And it’s also worth considering all the ways in which this type of evaluation extends beyond exhibits in information organizations.