Rethinking Our Sacred Cows
We all have our sacred cows. You know, the way we’ve always done things. And the truth is, it often takes a lot to get us to try something new. We have our tried and true and, especially when we’re stressed, we tend to fall back on what we know. It’s not unlike cooking. Like many others, when I’m busy or rushed, it’s the favourite recipes that I reach for. Yet, if we want to go in a different direction, relying on our old recipes is like only paddling on one side of the canoe. Not only will we have difficulty getting to a new place, we will be travelling in circles. It is true, as Albert Einstein once said, “We can’t solve problems by using the same kind of thinking we used when we created them.”
This week I took part in a workshop that was a reminder of how critical it is for each of us to remain open and receptive to doing things differently. After all, if we’re serious about wanting different results in our lives, businesses, organizations, and communities, we’re going to have to apply new strategies and new ways of determining our priorities.
The workshop was based on the introduction of a tool designed for public health practitioners working with communities. After our individual introductions, we were all asked to answer the question, “What are the top 3 things your group/agency/organization thinks will be most effective to help people be healthy in your community?”
As one might imagine, the answers were very diverse and included such responses as more walking trails and recreation facilities, after-hour and weekend use of schools, family activities, greater awareness of the importance of good nutrition, financial subsidies for low-income families, safe transportation, political leadership, community meeting spaces, etc.
All of the answers served to educate those in attendance about the complexity of what it takes to be a healthy community and the impact of the physical, economic, socio-cultural, and political environments. Subsequently, all of the answers from the participants were organized within these four environments, and four groups were formed to brainstorm specific strategies.
After all of the brainstormed ideas were captured within the groups, we were individually asked to score each idea for its (1) importance and (2) changeability (meaning how easy we thought it would be to make that change a reality). Individual answers within the group were then added and averaged to determine our top five priorities.
Sounds like a great process right?
While it definitely had its merits, what we saw happen within our group was that averaging our responses meant we averaged out a lot of what was innovative or difficult to achieve, and ended up with a list of top priorities that were best described as “same old, same old”. Speaking for myself, I can say there wasn’t anything on that list that spoke to me or inspired me to assist with its implementation.
Upon reflection, we realized that by averaging, we had ignored the polarized items where one person may have ranked something low and another ranked it high. Instead of discussing the reasons for the disparity, the averaging meant it landed in the middle of the pack and was therefore ignored.
Instead, we realized, those polarized items could have been key areas for discussion. We also realized a third area for scoring called “potential impact” might have helped to raise some of them to the top, as even though they would have been difficult to achieve, they would have had a significant impact.
In my experience, polarization occurs for two reasons. Either it’s a really innovative idea that is ahead of its time and needs more explanation and exploration, or there are differing perceptions and experiences at play that, if shared, could make something easier or more feasible to implement.
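A quick numerical sketch shows why averaging buried those polarized items. The scores below are made up for illustration, not taken from the workshop, but they capture the pattern: an idea that splits the room can end up with exactly the same average as one everybody feels lukewarm about, so the average alone can’t tell the two apart. Looking at the spread of scores can.

```python
# Hypothetical scores (1-10) from five group members for two ideas.
# These numbers are illustrative only, not from the workshop.
safe_idea = [5, 6, 5, 6, 5]        # mild agreement all around
polarized_idea = [1, 9, 2, 10, 5]  # strong disagreement

def mean(scores):
    """Average score -- what the workshop process used to rank ideas."""
    return sum(scores) / len(scores)

def spread(scores):
    """Range of scores -- a simple signal that an idea is polarizing."""
    return max(scores) - min(scores)

# Both ideas average to 5.4, so ranking by average treats them identically...
print(mean(safe_idea), mean(polarized_idea))
# ...but the spread reveals that one of them deserves a conversation.
print(spread(safe_idea), spread(polarized_idea))
```

Flagging ideas with a large spread for discussion, rather than letting them sink to the middle of the ranking, is one concrete way to surface the disagreements worth talking through.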
We were also worried that the entire process didn’t allow us to consider the assets that might already exist within the community and how we could leverage or build on them. Nor did it take into consideration our respective individual passions and how they could ultimately assist in something being driven forward.
While we pointed out our concerns, time pressures meant the instructor needed to move us to the next step where all four groups would vote and prioritize from among one another’s top five in order to determine a focus for moving forward. However, given our concerns, and to her credit, the instructor suggested each group add one wild card from among their strategies that didn’t make it to their top five.
Once the dust settled and the votes were tabulated, the top two ended up being from among the four wild cards – one focused on Aboriginal communities and the other on “good food boxes” that provide families with healthy, locally grown foods. The other three were maximizing the use of current facilities, creating opportunities for equipment exchange, and cooking classes.
So what does this tell us?
I think it’s a lot about changing how we measure. The health sector in particular works from a science and fact-based model that stresses objectivity. And yet, if we are to be honest, measuring and averaging often means we don’t discuss the tough stuff, what we may know intuitively, or the innovation that often sits on the fringes. I’m not saying we shouldn’t measure. I get that what we count, quantify, measure, and evaluate influences public policy, decision-making, and investment. What I am saying is that we may need to rethink our old ways of measuring, especially our propensity to want to use numbers. Our drive to find new solutions might instead mean we need to validate and provide room for deep, intense, messy and ultimately meaningful conversations and exchanges that can’t be quantified by numbers. After all, as the pundits have quoted over the years, “If you always do what you always do, you’ll always get what you always get.” Not sure about you, but from where I’m sitting, that’s just not good enough.
Posted on 02-13-11
Bravo! Love the message about process because there are so many that emphasize the obvious and never get to the real gems!•Posted by Carol Petersen on 02/14/11 at 11:48 AM
Brenda - you have very articulately recognized the limitations of over-relying on quantitative ratings, and averaging in particular. Though I do sometimes use averages in evaluation, I have learned over the years that looking at ranges of scores and outliers is often more interesting and useful for decisions than averages. You have mentioned some of the reasons for this, such as exploring the reasons for different views, and engaging people with different interests in the issues they are passionate about so they can contribute to the overall community.
As someone initially trained as a quantitative researcher, I always felt like there was something missing when we reduce decisions down to numbers. I therefore took it upon myself to learn the qualitative methods (especially people’s reflections and stories in their own words) that I did not get in school. I find I am now more drawn to the qualitative, to bring out the richness of perspectives, especially in community building work. When ratings and other numerical data are required or useful, the qualitative data deepens our understanding of what the numbers mean and takes us beyond the numbers into things like reflective practice, tacit knowledge, intuitive insights gained through life/work experiences and so on.
I think processes like the one you mentioned are one tool in a broad toolbox. It brings a group some consensus, but may be best used with other tools that allow for more exploration of diversity of opinion. I like the idea of more structured processes like including a wild card - which could be a jumping off point for more discussion and reflection on where there are differences as well as where there is consensus.•Posted by Tammy Horne on 02/14/11 at 05:43 PM