In my writing so far, I hope I have whetted your appetite for coaching research and made a convincing argument that it cannot be left as an “academic” pastime, but should be part of every practitioner’s arsenal.

Most of us do not have the time to carry out research per se, but given that our profession is in its infancy, there is much to discover in the literature about the true potential of what we can offer as coaches and how this can impact on our clients and their organizations! We can contribute to the growing body of knowledge ourselves by delving into journals and articles, discussing “hot topics” within our networks and generally making our literature our own. As practitioners we bring a valuable perspective when we talk with our peers and with academic researchers.

Over the last year I have been contributing to a working group on research as part of the Global Coaching Convention. This convention was established to create a collaborative framework of stakeholders in coaching with the aim of professionalizing the industry. Quite a job, and at times I think the size and complexity of the aim have daunted even the hardiest souls. As in any undertaking of this size, there has been debate about the value of such an initiative. Detractors say that the coaching community has grown organically so far and should be left to continue doing so. Others say that the convention is taking on too big a job and, with so many diverse agendas on the table, there is little hope of collaboration or consensus, so people are wasting their time. I will not go further into the debates other than to mention that if any of our clients came out with such a view, we might well consider challenging it! But enough of my soapbox; as a good researcher I shall admit my bias and point everyone in the direction of the GCC’s website for the latest news and events.

Sunny Stout Rostron and Carol Kauffman chaired and facilitated the research working group, and they did a grand job of challenging our process and thinking as well as generally bringing the project home. The core piece of work was a review of where we are, as a community, in terms of our research. Sunny and Carol will be publishing the full piece in the near future, but I would like to share with you some of the thinking it sparked in me.

First and foremost, we agreed that if we are thinking of moving towards becoming a profession, then we have to define what our body of knowledge is—what is it that makes our offer different from that of occupational psychologists, management consultants or other related fields? Research is the route to defining our knowledge. Even if we are simply looking at, and comparing, each other’s practice, we are engaging in research.

The second point that struck a chord with me was our discussion around whether we should define what is “good” and what is “bad” research. This question, and the depth behind it, gets in the way of many of us entering the world of research. It throws up all kinds of questions about the “correct” way of doing research, reporting it or even defining it.

Let us first consider our purpose in doing research. For me and many others it is to find something out or to learn, and the best evidence of learning is a change in behavior. So we are effectively saying to our colleagues:

“Trust me—I have looked at this issue and found XYZ. You can now take my findings and apply them directly to your practice.”

That is quite a thing to say. We are suggesting people change their practice and behaviors because of what we have found out. To do this (and still sleep at night) we need to know that we are right (or valid) and not leading people down the garden path on a scenic route to nowhere. Some researchers have taken the easy route out of this dilemma and stuck to one way of doing research, irrespective of the question they are asking. Usually the method of choice is a controlled experimental study where one group gets coaching and one group doesn’t, and at the end there is some measure of impact on behaviors (with everyone hoping there is a positive effect on those who have been coached)!

Everyone breathes a sigh of relief because they are doing a scientific study and don’t have to justify themselves any further. Oh, if only life were that easy! As we have discussed before, what you research and how you do it is determined by the question you are asking, NOT the other way around. A controlled experiment would be terribly complicated and confusing if we wanted to explore how coaching works and which elements of the engagement are of most value to a diverse range of clients. Trying to control for all the factors that would differ between groups would make it untenable (and unusable).

Identifying the research method used as the main differentiator between good and bad research is therefore not a sensible path; it will only restrict the type of research question we are able to ask (and answer). Our criterion for whether an inquiry is “good” research must be: Is there coherence between the question, the method used to research it and the analysis undertaken, and has everything been done to the standards of good practice? Let us leave the question of what constitutes good practice to one side for another day and take it as read; we can then be happy to consider good research to include any method, or even mix of methods, that makes sense for the question we are asking.

The same thinking should be brought to bear on the question: What is the best research to do? Everyone wants to do research that will set the world alight, but choosing a topic isn’t easy. Governments have been engaged in foresight exercises for many years, trying to second-guess the research investment they should make to meet the challenges of the future. They have invested a significant amount of money, but it has resulted in quite a lot of what has been described as crystal gazing.

Experience has shown that such exercises are useful for mapping current drivers for research, but they usually fail to foresee the big issues of the future, e.g., the exponential rise in the use of the mobile phone and the personal computer. If we cannot see what the main issue of the future will be, then the best research to do is the research that speaks to you and your practice, i.e., the research that follows your passion. Chances are that your passion will be shared by others—go ahead and ask them—and if that is the case, then you can be confident that there will be an audience for your work.

This article first appeared in Business Coaching Worldwide (October 2008, Volume 4, Issue 3).


Worth Reading:

As an introduction to how people are thinking about research for the future, have a look at these two papers. If you do not have access to these journals through a library or database, then just go to the journal’s website and order the download directly to your computer.

Linley, Alex P. 2006. “Coaching Research: Who? What? Where? When? Why?” International Journal of Evidence Based Coaching and Mentoring, Vol. 4, No. 2 (Autumn): 1.

Bennett, J.L. 2006. “An Agenda for Coaching-Related Research: A Challenge for Researchers.” Consulting Psychology Journal: Practice and Research, Vol. 58, No. 4: 240–249.


Dr. Annette Fillery-Travis

Dr. Annette Fillery-Travis is a senior researcher and education coach with the Professional Development Foundation. She is the author of more than 60 research articles and studies; her recent book, The Case for Coaching, which presents a literature review together with research case studies and interviews from over 20 organizations on coaching efficacy, was published in 2006 by CIPD, UK.