Sunday, December 9, 2018

How to Evaluate Strategic Communication Campaigns

       Strategic communication campaigns can be evaluated in many ways, using both quantitative and qualitative measures. Quantitative measures include tools such as surveys and questionnaires, while qualitative measures include interviews and focus groups. Through the use of both kinds of tools, strategic communication campaigns can be evaluated thoroughly.
       In communication terms, evaluation is the systematic application of research processes and procedures to understand the conceptualization, design, implementation, and utility of interventions, referred to here as communication campaigns. Evaluation research determines whether a program was effective, how it failed or achieved its desired objective, and the efficiency with which it did so. Evaluation also adds to the knowledge base of how programs reach and influence their intended audiences, so that researchers can learn from these experiences and design more effective programs in the future. Rather than judging staff performance, evaluation is designed to identify actual or potential sources of implementation error and to determine whether and how the program succeeded, so that corrections can be made to current and future programming. As a specialty field, empirical work on evaluation has grown over the past several decades, and evaluation is now seen as a distinct research enterprise.
       That research enterprise is shaped by three major debates. The first concerns the use of quantitative versus qualitative methods. The balance of emphasis between the two should be determined by their ability to answer the research questions at hand and by the availability of data; to evaluate an intervention effectively, evaluators might use both, since the results of one supplement the results of the other. The second debate concerns whether non-experimental designs can appropriately control for selectivity and other biases in the public communication process. With the appropriate theoretical and procedural components, researchers can approximate experimental designs for campaign evaluation. The third debate concerns the advantages and disadvantages of internal versus external evaluators. External evaluators often carry more credibility, but they are commonly less informed than internal evaluators about aspects of the campaign's implementation that could influence its effectiveness.
       Furthermore, evaluation research plays three roles in any communication campaign. First, it improves the likelihood of program success by pushing campaign planners to specify in advance the goals and objectives of the campaign and the hypothesized causal relations leading to those objectives. Once the campaign goals are determined, it becomes possible to develop programs to meet them and ways to measure them. Second, evaluation determines the expected impacts and outcomes the program should produce. For instance, if a campaign is designed to increase awareness of the dangers of substance abuse, the evaluation proposal should state the expected percentage increase in that awareness. Third, campaign evaluation helps planners and scholars understand how or why a campaign succeeded or failed. Determining whether a program fell short because of errors in theory, research, or implementation increases the probability that successes can be repeated and failures avoided in future behavior-change programs.
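       As a rough illustration of that second role, the sketch below uses hypothetical survey numbers (the sample sizes and counts are invented for the example, not drawn from any real campaign) to compute the percentage-point change in awareness between a baseline and a follow-up survey.

```python
# Minimal sketch with hypothetical numbers: measuring the change in awareness
# between a baseline survey wave and a post-campaign survey wave.

def awareness_rate(aware: int, surveyed: int) -> float:
    """Share of respondents who showed awareness, as a percentage."""
    return 100.0 * aware / surveyed

pre_campaign = awareness_rate(aware=212, surveyed=800)    # baseline wave
post_campaign = awareness_rate(aware=348, surveyed=800)   # follow-up wave

change = post_campaign - pre_campaign  # percentage-point increase
print(f"Baseline awareness: {pre_campaign:.1f}%")
print(f"Post-campaign awareness: {post_campaign:.1f}%")
print(f"Observed increase: {change:.1f} percentage points")
```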
       Unfortunately, several barriers complicate rigorous evaluation. The first is cost. Many program managers argue that money spent on evaluation research is money taken away from program activities, an argument that overlooks the principle that evaluation should be a key element of any program. Research costs should normally range from 10% to 15% of a project's total budget; this investment has a strong rate of return, improves implementation, and provides a basis for explaining and understanding the results. A second barrier is the perception that research takes too much time. To lessen this problem, evaluation findings should be made available before, during, and immediately following program completion. A third objection is that evaluation detracts from program implementation; in practice, evaluation should not take away from the program but serve as an integral part that complements it. Indeed, planning and implementing a thorough evaluation helps set the timing and objectives of the program's components, such as when to launch and broadcast, and how to organize supplementary activities to maximize exposure and impact.
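       As a quick worked example of the budgeting guideline above, the sketch below applies the 10% to 15% range to a hypothetical total budget (the dollar figure is invented for illustration).

```python
# Minimal sketch with a hypothetical budget: reserving 10-15% of a project's
# total budget for evaluation research, per the guideline above.

def evaluation_budget_range(total_budget: float, low: float = 0.10, high: float = 0.15):
    """Return the recommended (minimum, maximum) spend on evaluation research."""
    return total_budget * low, total_budget * high

low, high = evaluation_budget_range(250_000)  # hypothetical $250,000 campaign
print(f"Recommended evaluation budget: ${low:,.0f} to ${high:,.0f}")
```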
       Moreover, evaluation research is typically conducted in three distinct phases. The first step is to identify and assess the needs that give rise to a communication campaign, needs that are often defined by the relevant communities themselves. Once these needs have been identified, formative research can be conducted to understand the subject and audience of the program more accurately. Next comes process research, which measures the degree of program implementation to determine whether the program is being delivered as intended. Finally, summative research measures the program's impact, captures the lessons learned, and evaluates the research findings.
       In addition to evaluation research, monitoring campaign exposure is an effective way of measuring campaign success. This is accomplished by measuring the degree to which audience members have access to, recall, or recognize the intervention.
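       The sketch below shows one way such exposure monitoring might be tallied from survey responses; the respondent records and field names are hypothetical, invented purely for illustration.

```python
# Minimal sketch with hypothetical survey data: summarizing campaign exposure as
# the share of respondents who had access to, recalled, or recognized the campaign.

respondents = [
    {"had_access": True,  "recalled": True,  "recognized": True},
    {"had_access": True,  "recalled": False, "recognized": True},
    {"had_access": False, "recalled": False, "recognized": False},
    {"had_access": True,  "recalled": True,  "recognized": False},
]

def exposure_rate(records, field):
    """Percentage of respondents answering 'yes' for one exposure measure."""
    return 100.0 * sum(r[field] for r in records) / len(records)

for measure in ("had_access", "recalled", "recognized"):
    print(f"{measure}: {exposure_rate(respondents, measure):.0f}%")
```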
        Along with campaign exposure, interpersonal communication is an important factor in the success of a campaign. Public communication campaigns often deliberately try to stimulate this kind of discussion, because the impact of media-produced messages is not felt until the information permeates interpersonal networks and individuals have time to share their attitudes, experiences, and opinions with one another.
       The components and measurement tools described above show how professionals can assess the effectiveness and impact of strategic communication campaigns. By combining quantitative and qualitative measures with careful planning, monitoring of exposure, and attention to interpersonal communication, strategic communication professionals can accurately and effectively measure the impact of their campaigns.
      
      
Potter, Les. "The Strategic Communication Plan: An Overview." IABC, 1 March 2012, https://www.iabc.com/the-strategic-communication-plan/.
