Rapid Feedback Evaluation Process and Domestic Terrorism Prevention
An analysis of the five steps of the process, and of a select set of data sources that could be used in each step:
The rapid feedback evaluation (RFE) can begin once agreement is reached on the goals of the program to be evaluated. In simplified terms, this is the nexus of completing an evaluation at the starting point. Once the evaluability assessment (EA) is set, the RFE comes into play to establish parameters within the evaluation itself. This helps with clarifying the goals of the program, setting parameters for estimates that are not steadfast, and finding similar programs to benchmark against. The RFE is not as hard and fast as an EA: EAs are published, whereas RFEs are not. One of the positive aspects of the rapid feedback evaluation is that it can begin on nothing more than the idea of a program and an assessment.
Breaking the rapid feedback evaluation down into common terms, my understanding again comes from a police agency perspective. When commanders and chiefs need to answer to city managers and city council members, this process can assist in providing sound data on the fly. It also provides a model for processes to be evaluated further down the road. Most importantly, the rapid feedback evaluation gives department leadership a "snapshot" of the parameters it is operating in. With the increased liability departments face, programs, policies, and processes need to be able to be assessed at a moment's notice for compliance purposes.
The five steps are as follows:
Step 1: Collect existing data on program performance. According to the text, data sources include agency records, program data systems, monitoring and audit reports, and research and evaluation studies. In simple terms, by turning to this information for fact gathering, quick assumptions can be made about whether a program is working. For example, evaluating a use-of-force program in response to a Justice Department inquiry may require fast answers without the luxury of data mining for in-depth responses.
Step 2: Collect new data on program performance. To keep this in my own understanding rather than straight from the text, I would say turning to a subject matter expert (SME) is the best approach for this step. Field training officers responding to use-of-force issues would more than likely be the best source for understanding commonalities (if any) in a flawed program.
Step 3: Estimate program effectiveness and state the range of uncertainty in the estimates. To me, this step provides a plus-or-minus parameter, or acceptable margin of error. Sometimes quantitative data may not be available to review a program effectively and in depth. In that case, assumptions can be made through best guesses and technical estimates.
Step 4: Develop and analyze designs for more definitive evaluation. Determining costs, structure, and departures from the mundane, normal avenue of evaluation are just a few portions of this step, which provides a specific goal and target. Broken into simplified terms, this step really relies on weighing costs, disruption, and experimental designs (on many levels) to determine or test assumptions.
Step 5: Reach agreement on the design and intended use of any further evaluation. This final step summarizes the previous four, including likely costs, time, and program effectiveness. It also helps determine possible uses of the information being evaluated (Wholey, Hatry, & Newcomer, 2010).
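To make Step 3's "range of uncertainty" concrete, the sketch below shows one common way to express a program-effectiveness estimate with a plus-or-minus margin of error. The scenario (84 of 100 reviewed use-of-force incidents found in compliance) and the function name are hypothetical illustrations, not from the text; the calculation is the standard normal-approximation interval for a proportion.

```python
import math

def effectiveness_estimate(successes, total, z=1.96):
    """Point estimate of a program's success rate plus a 95% margin of error.

    Illustrates Step 3: report the estimate together with its range of
    uncertainty rather than as a single hard number.
    """
    p = successes / total                      # point estimate
    moe = z * math.sqrt(p * (1 - p) / total)   # normal-approximation margin of error
    return p, max(0.0, p - moe), min(1.0, p + moe)

# Hypothetical example: 84 of 100 reviewed incidents found in compliance.
estimate, low, high = effectiveness_estimate(84, 100)
print(f"Estimated compliance rate: {estimate:.0%} (range {low:.0%} to {high:.0%})")
# → Estimated compliance rate: 84% (range 77% to 91%)
```

Reporting the range (77% to 91%) rather than the bare 84% is exactly the kind of quick, honest snapshot Step 3 asks for when deeper data mining is not yet an option.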
Wholey, J. S., Hatry, H. P., & Newcomer, K. E. (2010). Handbook of practical program evaluation (3rd ed.). Hoboken, NJ: John Wiley & Sons.