Below are thirteen straightforward ways to examine outcome measurement data. Each site will need to select those analyses for which data is available and add other analyses it deems appropriate. These analyses can be undertaken by each partner – not only prosecutors’ offices, but police departments, advocacy agencies, and medical facilities – and can be facilitated by automation.
Sites might focus on one or more of these analyses:
1. Examine the latest performance data on each outcome measure and identify outcome values that are unexpectedly disappointing or surprisingly good.
Use: This can signal the need for attention from the prosecutor’s office or one or more of its partners, such as the need for additional training or technical assistance. The data also can be used to identify the practices behind high performance on sexual violence cases and to extend those practices, where appropriate, to other cases in the office.
2. Compare outcome measurements between victim subgroups (for those with available subgroup data).
Use: This will identify victim groups for which current practices are working well and those for which they are not, suggesting practices that might need revision or potential biases that may need examination. Exhibit 9-1 illustrates what such a report might look like. The illustration displays the findings for a single outcome measure across six victim demographic groups, such as groups defined by age, gender, and race/ethnicity. The report also displays the outcome totals for all groups combined.
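As an illustration only, a subgroup report of this kind can be tabulated with a short script. The field names (`age_band`, `charges_filed`) and the sample records below are hypothetical, not drawn from any particular case-management system.

```python
from collections import defaultdict

def subgroup_report(cases, group_field, outcome_field):
    """Tabulate an outcome measure for each victim subgroup and for all
    groups combined, as in the report format the text describes.

    Each case is a dict; group_field names the demographic attribute and
    outcome_field names a yes/no outcome. Returns, per group:
    (outcome count, case count, outcome percentage).
    """
    totals = defaultdict(lambda: [0, 0])  # group -> [outcome count, case count]
    for case in cases:
        for key in (case[group_field], "All groups"):
            totals[key][1] += 1
            if case[outcome_field]:
                totals[key][0] += 1
    return {g: (hit, n, round(100 * hit / n, 1)) for g, (hit, n) in totals.items()}

# Made-up example: charging outcomes by victim age band.
cases = [
    {"age_band": "18-24", "charges_filed": True},
    {"age_band": "18-24", "charges_filed": False},
    {"age_band": "25-39", "charges_filed": True},
    {"age_band": "25-39", "charges_filed": True},
]
report = subgroup_report(cases, "age_band", "charges_filed")
```

The same function works for any demographic grouping for which subgroup data are available; only the `group_field` argument changes.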
3. Compare outcome values over time.
Use: This will indicate progress, setbacks, and trends. Exhibit 10-2 illustrates a basic reporting format that displays the latest data on outcome measures over time. The measures can be organized by the particular goals the prosecutor and partner agencies aim to achieve (such as the six listed in Chapter 2; Exhibit 10-2 groups outcome measures into four broad subject areas that can translate to goals).
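A time-trend report like the one described above can be sketched the same way, grouping cases by the year of a date field. The field names (`referred`, `conviction`) and the sample data are again hypothetical.

```python
from collections import defaultdict
from datetime import date

def outcomes_by_year(cases, date_field, outcome_field):
    """Group a yes/no outcome measure by calendar year so that progress,
    setbacks, and trends are visible. Returns {year: outcome percentage}."""
    by_year = defaultdict(lambda: [0, 0])  # year -> [outcome count, case count]
    for case in cases:
        year = case[date_field].year
        by_year[year][1] += 1
        if case[outcome_field]:
            by_year[year][0] += 1
    return {y: round(100 * hit / n, 1) for y, (hit, n) in sorted(by_year.items())}

# Made-up example: conviction rate by year of referral.
cases = [
    {"referred": date(2022, 3, 1), "conviction": True},
    {"referred": date(2022, 7, 9), "conviction": False},
    {"referred": date(2023, 2, 2), "conviction": True},
    {"referred": date(2023, 5, 5), "conviction": True},
]
trend = outcomes_by_year(cases, "referred", "conviction")
```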
4. Compare outcomes for different levels of case complexity (for those with appropriate and available complexity data).
Use: This information will enable prosecutors and others to interpret case outcomes more accurately and fairly. Exhibit 10-1 illustrates a comparison of outcomes grouped by case complexity level – a tool that can be used to assign prosecutors to cases. Some prosecutors might be particularly successful working on high-complexity cases and, when possible, should be assigned to them without being penalized for a lower overall success rate.
5. Compare the outcomes from before and after a new or modified practice is implemented.
Use: This permits the testing of new or modified practices by indicating whether the new/modified practice should be continued, further modified, or discontinued – before making a full commitment to it.
6. Compare the outcomes of different strategies.
Use: This comparison can help to assess the relative effectiveness of different prosecution strategies, such as those suggested in RSVP Volume 1. For example, a site might wish to test a different way of processing SAKs (such as submitting kits to private instead of public labs) to improve the timeliness of results. Such a change in process may affect the average case processing time from initial report through to case resolution/disposition, and the number and percentage of cases with delays. On the other hand, a site might want to examine the benefit of a law enforcement policy requiring multiple reviews before clearing a case as “unfounded,” something which possibly affects the number and percentage of reported sexual violence cases not referred by law enforcement to the prosecutor’s office.
7. Compare outcomes to targets set for individual outcome measures.
Use: These comparisons can serve as a motivational tool for professionals responding to sexual violence crimes. However, selecting appropriate targets can be difficult, as it requires weighing recent history and trends, budget constraints, and possibly changes in the law. Targets might be set as ranges rather than single numbers, and should strike an appropriate balance between optimism and pragmatism. Expectations should be realistic to keep pace with the often-incremental changes resulting from refining criminal justice policies or practices.
Setting targets for conviction rates is not recommended, particularly during the first few years of performance management implementation. Taking on more complex cases may cause conviction rates to decrease – at least at first – as prosecutors build the skills necessary to litigate such cases and judges and juries adapt to these new types of cases.
Setting early or unrealistic conviction rate targets will likely discourage prosecutors from taking on high complexity cases. When conviction rate targets are set, they must be placed in the context of the overall prosecution rate in order to be meaningful.
For instance, a conviction rate of 87% for an office with a prosecution rate of 15% reflects a higher rate of attrition of referred cases than an office with the same conviction rate but a 67% prosecution rate. A determination of reasons for attrition (e.g., inadequate investigation, level of complexity of case, referral of noncriminal matter) requires a review of the cases declined.104 This level of review not only brings proper context to a conviction rate but also identifies, with some level of precision, areas of needed training and technical assistance.
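The arithmetic behind the example above can be made explicit: multiplying the two rates gives the share of referred cases that end in conviction, which is what makes the two offices differ despite identical conviction rates. A minimal illustration, using only the percentages given in the text:

```python
def convictions_per_100_referrals(prosecution_rate, conviction_rate):
    """Convictions per 100 cases referred to the office.

    prosecution_rate: fraction of referred cases the office charges.
    conviction_rate: fraction of charged cases that end in conviction.
    """
    return 100 * prosecution_rate * conviction_rate

# Two offices with the same 87% conviction rate:
office_a = convictions_per_100_referrals(0.15, 0.87)  # charges 15% of referrals
office_b = convictions_per_100_referrals(0.67, 0.87)  # charges 67% of referrals
```

Office A converts roughly 13 of every 100 referrals into convictions; Office B converts roughly 58, so the identical 87% conviction rates mask very different levels of case attrition.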
8. Examine the reasons for case attrition, if reasons are collected as part of the outcome measurement process.
Use: Chapter 3 suggests three points of case attrition for sexual violence crimes: (a) the victim did not report the assault to law enforcement; (b) the cases were reported to law enforcement but not forwarded to the prosecutor; and (c) the cases were forwarded to the prosecutor’s office but ultimately declined. The prosecutor’s office can conduct an analysis of the reasons cases are not reported, forwarded, or charged and then identify reasons that are avoidable. This information could be used to help determine the need for internal trainings, to revamp practices by one or more partners, or inform public information campaigns to encourage victims to report.
9. Examine suggestions from the victim surveys to improve sexual violence response.
Use: Victim surveys, if conducted, can be reviewed for suggestions requiring corrective action. This information can be summarized and discussed with the multidisciplinary team partners to improve policies and procedures. The person examining the survey data can even group suggestions by topic. If victims’ confidentiality is preserved, the suggestions can be examined individually; even a single response might give one or more of the partners a helpful suggestion for improvement.
10. Examine and compare the outcomes of individual police departments, if the prosecutor’s office handles cases from more than one law enforcement agency.
For information that is more meaningful and fair, examine the data for differences in the complexity of cases handled by each police department.
Use: This could call attention to police departments with less successful outcomes to identify and implement best practices in handling sexual violence cases, as well as indicate training, staffing and resource needs.
11. Examine the performance of individual prosecutors.
Use: This can identify prosecutors with low rates of favorable case dispositions and/or low usage of research-informed practices, which may indicate the need for training and technical assistance. It can also identify prosecutors who most consistently use best strategies, which can serve as a basis for recognition. If an office chooses to calculate case dispositions or use of research-informed practices by individual prosecutor, such information should probably be restricted to internal use only, as it may implicate personnel and performance concerns.
This use will be more informative and fair if considered in the context of the complexity of each prosecutor’s caseload. See Chapter 5 for a discussion of case complexity assessments.
12. Examine the frequency of important case characteristics and their relationship to case disposition.
Such as: relationship of the perpetrator to the victim (e.g., stranger, family member, person in a position of power); and time of the assault (time of day, day of week, month).
Use: This can help indicate the need for adding training topics, as well as to strengthen or change practices and policies regarding resource assignment. Some attorneys may be especially skilled in implementing particular research-informed protocols (as articulated in RSVP Volume I), and when possible, should be assigned cases that will be strengthened by their use.105
13. Identify and track key case characteristics of sexual violence reports.
Most of the previous analyses involve examining the relationship between case characteristics and case outcomes. However, the frequencies with which various case characteristics occur can also be useful on their own, without being linked to case outcomes.
Use: Such information can be examined for trends and provide useful information for assessing the need for training and technical assistance, as well as to revise practices. The information enables partners (and the public) to better understand the nature of sexual violence cases in the community. The information tracked could include: workload-related counts; selected victim demographic characteristics; and counts of various case characteristics, such as relationship of perpetrator to the victim, location of incidents, and time of day of assault.
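Frequency counts of this kind are straightforward to automate. The sketch below uses hypothetical characteristic names and values (`perpetrator_relationship`, `time_of_day`); any coding scheme a site actually uses would substitute directly.

```python
from collections import Counter

def characteristic_frequencies(cases, field):
    """Count how often each value of a case characteristic occurs,
    without linking the counts to case outcomes."""
    return Counter(case[field] for case in cases)

# Made-up example records.
cases = [
    {"perpetrator_relationship": "acquaintance", "time_of_day": "night"},
    {"perpetrator_relationship": "stranger",     "time_of_day": "night"},
    {"perpetrator_relationship": "acquaintance", "time_of_day": "day"},
]
relationship_counts = characteristic_frequencies(cases, "perpetrator_relationship")
# acquaintance: 2, stranger: 1
```

Running the same count each reporting period, and comparing the results over time, is what turns these raw frequencies into the trend information described above.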
Computer programming can greatly assist in tabulating the data and formatting reports to display findings for all of the assessments above. Initially, an office may want to use spreadsheet software in which the relevant data on each case are entered, the desired tabulations are made, and displays are set up. However, having an IT professional or another employee with technical knowledge program the full process will make it much faster, more accurate, and considerably less burdensome on the prosecutor’s office.