7.1 Recommended Steps to Measure Case Success

7.1-A. Decide what measures will be included in your office’s definition of success. They can include some combination of the following:

A measure based on case resolution. (Measure 9a)

See Chapter 6 for a list of sample case resolution categories. These case resolutions can also be disaggregated by level of complexity.

An intermediate measure based on the use of prosecution practices that are research-based and trauma-informed (“best practices”). (Intermediate Measure 9b)

The process of prosecuting a sexual assault case is arguably as important as its outcome. Even if a case results in a resolution that falls short of the charges or sentence pursued by the prosecutor, the implementation of best practices throughout the life of the case, from initial evaluation and charging through resolution, can generate a high quality of procedural justice for the victim and the public. A comprehensive definition of case success should thus account for the prosecution’s efforts to bring about justice.

As such, it is critical to track best practices in order to compare their use against case outcomes. This practice would be for internal assessments of success only and would not necessarily need to be part of any transparency reporting to the general public. Instead, tracking these measures is about driving practice change within an office and/or about addressing individual prosecutor performance.

Perhaps a case resulted in a conviction to a lower-level sex offense rather than the most serious offense initially charged. Is there anything we could have done to increase the chances of getting a guilty verdict on the higher-level charge? By revisiting our actions throughout the case, we may be able to find our answer. Did we file a 404(b) motion? If not, should we have? Do we have the necessary training and skills, and access to experts, to work with victims with disabilities? If not, how will we build our capacity and identify experts?

A “best practices” checklist can help measure the level of procedural justice in our cases. It can also be used as the basis for regular meetings between unit chiefs and line-level prosecutors to examine the actions taken throughout the case and identify areas for improvement.

Tracking practices will not reveal “why” an outcome was or was not achieved, but it does allow for focused attention on the practices being implemented. This could help determine, for example, whether trial lawyers are falling short of implementing best practices, or whether case resolutions include high rates of acquittal and/or negative victim experiences regardless of whether best practices are used.

Although this intermediate outcome measure was not tested in the pilot sites, it was identified as a useful resource and developed based on the authors’ experience working with professionals across these sites.

A comprehensive explanation of best case-level practices for prosecuting sexual violence crimes can be found in Chapter 4 of RSVP Volume I. Exhibit 7-1 provides a sample checklist based on some of these critical practices, with a sample rating scale for each. Both the checklist and the rating scale presented here are recommendations based on the authors’ experiences, as well as RSVP Volume I. Each can be modified based upon a jurisdiction’s needs and experience implementing performance management.

It is important to note that best practices contained within Exhibit 7-1 cannot be checked off with a simple “yes or no” answer. Completing the checklist for each case will require a thorough examination and honest critique of the prosecutor’s efforts with respect to each practice. Following each case resolution, prosecutors can engage in a self-evaluation and then meet with their supervisors or unit chiefs to discuss their performance and complete the checklist.

Together, the prosecutor and the supervisor should consult Chapter 4 of RSVP Volume I for an explanation of the specific actions that comprise each of the strategies below. Remember, the purpose of this determination is to understand and continually improve the individual- and office-level response to sexual violence cases, not only by identifying what is working well but also by identifying areas for improvement: subjects that require additional training, specific skills that need refinement, and/or community responses that call for increased outreach or public awareness efforts.

Once the checklist is finalized, each case can be rated on the fidelity of implementation of best practices (0, 1, or 2) for each item listed. These ratings can be summed into a total best-practice implementation score, with higher scores representing greater fidelity of implementation. If ten practices are each rated 0, 1, or 2, the minimum best practice implementation score is 0 and the maximum is 20.
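As a sketch, the scoring arithmetic above can be expressed in a few lines of code. The practice names and the ratings shown are hypothetical, not part of the checklist itself:

```python
# Sum per-practice fidelity ratings (each 0, 1, or 2) into a total
# best-practice implementation score. Names and values are illustrative.
ratings = {
    "charging_decision": 1,
    "trauma_informed_interview": 2,
    "witness_intimidation_response": 0,
    # ... one entry per checklist item
}

total = sum(ratings.values())
max_possible = 2 * len(ratings)  # each item tops out at 2

print(f"Implementation score: {total} of {max_possible}")
```

With all ten checklist items included, `max_possible` would be 20, matching the range described above.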

A measure based on the victim’s sense of safety and satisfaction with the criminal justice process. (Measure 10)

As mentioned above, it is critical to include the victim’s perspective when considering case success. A victim survey might ask each victim to rate whether the resolution was “fully,” “partially,” or “not at all” satisfactory. See Chapter 9 for more information. Victims’ ratings of the quality of their experience with the handling of their cases (Measure 10) could be a useful measure in defining case success. Therefore, multiple measures of success are important to consider: one based on case resolution (while accounting for complexity); one based on the implementation of prosecution best practices throughout case processing; and one based solely on victims’ ratings of satisfaction with their case and the quality of treatment they received from stakeholders responding to sexual assault.

 

7.1-B. Define a rating scale with 3-4 levels of success.

The next step is to define levels of success for cases that reached the prosecutor’s office. At least initially, three or four levels are likely to be sufficient, such as “fully successful,” “moderately successful,” and “unsuccessful.” Each site may wish to use other labels it believes are more appropriate.

 

7.1-C. Establish a standardized operational definition for each success level.

Group case outcomes, case practices, and/or victims’ case ratings into the success categories developed in Step 7.1-B.

In terms of case resolutions, “unsuccessful” might include outright acquittal on all charges, guilty pleas or verdicts on low-level, non-sexual offenses (such as non-sexual misdemeanors), or no-contest pleas without meaningful penalties.

The definition of “fully successful” might include guilty pleas or verdicts on the most serious sexual assault count or, in some cases, guilty pleas or verdicts on other charges carrying lengthy prison sentences.

The definition of “moderately successful” might include guilty pleas or verdicts on lesser (but significant) charges.

Another example: a plea to the initial charge may be considered partially or fully successful, whereas a plea to a reduced charge (such as a non-sexual offense) might be considered less satisfactory, depending on other factors. Some cases might not fit neatly into the defined categories; in that case, it may be wise to define more categories.

In terms of case practices, “unsuccessful” might include cases with best practice implementation scores ranging from 0 to 6, or whatever number an office feels is a justifiable cut-off demonstrating the lack of use of best practices.

“Fully successful” cases might include cases with best practice implementation scores that are higher than 10 or 15, or whatever cut-off point an office believes demonstrates that best practices were indeed implemented during case processing. The definition of “partially successful” might include scores between the unsuccessful and fully successful cut-off points.
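The cut-off logic described above can be sketched as a simple mapping from score to success level. The default cut-offs here (6 and 15) are illustrative; as the text notes, each office should set its own:

```python
def success_level(score, unsuccessful_max=6, fully_min=15):
    """Map a best-practice implementation score to a success level.

    Cut-offs are hypothetical defaults; each office should choose
    values it can justify for its own caseload.
    """
    if score <= unsuccessful_max:
        return "unsuccessful"
    if score >= fully_min:
        return "fully successful"
    return "partially successful"

print(success_level(4))   # low score: best practices largely not used
print(success_level(10))  # between the cut-offs
print(success_level(18))  # high fidelity of implementation
</n```

The "partially successful" band falls out automatically as everything between the two cut-off points, mirroring the definition in the text.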

In terms of victim case ratings, these can only be examined across an aggregate of victim surveys. In Chapter 9, we advise that victim surveys be confidential; as such, offices will not be able to link an individual victim’s survey to a particular case. Therefore, prosecutors will have to review this case success measure based on the percentage of victims who report that justice was served for them across cases, rather than on individual cases.

 

7.1-D. Assign a success level for each case.

Once a site articulates definitions of success, cases can be assigned initial success levels shortly after case completion across the first two measures: case disposition and implementation of best practices. The third measure will not be generated after each case resolution because individual victim surveys will not be linked to individual cases, but can be generated during data reporting periods.

For each reporting period, all three ratings of success can be tallied across all cases resolved during that timeframe: two based on individual cases, (1) case resolution and (2) best practices, and one aggregated for victim experience. This would provide the data for SAJI Measures 9 and 10.

Who should develop definitions of success and rate the cases?

Who should prepare the definitions of success? Who should rate each case as to its degree of success? The prosecutor’s office, in collaboration with its partners, should be responsible for determining the definitions of success for all three measures described above.

Deciding who rates cases on level of success for case resolution and implementation of best practices is also the responsibility of the prosecutor’s office.

Options include: (1) attorneys rate their own cases, abiding by the definitions of each level of success for the two measures; (2) an attorney other than the one prosecuting the case provides the ratings of success for the two measures; (3) multiple attorneys rate the case and a consensus is sought; and (4) an administrator in the prosecutor’s office with adequate credentials rates all cases. The first option has less credibility because the ratings are self-assessments; one compromise is for attorneys to rate their own cases, with supervisors reviewing the ratings later.

Consideration of Case Complexity

By viewing two case success outcomes (case resolutions and implementation of best practices) through the lens of case complexity as discussed in Chapter 5, prosecutors can achieve a more nuanced and fairer view of their practice.

For instance, a less-than-desirable resolution for a low-complexity case and a less-than-desirable resolution for a high-complexity case can and should be viewed very differently. Keep in mind that the purpose of measuring case resolutions, as with all performance management efforts, is to help improve the effectiveness, efficiency, and equity of prosecution, not to play “gotcha” or to criticize the prosecutor.

The site would identify each case’s complexity and success levels. It can then calculate the number and percentage of cases at each level of success for each level of complexity.

The resulting measures would be in the form: “Number and percentage of moderately complex cases that had: (a) fully successful outcomes for case resolution; (b) partially successful outcomes for case resolution; and (c) fully unsuccessful outcomes for case resolution.” This can be repeated with the case outcome categories for implementation of best practices. An example is included in the figure below. The process by which to measure case complexity is explained in Chapter 6.

Complexity Level of Case    Fully Successful    Partially Successful    Fully Unsuccessful
High                        32%                 54%                     14%
Moderate                    58%                 35%                      7%
Low                         80%                 15%                      5%
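The tallying described above can be sketched as a short aggregation over resolved cases. The case records below are hypothetical placeholders for an office’s actual data:

```python
from collections import Counter

# Hypothetical (complexity, success level) pairs for cases resolved
# during one reporting period.
cases = [
    ("High", "partially"), ("High", "partially"), ("High", "fully"),
    ("Moderate", "fully"), ("Moderate", "fully"),
    ("Low", "fully"), ("Low", "unsuccessful"),
]

counts = Counter(cases)                                # per (complexity, level)
totals = Counter(c for c, _ in cases)                  # per complexity level

for (complexity, level), n in sorted(counts.items()):
    pct = 100 * n / totals[complexity]
    print(f"{complexity}: {pct:.0f}% {level} successful ({n} of {totals[complexity]})")
```

Run once per reporting period, this yields the “number and percentage of cases at each level of success for each level of complexity” shown in the table.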
Exhibit 7-1: Sample Checklist To Track Use of Best Prosecution Practices

 

  1. Recommend charges based on totality of evidence, applicable laws, ethical considerations and understanding of relevant research (RSVP Vol. I, 3.1-C).

0 – Charging decision was inconsistent with evaluation based upon all admissible evidence, relevant research, and ethical considerations.
1 – Charging decision was somewhat consistent with evaluation based upon all admissible evidence, relevant research, and ethical considerations.
2 – Charging decision was fully consistent with evaluation based upon all admissible evidence, relevant research, and ethical considerations.

  2. Conduct a trauma-informed interview of victim to reveal evidence of the crime (3.1-F-1).

0 – Interview(s) with the victim were not trauma-informed.
1 – Interview(s) with the victim were somewhat trauma-informed.
2 – Interview(s) with the victim were fully trauma-informed.

  3. Review and analyze DNA and forensic evidence, if available (3.1-F-2).

0 – DNA and/or forensic evidence SAKs were not appropriately stored, chain of custody was not fully documented, or SAKs were misplaced.
1 – DNA and/or forensic evidence SAKs were appropriately stored and chain of custody was fully documented, but the prosecutor lacked sufficient capacity to evaluate and litigate.
2 – DNA and/or forensic evidence SAKs were appropriately stored, chain of custody was fully documented, and the prosecutor had the capacity to evaluate and litigate.
N/A – Discount a value of 2 from total possible score.

  4. Prevent and/or respond to witness intimidation (3.1-F-3).

0 – No attempt to prevent or respond to witness intimidation.
1 – Some attempt to prevent or respond to witness intimidation.
2 – Fully responded to witness intimidation.
N/A – Discount a value of 2 from total possible score.91

  5. Work with experts to understand and explain the evidence (3.2-A).

0 – Did not work with experts to understand and explain evidence.
1 – Somewhat worked with experts to understand and explain evidence.
2 – Fully worked with experts to understand and explain evidence.
N/A – Discount a value of 2 from total possible score.92

  6. File motions to shield victims and expose defendants (3.2-B).

0 – Did not file any motions to shield victims or expose defendants.
1 – Filed motions to shield victims or expose defendants that were average or satisfactory.
2 – Filed motions to shield victims or expose defendants that were strong or exemplary.
N/A – Discount a value of 2 from total possible score.

  7. Construct a compelling case theme and theory (3.2-C).

0 – Did not construct a case theme and theory.
1 – Case theme and theory were constructed but were uncompelling or not fully used at trial.
2 – Case theme and theory were compelling and fully used at trial.

  8. Anticipate and overcome expected defenses to guard against victim shaming/blaming (3.2-D).

0 – Did not anticipate and overcome predictable defenses.
1 – Somewhat anticipated and overcame predictable defenses.
2 – Fully anticipated and overcame predictable defenses.

  9. Educate jury panel and select an unbiased jury (3.3-C).

0 – Did not use voir dire to educate the jury about the case and to select unbiased jurors.
1 – Somewhat used voir dire to educate the jury about the case and to select unbiased jurors.
2 – Fully used voir dire to educate the jury about the case and to select unbiased jurors.
N/A – Discount a value of 2 from total possible score.

  10. Recreate victim’s reality of the crime and offender’s predation by using direct examination, witness order, and physical or demonstrative exhibits (3.3-E).

0 – Did not develop trial strategy to recreate reality of the assault.
1 – Trial strategy recreating reality of the assault was somewhat successful.
2 – Trial strategy recreating reality of the assault was fully successful.
N/A – Discount a value of 2 from total possible score.
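The N/A discount used throughout the checklist can be sketched as follows: an item marked N/A is dropped from both the earned score and the total possible score (discounting 2 from the maximum), so fidelity remains comparable across cases as a percentage. The ratings below are hypothetical:

```python
# Per-item ratings: 0, 1, or 2, with None standing in for N/A.
ratings = [2, 1, None, 2, 0, None, 2, 1, 2, 1]

applicable = [r for r in ratings if r is not None]
score = sum(applicable)
max_possible = 2 * len(applicable)  # each N/A discounts 2 from the max

fidelity_pct = 100 * score / max_possible
print(f"Score: {score}/{max_possible} ({fidelity_pct:.0f}% fidelity)")
```

Expressing the score as a percentage of the adjusted maximum keeps a case with two N/A items comparable to a case where all ten practices applied.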


 

91 Before selecting this option, keep in mind that intimidation is often subtle and surreptitious. In the words of Kerry Healey, former law and public safety consultant, “[o]nly unsuccessful intimidation ever [comes] to the attention of police or prosecutors.” Victim and Witness Intimidation: New Developments and Emerging Responses, National Institute of Justice Research In Action (1995). Thus, intimidation requires proactive investigation.

92 Before selecting this option, keep in mind that experts can assist in educating prosecutors and law enforcement about a particular area of expertise, and therefore supervisors should ask additional questions to determine if the decision not to use an expert was based on a lack of understanding of the issue. This is a helpful way to target staff training and capacity building.