A reasonably straightforward approach is for the prosecutor’s office to identify key factors influencing prosecution decisions and then assess the extent to which individual cases contain these factors. A key issue for this procedure is how to rate the complexity of each case in a way that is reasonably reliable.
The simplest approach is to ask prosecutors to rate the complexity of each case using a pre-determined list of factors believed to make a case complex. Each prosecutor would assess the case based on the presence or absence of each factor. The rating for each factor would simply be “yes” or “no”. After considering the number of “yes” factors, the prosecutor would place the case into one of three or four levels of complexity, such as: (1) very complex; (2) somewhat complex; (3) somewhat straightforward; or (4) straightforward. This type of rating is subjective. Not all prosecutors would judge the same case the same way, and a prosecutor’s ratings might be affected by how they felt on a given day.
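The checklist procedure above can be sketched in a few lines of code. This is an illustrative sketch only: the factor names and the cutoffs between the four levels are assumptions for demonstration, not taken from Exhibit 5-1, and an office would substitute its own list and cutoffs.

```python
# Illustrative sketch of the simple yes/no checklist approach.
# Factor names and level cutoffs are assumptions, not from Exhibit 5-1.

COMPLEXITY_FACTORS = [
    "multiple_defendants",
    "reluctant_victim",
    "forensic_evidence_needed",
    "expert_witness_needed",
]

def count_yes_factors(answers):
    """Count how many checklist factors are marked 'yes' for a case."""
    return sum(1 for factor in COMPLEXITY_FACTORS if answers.get(factor, False))

def checklist_level(yes_count):
    """Map the number of 'yes' factors to one of four complexity levels
    (cutoffs are illustrative assumptions)."""
    if yes_count >= 3:
        return "very complex"
    if yes_count == 2:
        return "somewhat complex"
    if yes_count == 1:
        return "somewhat straightforward"
    return "straightforward"
```

For example, a case with two “yes” factors would land in the “somewhat complex” level under these assumed cutoffs.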
The problem of subjectivity can be somewhat alleviated by having two or even three prosecutors separately rate each case. The approach would also be strengthened by providing detailed definitions of the complexity levels, including the presence or absence of selected complexity factors. A candidate list of such factors is contained in Exhibit 5-1 below.
A second, more involved approach increases the precision of the complexity ratings. As in the first approach, a prosecutor would assess each case against a list of complexity factors, but each factor would be assigned a numerical value on a rating scale, and the overall complexity would be calculated by computer. For each case, a prosecutor would rate each complexity factor on its level of influence on the case.
An example of such a complexity rating system is presented in Exhibit 5-2 below. The rating system uses three levels of influence on the case valued at 0, 1, and 2. The system allows for additional nuance of each factor to be accounted for in the scoring process.
Once each factor has a rating, a user can then add the ratings across each factor to come up with an overall complexity score, which then can be grouped into a small number of overall complexity levels, such as: (1) high complexity; (2) medium complexity; and (3) low complexity. The prosecutor’s office must establish the range of values for each of these levels. For example, “low complexity” cases might range from 0-5; “medium complexity” might range from 6-10; and “high complexity” might be 11 and over.
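The summing-and-grouping step above can be sketched directly, using the example ranges from the text (0-5 low, 6-10 medium, 11 and over high). The factor names in the example ratings are illustrative assumptions.

```python
# Sketch of the summed-score approach: each factor is rated 0, 1, or 2,
# the ratings are added, and the total is mapped to a complexity level
# using the example ranges from the text (0-5 low, 6-10 medium, 11+ high).

def overall_score(ratings):
    """Sum the 0/1/2 influence ratings across all complexity factors."""
    return sum(ratings.values())

def overall_level(score):
    """Map a summed score to a complexity level (example ranges from the text)."""
    if score <= 5:
        return "low complexity"
    if score <= 10:
        return "medium complexity"
    return "high complexity"

# Example case (factor names are illustrative assumptions):
ratings = {
    "victim_injury": 1,
    "victim_intoxication": 2,
    "multiple_witnesses": 0,
    "defendant_criminal_history": 2,
    "forensic_evidence": 1,
}
score = overall_score(ratings)  # 6
level = overall_level(score)    # "medium complexity"
```

An office would substitute its own factor list, rating scale, and range boundaries.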
Simply adding the ratings across factors implies that each one has equal importance and influence. This is unlikely to be true. For example, victim intoxication may make a case substantially more complicated than other complexity factors would. Thus, prosecutors’ offices may want to consider weighting complexity factors based on their perception of how much complexity each adds to a case. Rather than using 0, 1, and 2 in the rating system, prosecutors could assign each factor a number weighted by its relevance to complexity. So, a victim’s lack of injury might be rated a 2, but victim intoxication an 8. This process allows certain factors to weigh more heavily on the overall complexity score.
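The weighted variant can be sketched as follows. The weights of 2 for lack of injury and 8 for victim intoxication follow the example in the text; the other factors and weights are illustrative assumptions.

```python
# Sketch of the weighted scoring variant: each factor carries a weight
# reflecting how much complexity it adds, and a factor contributes its
# full weight when present in the case.

FACTOR_WEIGHTS = {
    "victim_lack_of_injury": 2,   # example weight from the text
    "victim_intoxication": 8,     # example weight from the text
    "multiple_defendants": 4,     # illustrative assumption
    "reluctant_witness": 3,       # illustrative assumption
}

def weighted_score(present_factors):
    """Sum the weights of the complexity factors present in a case."""
    return sum(FACTOR_WEIGHTS[f] for f in present_factors if f in FACTOR_WEIGHTS)

weighted_score({"victim_intoxication", "reluctant_witness"})  # 11
```

Under this scheme a single heavily weighted factor, such as victim intoxication, can push a case toward a higher complexity level on its own.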
The process for adding the ratings of each factor and then assigning a level of case complexity could be automated in whatever software program an office chooses to use, so prosecutors would not need to make manual calculations. From there, only simple analysis is needed to determine the number and percentage of cases that fall into the three or four complexity categories during a particular reporting period. Next, this data can be linked to each case’s level of success to provide the percentage of cases at each complexity level with successful outcomes.
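The reporting-period analysis described above amounts to counting cases at each level and cross-tabulating against outcomes. A minimal sketch, assuming each case record reduces to a (complexity level, successful outcome) pair:

```python
# Sketch of the reporting-period analysis: the number and percentage of
# cases at each complexity level, plus the percentage of cases at each
# level with successful outcomes. Case records are illustrative.

from collections import Counter, defaultdict

def summarize(cases):
    """cases: list of (complexity_level, was_successful) pairs."""
    counts = Counter(level for level, _ in cases)
    successes = defaultdict(int)
    for level, ok in cases:
        if ok:
            successes[level] += 1
    total = len(cases)
    return {
        level: {
            "count": n,
            "pct_of_cases": round(100 * n / total, 1),
            "pct_successful": round(100 * successes[level] / n, 1),
        }
        for level, n in counts.items()
    }

cases = [("low", True), ("low", True), ("medium", False), ("high", True)]
summary = summarize(cases)
# summary["low"] -> {"count": 2, "pct_of_cases": 50.0, "pct_successful": 100.0}
```

In practice this tabulation would be run against the office’s case-management data for the reporting period rather than a hand-built list.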
The computer-calculated rating system is preferable because of its reduced subjectivity. Some offices, however, might want to start out with the simpler procedure and then switch to the computer-calculated procedure at a later date.
More details on complexity rating procedures are provided in Exhibit 5-3 below.