

Once it is established that the bug tracking system is, in effect, a system for measuring attributes, the next step is to examine the concepts of accuracy and precision as they relate to the situation. First, the analyst should confirm that the data are indeed attribute data. The assignment of a code (that is, the placement of a defect into a category) is a decision that characterizes the defect with an attribute. Either a category is assigned to a defect correctly, or it is not. Similarly, the correct source location is either assigned to the defect or it is not. These are “yes or no” and “correct assignment or incorrect assignment” answers.

Repeatability and reproducibility are the components of precision in an attribute measurement systems analysis, and it is wise to determine first whether there is a precision problem, and which component is driving it. If repeatability is the main problem, the evaluators are confused or indecisive about certain criteria. If reproducibility is the problem, the evaluators hold strong opinions about certain conditions, but those opinions differ. If the issues involve only a few evaluators, the problems may simply require a little individual attention; if they appear across many evaluators, the problems are likely systemic or procedural. In either case, training or job aids can be tailored to specific individuals or to all evaluators, depending on how many evaluators are guilty of imprecise attribution.

Because conducting an attribute agreement analysis can be tedious, expensive and generally uncomfortable for everyone involved (the analysis is simple compared with the execution), it is best to take a moment to understand exactly what should be done and why. Before designing an attribute agreement analysis and selecting the appropriate scenarios, an analyst should therefore seriously consider auditing the database to determine whether past events have been coded correctly.
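As a concrete sketch of what such an audit might look like, the snippet below assumes a sample of past defect records has been re-examined by an expert reviewer and exported with hypothetical columns defect_id, assigned_category and expert_category; it simply measures how often the original coding matches the expert's, overall and per category. The file name and column names are illustrative assumptions, not part of any particular bug tracking tool.

```python
import pandas as pd

# Hypothetical audit sample: past defect records re-classified by an
# expert reviewer. File and column names are illustrative only.
audit = pd.read_csv("defect_audit_sample.csv")
# Expected columns: defect_id, assigned_category, expert_category

# A record is considered correctly coded when the original category
# matches the category the expert assigned on re-inspection.
audit["correct"] = audit["assigned_category"] == audit["expert_category"]

print(f"Correctly coded records: {audit['correct'].mean():.1%}")

# Miscoding rates by category often point at the specific criteria that
# confuse evaluators, which helps focus the later repeatability and
# reproducibility assessment on the scenarios that matter.
by_category = (
    audit.groupby("assigned_category")["correct"]
         .agg(["mean", "count"])
         .rename(columns={"mean": "share_correct", "count": "records"})
)
print(by_category.sort_values("share_correct"))
```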

The best way to do this is to audit the database first and then use the results of that audit to conduct a focused and streamlined repeatability and reproducibility assessment. The precision of a measurement system is analyzed by segmenting it into two core components: repeatability (the ability of a single evaluator to assign the same value or attribute several times under the same conditions) and reproducibility (the ability of multiple evaluators to agree on the attribute assigned to a given set of circumstances). With an attribute measurement system, repeatability or reproducibility problems necessarily create accuracy problems as well, and if overall accuracy, repeatability and reproducibility are known, bias can also be detected in situations where the decisions are consistently wrong. Attribute agreement analysis can be an excellent tool for uncovering the sources of inaccuracy in a bug tracking system, but it must be used with great care, consideration and minimal complexity, if it is used at all.
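To make the two components concrete, here is a minimal sketch of how within-appraiser and between-appraiser agreement could be estimated from a small study in which several evaluators each classify the same set of defects twice. The layout (one row per classification decision, with hypothetical columns defect_id, evaluator, trial and category) and the simple percent-agreement statistics are assumptions for illustration; a real analysis would typically also report agreement against a known standard.

```python
import pandas as pd

# Hypothetical attribute agreement study: each evaluator classifies the
# same defects in two separate trials. Column names are illustrative.
study = pd.read_csv("attribute_agreement_study.csv")
# Expected columns: defect_id, evaluator, trial, category

# Repeatability (within-appraiser agreement): for each evaluator, the
# share of defects assigned the same category in both trials.
repeatability = (
    study.groupby(["evaluator", "defect_id"])["category"]
         .nunique()
         .eq(1)
         .groupby(level="evaluator")
         .mean()
)
print("Within-appraiser agreement:")
print(repeatability)

# Reproducibility (between-appraiser agreement): the share of defects on
# which every evaluator chose the same category in the first trial.
first_trial = study[study["trial"] == 1]
all_agree = first_trial.groupby("defect_id")["category"].nunique().eq(1)
print(f"Between-appraiser agreement: {all_agree.mean():.1%}")
```

With figures like these in hand, the patterns described above (confusion about criteria versus differing opinions, isolated versus widespread problems) become much easier to spot and to act on.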
