Abstract: Medical data and biomedical studies are often imbalanced, with a majority of observations coming from healthy or normal subjects. In the presence of such imbalance, agreement among multiple raters measured by Fleiss’ Kappa (FK) produces counterintuitive results. Simulations suggest that the degree to which FK misrepresents the observed agreement may be directly related to the degree of imbalance in the data. We propose a new method for evaluating agreement among multiple raters that is not affected by imbalance, A-Kappa (AK). We compare the performance of AK and FK by simulating various degrees of imbalance and illustrate the use of the proposed method with real data. The proposed agreement index may provide additional insight by relating its magnitude to a probability scale, whereas existing indices are interpreted arbitrarily. The new method provides not only a measure of overall agreement but also an agreement index for each individual item. Computing both AK and FK may further shed light on the data and be useful in interpreting and presenting the results.
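As a minimal illustration of the imbalance issue described above (not the authors' code or simulation design), the sketch below simulates multi-rater ratings with a dominant "normal" category and computes the standard Fleiss' Kappa; the prevalence, rater accuracy, and sample sizes are assumed values chosen only for demonstration, and the proposed AK index is not implemented here since its definition appears in the body of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def fleiss_kappa(counts):
    """counts: (N subjects) x (k categories) matrix of rater counts per category."""
    counts = np.asarray(counts, dtype=float)
    N, k = counts.shape
    n = counts.sum(axis=1)[0]                    # raters per subject (assumed constant)
    p_j = counts.sum(axis=0) / (N * n)           # overall category proportions
    P_i = (np.sum(counts**2, axis=1) - n) / (n * (n - 1))  # per-subject agreement
    P_bar, P_e = P_i.mean(), np.sum(p_j**2)      # observed vs. chance agreement
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical setup: N subjects, n raters, two categories, 95% "normal" prevalence,
# raters matching the true status 90% of the time.
N, n, prevalence, accuracy = 500, 5, 0.95, 0.9
true_normal = rng.random(N) < prevalence
agree_truth = rng.random((N, n)) < accuracy
assigned_normal = np.where(true_normal[:, None], agree_truth, ~agree_truth)
counts = np.column_stack([assigned_normal.sum(1), n - assigned_normal.sum(1)])

P_i = (np.sum(counts.astype(float)**2, axis=1) - n) / (n * (n - 1))
print("Mean observed per-subject agreement:", round(P_i.mean(), 3))
print("Fleiss' Kappa:", round(fleiss_kappa(counts), 3))
# Despite high raw agreement, Fleiss' Kappa is typically modest here because the
# chance-agreement term P_e is inflated by the dominant "normal" category.
```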