This article presents a study of three cross-validation metrics used to select the optimal parameters of a support vector machine (SVM) classifier. The study focuses on problems in which the data is non-separable and imbalanced, as is often the case for experimental and clinical data. The three metrics selected in this work are the area under the ROC curve, accuracy, and balanced accuracy. As a test example, the study investigates the optimal parameters for an SVM classification model of hip fracture. The hip fracture data is obtained from a fully parameterized finite element model. Because the data is computational, fully separable sets of data (fracture and safe) can be obtained. By projection onto a lower-dimensional subspace, the data becomes non-separable and is used to construct the SVM. Knowledge of the separable case provides a comparison metric (the weighted likelihood) that would be unknown if only clinical data were used. The performance of the various metrics is compared for several levels of separability, class imbalance, and training-sample size. A probabilistic SVM is used to compute the probability of fracture.
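The abstract does not specify an implementation. As a hedged illustration of the comparison it describes, the sketch below uses scikit-learn to select SVM hyperparameters under each of the three metrics (accuracy, balanced accuracy, and area under the ROC curve) on synthetic imbalanced, non-separable data standing in for the hip-fracture set; the dataset parameters and search grid are assumptions, not values from the paper. Setting `probability=True` yields a Platt-scaled probabilistic SVM, analogous to the probability-of-fracture estimate mentioned in the abstract.

```python
# Hypothetical sketch (not the authors' code): compare three model-selection
# metrics for an SVM on imbalanced, non-separable synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for the hip-fracture data: 90/10 class imbalance,
# moderate class overlap so the classes are non-separable (assumed values).
X, y = make_classification(n_samples=600, n_features=5, n_informative=3,
                           weights=[0.9, 0.1], class_sep=0.8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}  # assumed search grid
selected = {}
for metric in ("accuracy", "balanced_accuracy", "roc_auc"):
    # Each metric drives the cross-validated hyperparameter search,
    # so the three searches may select different (C, gamma) pairs.
    search = GridSearchCV(SVC(kernel="rbf", probability=True), grid,
                          scoring=metric, cv=5).fit(X_tr, y_tr)
    selected[metric] = search.best_params_

# Probabilistic SVM (Platt scaling): probability of the positive class,
# analogous to the probability of fracture.
p_fracture = search.best_estimator_.predict_proba(X_te)[:, 1]

for metric, params in selected.items():
    print(metric, params)
```

On imbalanced data, plain accuracy tends to favor parameters that mostly predict the majority ("safe") class, which is one reason the abstract compares it against balanced accuracy and the ROC area.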