Abstract
Cyberinfrastructure is increasingly becoming the target of a wide spectrum of attacks, from denial of service to large-scale defacement of an organization's digital presence. Intrusion Detection Systems (IDSs) provide administrators a defensive edge over intruders launching such malicious attacks. However, given the sheer number of IDSs available, one has to objectively assess their capabilities to select an IDS that meets specific organizational requirements. A prerequisite for such an objective assessment is the implicit comparability of the IDS literature. In this study, we review the IDS literature to understand its implicit comparability from the perspective of the metrics used in the empirical evaluation of IDSs. We identified 22 metrics commonly used in the empirical evaluation of IDSs and constructed search terms to retrieve papers that mention these metrics. We manually reviewed a sample of 495 papers and found 159 of them to be relevant. We then estimated the number of relevant papers in the entire set of papers retrieved from IEEE. We found that, in the evaluation of IDSs, many different metrics are used and the trade-offs between metrics are rarely considered. In a retrospective analysis of the IDS literature, we found that the evaluation criteria have been improving over time, albeit marginally. These inconsistencies in the use of evaluation metrics may preclude the direct comparison of one IDS with another.
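The abstract mentions estimating the number of relevant papers in the full set retrieved from IEEE based on the manually reviewed sample. A minimal sketch of one way such a sample-based extrapolation could be computed follows; the total_retrieved value is a hypothetical placeholder, not a figure reported in the study, and the report itself may use a different estimation procedure.

# Sketch: extrapolating relevance from a reviewed sample to the full result set.
# Assumption: the 495-paper sample is representative of all retrieved papers.
sample_size = 495          # papers manually reviewed (from the abstract)
sample_relevant = 159      # papers judged relevant (from the abstract)
total_retrieved = 10000    # hypothetical size of the full IEEE result set

relevance_rate = sample_relevant / sample_size   # roughly 0.32
estimated_relevant = round(relevance_rate * total_retrieved)

print(f"Estimated relevant papers: {estimated_relevant} "
      f"({relevance_rate:.1%} relevance rate)")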
Publication Date
Fall 9-28-2016
Document Type
Technical Report
Department, Program, or Center
Software Engineering (GCCIS)
Recommended Citation
Munaiah, Nuthan; Meneely, Andrew; Wilson, Ryan; and Short, Benjamin, "Are Intrusion Detection Studies Evaluated Consistently? A Systematic Literature Review" (2016). Accessed from
https://repository.rit.edu/article/1810
Campus
RIT – Main Campus
Comments
This is a preliminary, unpublished technical report.