To ensure that resilience assessment methods and tools are accurate and realistic, it is essential to have access to field data that are unbiased and representative. Several initiatives now offer access to malware samples for research purposes, and papers have been published in which techniques are assessed using these samples. The definition of benchmarking datasets is the next step ahead. In this presentation, we report on the lessons learned while collecting and analyzing malware samples in a large-scale collaborative effort. Three different environments are described, and their integration is used to highlight the open issues that remain with such data collection. Three main lessons are offered to the reader. First, the creation of representative malware sample datasets is probably an impossible task. Second, false negative alerts are not what we think they are. Third, false positive alerts exist where we were not used to seeing them. These three lessons must be taken into account by those who want to assess the resilience of techniques with respect to malicious faults.
These are the results of joint work carried out in the context of the European-funded WOMBAT project, together with partners from Hispasec Sistemas, the EURECOM institute, and Symantec Research Labs Europe (see http://wombat-project.eu/ for more on the WOMBAT project).