Choi et al. Cybersecurity (2020) 3:15
https://doi.org/10.1186/s42400-020-00055-5

SURVEY - Open Access

Using deep learning to solve computer security challenges: a survey

Yoon-Ho Choi1,2, Peng Liu1*, Zitong Shang1, Haizhou Wang1, Zhilong Wang1, Lan Zhang1, Junwei Zhou3 and Qingtian Zou1

Abstract
Although using machine learning techniques to solve computer security challenges is not a new idea, the rapidly emerging Deep Learning technology has recently triggered a substantial amount of interest in the computer security community. This paper seeks to provide a dedicated review of the very recent research works on using Deep Learning techniques to solve computer security challenges. In particular, the review covers eight computer security problems being solved by applications of Deep Learning: security-oriented program analysis, defending return-oriented programming (ROP) attacks, achieving control-flow integrity (CFI), defending network attacks, malware classification, system-event-based anomaly detection, memory forensics, and fuzzing for software security.

Keywords: Deep learning, Security-oriented program analysis, Return-oriented programming attacks, Control-flow integrity, Network attacks, Malware classification, System-event-based anomaly detection, Memory forensics, Fuzzing for software security

Introduction
Using machine learning techniques to solve computer security challenges is not a new idea. For example, in the year of 1998, Ghosh and others in (Ghosh et al. 1998) proposed to train a (traditional) neural network based anomaly detection scheme (i.e., detecting anomalous and unknown intrusions against programs); in the year of 2003, Hu and others in (Hu et al. 2003) and Heller and others in (Heller et al. 2003) applied Support Vector Machines to build anomaly detection schemes (e.g., detecting anomalous Windows registry accesses).

The machine-learning-based computer security research investigations during 1990-2010, however, have not been very impactful. For example, to the best of our knowledge, none of the machine learning applications proposed in (Ghosh et al. 1998; Hu et al. 2003; Heller et al. 2003) has been incorporated into a widely deployed intrusion-detection commercial product.

*Correspondence: pxl20@psu.edu
1 The Pennsylvania State University, Pennsylvania, USA
Full list of author information is available at the end of the article

Regarding why they have not been very impactful, although researchers in the computer security community seem to have different opinions, the following remarks by Sommer and Paxson (Sommer and Paxson 2010) (in the context of intrusion detection) have resonated with many researchers:

• Remark A: “It is crucial to have a clear picture of what problem a system targets: what specifically are the attacks to be detected? The more narrowly one can define the target activity, the better one can tailor a detector to its specifics and reduce the potential for misclassifications.” (Sommer and Paxson 2010)

• Remark B: “If one cannot make a solid argument for the relation of the features to the attacks of interest, the resulting study risks foundering on serious flaws.” (Sommer and Paxson 2010)

These insightful remarks, though well aligned with the machine learning techniques used by security researchers during 1990-2010, could become a less significant concern with Deep Learning (DL), a rapidly emerging machine learning technology, due to the following observations. First, Remark A implies that even if the same machine
learning method is used, one algorithm employing a cost function that is based on a more specifically defined target attack activity could perform substantially better than another algorithm deploying a less specifically defined cost function. This could be a less significant concern with DL, since a few recent studies have shown that even if the target attack activity is not narrowly defined, a DL model could still achieve very high classification accuracy. Second, Remark B implies that if feature engineering is not done properly, the trained machine learning models could be plagued by serious flaws. This could be a less significant concern with DL, since many deep learning neural networks require less feature engineering than conventional machine learning techniques.

As stated in the NSCAI Interim Report for Congress (2019), “DL is a statistical technique that exploits large quantities of data as training sets for a network with multiple hidden layers, called a deep neural network (DNN). A DNN is trained on a dataset, generating outputs, calculating errors, and adjusting its internal parameters. Then the process is repeated hundreds of thousands of times until the network achieves an acceptable level of performance. It has proven to be an effective technique for image classification, object detection, speech recognition, and natural language processing–problems that challenged researchers for decades. By learning from data, DNNs can solve some problems much more effectively, and also solve problems that were never solvable before.”

Now let’s take a high-level look at how DL could make it substantially easier to overcome the challenges identified by Sommer and Paxson (Sommer and Paxson 2010). First, one major advantage of DL is that it makes learning algorithms less dependent on feature engineering. This characteristic of DL makes it easier to overcome the challenge indicated by Remark B. Second, another major advantage of DL is that it could achieve high classification accuracy with minimal domain knowledge. This characteristic of DL makes it easier to overcome the challenge indicated by Remark A.

Key observation. The above discussion indicates that DL could be a game changer in applying machine learning techniques to solving computer security challenges. Motivated by this observation, this paper seeks to provide a dedicated review of the very recent research works on using Deep Learning techniques to solve computer security challenges. It should be noticed that since this
paper aims to provide a dedicated review, non-deep-learning techniques and their security applications are out of the scope of this paper.

Fig. 1 Overview of the four-phase workflow

The remainder of the paper is organized as follows. In “A four-phase workflow framework can summarize the existing works in a unified manner” section, we present a four-phase workflow framework which we use to summarize the existing works in a unified manner. In the “A closer look at applications of deep learning in solving security-oriented program analysis challenges” through “A closer look at applications of deep learning in security-oriented fuzzing” sections, we provide a review of the eight computer security problems being solved by applications of Deep Learning, respectively. In “Discussion” section, we discuss certain similarities and dissimilarities among the existing works. In “Further areas of investigation” section, we mention four further areas of investigation. In “Conclusion” section, we conclude the paper.

A four-phase workflow framework can summarize the existing works in a unified manner
We found that a four-phase workflow framework can provide a unified way to summarize all the research works surveyed by us. In particular, we found that each work surveyed by us employs a particular workflow when using machine learning techniques to solve a computer security challenge, and we found that each workflow consists of two or more phases. By “a unified way”, we mean that every workflow surveyed by us is essentially an instantiation of a common workflow pattern which is shown in Fig. 1.

Definitions of the four phases
The four phases, shown in Fig. 1, are defined as follows. To make the definitions of the four phases more tangible, we use a running example to illustrate each of the four phases.

Phase I. (Obtaining the raw data) In this phase, certain raw data are collected. Running Example: When Deep Learning is used to detect suspicious events in a Hadoop distributed file system (HDFS), the raw data are usually the events (e.g., a block is allocated, read, written, replicated, or deleted) that have happened to each block. Since these events are recorded in Hadoop logs, the log files hold the raw data. Since each event is uniquely identified by a particular (block ID, timestamp) tuple, we could simply view the raw data as n event sequences. Here n is the total number of blocks in the HDFS. For example, the raw data collected in Xu et al. (2009) consist in total of 11,197,954 events. Since 575,139 blocks were in the HDFS, there were 575,139 event sequences in the raw data, and on average each event sequence had 19 events. One such event sequence is shown as follows:

081110 112428 31 INFO dfs.FSNamesystem: BLOCK* NameSystem.allocateBlock: /user/root/rand/_temporary/_task_200811101024_0001_m_001649_0/part-01649.blk_-1033546237298158256
081110 112428 9602 INFO dfs.DataNode$DataXceiver: Receiving block blk_-1033546237298158256 src: /10.250.13.240:54015 dest: /10.250.13.240:50010
081110 112428 9982 INFO dfs.DataNode$DataXceiver: Receiving block blk_-1033546237298158256 src: /10.250.13.240:52837 dest: /10.250.13.240:50010
081110 112432 9982 INFO dfs.DataNode$DataXceiver: writeBlock blk_-1033546237298158256 received exception java.io.IOException: Could not read from stream

Phase II. (Data preprocessing) Both Phase II and Phase III aim to properly extract and represent the useful information held in the raw data collected in Phase I. Both Phase II and Phase III are closely related to feature engineering. A key difference between Phase II and Phase III is that Phase III is completely dedicated to representation learning, while Phase II is focused on all the information extraction and data processing operations that are not based on representation learning. Running Example: Let’s revisit the aforementioned HDFS. Each recorded event is described by unstructured text. In Phase II, the unstructured text is parsed into a data structure that shows the event type and a list of event variables in (name, value) pairs. Since there are 29 types of events in the HDFS, each event is represented by an integer from 1 to 29 according to its type. In this way, the aforementioned example event sequence can be transformed to: 22, 5, 5, 7
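To make this Phase II step concrete, the following minimal sketch (ours, not DeepLog’s released code) shows one way to map unstructured log lines to event-type integers with regular expressions. The three patterns and their type IDs simply mirror the running example above (types 22, 5, and 7); a real log parser such as Spell would cover all 29 HDFS event types.

```python
import re

# Hypothetical sketch of Phase II: map unstructured HDFS log lines to
# event-type integers. The patterns and IDs mirror the running example;
# a full parser would cover all 29 HDFS event types.
EVENT_PATTERNS = [
    (22, re.compile(r"NameSystem\.allocateBlock")),
    (7,  re.compile(r"writeBlock .* received exception")),
    (5,  re.compile(r"Receiving block blk_")),
]

def parse_event_type(log_line: str) -> int:
    for event_type, pattern in EVENT_PATTERNS:
        if pattern.search(log_line):
            return event_type
    return 0  # placeholder for an unmatched template

# A session's log lines then become an event-type sequence such as [22, 5, 5, 7].
```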
Phase III. (Representation learning) As stated in Bengio et al. (2013), representation learning means “learning representations of the data that make it easier to extract useful information when building classifiers or other predictors.” Running Example: Let’s revisit the same HDFS. Although DeepLog (Du et al. 2017) directly employed one-hot vectors to represent the event types without representation learning, if we view an event type as a word in a structured language, one may actually use the word embedding technique to represent each event type. It should be noticed that the word embedding technique is a representation learning technique.

Phase IV. (Classifier learning) This phase aims to build specific classifiers or other predictors through Deep Learning. Running Example: Let’s revisit the same HDFS. DeepLog (Du et al. 2017) used Deep Learning to build a stacked LSTM neural network for anomaly detection. For example, consider the event sequence {22,5,5,5,11,9,11,9,11,9,26,26,26}, in which each integer represents the event type of the corresponding event in the event sequence. Given a window size h = 4, the input sample and output label pairs used to train DeepLog will be: {22,5,5,5 → 11}, {5,5,5,11 → 9}, {5,5,11,9 → 11}, and so forth. In the detection stage, DeepLog examines each individual event. It determines whether an event is treated as normal or abnormal according to whether the event’s type is predicted by the LSTM neural network, given the history of event types. If the event’s type is among the top g predicted types, the event is treated as normal; otherwise, it is treated as abnormal.
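The window-slicing step just described is easy to state in code. Below is a minimal sketch, not DeepLog’s implementation: it turns an event-type sequence into (history, next-event) training pairs with window size h, and shows the top-g normality rule used at detection time (g is a tunable cutoff; the value below is illustrative).

```python
import numpy as np

def make_training_pairs(seq, h=4):
    """Slide a window of size h over an event-type sequence and emit
    (history, next-event) pairs, as in the DeepLog running example."""
    X, y = [], []
    for i in range(len(seq) - h):
        X.append(seq[i:i + h])
        y.append(seq[i + h])
    return np.array(X), np.array(y)

X, y = make_training_pairs([22, 5, 5, 5, 11, 9, 11, 9, 11, 9, 26, 26, 26])
# X[0] = [22, 5, 5, 5], y[0] = 11; X[1] = [5, 5, 5, 11], y[1] = 9; ...

def is_normal(probs, observed_type, g=9):
    """Detection-stage rule: an event is normal if its type is among the
    model's top-g predicted types for this history (g is tunable)."""
    top_g = np.argsort(probs)[::-1][:g]
    return observed_type in top_g
```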
Using the four-phase workflow framework to summarize some representative research works
In this subsection, we use the four-phase workflow framework to summarize two representative works for each security problem. System security includes many sub research topics. However, not every research topic is suitable for deep-learning-based methods, due to their intrinsic characteristics. Among the security research subjects that can be combined with deep learning, some have undergone intensive research in recent years, while others are just emerging. We notice that there are five mainstream research directions in system security. This paper mainly focuses on system security, so other mainstream research directions (e.g., deepfake) are out of scope. We therefore choose these five widely noticed research directions, and three emerging research directions, in our survey:

1. In security-oriented program analysis, malware classification (MC), system-event-based anomaly detection (SEAD), memory forensics (MF), and defending network attacks, deep learning based methods have already undergone intensive research.
2. In defending return-oriented programming (ROP) attacks, control-flow integrity (CFI), and fuzzing, deep learning based methods are emerging research topics.

We select two representative works for each research topic in our survey. Our criteria for selecting papers mainly include: 1) Pioneer (one of the first papers in this field); 2) Top (published at a top conference or journal); 3) Novelty; 4) Citation (the paper is highly cited); 5) Effectiveness (the paper’s results are strong); 6) Representative (the paper is a representative work for a branch of the research direction). Table 1 lists the reasons why we chose each paper, ordered according to their importance.

The summary is shown in Table 2. There are three columns in the table. In the first column, we list the eight security problems: security-oriented program analysis, defending return-oriented programming (ROP) attacks, control-flow integrity (CFI), defending network attacks (NA), malware classification (MC), system-event-based anomaly detection (SEAD), memory forensics (MF), and fuzzing for software security. In the second column, we list the two very recent representative works for each security problem. In the “Summary” column, we sequentially describe how the four phases are deployed in each work; then we list the evaluation results for each work in terms of accuracy (ACC), precision (PRC), recall (REC), F1 score (F1), false-positive rate (FPR), and false-negative rate (FNR), respectively.
Table 1 List of criteria we used to choose the representative work for each research topic (criteria ordered by importance)

RFBNN (Shin et al. 2015): Pioneer, Top, Novelty, Citation
EKLAVYA (Chua et al. 2017): Top, Novelty, Citation, N/A
ROPNN (Li et al. 2018): Pioneer, Novelty, Effectiveness, N/A
HeNet (Chen et al. 2018): Effectiveness, Novelty, Citation, N/A
Barnum (Yagemann et al. 2019): Pioneer, Novelty, N/A, N/A
CFG-CNN (Phan et al. 2017): Representative, N/A, N/A, N/A
50b(yte)-CNN (Millar et al. 2018): Novelty, Effectiveness, N/A, N/A
PCCN (Zhang et al. 2019): Novelty, Effectiveness, N/A, N/A
Rosenberg (Rosenberg et al. 2018): Novelty, Effectiveness, Top, Representative
DeLaRosa (De La Rosa et al. 2018): Novelty, Representative, N/A, N/A
DeepLog (Du et al. 2017): Pioneer, Top, Citation, N/A
DeepMem (Song et al. 2018): Pioneer, Top, N/A, N/A
NEUZZ (Shi and Pei 2019): Novelty, Top, Effectiveness, N/A
Learn & Fuzz (Godefroid et al. 2017): Pioneer, Novelty, Top, N/A

Table 2 Solutions using Deep Learning for eight security problems. The metrics in the Evaluation column include accuracy (ACC), precision (PRC), recall (REC), F1 score (F1), false positive rate (FPR), and false negative rate (FNR)

Security Oriented Program Analysis (Shin et al. 2015; Chua et al. 2017; Guo et al. 2019; Xu et al. 2017)

RFBNN (Shin et al. 2015)
Phase I: The dataset comes from a previous paper (Bao et al. 2014) and consists of 2200 separate binaries. 2064 of the binaries were for Linux, obtained from the coreutils, binutils, and findutils packages; the remaining 136 for Windows consist of binaries from popular open-source projects. Half of the binaries were for x86, and the other half for x86-64.
Phase II: They extract fixed-length subsequences (1000-byte chunks) from the code sections of binaries; then “one-hot encoding” converts each byte into a Z256 vector.
Phase III: N/A
Phase IV: Bi-directional RNN
Evaluation: ACC: 98.4%; REC: 0.97; F1: 0.98; PRC, FPR, FNR: N/A

EKLAVYA (Chua et al. 2017)
Phase I: They adopted the source code from previous work (Shin et al. 2015) as their raw data, then obtained two datasets by using two commonly used compilers, gcc and clang, with optimization levels ranging from O0 to O3 for both x86 and x64. They obtained the ground truth for the function arguments by parsing the DWARF debug information. Next, they extracted functions from the binaries and removed functions which are duplicates of other functions in the dataset. Finally, they matched each caller snippet with its callee body.
Phase II: Tokenizing the hexadecimal value of each instruction.
Phase III: Word2vec technique to compute word embeddings.
Phase IV: RNN
Evaluation: ACC: 81.0%; PRC, REC, F1, FPR, FNR: N/A

Defending Return Oriented Programming Attacks (Li et al. 2018; Chen et al. 2018; Zhang et al. 2019)

ROPNN (Li et al. 2018)
Phase I: The data is a set of gadget chains obtained from existing programs. A gadget searching tool, ROPGadget, is used to find available gadgets. Gadgets are chained based on whether the produced gadget chain is executable on a CPU emulator. The raw data is represented in hexadecimal form of instruction sequences.
Phase II: Form one-hot vectors for bytes.
Phase III: N/A
Phase IV: 1-D CNN
Evaluation: ACC: 99.9%; PRC: 0.99; F1: 0.01; REC, FPR, FNR: N/A

HeNet (Chen et al. 2018)
Phase I: Data is acquired from Intel PT, a processor trace tool that can log control flow data. The Taken/Not-Taken (TNT) packet and Target IP (TIP) packet are the two packets of interest. Logged as binary numbers, information on executed branches can be obtained from TNT, and the binary executed can be obtained from TIP.
Phase II: The binary sequences are transferred into sequences of values between 0-255, called pixels, byte by byte. Given the pixel sequences, the whole sequence is sliced and reshaped to form sequences of images for neural network training.
Phase III: Word2vec technique to compute word embeddings.
Phase IV: DNN
Evaluation: ACC: 98.1%; PRC: 0.99; REC: 0.96; F1: 0.97; FPR: 0.01; FNR: 0.04

Achieving Control Flow Integrity (Yagemann et al. 2019; Phan et al. 2017; Zhang et al. 2019)

Barnum (Yagemann et al. 2019)
Phase I: The raw data, which is the exact sequence of instructions executed, was generated by combining the program binary, acquired immediately before the program opens a document, and the Intel® PT trace. The Intel® PT built-in filtering options are set to CR3 and current privilege level (CPL), which traces only the program activity in user space.
Phase II: The raw instruction sequences are summarized into basic blocks with IDs assigned, and are then sliced into manageable subsequences with a fixed window size of 32, found experimentally. Only sequences ending on indirect calls, jumps, and returns are analyzed, since control-flow hijacking attacks always occur there. The label is the next BBID in the sequence.
Phase III: N/A
Phase IV: LSTM
Evaluation: PRC: 0.98; REC: 1.00; F1: 0.98; FPR: 0.98; FNR: 0.02; ACC: N/A

CFG-CNN (Phan et al. 2017)
Phase I: The raw data is an instruction-level control-flow graph constructed from program assembly code by an algorithm proposed by the authors. In the CFG, one vertex corresponds to one instruction and one directed edge corresponds to an execution path from one instruction to another. The program sets for the experiments are obtained from the popular programming contest CodeChef.
Phase II: Since each vertex of the CFG represents an instruction with complex information that can be viewed from different aspects, including instruction name, type, operands, etc., a vertex is represented as the sum of a set of real-valued vectors corresponding to the number of views (e.g., addq $32, %rsp is converted to a linear combination of randomly assigned vectors for addq, value, and reg). The CFG is then sliced by a set of fixed-size windows sliding through the entire graph to extract local features on different levels.
Phase III: N/A
Phase IV: DGCNN with different numbers of views and with or without operands
Evaluation: ACC: 84.1%; PRC, REC, F1, FPR, FNR: N/A

Defending Network Attacks (Millar et al. 2018; Zhang et al. 2019; Yuan et al. 2017; Varenne et al. 2019; Yin et al. 2017; Ustebay et al. 2019; Faker and Dogdu 2019)

50b(yte)-CNN (Millar et al. 2018)
Phase I: The open dataset UNSW-NB15 is used. First, the tcpdump tool is utilised to capture 100 GB of raw traffic (i.e., PCAP files) containing benign activities and 9 types of attacks. The Argus and Bro-IDS (now called Zeek) analysis tools are then used, and twelve algorithms are developed, to generate 49 features in total with the class label. In the end, the total number of data samples is 2,540,044, stored in CSV files.
Phase II: The first 50 bytes of each network traffic flow are picked out, and each byte is directly used as one feature input to the neural network.
Phase III: N/A
Phase IV: CNN with 2 hidden fully connected layers
Evaluation: F1: 0.93; ACC, PRC, REC, FPR, FNR: N/A

PCCN (Zhang et al. 2019)
Phase I: The open dataset CICIDS2017, which contains benign activities and 14 types of attacks, is used. Background benign network traffic is generated by profiling the abstract behavior of human interactions. Raw data are provided as PCAP files, and the results of the network traffic analysis using CICFlowMeter are provided as CSV files. In the end, the dataset contains 3,119,345 data samples and 83 features categorized into 15 classes (1 normal + 14 attacks).
Phase II: A total of 1,168,671 flow data samples, including 12 types of attack activities, are extracted from the original dataset. The flow data are then processed and visualized into grey-scale 2D graphs. The visualization method is not specified.
Phase III: N/A
Phase IV: Parallel cross CNN
Evaluation: PRC: 0.99; F1: 0.99; ACC, REC, FPR, FNR: N/A

Malware Classification (De La Rosa et al. 2018; Saxe and Berlin 2015; Kolosnjaji et al. 2017; McLaughlin et al. 2017; Tobiyama et al. 2016; Dahl et al. 2013; Nix and Zhang 2017; Kalash et al. 2018; Cui et al. 2018; David and Netanyahu 2015; Rosenberg et al. 2018; Xu et al. 2018)

Rosenberg (Rosenberg et al. 2018)
Phase I: The Android dataset has the latest malware families and their variants, each with the same number of samples. The samples are labeled by VirusTotal. Then Cuckoo Sandbox is used to extract dynamic features (API calls) and static features (strings). To avoid anti-forensic samples, they applied YARA rules and removed sequences with fewer than 15 API calls. After preprocessing and balancing the number of benign samples, the dataset has 400,000 valid samples.
Phase II: Long sequences cause out-of-memory errors during LSTM model training, so they use a sliding window with a fixed size and pad shorter sequences with zeros. One-hot encoding is applied to API calls. For the static string features, they defined a vector of 20,000 Boolean values indicating the most frequent strings in the entire dataset: if a sample contains a given string, the corresponding value in the vector is assigned 1, and 0 otherwise.
Phase III: N/A
Phase IV: They used RNN, BRNN, LSTM, Deep LSTM, BLSTM, Deep BLSTM, GRU, bidirectional GRU, fully-connected DNN, and 1D CNN in their experiments.
Evaluation: ACC: 98.3%; PRC, REC, F1, FPR, FNR: N/A

DeLaRosa (De La Rosa et al. 2018)
Phase I: The Windows dataset is from Reversing Labs, including XP, 7, 8, and 10 for both 32-bit and 64-bit architectures, gathered over a span of twelve years (2006-2018). They selected nine malware families in their dataset and extracted static features in terms of byte, basic, and assembly features.
Phase II: For byte-level features, they used a sliding window to get the histogram of the bytes and compute the associated entropy in a window. For basic features, they created a fixed-sized feature vector given either a list of ASCII strings, or import and metadata information extracted from the PE header (strings are hashed, and a histogram of these hashes is calculated by counting the occurrences of each value). For assembly features, the disassembled code generated by Radare2 is parsed and transformed into graph-like data structures such as call graphs, control flow graphs, and instruction flow graphs.
Phase III: N/A
Phase IV: N/A
Evaluation: ACC: 90.1%; PRC, REC, F1, FPR, FNR: N/A

System Event Based Anomaly Detection (Du et al. 2017; Meng et al. 2019; Das et al. 2018; Brown et al. 2018; Zhang et al. 2019; Bertero et al. 2017)

DeepLog (Du et al. 2017)
Phase I: More than 24 million raw log entries with a size of 2412 MB were recorded from a 203-node HDFS. Over 11 million log entries of 29 types were parsed, which were further grouped into 575,061 sessions according to block identifier. These sessions were manually labeled as normal or abnormal by HDFS experts. The constructed HDFS dataset contains 575,061 sessions of logs, among which 16,838 sessions were labeled as anomalous.
Phase II: The raw log entries are parsed into different log types using Spell (Du and Li 2016), which is based on longest common subsequence. There are 29 log types in total in the HDFS dataset.
Phase III: DeepLog directly utilized one-hot vectors to represent the 29 log keys, without representation learning.
Phase IV: A stacked LSTM with two hidden LSTM layers.
Evaluation: PRC: 0.95; REC: 0.96; F1: 0.96; ACC, FPR, FNR: N/A

LogAnom (Meng et al. 2019)
Phase I: LogAnom also used the HDFS dataset, the same as DeepLog.
Phase II: The raw log entries are parsed into different log templates using FT-Tree (Zhang et al. 2017) according to the frequent combinations of log words. There are 29 log templates in total in the HDFS dataset.
Phase III: LogAnom employed Word2Vec to represent the extracted log templates with more semantic information.
Phase IV: Two LSTM layers with 128 neurons.
Evaluation: PRC: 0.97; REC: 0.94; F1: 0.96; ACC, FPR, FNR: N/A

Memory Forensics (Song et al. 2018; Petrik et al. 2018; Michalas and Murray 2017; Dai et al. 2018)

DeepMem (Song et al. 2018)
Phase I: 400 memory dumps were collected on a Windows 7 x86 SP1 virtual machine while simulating various random user actions and forcing the OS to randomly allocate objects. The size of each dump is 1 GB.
Phase II: Construct a memory graph from the memory dumps, where each node represents a segment between two pointers and an edge is created if two nodes are neighbors.
Phase III: Each node is represented by a latent numeric vector from the embedding network.
Phase IV: Fully Connected Network (FCN) with ReLU layers.
Evaluation: PRC: 0.99; REC: 0.99; F1: 0.99; FPR: 0.01; FNR: 0.01; ACC: N/A

MDMF (Petrik et al. 2018)
Phase I: Create a dataset of benign host memory snapshots running normal, non-compromised software, including software that executes in many of the malicious snapshots. The benign snapshot is extracted from memory after ample time has passed for the chosen programs to open. By generating samples in parallel to the separate malicious environment, the benign memory snapshot dataset is created.
Phase II: Various representations of the memory snapshots, including byte sequence and image, without relying on domain knowledge of the OS.
Phase III: N/A
Phase IV: Recurrent Neural Network with LSTM cells, and a Convolutional Neural Network composed of multiple layers, including pooling and fully connected layers, for image data.
Evaluation: ACC: 98.0%; PRC, REC, F1, FPR, FNR: N/A

Fuzzing (Wang et al. 2019; Shi and Pei 2019; Böttinger et al. 2018; Godefroid et al. 2017; Rajpal et al. 2017)

Learn & Fuzz (Godefroid et al. 2017)
Phase I: The raw data are about 63,000 non-binary PDF objects, sliced in fixed size, extracted from 534 PDF files that were provided by the Windows fuzzing team and were previously used for prior extended fuzzing of the Edge PDF parser.
Phase II: N/A
Phase III: N/A
Phase IV: Char-RNN
Evaluation: F1: 0.93; ACC, PRC, REC, FPR, FNR: N/A (see table note)

NEUZZ (Shi and Pei 2019)
Phase I: For each program tested, the raw data is collected by running AFL-2.52b on a single-core machine for one hour. The training data are byte-level input files generated by AFL, and the labels are bitmaps corresponding to the input files. For the experiments, NEUZZ is implemented on 10 real-world programs, the LAVA-M bug dataset, and the CGC dataset.
Phase II: N/A
Phase III: N/A
Phase IV: NN
Evaluation: F1: 0.93; ACC, PRC, REC, FPR, FNR: N/A (see table note)

Table note: Deep Learning metrics are often not available in fuzzing papers. Typical fuzzing metrics used for evaluations are code coverage, pass rate, and bugs.

Methodology for reviewing the existing works
Data representation (or feature engineering) plays an important role in solving security problems with Deep Learning. This is because data representation is a way to take advantage of human ingenuity and prior knowledge to extract and organize the discriminative information from the data. Many efforts in deploying machine learning algorithms in the security domain actually go into the design of preprocessing pipelines and data transformations that result in a representation of the data able to support effective machine learning. In order to expand the scope and ease of applicability of machine learning in the security domain, it would be highly desirable to find a proper way to represent the data in the security domain, which can entangle and hide more or less the different explanatory factors of variation behind the data. To let this survey adequately reflect the important role played by data representation, our review will focus on how the following three questions are answered by the existing works:

• Question 1: Is Phase II pervasively done in the literature? When Phase II is skipped in a work, are there any particular reasons?
• Question 2: Is Phase III employed in the literature? When Phase III is skipped in a work, are there any particular reasons?
• Question 3: When solving different security problems, is there any commonality in terms of the (types of) classifiers learned in Phase IV? Among the works solving the same security problem, is there dissimilarity in terms of classifiers learned in Phase IV?

To group the Phase III methods of the different applications of Deep Learning solving the same security problem, we introduce a classification tree, as shown in Fig. 2. The classification tree categorizes the Phase III methods in our selected survey works into four classes. First, class 1 includes the Phase III methods which do not consider representation learning. Second, class 2 includes the Phase III methods which consider representation learning but do not adopt it. Third, class 3 includes the Phase III methods which consider and adopt representation learning but do not compare the performance with other methods. Finally, class 4 includes the Phase III methods which consider and adopt representation learning and compare the performance with other methods.

Fig. 2 Classification tree for different Phase III methods. Here, consideration, adoption, and comparison indicate that a work considers Phase III, adopts Phase III, and makes comparisons with other methods, respectively

In the remainder of this paper, we take a closer look at how each of the eight security problems is being solved by applications of Deep Learning in the literature.

A closer look at applications of deep learning in solving security-oriented program analysis challenges
Introduction
In recent years, security-oriented program analysis has been widely used in software security. For example, symbolic execution and taint analysis are used to discover, detect, and analyze vulnerabilities in programs. Control flow analysis, data flow analysis, and pointer/alias analysis are important components when enforcing many security strategies, such as control flow integrity, data flow integrity, and dangling pointer elimination. Reverse engineering is used by defenders and attackers to understand the logic of a program without source code. In security-oriented program analysis, there are many open problems, such as precise pointer/alias analysis, accurate and complete reverse engineering, complex constraint solving, program de-obfuscation, and so on. Some problems have been theoretically proven to be NP-hard, and others still need a lot of human effort to solve. All of them need substantial domain knowledge and expert experience to develop better solutions. Essentially speaking, the main challenges when solving them through traditional approaches are due to the sophisticated rules between the features and labels, which may change in different contexts.
Therefore, on the one hand, it takes a large quantity of human effort to develop rules to solve these problems; on the other hand, even the most experienced expert cannot guarantee completeness. Fortunately, the deep learning method is adept at finding relations between features and labels given a large amount of training data. It can quickly and comprehensively find all the relations if the training samples are representative and effectively encoded.

In this section, we review the four very recent representative works that use Deep Learning for security-oriented program analysis. We observed that they focus on different goals. Shin et al. designed a model (Shin et al. 2015) to identify function boundaries. EKLAVYA (Chua et al. 2017) was developed to learn function types. Gemini (Xu et al. 2017) was proposed to detect similarity among functions. DEEPVSA (Guo et al. 2019) was designed to learn the memory region of an indirect addressing operation from the code sequence. Among these works, we select two representative works (Shin et al. 2015; Chua et al. 2017) and summarize their analysis results in Table 2 in detail.

Our review will be centered around the three questions described in “Methodology for reviewing the existing works” section. In the remainder of this section, we first provide a set of observations, then the indications, and finally some general remarks.

Key findings from a closer look
From a close look at the very recent applications using Deep Learning for solving security-oriented program analysis challenges, we observed the following:

Observation 3.1: All of the works in our survey used binary files as their raw data. Phase II in the surveyed works had one similar and straightforward goal – extracting code sequences from the binary. The difference among them was that the code sequence was extracted directly from the binary file when solving problems in static program analysis, while it was extracted from the program execution when solving problems in dynamic program analysis.

*Observation 3.2: Most data representation methods took the domain knowledge into account, i.e., what kind of information they wanted to preserve when processing the data. Note that the feature selection has a wide influence on Phase II and Phase III, for example, on embedding granularities and representation learning methods. Gemini (Xu et al. 2017) selected function-level features, while the other works in our survey selected instruction-level features. To be specific, all the works except Gemini (Xu et al. 2017) vectorized code sequences at the instruction level.

Observation 3.3: To better support data representation for high performance, some works adopted representation learning. For instance, DEEPVSA (Guo et al. 2019) employed a representation learning method, i.e., bi-directional LSTM, to learn data dependency within instructions. EKLAVYA (Chua et al. 2017) adopted a representation learning method, i.e., the word2vec technique, to extract inter-instruction information. It is worth noting that Gemini (Xu et al. 2017) adopts the Structure2vec embedding network in its siamese architecture in Phase IV (see details in Observation 3.7). The Structure2vec embedding network learns information from an attributed control flow graph.
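As a concrete illustration of the word2vec idea in Observation 3.3, the sketch below treats instructions as “words” and function bodies as “sentences”, assuming the gensim library is available. The instruction tokens are hypothetical placeholders; EKLAVYA’s actual pipeline tokenizes the hexadecimal value of each instruction (see Table 2).

```python
from gensim.models import Word2Vec

# Hypothetical sketch of an EKLAVYA-style Phase III step: learn embeddings
# of instruction tokens by treating each function body as a sentence.
functions = [
    ["push_rbp", "mov_rbp_rsp", "sub_rsp_0x10", "call_0x400560", "ret"],
    ["push_rbp", "mov_rbp_rsp", "mov_eax_0x0", "pop_rbp", "ret"],
]

model = Word2Vec(sentences=functions, vector_size=64, window=5,
                 min_count=1, sg=1)          # skip-gram variant
vec = model.wv["call_0x400560"]              # 64-dim instruction embedding
```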
Observation 3.4: According to our taxonomy, most works in our survey were classified into class 4. To compare the Phase III choices, we introduced the classification tree with three layers, shown in Fig. 2, to group different works into four categories. The decision tree grouped our surveyed works into four classes according to whether they considered representation learning or not, whether they adopted representation learning or not, and whether they compared their methods with others’, respectively, when designing their frameworks. According to our taxonomy, EKLAVYA (Chua et al. 2017) and DEEPVSA (Guo et al. 2019) were grouped into class 4 shown in Fig. 2. Also, Gemini’s work (Xu et al. 2017) and Shin et al.’s work (Shin et al. 2015) belonged to class 1 and class 2 shown in Fig. 2, respectively.

Observation 3.5: All the works in our survey explain why they adopted or did not adopt one of the representation learning algorithms. Two works in our survey adopted representation learning, for different reasons: to enhance the model’s ability to generalize (Chua et al. 2017), and to learn the dependency within instructions (Guo et al. 2019). It is worth noting that Shin et al. did not adopt representation learning because they wanted to preserve the “attractive” features of neural networks over other machine learning methods – simplicity. As they stated, “first, neural networks can learn directly from the original representation with minimal preprocessing (or ‘feature engineering’) needed.” and “second, neural networks can learn end-to-end, where each of its constituent stages are trained simultaneously in order to best solve the end goal.” Although Gemini (Xu et al. 2017) did not adopt representation learning when processing the raw data, the Deep Learning models in its siamese structure consisted of two graph embedding networks and one cosine function.

*Observation 3.6: The analysis results showed that a suitable representation learning method could improve the accuracy of Deep Learning models. DEEPVSA (Guo et al. 2019) designed a series of experiments to evaluate the effectiveness of its representation method. Combining with domain knowledge, EKLAVYA (Chua et al. 2017) employed t-SNE plots and analogical reasoning to explain the effectiveness of its representation learning method in an intuitive way.

Observation 3.7: Various Phase IV methods were used. In Phase IV, Gemini (Xu et al. 2017) adopted a siamese architecture model which consisted of two Structure2vec embedding networks and one cosine function. The siamese architecture took two functions as its input and produced a similarity score as the output. The other three works (Shin et al. 2015; Chua et al. 2017; Guo et al. 2019) adopted a bi-directional RNN, an RNN, and a bi-directional LSTM, respectively. Shin et al. adopted a bi-directional RNN because they wanted to combine both past and future information in making a prediction for the present instruction (Shin et al. 2015). DEEPVSA (Guo et al. 2019) adopted a bi-directional RNN to enable its model to infer memory regions in both forward and backward directions.

The above observations seem to indicate the following indications:

Indication 3.1: Phase III is not always necessary. Not all authors regard representation learning as a good choice, even though some experiments show that representation learning can improve the final results.
They value the simplicity of Deep Learning methods more, and suppose that the adoption of representation learning weakens this simplicity.

Indication 3.2: Even though the ultimate objective of Phase III in the four surveyed works is to train a model with better accuracy, they have different specific motivations, as described in Observation 3.5. When authors choose representation learning, they usually try to convince readers of the effectiveness of their choice by empirical or theoretical analysis.

*Indication 3.3: Observation 3.7 indicates that authors usually refer to domain knowledge when designing the architecture of a Deep Learning model. For instance, the works we reviewed commonly adopt a bi-directional RNN when their prediction is partly based on future information in the data sequence.

Discussion
Despite the effectiveness and agility of deep-learning-based methods, there are still some challenges in developing a scheme with high accuracy, due to the hierarchical data structure, the large amount of noise, and the unbalanced data composition in program analysis. For instance, an instruction sequence, a typical data sample in program analysis, contains a three-level hierarchy: sequence–instruction–opcode/operand. To make things worse, each level may contain many different structures, e.g., one-operand instructions and multi-operand instructions, which makes it harder to encode the training data.

A closer look at applications of deep learning in defending ROP attacks
Introduction
The return-oriented programming (ROP) attack is one of the most dangerous code reuse attacks; it allows attackers to launch control-flow hijacking attacks without injecting any malicious code. Rather, it leverages particular instruction sequences (called “gadgets”) widely existing in the program space to achieve Turing-complete attacks (Shacham et al. 2007). Gadgets are instruction sequences that end with a RET instruction. Therefore, they can be chained together by specifying the return addresses on the program stack.
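For illustration, the layout of such a chain on the stack can be sketched in a few lines of Python; all addresses and the padding distance below are hypothetical, not taken from any of the surveyed papers.

```python
import struct

# Conceptual illustration only: a ROP payload overwrites the saved return
# address with the first gadget's address; each gadget's ending RET then
# pops the next address off the stack. All values here are made up.
gadget_addresses = [0x40101a, 0x4011c3, 0x4013b7]   # hypothetical gadgets
padding = b"A" * 72          # assumed distance to the saved return address
payload = padding + b"".join(struct.pack("<Q", a) for a in gadget_addresses)
```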
Many traditional techniques could be used to detect ROP attacks, such as control-flow integrity (CFI) (Abadi et al. 2009), but many of them either have a low detection rate or high runtime overhead. ROP payloads do not contain any code; in other words, analyzing a ROP payload without the context of the program’s memory dump is meaningless. Thus, the most popular way of detecting and preventing ROP attacks is control-flow integrity. The challenge after acquiring the instruction sequences is that it is hard to recognize whether the control flow is normal. Traditional methods use the control flow graph (CFG) to identify whether the control flow is normal, but attackers can design instruction sequences which follow the normal control flow defined by the CFG. In essence, it is very hard to design a CFG that excludes every single possible combination of instructions that can be used to launch ROP attacks. Therefore, using data-driven methods could help eliminate such problems.

In this section, we review the three very recent representative works that use Deep Learning for defending ROP attacks: ROPNN (Li et al. 2018), HeNet (Chen et al. 2018), and DeepCheck (Zhang et al. 2019). ROPNN (Li et al. 2018) aims to detect ROP attacks, HeNet (Chen et al. 2018) aims to detect malware using CFI, and DeepCheck (Zhang et al. 2019) aims at detecting all kinds of code reuse attacks.

Specifically, ROPNN protects one single program at a time, and its training data are generated from real-world programs along with their execution. Firstly, after the memory dumps of the programs are created, it generates its benign and malicious data by “chaining up” the normally executed instruction sequences and “chaining up” gadgets with the help of a gadget generation tool, respectively. Each data sample is a byte-level instruction sequence labeled as “benign” or “malicious”. Secondly, ROPNN is trained using both malicious and benign data. Thirdly, the trained model is deployed to a target machine. After the protected program starts, the executed instruction sequences are traced and fed into the trained model; the protected program is terminated once the model finds that the instruction sequences are likely to be malicious.
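The emulator-based validation step of ROPNN’s data generation can be sketched as follows, assuming the Unicorn CPU emulator’s Python bindings. The memory layout constants are illustrative assumptions, and this is not ROPNN’s actual code; its generator additionally uses ROPGadget to harvest the candidate gadgets in the first place.

```python
import struct
from unicorn import Uc, UcError, UC_ARCH_X86, UC_MODE_64
from unicorn.x86_const import UC_X86_REG_RSP

CODE_BASE, STACK_BASE, SENTINEL = 0x100000, 0x200000, 0x300000  # assumed

def chain_is_executable(gadgets: list[bytes]) -> bool:
    """Lay the gadgets out in emulated memory, point the stack at the
    chain of their addresses, and keep the chain only if every gadget
    executes (each ending RET pops the next address) without faulting."""
    try:
        mu = Uc(UC_ARCH_X86, UC_MODE_64)
        for base in (CODE_BASE, STACK_BASE, SENTINEL):
            mu.mem_map(base, 0x10000)
        addrs, off = [], 0
        for g in gadgets:                       # concatenate gadget bodies
            mu.mem_write(CODE_BASE + off, g)
            addrs.append(CODE_BASE + off)
            off += len(g)
        ret_chain = addrs[1:] + [SENTINEL]      # what successive RETs pop
        mu.mem_write(STACK_BASE + 0x1000,
                     b"".join(struct.pack("<Q", a) for a in ret_chain))
        mu.reg_write(UC_X86_REG_RSP, STACK_BASE + 0x1000)
        mu.emu_start(addrs[0], SENTINEL)        # run until the sentinel
        return True
    except UcError:
        return False
```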
HeNet is also proposed to protect a single program. Its malicious data and benign data are generated by collecting trace data through Intel PT from malware and normal software, respectively. Besides, HeNet preprocesses its dataset and shapes each data sample in the format of an image, so that it can implement transfer learning from a model pre-trained on ImageNet. Then, HeNet is trained and deployed on machines with Intel PT support to collect and classify the program’s execution trace online.

The training data for DeepCheck are acquired from CFGs, which are constructed by disassembling the programs and using the information from Intel PT. After the CFG for a protected program is constructed, the authors sample benign instruction sequences by chaining up basic blocks that are connected by edges, and sample malicious instruction sequences by chaining up those that are not connected by edges. Although a CFG is needed during training, there is no need to construct a CFG after the training phase. After deployment, instruction sequences are constructed by leveraging Intel PT on the protected program. Then the trained model classifies whether the instruction sequences are malicious or benign.

We observed that none of the works considered Phase III, so all of them belong to class 1 according to our taxonomy shown in Fig. 2. The analysis results of ROPNN (Li et al. 2018) and HeNet (Chen et al. 2018) are shown in Table 2. Also, we observed that the three works had different goals. Our review will be centered around the three questions described in “Methodology for reviewing the existing works” section. In the remainder of this section, we first provide a set of observations, then the indications, and finally some general remarks.

Key findings from a closer look
From a close look at the very recent applications using Deep Learning for defending return-oriented programming attacks, we observed the following:

Observation 4.1: All the works (Li et al. 2018; Zhang et al. 2019; Chen et al. 2018) in this survey focused on data generation and acquisition. In ROPNN (Li et al. 2018), the malicious samples (gadget chains) were generated using an automated gadget generator (i.e., ROPGadget (Salwant 2015)) and a CPU emulator (i.e., Unicorn (Unicorn-The ultimate CPU emulator 2015)). ROPGadget was used to extract instruction sequences that could be used as gadgets from a program, and Unicorn was used to validate the instruction sequences. The corresponding benign samples (gadget-chain-like instruction sequences) were generated by disassembling a set of programs. DeepCheck (Zhang et al. 2019) refers to the key idea of control-flow integrity (Abadi et al. 2009). It generates the program’s run-time control flow through a new feature of Intel CPUs (Intel Processor Trace), then compares the run-time control flow with the program’s control-flow graph (CFG) generated through static analysis. Benign instruction sequences are those within the program’s CFG, and vice versa. In HeNet (Chen et al. 2018), the program’s execution trace was extracted in a similar way as in DeepCheck. Then, each byte was transformed into a pixel with an intensity between 0-255. Known malware samples and benign software samples were used to generate the malicious data and benign data, respectively.

Observation 4.2: None of the ROP works in this survey deployed Phase III. Both ROPNN (Li et al. 2018) and DeepCheck (Zhang et al. 2019) used binary instruction sequences for training. In ROPNN (Li et al. 2018), one byte was used as the very basic element for data pre-processing. Bytes were formed into one-hot matrices and flattened for a 1-dimensional convolutional layer. In DeepCheck (Zhang et al. 2019), the half-byte was used as the basic unit. Each half-byte (4 bits) was transformed to decimal form, ranging from 0-15, as the basic element of the input vector, and then fed into a fully-connected input layer. On the other hand, HeNet (Chen et al. 2018) used a different kind of data. By the time this survey was drafted, the source code of HeNet was not available to the public, and thus the details of its data pre-processing could not be investigated. However, it is still clear that HeNet used binary branch information collected from Intel PT rather than binary instructions. In HeNet, each byte was converted to one decimal number ranging from 0 to 255. Byte sequences were sliced and formed into image sequences (each pixel representing one byte) for a fully-connected input layer.

Observation 4.3: Fully-connected neural networks were widely used. Only ROPNN (Li et al. 2018) used a 1-dimensional convolutional neural network (CNN) for extracting features. Both HeNet (Chen et al. 2018) and DeepCheck (Zhang et al. 2019) used fully-connected neural networks (FCN). None of the works used a recurrent neural network (RNN) or its variants.

The above observations seem to indicate the following indications:

Indication 4.1: It seems that one of the most important factors in the ROP problem is feature selection and data generation. All three works use very different methods to collect/generate data, and all the authors provide very strong evidence and/or arguments to justify their approaches. ROPNN (Li et al. 2018) was trained on malicious and benign instruction sequences. However, there is no clear boundary between benign instruction sequences and malicious gadget chains. This weakness may impair the performance when applying ROPNN to real-world ROP attacks. As opposed to ROPNN, DeepCheck (Zhang et al. 2019) utilizes the CFG to generate training basic-block sequences. However, since the malicious basic-block sequences are generated by randomly connecting nodes without edges, it is not guaranteed that all the malicious basic blocks are executable. HeNet (Chen et al. 2018) generates its training data from malware. Technically, HeNet could be used to detect any binary exploit, but its experiment focuses on the ROP attack and achieves 100% accuracy. This shows that the source of data in the ROP problem does not need to be related to ROP attacks to produce very impressive results.
Indication 4.2: Representation learning seems not critical when solving ROP problems using Deep Learning. Minimal processing of the data in binary form seems to be enough to transform the data into a representation that is suitable for neural networks. Certainly, it is also possible to represent the binary instructions at a higher level, such as opcodes, or to use embedding learning. However, as stated in (Li et al. 2018), it appears that the performance would not change much by doing so. The only benefit of representing the input data at a higher level is to reduce irrelevant information, but it seems that the neural network by itself is good enough at extracting features.

Indication 4.3: The choice of neural network architecture does not have much influence on the effectiveness of defending ROP attacks. Both HeNet (Chen et al. 2018) and DeepCheck (Zhang et al. 2019) utilize standard DNNs and achieve comparable results on ROP problems. One can infer that the input data can be easily processed by neural networks, and the features can be easily detected after proper pre-processing.

It is not surprising that researchers are not very interested in representation learning for ROP problems, as stated in Observation 4.2. Since ROP attacks focus on gadget chains, it is straightforward for researchers to choose gadgets as their training data directly. It is easy to map the data into a numerical representation with minimal processing; for example, one can map a binary executable to its hexadecimal ASCII representation, which could be a good representation for a neural network. Instead, researchers focus more on data acquisition and generation. In ROP problems, the amount of data is very limited. Unlike malware and logs, ROP payloads normally contain only addresses rather than code, and an address carries no information without the instructions at the corresponding location. It is thus meaningless to collect all the payloads. To the best of our knowledge, all the previous works pick instruction sequences rather than payloads as their training data, even though instruction sequences are harder to collect.

Discussion
Even though a Deep Learning based method no longer faces the challenge of designing a very complex, fine-grained CFG, it suffers from the limited number of data sources. Generally, Deep Learning based methods require a lot of training data. However, real-world malicious data for ROP attacks are very hard to find, because, compared with benign data, malicious data need to be carefully crafted, and there is no existing database that collects all ROP attacks. Without a sufficiently representative training set, the accuracy of the trained model cannot be guaranteed.

A closer look at applications of deep learning in achieving CFI
Introduction
The basic ideas of control-flow integrity (CFI) techniques, proposed by Abadi in 2005 (Abadi et al. 2009), can be dated back to 2002, when Vladimir Kiriansky and his fellow researchers proposed an idea called program shepherding (Kiriansky et al. 2002), a method of monitoring the execution flow of a program when it is running by enforcing some security policies. The goal of CFI is to detect and prevent control-flow hijacking attacks by restricting every critical control flow transfer to a set that can only appear in correct program executions, according to a prebuilt CFG.
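Conceptually, the CFI policy itself (independent of any Deep Learning) reduces to a per-site table lookup against the prebuilt CFG, as in the hypothetical sketch below; all addresses are made up for illustration.

```python
# Conceptual sketch of a CFI check: each indirect transfer site is
# restricted to the set of targets the prebuilt CFG allows for it.
# All addresses below are hypothetical.
ALLOWED_TARGETS = {
    0x401210: {0x401500, 0x4017a0},   # indirect call site -> valid callees
    0x4023f4: {0x401560},             # indirect jump site -> valid target
}

def check_transfer(site: int, target: int) -> None:
    if target not in ALLOWED_TARGETS.get(site, set()):
        raise RuntimeError(
            f"CFI violation: {hex(site)} -> {hex(target)} not in CFG")
```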
Traditional CFI techniques typically leverage some knowledge, gained from either dynamic or static analysis of the target program, combined with some code instrumentation methods, to ensure the program runs on a correct track. However, the problems of traditional CFI are: (1) existing CFI implementations are not compatible with some important code features (Xu et al. 2019); (2) CFGs generated by static, dynamic, or combined analysis cannot always be precisely complete due to some open problems (Horwitz 1997); (3) there always exists a certain level of compromise among accuracy, performance overhead, and other important properties (Tan and Jaeger 2017; Wang and Liu 2019). Recent research has proposed applying Deep Learning to detecting control flow violations. The results show that, compared with traditional CFI implementations, the security coverage and scalability are enhanced in such a fashion (Yagemann et al. 2019). Therefore, we argue that Deep Learning could be another approach deserving more attention from CFI researchers who aim at achieving control-flow integrity more efficiently and accurately.

In this section, we review the three very recent representative papers that use Deep Learning for achieving CFI. Among the three, two representative papers (Yagemann et al. 2019; Phan et al. 2017) are already summarized phase-by-phase in Table 2. We refer interested readers to Table 2 for a concise overview of those two papers. Our review will be centered around the three questions described in “Methodology for reviewing the existing works” section. In the remainder of this section, we first provide a set of observations, then the indications, and finally some general remarks.

Key findings from a closer look
From a close look at the very recent applications using Deep Learning for achieving control-flow integrity, we observed the following:

Observation 5.1: None of the related works realize preventive1 prevention of control flow violation. After doing a thorough literature search, we observed that security researchers are quite behind the trend of applying Deep Learning techniques to solve security problems. Only one paper has been found by us using Deep Learning techniques to directly enhance the performance of CFI (Yagemann et al. 2019). That paper leveraged Deep Learning to detect document malware by checking programs’ execution traces generated by hardware. Specifically, the CFI violations were checked in an offline mode. So far, no works have realized Just-In-Time checking of a program’s control flow. In order to provide more insightful results, in this section we do not narrow our focus to CFI detecting attacks at run-time, but extend our scope to papers that make good use of control-flow-related data combined with Deep Learning techniques (Phan et al. 2017; Nguyen et al. 2018). In one work, researchers used a self-constructed instruction-level CFG to detect program defects (Phan et al. 2017). In another work, researchers used a lazy-binding CFG to detect sophisticated malware (Nguyen et al. 2018).

1 We refer readers to (Wang and Liu 2019), which systematizes the knowledge of protections by CFI schemes.

Observation 5.2: Diverse raw data were used for evaluating CFI solutions. In all surveyed papers, two kinds of control-flow-related data are used: program instruction sequences and CFGs. Barnum et al. (Yagemann et al. 2019) employed statically and dynamically generated instruction sequences acquired by program disassembling and Intel® Processor Trace.
CNNoverCFG (Phan et al. 2017) used a self-designed algorithm to construct an instruction-level control-flow graph. Nguyen et al. (Nguyen et al. 2018) used their proposed lazy-binding CFG to reflect the behavior of malware DEC.

Observation 5.3: All the papers in our survey adopted Phase II.

All the related papers in our survey employed Phase II to process their raw data before sending them into Phase III. In Barnum (Yagemann et al. 2019), the instruction sequences from program run-time tracing were sliced into basic blocks; each basic block was assigned a unique basic-block ID (BBID), and finally, due to the nature of control-flow hijacking attacks, the sequences ending with an indirect branch instruction (e.g., indirect call/jump or return) were selected as the training data. In CNNoverCFG (Phan et al. 2017), each instruction in the CFG was labeled with its attributes from multiple perspectives, such as its opcode, operands, and the function it belongs to; the training data are sequences generated by traversing the attributed control-flow graph. Nguyen and others (Nguyen et al. 2018) converted the lazy-binding CFG to the corresponding adjacency matrix and treated the matrix as an image for their training data.

Observation 5.4: None of the papers in our survey adopted Phase III.

We observed that none of the papers we surveyed adopted Phase III. Instead, they directly used a numerical representation as their training data. Specifically, Barnum (Yagemann et al. 2019) grouped the instructions into basic blocks and represented each basic block by a uniquely assigned ID. In CNNoverCFG (Phan et al. 2017), each instruction in the CFG was represented by a vector associated with its attributes. Nguyen and others directly used the hashed value of a bit-string representation.

Observation 5.5: Various Phase IV models were used.

Barnum (Yagemann et al. 2019) utilized BBID sequences to monitor the execution flow of the target program, which are sequence-type data; therefore, they chose an LSTM architecture to better learn the relationships between instructions. The other two papers (Phan et al. 2017; Nguyen et al. 2018) trained a CNN and a directed-graph-based CNN to extract information from the control-flow graph and the image, respectively.

The above observations seem to indicate the following indications:

Indication 5.1: None of the existing works achieves just-in-time CFI violation detection.

It is still a challenge to tightly embed a Deep Learning model in program execution. All existing works adopted lazy checking, i.e., checking the program's execution trace after its execution.

Indication 5.2: There is no unified opinion on how to generate malicious samples.

Data are hard to collect for control-flow hijacking attacks. Researchers must carefully craft malicious samples, and it is not clear whether such "handcrafted" samples can reflect the nature of control-flow hijacking attacks.

Indication 5.3: The choice of methods in Phase II is based on researchers' security domain knowledge.
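To illustrate the Barnum-style Phase II processing in Observation 5.3, the following sketch (our simplified illustration; the trace format and the set of indirect-branch markers are invented) assigns BBIDs to traced basic blocks and keeps only the sequences that end with an indirect branch:

```python
# Sketch of Barnum-style preprocessing (Observation 5.3): assign each basic
# block a unique ID (BBID) and keep traces ending in an indirect branch.
INDIRECT = {"call rax", "jmp rbx", "ret"}  # toy indirect-branch markers

def to_bbid_sequences(traces):
    bbid = {}       # basic block (tuple of instructions) -> unique ID
    selected = []
    for trace in traces:  # each trace: a list of basic blocks
        ids = []
        for block in trace:
            key = tuple(block)
            ids.append(bbid.setdefault(key, len(bbid)))
        # keep only sequences whose last block ends with an indirect branch
        if trace and trace[-1][-1] in INDIRECT:
            selected.append(ids)
    return selected, bbid

traces = [[["push rbp", "mov rbp, rsp"], ["pop rbp", "ret"]],
          [["xor eax, eax"], ["add eax, 1", "jmp 0x400000"]]]
seqs, vocab = to_bbid_sequences(traces)
print(seqs)  # [[0, 1]]: only the first trace ends with "ret"
```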
Discussion

The strength of using deep learning to solve CFI problems is that it can avoid the complicated process of developing algorithms to build acceptable CFGs for the protected programs. Compared with the traditional approaches, a DL-based method can free CFI designers from studying the language features of the target program and can also avoid an open problem (pointer analysis) in control-flow analysis. Therefore, DL-based CFI provides a more generalized, scalable, and secure solution. However, since the use of DL for CFI is still at an early stage, it remains unclear in this research area which kinds of control-flow-related data are more effective. Additionally, applying DL to real-time control-flow violation detection remains an untouched area and needs further research.

A closer look at applications of deep learning in defending network attacks

Introduction

Network security is becoming more and more important as we depend more and more on networks for our daily lives, work, and research. Common network attack types include probe, denial of service (DoS), Remote-to-Local (R2L), etc. Traditionally, people try to detect those attacks using signatures, rules, and unsupervised anomaly detection algorithms. However, signature-based methods can be easily fooled by slightly changing the attack payload; rule-based methods need experts to regularly update the rules; and unsupervised anomaly detection algorithms tend to raise many false positives. Recently, people have been trying to apply Deep Learning methods to network attack detection.

In this section, we will review the seven very recent representative works that use Deep Learning for defending against network attacks. Millar et al. (2018); Varenne et al. (2019); Ustebay et al. (2019) build neural networks for multi-class classification, whose class labels include one benign label and multiple malicious labels for different attack types. Zhang et al. (2019) ignores normal network activities and proposes a parallel cross convolutional neural network (PCCN) to classify the types of malicious network activities. Yuan et al. (2017) applies Deep Learning to detecting a specific attack type, the distributed denial-of-service (DDoS) attack. Yin et al. (2017); Faker and Dogdu (2019) explore both binary classification and multi-class classification of benign and malicious activities. Among these seven works, we select two representative works (Millar et al. 2018; Zhang et al. 2019) and summarize the main aspects of their approaches regarding whether each of the four phases exists in their works and what exactly is done in each phase if it exists. We direct interested readers to Table 2 for a concise overview of these two works. Our review will be centered around the three questions described in the "Methodology for reviewing the existing works" section. In the remainder of this section, we will first provide a set of observations, then provide the indications, and finally provide some general remarks.

Key findings from a closer look

From a close look at the very recent applications using Deep Learning for solving network attack challenges, we observed the following:

Observation 6.1: All seven works in our survey used public datasets, such as UNSW-NB15 (Moustafa and Slay 2015) and CICIDS2017 (IDS 2017 Datasets 2019).

The public datasets were all generated in test-bed environments, with unbalanced simulated benign and attack activities. For the attack activities, the dataset providers launched multiple types of attacks, and the amounts of malicious data for those attack types were also unbalanced.

Observation 6.2: The public datasets were given in one of two data formats, i.e., PCAP and CSV.
One was raw PCAP or parsed CSV format containing network-packet-level features; the other was also CSV format containing network-flow-level features, which present statistical information over many network packets. Out of all seven works, (Yuan et al. 2017; Varenne et al. 2019) used packet information as raw inputs, (Yin et al. 2017; Zhang et al. 2019; Ustebay et al. 2019; Faker and Dogdu 2019) used flow information as raw inputs, and (Millar et al. 2018) explored both cases.

Observation 6.3: In order to parse the raw inputs, preprocessing methods, including one-hot vectors for categorical text, normalization of numeric data, and removal of unused features/data samples, were commonly used (a minimal sketch of such preprocessing appears after Observation 6.7).

Commonly removed features include IP addresses and timestamps. Faker and Dogdu (2019) also removed port numbers from the used features; by doing this, they claimed that they could "avoid over-fitting and let the neural network learn characteristics of packets themselves". One outlier was that, when using packet-level features in one experiment, (Millar et al. 2018) blindly chose the first 50 bytes of each network packet, without any feature extraction process, and fed them into the neural network.

Observation 6.4: Using image representation improved the performance of security solutions using Deep Learning.

After preprocessing the raw data, (Zhang et al. 2019) transformed the data into an image representation, while (Yuan et al. 2017; Varenne et al. 2019; Faker and Dogdu 2019; Ustebay et al. 2019; Yin et al. 2017) directly used the original vectors as input data. (Millar et al. 2018) explored both cases and reported better performance using the image representation.

Observation 6.5: None of the seven surveyed works considered representation learning.

All seven surveyed works belong to class 1 shown in Fig. 2. They either directly fed the processed vectors into the neural networks or changed the representation without explanation. One research work (Millar et al. 2018) provided a comparison of two different representations (vectors and images) of the same type of raw input. However, the other works applied different preprocessing methods in Phase II; since the different preprocessing methods generated different feature spaces, it was difficult to compare the experimental results.

Observation 6.6: Binary classification models showed better results in most experiments.

Among all seven surveyed works, (Yuan et al. 2017) focused on one specific attack type and only performed binary classification to determine whether the network traffic was benign or malicious. (Millar et al. 2018; Ustebay et al. 2019; Zhang et al. 2019; Varenne et al. 2019) included more attack types and performed multi-class classification of the types of malicious activities, and (Yin et al. 2017; Faker and Dogdu 2019) explored both cases. For multi-class classification, the accuracy for selective classes was good, while the accuracy for other classes, usually those with far fewer data samples, suffered degradations of up to 20%.

Observation 6.7: Data representation influenced the choice of neural network model.
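To ground Observation 6.3, the following minimal sketch (ours; the column names are invented and do not correspond to any particular dataset) shows the three commonly used preprocessing steps on flow-style CSV records:

```python
# Sketch of the common Phase II preprocessing in Observation 6.3: drop
# identifying fields, one-hot encode categorical text, normalize numerics.
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

df = pd.DataFrame({
    "src_ip": ["10.0.0.1", "10.0.0.2"],     # commonly removed
    "timestamp": [1588000000, 1588000010],  # commonly removed
    "proto": ["tcp", "udp"],                # categorical -> one-hot
    "duration": [1.2, 30.0],                # numeric -> normalized
    "bytes": [300, 90000],
})
df = df.drop(columns=["src_ip", "timestamp"])  # remove unused features
df = pd.get_dummies(df, columns=["proto"])     # one-hot encoding
num_cols = ["duration", "bytes"]
df[num_cols] = MinMaxScaler().fit_transform(df[num_cols])  # scale to [0, 1]
print(df.head())
```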
The above observations seem to indicate the following indications:

Indication 6.1: All works in our survey adopt some preprocessing method in Phase II, because the raw data provided in the public datasets are either not ready for neural networks or of too low quality to be directly used as data samples.

Preprocessing methods can help increase neural network performance by improving the quality of the data samples. Furthermore, by reducing the feature space, preprocessing can also improve the efficiency of neural network training and testing. Thus, Phase II should not be skipped; if it is, the performance of the neural network is expected to go down considerably.

Indication 6.2: Although Phase III is not employed in any of the seven surveyed works, none of them explains the reason for this. Also, none of them takes representation learning into consideration.

Indication 6.3: Because no work uses representation learning, its effectiveness is not well studied. Among the other factors, the choice of preprocessing methods seems to have the largest impact, because it directly affects the data samples fed to the neural network.

Indication 6.4: There is no guarantee that CNNs also work well on images converted from network features.

Some works that use an image data representation use a CNN in Phase IV. Although CNNs have been proven to work well on image classification problems in recent years, there is no guarantee that they also work well on images converted from network features.

From the observations and indications above, we hereby present two recommendations: (1) Researchers can try to generate their own datasets for the specific network attack they want to detect. As stated, the public datasets have highly unbalanced numbers of data samples for the different classes. Doubtlessly, such imbalance is the nature of real-world network environments, in which normal activities are the majority, but it is not good for Deep Learning. (Varenne et al. 2019) tries to solve this problem by oversampling the malicious data, but it is better to start with a balanced dataset (a balancing sketch is given at the end of this subsection). (2) Representation learning should be taken into consideration. Some possible ways to apply representation learning include: (a) applying the word2vec method to packet binaries and to categorical numbers and text; (b) using K-means as a one-hot vector representation instead of randomly encoding text. We suggest that any change of data representation be justified by explanations or comparison experiments.

Discussion

One critical challenge in this field is the lack of high-quality datasets suitable for applying deep learning. Also, there is no agreement on how to apply domain knowledge to training deep learning models for network security problems. Researchers have been using different preprocessing methods, data representations, and model types, but few of them sufficiently explain why such methods/representations/models are chosen, especially for data representation.
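To ground recommendation (1), the following sketch (ours, with synthetic data) shows naive random oversampling of a minority attack class; (Varenne et al. 2019) used oversampling in this spirit, though not necessarily this exact procedure:

```python
# Sketch of recommendation (1): random oversampling of minority attack
# classes so each class has as many samples as the largest class.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))         # toy feature matrix
y = np.array([0] * 950 + [1] * 50)      # 95% benign, 5% attack

target = np.bincount(y).max()           # size of the largest class
Xb, yb = [X], [y]
for cls in np.unique(y):
    idx = np.where(y == cls)[0]
    extra = target - len(idx)
    if extra > 0:                       # resample the minority class
        pick = rng.choice(idx, size=extra, replace=True)
        Xb.append(X[pick]); yb.append(y[pick])
X_bal, y_bal = np.vstack(Xb), np.concatenate(yb)
print(np.bincount(y_bal))               # [950 950]
```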
A closer look at applications of deep learning in malware classification

Introduction

The goal of malware classification is to identify malicious behaviors in software using static and dynamic features such as control-flow graphs and system API calls. Malware and benign programs can be collected from open datasets and online websites. Both industry and the academic community have provided approaches to detect malware with static and dynamic analyses. Traditional methods, such as behavior-based signatures, dynamic taint tracking, and static data-flow analysis, require experts to manually investigate unknown files. However, such hand-crafted signatures are not sufficiently effective because attackers can rewrite and reorder the malware. Fortunately, neural networks can automatically detect large-scale malware variants with superior classification accuracy.

In this section, we will review the twelve very recent representative works that use Deep Learning for malware classification (De La Rosa et al. 2018; Saxe and Berlin 2015; Kolosnjaji et al. 2017; McLaughlin et al. 2017; Tobiyama et al. 2016; Dahl et al. 2013; Nix and Zhang 2017; Kalash et al. 2018; Cui et al. 2018; David and Netanyahu 2015; Rosenberg et al. 2018; Xu et al. 2018). De La Rosa et al. (2018) select three different kinds of static features to classify malware. Saxe and Berlin (2015); Kolosnjaji et al. (2017); McLaughlin et al. (2017) also use static features from PE files to classify programs. Tobiyama et al. (2016) extract behavioral feature images generated with an RNN to represent the behaviors of the original programs. Dahl et al. (2013) transform malicious behaviors using representation learning without a neural network. Nix and Zhang (2017) explore an RNN model with API call sequences as the programs' features. Cui et al. (2018); Kalash et al. (2018) skip Phase II by directly transforming the binary file into an image to classify the file. David and Netanyahu (2015); Rosenberg et al. (2018) apply dynamic features to analyze malicious behaviors. Xu et al. (2018) combine static and dynamic features to represent programs' features. Among these works, we select two representative works (De La Rosa et al. 2018; Rosenberg et al. 2018) and identify the four phases in their works, as shown in Table 2. Our review will be centered around the three questions described in the "Methodology for reviewing the existing works" section. In the remainder of this section, we will first provide a set of observations, then provide the indications, and finally provide some general remarks.

Key findings from a closer look

From a close look at the very recent applications using Deep Learning for solving malware classification challenges, we observed the following:

Observation 7.1: The features selected in malware classification can be grouped into three categories: static features, dynamic features, and hybrid features.

Typical static features include metadata, PE import features, byte/entropy features, string features, and assembly opcode features derived from PE files (Kolosnjaji et al. 2017; McLaughlin et al. 2017; Saxe and Berlin 2015). De La Rosa et al. (2018) took three kinds of static features: byte-level, basic-level (strings in the file, the metadata table, and the import table of the PE header), and assembly-level features. Some works directly considered the binary code as static features (Cui et al. 2018; Kalash et al. 2018). Different from static features, dynamic features are extracted by executing the files to retrieve their behaviors during execution. The behaviors of programs, including API function calls, their parameters, files created or deleted, and websites and ports accessed, etc., were recorded by a sandbox as dynamic features (David and Netanyahu 2015). Process behaviors, including operation names and their result codes, were extracted in (Tobiyama et al. 2016). The process memory, tri-grams of system API calls, and one corresponding input parameter were chosen as dynamic features in (Dahl et al. 2013). The API call sequence of an APK file is another representation of dynamic features (Nix and Zhang 2017; Rosenberg et al. 2018). Static and dynamic features were combined as hybrid features in (Xu et al. 2018).
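As a concrete illustration of the byte/entropy static features mentioned above, the following sketch (ours, simplified; the window and step sizes are arbitrary choices) computes Shannon entropy over sliding windows of a binary, a building block of byte-entropy histogram features:

```python
# Sketch of a sliding-window byte-entropy feature: compute Shannon entropy
# over fixed-size windows of a binary (Observation 7.1 "byte/entropy").
import numpy as np

def window_entropy(data: bytes, window: int = 1024, step: int = 256):
    buf = np.frombuffer(data, dtype=np.uint8)
    ents = []
    for start in range(0, max(len(buf) - window + 1, 1), step):
        counts = np.bincount(buf[start:start + window], minlength=256)
        p = counts[counts > 0] / counts.sum()
        ents.append(float(-(p * np.log2(p)).sum()))  # entropy in bits
    return ents

blob = bytes(range(256)) * 8 + b"\x00" * 2048        # mixed-entropy toy data
print([round(e, 2) for e in window_entropy(blob)])   # high, then low entropy
```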
Returning to the hybrid features of (Xu et al. 2018): for the static part, Xu and others used permissions, networks, calls, providers, etc.; for the dynamic part, they used system call sequences.

Observation 7.2: In most works, Phase II was inevitable because the extracted features needed to be vectorized for the Deep Learning models.

One-hot encoding was frequently used to vectorize features (Kolosnjaji et al. 2017; McLaughlin et al. 2017; Rosenberg et al. 2018; Tobiyama et al. 2016; Nix and Zhang 2017). Bag-of-words (BoW) and n-grams were also considered to represent features (Nix and Zhang 2017). Some works brought the concept of word frequency from NLP to convert the sandbox file into fixed-size inputs (David and Netanyahu 2015). Hashing features into a fixed-size vector was used as an effective method to represent features (Saxe and Berlin 2015) (see the sketch below). Byte histograms using byte analysis and byte-entropy histograms with a sliding-window method were considered in (De La Rosa et al. 2018). In (De La Rosa et al. 2018), De La Rosa and others also embedded strings by hashing the ASCII strings to a fixed-size feature vector. For assembly features, they extracted four different levels of granularity: operation level (instruction-flow graph), block level (control-flow graph), function level (call graph), and global level (summarized graphs). Bigram, trigram, and four-gram vectors and n-gram graphs were used for the hybrid features (Xu et al. 2018).

Observation 7.3: Most Phase III methods can be classified into class 1.

Following the classification tree shown in Fig. 2, most works are classified into class 1, except two works (Dahl et al. 2013; Tobiyama et al. 2016), which belong to class 3. To reduce the input dimension, Dahl et al. (2013) performed feature selection using mutual information and random projection. Tobiyama et al. generated behavioral feature images using an RNN (Tobiyama et al. 2016).

Observation 7.4: After extracting features, two kinds of neural network architectures were used: a single neural network and multiple neural networks with a combined loss function.

Hierarchical structures, such as convolutional layers, fully connected layers, and classification layers, were used to classify programs (McLaughlin et al. 2017; Dahl et al. 2013; Nix and Zhang 2017; Saxe and Berlin 2015; Tobiyama et al. 2016; Cui et al. 2018; Kalash et al. 2018). A deep stack of denoising autoencoders was also introduced to learn programs' behaviors (David and Netanyahu 2015). De La Rosa and others (De La Rosa et al. 2018) trained three different models with different features to compare which static features are relevant for the classification model. Some works investigated LSTM models for sequential features (Nix and Zhang 2017; Rosenberg et al. 2018). Two networks with different features as inputs were used for malware classification by combining their outputs with a dropout layer and an output layer (Kolosnjaji et al. 2017); in this work, one network transformed PE metadata and import features using feedforward neurons, while the other leveraged convolutional network layers over opcode sequences. Lifan Xu et al. (Xu et al. 2018) constructed several networks and combined them using a two-level multiple kernel learning algorithm.
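As promised above, here is a minimal sketch of the feature-hashing idea from Observation 7.2 (ours; the hash function, vector size, and token set are illustrative, not those of (Saxe and Berlin 2015)), which maps a variable-length list of tokens into a fixed-size vector:

```python
# Sketch of the "hashing trick" for malware features: map a variable-length
# list of tokens (e.g., imported API names) into a fixed-size vector.
import hashlib
import numpy as np

def hash_features(tokens, n_features=256):
    vec = np.zeros(n_features, dtype=np.float32)
    for tok in tokens:
        digest = hashlib.md5(tok.encode("utf-8")).digest()
        idx = int.from_bytes(digest[:4], "little") % n_features
        vec[idx] += 1.0  # hash collisions are tolerated by design
    return vec

imports = ["CreateRemoteThread", "WriteProcessMemory", "LoadLibraryA"]
v = hash_features(imports)
print(v.shape, int(v.sum()))  # (256,) 3
```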
The above observations seem to indicate the following indications:

Indication 7.1: Except for two works that transform binaries into images (Cui et al. 2018; Kalash et al. 2018), most surveyed works need methods to vectorize the extracted features.

The vectorization methods should not only keep the syntactic and semantic information in the features but also suit the definition of the Deep Learning model.

Indication 7.2: Only a limited number of works have shown how to transform features using representation learning.

Because some works assume that dynamic and static sequences, like API calls and instructions, have syntactic and semantic structure similar to natural language, representation learning techniques like word2vec may be useful in malware detection. In addition, for control-flow graphs, call graphs, and other graph representations, graph embedding is a potential method to transform those features.

Discussion

Though several pieces of research have been done on malware detection using Deep Learning, it is hard to compare their methods and performance because of two uncertainties in their approaches. First, a Deep Learning model is a black box; researchers cannot detail which kinds of features the model learned or explain why their model works. Second, feature selection and representation affect a model's performance. Because they do not use the same datasets, researchers cannot prove that their approaches, including the selected features and the Deep Learning model, are better than others. The reason why few researchers use open datasets is that the existing open malware datasets are outdated and limited; also, researchers need to crawl benign programs from app stores so that their raw programs are diverse.

A closer look at applications of Deep Learning in system-event-based anomaly detection

Introduction

System logs record significant events at various critical points and can be used to debug a system's performance issues and failures. Moreover, log data are available in almost all computer systems and are a valuable resource for understanding system status. There are a few challenges in anomaly detection based on system logs. First, the raw log data are unstructured, and their formats and semantics can vary significantly. Second, logs are produced by concurrently running tasks; such concurrency makes it hard to apply workflow-based anomaly detection methods. Third, logs contain rich information of diverse types, including text, real values, IP addresses, timestamps, and so on, and the information contained in each log also varies. Finally, there are massive numbers of logs in every system, and each anomalous event usually involves a large number of logs generated over a long period.

Recently, a large number of scholars have employed deep learning techniques (Du et al. 2017; Meng et al. 2019; Das et al. 2018; Brown et al. 2018; Zhang et al. 2019; Bertero et al. 2017) to detect anomalous events in system logs and diagnose system failures. Since the raw log data are unstructured and their formats and semantics vary significantly, the raw logs usually must be parsed into structured data; the parsed data can then be transformed into a representation that supports an effective deep learning model, and finally, the anomalous event can be detected by a deep-learning-based classifier or predictor.

In this section, we will review the six very recent representative papers that use deep learning for system-event-based anomaly detection (Du et al. 2017; Meng et al. 2019; Das et al. 2018; Brown et al. 2018; Zhang et al. 2019; Bertero et al. 2017).
DeepLog (Du et al. 2017) utilizes an LSTM to model a system log as a natural-language sequence, automatically learning log patterns from normal events and detecting anomalies when log patterns deviate from the trained model. LogAnom (Meng et al. 2019) employs Word2vec to extract semantic and syntactic information from log templates; moreover, it uses sequential and quantitative features simultaneously. Das et al. (2018) use an LSTM to predict node failures that occur in supercomputing systems from HPC logs. Brown et al. (2018) presented RNN language models augmented with attention for anomaly detection in system logs. LogRobust (Zhang et al. 2019) uses FastText to represent the semantic information of log events, which can identify and handle unstable log events and sequences. Bertero et al. (2017) map log words to a high-dimensional metric space using Google's word2vec algorithm and take them as features for classification. Among these six papers, we select two representative works (Du et al. 2017; Meng et al. 2019) and summarize the four phases of their approaches. We direct interested readers to Table 2 for a concise overview of these two works. Our review will be centered around the three questions described in the "Methodology for reviewing the existing works" section. In the remainder of this section, we will first provide a set of observations, then provide the indications, and finally provide some general remarks.

Key findings from a closer look

From a close look at the very recent applications using deep learning for solving system-event-based anomaly detection challenges, we observed the following:

Observation 8.1: Most of the surveyed papers evaluated their performance using public datasets.

At the time of our survey, only two works (Das et al. 2018; Bertero et al. 2017) used private datasets.

Observation 8.2: Most works in this survey adopted Phase II to parse the raw log data.

After reviewing the six works proposed recently, we found that five works (Du et al. 2017; Meng et al. 2019; Das et al. 2018; Brown et al. 2018; Zhang et al. 2019) employed a parsing technique, while only one work (Bertero et al. 2017) did not. DeepLog (Du et al. 2017) parsed the raw logs into different log types using Spell (Du and Li 2016), which is based on the longest common subsequence. Desh (Das et al. 2018) parsed the raw logs into constant messages and variable components. LogAnom (Meng et al. 2019) parsed the raw logs into different log templates using FT-Tree (Zhang et al. 2017), according to the frequent combinations of log words. Brown et al. (Brown et al. 2018) parsed the raw logs into word and character tokenizations. LogRobust (Zhang et al. 2019) extracted its log events by abstracting away the parameters in the messages. Bertero et al. (2017) considered logs as regular text without parsing.

Observation 8.3: Most works considered and adopted Phase III.

Among these six works, only DeepLog represented the parsed data using one-hot vectors without learning; moreover, LogAnom (Meng et al. 2019) compared its results with DeepLog. That is, DeepLog belongs to class 1 and LogAnom belongs to class 4 in Fig. 2, while the other four works fall into class 3. The four works (Meng et al. 2019; Das et al. 2018; Zhang et al. 2019; Bertero et al. 2017) used word embedding techniques to represent the log data, and Brown et al. (Brown et al. 2018) employed attention vectors to represent the log messages.
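To make the contrast between one-hot log keys and learned representations concrete, the following PyTorch sketch (ours; all sizes are illustrative, and this is not the DeepLog implementation) wires a DeepLog-style next-log-key predictor with a learnable embedding in place of one-hot inputs:

```python
# Sketch of a next-log-key predictor: a learnable embedding replaces one-hot
# log-key vectors before the LSTM (illustrative sizes).
import torch
import torch.nn as nn

class NextLogKey(nn.Module):
    def __init__(self, n_keys=64, emb_dim=32, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(n_keys, emb_dim)   # learned representation
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_keys)      # distribution over keys

    def forward(self, keys):                       # keys: (batch, seq_len)
        out, _ = self.lstm(self.emb(keys))
        return self.head(out[:, -1, :])            # predict the next key

model = NextLogKey()
window = torch.randint(0, 64, (8, 10))             # 8 windows of 10 log keys
logits = model(window)                             # (8, 64)
# at detection time: flag a window if the true next key is not in the top-k
print(logits.topk(5, dim=-1).indices.shape)        # (8, 5)
```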
We conducted an experiment replacing the one-hot vector with trained word embeddings (see Table 3 below).

Observation 8.4: Evaluation results were not compared using the same dataset.

DeepLog (Du et al. 2017) employed one-hot vectors to represent the log types without learning, i.e., it employed Phase II without Phase III. In contrast, Bertero et al. (2017) considered logs as regular text without parsing and used Phase III without Phase II. The precision of both methods is very high (greater than 95%); unfortunately, the evaluations of the two methods used different datasets.

Observation 8.5: Most works employed an LSTM in Phase IV.

Five works (Du et al. 2017; Meng et al. 2019; Das et al. 2018; Brown et al. 2018; Zhang et al. 2019) employed an LSTM in Phase IV, while Bertero et al. (2017) tried different classifiers, including naive Bayes, neural networks, and random forests.

The above observations seem to indicate the following indications:

Indication 8.1: Phase II has a positive effect on accuracy if it is well designed.

Since Bertero et al. (2017) consider logs as regular text without parsing, we can say that Phase II is not strictly required. However, most of the scholars employed parsing techniques to extract structural information and remove useless noise.

Indication 8.2: Most of the recent works use trained representations to represent the parsed data.

As shown in Table 3, Phase III is very useful and can improve detection accuracy.

Indication 8.3: Phase II and Phase III cannot both be skipped.

Neither Phase II nor Phase III is individually required; however, all methods employed Phase II or Phase III.

Indication 8.4: Observation 8.3 indicates that a trained word-embedding format can improve anomaly detection accuracy, as shown in Table 3.

Indication 8.5: Observation 8.5 indicates that most of the works adopt an LSTM to detect anomalous events.

Most of the works adopt an LSTM to detect anomalous events because log data can be considered as sequences and there can be lags of unknown duration between important events in a time series. An LSTM has feedback connections and can process not only single data points but also entire sequences of data.

In our view, neither Phase II nor Phase III is individually required in system-event-based anomaly detection. However, Phase II can remove noise in the raw data, and Phase III can learn a proper representation of the data; both have a positive effect on anomaly detection accuracy. Since event logs are text data that cannot be fed into a deep learning model directly, Phase II and Phase III cannot both be skipped.

Table 3 Comparison between word embedding and one-hot representation

Method            FP¹    FN²    Precision   Recall    F1-measure
Word Embedding³   680    219    96.069%     98.699%   97.366%
One-hot Vector⁴   711    705    95.779%     95.813%   95.796%
DeepLog⁵          833    619    95%         96%       96%

¹ FP: false positive. ² FN: false negative. ³ Word Embedding: log keys are embedded by continuous bag-of-words. ⁴ One-hot Vector: we reproduced the results according to DeepLog. ⁵ DeepLog: original results presented in the paper (Du et al. 2017).

Discussion

Deep learning can capture the potentially nonlinear and high-dimensional dependencies among the log entries in the training data that correspond to abnormal events; in that way, it can ease the challenges mentioned above. However, it still suffers from several challenges.
For example, it remains unclear how to represent the unstructured data accurately and automatically without human knowledge.

A closer look at applications of deep learning in solving memory forensics challenges

Introduction

In the field of computer security, memory forensics is security-oriented forensic analysis of a computer's memory dump. Memory forensics can be conducted against OS kernels, user-level applications, and mobile devices. Memory forensics outperforms traditional disk-based forensics because, although secrecy attacks can erase their footprints on disk, they have to appear in memory (Song et al. 2018). A memory dump can be considered as a sequence of bytes; thus, memory forensics usually needs to extract security-related semantic information from the raw memory dump to find attack traces.

Traditional memory forensic tools fall into two categories: signature scanning and data structure traversal. These traditional methods usually have some limitations. First, they need expert knowledge of the related data structures to create signatures or traversal rules. Second, attackers may directly manipulate data and pointer values in kernel objects to evade detection, and it then becomes even more challenging to create signatures and traversal rules that cannot be easily violated by malicious manipulation, system updates, or random noise. Finally, the high-efficiency requirement often sacrifices robustness; for example, an efficient signature-scanning tool usually skips large memory regions that are unlikely to contain the relevant objects and relies on simple but easily tamperable string constants, so an important clue may hide in an ignored region.

In this section, we will review the four very recent representative works that use Deep Learning for memory forensics (Song et al. 2018; Petrik et al. 2018; Michalas and Murray 2017; Dai et al. 2018). DeepMem (Song et al. 2018) recognizes kernel objects from raw memory dumps by generating abstract representations of kernel objects with a graph-based Deep Learning approach. MDMF (Petrik et al. 2018) detects OS- and architecture-independent malware from memory snapshots with several preprocessing techniques, domain-unaware feature selection, and a suite of machine learning algorithms. MemTri (Michalas and Murray 2017) predicts the likelihood of criminal activity in a memory image using a Bayesian network, based on evidence data artefacts generated by several applications. Dai et al. (2018) monitor the malware process memory and classify malware according to memory dumps, by transforming the memory dumps into grayscale images and adopting a multi-layer perceptron as the classifier. Among these four works (Song et al. 2018; Petrik et al. 2018; Michalas and Murray 2017; Dai et al. 2018), two representative works (i.e., (Song et al. 2018; Petrik et al. 2018)) are already summarized phase-by-phase in Table 2. We direct interested readers to Table 2 for a concise overview of these two works. Our review will be centered around the three questions raised in Section 3. In the remainder of this section, we will first provide a set of observations, then provide the indications, and finally provide some general remarks.

Key findings from a closer look

From a close look at the very recent applications using Deep Learning for solving memory forensics challenges, we observed the following:

Observation 9.1: Most methods used their own datasets for performance evaluation, and none of them used a public dataset.
DeepMem was evaluated on a dataset generated by the authors, who collected a large number of diverse memory dumps and labeled the kernel objects in them using existing memory forensics tools like Volatility. MDMF employed the MalRec dataset from Georgia Tech to generate malicious snapshots, while it created its own dataset of benign memory snapshots running normal software. MemTri ran several Windows 7 virtual machine instances with self-designed suspect activity scenarios to gather memory images. Dai et al. used the Procdump program in a Cuckoo sandbox to extract malware memory dumps. In short, each of the four works in our survey generated its own dataset, and none was evaluated on a public dataset.

Observation 9.2: Among the four works (Song et al. 2018; Michalas and Murray 2017; Petrik et al. 2018; Dai et al. 2018), two works (Song et al. 2018; Michalas and Murray 2017) employed Phase II, while the other two works (Petrik et al. 2018; Dai et al. 2018) did not.

DeepMem (Song et al. 2018) devised a graph representation for a sequence of bytes, taking into account both adjacency and points-to relations, to better model the contextual information in memory dumps. MemTri (Michalas and Murray 2017) first identified the running processes within the memory image that match the target applications and then employed regular expressions to locate evidence artefacts in the memory image. MDMF (Petrik et al. 2018) and Dai et al. (2018) transformed the memory dump directly into an image (sketched below).

Observation 9.3: Among the four works (Song et al. 2018; Michalas and Murray 2017; Petrik et al. 2018; Dai et al. 2018), only DeepMem (Song et al. 2018) employed Phase III, for which it used an embedding method to represent a memory graph.

MDMF (Petrik et al. 2018) directly fed the generated memory images into the training of a CNN model. Dai et al. (2018) used the HOG feature descriptor for detecting objects, while MemTri (Michalas and Murray 2017) extracted evidence artefacts as the input of a Bayesian network. In summary, DeepMem belongs to class 3 shown in Fig. 2, while the other three works belong to class 1.

Observation 9.4: The four works (Song et al. 2018; Petrik et al. 2018; Michalas and Murray 2017; Dai et al. 2018) employed different classifiers, even when the types of input data were the same.

DeepMem chose a fully connected network (FCN) model that has multi-layered hidden neurons with ReLU activation functions, followed by a softmax layer as the last layer. MDMF (Petrik et al. 2018) evaluated its performance both on traditional machine learning algorithms and on Deep Learning approaches, including CNN and LSTM; the results showed that the accuracies of the different classifiers did not differ significantly. MemTri employed a Bayesian network model designed with three layers, i.e., a hypothesis layer, a sub-hypothesis layer, and an evidence layer. Dai et al. used a multi-layer perceptron model, including an input layer, a hidden layer, and an output layer, as the classifier.
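To ground the Phase-II-free conversion in Observation 9.2, the following sketch (ours; the image width and the dump contents are illustrative) reinterprets a raw memory dump as a 2D grayscale image of the kind fed to a CNN by MDMF and to a classifier by Dai et al.:

```python
# Sketch of the memory-dump-to-image conversion: reinterpret raw bytes as a
# 2D grayscale image (one byte per pixel), padding to a fixed width.
import numpy as np

def dump_to_image(dump: bytes, width: int = 256) -> np.ndarray:
    buf = np.frombuffer(dump, dtype=np.uint8)
    pad = (-len(buf)) % width                  # pad to a multiple of width
    buf = np.concatenate([buf, np.zeros(pad, dtype=np.uint8)])
    return buf.reshape(-1, width)              # each byte -> one gray pixel

dump = bytes(np.random.default_rng(1).integers(0, 256, 70000, dtype=np.uint8))
img = dump_to_image(dump)
print(img.shape)  # (274, 256); ready for a CNN after resizing/cropping
```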
The above observations seem to indicate the following indications:

Indication 9.1: There is a lack of public datasets for evaluating the performance of different Deep Learning methods in memory forensics.

From Observation 9.1, we find that none of the four works surveyed was evaluated on public datasets.

Indication 9.2: From Observation 9.2, we find that it is disputable whether one should employ Phase II when solving memory forensics problems.

Since both (Petrik et al. 2018) and (Dai et al. 2018) directly transformed a memory dump into an image, Phase II is not required in these two works. However, since there is a large amount of useless information in a memory dump, we argue that appropriate preprocessing could improve the accuracy of the trained models.

Indication 9.3: From Observation 9.3, we find that not much attention is paid to Phase III in memory forensics.

Most works did not employ Phase III. Among the four works, only DeepMem (Song et al. 2018) employed Phase III, during which it used embeddings to represent a memory graph. The other three works (Petrik et al. 2018; Michalas and Murray 2017; Dai et al. 2018) did not learn any representations before training a Deep Learning model.

Indication 9.4: For Phase IV in memory forensics, different classifiers can be employed; which kind of classifier to use seems to be determined by the features used and their data structures.

From Observation 9.4, we find that the four works actually employed different kinds of classifiers even when the types of input data were the same. It is very interesting that MDMF obtained similar results with different classifiers, including traditional machine learning and Deep Learning models; however, the other three works did not discuss why they chose a particular kind of classifier. Since a memory dump can be considered as a sequence of bytes, the data structure of a training example is straightforward. If the memory dump is transformed into a simple form in Phase II, it can be directly fed into the training process of a Deep Learning model, and as a result Phase III can be skipped. However, if the memory dump is transformed into a complicated form in Phase II, Phase III could be quite useful in memory forensics. Regarding the answer to Question 3 in the "Methodology for reviewing the existing works" section, it is very interesting that different classifiers can be employed during Phase IV in memory forensics. Moreover, MDMF (Petrik et al. 2018) has shown that similar results can be obtained with different kinds of classifiers; nevertheless, the authors also admit that with a larger amount of training data, the performance could be improved by Deep Learning.

Discussion

An end-to-end deep learning model can learn a precise representation of a memory dump automatically, relaxing the requirement for expert knowledge. However, expert knowledge is still needed to represent the data and the attacker's behavior, and attackers may also directly manipulate data and pointer values in kernel objects to evade detection.

A closer look at applications of deep learning in security-oriented fuzzing

Introduction

Fuzzing is one of the state-of-the-art techniques used to detect software vulnerabilities. The goal of fuzzing is to find all the vulnerabilities that exist in a program by testing as much program code as possible. Due to its nature, this technique works best at finding vulnerabilities in programs that take input files, such as PDF viewers (Godefroid et al. 2017) or web browsers. A typical fuzzing workflow can be summarized as follows: given several seed input files, the fuzzer mutates (or "fuzzes") the seed inputs to get more input files, with the aim of expanding the overall code coverage of the target program as it executes the mutated files.
Although there have already been various popular fuzzers (Li et al. 2018), fuzzing still cannot avoid redundantly testing input files that do not improve the code coverage rate (Shi and Pei 2019; Rajpal et al. 2017), and some input files mutated by the fuzzer cannot even pass the well-formed file structure test (Godefroid et al. 2017). Recent research has come up with ideas for applying Deep Learning in the fuzzing process to solve these problems.

In this section, we will review the three very recent representative works that use Deep Learning for security-oriented fuzzing. Among the three, two representative works (Godefroid et al. 2017; Shi and Pei 2019) are already summarized phase-by-phase in Table 2. We direct interested readers to Table 2 for a concise overview of those two works. Our review will be centered around the three questions described in Section 3. In the remainder of this section, we will first provide a set of observations, then provide the indications, and finally provide some general remarks.

Key findings from a closer look

From a close look at the very recent applications using Deep Learning for solving security-oriented fuzzing challenges, we observed the following:

Observation 10.1: Deep Learning has only been applied in mutation-based fuzzing.

Even though various fuzzing techniques, including symbolic-execution-based fuzzing (Stephens et al. 2016), taint-analysis-based fuzzing (Bekrar et al. 2012), and hybrid fuzzing (Yun et al. 2018), have been proposed so far, we observed that all the works we surveyed employed Deep Learning to assist the primitive form of fuzzing, i.e., mutation-based fuzzing. Specifically, they adopted Deep Learning to assist the fuzzing tool's input mutation, and they commonly did so in two ways: (1) training Deep Learning models to tell how to efficiently mutate the input to trigger more execution paths (Shi and Pei 2019; Rajpal et al. 2017); (2) training Deep Learning models to tell how to keep the mutated files compliant with the program's basic semantic requirements (Godefroid et al. 2017). Besides, all three works trained different Deep Learning models for different programs, which means that knowledge learned from one program cannot be applied to other programs.

Observation 10.2: All the works in our survey chose their training samples similarly in Phase I.

The works in this survey had a common practice, i.e., using the input files directly as training samples for the Deep Learning model. Learn&Fuzz (Godefroid et al. 2017) used character-level PDF object sequences as training samples. Neuzz (Shi and Pei 2019) regarded input files directly as byte sequences and fed them into the neural network model. Rajpal et al. (2017) also used byte-level representations of input files as training samples.

Observation 10.3: The works in our survey differed in how they assigned the training labels in Phase I.

Despite the similarity of the training samples, there was a huge difference in the training labels that each work chose to use. Learn&Fuzz (Godefroid et al. 2017) directly used the character sequences of PDF objects as labels, the same as the training samples but shifted by one position, which is a common generative-model technique already broadly used in speech and handwriting recognition.
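To ground this shift-by-one labeling, the following minimal sketch (ours; the PDF-object text is a toy example) builds a training pair for a character-level generative model:

```python
# Sketch of the shift-by-one labeling used by character-level generative
# models (Observation 10.3): the label at each position is the next character.
def make_training_pair(seq: str):
    x = seq[:-1]   # input: all characters but the last
    y = seq[1:]    # label: the same sequence shifted by one position
    return x, y

obj = "1 0 obj << /Type /Page >> endobj"   # toy PDF-object text
x, y = make_training_pair(obj)
print(repr(x[:12]), "->", repr(y[:12]))    # model learns P(next char | prefix)
```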
Unlike Learn&Fuzz, Neuzz (Shi and Pei 2019) and Rajpal's work (Rajpal et al. 2017) used a bitmap and a heatmap, respectively, as training labels, with the bitmap representing the code coverage status of a certain input and the heatmap representing the efficacy of flipping one or more bytes of the input file. The bitmap, a term well known among fuzzing researchers, was gathered directly from the results of AFL. The heatmap used by Rajpal et al. was generated by comparing the code coverage supported by the bitmap of one seed file with the code coverage supported by the bitmaps of the mutated seed files. If there is an acceptable level of code coverage expansion when executing the mutated seed files, demonstrated by more "1"s instead of "0"s in the corresponding bitmaps, the byte-level differences between the original seed file and the mutated seed files are highlighted; since those bytes should be the focus of later mutation, the heatmap is used to denote the locations of those bytes.

The different label usage in each work was actually due to the different kinds of knowledge each work wanted to learn. For a better understanding, note that we can simply regard a Deep Learning model as a simulation of a "function". Learn&Fuzz (Godefroid et al. 2017) wanted to learn valid mutations of a PDF file that are compliant with the syntax and semantic requirements of PDF objects. Their model can be seen as a simulation of f(x, θ) = y, where x denotes a sequence of characters in PDF objects and y represents the sequence obtained by shifting the input sequence by one position. Once the model was trained, they generated new PDF object character sequences given a starting prefix. In Neuzz (Shi and Pei 2019), an NN (neural network) model was used to perform program smoothing: it simulated a smooth surrogate function that approximated the discrete branching behaviors of the target program, again f(x, θ) = y, where x denotes the program's byte-level input and y represents the corresponding edge-coverage bitmap. In this way, the gradient of the surrogate function is easily computed, thanks to NNs' support for efficient computation of gradients and higher-order derivatives; gradients can then be used to guide the direction of mutation in order to get greater code coverage. In Rajpal and others' work (Rajpal et al. 2017), a model was designed to predict good (and bad) locations to mutate in input files based on past mutations and the corresponding code coverage information; here, the x variable also denotes the program's byte-level input, but the y variable represents the corresponding heatmap.

Observation 10.4: Input files of various lengths were handled in Phase II.

Deep Learning models typically accept fixed-length input, whereas the input files for fuzzers often have different lengths. Two different approaches were used among the three works we surveyed: splitting and padding. Learn&Fuzz (Godefroid et al. 2017) dealt with this mismatch by concatenating all the PDF object character sequences together and then splitting the large character sequence into multiple training samples of a fixed size. Neuzz (Shi and Pei 2019) solved this problem by setting a maximum input-file-size threshold and then padding the smaller input files with null bytes. From additional experiments, they also found that a modest threshold gave them the best results and that enlarging the input file size did not grant additional accuracy.
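To ground the two length-handling strategies in Observation 10.4, the following sketch (ours; the chunk size and threshold are illustrative) contrasts Learn&Fuzz-style splitting with Neuzz-style padding:

```python
# Sketch of the two Phase II length-handling strategies: splitting into
# fixed-size chunks versus padding/truncating to a fixed threshold.
def split_fixed(data: bytes, size: int = 64):
    """Split a long byte sequence into fixed-size training samples."""
    return [data[i:i + size] for i in range(0, len(data) - size + 1, size)]

def pad_or_truncate(data: bytes, threshold: int = 64):
    """Pad small inputs with null bytes (or cut large ones) to a threshold."""
    return data[:threshold].ljust(threshold, b"\x00")

seed = b"%PDF-1.5 " + b"A" * 150
print(len(split_fixed(seed)))          # 2 chunks of 64 bytes
print(len(pad_or_truncate(b"tiny")))   # 64
```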
Aside from preprocessing the training samples, Neuzz also preprocessed the training labels and reduced the label dimension by merging the edges that always appeared together into one edge, in order to prevent the multicollinearity problem, which could prevent the model from converging to a small loss value. Rajpal and others (Rajpal et al. 2017) used a splitting mechanism similar to Learn&Fuzz to split their input files into either 64-bit or 128-bit chunks. Their chunk size was determined empirically and treated as a tunable parameter of their Deep Learning model, and their approach did not require sequence concatenation at the beginning.

Observation 10.5: All the works in our survey skipped Phase III.

According to our definition of Phase III, none of the works in our survey considered representation learning; therefore, all three works (Godefroid et al. 2017; Shi and Pei 2019; Rajpal et al. 2017) fall into class 1 shown in Fig. 2. That said, Rajpal and others did consider the numerical representation of byte sequences: they claimed that since one byte of binary data does not always represent a magnitude but may also encode a state, representing a byte as a value ranging from 0 to 255 could be suboptimal, and they therefore used a lower-level 8-bit representation.

The above observations seem to indicate the following indications:

Indication 10.1: Making no alteration to the input files seems to be the correct approach.

In our view, this is due to the nature of fuzzing: since every bit of an input file matters, any slight alteration to the input files could either lose important information or add redundant information for the neural network model to learn.

Indication 10.2: Evaluation criteria should be chosen carefully when judging mutations.

Input files are always used as training samples when applying Deep Learning techniques to fuzzing problems. Through this common practice, researchers share the desire to let the neural network model learn what mutated input files should look like. But the criteria for judging an input file actually have two levels: on the one hand, a good input file should be correct in syntax and semantics; on the other hand, a good input file should be the product of a useful mutation, i.e., one that triggers the program to behave differently from previous execution paths. The insight that a fuzzer able to generate semantically correct input files could still be bad at triggering new execution paths was first brought up in Learn&Fuzz (Godefroid et al. 2017). Later works tried to solve this problem by using either different training labels (Rajpal et al. 2017) or a neural network that performs program smoothing (Shi and Pei 2019). We encourage fuzzing researchers, when using Deep Learning techniques, to keep this problem in mind in order to get better fuzzing results.

Indication 10.3: The works in our survey only focus on local knowledge.

In brief, some of the existing works (Shi and Pei 2019; Rajpal et al. 2017) leveraged Deep Learning models to learn the relation between a program's input and its behavior and used the knowledge learned from history to guide future mutation. For better demonstration, we define knowledge that applies only to one program as local knowledge; in other words, local knowledge cannot direct fuzzing on other programs.
Discussion

Corresponding to the problems of conventional fuzzing, the advantages of applying DL to fuzzing are that DL's learning ability can help mutated input files follow the designated grammar rules better, and that the ways in which input files are generated are more directed, thereby helping the fuzzer increase its code coverage with each mutation. However, even if these advantages are clearly demonstrated by the papers discussed above, some challenges still exist, including mutation-judgment challenges faced both by traditional fuzzing techniques and by fuzzing with DL, as well as the scalability of fuzzing approaches. We would like to raise several interesting questions for future researchers: (1) Can the knowledge learned from the fuzzing history of one program be applied to direct testing on other programs? (2) If the answer to question (1) is positive, can we suppose that global knowledge across different programs exists, and can we train a model to extract that global knowledge? (3) Is it possible to combine global knowledge and local knowledge when fuzzing programs?

Discussion

Using high-quality data in Deep Learning is as important as using well-structured deep neural network architectures. That is, obtaining quality data must be an important step that should not be skipped, even when resolving security problems using Deep Learning. So far, this study has demonstrated how recent security papers using Deep Learning have adopted data conversion (Phase II) and data representation (Phase III) for different security problems. Our observations and indications give a clear understanding of how security experts generate quality data when using Deep Learning. Since we did not review all the existing security papers using Deep Learning, the generality of our observations and indications is somewhat limited. Note that our selected papers for review have been published recently at prestigious security and reliability conferences such as USENIX SECURITY, ACM CCS, and so on (Shin et al. 2015)-(Das et al. 2018), (Brown et al. 2018; Zhang et al. 2019), (Song et al. 2018; Petrik et al. 2018), (Wang et al. 2019)-(Rajpal et al. 2017). Thus, our observations and indications help to understand how most security experts have used Deep Learning to solve the well-known eight security problems, from program analysis to fuzzing.

Our observations show that raw data should be transferred into synthetic data formats ready for resolving security problems using Deep Learning, through data cleaning, data augmentation, and so on. Specifically, we observe that Phase II and III methods have mainly been used for the following purposes:

• To clean the raw data to make the neural network (NN) models easier to interpret
• To reduce the dimensionality of the data (e.g., principal component analysis (PCA), t-distributed stochastic neighbor embedding (t-SNE))
• To scale the input data (e.g., normalization)
• To make NN models understand more complex relationships, depending on the security problem (e.g., memory graphs)
• To simply change various raw data formats into a vector format for NN models (e.g., one-hot encoding and word2vec embedding)

In the following, we further discuss the question "What if Phase II is skipped?" rather than the question "Is Phase III always necessary?", because most of the selected papers either do not consider Phase III methods (76%) or adopt them without concrete reasoning (19%).
Specifically, we examine in depth how Phase II has been adopted according to the eight security problems, the different types of data, the various NN models, and the various outputs of NN models. Our key findings are summarized as follows:

• How to fit security domain knowledge into raw data has not been well studied yet.
• While raw text data are commonly parsed after embedding, raw binary data are converted using various Phase II methods.
• Raw data are commonly converted into a vector format that fits a specific NN model well, using various Phase II methods.
• Various Phase II methods are used according to the relationship between the output of the security problem and the output of the NN model.

What if phase II is skipped?

From the analysis of our selected papers for review, we roughly classify Phase II methods into the following four categories:

• Embedding: data conversion methods that intend to convert high-dimensional discrete variables into low-dimensional continuous vectors (Google Developers 2016).
• Parsing combined with embedding: data conversion methods that organize input data into syntactic components, in order to test conformity, before embedding.
• One-hot encoding: a simple embedding where each data item belonging to a specific category is mapped to a vector of 0s and a single 1; here, unlike learned embeddings, the dimensionality of the transformed vector is not controlled.
• Domain-specific data structures: a set of data conversion methods that generate data structures capturing domain-specific knowledge for different security problems, e.g., memory graphs (Song et al. 2018).

Findings on eight security problems

We observe that over 93% of the papers use one of the above-classified Phase II methods; 7% of the papers do not use any of them, and these papers mostly address the software fuzzing problem. Specifically, we observe that 35% of the papers use a Category 1 (i.e., embedding) method; 30% use a Category 2 (i.e., parsing combined with embedding) method; 15% use a Category 3 (i.e., one-hot encoding) method; and 13% use a Category 4 (i.e., domain-specific data structures) method. Regarding why one-hot encoding is not widely used, we found that most security data include categorical input values, which are not directly analyzed by Deep Learning models.

From Fig. 3, we also observe that different Phase II methods are used for different security problems. First, PA, ROP, and CFI convert raw data into a vector format using embedding because they commonly collect instruction sequences from binary data. Second, NA and SEAD use parsing combined with embedding because raw data such as network traffic and system logs consist of complex attributes with different formats, such as categorical and numerical input values. Third, we observe that MF uses various data structures because memory dumps of the memory layout are unstructured. Fourth, fuzzing generally uses no data conversion, since Deep Learning models are used to generate new input data with the same data format as the original raw data. Finally, we observe that MC commonly uses one-hot encoding and embedding because malware binaries and well-structured security log files generally include categorical, numerical, and unstructured data. These observations indicate that the type of data strongly influences the use of Phase II methods.
We also observe that, among the eight security problems, only MF commonly transforms raw data into well-structured data that embeds specialized security domain knowledge. This observation indicates that methods for converting raw data into well-structured data embedding various kinds of security domain knowledge have not yet been studied in depth.
Findings on different data types
Note that, depending on the type of data, one NN model works better than the others; for example, CNN works well with images but does not work well with text.
Fig. 3 Statistics of Phase II methods for eight security problems
From Fig. 4, for raw binary data, we observe that 51.9%, 22.3% and 11.2% of the security papers use embedding, one-hot encoding and Others, respectively. Only 14.9% of the security papers, mostly related to fuzzing, use none of the Phase II methods. This observation indicates that binary input data, which come in various binary formats, should be converted into an input data type that works well with a specific NN model. From Fig. 4, for raw text data, we also observe that 92.4% of the papers use parsing combined with embedding as the Phase II method. Note that, compared with raw binary data, whose formats are unstructured, raw text data generally have a well-structured format. Raw text data collected from network traffic may also have various types of attribute values. Thus, raw text data are commonly parsed in combination with embedding to reduce the redundancy and dimensionality of the data.
Fig. 4 Statistics of Phase II methods on type of data
Findings on various models of NN
Depending on the type of the converted data, a specific NN model works better than the others; for example, CNN works well with images but does not work with raw text. From Fig. 6b, we observe that the uses of embedding for the DNN (42.9%), RNN (28.6%) and LSTM (14.3%) models together account for roughly 85%. This observation indicates that embedding methods are commonly used to generate sequential input data for DNN, RNN and LSTM models. Also, we observe that one-hot encoded data are commonly used as input data for the DNN (33.4%), CNN (33.4%) and LSTM (16.7%) models. This observation indicates that one-hot encoding is one of the common Phase II methods for generating numerical values for image and sequential input data, because much of the raw input data for security problems has categorical features. We observe that the CNN model (66.7%) uses input data converted by the Others methods, which express specific domain knowledge in the input data structure of the NN; this is because general vector formats, including graphs, matrices and so on, can also be used as input values of a CNN model. From Fig. 5b, we observe that the DNN, RNN and LSTM models commonly use embedding, one-hot encoding and parsing combined with embedding; for example, 54.6%, 18.2% and 18.2% of the papers use embedding, one-hot encoding and parsing combined with embedding, respectively. We also observe that the CNN model is used with various Phase II methods because almost any vector format, such as an image, can generally be used as input data of a CNN model.
Findings on output of NN models
Depending on the relationship between the output of the security problem and the output of the NN, a specific Phase II method may be used. For example, if the output of the security problem is a class (e.g., normal or abnormal), the output of the NN should also be a classification.
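The pattern observed above, embedded token sequences feeding a recurrent model whose output is a classification (e.g., normal vs. abnormal), can be sketched as follows. This is our own minimal illustration; all dimensions and the class names are assumptions, not values from any surveyed paper.

import torch
import torch.nn as nn

class SeqClassifier(nn.Module):
    def __init__(self, vocab_size=256, embed_dim=32, hidden=64, classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)  # Phase II: embedding
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, classes)            # normal/abnormal

    def forward(self, x):                 # x: (batch, seq_len) token ids
        emb = self.embed(x)               # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.lstm(emb)      # h_n: (1, batch, hidden)
        return self.head(h_n[-1])         # (batch, classes) logits

model = SeqClassifier()
logits = model(torch.randint(0, 256, (4, 50)))  # 4 sequences of 50 tokens
print(logits.shape)                             # torch.Size([4, 2])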
Fig. 5 Statistics of Phase II methods for various types of NNs
From Fig. 6a, we observe that embedding is commonly used to support security problems whose output is a classification (100%). Parsing combined with embedding is used to support security problems whose outputs are object detection (41.7%) and classification (58.3%). One-hot encoding is used only for classification (100%). These observations indicate that classification of the given input data is the most common output obtained using Deep Learning under the various Phase II methods. From Fig. 6b, we observe that security problems whose outputs are classification commonly use embedding (43.8%) and parsing combined with embedding (21.9%) as the Phase II method. We also observe that security problems whose outputs are object detection commonly use parsing combined with embedding (71.5%). However, security problems whose outputs are data generation commonly use no Phase II methods. These observations indicate that specific Phase II methods have been used according to the relationship between the output of the security problem and the use of the NN models.
Fig. 6 Statistics of Phase II methods for various output of NN
Further areas of investigation
Since Deep Learning models are stochastic, each time the same Deep Learning model is fit, even on the same data, it might give different outcomes. This is because deep neural networks use random values such as random initial weights. If we had all possible data for every security problem, predictions would not be subject to such randomness; since in practice we have limited sample data, we need to obtain best-effort prediction results from the Deep Learning model that fits the given security problem. How can we get the best-effort prediction results of Deep Learning models for different security problems? We begin by discussing the stability of the evaluation results of the papers selected for review. Next, we elaborate on the influence of security domain knowledge on the prediction results of Deep Learning models. Finally, we discuss some common issues in these fields.
How stable are evaluation results?
When evaluating neural network models, three methods are commonly used: train-test split; train-validation-test split; and k-fold cross validation. A train-test split method splits the data into two parts, i.e., training and test data. Although a train-test split yields stable predictions with a large amount of data, the predictions vary with a small amount of data. A train-validation-test split method splits the data into three parts, i.e., training, validation and test data; the validation data are used to estimate predictions over unseen data. k-fold cross validation obtains k different sets of predictions from k different folds of evaluation data. Since k-fold cross validation averages the expected performance of the NN model over the k validation folds, its evaluation result is closer to the actual performance of the NN model. From the analysis of the papers selected for review, we observe that 40.0% and 32.5% of the selected papers are measured using a train-test split method and a train-validation-test split method, respectively. Only 17.5% of the selected papers are measured using k-fold cross validation.
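As a reference point for the evaluation methods just discussed, the following minimal sketch (scikit-learn on synthetic data, not a dataset or model from the surveyed papers) combines k-fold cross validation with repeated runs under different random seeds and reports the mean and standard deviation of the scores:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import KFold, cross_val_score
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for a (small) security dataset.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

scores = []
for seed in range(5):                       # repeat under different seeds
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300,
                        random_state=seed)  # seed controls initial weights
    cv = KFold(n_splits=5, shuffle=True, random_state=seed)
    scores.extend(cross_val_score(clf, X, y, cv=cv))

# The spread across seeds and folds indicates how stable the model is.
print(f"accuracy: {np.mean(scores):.3f} +/- {np.std(scores):.3f}")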
These statistics imply that even though the selected papers report accuracies close to 99% or F1 scores of 0.99, most solutions using Deep Learning might not show the same performance on noisy data with randomness. To get stable prediction results from Deep Learning models for different security problems, we should reduce the influence of the randomness of the data on the Deep Learning models. At a minimum, it is recommended to consider the following methods:
• Run experiments on the same data many times: to get a stable prediction with a small amount of sample data, we can average out the randomness by repeating the experiment on the same data many times.
• Use cross validation methods, e.g., k-fold cross validation: the expected average and variance from k-fold cross validation estimate how stable the proposed model is.
How does security domain knowledge influence the performance of security solutions using deep learning?
When selecting a NN model to analyze an application dataset, e.g., the MNIST dataset (LeCun and Cortes 2010), we should understand that the problem is to classify a handwritten digit from a 28 × 28 gray-scale image. Also, to solve the problem with high classification accuracy, it is important to know which part of each handwritten digit mainly influences the outcome of the problem, i.e., to have domain knowledge. When solving a security problem, knowing and using security domain knowledge for each security problem is also important, for the following reasons (we label the observations and indications related to domain knowledge with '∗'):
Firstly, dataset generation, preprocessing and feature selection depend highly on domain knowledge. Unlike in image classification and natural language processing, raw data in the security domain cannot be fed into the NN model directly. Researchers need to apply strong domain knowledge to generate, extract, or clean the training set. Also, in some works, domain knowledge is applied in data labeling because the labels for data samples are not straightforward.
Secondly, domain knowledge helps with the selection of DL models and their hierarchical structure. For example, the neural network architecture (hierarchical and bi-directional LSTM) designed in DEEPVSA (Guo et al. 2019) is based on domain knowledge of instruction analysis.
Thirdly, domain knowledge helps to speed up the training process. For instance, applying strong domain knowledge to clean the training set can speed up the training process while keeping the same performance. However, due to the influence of the randomness of the data on Deep Learning models, domain knowledge should be applied carefully to avoid a potential decrease in accuracy.
Finally, domain knowledge helps with the interpretability of the models' predictions. Recently, researchers have begun to explore the interpretability of deep learning models in security areas. For instance, LEMNA (Guo et al. 2018) and EKLAVYA (Chua et al. 2017) explain, from different perspectives, how the predictions are made by the models. By enhancing the trained models' interpretability, they can improve their approaches' accuracy and security. The explanation of the relation between the input, the hidden states, and the final output is based on domain knowledge.
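To illustrate the second point, the sketch below shows the general shape of a hierarchy-aware design of the kind the DEEPVSA description suggests: a lower bidirectional LSTM summarizes the bytes of each instruction, and an upper bidirectional LSTM reads the sequence of per-instruction summaries. This is our own illustration under assumed dimensions and is not the architecture published in (Guo et al. 2019).

import torch
import torch.nn as nn

class HierBiLSTM(nn.Module):
    def __init__(self, byte_vocab=256, embed=16, lo=32, hi=64, classes=4):
        super().__init__()
        self.embed = nn.Embedding(byte_vocab, embed)
        # Lower level: summarize the bytes inside each instruction.
        self.byte_lstm = nn.LSTM(embed, lo, batch_first=True,
                                 bidirectional=True)
        # Upper level: read the sequence of instruction summaries.
        self.insn_lstm = nn.LSTM(2 * lo, hi, batch_first=True,
                                 bidirectional=True)
        self.head = nn.Linear(2 * hi, classes)

    def forward(self, x):   # x: (batch, n_insns, bytes_per_insn) byte ids
        b, n, m = x.shape
        emb = self.embed(x.view(b * n, m))           # embed each byte
        _, (h, _) = self.byte_lstm(emb)              # summarize each insn
        insn_repr = torch.cat([h[-2], h[-1]], dim=1).view(b, n, -1)
        out, _ = self.insn_lstm(insn_repr)           # read insn sequence
        return self.head(out)                        # per-insn class logits

logits = HierBiLSTM()(torch.randint(0, 256, (2, 10, 4)))
print(logits.shape)  # torch.Size([2, 10, 4])

The hierarchy mirrors the domain knowledge that bytes have meaning only within an instruction, while instructions have meaning within a sequence.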
Common challenges
In this section, we discuss the common challenges of applying DL to solving security problems. These challenges are shared at least by the majority of the works, if not by all of them. Generally, we observe the following common challenges in our survey:
1. The raw data collected from the software or system usually contain a lot of noise.
2. The collected raw data are untidy: for instance, instruction traces are variable-length sequences.
3. Hierarchical data syntax/structure: as discussed in Section 3, the information may not simply be encoded in a single layer; rather, it is encoded hierarchically, and the syntax is complex.
4. Dataset generation is challenging in some scenarios, so the generated training data might be less representative or unbalanced.
5. Unlike the applications of DL in image classification and natural language processing, where the data are visible or understandable, the relation between a data sample and its label is not intuitive and is hard to explain.
Availability of trained model and quality of dataset. Finally, we investigate the availability of the trained models and the quality of the datasets. Generally, the availability of the trained models affects their adoption in practice, and the quality of the training and testing sets affects the credibility of the testing results and the comparison between different works. Therefore, we collect relevant information to answer the following four questions and show the statistics in Table 4:
1. Is the paper's source code publicly available?
2. Are the raw data used to generate the dataset publicly available?
3. Is the dataset publicly available?
4. What is the quality of the dataset?
We observe that the percentages of open-source code and open datasets in the surveyed fields are both low, which makes it challenging to reproduce the proposed schemes, to compare different works, and to adopt them in practice. Specifically, the statistics show that 1) the percentage of open-source code in the surveyed fields is low: only 6 out of 16 papers published their model's source code; 2) the percentage of public datasets is low: even though the raw data of half of the works are publicly available, only 4 out of 16 papers fully or partially published their datasets; and 3) the quality of the datasets is not guaranteed; for instance, most of the datasets are unbalanced.

Table 4 Analysis of the datasets and trained model
Topic          | Paper             | Source Available | Raw Data Available(1) | Dataset Available(2) | Sample Num | Balance
PA             | RFNBNN [9]        | ✗ | ✓ | ✗ | N/A     | N/A
PA             | EKLAVYA [10]      | ✗ | ✓ | ✗ | N/A     | N/A
ROP            | ROPNN [11]        | ✗ | ✗ | ✗ | N/A     | N/A
ROP            | HeNet [12]        | ✗ | ✗ | ✗ | N/A     | N/A
CFI            | Barnum [13]       | ✓ | ✗ | ✗ | N/A     | N/A
CFI            | CFG-CNN [14]      | ✓ | ✓ | ✗ | N/A     | N/A
Network        | 50b(yte)-CNN [15] | ✗ | ✓ | ✗ | 115835  | ✗
Network        | PCCN [16]         | ✓ | ✓ | ✓ | 1168671 | ✗
Malware        | Rosenberg [17]    | ✗ | ✗ | ✗ | 500000  | ✓
Malware        | DeLaRosa [18]     | ✗ | ✗ | ✗ | 100000  | ✗
LogEvent       | DeepLog [8]       | P(3) | ✓ | P | N/A  | ✗
LogEvent       | LogAnom [41]      | ✗ | ✓ | P | N/A     | ✗
MemoryForensic | DeepMem [19]      | ✓ | ✓ | ✓ | N/A     | ✗
MemoryForensic | MDMF [48]         | ✗ | ✗ | ✗ | N/A     | ✗
Fuzzing        | NeuZZ [20]        | ✓ | ✗ | ✗ | N/A     | N/A
Fuzzing        | Learn & Fuzz      | ✗ | ✗ | ✗ | N/A     | N/A
(1) "Raw data" refers to data that are used to generate the training set but cannot be fed into the model directly; for instance, a collection of binary files is raw data. (2) "Dataset" is the collection of data samples that can be fed into the DL model directly; for instance, a collection of images or sequences. (3) "P" denotes that the source code or dataset is partially available to the public. The last two columns report the quality of the dataset.

The performance of security solutions, even those using Deep Learning, might vary according to the datasets. Traditionally, when evaluating different NN models in image classification, standard datasets such as MNIST for recognizing 10 handwritten digits and CIFAR10 (Krizhevsky et al.
2010) for recognizing 10 object classes are used for the performance comparison of different NN models. However, there are no standard datasets for evaluating NN models on the different security problems. Due to this limitation, we observe that most security papers using Deep Learning do not compare the performance of different security solutions, even when they consider the same security problem. Thus, it is recommended to generate and use a standard dataset for each specific security problem for comparison. In conclusion, we think there are three aspects that need to be improved in future research:
1. Developing standard datasets.
2. Publishing source code and datasets.
3. Improving the interpretability of models.
Conclusion
This paper seeks to provide a dedicated review of the very recent research works on using Deep Learning techniques to solve computer security challenges. In particular, the review covers eight computer security problems being solved by applications of Deep Learning: security-oriented program analysis, defending ROP attacks, achieving CFI, defending network attacks, malware classification, system-event-based anomaly detection, memory forensics, and fuzzing for software security. Our observations of the reviewed works indicate that the literature on using Deep Learning techniques to solve computer security challenges is still at an early stage of development.
Acknowledgments
We are grateful to the anonymous reviewers for their useful comments and suggestions.
Authors' contributions
All authors read and approved the final manuscript.
Funding
This work was supported by ARO W911NF-13-1-0421 (MURI), NSF CNS-1814679, and ARO W911NF-15-1-0576.
Availability of data and materials
Not applicable.
Competing interests
PL is currently serving on the editorial board for Journal of Cybersecurity.
Author details
1 The Pennsylvania State University, Pennsylvania, USA. 2 Pusan National University, Busan, Republic of Korea. 3 Wuhan University of Technology, Wuhan, China.
Received: 11 March 2020 Accepted: 17 June 2020
References
Abadi M, Budiu M, Erlingsson Ú, Ligatti J (2009) Control-Flow Integrity Principles, Implementations, and Applications. ACM Trans Inf Syst Secur (TISSEC) 13(1):4
Bao T, Burket J, Woo M, Turner R, Brumley D (2014) BYTEWEIGHT: Learning to Recognize Functions in Binary Code. In: 23rd USENIX Security Symposium (USENIX Security 14). USENIX Association, San Diego. pp 845–860
Bekrar S, Bekrar C, Groz R, Mounier L (2012) A Taint Based Approach for Smart Fuzzing. In: 2012 IEEE Fifth International Conference on Software Testing, Verification and Validation. IEEE. https://doi.org/10.1109/icst.2012.182
Bengio Y, Courville A, Vincent P (2013) Representation Learning: A Review and New Perspectives. IEEE Trans Pattern Anal Mach Intell 35(8):1798–1828
Bertero C, Roy M, Sauvanaud C, Tredan G (2017) Experience Report: Log Mining Using Natural Language Processing and Application to Anomaly Detection. In: 2017 IEEE 28th International Symposium on Software Reliability Engineering (ISSRE). IEEE. https://doi.org/10.1109/issre.2017.43
Brown A, Tuor A, Hutchinson B, Nichols N (2018) Recurrent Neural Network Attention Mechanisms for Interpretable System Log Anomaly Detection. In: Proceedings of the First Workshop on Machine Learning for Computing Systems, MLCS'18. ACM, New York. pp 1:1–1:8
Böttinger K, Godefroid P, Singh R (2018) Deep Reinforcement Fuzzing. In: 2018 IEEE Security and Privacy Workshops (SPW). IEEE. pp 116–122.
https://doi.org/10.1109/spw.2018.00026
Chen L, Sultana S, Sahita R (2018) HeNet: A Deep Learning Approach on Intel Processor Trace for Effective Exploit Detection. In: 2018 IEEE Security and Privacy Workshops (SPW). IEEE. https://doi.org/10.1109/spw.2018.00025
Chua ZL, Shen S, Saxena P, Liang Z (2017) Neural Nets Can Learn Function Type Signatures from Binaries. In: 26th USENIX Security Symposium (USENIX Security 17). USENIX Association. pp 99–116. https://dl.acm.org/doi/10.5555/3241189.3241199
Cui Z, Xue F, Cai X, Cao Y, Wang GG, Chen J (2018) Detection of Malicious Code Variants Based on Deep Learning. IEEE Trans Ind Inform 14(7):3187–3196
Dahl GE, Stokes JW, Deng L, Yu D (2013) Large-scale Malware Classification using Random Projections and Neural Networks. In: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE. https://doi.org/10.1109/icassp.2013.6638293
Dai Y, Li H, Qian Y, Lu X (2018) A Malware Classification Method Based on Memory Dump Grayscale Image. Digit Investig 27:30–37
Das A, Mueller F, Siegel C, Vishnu A (2018) Desh: Deep Learning for System Health Prediction of Lead Times to Failure in HPC. In: Proceedings of the 27th International Symposium on High-Performance Parallel and Distributed Computing, HPDC '18. ACM, New York. pp 40–51
David OE, Netanyahu NS (2015) DeepSign: Deep Learning for Automatic Malware Signature Generation and Classification. In: 2015 International Joint Conference on Neural Networks (IJCNN). IEEE. https://doi.org/10.1109/ijcnn.2015.7280815
De La Rosa L, Kilgallon S, Vanderbruggen T, Cavazos J (2018) Efficient Characterization and Classification of Malware Using Deep Learning. In: 2018 Resilience Week (RWS). IEEE. https://doi.org/10.1109/rweek.2018.8473556
Du M, Li F (2016) Spell: Streaming Parsing of System Event Logs. In: 2016 IEEE 16th International Conference on Data Mining (ICDM). IEEE. https://doi.org/10.1109/icdm.2016.0103
Du M, Li F, Zheng G, Srikumar V (2017) DeepLog: Anomaly Detection and Diagnosis from System Logs Through Deep Learning. In: Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, CCS '17. ACM, New York. pp 1285–1298
Faker O, Dogdu E (2019) Intrusion Detection Using Big Data and Deep Learning Techniques. In: Proceedings of the 2019 ACM Southeast Conference (ACM SE '19). ACM. pp 86–93. https://doi.org/10.1145/3299815.3314439
Ghosh AK, Wanken J, Charron F (1998) Detecting Anomalous and Unknown Intrusions against Programs. In: Proceedings of the 14th Annual Computer Security Applications Conference (Cat. No. 98Ex217). IEEE, Washington, DC. pp 259–267
Godefroid P, Peleg H, Singh R (2017) Learn&Fuzz: Machine Learning for Input Fuzzing. In: 2017 32nd IEEE/ACM International Conference on Automated Software Engineering (ASE). IEEE. https://doi.org/10.1109/ase.2017.8115618
Google Developers (2016) Embeddings. https://developers.google.com/machine-learning/crash-course/embeddings/video-lecture
Guo W, Mu D, Xu J, Su P, Wang G, Xing X (2018) LEMNA: Explaining Deep Learning based Security Applications. In: Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. ACM. pp 364–379. https://doi.org/10.1145/3243734.3243792
Guo W, Mu D, Xing X, Du M, Song D (2019) DEEPVSA: Facilitating Value-set Analysis with Deep Learning for Postmortem Program Analysis. In: 28th USENIX Security Symposium (USENIX Security 19). USENIX Association, Santa Clara, CA. pp 1787–1804.
https://www.usenix.org/conference/usenixsecurity19/presentation/guo
Heller KA, Svore KM, Keromytis AD, Stolfo SJ (2003) One Class Support Vector Machines for Detecting Anomalous Windows Registry Accesses. In: Proceedings of the Workshop on Data Mining for Computer Security. IEEE, Dallas, TX
Horwitz S (1997) Precise Flow-insensitive May-alias Analysis is NP-hard. ACM Trans Program Lang Syst 19(1):1–6
Hu W, Liao Y, Vemuri VR (2003) Robust Anomaly Detection using Support Vector Machines. In: Proceedings of the International Conference on Machine Learning. Citeseer, Washington, DC. pp 282–289
IDS 2017 Datasets (2019). https://www.unb.ca/cic/datasets/ids-2017.html
Kalash M, Rochan M, Mohammed N, Bruce NDB, Wang Y, Iqbal F (2018) Malware Classification with Deep Convolutional Neural Networks. In: 2018 9th IFIP International Conference on New Technologies, Mobility and Security (NTMS). pp 1–5. https://doi.org/10.1109/NTMS.2018.8328749
Kiriansky V, Bruening D, Amarasinghe SP, et al. (2002) Secure Execution via Program Shepherding. In: USENIX Security Symposium, volume 92, page 84. USENIX Association, Monterey, CA
Kolosnjaji B, Eraisha G, Webster G, Zarras A, Eckert C (2017) Empowering Convolutional Networks for Malware Classification and Analysis. Proc Int Jt Conf Neural Netw 2017-May:3838–3845
Krizhevsky A, Nair V, Hinton G (2010) CIFAR-10 (Canadian Institute for Advanced Research). https://www.cs.toronto.edu/~kriz/cifar.html
LeCun Y, Cortes C (2010) MNIST Handwritten Digit Database. http://yann.lecun.com/exdb/mnist/
Li J, Zhao B, Zhang C (2018) Fuzzing: A Survey. Cybersecurity 1(1):6
Li X, Hu Z, Fu Y, Chen P, Zhu M, Liu P (2018) ROPNN: Detection of ROP Payloads Using Deep Neural Networks. arXiv preprint arXiv:1807.11110
McLaughlin N, Martinez Del Rincon J, Kang BJ, Yerima S, Miller P, Sezer S, Safaei Y, Trickel E, Zhao Z, Doupe A, Ahn GJ (2017) Deep Android Malware Detection. In: Proceedings of the 7th ACM Conference on Data and Application Security and Privacy. pp 301–308. https://doi.org/10.1145/3029806.3029823
Meng W, Liu Y, Zhu Y, Zhang S, Pei D, Liu Y, Chen Y, Zhang R, Tao S, Sun P, Zhou R (2019) LogAnomaly: Unsupervised Detection of Sequential and Quantitative Anomalies in Unstructured Logs. In: Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence. International Joint Conferences on Artificial Intelligence Organization. https://doi.org/10.24963/ijcai.2019/658
Michalas A, Murray R (2017) MemTri: A Memory Forensics Triage Tool Using Bayesian Network and Volatility. In: Proceedings of the 2017 International Workshop on Managing Insider Security Threats, MIST '17. ACM, New York. pp 57–66
Millar K, Cheng A, Chew HG, Lim C-C (2018) Deep Learning for Classifying Malicious Network Traffic. In: Pacific-Asia Conference on Knowledge Discovery and Data Mining. Springer. pp 156–161. https://doi.org/10.1007/978-3-030-04503-6_15
Moustafa N, Slay J (2015) UNSW-NB15: A Comprehensive Data Set for Network Intrusion Detection Systems (UNSW-NB15 Network Data Set). In: 2015 Military Communications and Information Systems Conference (MilCIS). IEEE. https://doi.org/10.1109/milcis.2015.7348942
Nguyen MH, Nguyen DL, Nguyen XM, Quan TT (2018) Auto-Detection of Sophisticated Malware using Lazy-Binding Control Flow Graph and Deep Learning. Comput Secur 76:128–155
Nix R, Zhang J (2017) Classification of Android Apps and Malware using Deep Neural Networks. Proc Int Jt Conf Neural Netw 2017-May:1871–1878
NSCAI Intern Report for Congress (2019).
https://drive.google.com/file/d/153OrxnuGEjsUvlxWsFYauslwNeCEkvUb/view
Petrik R, Arik B, Smith JM (2018) Towards Architecture and OS-Independent Malware Detection via Memory Forensics. In: Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, CCS '18. ACM, New York. pp 2267–2269
Phan AV, Nguyen ML, Bui LT (2017) Convolutional Neural Networks over Control Flow Graphs for Software Defect Prediction. In: 2017 IEEE 29th International Conference on Tools with Artificial Intelligence (ICTAI). IEEE. pp 45–52. https://doi.org/10.1109/ictai.2017.00019
Rajpal M, Blum W, Singh R (2017) Not All Bytes are Equal: Neural Byte Sieve for Fuzzing. arXiv preprint arXiv:1711.04596
Rosenberg I, Shabtai A, Rokach L, Elovici Y (2018) Generic Black-box End-to-End Attack against State of the Art API Call based Malware Classifiers. In: Research in Attacks, Intrusions, and Defenses. Springer. pp 490–510. https://doi.org/10.1007/978-3-030-00470-5_23
Salwan J (2015) ROPGadget. https://github.com/JonathanSalwan/ROPgadget
Saxe J, Berlin K (2015) Deep Neural Network based Malware Detection using Two Dimensional Binary Program Features. In: 2015 10th International Conference on Malicious and Unwanted Software (MALWARE). IEEE. https://doi.org/10.1109/malware.2015.7413680
Shacham H, et al. (2007) The Geometry of Innocent Flesh on the Bone: Return-into-libc without Function Calls (on the x86). In: ACM Conference on Computer and Communications Security. pp 552–561. https://doi.org/10.1145/1315245.1315313
She D, Pei K (2019) NEUZZ: Efficient Fuzzing with Neural Program Smoothing. IEEE Secur Priv
Shin ECR, Song D, Moazzezi R (2015) Recognizing Functions in Binaries with Neural Networks. In: 24th USENIX Security Symposium (USENIX Security 15). USENIX Association. https://dl.acm.org/doi/10.5555/2831143.2831182
Sommer R, Paxson V (2010) Outside the Closed World: On Using Machine Learning For Network Intrusion Detection. In: 2010 IEEE Symposium on Security and Privacy (S&P). IEEE. https://doi.org/10.1109/sp.2010.25
Song W, Yin H, Liu C, Song D (2018) DeepMem: Learning Graph Neural Network Models for Fast and Robust Memory Forensic Analysis. In: Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, CCS '18. ACM, New York. pp 606–618
Stephens N, Grosen J, Salls C, Dutcher A, Wang R, Corbetta J, Shoshitaishvili Y, Kruegel C, Vigna G (2016) Driller: Augmenting Fuzzing Through Selective Symbolic Execution. In: Proceedings 2016 Network and Distributed System Security Symposium. Internet Society. https://doi.org/10.14722/ndss.2016.23368
Tan G, Jaeger T (2017) CFG Construction Soundness in Control-Flow Integrity. In: Proceedings of the 2017 Workshop on Programming Languages and Analysis for Security - PLAS '17. ACM. https://doi.org/10.1145/3139337.3139339
Tobiyama S, Yamaguchi Y, Shimada H, Ikuse T, Yagi T (2016) Malware Detection with Deep Neural Network Using Process Behavior. Proc Int Comput Softw Appl Conf 2:577–582
Unicorn - The Ultimate CPU Emulator (2015). https://www.unicorn-engine.org/
Ustebay S, Turgut Z, Aydin MA (2019) Cyber Attack Detection by Using Neural Network Approaches: Shallow Neural Network, Deep Neural Network and AutoEncoder. In: Computer Networks. Springer. pp 144–155. https://doi.org/10.1007/978-3-030-21952-9_11
Varenne R, Delorme JM, Plebani E, Pau D, Tomaselli V (2019) Intelligent Recognition of TCP Intrusions for Embedded Micro-controllers.
In: International Conference on Image Analysis and Processing. Springer. pp 361–373. https://doi.org/10.1007/978-3-030-30754-7_36
Wang Z, Liu P (2019) GPT Conjecture: Understanding the Trade-offs between Granularity, Performance and Timeliness in Control-Flow Integrity. arXiv preprint arXiv:1911.07828
Wang Y, Wu Z, Wei Q, Wang Q (2019) NeuFuzz: Efficient Fuzzing with Deep Neural Network. IEEE Access 7:36340–36352
Xu W, Huang L, Fox A, Patterson D, Jordan MI (2009) Detecting Large-Scale System Problems by Mining Console Logs. In: Proceedings of the ACM SIGOPS 22nd Symposium on Operating Systems Principles, SOSP '09. ACM, New York. pp 117–132
Xu X, Liu C, Feng Q, Yin H, Song L, Song D (2017) Neural Network-Based Graph Embedding for Cross-Platform Binary Code Similarity Detection. In: Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. ACM. pp 363–376. https://doi.org/10.1145/3133956.3134018
Xu L, Zhang D, Jayasena N, Cavazos J (2018) HADM: Hybrid Analysis for Detection of Malware. 16:702–724
Xu X, Ghaffarinia M, Wang W, Hamlen KW, Lin Z (2019) CONFIRM: Evaluating Compatibility and Relevance of Control-flow Integrity Protections for Modern Software. In: 28th USENIX Security Symposium (USENIX Security 19). USENIX Association, Santa Clara. pp 1805–1821
Yagemann C, Sultana S, Chen L, Lee W (2019) Barnum: Detecting Document Malware via Control Flow Anomalies in Hardware Traces. In: Lecture Notes in Computer Science. Springer. pp 341–359. https://doi.org/10.1007/978-3-030-30215-3_17
Yin C, Zhu Y, Fei J, He X (2017) A Deep Learning Approach for Intrusion Detection using Recurrent Neural Networks. IEEE Access 5:21954–21961
Yuan X, Li C, Li X (2017) DeepDefense: Identifying DDoS Attack via Deep Learning. In: 2017 IEEE International Conference on Smart Computing (SMARTCOMP). IEEE. https://doi.org/10.1109/smartcomp.2017.7946998
Yun I, Lee S, Xu M, Jang Y, Kim T (2018) QSYM: A Practical Concolic Execution Engine Tailored for Hybrid Fuzzing. In: 27th USENIX Security Symposium (USENIX Security 18). USENIX Association, Baltimore. pp 745–761
Zhang S, Meng W, Bu J, Yang S, Liu Y, Pei D, Xu J, Chen Y, Dong H, Qu X, Song L (2017) Syslog Processing for Switch Failure Diagnosis and Prediction in Datacenter Networks. In: 2017 IEEE/ACM 25th International Symposium on Quality of Service (IWQoS). IEEE. https://doi.org/10.1109/iwqos.2017.7969130
Zhang J, Chen W, Niu Y (2019) DeepCheck: A Non-intrusive Control-flow Integrity Checking based on Deep Learning. arXiv preprint arXiv:1905.01858
Zhang X, Xu Y, Lin Q, Qiao B, Zhang H, Dang Y, Xie C, Yang X, Cheng Q, Li Z, Chen J, He X, Yao R, Lou J-G, Chintalapati M, Shen F, Zhang D (2019) Robust Log-based Anomaly Detection on Unstable Log Data. In: Proceedings of the 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, ESEC/FSE 2019. ACM, New York. pp 807–817
Zhang Y, Chen X, Guo D, Song M, Teng Y, Wang X (2019) PCCN: Parallel Cross Convolutional Neural Network for Abnormal Network Traffic Flows Detection in Multi-Class Imbalanced Network Traffic Flows. IEEE Access 7:119904–119916
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.