Tactical Cyber Immune System

Funding: ARO STTR

This project leverages existing tools and results to build a fully functional tactical cyber immune system (TCIS) prototype. The TCIS prototype will deliver the following tangible benefits to end users: (i) maintaining acceptable performance of cyber systems and applications in spite of malicious attacks; (ii) surveillance and autonomic enforcement of normal user behavioral semantics in order to seamlessly detect any non-self behavior by users; (iii) metrics for evaluating the security of the cyber system; and (iv) seamless recovery of the components of a cyber system from catastrophic failures and security breaches.
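
Behind benefit (ii), detecting non-self user behavior amounts to scoring new activity against a learned baseline of normal behavior. The following is a minimal sketch of that idea, not the TCIS implementation; the session features, the Isolation Forest model, and the response action are illustrative assumptions.

```python
# Illustrative sketch only (not the TCIS code): flag "non-self" user behavior
# by scoring new sessions against a baseline learned from normal activity.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-session features: [logins per hour, MB sent, distinct hosts contacted]
normal_sessions = np.column_stack([
    rng.poisson(2, 200),
    rng.normal(10.0, 2.0, 200),
    rng.poisson(3, 200),
])

# Learn a model of "self" (normal) behavior from historical sessions.
detector = IsolationForest(contamination=0.05, random_state=0).fit(normal_sessions)

new_session = np.array([[40, 500.0, 60]])   # bursty, high-volume activity
if detector.predict(new_session)[0] == -1:  # -1 means anomalous ("non-self")
    print("non-self behavior detected: trigger autonomic response")
```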

AMAP-based Autonomic Security Operations Center (ASoC)

Funding: AFRL STTP

In this STTP project, we extend the current AMAP prototype to develop an innovative security architecture that assumes any cyber component is malicious until it can be verified to be free of malicious functionality. Autonomic computing provides the mechanisms to take proactive actions that stop cyber-attacks and their propagation, as well as mitigate their impacts. The main modules are Continuous Threat Modeling, Cyber Situation Awareness, and Anomaly Behavior Analysis.
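
The "malicious until verified" assumption can be read as a default-deny admission check that each component must pass before it is trusted. Below is a conceptual sketch under that reading; the Component fields, the verification checks, and the threshold are hypothetical placeholders, not the AMAP/ASoC design.

```python
# Conceptual sketch (not the AMAP/ASoC implementation): components start
# untrusted and are admitted only after verification checks pass.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    signature_valid: bool   # e.g., outcome of a code-signing/integrity check
    anomaly_score: float    # score from anomaly behavior analysis (0 = normal)

def verify(component: Component, threshold: float = 0.5) -> bool:
    """Default-deny: admit a component only if every check passes."""
    return component.signature_valid and component.anomaly_score < threshold

components = [
    Component("telemetry-agent", signature_valid=True, anomaly_score=0.1),
    Component("unknown-service", signature_valid=False, anomaly_score=0.9),
]

for c in components:
    print(c.name, "->", "admit" if verify(c) else "quarantine")
```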

CAREER: Learning in Adversarial and Nonstationary Environments

Funding: National Science Foundation

Security and privacy vulnerabilities of machine learning algorithms have been exposed across many classes of models (e.g., logistic regression, neural networks). Adversarial machine learning has become a topic of growing interest and concern because of exploitable mathematical flaws in the algorithms. Furthermore, cybersecurity domains face an even greater threat than many other machine learning applications because an adversary can influence the training data or even the testing data. Application areas related to this project include, but are not limited to, remote sensing, fraud detection, web usage tracking, intrusion detection, and malware detection for cybersecurity data. This CAREER project studies when and why feature selection fails in the presence of an adversary. The research focuses not only on understanding why feature selection fails, but also on the transferability of black-box and white-box attacks on feature selection. The project also proposes novel methods for attacking information-theoretic feature selection algorithms and approaches for making information-theoretic feature selection resilient.
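
To make the setting concrete, the sketch below runs mutual-information feature scoring on clean data and on data with a small amount of hypothetical poisoning that injects a spurious correlation. The data, the poisoning strategy, and the budget are toy assumptions for illustration, not the attacks developed in this project.

```python
# Toy illustration of adversarial influence on information-theoretic feature
# selection; not the project's attack or defense.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
n = 500
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, 4))
X[:, 0] += 2.0 * y  # feature 0 is the genuinely informative feature

print("clean MI scores:   ", np.round(mutual_info_classif(X, y, random_state=0), 3))

# Poison a small fraction of samples so nuisance feature 3 looks informative.
X_p, y_p = X.copy(), y.copy()
idx = rng.choice(n, size=50, replace=False)
X_p[idx, 3] = 5.0 * y_p[idx]  # inject a spurious, label-correlated signal
print("poisoned MI scores:", np.round(mutual_info_classif(X_p, y_p, random_state=0), 3))
# With a larger poisoning budget, the spurious feature can overtake the
# truly informative one in the selection ranking.
```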

Adversarial Machine Learning in Audio Transcription: The robustness and vulnerability of Deep Neural Networks (DNNs) are quickly becoming a critical area of interest since these models are in widespread use across real-world applications. An adversary exploits a DNN’s vulnerability to generate data that attacks the model; however, the majority of adversarial data generators have focused on image domains, with far less work on audio domains. More recently, audio analysis models were shown to be vulnerable to adversarial audio examples. Thus, one urgent open problem is to detect adversarial audio reliably. Our group is developing an algorithm to detect adversarial audio using a DNN’s quantization error. Specifically, we have shown that adversarial audio typically exhibits a larger activation quantization error than benign audio. The quantization error is measured using character error rates, and we use the difference in errors to discriminate adversarial from benign audio.
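
A minimal sketch of that detection idea follows: transcribe the same audio with a full-precision model and a quantized model, compute the character error rate (CER) between the two transcriptions, and flag inputs whose CER exceeds a threshold. The transcribe_fp32 and transcribe_quantized callables and the threshold value are hypothetical placeholders, not the group's implementation.

```python
# Minimal sketch of quantization-error-based adversarial audio detection.
# The transcription functions and the threshold below are placeholders.

def levenshtein(a: str, b: str) -> int:
    """Edit distance between two strings via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: edit distance normalized by reference length."""
    return levenshtein(reference, hypothesis) / max(len(reference), 1)

def looks_adversarial(audio, transcribe_fp32, transcribe_quantized,
                      threshold: float = 0.15) -> bool:
    """Flag audio whose transcription diverges strongly under quantization."""
    ref = transcribe_fp32(audio)       # full-precision ASR transcription
    hyp = transcribe_quantized(audio)  # quantized ASR transcription
    return cer(ref, hyp) > threshold   # adversarial audio tends to diverge more
```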

REU Site: CAT Vehicle: The Cognitive and Autonomous Test Vehicle

Funding: National Science Foundation

The goal of the REU Site CAT Vehicle is to empower students to engage with the myriad applications related to autonomous ground vehicles and machine learning. The approach applies model-based design methods that raise the level of abstraction to permit the safe operation of a full-sized robotic vehicle testbed. A full-sized robotic car will be paired with core research in machine learning, made possible by data gathered from automotive sensors. These themes provide a context in which participants will explore research in model-based design for cyber-physical systems, machine learning, human-in-the-loop systems, control, and autonomous systems. Participants will use a spiral development process, in which new project requirements are added only after previous requirements are verified, as part of the safety procedures for the full-sized vehicle testbed.

Partnership for Proactive Cybersecurity Research and Training

Funding: Department of Energy

The primary goal of the Partnership for Proactive Cybersecurity Training (PACT) is to address current and future cybersecurity research challenges and to educate and train the next generation of a highly skilled cybersecurity workforce, recruited heavily from underrepresented minorities and women. To achieve these goals, we form a multi-organization, multidisciplinary alliance spanning academia (The University of Arizona, Howard University, and Navajo Technical University) and DOE laboratories (Argonne National Laboratory). The Consortium’s goals are the following: