# ART Attacks
Work in progress ...
The attack descriptions include a link to the original publication and tags describing the framework support of each implementation in ART:

- `all/Numpy`: implementation based on NumPy, supporting all frameworks
- `TensorFlow`: implementation based on TensorFlow, optimised for TensorFlow estimators
- `PyTorch`: implementation based on PyTorch, optimised for PyTorch estimators
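As an illustration of the `all/Numpy` tag, here is a minimal sketch of running a NumPy-based attack through an ART estimator wrapper; the toy data, the scikit-learn model, and the parameter values are illustrative assumptions, not prescribed settings:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

# Toy data and model; any other ART classifier wrapper (TensorFlow, PyTorch, ...)
# would be attacked through the same NumPy-based interface.
x_train = np.random.rand(100, 4).astype(np.float32)
y_train = np.random.randint(0, 2, size=100)
model = LogisticRegression().fit(x_train, y_train)

classifier = SklearnClassifier(model=model, clip_values=(0.0, 1.0))
attack = FastGradientMethod(estimator=classifier, eps=0.1)  # eps chosen arbitrarily
x_adv = attack.generate(x=x_train)  # NumPy arrays in, NumPy arrays out
```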
- Auto-Attack (Croce and Hein, 2020)
  Auto-Attack runs one or more evasion attacks, either the defaults or attacks provided by the user, against a classification task. It optimises the attack strength by attacking only correctly classified samples and by first running the untargeted version of each attack, followed by the targeted version against each possible target label.
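A minimal sketch of running Auto-Attack with its default attacks; `classifier`, `x_test`, and `y_test` are assumed placeholders for a fitted ART classifier and its test data, and the budget `eps` is an arbitrary choice:

```python
import numpy as np
from art.attacks.evasion import AutoAttack

# Runs the default attack suite; only samples that are still correctly
# classified are attacked at each stage.
attack = AutoAttack(estimator=classifier, norm=np.inf, eps=0.1)
x_adv = attack.generate(x=x_test, y=y_test)
```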
- Auto Projected Gradient Descent (Auto-PGD) (Croce and Hein, 2020) [`all/Numpy`]
  Auto-PGD attacks classification tasks and optimises its attack strength by adapting the step size across iterations depending on the overall attack budget and the progress of the optimisation. After adapting its step size, Auto-PGD restarts from the best example found so far.
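A minimal Auto-PGD sketch under the same assumptions (`classifier`, `x_test`, and `y_test` as placeholders); the step size given here is only the initial value, since the attack adapts it internally:

```python
import numpy as np
from art.attacks.evasion import AutoProjectedGradientDescent

attack = AutoProjectedGradientDescent(
    estimator=classifier,
    norm=np.inf,
    eps=0.1,           # overall attack budget (illustrative value)
    eps_step=0.05,     # initial step size, adapted across iterations
    max_iter=100,
    nb_random_init=5,  # number of random initialisations
)
x_adv = attack.generate(x=x_test, y=y_test)
```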
- Shadow Attack (Ghiasi et al., 2020) [`TensorFlow`, `PyTorch`]
  Shadow Attack causes certifiably robust networks to misclassify an image and to produce "spoofed" certificates of robustness by applying large but natural-looking perturbations.
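A minimal Shadow Attack sketch; `rs_classifier` is assumed to be a certified classifier such as ART's PyTorchRandomizedSmoothing wrapper, `x_single` a single input sample, and the penalty weights are illustrative:

```python
from art.attacks.evasion import ShadowAttack

attack = ShadowAttack(
    estimator=rs_classifier,
    sigma=0.5,       # noise level used while optimising the perturbation
    lambda_tv=0.3,   # total-variation penalty: keeps the perturbation smooth
    lambda_c=1.0,    # penalty on the per-channel mean of the perturbation
    lambda_s=0.5,    # penalty on dissimilarity across colour channels
)
x_adv = attack.generate(x=x_single)  # perturbs one sample at a time
```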
- Wasserstein Attack (Wong et al., 2020)
- Targeted Universal Adversarial Perturbations (Hirano and Takemoto, 2019)
- High Confidence Low Uncertainty (Grosse et al., 2018) [`all/Numpy`]
- Iterative Frame Saliency (Inkawhich et al., 2018)
- DPatch (Liu et al., 2018) [`all/Numpy`]
  DPatch creates digital, rectangular patches that attack object detectors.
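A minimal DPatch sketch; `detector` is assumed to be an ART object-detection wrapper (for example `PyTorchFasterRCNN`), `images` a NumPy batch of inputs, and the patch shape and optimiser settings are illustrative:

```python
from art.attacks.evasion import DPatch

attack = DPatch(
    detector,
    patch_shape=(40, 40, 3),  # height, width, channels of the rectangular patch
    learning_rate=5.0,
    max_iter=500,
)
patch = attack.generate(x=images)         # optimise the digital patch
x_patched = attack.apply_patch(x=images)  # place the patch into the images
```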
- ShapeShifter (Chen et al., 2018)
- Projected Gradient Descent (PGD) (Madry et al., 2017)
- NewtonFool (Jang et al., 2017)
- Elastic Net (Chen et al., 2017)
- Adversarial Patch (Brown et al., 2017) [`all/Numpy`, `TensorFlow`]
- Decision Tree Attack (Papernot et al., 2016) [`all/Numpy`]
- Carlini & Wagner (C&W) `L_2` and `L_inf` attacks (Carlini and Wagner, 2016)
- Basic Iterative Method (BIM) (Kurakin et al., 2016) [`all/Numpy`]
- Jacobian Saliency Map (Papernot et al., 2016)
- Universal Perturbation (Moosavi-Dezfooli et al., 2016)
- Feature Adversaries (Sabour et al., 2016) [`all/Numpy`]
- DeepFool (Moosavi-Dezfooli et al., 2015)
- Virtual Adversarial Method (Miyato et al., 2015)
- Fast Gradient Method (Goodfellow et al., 2014) [`all/Numpy`]
- Square Attack (Andriushchenko et al., 2020)
- HopSkipJump Attack (Chen et al., 2019)
- Threshold Attack (Vargas et al., 2019)
- Pixel Attack (Vargas et al., 2019, Su et al., 2019)
- Simple Black-box Adversarial (SimBA) (Guo et al., 2019)
- Spatial Transformation (Engstrom et al., 2017)
- Query-efficient Black-box (Ilyas et al., 2017)
- Zeroth Order Optimisation (ZOO) (Chen et al., 2017)
- Decision-based/Boundary Attack (Brendel et al., 2018)
- Adversarial Backdoor Embedding (Tan and Shokri, 2019)
- Clean Label Feature Collision Attack (Shafahi, Huang et al., 2018)
- Backdoor Attack (Gu et al., 2017)
- Poisoning Attack on Support Vector Machines (SVM) (Biggio et al., 2013)
- Functionally Equivalent Extraction (Jagielski et al., 2019)
- Copycat CNN (Correia-Silva et al., 2018)
- KnockoffNets (Orekondy et al., 2018)
- Attribute Inference Black-Box
- Attribute Inference White-Box Lifestyle DecisionTree (Fredrikson et al., 2015)
- Attribute Inference White-Box DecisionTree (Fredrikson et al., 2015)
- Membership Inference Black-Box
- Membership Inference Black-Box Rule-Based
- Label-Only Boundary Distance Attack (Choquette-Choo et al., 2020) (ART 1.5)
- Label-Only Gap Attack (Choquette-Choo et al., 2020) (ART 1.5)
- MIFace (Fredrikson et al., 2015)