PyTorch implementation of projected gradient descent (PGD) adversarial noise attack
Updated Jun 15, 2024 - Python
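For orientation, here is a minimal sketch of an untargeted L-infinity PGD attack in PyTorch. This is a generic illustration, not code from the repository above; the helper name `pgd_attack` and the defaults `eps`, `alpha`, and `steps` are assumptions.

```python
# Minimal untargeted L-infinity PGD sketch (hypothetical helper, not this repo's code).
# Assumes `model` is an nn.Module classifier, `x` is an image batch in [0, 1],
# and `y` holds integer class labels.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Maximize cross-entropy within an eps-ball around x."""
    model.eval()
    # Random start inside the eps-ball (standard PGD initialization).
    x_adv = x.detach() + torch.empty_like(x).uniform_(-eps, eps)
    x_adv = x_adv.clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                    # gradient-sign ascent step
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project back onto eps-ball
            x_adv = x_adv.clamp(0, 1)                              # keep valid pixel range
    return x_adv.detach()
```

The projection back onto the eps-ball after every step is what distinguishes PGD from a single FGSM step; eps = 8/255 with alpha = 2/255 over 10 or more steps are common defaults in the literature, not values taken from this repository.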
Hands-on AI security workshop by GDSC Asia Pacific University – explore the fundamentals of attacking machine learning systems through white-box and black-box techniques. Learn to evade image classifiers and manipulate LLM behavior using real-world tools and methods.