A hybrid neural network protected against adversarial attacks using various defense techniques, including input transformation, randomization, and adversarial training.
A classical-quantum or hybrid neural network with adversarial defense protection
A hybrid neural network model protected against adversarial attacks using either adversarial-training or randomization defense techniques
A classical or convolutional neural network model with adversarial defense protection
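The defenses named above (adversarial training in particular) share one core idea: craft worst-case perturbed inputs during training and fit the model on those instead of the clean ones. A minimal, hedged sketch of FGSM-style adversarial training on a toy logistic-regression model follows; the data, hyperparameters, and model are illustrative stand-ins, not taken from any of the repositories listed here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification data: two Gaussian blobs (illustrative only).
X = np.vstack([rng.normal(-1.0, 0.5, (50, 2)), rng.normal(1.0, 0.5, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

w = np.zeros(2)
b = 0.0
lr, eps = 0.1, 0.1  # learning rate and FGSM perturbation budget

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(200):
    # FGSM: push each input in the sign of the loss gradient w.r.t. x.
    p = sigmoid(X @ w + b)
    grad_x = np.outer(p - y, w)        # d(BCE)/dx per sample, since dL/dz = p - y
    X_adv = X + eps * np.sign(grad_x)  # worst-case inputs inside the eps-ball

    # Standard logistic-regression update, but on the adversarial examples.
    p_adv = sigmoid(X_adv @ w + b)
    w -= lr * (X_adv.T @ (p_adv - y)) / len(y)
    b -= lr * np.mean(p_adv - y)

# Robust accuracy: evaluate on freshly crafted FGSM examples.
p = sigmoid(X @ w + b)
X_test_adv = X + eps * np.sign(np.outer(p - y, w))
acc = np.mean((sigmoid(X_test_adv @ w + b) > 0.5) == y)
```

The randomization defenses mentioned in the descriptions above follow the same loop but instead apply a random input transformation (e.g. noise or resizing) at inference time; the adversarial-training variant shown here hardens the weights themselves.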
StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training with Large Speech Language Models
My fundamental topics - research on Adversarial machine learning
A PyTorch Based Deep Learning Quick Develop Framework. One-Stop for train/predict/server/demo
Official repository for the paper: "On Adversarial Training without Perturbing all Examples", Accepted at ICLR 2024
The official code of the IEEE S&P 2024 paper "Why Does Little Robustness Help? A Further Step Towards Understanding Adversarial Transferability". We study how to train surrogate models to boost transfer attacks.
On The Impact of Adversarial Training and Transferability on CAN Intrusion Detection Systems
[ICML 2022] Source code for "A Closer Look at Smoothness in Domain Adversarial Training"
Arabic Synonym BERT-based Adversarial Examples for Text Classification
Counter speech classification using adversarial training
Trustworthy Artificial Intelligence Course Notebooks, 2023
Adversarial Style for Image Classification
LAMPAT: Low-rank Adaptation Multilingual Paraphrasing using Adversarial Training (AAAI'24)
Adversarial Attack and Defense in Deep Ranking, T-PAMI, 2024
PyTorch implementation of adversarial training and defenses [Fantastic Robustness Measures: The Secrets of Robust Generalization, NeurIPS 2023].
Code for the paper "Revisiting Adversarial Training for ImageNet: Architectures, Training and Generalization across Threat Models"
A Comprehensive Study on Cloud-Based Model Interpretability, Accountability, and Privacy in Machine Learning with Resilience to Adversarial Attacks