An Easy-to-use Knowledge Editing Framework for LLMs.
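For orientation, a knowledge edit pairs a prompt with a new target fact. Below is a minimal sketch assuming the BaseEditor interface shown in EasyEdit's README; the hyperparameter file path and exact keyword names are assumptions and may differ across versions.

```python
# Sketch of a single knowledge edit with an EasyEdit-style BaseEditor.
# The hparams path below is an assumed example, not a guaranteed location.
from easyeditor import BaseEditor, ROMEHyperParams

hparams = ROMEHyperParams.from_hparams('./hparams/ROME/gpt2-xl.yaml')
editor = BaseEditor.from_hparams(hparams)

# Rewrite one fact: after editing, the model should emit the new target.
metrics, edited_model, _ = editor.edit(
    prompts=['The capital of France is'],
    ground_truth=['Paris'],
    target_new=['Lyon'],
    subject=['France'],
)
print(metrics)  # per-edit reliability / generalization / locality scores
```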
🐢 Open-Source Evaluation & Testing for LLMs and ML models
Moonshot - A simple and modular tool to evaluate and red-team any LLM application.
A curated list of awesome academic research, books, code of ethics, data sets, institutes, newsletters, principles, podcasts, reports, tools, regulations and standards related to Responsible AI, Trustworthy AI, and Human-Centered AI.
We make Generative AI accessible to Federal agencies and businesses. Our easy-to-use ezGPT™ platform eliminates the need for in-house expertise and delivers pre-built solutions for rapid innovation. With security and privacy at its core, we unlock the potential of AI. Our innovative chatbot guides users, ensuring a smooth and successful experience.
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
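As a quick illustration of the evasion side, here is a minimal FGSM attack with ART against a scikit-learn classifier; the data is synthetic and the epsilon value is arbitrary.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

# Train a plain scikit-learn model on synthetic data.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Wrap the model for ART and craft fast gradient sign method (FGSM) examples.
classifier = SklearnClassifier(model=model)
attack = FastGradientMethod(estimator=classifier, eps=0.3)  # eps chosen arbitrarily
X_adv = attack.generate(x=X.astype(np.float32))

print(f"clean accuracy {model.score(X, y):.2f} -> "
      f"adversarial accuracy {model.score(X_adv, y):.2f}")
```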
Birhanu Eshete is an Associate Professor of Computer Science at the University of Michigan, Dearborn. His main research focus is trustworthy machine learning, with emphasis on security, safety, privacy, interpretability, fairness, and the dynamics thereof. He also studies online cybercrime and advanced persistent threats (APTs).
Breaking the Trilemma of Privacy, Utility, Efficiency via Controllable Machine Unlearning
The open-sourced Python toolbox for backdoor attacks and defenses.
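The classic BadNets-style attack that such toolboxes automate reduces to data poisoning: stamp a small trigger on a fraction of the training images and relabel them to the attacker's target class. A generic NumPy sketch, not the toolbox's own API:

```python
import numpy as np

def poison_badnets(images, labels, target_class, rate=0.1, trigger_value=1.0, seed=0):
    """BadNets-style poisoning: stamp a 3x3 trigger in the corner of a random
    subset of images (float array (N, H, W, C) in [0, 1]) and flip their
    labels to the attacker-chosen target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images[idx, -3:, -3:, :] = trigger_value  # bottom-right 3x3 trigger patch
    labels[idx] = target_class                # poisoned label
    return images, labels, idx

# A model trained on the poisoned set behaves normally on clean inputs but
# predicts target_class whenever the trigger patch is present.
```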
In the dynamic landscape of medical artificial intelligence, this study explores the vulnerabilities of the Pathology Language-Image Pretraining (PLIP) model, a vision-language foundation model, under targeted attacks such as the projected gradient descent (PGD) adversarial attack.
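For reference, PGD iterates a signed-gradient step and projects back into an L-infinity ball around the clean input; the targeted variant descends on the loss toward an attacker-chosen label instead. A self-contained PyTorch sketch, not the study's code:

```python
import torch

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Untargeted L-infinity PGD with random start: repeat a signed-gradient
    ascent step on the loss, then project back into the eps-ball around x."""
    loss_fn = torch.nn.CrossEntropyLoss()
    x = x.detach()
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.autograd.grad(loss_fn(model(x_adv), y), x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()   # ascent step
        x_adv = x + (x_adv - x).clamp(-eps, eps)       # project to eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)                  # keep a valid image
    return x_adv
```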
Optimization-based deep learning models can provide explainability with output guarantees and certificates of trustworthiness.
Welcome to my Machine Learning repository, where you can find learning materials both from my studies and from various online courses.
[ICML 2024] TrustLLM: Trustworthiness in Large Language Models
A comprehensive toolbox for model inversion attacks and defenses that is easy to get started with.
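At its simplest, model inversion optimizes a synthetic input so a frozen model assigns maximal confidence to a target class, reconstructing what the model associates with that class. A generic PyTorch sketch, not this toolbox's API:

```python
import torch

def invert_class(model, target_class, shape=(1, 3, 32, 32), steps=200, lr=0.1):
    """Gradient-based model inversion: start from noise and optimize the
    input so the frozen model's logit for target_class is maximized."""
    model.eval()
    x = torch.rand(shape, requires_grad=True)   # random starting image
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -model(x)[0, target_class]       # maximize the target logit
        loss.backward()
        opt.step()
        with torch.no_grad():
            x.clamp_(0.0, 1.0)                  # stay in valid image range
    return x.detach()
```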
AutoML system for building trustworthy peptide bioactivity predictors
Trustworthy AI/ML course by Professor Birhanu Eshete, University of Michigan, Dearborn.
A collection of tools and techniques related to the privacy and compliance of AI models.