My personal website.
Explainable Debugger for Black-box Machine Learning Models
Explanation-guided boosting of machine learning evasion attacks.
Trustworthy AI/ML course by Professor Birhanu Eshete, University of Michigan, Dearborn.
This repo contains the code, figures, and datasets for the paper "U-Trustworthy Models: Reliability, Competence, and Confidence in Decision-Making."
DSPLab@UMich-Dearborn Website
Official implementation of NeurIPS 2023 paper "Trade-off Between Efficiency and Consistency for Removal-based Explanations" (https://arxiv.org/abs/2210.17426)
KDD 2023 tutorial "Trustworthy Transfer Learning: Transferability and Trustworthiness"
In the dynamic landscape of medical artificial intelligence, this study explores the vulnerabilities of the Pathology Language-Image Pretraining (PLIP) model, a vision-language foundation model, under targeted attacks such as the PGD adversarial attack.
Birhanu Eshete is an Associate Professor of Computer Science at the University of Michigan, Dearborn. His main research focus is trustworthy machine learning, with emphasis on security, safety, privacy, interpretability, fairness, and the dynamics thereof. He also studies online cybercrime and advanced persistent threats (APTs).
Welcome to my Machine Learning repository, where you can find learning materials both from my studies and from various online courses.
A School for All Seasons on Trustworthy Machine Learning
Code for the paper "Approximating full conformal prediction at scale via influence functions"
A tool for comparing the predictions of any text classifiers
A list of research papers on explainable machine learning.
Trustworthy AI method based on Dempster-Shafer theory - application to fetal brain 3D T2w MRI segmentation
TRIAGE: Characterizing and auditing training data for improved regression (NeurIPS 2023)
Repository for the NeurIPS 2023 paper "Beyond Confidence: Reliable Models Should Also Consider Atypicality"
Morphence: An implementation of a moving target defense against adversarial example attacks demonstrated for image classification models trained on MNIST and CIFAR10.
A project to train your model from scratch, or fine-tune a pretrained model, using the losses provided in this library to improve out-of-distribution detection and uncertainty estimation performance. Calibrate your model to produce better uncertainty estimates, and detect out-of-distribution data using the defined score type and threshold.