Evaluation framework for oncology foundation models (FMs)
Official implementation of paper "Meta Prompting for AI Systems" (https://arxiv.org/abs/2311.11482)
Making large AI models cheaper, faster and more accessible
This repository contains the Python package for Helical
A task generation and model evaluation system.
Foundation model benchmarking tool. Run any model on Amazon SageMaker and benchmark for performance across instance type and serving stack options.
A curated list of classic artificial intelligence papers
An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast.
A curated list of foundation models for vision and language tasks
The official evaluation suite and dynamic data release for MixEval.
SaprotHub: Making Protein Modeling Accessible to All Biologists
Multi-Agent VQA: Exploring Multi-Agent Foundation Models on Zero-Shot Visual Question Answering
A project webpage for the EHRMamba paper.
Overview of Japanese LLMs (日本語LLMまとめ)
This is the official repository of the paper "Multi-Modal and Multi-Agent Systems Meet Rationality: A Survey"
Semantic alignment of astronomical data with natural language using multi-modal models. (Jax) Code associated with https://arxiv.org/abs/2403.08851.
Chronos: Pretrained (Language) Models for Probabilistic Time Series Forecasting
Flask based REST API for experimenting with multi-agent systems that support data analysis and visualization
Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting