Eternal is an experimental platform for machine learning models and workflows.
Updated Jun 8, 2024 - Go
The simplest way to serve AI/ML models in production
A fast, easy-to-use, production-ready inference server for computer vision supporting deployment of many popular model architectures and fine-tuned models.
Unofficial Go bindings for the Hugging Face Inference API
The Qualcomm® AI Hub Models are a collection of state-of-the-art machine learning models optimized for performance (latency, memory etc.) and ready to deploy on Qualcomm® devices.
This repository lets you train a state-of-the-art deep learning model through a GUI with little to no configuration needed. No-code training with TensorFlow has never been so easy.
🔀 Bedrock Proxy Endpoint ⇢ Spin up your own custom OpenAI API server endpoint for easy AWS Bedrock inference (using standard baseUrl and apiKey params)
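A proxy like this speaks the standard OpenAI chat-completions wire format, so any OpenAI-style client can target it simply by swapping the base URL. A minimal sketch, assuming the proxy listens on `localhost:3000` and exposes `/v1/chat/completions` (both placeholder assumptions, as is the Bedrock model ID):

```typescript
// Hypothetical base URL and key; whether the key is validated is up to the proxy.
const baseUrl = "http://localhost:3000/v1";
const apiKey = "placeholder-key";

// Assemble an OpenAI-style chat request body.
function buildChatRequest(model: string, prompt: string) {
  return { model, messages: [{ role: "user", content: prompt }] };
}

// POST the request to the proxy and return the assistant's reply.
async function chat(prompt: string): Promise<string> {
  const res = await fetch(`${baseUrl}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    // The model ID below is an illustrative Bedrock identifier, not verified here.
    body: JSON.stringify(
      buildChatRequest("anthropic.claude-3-haiku-20240307-v1:0", prompt)
    ),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

Because the request and response shapes match the OpenAI API, existing OpenAI SDKs can usually be pointed at such an endpoint by overriding only their base-URL setting.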
The Qualcomm® AI Hub apps are a collection of state-of-the-art machine learning models optimized for performance (latency, memory etc.) and ready to deploy on Qualcomm® devices.
Computer Vision API V2 - FastAPI & ONNX Models
the small distributed language model toolkit; fine-tune state-of-the-art LLMs anywhere, rapidly
Text-to-image generation with the Stable Diffusion XL model, powered by the Hugging Face Inference API
Text components powering LLMs & SLMs for geniusrise framework
Train and predict with pre-trained deep learning models through a GUI (web app). No more endless parameters, no more data preprocessing.
An open source framework for Retrieval-Augmented Generation (RAG) that uses semantic search to retrieve relevant results and generate human-readable conversational responses with the help of an LLM (Large Language Model).
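The retrieval half of such a RAG pipeline reduces to ranking documents by embedding similarity before prompting the LLM. A minimal sketch using toy pre-computed vectors (a real system would call an embedding model; all names here are illustrative):

```typescript
// Cosine similarity between two embedding vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the texts of the k documents most similar to the query embedding;
// these would then be passed to the LLM as context for the final answer.
function topK(
  query: number[],
  docs: { text: string; embedding: number[] }[],
  k: number
): string[] {
  return docs
    .map((d) => ({ text: d.text, score: cosine(query, d.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map((d) => d.text);
}
```

Retrieved texts are typically concatenated into the LLM prompt ("Answer using the context below"), which is what grounds the conversational response in the indexed documents.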
A tool for testing different large language models without code.
MLOps library for LLM deployment w/ the vLLM engine on RunPod's infra.