Cryserrrrr/openAi-interface

OpenAI Interface

Table of Contents

  1. Introduction
  2. Setup
  3. Usage
  4. Features
  5. Objective
  6. Contributing
  7. Issues and Feedback

Introduction

Welcome to the OpenAI Interface repository! This project provides a user-friendly interface to interact with various OpenAI models, including GPT-4, GPT-3, GPT-Vision, Text-to-Speech, Speech-to-Text, and DALL-E 3. You can seamlessly integrate these models into a conversation, making it easy to explore the capabilities of OpenAI's powerful technologies.

(Screenshot: the gpt-interface UI)

The application is built using Next.js for the frontend and Styled Components for styling. To get started, follow the setup instructions below.

Setup

  1. Clone the repository to your local machine:
     git clone https://github.com/your-username/openAi-interface.git
  2. Navigate to the project directory:
     cd openAi-interface
  3. Create a .env.local file in the root of the project and add your OpenAI API key:
     NEXT_PUBLIC_OPENAI_API_KEY=your-api-key-here

     Replace your-api-key-here with your actual OpenAI API key. Note that Next.js embeds NEXT_PUBLIC_ variables in the client bundle, so this key is visible to anyone using the app; use a key you are comfortable exposing in local development.
  4. Install dependencies:
     npm install
  5. Start the development server:
     npm run dev
  6. Open your browser and visit http://localhost:3000 to access the OpenAI Interface.
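Inside the app, the key configured in .env.local would typically be read through process.env. A minimal sketch, assuming a helper like the following (the function name is hypothetical, not from this repository):

```typescript
// Hypothetical helper illustrating how the key from .env.local is read.
// In Next.js, NEXT_PUBLIC_ variables are inlined at build time and are
// therefore also visible in the browser bundle.
export function getApiKey(): string {
  const key = process.env.NEXT_PUBLIC_OPENAI_API_KEY;
  if (!key) {
    throw new Error("Missing NEXT_PUBLIC_OPENAI_API_KEY in .env.local");
  }
  return key;
}
```

Failing fast with a clear error message makes a missing or misnamed variable obvious at startup instead of surfacing later as a failed API call.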

Usage

The OpenAI Interface allows you to create dynamic conversations using various OpenAI models. Explore the different functionalities and experiment with combining multiple models in a single conversation.
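One way to picture a multi-model conversation (the types below are illustrative assumptions, not the repository's actual code) is a message thread in which each assistant reply is tagged with the model that produced it:

```typescript
// Sketch of a conversation shared by several models (assumed types).
type ModelId = "gpt-4" | "gpt-3.5-turbo" | "gpt-4-vision-preview";

interface Message {
  role: "user" | "assistant";
  content: string;
  model?: ModelId; // model that generated an assistant message
}

interface Conversation {
  messages: Message[];
}

// Append an assistant reply tagged with the model that produced it.
function addReply(conv: Conversation, model: ModelId, content: string): Conversation {
  return { messages: [...conv.messages, { role: "assistant", content, model }] };
}

const conv = addReply(
  { messages: [{ role: "user", content: "Describe a sunset." }] },
  "gpt-4",
  "A warm gradient of orange and violet..."
);
```

Tagging replies with their source model keeps the thread coherent when you switch models mid-conversation, since the UI can show which model said what.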

Features

  • GPT-4, GPT-3, GPT-Vision: Easily switch between different OpenAI models.

  • Text-to-Speech and Speech-to-Text: Convert text to speech and vice versa. 🚧

  • DALL-E 3 Integration: Generate creative and unique images with DALL-E 3. 🚧

  • Multi-Model Conversations: Combine different models in the same conversation. 🚧 (currently limited to GPT models)

Objective

The primary goal of this repository is to provide a comprehensive environment built around the OpenAI API, enabling seamless transitions between models so users can pick whichever one fits their needs. Planned future work includes video processing with GPT-Vision, additional API call features, and other improvements to the overall user experience.

Contributing

If you would like to contribute to the project, please check the contribution guidelines.

Issues and Feedback

If you encounter any issues or have feedback, please open an issue.