
LambdaPi: GPT-Driven Serverless Code Plugin

LambdaPi is a serverless runtime environment designed for code generated by Large Language Models (LLMs), complete with a user interface for writing, testing, and deploying the code.

As LLMs continue to revolutionize software development, LambdaPi aims to provide a robust open-source ecosystem for building, testing, and deploying APIs (and eventually other types of software). By harnessing the power of GPT models, LambdaPi automatically containerizes and deploys code in a serverless fashion, with containers created and destroyed with each request.

LambdaPi features a shared Redis database and file storage for communication, storage, and input/output needs. Functions can be called using JSON or query parameters and can return a variety of formats, including JSON, plain text, and files (PNG, CSV).

LambdaPi supports a wide range of runtimes, including Python, Node, Bash, Ruby, Shell, C++, and others of your choice (just create a Docker config). Functions are designed as straightforward programs, with parameters passed as command-line arguments. Output is returned to the user via stdout, ensuring a streamlined process:

command line parameters as input
    -> program execution
    -> printed result

This simplicity caters to both humans and LLMs, making it easier for anyone to create and deploy code. With support for multiple runtimes and an emphasis on simplicity, LambdaPi enables developers and LLMs alike to create powerful, efficient code with minimal barriers to entry.
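
For example, a minimal Python function under this contract (the file name and logic are illustrative, not from the repo) just reads its arguments and prints its result:

# add.py: a complete LambdaPi-style function (illustrative sketch)
# Input arrives as command-line arguments; output is whatever is printed.
import sys

a, b = int(sys.argv[1]), int(sys.argv[2])
print(a + b)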

View the system prompt configuration in config.yaml.example

🎬 Demo

Demo video: output_video.mp4

✨ Features

  • 📚 Runtimes:
    • Python
    • Node.js
    • C++
    • Kali Linux
    • Ruby
  • 📁 File storage
  • 🗃️ Redis storage
  • 📤 JSON, plain text, or file output support
  • 🖋️ Easy scripting and command line execution

📋 Requirements

With Docker (recommended setup)

Here, the main app runs inside Docker, and all functions run as Docker-in-Docker (DinD) containers.

  • Docker

Without Docker

  • Python 3.x
  • Redis

🛠️ Setup

Docker (recommended)

Clone the repository:

git clone https://github.com/jb41/lambdapi

Navigate to the project directory:

cd lambdapi

Copy the example configuration:

cp config.yaml.example config.yaml

Add your GPT API key to config.yaml (obtain it from https://platform.openai.com/account/api-keys)

Building Kali Linux images can be time-consuming due to the installation of the kali-linux-headless package. To speed up the setup process, you can skip the Kali runtime entirely by removing its config: rm -rf runtimes/kali.yaml.

Execute the setup script, which will set up the database, configure directories, generate Dockerfiles, and build the corresponding Docker images:

docker-compose -f docker-compose.setup.yml up

Without Docker

Install dependencies:

apt-get install redis docker-ce docker-ce-cli containerd.io

Install required libraries:

pip3 install -r requirements.txt

Run

python3 setup.py

🎮 Usage

Docker (recommended)

Run

docker-compose up --build

Visit http://localhost:8000/index.html to view the dashboard.

Without Docker

Run

python3 main.py

Visit http://localhost:8000/index.html to view the dashboard.

📄 Documentation

File Storage

All files saved in the current directory (/app) are returned in the response. These files, along with the container, are deleted after the request is completed. If you want to preserve a file for future use or access existing files, store or read them from the /data directory.
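
A sketch of the two cases (file names and contents are illustrative):

# report.py: illustrative sketch of the two storage locations
# Written to the working directory (/app): returned with the response,
# then deleted along with the container.
with open("report.csv", "w") as f:
    f.write("a,b\n1,2\n")

# Written to /data: persists across requests.
with open("/data/cache.txt", "w") as f:
    f.write("kept for future calls\n")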

Redis

The Redis URL is stored in the REDIS_HOST environment variable. Redis is shared among all containers and operates on the default port.
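
For example, a Python function might connect like this (a sketch, assuming the redis-py package is available in the runtime):

import os
import redis

# REDIS_HOST is provided by the environment; Redis listens on the default port 6379.
r = redis.Redis(host=os.environ["REDIS_HOST"], port=6379)
r.set("greeting", "hello")
print(r.get("greeting").decode())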

Parameters

You can pass parameters as query parameters or a JSON body. However, they are forwarded to the script as command-line arguments. For instance, a JSON body like this:

{
  "a": 1337,
  "b": 42,
  "foo": "bar"
}

will be passed to a Python script (called script.py here for illustration) as:

python3 script.py "1337" "42" "bar"

In the UI (/function-details.html), you can provide parameters in the input text area. Separate each parameter with a newline.

Please note that there are limitations in preserving parameter names and handling more complex data structures, such as strings with spaces, hashes, and others. Contributions to address these issues are welcome.

🤖 GPT-3.5 vs GPT-4

LambdaPi is compatible with both GPT-3.5 and GPT-4 models, but performs significantly better with GPT-4, often requiring little or no manual input. GPT-3.5 can be used with some manual adjustments or with simpler scripts.

💡 Examples

| Runtime | Description | URL |
| --- | --- | --- |
| Ruby | Open a website, do a search, return results | http://localhost:8000/function-details.html?id=1 |
| Python | Generate an image with text from the user | http://localhost:8000/function-details.html?id=2 |
| Python | Save input data to Redis | http://localhost:8000/function-details.html?id=3 |
| Python | Read from Redis | http://localhost:8000/function-details.html?id=4 |
| Python | Generate a graph from numbers provided in CSV | http://localhost:8000/function-details.html?id=5 |
| Python | Download a website and return its links | http://localhost:8000/function-details.html?id=6 |
| Node | Add a caption to an image | http://localhost:8000/function-details.html?id=7 |
| Node | Request a joke from a jokes API and return it | http://localhost:8000/function-details.html?id=8 |
| Kali Linux | Scan ports with nmap | http://localhost:8000/function-details.html?id=9 |
| Kali Linux | Search for vulnerabilities with nikto (rather slow, ~300 seconds) | http://localhost:8000/function-details.html?id=10 |
| Kali Linux | Bruteforce an MD5 password with john | http://localhost:8000/function-details.html?id=11 |
| C++ | Read data from Redis | http://localhost:8000/function-details.html?id=12 |
| C++ | Calculate the first 100 Fibonacci numbers with Boost | http://localhost:8000/function-details.html?id=13 |

🚀 Running in production

To optimize performance in a production environment, it's important to run multiple workers. When running LambdaPi using Docker, the application will automatically utilize a number of workers equal to the number of CPU cores. You can configure this setting for your specific Docker instance to ensure efficient operation.
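
For example, if main.py is served by a Gunicorn/Uvicorn-style server (an assumption; check the actual entrypoint), such servers typically honour the WEB_CONCURRENCY environment variable, which could be pinned in a compose override (the service name here is hypothetical):

# docker-compose.override.yml: illustrative sketch, assuming the server
# reads WEB_CONCURRENCY; the service name "app" is hypothetical.
services:
  app:
    environment:
      - WEB_CONCURRENCY=8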

🤝 Contributing

We warmly welcome your contributions and appreciate your support in enhancing the project. There are several areas where you can help improve LambdaPi:

  • Frontend expansion: Generate HTML files with frontend JavaScript and set up a /public directory for hosting files (similar to the /data implementation).
  • Chaining: Develop unit tests with an LLM, execute the code, and verify the output before presenting it to users.
  • Dockerfile optimization: Reconsider the use of Python for generating Dockerfiles, and explore potential improvements in managing Docker containers (including handling, setup, and execution).
  • Additional runtimes: Incorporate more programming languages and technologies, such as Go or Rust, to further broaden the range of supported runtimes.
  • Alternative Linux distributions: Evaluate other Linux distributions that may be beneficial for the project (currently using Kali Linux because of its preinstalled tooling).
  • UX/UI enhancements: Improve the user interface by rethinking the layout of elements like the code editor, prompt, LLM text response, parameters, and results.
  • User management: Consider implementing user registration and authentication features.
  • HTTP method restrictions: Limit access to specific HTTP methods, instead of allowing all methods (GET, POST, PUT, PATCH, DELETE, OPTIONS, HEAD).
  • HTTP Basic Auth: Implement endpoint protection using HTTP Basic Authentication.
  • Cronjobs: Enable the execution of scripts using cron jobs.
  • SQLite3 integration: Add support for SQLite3 databases, providing an alternative to the existing file system for those who require a relational database.
  • Chaining on errors: Run /completions with the code and the error in order to fix it.
  • Fix formatting for non-JSON responses: in /function-details.html, non-JSON responses containing \r\n could be formatted better.
