Process Automation: Speech to Text and Summarization with ACA

Open in GitHub Codespaces Open in Dev Containers

This sample creates a web-based app that allows workers at a company called Contoso Manufacturing to report issues via text or speech. Audio input is transcribed to text and then summarized to highlight important information and specify the department the report should be sent to.

Features

This project template provides the following features:

Architecture Diagram

Azure account requirements

In order to deploy and run this example, you'll need:

Opening the project

You have a few options for setting up this project. The easiest way to get started is GitHub Codespaces, since it will set up all the tools for you, but you can also set it up locally.

GitHub Codespaces

  1. You can run this template virtually by using GitHub Codespaces. The button will open a web-based VS Code instance in your browser:

    Open in GitHub Codespaces

  2. Open a terminal window.

  3. Sign in to your Azure account:

    azd auth login
  4. Provision the resources and deploy the code:

    azd up

    This project uses gpt-3.5-turbo, which may not be available in all Azure regions. Check for up-to-date region availability and select a region accordingly during deployment.

  5. Install the necessary Python packages:

    cd src
    pip install -r requirements.txt
    

Once the above steps are completed, you can jump straight to exploring the sample.

VS Code Dev Containers

A related option is VS Code Dev Containers, which will open the project in your local VS Code using the Dev Containers extension:

  1. Start Docker Desktop (install it if not already installed)

  2. Open the project:

    Open in Dev Containers

  3. In the VS Code window that opens, once the project files show up (this may take several minutes), open a terminal window.

Once you've completed these steps, jump to Deployment.

Local Environment

Prerequisites

Initializing the project

Create a new folder and switch to it in the terminal, then run this command to download the project code:

```shell
azd init -t summarization-openai-python-promptflow
```
Note that this command will initialize a git repository, so you do not need to clone this repository.

Deployment

Once you've opened the project in Dev Containers, or locally, you can deploy it to Azure.

  1. Sign in to your Azure account:

    azd auth login

    If you have any issues with that command, you may also want to try azd auth login --use-device-code.

  2. Provision the resources and deploy the code:

    azd up
  3. Install the necessary Python packages:

    cd src
    pip install -r requirements.txt
    

    This project uses gpt-3.5-turbo, which may not be available in all Azure regions. Check for up-to-date region availability and select a region accordingly during deployment.

Exploring the sample

Understanding the prompty file

This sample repository contains a summarize prompty file you can explore. In this sample, the prompt tells the model to summarize the reports given by a worker in a specific format.

The prompty file contains the following:

  • The name, description and authors of the prompt
  • model configuration: details about the LLM model, including:
    • api type: chat or completion
    • configuration: connection type (azure_openai or openai) and environment variables
    • model parameters: max_tokens, temperature and response_format (text or json_object)
  • inputs: the content input from the user, where each input should have a type and can also have a default value
  • outputs: where the output should have a type such as string
  • sample section: a sample of the inputs to be provided
  • The prompt: in this sample we add a system message as the prompt, with context and details about the expected format. We also add a user message at the bottom of the file, which consists of the reported issue as text from the user.

If you ran the provisioning step above correctly, all of the variables should already be set for you. You can edit the prompt to see what changes this makes to the summary created.
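If you want to iterate on the prompt quickly, you can also load and run a prompty directly from Python instead of going through the full flow. The sketch below assumes the promptflow Prompty class; the file path and input key are illustrative, so match them to the prompty file and inputs that actually ship in this repository.

```python
# Minimal sketch: load a prompty file and run it against a sample report.
# The file path and input key are illustrative, not taken from this repo.
from promptflow.core import Prompty

# Load the prompty definition (front matter + prompt template).
summarize = Prompty.load(source="summarizationapp/summarize.prompty")

report = (
    "The brake rotor on part ABC123 is overheating and glazing the pads "
    "after three to four laps of late, aggressive braking."
)

# Invoke the prompty with the report text and print the generated summary.
print(summarize(problem=report))
```

Re-running this after editing the system message is a quick way to see how the prompt shapes the summary.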

Testing the sample

This repository contains sample data so you can test the project end to end. To run the project, pass in a reported issue to be summarized, either as a .wav file or as a string of text. The data/audio-data/ folder contains sample audio files for you to use, or you can use the example string shown below. The following commands run the project locally with promptflow from your terminal.

Testing with sample audio data. Make sure you are in the src directory:

pf flow test --flow summarizationapp --inputs problem="data/audio-data/issue0.wav"

Testing with sample text data:

pf flow test --flow summarizationapp --inputs problem="I need to open a problem report for part number ABC123. The brake rotor is overheating causing glazing on the pads. We track temperature above 24 degrees Celsius and we are seeing this after three to four laps during runs when the driver is braking late and aggressively into corners. The issue severity is to be prioritized as a 2. This is impacting the front brake assembly EFG234"

To understand how the code works, look through the summarize.py file.
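At a high level, the flow first transcribes the audio with Azure AI Speech and then feeds the transcript to the summarization prompt. The transcription step might look roughly like the sketch below, which uses the Azure Speech SDK; the environment variable names are assumptions for illustration, and the repository's real implementation is in summarize.py.

```python
# Illustrative sketch of the speech-to-text step. Environment variable names
# are assumptions; see summarize.py for the repository's actual implementation.
import os

import azure.cognitiveservices.speech as speechsdk


def transcribe(wav_path: str) -> str:
    """Transcribe a .wav file to text with Azure AI Speech."""
    speech_config = speechsdk.SpeechConfig(
        subscription=os.environ["AZURE_SPEECH_KEY"],
        region=os.environ["AZURE_SPEECH_REGION"],
    )
    audio_config = speechsdk.audio.AudioConfig(filename=wav_path)
    recognizer = speechsdk.SpeechRecognizer(
        speech_config=speech_config, audio_config=audio_config
    )
    result = recognizer.recognize_once()
    if result.reason != speechsdk.ResultReason.RecognizedSpeech:
        raise RuntimeError(f"Transcription failed: {result.reason}")
    return result.text


if __name__ == "__main__":
    # The resulting transcript is what gets passed to the summarize prompty.
    print(transcribe("data/audio-data/issue0.wav"))
```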

Costs

Pricing may vary per region and usage. Exact costs cannot be estimated. You may try the Azure pricing calculator for the resources below:

  • Azure Container Apps: Pay-as-you-go tier. Costs based on vCPU and memory used. Pricing
  • Azure OpenAI: Standard tier, GPT and Ada models. Pricing per 1K tokens used, and at least 1K tokens are used per question. Pricing
  • Azure Monitor: Pay-as-you-go tier. Costs based on data ingested. Pricing

Security Guidelines

This template uses Managed Identity for authenticating to the Azure services used (Azure OpenAI, Azure PostgreSQL Flexible Server).
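As an illustration of what key-less authentication looks like from application code, the sketch below obtains a Microsoft Entra ID token with azure-identity and passes it to the Azure OpenAI client; the endpoint variable, API version and deployment name are assumptions rather than values taken from this template.

```python
# Sketch of key-less (Entra ID / Managed Identity) auth to Azure OpenAI.
# The endpoint env var, api_version and deployment name are assumptions.
import os

from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

# DefaultAzureCredential picks up a managed identity when running in Azure
# and falls back to developer credentials (e.g. az / azd login) locally.
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    azure_ad_token_provider=token_provider,
    api_version="2024-02-15-preview",
)

response = client.chat.completions.create(
    model="gpt-35-turbo",  # your deployment name
    messages=[
        {"role": "user", "content": "Summarize: the brake rotor is overheating."}
    ],
)
print(response.choices[0].message.content)
```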

Additionally, we have added a GitHub Action that scans the infrastructure-as-code files and generates a report containing any detected issues. To ensure continued best practices in your own repository, we recommend that anyone creating solutions based on our templates ensure that the GitHub secret scanning setting is enabled.

Resources

Code of Conduct

This project has adopted the Microsoft Open Source Code of Conduct.

For more information, see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
