ElevenLabs is an AI audio platform that lets your Noah app generate spoken audio and voice-driven experiences. It powers text-to-speech (TTS), speech-to-text (STT), and voice assistants so users can listen, speak, and interact without relying only on the screen. With ElevenLabs connected to Noah, your app can:
  • Convert text into high-quality synthesized speech
  • Transcribe recorded audio into text
  • Build voice assistants that read chats or responses aloud
  • Offer narrated content (stories, articles, briefings, lessons)
  • Improve accessibility with hands-free, audio-first experiences
Screenshot of the Noah Integrations panel showing ElevenLabs alongside other integrations like OpenAI, CoinGecko, Stripe, Google AdSense, and Google Analytics

Common use cases and example apps

| Example app | Description |
| --- | --- |
| Voice-enabled assistant | Accept user input (text or voice), generate a response, and play it back using a natural ElevenLabs voice. |
| Daily audio briefings | Compile metrics or summaries each morning and deliver them as narrated audio inside your Noah app. |
| Narrated content / podcasts | Turn written content (stories, articles, updates) into narrated audio episodes users can listen to on demand. |
| Learning & instruction | Read lessons, explanations, or exercises aloud to support audio-first learning and guidance. |
| Guided wellness / fitness | Deliver spoken prompts and instructions for routines, workouts, or mindfulness sessions. |
| Accessibility features | Read on-screen content aloud for users who prefer or require audio instead of text-only interfaces. |

How ElevenLabs works in Noah

Under the hood, the ElevenLabs integration in Noah typically uses:
  • Supabase Edge Functions (or a similar serverless backend) to call ElevenLabs APIs for:
    • Text-to-Speech (/elevenlabs-tts)
    • Speech-to-Text (/elevenlabs-transcribe)
  • Frontend helpers that call those functions from your app:
    • A TTS helper that returns an audio URL
    • An STT helper that sends recorded audio and returns text
  • An optional voice assistant hook that reads chat messages aloud using ElevenLabs voices.
Noah can scaffold much of this for you based on the ElevenLabs integration template, so you mostly configure keys and describe what you want in chat.

Prerequisites

Before you connect ElevenLabs in Noah, make sure you have:
  • An ElevenLabs account and API key
  • Supabase Cloud enabled for your Noah project
  • Basic environment variables set up for Supabase and ElevenLabs
Enabling Supabase Cloud and adding your ElevenLabs API key as a secret are covered in the steps below. In addition, your project should include Supabase environment variables (for example in .env):
VITE_SUPABASE_URL=https://your-project.supabase.co
VITE_SUPABASE_ANON_KEY=your-anon-key
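If either variable is missing, helper calls tend to fail in confusing ways at runtime. As a minimal sketch (the `readSupabaseEnv` helper is illustrative, not part of the template), you can validate both values once at startup:

```typescript
// Illustrative helper: validate the two Supabase env vars at startup.
// In a Vite app you would pass `import.meta.env` as the argument.
export function readSupabaseEnv(env: Record<string, string | undefined>) {
  const url = env.VITE_SUPABASE_URL;
  const anonKey = env.VITE_SUPABASE_ANON_KEY;
  if (!url || !anonKey) {
    throw new Error(
      "Missing VITE_SUPABASE_URL or VITE_SUPABASE_ANON_KEY in .env",
    );
  }
  return { url, anonKey };
}
```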

Step 1: Create an ElevenLabs account and API key

To create an ElevenLabs API key:
  1. Go to ElevenLabs and sign up or log in.
  2. In Developers → API keys, click Create Key.
  3. Give the key a descriptive name, for example Noah integration.
  4. (Recommended) Turn on Restrict Key, set a credit limit, and choose which features the key can use.
  5. Copy the generated API key and keep it somewhere safe.
Your ElevenLabs API key works like a password. Keep it secret and never commit it to your repo or paste it into public places.

Step 2: Add your ElevenLabs API key in Noah

Next, store the key securely in your Noah project:
1. Open your project settings
  • In Noah, open your project.
  • Go to Settings (or the Environment / Secrets section).
2. Add ELEVENLABS_API_KEY
  • Create a new secret named ELEVENLABS_API_KEY.
  • Paste your ElevenLabs API key value.
  • Save changes so backend functions can read this value at runtime.
3. Confirm Supabase configuration
  • Ensure VITE_SUPABASE_URL and VITE_SUPABASE_ANON_KEY are set.
  • These are used by your frontend helpers when calling Supabase Edge Functions.

Backend: Edge Functions for TTS and STT

The ElevenLabs integration template uses two Supabase Edge Functions:
  • supabase/functions/elevenlabs-tts – calls ElevenLabs Text-to-Speech API and returns audio.
  • supabase/functions/elevenlabs-transcribe – calls ElevenLabs Speech-to-Text API and returns transcription.
At a high level, each function:
  • Reads ELEVENLABS_API_KEY from the environment.
  • Validates input (text for TTS, audio file for STT).
  • Calls the appropriate ElevenLabs endpoint.
  • Returns audio (audio/mpeg) or JSON transcription.
  • Applies CORS headers so you can call it from your Noah app.
If you want to change how these Edge Functions behave (for example, default voices, models, or error handling), describe the changes in your Noah project chat and let Noah update the Supabase function code for you.
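As a rough sketch of what the TTS function does internally, the request it builds looks like this. The endpoint path and `xi-api-key` header are the real ElevenLabs Text-to-Speech API; the helper name, default model, and validation are illustrative assumptions, not the template's exact code:

```typescript
// Sketch of the request the elevenlabs-tts Edge Function sends upstream.
// `buildTtsRequest` and the default model ID are illustrative.
interface TtsRequest {
  url: string;
  headers: Record<string, string>;
  body: string;
}

export function buildTtsRequest(
  text: string,
  voiceId: string,
  apiKey: string, // read from ELEVENLABS_API_KEY in the Edge Function
  modelId = "eleven_multilingual_v2",
): TtsRequest {
  if (!text.trim()) throw new Error("text is required");
  return {
    // ElevenLabs TTS endpoint takes the voice ID in the path.
    url: `https://api.elevenlabs.io/v1/text-to-speech/${voiceId}`,
    headers: {
      "xi-api-key": apiKey,
      "Content-Type": "application/json",
      Accept: "audio/mpeg",
    },
    body: JSON.stringify({ text, model_id: modelId }),
  };
}
```

The Edge Function would `fetch` this request and stream the `audio/mpeg` response back to the browser with CORS headers applied.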

Frontend helpers and voice assistant

On the frontend, the template exposes small helpers that talk to those functions.
  • Text-to-Speech helper (conceptually):
    • Sends { text, voiceId } to the TTS Edge Function.
    • Receives an audio blob and returns a blob: URL you can play in an <audio> element.
  • Speech-to-Text helper:
    • Uploads an audio Blob via FormData to the STT Edge Function.
    • Receives JSON with a text field containing the transcription.
  • Voice assistant hook:
    • Accepts a list of messages (user / assistant).
    • Uses the TTS helper to read them out loud in sequence.
    • Exposes controls like isReading and stopReading.
To hook these helpers into your existing chat or UI, ask Noah in chat to add or adjust the relevant components (for example, “Add a play button next to each chat message that uses the ElevenLabs TTS helper.”). Noah will make the code changes on your behalf.
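A minimal sketch of those two helpers, assuming the Edge Functions are deployed at Supabase's standard `/functions/v1/<name>` paths. The helper names, payload shapes, and file name here are assumptions based on the description above, not the template's exact code:

```typescript
// Illustrative frontend helpers for the two Edge Functions.
export function functionUrl(supabaseUrl: string, name: string): string {
  // Supabase serves Edge Functions under /functions/v1/<name>.
  return `${supabaseUrl.replace(/\/+$/, "")}/functions/v1/${name}`;
}

// TTS helper: send text + voice, get back a playable blob: URL.
export async function textToSpeech(
  supabaseUrl: string,
  anonKey: string,
  text: string,
  voiceId: string,
): Promise<string> {
  const res = await fetch(functionUrl(supabaseUrl, "elevenlabs-tts"), {
    method: "POST",
    headers: {
      Authorization: `Bearer ${anonKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ text, voiceId }),
  });
  if (!res.ok) throw new Error(`TTS failed: ${res.status}`);
  // The function returns audio/mpeg; expose it for an <audio> element.
  return URL.createObjectURL(await res.blob());
}

// STT helper: upload recorded audio, get back the transcription text.
export async function transcribe(
  supabaseUrl: string,
  anonKey: string,
  audio: Blob,
): Promise<string> {
  const form = new FormData();
  form.append("file", audio, "recording.webm");
  const res = await fetch(functionUrl(supabaseUrl, "elevenlabs-transcribe"), {
    method: "POST",
    headers: { Authorization: `Bearer ${anonKey}` },
    body: form,
  });
  if (!res.ok) throw new Error(`STT failed: ${res.status}`);
  const { text } = await res.json();
  return text;
}
```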

Voice IDs

Here are some example ElevenLabs voices you can use as defaults:
| Voice | ID |
| --- | --- |
| George (default) | JBFqnCBsd6RMkjVDRZzb |
| Rachel | 21m00Tcm4TlvDq8ikWAM |
| Adam | pNInz6obpgDQGcFmaJgB |
| Bella | EXAVITQu4vr4xnSDxMaL |
You can replace these IDs with your own voices from the ElevenLabs dashboard and pass them into your TTS helper or voice assistant.
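One convenient pattern (illustrative, not part of the template) is to keep these IDs in a small typed map and pass the resolved ID into your TTS helper:

```typescript
// Illustrative voice map using the example IDs above; replace the values
// with voice IDs from your own ElevenLabs dashboard.
export const VOICES = {
  george: "JBFqnCBsd6RMkjVDRZzb", // default
  rachel: "21m00Tcm4TlvDq8ikWAM",
  adam: "pNInz6obpgDQGcFmaJgB",
  bella: "EXAVITQu4vr4xnSDxMaL",
} as const;

export type VoiceName = keyof typeof VOICES;

export function voiceId(name: VoiceName): string {
  return VOICES[name];
}
```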

Tips and troubleshooting

  • Double-check that ELEVENLABS_API_KEY is set in your Noah project secrets.
  • Redeploy or restart any Edge Functions so they pick up the latest environment values.
  • Verify your network and that the ElevenLabs API is reachable from your environment.
  • Check Supabase Edge Function logs for errors calling the ElevenLabs API.
  • Make sure the requested voice ID and model are valid in your ElevenLabs account.
  • Confirm that you are sending an audio file (Blob) with valid content and a supported format.
  • Check the ElevenLabs STT model configuration and quotas in your account.
  • In ElevenLabs, configure credit limits and key restrictions for the API key used by Noah.
  • Use separate API keys for development and production environments where appropriate.