
Setup
To get started, you’ll need a Layercode account and a voice agent. If you haven’t done so yet, follow our Getting Started Guide. Then clone the example repo and install dependencies.

Disable React Strict Mode for development: React Strict Mode renders components twice in development, which causes the Layercode voice agent hook to initialize twice. This results in duplicate voice agent sessions and can cause issues like hearing the voice agent speak twice.
- If you’re using Create React App, remove <React.StrictMode> from your src/index.js or src/index.tsx.
- If you’re using Vite, ensure React.StrictMode is not wrapping your app in src/main.tsx.

Project structure
This project uses Vite for fast React development, Tailwind CSS for styling, and TypeScript.

How it works
Connect to a Layercode voice agent
We use the React SDK’s useLayercodeAgent hook, which handles all the complexity required for real-time, low-latency, two-way voice agent interactions.
Here’s a simplified example of how to use the React SDK in a React application:
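A minimal sketch of such a component follows. The hook name and its returned status and agentAudioAmplitude values come from this guide, but the option names (agentId, authorizeSessionEndpoint, onDataMessage) are illustrative assumptions; check the React SDK reference for the exact signature.

```tsx
import { useLayercodeAgent } from "@layercode/react-sdk";

export default function VoiceAgent() {
  // Option names below are illustrative assumptions; see the Layercode
  // React SDK reference for the authoritative signature.
  const { status, agentAudioAmplitude } = useLayercodeAgent({
    agentId: "your-agent-id",                   // from the Layercode Dashboard
    authorizeSessionEndpoint: "/api/authorize", // your backend's authorize endpoint
    onDataMessage: (msg: unknown) => console.log("data message", msg), // optional
  });

  return (
    <div>
      <p>Connection: {status}</p>
      {/* agentAudioAmplitude can drive a speaking animation here */}
    </div>
  );
}
```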
The useLayercodeAgent hook accepts:
- Your agent ID (from the Layercode Dashboard)
- The endpoint to authorize the client session (see Authorize Client Session)
- An optional callback function for handling data messages
The useLayercodeAgent hook will:
- Make a request to your authorize session endpoint to create a new session and return the client session key.
- Establish a WebSocket connection to Layercode (using the client session key)
- Capture microphone audio from the user and stream it to the Layercode voice agent for transcription
- (At this stage, Layercode calls your backend webhook to generate a response, then converts the response from text to speech)
- Play back audio of the voice agent’s response to the user in their browser, as it’s generated
The useLayercodeAgent hook returns an object with the following properties:
- status: The connection status of the voice agent. You can show this to the user to indicate the connection status.
- agentAudioAmplitude: The amplitude of the audio from the voice agent. You can use this to drive an animation when the voice agent is speaking.
Components
AudioVisualization
The AudioVisualization component visualizes the audio from the voice agent. It uses the agentAudioAmplitude value returned from the useLayercodeAgent hook to drive the height of the audio bars with a simple animation.
src/ui/AudioVisualization.tsx
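The repo’s implementation isn’t reproduced here, but the core idea, mapping agentAudioAmplitude to per-bar heights, can be sketched as a pure function. The bar count, height range, and weights below are illustrative choices, not values from the example repo.

```typescript
// Map agentAudioAmplitude (assumed to be roughly 0..1) to per-bar heights
// in pixels. The constants below are illustrative, not taken from the repo.
const BAR_COUNT = 5;
const MIN_HEIGHT = 4;  // bar height (px) when the agent is silent
const MAX_HEIGHT = 48; // bar height (px) at full amplitude

export function barHeights(amplitude: number): number[] {
  // Clamp so a noisy amplitude value can't produce negative or huge bars.
  const clamped = Math.min(Math.max(amplitude, 0), 1);
  return Array.from({ length: BAR_COUNT }, (_, i) => {
    // Weight the middle bars more for a simple "equalizer" look:
    // weights are 0.5, 0.75, 1, 0.75, 0.5 across the five bars.
    const weight = 1 - Math.abs(i - (BAR_COUNT - 1) / 2) * 0.25;
    return MIN_HEIGHT + clamped * weight * (MAX_HEIGHT - MIN_HEIGHT);
  });
}
```

On each render, the component can apply these values as the height style of five bar divs, so the bars animate as the amplitude changes.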
ConnectionStatusIndicator
The ConnectionStatusIndicator component displays the connection status of the voice agent, using the status value returned from the useLayercodeAgent hook.
src/ui/ConnectionStatusIndicator.tsx
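The component itself isn’t reproduced here; at its core it maps the hook’s status value to something user-facing. A minimal sketch, noting that the set of status strings below is an assumption (check the SDK reference for the authoritative values):

```typescript
// Map the status value returned by useLayercodeAgent to a user-facing label
// and indicator color. The status strings handled here are assumptions, not
// a confirmed list from the SDK.
export function statusIndicator(status: string): { label: string; color: string } {
  switch (status) {
    case "connected":
      return { label: "Connected", color: "green" };
    case "connecting":
      return { label: "Connecting…", color: "amber" };
    case "error":
      return { label: "Connection error", color: "red" };
    default:
      // Covers "disconnected" and any states this sketch doesn't know about.
      return { label: "Disconnected", color: "gray" };
  }
}
```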
VoiceAgentPushToTalk (optional)
Because the useLayercodeAgent hook handles all of the audio streaming and playback, the microphone button is usually just a visual aid and doesn’t implement any logic; a simple microphone icon inside a circle will suffice in most cases.
Layercode does support ‘push-to-talk’ turn taking as an alternative to automatic turn taking (read more about turn taking). When using ‘push-to-talk’ turn taking, holding down and releasing the MicrophoneButton must send websocket messages telling Layercode that the user has started and finished talking. In this example, we provide an alternative VoiceAgentPushToTalk component that, together with the MicrophoneButtonPushToTalk component, handles this logic.
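The start/finish logic can be sketched as a small press/release tracker that decides which control message (if any) to send for each UI event. The message names below are placeholders, not the actual Layercode protocol, and the real sending happens through the SDK’s websocket connection.

```typescript
// Decide which push-to-talk control message (if any) to send for each UI
// event. The message names are placeholders -- the real websocket message
// types are defined by the Layercode protocol.
export type PttMessage = "user_turn_started" | "user_turn_finished" | null;

export class PushToTalkTracker {
  private talking = false;

  // Called when the user presses and holds the microphone button.
  press(): PttMessage {
    if (this.talking) return null; // already talking; avoid duplicate starts
    this.talking = true;
    return "user_turn_started";
  }

  // Called when the user releases the button.
  release(): PttMessage {
    if (!this.talking) return null; // never started; nothing to finish
    this.talking = false;
    return "user_turn_finished";
  }
}
```

Guarding against duplicate press or release events keeps the agent from seeing unbalanced start/finish signals if the UI fires repeated events.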
To use this mode, edit src/App.tsx to use the VoiceAgentPushToTalk component instead of the VoiceAgent component. Then, in your Layercode Dashboard, click Edit in the Transcription section of your voice agent and set Turn Taking to Push to Talk.