Yet another thing I needed to figure out recently: hooking my Assembly.ai transcription engine up to a frontend where the speakers were loud enough to feed straight back into the microphone.
The first step is to request access to the microphone with echo cancellation enabled. This feature is built into most modern browsers and helps reduce the feedback from your speakers.
```javascript
async function getMicrophoneStream() {
  const constraints = {
    audio: {
      echoCancellation: true,
      noiseSuppression: true,
      autoGainControl: true
    }
  };

  try {
    const stream = await navigator.mediaDevices.getUserMedia(constraints);
    return stream;
  } catch (err) {
    console.error('Error accessing the microphone', err);
    return null;
  }
}
```
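Browser support for these constraints varies, so it can be worth confirming what was actually applied. Here's a minimal sketch (using the `getMicrophoneStream()` helper above and the standard `MediaStreamTrack.getSettings()` call) that logs the settings the browser ended up with:

```javascript
async function logAppliedAudioSettings() {
  const stream = await getMicrophoneStream();
  if (!stream) return;

  const [track] = stream.getAudioTracks();
  // getSettings() reports the values the browser actually applied, which may
  // differ from what we requested (e.g. echoCancellation: false on some devices)
  console.log('Applied settings:', track.getSettings());
}
```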
Next, we set up the Web Audio API to process the audio stream. This involves creating an AudioContext and connecting various nodes, including a DynamicsCompressorNode.
```javascript
async function setupAudioProcessing(stream) {
  const audioContext = new AudioContext();
  const source = audioContext.createMediaStreamSource(stream);

  // Create a DynamicsCompressorNode to even out the level of the incoming audio
  const compressor = audioContext.createDynamicsCompressor();
  compressor.threshold.setValueAtTime(-50, audioContext.currentTime); // Example settings
  compressor.knee.setValueAtTime(40, audioContext.currentTime);
  compressor.ratio.setValueAtTime(12, audioContext.currentTime);
  compressor.attack.setValueAtTime(0, audioContext.currentTime);
  compressor.release.setValueAtTime(0.25, audioContext.currentTime);

  // Route the processed audio into a MediaStreamAudioDestinationNode rather than
  // audioContext.destination: playing the microphone straight back out of the
  // speakers is exactly the feedback loop we are trying to avoid.
  const destination = audioContext.createMediaStreamDestination();
  source.connect(compressor);
  compressor.connect(destination);

  return { audioContext, source, compressor, processedStream: destination.stream };
}
```
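The `processedStream` returned above is a regular MediaStream, so anything that accepts one can consume it. As a quick illustration (the `audio/webm` mime type is an assumption and may need adjusting per browser), here's how you might capture a few seconds of the processed audio with MediaRecorder:

```javascript
async function recordProcessedAudio(durationMs = 5000) {
  const stream = await getMicrophoneStream();
  if (!stream) return;

  const { processedStream } = await setupAudioProcessing(stream);
  const recorder = new MediaRecorder(processedStream, { mimeType: 'audio/webm' });
  const chunks = [];

  recorder.ondataavailable = (event) => chunks.push(event.data);
  recorder.onstop = () => {
    const blob = new Blob(chunks, { type: 'audio/webm' });
    console.log('Captured processed audio:', blob);
    // From here the blob could be uploaded to a transcription backend.
  };

  recorder.start();
  setTimeout(() => recorder.stop(), durationMs);
}
```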
Finally, we integrate our audio processing setup with the Web Speech API to perform speech recognition.
```javascript
async function startSpeechRecognition() {
  const stream = await getMicrophoneStream();
  if (!stream) return;

  const { audioContext, source, compressor } = await setupAudioProcessing(stream);

  // Note: SpeechRecognition captures the microphone on its own; the Web Audio
  // graph above runs alongside it and only affects audio you route elsewhere
  // (for example, a stream you send to a transcription backend).
  const recognition = new (window.SpeechRecognition || window.webkitSpeechRecognition)();
  recognition.continuous = true;
  recognition.interimResults = true;

  recognition.onresult = (event) => {
    for (let i = event.resultIndex; i < event.results.length; i++) {
      const transcript = event.results[i][0].transcript;
      console.log('Transcript:', transcript);
    }
  };

  recognition.onerror = (event) => {
    console.error('Speech recognition error', event.error);
  };

  recognition.start();

  // Resume the audio context if the browser suspended it pending a user gesture
  if (audioContext.state === 'suspended') {
    audioContext.resume();
  }

  return recognition;
}

// Start the speech recognition process
startSpeechRecognition();
```
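If you're sending audio to a hosted engine such as Assembly.ai rather than using the Web Speech API, you generally stream raw PCM over a WebSocket yourself. The sketch below is only a rough outline under assumptions: `socketUrl`, authentication, and the message format are placeholders for your provider's actual real-time protocol, and it uses the deprecated-but-compact ScriptProcessorNode purely for brevity (production code would use an AudioWorklet):

```javascript
function streamProcessedAudio({ audioContext, compressor }, socketUrl) {
  // socketUrl is a placeholder -- use your transcription provider's real endpoint
  const socket = new WebSocket(socketUrl);

  // Tap the audio after the compressor without sending it to the speakers:
  // a zero-gain node keeps the graph running but stays silent.
  const processor = audioContext.createScriptProcessor(4096, 1, 1);
  const mute = audioContext.createGain();
  mute.gain.value = 0;
  compressor.connect(processor);
  processor.connect(mute);
  mute.connect(audioContext.destination);

  processor.onaudioprocess = (event) => {
    if (socket.readyState !== WebSocket.OPEN) return;

    // Convert Float32 samples in [-1, 1] to 16-bit PCM, a common wire format
    const input = event.inputBuffer.getChannelData(0);
    const pcm = new Int16Array(input.length);
    for (let i = 0; i < input.length; i++) {
      const s = Math.max(-1, Math.min(1, input[i]));
      pcm[i] = s < 0 ? s * 0x8000 : s * 0x7fff;
    }
    socket.send(pcm.buffer);
  };

  socket.onmessage = (event) => {
    // The message shape depends entirely on the provider; log it for now
    console.log('Transcription message:', event.data);
  };

  // Return a cleanup function
  return () => {
    processor.disconnect();
    mute.disconnect();
    socket.close();
  };
}
```

You would call it with the objects returned by `setupAudioProcessing()`, then invoke the returned cleanup function when you're done transcribing.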
Hopefully you found this useful.
Happy coding!
Tim.