The full code is available on GitHub, and a working demo is linked at the end.

Key takeaways:

- Use the Web Speech API to enable voice search in a React application and improve user interaction.
- Extract the speech recognition logic into a reusable custom hook, useVoice.
- Fetch book results through another custom hook (useBookFetch), keeping data fetching separate from the UI.
- Browser support for the Web Speech API is limited, so make sure you use a compatible browser (check MDN for the latest compatibility information); a quick feature-detection check is shown below.
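Before wiring anything up, it helps to confirm that the browser actually exposes speech recognition. The following is a minimal sketch of such a check; it assumes the constructor is vendor-prefixed in Chromium-based browsers, which is the case at the time of writing:

// Feature-detect the speech recognition constructor (vendor-prefixed in Chromium browsers).
const SpeechRecognitionCtor =
  window.SpeechRecognition || window.webkitSpeechRecognition;

if (!SpeechRecognitionCtor) {
  console.warn("The Web Speech API is not supported in this browser.");
}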
A simple example of using the Web Speech API:

const SpeechRecognition = webkitSpeechRecognition;
const speech = new SpeechRecognition();
speech.onresult = (event) => {
  console.log(event);
};
speech.start();

This code instantiates SpeechRecognition, attaches an onresult listener, and starts listening to the microphone. Whenever speech is recognised, the onresult handler receives a SpeechRecognitionEvent whose results property contains the transcribed text.
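For example, the transcribed text can be pulled out of the event like this; a small sketch of the same handler that logs only the most recent result:

speech.onresult = (event) => {
  // results holds one SpeechRecognitionResult per utterance; take the newest one
  // and its first (most confident) alternative to get the transcribed text.
  const transcript = event.results[event.results.length - 1][0].transcript;
  console.log(transcript);
};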
This basic code can be run in the Chrome DevTools console or in a plain JavaScript file. Let's integrate it into a React application.
Using Web Speech in React:

First, create a new React application:

npx create-react-app book-voice-search
cd book-voice-search
npm start

Then replace the default App.js with the following, which incorporates the Web Speech API. This enhanced component manages the listening state (isListening) and the transcribed text (text), handles the microphone click (listen), and registers the onresult listener inside a useEffect:
// App.js
import React, { useState, useEffect } from "react";
import "./index.css";
import Mic from "./microphone-black-shape.svg"; // Import your microphone image

let speech;
if (window.webkitSpeechRecognition) {
  // The browser supports the (prefixed) Web Speech API
  const SpeechRecognition = webkitSpeechRecognition;
  speech = new SpeechRecognition();
  speech.continuous = true; // Enable continuous listening
} else {
  speech = null;
}

const App = () => {
  const [isListening, setIsListening] = useState(false);
  const [text, setText] = useState("");

  // Toggle listening when the microphone is clicked
  const listen = () => {
    setIsListening(!isListening);
    if (isListening) {
      speech.stop();
    } else {
      speech.start();
    }
  };

  useEffect(() => {
    if (!speech) return;
    // Store the latest transcript whenever a result comes in
    speech.onresult = (event) => {
      setText(event.results[event.results.length - 1][0].transcript);
    };
  }, []);

  // ... (JSX that renders the microphone button and the transcribed text)
};

export default App;
Reusable custom React voice hook:

To improve code reuse, extract the speech recognition logic into a custom hook, useVoice.js:
// useVoice.js
import { useState, useEffect } from "react";

// ... (SpeechRecognition setup: same as in App.js above)

const useVoice = () => {
  // ... (text/isListening state and the listen function: same as in App.js above)

  useEffect(() => {
    // ... (onresult event listener: same as in App.js above)
  }, []);

  // Expose everything a component needs, plus a support flag
  return { text, isListening, listen, voiceSupported: speech !== null };
};

export { useVoice };
Then modify App.js to consume this hook instead of talking to the Web Speech API directly. This keeps the component focused on rendering and promotes code reuse.
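A minimal sketch of what the updated component might look like, assuming useVoice.js sits next to App.js; the JSX below is illustrative rather than the exact markup from the original repository:

// App.js (sketch): the component now only consumes the useVoice hook.
import React from "react";
import "./index.css";
import Mic from "./microphone-black-shape.svg"; // microphone image, as before
import { useVoice } from "./useVoice";

const App = () => {
  const { text, isListening, listen, voiceSupported } = useVoice();

  // Fall back gracefully when the browser lacks the Web Speech API.
  if (!voiceSupported) {
    return <p>Voice recognition is not supported in this browser.</p>;
  }

  return (
    <div className="app">
      <h2>Book Voice Search</h2>
      {/* Clicking the microphone toggles listening on and off */}
      <img src={Mic} alt="microphone" onClick={listen} />
      <p>{isListening ? "Listening..." : "Click the microphone to start"}</p>
      <p>{text}</p>
    </div>
  );
};

export default App;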
Book voice search functionality:

Create another custom hook, useBookFetch.js, to fetch book results for the transcribed search term.
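The hook below is a minimal sketch. It assumes the Open Library search endpoint (https://openlibrary.org/search.json) as the data source and a simple title/author response shape; the original demo may use a different API:

// useBookFetch.js (sketch): fetches books matching the spoken search term.
import { useState, useEffect } from "react";

const useBookFetch = (searchTerm) => {
  const [books, setBooks] = useState([]);
  const [isFetching, setIsFetching] = useState(false);

  useEffect(() => {
    if (!searchTerm) return;

    const fetchBooks = async () => {
      setIsFetching(true);
      try {
        const response = await fetch(
          `https://openlibrary.org/search.json?q=${encodeURIComponent(searchTerm)}`
        );
        const data = await response.json();
        // Keep only the fields the UI needs: title and first author.
        setBooks(
          (data.docs || []).slice(0, 10).map((doc) => ({
            title: doc.title,
            author: doc.author_name ? doc.author_name[0] : "Unknown",
          }))
        );
      } catch (err) {
        console.error(err);
        setBooks([]);
      } finally {
        setIsFetching(false);
      }
    };

    fetchBooks();
  }, [searchTerm]);

  return { books, isFetching };
};

export { useBookFetch };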
Finally, use useBookFetch in App.js to display the search results.
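A sketch of the final App.js combining both hooks; again, the markup is illustrative, and the books/isFetching values come from the useBookFetch sketch above rather than the original demo:

// App.js, final version (sketch): combines useVoice and useBookFetch.
import React from "react";
import "./index.css";
import Mic from "./microphone-black-shape.svg";
import { useVoice } from "./useVoice";
import { useBookFetch } from "./useBookFetch";

const App = () => {
  const { text, isListening, listen, voiceSupported } = useVoice();
  // Re-fetch whenever the transcribed search term changes.
  const { books, isFetching } = useBookFetch(text);

  if (!voiceSupported) {
    return <p>Voice recognition is not supported in this browser.</p>;
  }

  return (
    <div className="app">
      <h2>Book Voice Search</h2>
      <img src={Mic} alt="microphone" onClick={listen} />
      <p>{isListening ? "Listening..." : "Click the microphone and say a book title or author"}</p>
      <p>Search term: {text}</p>
      {isFetching && <p>Loading books...</p>}
      <ul>
        {books.map((book, index) => (
          <li key={index}>
            {book.title} by {book.author}
          </li>
        ))}
      </ul>
    </div>
  );
};

export default App;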
Working demo: [insert CodeSandbox or similar demo link]
Conclusion:

This example demonstrates the power and simplicity of the Web Speech API for adding voice interaction to a React application. Keep browser compatibility and the potential accuracy limitations of speech recognition in mind. The full code is available on GitHub.