Welcome to the third and final installment of this series. In part two, we created and connected the backend with Strapi to help us save our meetings and transcriptions. In this part of the series, we will use ChatGPT with Strapi to get insights into the transcribed text with a single click. We will also look at some testing and how to deploy the application to Strapi Cloud.
You can find the outline of this series below:
We will need custom endpoints in Strapi CMS to connect to ChatGPT, so head over to the terminal, change directory into strapi-transcribe-api, and run the command below:
yarn strapi generate
This will kick off the process of generating our custom API. Choose the API option, give it the name transcribe-insight-gpt, and select "no" when asked whether it is for a plugin.
If we check the api directory under src in our code editor, we should see the newly created transcribe-insight-gpt API with its routes, controllers, and services.
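For reference, the generated folder structure should look roughly like this (exact file names can vary slightly between Strapi versions):

src/
  api/
    transcribe-insight-gpt/
      routes/
        transcribe-insight-gpt.js
      controllers/
        transcribe-insight-gpt.js
      services/
        transcribe-insight-gpt.js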
Let's check that it works by uncommenting the code in each file, restarting the server, and heading to the admin dashboard. We want access to this route to be public, so click Settings > Users & Permissions plugin > Roles > Public, scroll down to the transcribe-insight-gpt API, click Select all to make its permissions public, and then click Save in the top right.
If we enter the following into our browser and hit Enter, we should get back an "ok" message.
http://localhost:1337/api/transcribe-insight-gpt
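If you're curious where that "ok" comes from: the boilerplate we just uncommented is roughly the following — a GET route pointing at an exampleAction handler that simply returns "ok". The exact contents can differ slightly between Strapi versions, so treat this as a sketch rather than something to copy verbatim.

// src/api/transcribe-insight-gpt/routes/transcribe-insight-gpt.js
module.exports = {
  routes: [
    {
      method: "GET",
      path: "/transcribe-insight-gpt",
      handler: "transcribe-insight-gpt.exampleAction",
      config: {
        policies: [],
        middlewares: [],
      },
    },
  ],
};

// src/api/transcribe-insight-gpt/controllers/transcribe-insight-gpt.js
module.exports = {
  exampleAction: async (ctx, next) => {
    try {
      // respond with a simple "ok" so we can confirm the route is reachable
      ctx.body = "ok";
    } catch (err) {
      ctx.body = err;
    }
  },
};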
We've confirmed that the API endpoint works; now let's connect it to OpenAI. First, install the OpenAI package: navigate to the root of the Strapi project and run the command below in the terminal:
yarn add openai
Then, in the .env file, add your API key to the OPENAI environment variable:
OPENAI=<OpenAI api key here>
Now, under the transcribe-insight-gpt directory, change the code in the routes directory to the following:
module.exports = {
  routes: [
    {
      method: "POST",
      path: "/transcribe-insight-gpt/exampleAction",
      handler: "transcribe-insight-gpt.exampleAction",
      config: {
        policies: [],
        middlewares: [],
      },
    },
  ],
};
Change the code in the controllers directory to the following:
"use strict"; module.exports = { exampleAction: async (ctx) => { try { const response = await strapi .service("api::transcribe-insight-gpt.transcribe-insight-gpt") .insightService(ctx); ctx.body = { data: response }; } catch (err) { console.log(err.message); throw new Error(err.message); } }, };
And the code in the services directory to the following:
"use strict"; const { OpenAI } = require("openai"); const openai = new OpenAI({ apiKey: process.env.OPENAI, }); /** * transcribe-insight-gpt service */ module.exports = ({ strapi }) => ({ insightService: async (ctx) => { try { const input = ctx.request.body.data?.input; const operation = ctx.request.body.data?.operation; if (operation === "analysis") { const analysisResult = await gptAnalysis(input); return { message: analysisResult, }; } else if (operation === "answer") { const answerResult = await gptAnswer(input); return { message: answerResult, }; } else { return { error: "Invalid operation specified" }; } } catch (err) { ctx.body = err; } }, }); async function gptAnalysis(input) { const analysisPrompt = "Analyse the following text and give me a brief overview of what it means:"; const completion = await openai.chat.completions.create({ messages: [{ role: "user", content: `${analysisPrompt} ${input}` }], model: "gpt-3.5-turbo", }); const analysis = completion.choices[0].message.content; return analysis; } async function gptAnswer(input) { const answerPrompt = "Analyse the following text and give me an answer to the question posed: "; const completion = await openai.chat.completions.create({ messages: [{ role: "user", content: `${answerPrompt} ${input}` }], model: "gpt-3.5-turbo", }); const answer = completion.choices[0].message.content; return answer; }
Here, we pass two parameters to our API: the input text, which will be our transcriptions, and the operation, which will be either "analysis" or "answer" depending on what we want it to do. Each operation has a different prompt for ChatGPT.
We can check the connection to our POST route by pasting the code below into our terminal:
curl -X POST \
  http://localhost:1337/api/transcribe-insight-gpt/exampleAction \
  -H 'Content-Type: application/json' \
  -d '{
    "data": {
      "input": "Comparatively, four-dimensional space has an extra coordinate axis, orthogonal to the other three, which is usually labeled w. To describe the two additional cardinal directions",
      "operation": "analysis"
    }
  }'
And to check the answer operation, you can use the command below:
curl -X POST \
  http://localhost:1337/api/transcribe-insight-gpt/exampleAction \
  -H 'Content-Type: application/json' \
  -d '{
    "data": {
      "input": "I speak without a mouth and hear without ears. I have no body, but I come alive with the wind. What am I?",
      "operation": "answer"
    }
  }'
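In both cases, because the controller wraps the service result in a data object and the service returns a message field, the response should come back shaped roughly like this (the message text itself will vary from run to run):

{
  "data": {
    "message": "...ChatGPT's analysis or answer appears here..."
  }
}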
Great. Now that we have our analysis and answer capabilities available through a Strapi API route, we need to connect them to our frontend code and make sure we can save this information for our meetings and transcriptions.
To maintain a clear separation of concerns, let's create a separate API file for our app's analysis functionality.
In transcribe-frontend, under the api directory, create a new file called analysis.js and paste in the following code:
const baseUrl = 'http://localhost:1337';
const url = `${baseUrl}/api/transcribe-insight-gpt/exampleAction`;

export async function callInsightGpt(operation, input) {
  console.log('operation - ', operation);
  const payload = {
    data: {
      input: input,
      operation: operation,
    },
  };

  try {
    const response = await fetch(url, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(payload),
    });

    const data = await response.json();
    return data;
  } catch (error) {
    console.error('Error:', error);
  }
}
The code above is a POST request that calls the insight API and retrieves the analysis from ChatGPT.
Let's also add a way to update our transcriptions with analyses and answers. Paste the following code into the transcriptions.js file:
export async function updateTranscription(
  updatedTranscription,
  transcriptionId
) {
  const updateURL = `${url}/${transcriptionId}`;
  const payload = {
    data: updatedTranscription,
  };

  try {
    const res = await fetch(updateURL, {
      method: 'PUT',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(payload),
    });

    return await res.json();
  } catch (error) {
    console.error('Error updating meeting:', error);
    throw error;
  }
}
The code above is a PUT request that handles updating the analysis or answer field on each transcription.
Now, let's create a hook where we can use this method. Create a file named useInsightGpt under the hooks directory and paste in the following code:
import { useState } from 'react';
import { callInsightGpt } from '../api/analysis';
import { updateMeeting } from '../api/meetings';
import { updateTranscription } from '../api/transcriptions';

export const useInsightGpt = () => {
  const [loadingAnalysis, setLoading] = useState(false);
  const [transcriptionIdLoading, setTranscriptionIdLoading] = useState('');
  const [analysisError, setError] = useState(null);

  const getAndSaveTranscriptionAnalysis = async (
    operation,
    input,
    transcriptionId
  ) => {
    try {
      setTranscriptionIdLoading(transcriptionId);
      // Get insight analysis / answer
      const { data } = await callInsightGpt(operation, input);
      // Use transcriptionId to save it to the transcription
      const updateTranscriptionDetails =
        operation === 'analysis'
          ? { analysis: data.message }
          : { answer: data.message };
      await updateTranscription(updateTranscriptionDetails, transcriptionId);
      setTranscriptionIdLoading('');
    } catch (e) {
      setTranscriptionIdLoading('');
      setError('Error getting analysis', e);
    }
  };

  const getAndSaveOverviewAnalysis = async (operation, input, meetingId) => {
    try {
      setLoading(true);
      // Get overview insight
      const {
        data: { message },
      } = await callInsightGpt(operation, input);
      // Use meetingId to save it to the meeting
      const updateMeetingDetails = { overview: message };
      await updateMeeting(updateMeetingDetails, meetingId);
      setLoading(false);
    } catch (e) {
      setLoading(false);
      setError('Error getting overview', e);
    }
  };

  return {
    loadingAnalysis,
    transcriptionIdLoading,
    analysisError,
    getAndSaveTranscriptionAnalysis,
    getAndSaveOverviewAnalysis,
  };
};
This hook handles the logic for getting and saving our meeting overview once the meeting has ended. It also handles getting the analysis or answers for our transcriptions and saving them. It keeps track of which transcription we have requested an analysis for, so we can show the loading state for that specific item.
Import the functionality above into the TranscribeContainer and use it. Paste the following updated code into TranscribeContainer.jsx
import React, { useState, useEffect } from "react";
import styles from "../styles/Transcribe.module.css";
import { useAudioRecorder } from "../hooks/useAudioRecorder";
import RecordingControls from "../components/transcription/RecordingControls";
import TranscribedText from "../components/transcription/TranscribedText";
import { useRouter } from "next/router";
import { useMeetings } from "../hooks/useMeetings";
import { useInsightGpt } from "../hooks/useInsightGpt";
import { createNewTranscription } from "../api/transcriptions";

const TranscribeContainer = ({ streaming = true, timeSlice = 1000 }) => {
  const router = useRouter();
  const [meetingId, setMeetingId] = useState(null);
  const [meetingTitle, setMeetingTitle] = useState("");
  const {
    getMeetingDetails,
    saveTranscriptionToMeeting,
    updateMeetingDetails,
    loading,
    error,
    meetingDetails,
  } = useMeetings();
  const {
    loadingAnalysis,
    transcriptionIdLoading,
    analysisError,
    getAndSaveTranscriptionAnalysis,
    getAndSaveOverviewAnalysis,
  } = useInsightGpt();
  const apiKey = process.env.NEXT_PUBLIC_OPENAI_API_KEY;
  const whisperApiEndpoint = "https://api.openai.com/v1/audio/";
  const {
    recording,
    transcribed,
    handleStartRecording,
    handleStopRecording,
    setTranscribed,
  } = useAudioRecorder(streaming, timeSlice, apiKey, whisperApiEndpoint);

  const { ended } = meetingDetails;
  const transcribedHistory = meetingDetails?.transcribed_chunks?.data;

  useEffect(() => {
    const fetchDetails = async () => {
      if (router.isReady) {
        const { meetingId } = router.query;
        if (meetingId) {
          try {
            await getMeetingDetails(meetingId);
            setMeetingId(meetingId);
          } catch (err) {
            console.log("Error getting meeting details - ", err);
          }
        }
      }
    };

    fetchDetails();
  }, [router.isReady, router.query]);

  useEffect(() => {
    setMeetingTitle(meetingDetails.title);
  }, [meetingDetails]);

  const handleGetAnalysis = async (input, transcriptionId) => {
    await getAndSaveTranscriptionAnalysis("analysis", input, transcriptionId);
    // re-fetch meeting details
    await getMeetingDetails(meetingId);
  };

  const handleGetAnswer = async (input, transcriptionId) => {
    await getAndSaveTranscriptionAnalysis("answer", input, transcriptionId);
    // re-fetch meeting details
    await getMeetingDetails(meetingId);
  };

  const handleStopMeeting = async () => {
    // provide meeting overview and save it
    // getMeetingOverview(transcribed_chunks)
    await updateMeetingDetails(
      {
        title: meetingTitle,
        ended: true,
      },
      meetingId,
    );
    // re-fetch meeting details
    await getMeetingDetails(meetingId);
    setTranscribed("");
  };

  const stopAndSaveTranscription = async () => {
    // save transcription first
    let {
      data: { id: transcriptionId },
    } = await createNewTranscription(transcribed);
    // make a call to save the transcription chunk here
    await saveTranscriptionToMeeting(meetingId, meetingTitle, transcriptionId);
    // re-fetch current meeting which should have updated transcriptions
    await getMeetingDetails(meetingId);
    // Stop and clear the current transcription as it's now saved
    await handleStopRecording();
  };

  const handleGoBack = () => {
    router.back();
  };

  if (loading) return <p>Loading...</p>;

  return (
    <div style={{ margin: "20px" }}>
      {ended && (
        <button onClick={handleGoBack} className={styles.goBackButton}>
          Go Back
        </button>
      )}
      {!ended && (
        <button
          className={styles["end-meeting-button"]}
          onClick={handleStopMeeting}
        >
          End Meeting
        </button>
      )}
      {ended ? (
        <p className={styles.title}>{meetingTitle}</p>
      ) : (
        <input
          onChange={(e) => setMeetingTitle(e.target.value)}
          value={meetingTitle}
          type="text"
          placeholder="Meeting title here..."
          className={styles["custom-input"]}
        />
      )}
      <div>
        {!ended && (
          <div>
            <RecordingControls
              handleStartRecording={handleStartRecording}
              handleStopRecording={stopAndSaveTranscription}
            />
            {recording ? (
              <p className={styles["primary-text"]}>Recording</p>
            ) : (
              <p>Not recording</p>
            )}
          </div>
        )}

        {/*Current transcription*/}
        {transcribed && <h1>Current transcription</h1>}
        <TranscribedText transcribed={transcribed} current={true} />

        {/*Transcribed history*/}
        <h1>History</h1>
        {transcribedHistory
          ?.slice()
          .reverse()
          .map((val, i) => {
            const transcribedChunk = val.attributes;
            const text = transcribedChunk.text;
            const transcriptionId = val.id;
            return (
              <TranscribedText
                key={transcriptionId}
                transcribed={text}
                answer={transcribedChunk.answer}
                analysis={transcribedChunk.analysis}
                handleGetAnalysis={() =>
                  handleGetAnalysis(text, transcriptionId)
                }
                handleGetAnswer={() => handleGetAnswer(text, transcriptionId)}
                loading={transcriptionIdLoading === transcriptionId}
              />
            );
          })}
      </div>
    </div>
  );
};

export default TranscribeContainer;
Here, we use the useInsightGpt hook to fetch either the analysis or the answer, depending on what the user requests, and we display a loading indicator beside the transcribed text while the request is in progress.
Paste the following code into TranscribedText.jsx to update the UI accordingly.
import styles from '../../styles/Transcribe.module.css';

function TranscribedText({
  transcribed,
  answer,
  analysis,
  handleGetAnalysis,
  handleGetAnswer,
  loading,
  current,
}) {
  return (
    <div className={styles['transcribed-text-container']}>
      <div className={styles['speech-bubble-container']}>
        {transcribed && (
          <div className={styles['speech-bubble']}>
            <div className={styles['speech-pointer']}></div>
            <div className={styles['speech-text-question']}>{transcribed}</div>
            {!current && (
              <div className={styles['button-container']}>
                <button
                  className={styles['primary-button-analysis']}
                  onClick={handleGetAnalysis}
                >
                  Get analysis
                </button>
                <button
                  className={styles['primary-button-answer']}
                  onClick={handleGetAnswer}
                >
                  Get answer
                </button>
              </div>
            )}
          </div>
        )}
      </div>
      <div>
        <div className={styles['speech-bubble-container']}>
          {loading && (
            <div className={styles['analysis-bubble']}>
              <div className={styles['analysis-pointer']}></div>
              <div className={styles['speech-text-answer']}>Loading...</div>
            </div>
          )}
          {analysis && (
            <div className={styles['analysis-bubble']}>
              <div className={styles['analysis-pointer']}></div>
              <p style={{ margin: 0 }}>Analysis</p>
              <div className={styles['speech-text-answer']}>{analysis}</div>
            </div>
          )}
        </div>
        <div className={styles['speech-bubble-container']}>
          {answer && (
            <div className={styles['speech-bubble-right']}>
              <div className={styles['speech-pointer-right']}></div>
              <p style={{ margin: 0 }}>Answer</p>
              <div className={styles['speech-text-answer']}>{answer}</div>
            </div>
          )}
        </div>
      </div>
    </div>
  );
}

export default TranscribedText;
We can now request analysis and get answers to questions in real-time straight after they have been transcribed.
When the user ends the meeting, we want to provide an overview of everything discussed. Let's add this functionality to the TranscribeContainer component.
In the function handleStopMeeting we can use the method getAndSaveOverviewAnalysis from the useInsightGpt hook:
const handleStopMeeting = async () => {
  // provide meeting overview and save it
  const transcribedHistoryText = transcribedHistory
    .map((val) => `transcribed_chunk: ${val.attributes.text}`)
    .join(', ');

  await getAndSaveOverviewAnalysis(
    'analysis',
    transcribedHistoryText,
    meetingId
  );
  await updateMeetingDetails(
    {
      title: meetingTitle,
      ended: true,
    },
    meetingId
  );
  // re-fetch meeting details
  await getMeetingDetails(meetingId);
  setTranscribed('');
};
Here, we are joining all of the transcribed chunks from the meeting and then sending them to our ChatGPT API for analysis, where they will be saved for our meeting.
Now, let's display the overview once it has been loaded. Add the following code above the RecordingControls:
{loadingAnalysis && <p>Loading Overview...</p>}
{overview && (
  <div>
    <h1>Overview</h1>
    <p>{overview}</p>
  </div>
)}
Then, destructure the overview from the meeting details by adding the following line below our hook declarations:
const { ended, overview } = meetingDetails;
To summarise, we listen to the loading indicator from useInsightGpt and check whether the meeting has an overview; if it does, we display it.
We have a couple of error states that can come from our hooks; let's create a component to handle them.
Create a file called ErrorToast.js under the components directory:
import { useEffect, useState } from 'react';

const ErrorToast = ({ message, duration }) => {
  const [visible, setVisible] = useState(true);

  useEffect(() => {
    const timer = setTimeout(() => {
      setVisible(false);
    }, duration);

    return () => clearTimeout(timer);
  }, [duration]);

  if (!visible) return null;

  return <div className="toast">{message}</div>;
};

export default ErrorToast;
And add the following CSS to globals.css under the styles directory:
.toast {
  position: fixed;
  top: 20px;
  left: 50%;
  transform: translateX(-50%);
  background-color: rgba(255, 0, 0, 0.8);
  color: white;
  padding: 16px;
  border-radius: 8px;
  box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1);
  z-index: 1000;
  transition: opacity 0.5s ease-out;
  opacity: 1;
  display: flex;
  align-items: center;
  justify-content: center;
  text-align: center;
}

.toast-hide {
  opacity: 0;
}
Now, we can use this error component in TranscribeContainer; whenever we encounter an unexpected error from the API, we will show this error toast briefly to notify the user that something went wrong.
Import the ErrorToast at the top of the file and then paste the following code above the Go Back button in the return statement of our component:
{error || analysisError ? (
  <ErrorToast message={error || analysisError} duration={5000} />
) : null}
Now, let's add a test to ensure our hooks are working as we expect them to and to alert us to any breaking changes in the code that might be introduced later. First, add the packages below so we can use jest in our project.
yarn add -D jest jest-environment-jsdom @testing-library/react @testing-library/jest-dom @testing-library/react-hooks
Then create a jest.config.js file in the root of the frontend project and add the following code:
const nextJest = require('next/jest');

const createJestConfig = nextJest({
  dir: './',
});

const customJestConfig = {
  moduleDirectories: ['node_modules', '<rootDir>/'],
  testEnvironment: 'jest-environment-jsdom',
};

module.exports = createJestConfig(customJestConfig);
This just sets up Jest ready to be used in Next.js.
Create a test directory and an index.test.js file with the following code:
import { renderHook, act } from '@testing-library/react-hooks';
import { useInsightGpt } from '../hooks/useInsightGpt';
import { callInsightGpt } from '../api/analysis';
import { updateMeeting } from '../api/meetings';
import { updateTranscription } from '../api/transcriptions';

jest.mock('../api/analysis');
jest.mock('../api/meetings');
jest.mock('../api/transcriptions');

describe('useInsightGpt', () => {
  beforeEach(() => {
    jest.clearAllMocks();
  });

  it('should handle transcription analysis successfully', async () => {
    const mockData = { data: { message: 'Test analysis message' } };
    callInsightGpt.mockResolvedValueOnce(mockData);
    updateTranscription.mockResolvedValueOnce({});

    const { result } = renderHook(() => useInsightGpt());

    await act(async () => {
      await result.current.getAndSaveTranscriptionAnalysis(
        'analysis',
        'input',
        'transcriptionId'
      );
    });

    expect(callInsightGpt).toHaveBeenCalledWith('analysis', 'input');
    expect(updateTranscription).toHaveBeenCalledWith(
      { analysis: 'Test analysis message' },
      'transcriptionId'
    );
    expect(result.current.transcriptionIdLoading).toBe('');
    expect(result.current.analysisError).toBe(null);
  });

  it('should handle overview analysis successfully', async () => {
    const mockData = { data: { message: 'Test overview message' } };
    callInsightGpt.mockResolvedValueOnce(mockData);
    updateMeeting.mockResolvedValueOnce({});

    const { result } = renderHook(() => useInsightGpt());

    await act(async () => {
      await result.current.getAndSaveOverviewAnalysis(
        'overview',
        'input',
        'meetingId'
      );
    });

    expect(callInsightGpt).toHaveBeenCalledWith('overview', 'input');
    expect(updateMeeting).toHaveBeenCalledWith(
      { overview: 'Test overview message' },
      'meetingId'
    );
    expect(result.current.loadingAnalysis).toBe(false);
    expect(result.current.analysisError).toBe(null);
  });

  it('should handle errors in transcription analysis', async () => {
    const mockError = new Error('Test error');
    callInsightGpt.mockRejectedValueOnce(mockError);

    const { result } = renderHook(() => useInsightGpt());

    await act(async () => {
      await result.current.getAndSaveTranscriptionAnalysis(
        'analysis',
        'input',
        'transcriptionId'
      );
    });

    expect(result.current.transcriptionIdLoading).toBe('');
    expect(result.current.analysisError).toBe(
      'Error getting analysis',
      mockError
    );
  });

  it('should handle errors in overview analysis', async () => {
    const mockError = new Error('Test error');
    callInsightGpt.mockRejectedValueOnce(mockError);

    const { result } = renderHook(() => useInsightGpt());

    await act(async () => {
      await result.current.getAndSaveOverviewAnalysis(
        'overview',
        'input',
        'meetingId'
      );
    });

    expect(result.current.loadingAnalysis).toBe(false);
    expect(result.current.analysisError).toBe(
      'Error getting overview',
      mockError
    );
  });
});
Because the hooks use our Strapi API, we need a way to replace the data we're getting back from the API calls. We're using jest.mock to intercept the APIs and send back mock data. This way, we can test our hooks' internal logic without calling the API.
In the first two tests, we mock the API call and return some data, then render our hook and call the correct function. We then check if the correct functions have been called with the correct data from inside the hook. The last two tests just test that errors are handled correctly.
Add the following under scripts in the package.json file:
"test": "jest --watch"
Now open the terminal, navigate to the root directory of the frontend project, and run the following command to check that the tests are passing:
yarn test
You should see a success message like the one below:
As an optional challenge, let's see if you can apply what we did with testing useInsightGpt to testing the other hooks.
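If you want a starting point, here is a minimal sketch (not part of the original series) that applies the same mocking idea one layer down by testing the updateTranscription API helper with a stubbed global fetch; the remaining hooks can be covered in the same spirit as the useInsightGpt tests above. It assumes transcriptions.js exports updateTranscription exactly as shown earlier.

import { updateTranscription } from '../api/transcriptions';

describe('updateTranscription', () => {
  beforeEach(() => {
    // Stub the global fetch so no real network request is made
    global.fetch = jest.fn().mockResolvedValue({
      json: jest.fn().mockResolvedValue({ data: { id: 1 } }),
    });
  });

  it('sends a PUT request with the updated fields and returns the parsed response', async () => {
    const result = await updateTranscription({ analysis: 'Some analysis' }, 1);

    expect(global.fetch).toHaveBeenCalledTimes(1);

    const [calledUrl, options] = global.fetch.mock.calls[0];
    // The helper appends the transcription id to the collection URL
    expect(calledUrl.endsWith('/1')).toBe(true);
    expect(options.method).toBe('PUT');
    expect(JSON.parse(options.body)).toEqual({
      data: { analysis: 'Some analysis' },
    });

    expect(result).toEqual({ data: { id: 1 } });
  });
});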
Here is what our application looks like.
Finally, we have the finished application up and running correctly with some tests. The time has come to deploy our project to Strapi Cloud.
First, navigate to Strapi and click on "cloud" at the top right.
Connect with GitHub.
From the dashboard, click on Create project.
Choose your GitHub account and the correct repo, fill out the display name, and choose the region.
Now, if you have the same file structure as me (which you should if you've been following along), you will just need to add the base directory: click Show advanced settings and enter a base directory of /strapi-transcribe-api. Then add all of the environment variables found in the .env file in the root of the Strapi project.
Once you have added all of these, click on "create project." This will bring you to a loading screen, and then you will be redirected to the build logs; here, you can just wait for the build to finish.
Once it has finished building, you can click on Overview from the top left. This should direct you to the dashboard, where you will find the details of your deployment and the app URL under Overview on the right.
First, click on your app URL, which will open a new tab and direct you to the welcome page of your Strapi app. Then, create a new admin user, which will log you into the dashboard.
This is a new deployment, so it won't have any of the data we saved locally, and it won't have carried across the public permissions we set on the API either. Click Settings > Users & Permissions plugin > Roles > Public, expand and select all on Meeting, Transcribe-insight-gpt, and Transcribed-chunk, and then click Save in the top right.
Once again, let's just check that our deployment was successful by running the below command in the terminal. Please replace https://yourDeployedUrlHere.com with the URL in the Strapi cloud dashboard.
curl -X POST \
  https://yourDeployedUrlHere.com/api/transcribe-insight-gpt/exampleAction \
  -H 'Content-Type: application/json' \
  -d '{
    "data": {
      "input": "I speak without a mouth and hear without ears. I have no body, but I come alive with the wind. What am I?",
      "operation": "answer"
    }
  }'
Now that we have the API deployed and ready to use, let's deploy our frontend with Vercel.
First, we need to change the baseUrl in our API files so it points to our newly deployed Strapi instance.
Add the following variable to .env.local:
NEXT_PUBLIC_STRAPI_URL="your strapi cloud url"
Now go ahead and replace the current value of baseUrl with the following in all three API files:
const baseUrl =
  process.env.NODE_ENV == 'production'
    ? process.env.NEXT_PUBLIC_STRAPI_URL
    : 'http://localhost:1337';
This just checks whether the app is running in production. If so, it uses our deployed Strapi instance; if not, it falls back to localhost. Make sure to push these changes to GitHub.
Now navigate to Vercel and sign up if you don't already have an account.
Now, let's create a new project by continuing with GitHub.
Once you have verified your account, import the correct GitHub repo.
Now we will fill out some configuration details, give the project a name, change the framework preset to Next.js, change the root directory to 'transcribe-frontend', and add the two environment variables from the .env.local file in the Next.js project.
Now click deploy and wait for it to finish. Once deployed, it should redirect you to a success page with a preview of the app.
Now click continue to the dashboard, where you can find information about the app, such as the domain and the deployment logs.
From here, you can click visit to be directed to the app's frontend deployment.
And there you have it! You have now built your transcription app from start to finish. We covered how to accomplish this with several cutting-edge technologies. We used Strapi for the backend CMS and the custom ChatGPT integration, demonstrating how quickly and easily this technology lets you build complex web applications. We also covered some architectural patterns, error handling, and testing in Next.js, and finally, we deployed the backend to Strapi Cloud. I hope you have found this series insightful and that it encourages you to bring your own ideas to life.