
Build A Transcription App with Strapi, ChatGPT, & Whisper: Part 3


Welcome to the third and final part of this series. In Part 2, we created the backend and connected it to Strapi to help save our meetings and transcriptions. In this part of the series, we will use ChatGPT together with Strapi to gain insight into the transcribed text at the click of a button. We will also look at some testing and how to deploy the application to Strapi Cloud.

Outline

You can find the outline for this series below:

  • Part 1: Implementing audio recording and the user interface
  • Part 2: Integrating Strapi CMS and saving transcriptions
  • Part 3: Implementing the connection to ChatGPT and deploying to Strapi Cloud

Create a Custom API Endpoint in Strapi

We need a custom endpoint in the Strapi CMS to connect with ChatGPT, so navigate to your terminal, change directory into strapi-transcribe-api, and run the following command:

yarn strapi generate

Doing so will start the process of generating a custom API. Choose the api option, name it transcribe-insight-gpt, and select "no" when asked whether this is for a plugin.

Make the Strapi API Public

In the src directory, if we check the api directory in our code editor, we should see the newly created transcribe-insight-gpt API with its routes, controllers, and services directories.

Let's check that it works by uncommenting the code in each of these files, restarting the server, and navigating to the admin dashboard. We want public access to this route, so click Settings > Users & Permissions Plugin > Roles > Public, then scroll down, select all on the transcribe-insight-gpt API to make its permissions public, and click Save in the top right.

If we enter the following into the browser and hit Enter, we should receive an "ok" message.

http://localhost:1337/api/transcribe-insight-gpt
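For reference, the files created by yarn strapi generate contain commented-out example code, and uncommenting it is what produces this "ok" response. At the time of writing, the generated route and controller look roughly like the sketch below once uncommented (the exact template may vary between Strapi versions):

// routes/transcribe-insight-gpt.js - generated example route (approximate)
module.exports = {
  routes: [
    {
      method: "GET",
      path: "/transcribe-insight-gpt",
      handler: "transcribe-insight-gpt.exampleAction",
      config: {
        policies: [],
        middlewares: [],
      },
    },
  ],
};

// controllers/transcribe-insight-gpt.js - generated example controller (approximate)
module.exports = {
  exampleAction: async (ctx, next) => {
    try {
      // The default example simply returns "ok", which is what we see in the browser
      ctx.body = "ok";
    } catch (err) {
      ctx.body = err;
    }
  },
};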

Using ChatGPT with Strapi

Now that we've confirmed the API endpoint is working, let's connect it to OpenAI. First, install the OpenAI package: navigate to the root directory of the Strapi project and run the following command in the terminal:

yarn add openai

Then, in the .env file, add your API key to an OPENAI environment variable:

OPENAI=<OpenAI api key here>

Now, inside the transcribe-insight-gpt directory, change the code in the routes directory to the following:

module.exports = {
  routes: [
    {
      method: "POST",
      path: "/transcribe-insight-gpt/exampleAction",
      handler: "transcribe-insight-gpt.exampleAction",
      config: {
        policies: [],
        middlewares: [],
      },
    },
  ],
};

Change the code in the controllers directory to the following:

"use strict";

module.exports = {
  exampleAction: async (ctx) => {
    try {
      const response = await strapi
        .service("api::transcribe-insight-gpt.transcribe-insight-gpt")
        .insightService(ctx);

      ctx.body = { data: response };
    } catch (err) {
      console.log(err.message);
      throw new Error(err.message);
    }
  },
};

And the code in the services directory to the following:

"use strict";
const { OpenAI } = require("openai");
const openai = new OpenAI({
  apiKey: process.env.OPENAI,
});

/**
 * transcribe-insight-gpt service
 */

module.exports = ({ strapi }) => ({
  insightService: async (ctx) => {
    try {
      const input = ctx.request.body.data?.input;
      const operation = ctx.request.body.data?.operation;

      if (operation === "analysis") {
        const analysisResult = await gptAnalysis(input);

        return {
          message: analysisResult,
        };
      } else if (operation === "answer") {
        const answerResult = await gptAnswer(input);

        return {
          message: answerResult,
        };
      } else {
        return { error: "Invalid operation specified" };
      }
    } catch (err) {
      ctx.body = err;
    }
  },
});

async function gptAnalysis(input) {
  const analysisPrompt =
    "Analyse the following text and give me a brief overview of what it means:";
  const completion = await openai.chat.completions.create({
    messages: [{ role: "user", content: `${analysisPrompt} ${input}` }],
    model: "gpt-3.5-turbo",
  });

  const analysis = completion.choices[0].message.content;

  return analysis;
}

async function gptAnswer(input) {
  const answerPrompt =
    "Analyse the following text and give me an answer to the question posed: ";
  const completion = await openai.chat.completions.create({
    messages: [{ role: "user", content: `${answerPrompt} ${input}` }],
    model: "gpt-3.5-turbo",
  });

  const answer = completion.choices[0].message.content;

  return answer;
}

Here, we pass two parameters to our API: the input text (which will be our transcription) and the operation (either analysis or answer, depending on what we want it to do). Each operation has a different prompt for ChatGPT.

Confirm ChatGPT Is Working

We can check the connection to the POST route by pasting the following into the terminal:

curl -X POST \
  http://localhost:1337/api/transcribe-insight-gpt/exampleAction \
  -H 'Content-Type: application/json' \
  -d '{
    "data": {
        "input": "Comparatively, four-dimensional space has an extra coordinate axis, orthogonal to the other three, which is usually labeled w. To describe the two additional cardinal directions",
        "operation": "analysis"
    }
}'

And to check the answer operation, you can use the following command:

curl -X POST \
  http://localhost:1337/api/transcribe-insight-gpt/exampleAction \
  -H 'Content-Type: application/json' \
  -d '{
    "data": {
        "input": "I speak without a mouth and hear without ears. I have no body, but I come alive with the wind. What am I?",
        "operation": "answer"
    }
}'

Awesome! Now that we have the analysis and answer functionality in our Strapi API route, we need to connect it to our frontend code and make sure we can save this information for our meetings and transcriptions.

Connect the Custom API for Analysis in Next.js

To maintain a clear separation of concerns, let's create a separate API file for our application's analysis functionality.

Get the Analysis from ChatGPT

In the api directory of transcribe-frontend, create a new file called analysis.js and paste in the following code:

const baseUrl = 'http://localhost:1337';
const url = `${baseUrl}/api/transcribe-insight-gpt/exampleAction`;

export async function callInsightGpt(operation, input) {
  console.log('operation - ', operation);
  const payload = {
    data: {
      input: input,
      operation: operation,
    },
  };
  try {
    const response = await fetch(url, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(payload),
    });

    const data = await response.json();
    return data;
  } catch (error) {
    console.error('Error:', error);
  }
}

The code above is a POST request that calls the insight API and returns the analysis result from ChatGPT.
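As a quick illustration (not part of the app itself), here is a hypothetical usage of callInsightGpt. The Strapi endpoint responds with { data: { message } }, which is why we read data.message:

// Hypothetical usage example: request an analysis for a piece of transcribed text.
const logExampleAnalysis = async () => {
  const { data } = await callInsightGpt(
    'analysis',
    'Text of a transcribed chunk goes here...'
  );
  // The Strapi controller wraps the service result as { data: { message } }
  console.log(data.message);
};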

Update Transcriptions with the Analysis and Answers

Let's add a way to update a transcription with its analysis and answer. Paste the following code into the transcriptions.js file:

export async function updateTranscription(
  updatedTranscription,
  transcriptionId
) {
  const updateURL = `${url}/${transcriptionId}`;
  const payload = {
    data: updatedTranscription,
  };

  try {
    const res = await fetch(updateURL, {
      method: 'PUT',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(payload),
    });

    return await res.json();
  } catch (error) {
    console.error('Error updating meeting:', error);
    throw error;
  }
}

The code above is a PUT request that handles updating the analysis or answer fields on each transcription.

Create a Custom Hook to Handle Transcription Overviews and Analysis

Now, let's create a hook that can use these methods. Create a file called useInsightGpt in the hooks directory and paste in the following code:

import { useState } from 'react';
import { callInsightGpt } from '../api/analysis';
import { updateMeeting } from '../api/meetings';
import { updateTranscription } from '../api/transcriptions';

export const useInsightGpt = () => {
  const [loadingAnalysis, setLoading] = useState(false);
  const [transcriptionIdLoading, setTranscriptionIdLoading] = useState('');
  const [analysisError, setError] = useState(null);

  const getAndSaveTranscriptionAnalysis = async (
    operation,
    input,
    transcriptionId
  ) => {
    try {
      setTranscriptionIdLoading(transcriptionId);
      // Get insight analysis / answer
      const { data } = await callInsightGpt(operation, input);
      // Use transcriptionId to save it to the transcription
      const updateTranscriptionDetails =
        operation === 'analysis'
          ? { analysis: data.message }
          : { answer: data.message };
      await updateTranscription(updateTranscriptionDetails, transcriptionId);
      setTranscriptionIdLoading('');
    } catch (e) {
      setTranscriptionIdLoading('');
      setError('Error getting analysis', e);
    }
  };

  const getAndSaveOverviewAnalysis = async (operation, input, meetingId) => {
    try {
      setLoading(true);
      // Get overview insight
      const {
        data: { message },
      } = await callInsightGpt(operation, input);
      // Use meetingId to save it to the meeting
      const updateMeetingDetails = { overview: message };
      await updateMeeting(updateMeetingDetails, meetingId);
      setLoading(false);
    } catch (e) {
      setLoading(false);
      setError('Error getting overview', e);
    }
  };

  return {
    loadingAnalysis,
    transcriptionIdLoading,
    analysisError,
    getAndSaveTranscriptionAnalysis,
    getAndSaveOverviewAnalysis,
  };
};


This hook handles the logic for getting and saving the meeting overview when the meeting ends. It's also responsible for getting the analysis or answers for transcriptions and saving them. It keeps track of which transcription we've requested analysis for, so we can show a specific loading state.

Display Analysis of a Transcription

Import the functionality above into TranscribeContainer and use it there. Paste the following updated code into TranscribeContainer.jsx:

import React, { useState, useEffect } from "react";
import styles from "../styles/Transcribe.module.css";
import { useAudioRecorder } from "../hooks/useAudioRecorder";
import RecordingControls from "../components/transcription/RecordingControls";
import TranscribedText from "../components/transcription/TranscribedText";
import { useRouter } from "next/router";
import { useMeetings } from "../hooks/useMeetings";
import { useInsightGpt } from "../hooks/useInsightGpt";
import { createNewTranscription } from "../api/transcriptions";

const TranscribeContainer = ({ streaming = true, timeSlice = 1000 }) => {
  const router = useRouter();
  const [meetingId, setMeetingId] = useState(null);
  const [meetingTitle, setMeetingTitle] = useState("");
  const {
    getMeetingDetails,
    saveTranscriptionToMeeting,
    updateMeetingDetails,
    loading,
    error,
    meetingDetails,
  } = useMeetings();
  const {
    loadingAnalysis,
    transcriptionIdLoading,
    analysisError,
    getAndSaveTranscriptionAnalysis,
    getAndSaveOverviewAnalysis,
  } = useInsightGpt();
  const apiKey = process.env.NEXT_PUBLIC_OPENAI_API_KEY;
  const whisperApiEndpoint = "https://api.openai.com/v1/audio/";
  const {
    recording,
    transcribed,
    handleStartRecording,
    handleStopRecording,
    setTranscribed,
  } = useAudioRecorder(streaming, timeSlice, apiKey, whisperApiEndpoint);

  const { ended } = meetingDetails;
  const transcribedHistory = meetingDetails?.transcribed_chunks?.data;

  useEffect(() => {
    const fetchDetails = async () => {
      if (router.isReady) {
        const { meetingId } = router.query;
        if (meetingId) {
          try {
            await getMeetingDetails(meetingId);
            setMeetingId(meetingId);
          } catch (err) {
            console.log("Error getting meeting details - ", err);
          }
        }
      }
    };

    fetchDetails();
  }, [router.isReady, router.query]);

  useEffect(() => {
    setMeetingTitle(meetingDetails.title);
  }, [meetingDetails]);

  const handleGetAnalysis = async (input, transcriptionId) => {
    await getAndSaveTranscriptionAnalysis("analysis", input, transcriptionId);
    // re-fetch meeting details
    await getMeetingDetails(meetingId);
  };

  const handleGetAnswer = async (input, transcriptionId) => {
    await getAndSaveTranscriptionAnalysis("answer", input, transcriptionId);
    // re-fetch meeting details
    await getMeetingDetails(meetingId);
  };

  const handleStopMeeting = async () => {
    // provide meeting overview and save it
    // getMeetingOverview(transcribed_chunks)
    await updateMeetingDetails(
      {
        title: meetingTitle,
        ended: true,
      },
      meetingId,
    );

    // re-fetch meeting details
    await getMeetingDetails(meetingId);
    setTranscribed("");
  };

  const stopAndSaveTranscription = async () => {
    // save transcription first
    let {
      data: { id: transcriptionId },
    } = await createNewTranscription(transcribed);

    // make a call to save the transcription chunk here
    await saveTranscriptionToMeeting(meetingId, meetingTitle, transcriptionId);
    // re-fetch current meeting which should have updated transcriptions
    await getMeetingDetails(meetingId);
    // Stop and clear the current transcription as it's now saved
    await handleStopRecording();
  };

  const handleGoBack = () => {
    router.back();
  };

  if (loading) return <p>Loading...</p>;

  return (
    <div style={{ margin: "20px" }}>
      {ended && (
        <button onClick={handleGoBack} className={styles.goBackButton}>
          Go Back
        </button>
      )}
      {!ended && (
        <button
          className={styles["end-meeting-button"]}
          onClick={handleStopMeeting}
        >
          End Meeting
        </button>
      )}
      {ended ? (
        <p className={styles.title}>{meetingTitle}</p>
      ) : (
        <input
          onChange={(e) => setMeetingTitle(e.target.value)}
          value={meetingTitle}
          type="text"
          placeholder="Meeting title here..."
          className={styles["custom-input"]}
        />
      )}
      <div>
        {!ended && (
          <div>
            <RecordingControls
              handleStartRecording={handleStartRecording}
              handleStopRecording={stopAndSaveTranscription}
            />
            {recording ? (
              <p className={styles["primary-text"]}>Recording</p>
            ) : (
              <p>Not recording</p>
            )}
          </div>
        )}

        {/*Current transcription*/}
        {transcribed && <h1>Current transcription</h1>}
        <TranscribedText transcribed={transcribed} current={true} />

        {/*Transcribed history*/}
        <h1>History</h1>
        {transcribedHistory
          ?.slice()
          .reverse()
          .map((val, i) => {
            const transcribedChunk = val.attributes;
            const text = transcribedChunk.text;
            const transcriptionId = val.id;
            return (
              <TranscribedText
                key={transcriptionId}
                transcribed={text}
                answer={transcribedChunk.answer}
                analysis={transcribedChunk.analysis}
                handleGetAnalysis={() =>
                  handleGetAnalysis(text, transcriptionId)
                }
                handleGetAnswer={() => handleGetAnswer(text, transcriptionId)}
                loading={transcriptionIdLoading === transcriptionId}
              />
            );
          })}
      </div>
    </div>
  );
};

export default TranscribeContainer;

Here, depending on what we need, we use the useInsightGpt hook to get either the analysis or the answer. We also display a loading indicator beside the transcribed text while the request is in progress.

Display Answers and Analysis of Transcriptions in Real Time

Paste the following code into TranscribedText.jsx to update the UI accordingly.

import styles from '../../styles/Transcribe.module.css';

function TranscribedText({
  transcribed,
  answer,
  analysis,
  handleGetAnalysis,
  handleGetAnswer,
  loading,
  current,
}) {
  return (
    <div className={styles['transcribed-text-container']}>
      <div className={styles['speech-bubble-container']}>
        {transcribed && (
          <div className={styles['speech-bubble']}>
            <div className={styles['speech-pointer']}></div>
            <div className={styles['speech-text-question']}>{transcribed}</div>
            {!current && (
              <div className={styles['button-container']}>
                <button
                  className={styles['primary-button-analysis']}
                  onClick={handleGetAnalysis}
                >
                  Get analysis
                </button>
                <button
                  className={styles['primary-button-answer']}
                  onClick={handleGetAnswer}
                >
                  Get answer
                </button>
              </div>
            )}
          </div>
        )}
      </div>
      <div>
        <div className={styles['speech-bubble-container']}>
          {loading && (
            <div className={styles['analysis-bubble']}>
              <div className={styles['analysis-pointer']}></div>
              <div className={styles['speech-text-answer']}>Loading...</div>
            </div>
          )}
          {analysis && (
            <div className={styles['analysis-bubble']}>
              <div className={styles['analysis-pointer']}></div>
              <p style={{ margin: 0 }}>Analysis</p>
              <div className={styles['speech-text-answer']}>{analysis}</div>
            </div>
          )}
        </div>
        <div className={styles['speech-bubble-container']}>
          {answer && (
            <div className={styles['speech-bubble-right']}>
              <div className={styles['speech-pointer-right']}></div>
              <p style={{ margin: 0 }}>Answer</p>
              <div className={styles['speech-text-answer']}>{answer}</div>
            </div>
          )}
        </div>
      </div>
    </div>
  );
}

export default TranscribedText;

We can now request analysis and get answers to questions in real-time straight after they have been transcribed.

[Screenshot: a transcription chunk with its analysis and answer displayed in the app]

Implement Meeting Overview functionality

When the user ends the meeting, we want to provide an overview of everything discussed. Let's add this functionality to the TranscribeContainer component.

In the function handleStopMeeting we can use the method getAndSaveOverviewAnalysis from the useInsightGpt hook:

const handleStopMeeting = async () => {
    // provide meeting overview and save it
    const transcribedHistoryText = transcribedHistory
      .map((val) => `transcribed_chunk: ${val.attributes.text}`)
      .join(', ');

    await getAndSaveOverviewAnalysis(
      'analysis',
      transcribedHistoryText,
      meetingId
    );

    await updateMeetingDetails(
      {
        title: meetingTitle,
        ended: true,
      },
      meetingId
    );

    // re-fetch meeting details
    await getMeetingDetails(meetingId);
    setTranscribed('');
  };

Here, we join all of the transcribed chunks from the meeting and send them to our ChatGPT API for analysis; the resulting overview is then saved against the meeting.

Now, let's display the overview once it has been loaded. Add the following code above the RecordingControls:

{loadingAnalysis && <p>Loading Overview...</p>}

 {overview && (
    <div>
      <h1>Overview</h1>
      <p>{overview}</p>
    </div>
  )}

Then, destructure the overview from the meeting details by updating the existing destructuring line below our hook declarations:

const { ended, overview } = meetingDetails;

To summarise, we listen to the loading indicator from useInsightGpt and check whether an overview is present on the meeting; if it is, we display it.

[Screenshot: the meeting overview displayed once the meeting has ended]

Error handling in Next.js

We have a couple of errors that could be caused by one of our hooks; let's create a component to handle them.

Create a file called ErrorToast.js under the components directory:

import { useEffect, useState } from 'react';

const ErrorToast = ({ message, duration }) => {
  const [visible, setVisible] = useState(true);

  useEffect(() => {
    const timer = setTimeout(() => {
      setVisible(false);
    }, duration);

    return () => clearTimeout(timer);
  }, [duration]);

  if (!visible) return null;

  return <div className="toast">{message}</div>;
};

export default ErrorToast;

And add the following CSS to globals.css in the styles directory:

.toast {
  position: fixed;
  top: 20px;
  left: 50%;
  transform: translateX(-50%);
  background-color: rgba(255, 0, 0, 0.8);
  color: white;
  padding: 16px;
  border-radius: 8px;
  box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1);
  z-index: 1000;
  transition: opacity 0.5s ease-out;
  opacity: 1;
  display: flex;
  align-items: center;
  justify-content: center;
  text-align: center;
}

.toast-hide {
  opacity: 0;
}

Now, we can use this error component in TranscribeContainer; whenever we encounter an unexpected error from the API, we will show this error toast briefly to notify the user that something went wrong.

Import the ErrorToast at the top of the file and then paste the following code above the Go Back button in the return statement of our component:

 {error || analysisError ? (
        <ErrorToast message={error || analysisError} duration={5000} />
      ) : null}

Testing with Next.js using Jest

Now, let's add a test to ensure our hooks are working as we expect them to and to alert us to any breaking changes in the code that might be introduced later. First, add the packages below so we can use Jest in our project.

yarn add -D jest jest-environment-jsdom @testing-library/react @testing-library/jest-dom @testing-library/react-hooks

Then create a jest.config.js file in the root of the frontend project and add the following code:

const nextJest = require('next/jest');
const createJestConfig = nextJest({
  dir: './',
});
const customJestConfig = {
  moduleDirectories: ['node_modules', '<rootDir>/'],
  testEnvironment: 'jest-environment-jsdom',
};
module.exports = createJestConfig(customJestConfig);

This just sets up Jest ready to be used in Next.js.

Create a test directory and an index.test.js file with the following code:

import { renderHook, act } from '@testing-library/react-hooks';
import { useInsightGpt } from '../hooks/useInsightGpt';
import { callInsightGpt } from '../api/analysis';
import { updateMeeting } from '../api/meetings';
import { updateTranscription } from '../api/transcriptions';

jest.mock('../api/analysis');
jest.mock('../api/meetings');
jest.mock('../api/transcriptions');

describe('useInsightGpt', () => {
  beforeEach(() => {
    jest.clearAllMocks();
  });

  it('should handle transcription analysis successfully', async () => {
    const mockData = { data: { message: 'Test analysis message' } };
    callInsightGpt.mockResolvedValueOnce(mockData);
    updateTranscription.mockResolvedValueOnce({});

    const { result } = renderHook(() => useInsightGpt());

    await act(async () => {
      await result.current.getAndSaveTranscriptionAnalysis(
        'analysis',
        'input',
        'transcriptionId'
      );
    });

    expect(callInsightGpt).toHaveBeenCalledWith('analysis', 'input');
    expect(updateTranscription).toHaveBeenCalledWith(
      { analysis: 'Test analysis message' },
      'transcriptionId'
    );
    expect(result.current.transcriptionIdLoading).toBe('');
    expect(result.current.analysisError).toBe(null);
  });

  it('should handle overview analysis successfully', async () => {
    const mockData = { data: { message: 'Test overview message' } };
    callInsightGpt.mockResolvedValueOnce(mockData);
    updateMeeting.mockResolvedValueOnce({});

    const { result } = renderHook(() => useInsightGpt());

    await act(async () => {
      await result.current.getAndSaveOverviewAnalysis(
        'overview',
        'input',
        'meetingId'
      );
    });

    expect(callInsightGpt).toHaveBeenCalledWith('overview', 'input');
    expect(updateMeeting).toHaveBeenCalledWith(
      { overview: 'Test overview message' },
      'meetingId'
    );
    expect(result.current.loadingAnalysis).toBe(false);
    expect(result.current.analysisError).toBe(null);
  });

  it('should handle errors in transcription analysis', async () => {
    const mockError = new Error('Test error');
    callInsightGpt.mockRejectedValueOnce(mockError);

    const { result } = renderHook(() => useInsightGpt());

    await act(async () => {
      await result.current.getAndSaveTranscriptionAnalysis(
        'analysis',
        'input',
        'transcriptionId'
      );
    });

    expect(result.current.transcriptionIdLoading).toBe('');
    expect(result.current.analysisError).toBe(
      'Error getting analysis',
      mockError
    );
  });

  it('should handle errors in overview analysis', async () => {
    const mockError = new Error('Test error');
    callInsightGpt.mockRejectedValueOnce(mockError);

    const { result } = renderHook(() => useInsightGpt());

    await act(async () => {
      await result.current.getAndSaveOverviewAnalysis(
        'overview',
        'input',
        'meetingId'
      );
    });

    expect(result.current.loadingAnalysis).toBe(false);
    expect(result.current.analysisError).toBe(
      'Error getting overview',
      mockError
    );
  });
});

Because the hooks use our Strapi API, we need a way to replace the data we're getting back from the API calls. We're using jest.mock to intercept the APIs and send back mock data. This way, we can test our hooks' internal logic without calling the API.

In the first two tests, we mock the API call and return some data, then render our hook and call the correct function. We then check if the correct functions have been called with the correct data from inside the hook. The last two tests just test that errors are handled correctly.

Add the following under scripts in the package.json file:

"test": "jest --watch"

Now open the terminal, navigate to the root directory of the frontend project, and run the following command to check if the tests are passing:

yarn test

You should see a success message like the one below:

[Screenshot: Jest output showing all tests passing]

As an optional challenge, see if you can apply what we did when testing useInsightGpt to the other hooks; a possible starting point is sketched below.
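Here is a minimal skeleton you could start from, assuming useMeetings wraps the functions exported from ../api/meetings; the mocks and assertions will need adjusting to match your actual implementation:

import { renderHook, act } from '@testing-library/react-hooks'; // needed once the todos are implemented
import { useMeetings } from '../hooks/useMeetings';

// Swap the real Strapi calls for mocks, exactly as we did for useInsightGpt.
jest.mock('../api/meetings');

describe('useMeetings', () => {
  beforeEach(() => {
    jest.clearAllMocks();
  });

  // Fill these in once you know which api/meetings functions the hook calls internally.
  it.todo('fetches meeting details and stores them in state');
  it.todo('saves a transcription chunk to the current meeting');
  it.todo('sets an error when an API call fails');
});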

Application Demo

Here is what our application looks like.

[Screenshot: demo of the finished transcription app]

Deployment with Strapi Cloud

Finally, we have the finished application up and running correctly with some tests. The time has come to deploy our project to Strapi Cloud.

First, navigate to Strapi and click on "cloud" at the top right.

Connect with GitHub.

[Screenshot: connecting Strapi Cloud with GitHub]

From the dashboard, click on Create project.

[Screenshot: Strapi Cloud dashboard with the Create project button]

Choose your GitHub account and the correct repo, fill out the display name, and choose the region.

[Screenshot: Strapi Cloud project setup (GitHub repo, display name, and region)]

Now, if you have the same file structure as me (which you should if you've been following along), you will just need to add the base directory: click on Show advanced settings and enter a base directory of /strapi-transcribe-api. Then add all of the environment variables found in the .env file in the root of the Strapi project.

Once you have added all of these, click on "create project." This will bring you to a loading screen, and then you will be redirected to the build logs; here, you can just wait for the build to finish.

Once it has finished building, you can click on Overview from the top left. This should direct you to the dashboard, where you will find the details of your deployment and the app URL under Overview on the right.

[Screenshot: Strapi Cloud project overview with deployment details and the app URL]

First, click on your app URL, which will open a new tab and direct you to the welcome page of your Strapi app. Then, create a new admin user, which will log you into the dashboard.

This is a new deployment, and as such, it won't have any of the data we had saved locally; it also won't have carried across the public settings we had on the API, so click on Settings>Users & Permissions Plugin>Roles>Public, expand and select all on Meeting, Transcribe-insight-gpt, and Transcribed-chunk, and then click save in the top right.

Once again, let's just check that our deployment was successful by running the below command in the terminal. Please replace https://yourDeployedUrlHere.com with the URL in the Strapi cloud dashboard.

curl -X POST \
  https://yourDeployedUrlHere.com/api/transcribe-insight-gpt/exampleAction \
  -H 'Content-Type: application/json' \
  -d '{
    "data": {
        "input": "I speak without a mouth and hear without ears. I have no body, but I come alive with the wind. What am I?",
        "operation": "answer"
    }
}'

Deploying Next.js with Vercel

Now that we have the API deployed and ready to use, let's deploy our frontend with Vercel.

First, we will need to change the baseUrl in our API files to point to our newly deployed Strapi instance.

Add the following variable to .env.local:

NEXT_PUBLIC_STRAPI_URL="your strapi cloud url"

Now go ahead and replace the current value of baseUrl with the following in all three API files:

const baseUrl =
  process.env.NODE_ENV == 'production'
    ? process.env.NEXT_PUBLIC_STRAPI_URL
    : 'http://localhost:1337';

This just checks whether the app is running in production. If so, it will use our deployed Strapi instance; if not, it will fall back to localhost. Make sure to push these changes to GitHub.

Now navigate to Vercel and sign up if you don't already have an account.

Now, let's create a new project by continuing with GitHub.

[Screenshot: creating a new Vercel project by continuing with GitHub]

Once you have verified your account, import the correct GitHub repo.

[Screenshot: importing the GitHub repository in Vercel]

Now fill out some configuration details: give the project a name, change the framework preset to Next.js, change the root directory to 'transcribe-frontend', and add the two environment variables from the .env.local file in the Next.js project.

[Screenshot: Vercel project configuration with framework preset, root directory, and environment variables]

Now click deploy and wait for it to finish. Once deployed, it should redirect you to a success page with a preview of the app.

[Screenshot: Vercel deployment success page with a preview of the app]

Now click continue to the dashboard, where you can find information about the app, such as the domain and the deployment logs.

[Screenshot: Vercel project dashboard showing the domain and deployment logs]

From here, you can click visit to be directed to the app's frontend deployment.

Conclusion

So there you have it! You have now built a transcription app from start to finish. We've looked at how to harness several cutting-edge technologies to do so. We used Strapi for the backend CMS and the custom ChatGPT integration, showing how quickly and easily this technology lets us build complex web applications. We also covered some architectural patterns for error handling and testing in Next.js, and finally, we deployed the backend to Strapi Cloud. I hope this series has been eye-opening and encourages you to bring your own ideas to life.

Additional Resources

  • GitHub link to the complete code.
