Build a Transcription App with Strapi, ChatGPT & Whisper: Part 3


Welcome to the third and final installment in this series. In Part 2, we created and connected the backend with Strapi to help save our meetings and transcriptions. In this part of the series, we will use ChatGPT with Strapi to get insights into the transcribed text at the click of a button. We will also look at some testing and how to deploy the application to Strapi Cloud.

Outline

You can find the outline for this series below:

  • Part 1: Implement Audio Recording and the User Interface
  • Part 2: Integrate Strapi CMS and Save Transcriptions
  • Part 3: Implement the connection to ChatGPT and deploy to Strapi Cloud

Create a Custom API Endpoint in Strapi

We need our custom endpoint in the Strapi CMS to connect with ChatGPT, so navigate to the terminal, change directory into strapi-transcribe-api, and run the command below:

yarn strapi generate

Doing this will start the process of generating our custom API. Choose the API option, name it transcribe-insight-gpt, and select "no" when it asks whether this is for a plugin.

Make the Strapi API Public

Inside the src directory, if we check the api directory in our code editor, we will see the newly created API for transcribe-insight-gpt with routes, controllers, and services directories.
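
For orientation, the generated API folder should look roughly like the sketch below (the exact file names can vary slightly between Strapi versions):

src/api/transcribe-insight-gpt/
├── routes/transcribe-insight-gpt.js
├── controllers/transcribe-insight-gpt.js
└── services/transcribe-insight-gpt.js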

Let's check that it works by uncommenting the code in each file, restarting the server, and navigating to the admin dashboard. We want to make access to this route public, so click Settings > Users & Permissions plugin > Roles > Public, then scroll down to Select all on the transcribe-insight-gpt API to make its permissions public, and click Save at the top right.

If we enter the following into our browser and hit enter, we should get an "ok" message.

http://localhost:1337/api/transcribe-insight-gpt
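If you prefer the terminal, the same check can be run with curl; as described above, the generated handler should respond with the "ok" message:

curl http://localhost:1337/api/transcribe-insight-gpt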

Using ChatGPT with Strapi

Now that we have confirmed the API endpoint works, let's connect it to OpenAI. First, install the OpenAI package: navigate to the root directory of the Strapi project and run the command below in the terminal:

yarn add openai

Then, in the .env file, add your API key to the OPENAI environment variable:

OPENAI=<OpenAI api key here>

Now, under the transcribe-insight-gpt directory, change the code in the routes directory to the following:

module.exports = {
  routes: [
    {
      method: "POST",
      path: "/transcribe-insight-gpt/exampleAction",
      handler: "transcribe-insight-gpt.exampleAction",
      config: {
        policies: [],
        middlewares: [],
      },
    },
  ],
};

Change the code in the controllers directory to the following:

"use strict";

module.exports = {
  exampleAction: async (ctx) => {
    try {
      const response = await strapi
        .service("api::transcribe-insight-gpt.transcribe-insight-gpt")
        .insightService(ctx);

      ctx.body = { data: response };
    } catch (err) {
      console.log(err.message);
      throw new Error(err.message);
    }
  },
};

And the code in the services directory to the following:

"use strict";
const { OpenAI } = require("openai");
const openai = new OpenAI({
  apiKey: process.env.OPENAI,
});

/**
 * transcribe-insight-gpt service
 */

module.exports = ({ strapi }) => ({
  insightService: async (ctx) => {
    try {
      const input = ctx.request.body.data?.input;
      const operation = ctx.request.body.data?.operation;

      if (operation === "analysis") {
        const analysisResult = await gptAnalysis(input);

        return {
          message: analysisResult,
        };
      } else if (operation === "answer") {
        const answerResult = await gptAnswer(input);

        return {
          message: answerResult,
        };
      } else {
        return { error: "Invalid operation specified" };
      }
    } catch (err) {
      ctx.body = err;
    }
  },
});

async function gptAnalysis(input) {
  const analysisPrompt =
    "Analyse the following text and give me a brief overview of what it means:";
  const completion = await openai.chat.completions.create({
    messages: [{ role: "user", content: `${analysisPrompt} ${input}` }],
    model: "gpt-3.5-turbo",
  });

  const analysis = completion.choices[0].message.content;

  return analysis;
}

async function gptAnswer(input) {
  const answerPrompt =
    "Analyse the following text and give me an answer to the question posed: ";
  const completion = await openai.chat.completions.create({
    messages: [{ role: "user", content: `${answerPrompt} ${input}` }],
    model: "gpt-3.5-turbo",
  });

  const answer = completion.choices[0].message.content;

  return answer;
}

Here, we send two parameters to our API: the input text, which will be our transcription, and the operation, which will be either analysis or answer depending on which operation we want it to perform. Each operation has a different prompt for ChatGPT.

Verify That ChatGPT Is Working

We can check the connection to our POST route by pasting the command below into our terminal:

curl -X POST \
  http://localhost:1337/api/transcribe-insight-gpt/exampleAction \
  -H 'Content-Type: application/json' \
  -d '{
    "data": {
        "input": "Comparatively, four-dimensional space has an extra coordinate axis, orthogonal to the other three, which is usually labeled w. To describe the two additional cardinal directions",
        "operation": "analysis"
    }
}'

And to check the answer operation, you can use the command below:

curl -X POST \
  http://localhost:1337/api/transcribe-insight-gpt/exampleAction \
  -H 'Content-Type: application/json' \
  -d '{
    "data": {
        "input": "I speak without a mouth and hear without ears. I have no body, but I come alive with the wind. What am I?",
        "operation": "answer"
    }
}'

That's great. Now that we have the analysis and answer capabilities in our Strapi API route, we need to connect it to our frontend code and make sure we can save this information for our meetings and transcriptions.

Connect the Custom API for Analysis in Next.js

To maintain a clear separation of concerns, let's create a separate API file for our app's analysis functionality.

Get the Analysis from ChatGPT

In transcribe-frontend, under the api directory, create a new file named analysis.js and paste in the following code:

const baseUrl = 'http://localhost:1337';
const url = `${baseUrl}/api/transcribe-insight-gpt/exampleAction`;

export async function callInsightGpt(operation, input) {
  console.log('operation - ', operation);
  const payload = {
    data: {
      input: input,
      operation: operation,
    },
  };
  try {
    const response = await fetch(url, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(payload),
    });

    const data = await response.json();
    return data;
  } catch (error) {
    console.error('Error:', error);
  }
}

The code above is a POST request to call the insight API and get the analysis back from ChatGPT.
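
As a quick, hypothetical usage example (the text and function wrapper here are made up), the helper can be called from an async function like this; the response mirrors what our Strapi controller returns, i.e. { data: { message: '...' } }:

// Hypothetical usage of callInsightGpt from inside an async function.
async function logAnalysisExample() {
  const { data } = await callInsightGpt('analysis', 'Text of a transcribed chunk...');
  console.log(data.message); // the analysis text returned by ChatGPT
}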

Update Transcriptions with the Analysis and Answers

Let's add a way to update our transcriptions with the analysis and answers. Paste the following code into the transcriptions.js file:

export async function updateTranscription(
  updatedTranscription,
  transcriptionId
) {
  const updateURL = `${url}/${transcriptionId}`;
  const payload = {
    data: updatedTranscription,
  };

  try {
    const res = await fetch(updateURL, {
      method: 'PUT',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(payload),
    });

    return await res.json();
  } catch (error) {
    console.error('Error updating meeting:', error);
    throw error;
  }
}

The code above is a PUT request to handle updating the analysis or answer field on each transcription.
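
For example (hypothetical id and text), saving an analysis back to a transcription entry would look like this:

// Hypothetical usage of updateTranscription, called from inside an async function:
// write an analysis to the transcription entry with id 42.
await updateTranscription({ analysis: 'A short summary of the chunk...' }, 42);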

Create a Custom Hook to Handle the Overview and Transcription Analysis

Now, let's create a hook where we can use these methods. Create a file named useInsightGpt under the hooks directory and paste in the following code:

import { useState } from 'react';
import { callInsightGpt } from '../api/analysis';
import { updateMeeting } from '../api/meetings';
import { updateTranscription } from '../api/transcriptions';

export const useInsightGpt = () => {
  const [loadingAnalysis, setLoading] = useState(false);
  const [transcriptionIdLoading, setTranscriptionIdLoading] = useState('');
  const [analysisError, setError] = useState(null);

  const getAndSaveTranscriptionAnalysis = async (
    operation,
    input,
    transcriptionId
  ) => {
    try {
      setTranscriptionIdLoading(transcriptionId);
      // Get insight analysis / answer
      const { data } = await callInsightGpt(operation, input);
      // Use transcriptionId to save it to the transcription
      const updateTranscriptionDetails =
        operation === 'analysis'
          ? { analysis: data.message }
          : { answer: data.message };
      await updateTranscription(updateTranscriptionDetails, transcriptionId);
      setTranscriptionIdLoading('');
    } catch (e) {
      setTranscriptionIdLoading('');
      setError('Error getting analysis', e);
    }
  };

  const getAndSaveOverviewAnalysis = async (operation, input, meetingId) => {
    try {
      setLoading(true);
      // Get overview insight
      const {
        data: { message },
      } = await callInsightGpt(operation, input);
      // Use meetingId to save it to the meeting
      const updateMeetingDetails = { overview: message };
      await updateMeeting(updateMeetingDetails, meetingId);
      setLoading(false);
    } catch (e) {
      setLoading(false);
      setError('Error getting overview', e);
    }
  };

  return {
    loadingAnalysis,
    transcriptionIdLoading,
    analysisError,
    getAndSaveTranscriptionAnalysis,
    getAndSaveOverviewAnalysis,
  };
};


This hook handles the logic for getting and saving the overview of our meeting once it has ended. It also handles getting the analysis of, or answer to, our transcriptions and saving them. It keeps track of which transcription we have requested analysis for so we can show a specific loading state.

Display Analysis of a Transcription

Import the functionality above into the TranscribeContainer and use it. Paste the following updated code into TranscribeContainer.jsx:

import React, { useState, useEffect } from "react";
import styles from "../styles/Transcribe.module.css";
import { useAudioRecorder } from "../hooks/useAudioRecorder";
import RecordingControls from "../components/transcription/RecordingControls";
import TranscribedText from "../components/transcription/TranscribedText";
import { useRouter } from "next/router";
import { useMeetings } from "../hooks/useMeetings";
import { useInsightGpt } from "../hooks/useInsightGpt";
import { createNewTranscription } from "../api/transcriptions";

const TranscribeContainer = ({ streaming = true, timeSlice = 1000 }) => {
  const router = useRouter();
  const [meetingId, setMeetingId] = useState(null);
  const [meetingTitle, setMeetingTitle] = useState("");
  const {
    getMeetingDetails,
    saveTranscriptionToMeeting,
    updateMeetingDetails,
    loading,
    error,
    meetingDetails,
  } = useMeetings();
  const {
    loadingAnalysis,
    transcriptionIdLoading,
    analysisError,
    getAndSaveTranscriptionAnalysis,
    getAndSaveOverviewAnalysis,
  } = useInsightGpt();
  const apiKey = process.env.NEXT_PUBLIC_OPENAI_API_KEY;
  const whisperApiEndpoint = "https://api.openai.com/v1/audio/";
  const {
    recording,
    transcribed,
    handleStartRecording,
    handleStopRecording,
    setTranscribed,
  } = useAudioRecorder(streaming, timeSlice, apiKey, whisperApiEndpoint);

  const { ended } = meetingDetails;
  const transcribedHistory = meetingDetails?.transcribed_chunks?.data;

  useEffect(() => {
    const fetchDetails = async () => {
      if (router.isReady) {
        const { meetingId } = router.query;
        if (meetingId) {
          try {
            await getMeetingDetails(meetingId);
            setMeetingId(meetingId);
          } catch (err) {
            console.log("Error getting meeting details - ", err);
          }
        }
      }
    };

    fetchDetails();
  }, [router.isReady, router.query]);

  useEffect(() => {
    setMeetingTitle(meetingDetails.title);
  }, [meetingDetails]);

  const handleGetAnalysis = async (input, transcriptionId) => {
    await getAndSaveTranscriptionAnalysis("analysis", input, transcriptionId);
    // re-fetch meeting details
    await getMeetingDetails(meetingId);
  };

  const handleGetAnswer = async (input, transcriptionId) => {
    await getAndSaveTranscriptionAnalysis("answer", input, transcriptionId);
    // re-fetch meeting details
    await getMeetingDetails(meetingId);
  };

  const handleStopMeeting = async () => {
    // provide meeting overview and save it
    // getMeetingOverview(transcribed_chunks)
    await updateMeetingDetails(
      {
        title: meetingTitle,
        ended: true,
      },
      meetingId,
    );

    // re-fetch meeting details
    await getMeetingDetails(meetingId);
    setTranscribed("");
  };

  const stopAndSaveTranscription = async () => {
    // save transcription first
    let {
      data: { id: transcriptionId },
    } = await createNewTranscription(transcribed);

    // make a call to save the transcription chunk here
    await saveTranscriptionToMeeting(meetingId, meetingTitle, transcriptionId);
    // re-fetch current meeting which should have updated transcriptions
    await getMeetingDetails(meetingId);
    // Stop and clear the current transcription as it's now saved
    await handleStopRecording();
  };

  const handleGoBack = () => {
    router.back();
  };

  if (loading) return <p>Loading...</p>;

  return (
    <div style={{ margin: "20px" }}>
      {ended && (
        <button onClick={handleGoBack} className={styles.goBackButton}>
          Go Back
        </button>
      )}
      {!ended && (
        <button
          className={styles["end-meeting-button"]}
          onClick={handleStopMeeting}
        >
          End Meeting
        </button>
      )}
      {ended ? (
        <p className={styles.title}>{meetingTitle}</p>
      ) : (
        <input
          onChange={(e) => setMeetingTitle(e.target.value)}
          value={meetingTitle}
          type="text"
          placeholder="Meeting title here..."
          className={styles["custom-input"]}
        />
      )}
      <div>
        {!ended && (
          <div>
            <RecordingControls
              handleStartRecording={handleStartRecording}
              handleStopRecording={stopAndSaveTranscription}
            />
            {recording ? (
              <p className={styles["primary-text"]}>Recording</p>
            ) : (
              <p>Not recording</p>
            )}
          </div>
        )}

        {/*Current transcription*/}
        {transcribed && <h1>Current transcription</h1>}
        <TranscribedText transcribed={transcribed} current={true} />

        {/*Transcribed history*/}
        <h1>History</h1>
        {transcribedHistory
          ?.slice()
          .reverse()
          .map((val, i) => {
            const transcribedChunk = val.attributes;
            const text = transcribedChunk.text;
            const transcriptionId = val.id;
            return (
              <TranscribedText
                key={transcriptionId}
                transcribed={text}
                answer={transcribedChunk.answer}
                analysis={transcribedChunk.analysis}
                handleGetAnalysis={() =>
                  handleGetAnalysis(text, transcriptionId)
                }
                handleGetAnswer={() => handleGetAnswer(text, transcriptionId)}
                loading={transcriptionIdLoading === transcriptionId}
              />
            );
          })}
      </div>
    </div>
  );
};

export default TranscribeContainer;

Here, depending on what we need, we use the useInsightGpt hook to get either the analysis of or the answer to a transcription. We also display a loading indicator beside the transcribed text while the request is in progress.

Display Answers and Analysis of Transcriptions in Real Time

Paste the following code into TranscribedText.jsx to update the UI accordingly.

import styles from '../../styles/Transcribe.module.css';

function TranscribedText({
  transcribed,
  answer,
  analysis,
  handleGetAnalysis,
  handleGetAnswer,
  loading,
  current,
}) {
  return (
    <div className={styles['transcribed-text-container']}>
      <div className={styles['speech-bubble-container']}>
        {transcribed && (
          <div className={styles['speech-bubble']}>
            <div className={styles['speech-pointer']}></div>
            <div className={styles['speech-text-question']}>{transcribed}</div>
            {!current && (
              <div className={styles['button-container']}>
                <button
                  className={styles['primary-button-analysis']}
                  onClick={handleGetAnalysis}
                >
                  Get analysis
                </button>
                <button
                  className={styles['primary-button-answer']}
                  onClick={handleGetAnswer}
                >
                  Get answer
                </button>
              </div>
            )}
          </div>
        )}
      </div>
      <div>
        <div className={styles['speech-bubble-container']}>
          {loading && (
            <div className={styles['analysis-bubble']}>
              <div className={styles['analysis-pointer']}></div>
              <div className={styles['speech-text-answer']}>Loading...</div>
            </div>
          )}
          {analysis && (
            <div className={styles['analysis-bubble']}>
              <div className={styles['analysis-pointer']}></div>
              <p style={{ margin: 0 }}>Analysis</p>
              <div className={styles['speech-text-answer']}>{analysis}</div>
            </div>
          )}
        </div>
        <div className={styles['speech-bubble-container']}>
          {answer && (
            <div className={styles['speech-bubble-right']}>
              <div className={styles['speech-pointer-right']}></div>
              <p style={{ margin: 0 }}>Answer</p>
              <div className={styles['speech-text-answer']}>{answer}</div>
            </div>
          )}
        </div>
      </div>
    </div>
  );
}

export default TranscribedText;

We can now request analysis and get answers to questions in real time, straight after they have been transcribed.


Implement Meeting Overview functionality

When the user ends the meeting, we want to provide an overview of everything discussed. Let's add this functionality to the TranscribeContainer component.

In the function handleStopMeeting we can use the method getAndSaveOverviewAnalysis from the useInsightGpt hook:

const handleStopMeeting = async () => {
    // provide meeting overview and save it
    const transcribedHistoryText = transcribedHistory
      .map((val) => `transcribed_chunk: ${val.attributes.text}`)
      .join(', ');

    await getAndSaveOverviewAnalysis(
      'analysis',
      transcribedHistoryText,
      meetingId
    );

    await updateMeetingDetails(
      {
        title: meetingTitle,
        ended: true,
      },
      meetingId
    );

    // re-fetch meeting details
    await getMeetingDetails(meetingId);
    setTranscribed('');
  };

Here, we are joining all of the transcribed chunks from the meeting and then sending them to our ChatGPT API for analysis, where they will be saved for our meeting.
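
To make the payload concrete, here is a hypothetical illustration (the sample chunk texts are made up) of the string this mapping produces for two transcribed chunks:

// Hypothetical sample shaped like the transcribed_chunks entries from Strapi.
const sampleHistory = [
  { attributes: { text: "Welcome everyone, let's get started." } },
  { attributes: { text: 'First agenda item is the Q3 roadmap.' } },
];

const transcribedHistoryText = sampleHistory
  .map((val) => `transcribed_chunk: ${val.attributes.text}`)
  .join(', ');

// "transcribed_chunk: Welcome everyone, let's get started., transcribed_chunk: First agenda item is the Q3 roadmap."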

Now, let's display the overview once it has been loaded. Add the following code above the RecordingControls:

{loadingAnalysis && <p>Loading Overview...</p>}

{overview && (
  <div>
    <h1>Overview</h1>
    <p>{overview}</p>
  </div>
)}

Then, destructure the overview from the meeting details by adding the following line below our hook declarations:

const { ended, overview } = meetingDetails;

To summarise, we listen to the loading indicator from useInsightGpt and check if overview is present from the meeting; if it is, we display it.


Error handling in Next.js

We have a couple of errors that could be caused by one of our hooks; let's create a component to handle them.

Create a file called ErrorToast.js under the components directory:

import { useEffect, useState } from 'react';

const ErrorToast = ({ message, duration }) => {
  const [visible, setVisible] = useState(true);

  useEffect(() => {
    const timer = setTimeout(() => {
      setVisible(false);
    }, duration);

    return () => clearTimeout(timer);
  }, [duration]);

  if (!visible) return null;

  return <div className="toast">{message}</div>;
};

export default ErrorToast;

And add the following CSS to globals.css under the styles directory:

.toast {
  position: fixed;
  top: 20px;
  left: 50%;
  transform: translateX(-50%);
  background-color: rgba(255, 0, 0, 0.8);
  color: white;
  padding: 16px;
  border-radius: 8px;
  box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1);
  z-index: 1000;
  transition: opacity 0.5s ease-out;
  opacity: 1;
  display: flex;
  align-items: center;
  justify-content: center;
  text-align: center;
}

.toast-hide {
  opacity: 0;
}

Now, we can use this error component in TranscribeContainer; whenever we encounter an unexpected error from the API, we will show this error toast briefly to notify the user that something went wrong.

Import the ErrorToast at the top of the file and then paste the following code above the Go Back button in the return statement of our component:

 {error || analysisError ? (
        <ErrorToast message={error || analysisError} duration={5000} />
      ) : null}

Testing with Next.js using Jest

Now, let's add a test to ensure our hooks are working as we expect them to and to alert us to any breaking changes in the code that might be introduced later. First, add the packages below so we can use jest in our project.

yarn add -D jest jest-environment-jsdom @testing-library/react @testing-library/jest-dom @testing-library/react-hooks

Then create a jest.config.js file in the root of the frontend project and add the following code:

const nextJest = require('next/jest');
const createJestConfig = nextJest({
  dir: './',
});
const customJestConfig = {
  moduleDirectories: ['node_modules', '<rootDir>/'],
  testEnvironment: 'jest-environment-jsdom',
};
module.exports = createJestConfig(customJestConfig);

This just sets up Jest ready to be used in Next.js.

Create a test directory and an index.test.js file with the following code:

import { renderHook, act } from '@testing-library/react-hooks';
import { useInsightGpt } from '../hooks/useInsightGpt';
import { callInsightGpt } from '../api/analysis';
import { updateMeeting } from '../api/meetings';
import { updateTranscription } from '../api/transcriptions';

jest.mock('../api/analysis');
jest.mock('../api/meetings');
jest.mock('../api/transcriptions');

describe('useInsightGpt', () => {
  beforeEach(() => {
    jest.clearAllMocks();
  });

  it('should handle transcription analysis successfully', async () => {
    const mockData = { data: { message: 'Test analysis message' } };
    callInsightGpt.mockResolvedValueOnce(mockData);
    updateTranscription.mockResolvedValueOnce({});

    const { result } = renderHook(() => useInsightGpt());

    await act(async () => {
      await result.current.getAndSaveTranscriptionAnalysis(
        'analysis',
        'input',
        'transcriptionId'
      );
    });

    expect(callInsightGpt).toHaveBeenCalledWith('analysis', 'input');
    expect(updateTranscription).toHaveBeenCalledWith(
      { analysis: 'Test analysis message' },
      'transcriptionId'
    );
    expect(result.current.transcriptionIdLoading).toBe('');
    expect(result.current.analysisError).toBe(null);
  });

  it('should handle overview analysis successfully', async () => {
    const mockData = { data: { message: 'Test overview message' } };
    callInsightGpt.mockResolvedValueOnce(mockData);
    updateMeeting.mockResolvedValueOnce({});

    const { result } = renderHook(() => useInsightGpt());

    await act(async () => {
      await result.current.getAndSaveOverviewAnalysis(
        'overview',
        'input',
        'meetingId'
      );
    });

    expect(callInsightGpt).toHaveBeenCalledWith('overview', 'input');
    expect(updateMeeting).toHaveBeenCalledWith(
      { overview: 'Test overview message' },
      'meetingId'
    );
    expect(result.current.loadingAnalysis).toBe(false);
    expect(result.current.analysisError).toBe(null);
  });

  it('should handle errors in transcription analysis', async () => {
    const mockError = new Error('Test error');
    callInsightGpt.mockRejectedValueOnce(mockError);

    const { result } = renderHook(() => useInsightGpt());

    await act(async () => {
      await result.current.getAndSaveTranscriptionAnalysis(
        'analysis',
        'input',
        'transcriptionId'
      );
    });

    expect(result.current.transcriptionIdLoading).toBe('');
    expect(result.current.analysisError).toBe(
      'Error getting analysis',
      mockError
    );
  });

  it('should handle errors in overview analysis', async () => {
    const mockError = new Error('Test error');
    callInsightGpt.mockRejectedValueOnce(mockError);

    const { result } = renderHook(() => useInsightGpt());

    await act(async () => {
      await result.current.getAndSaveOverviewAnalysis(
        'overview',
        'input',
        'meetingId'
      );
    });

    expect(result.current.loadingAnalysis).toBe(false);
    expect(result.current.analysisError).toBe(
      'Error getting overview',
      mockError
    );
  });
});

Because the hooks use our Strapi API, we need a way to replace the data we're getting back from the API calls. We're using jest.mock to intercept the APIs and send back mock data. This way, we can test our hooks' internal logic without calling the API.

In the first two tests, we mock the API call and return some data, then render our hook and call the correct function. We then check if the correct functions have been called with the correct data from inside the hook. The last two tests just test that errors are handled correctly.

Add the following under scripts in the package.json file:

"test": "jest --watch"

Now open the terminal, navigate to the root directory of the frontend project, and run the following command to check if the tests are passing:

yarn test

You should see a success message like the one below:

(Screenshot: Jest test run with all tests passing)

As an optional challenge, let's see if you can apply what we did with testing useInsightGpt to testing the other hooks.
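
One possible starting point for that challenge (a sketch, assuming you keep the fetch-based API helpers shown earlier): rather than mocking whole modules, you can test a helper such as updateTranscription directly by mocking the global fetch. A hypothetical transcriptions.test.js could look like this:

import { updateTranscription } from '../api/transcriptions';

describe('updateTranscription', () => {
  beforeEach(() => {
    // Replace the global fetch with a Jest mock that resolves to a minimal Strapi-style response.
    global.fetch = jest.fn().mockResolvedValue({
      json: () => Promise.resolve({ data: { id: 1 } }),
    });
  });

  it('sends a PUT request with the updated fields', async () => {
    await updateTranscription({ analysis: 'Some analysis' }, 1);

    expect(global.fetch).toHaveBeenCalledWith(
      expect.stringContaining('/1'),
      expect.objectContaining({
        method: 'PUT',
        body: JSON.stringify({ data: { analysis: 'Some analysis' } }),
      })
    );
  });
});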

Application Demo

Here is what our application looks like.

(Screenshot: the finished application in action)

Deployment with Strapi cloud

Finally, we have the finished application up and running correctly with some tests. The time has come to deploy our project to Strapi cloud.

First, navigate to Strapi and click on "cloud" at the top right.

Connect with GitHub.


From the dashboard, click on Create project.


Choose your GitHub account and the correct repo, fill out the display name, and choose the region.


Now, if you have the same file structure as me (which you should if you've been following along), you will just need to add the base directory. Click on Show advanced settings and enter a base directory of /strapi-transcribe-api, then add all of the environment variables that can be found in the .env file in the root of the Strapi project.

Once you have added all of these, click on "create project." This will bring you to a loading screen, and then you will be redirected to the build logs; here, you can just wait for the build to finish.

Once it has finished building, you can click on Overview from the top left. This should direct you to the dashboard, where you will find the details of your deployment and the app URL under Overview on the right.


First, click on your app URL, which will open a new tab and direct you to the welcome page of your Strapi app. Then, create a new admin user, which will log you into the dashboard.

This is a new deployment, so it won't have any of the data we saved locally; it also won't have carried across the public permission settings we had on the API. Click Settings > Users & Permissions plugin > Roles > Public, expand and Select all on Meeting, Transcribe-insight-gpt, and Transcribed-chunk, and then click Save in the top right.

Once again, let's just check that our deployment was successful by running the below command in the terminal. Please replace https://yourDeployedUrlHere.com with the URL in the Strapi cloud dashboard.

curl -X POST \
  https://yourDeployedUrlHere.com/api/transcribe-insight-gpt/exampleAction \
  -H 'Content-Type: application/json' \
  -d '{
    "data": {
        "input": "I speak without a mouth and hear without ears. I have no body, but I come alive with the wind. What am I?",
        "operation": "answer"
    }
}'

Deploying Next.js with Vercel

Now that we have the API deployed and ready to use, let's deploy our frontend with Vercel.

First, we will need to change the baseUrl in our API files to point to our newly deployed Strapi instance.

Add the following variable to .env.local:

NEXT_PUBLIC_STRAPI_URL="your strapi cloud url"

Now go ahead and replace the current value of baseUrl with the following in all three API files:

const baseUrl =
  process.env.NODE_ENV == 'production'
    ? process.env.NEXT_PUBLIC_STRAPI_URL
    : 'http://localhost:1337';

This will just check whether the app is running in production. If so, it will use our deployed Strapi instance; if not, it will revert to localhost. Make sure to push these changes to GitHub.

Now navigate to Vercel and sign up if you don't already have an account.

Now, let's create a new project by continuing with GitHub.


Once you have verified your account, import the correct GitHub repo.


Now we will fill out some configuration details, give the project a name, change the framework preset to Next.js, change the root directory to 'transcribe-frontend', and add the two environment variables from the .env.local file in the Next.js project.


Now click deploy and wait for it to finish. Once deployed, it should redirect you to a success page with a preview of the app.


Now click continue to the dashboard, where you can find information about the app, such as the domain and the deployment logs.


From here, you can click visit to be directed to the app's frontend deployment.

Conclusion

So there you have it! You have now built your transcription app from start to finish. We covered how to achieve this with some cutting-edge technology: we used Strapi for the backend CMS and a custom ChatGPT integration, showing how quickly and easily these tools let you build a complex web app. We also covered some architectural patterns, error handling, and testing in Next.js, and finally, we deployed the backend to Strapi Cloud. I hope you have found this series eye-opening and that it encourages you to bring your ideas to life.

Additional Resources

  • GitHub link to the complete code.


Source: dev.to