
Vercel AI SDK 3.3

WBOY
Published: 2024-08-16

Introducing tracing, multi-modal attachments, JSON streaming to the client, and more.

The Vercel AI SDK is a toolkit for building AI applications with JavaScript and TypeScript. Its unified API lets you use any language model and offers powerful UI integrations with leading web frameworks such as Next.js and Svelte.

Vercel AI SDK 3.3 introduces four major features:

  • Tracing (experimental): instrument AI SDK functions with OpenTelemetry
  • Multi-modal file attachments (experimental): send file attachments with useChat
  • useObject hook (experimental): stream structured object generation to the client
  • Additional LLM settings: raw JSON schemas for tools and structured object generation, stop sequences, and custom headers

We have also added AWS Bedrock and Chrome AI (community) model providers, along with many smaller features and additions. You can find all changes, including minor features, in our changelog.

Experimental features let you use the latest AI SDK functionality as soon as possible. However, they can change in patch releases. If you decide to use experimental features, please pin your dependency to an exact patch version.
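For example, with npm you can pin an exact version so that patch releases cannot change experimental APIs underneath you (the version number below is illustrative):

```shell
# Pin the exact patch version of the ai package (version shown is an example)
npm install ai@3.3.0 --save-exact
```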

Tracing

Given the non-deterministic nature of language models, observability is essential for understanding and developing AI applications. You need to be able to trace and understand the timing, token usage, prompts, and response content of individual model calls.

The Vercel AI SDK now supports tracing with OpenTelemetry, an open-source standard for recording telemetry information, as an experimental feature. Here is an example trace visualization from the Vercel Datadog integration:

Trace visualization with Datadog and the Vercel AI SDK

You can analyze AI SDK tracing data with Vercel observability integrations such as Datadog, Sentry, and Axiom. Alternatively, you can use LLM observability providers such as LangFuse, Braintrust, or LangSmith.

To use telemetry with the Vercel AI SDK, you need to configure it for your application. We recommend using @vercel/otel. If you are using Next.js and deploying on Vercel, you can add an instrumentation.ts file with the following code to your project:

import { registerOTel } from '@vercel/otel';

export function register() {
  registerOTel({ serviceName: 'your-project-name' });
}

Since tracing is experimental, you need to opt in to recording information with the experimental_telemetry option. You can also provide a function ID to identify the call site, as well as additional metadata that you want to record.

const result = await generateText({
  model: anthropic('claude-3-5-sonnet-20240620'),
  prompt: 'Write a short story about a cat.',
  experimental_telemetry: { 
    isEnabled: true,
    functionId: 'my-awesome-function',
    metadata: {
      something: 'custom',
      someOtherThing: 'other-value',
    },
  },
});

Enabling the feature will record tracing data for your function calls. You can find more details in the AI SDK telemetry documentation. If you want to get started, check out our deployable AI SDK Next.js tracing template.

Multi-Modal File Attachments

In many AI chat applications, users need to send attachments such as images, PDFs, and various media files along with their messages. These attachments also need to be previewable alongside the messages that users see.

Therefore, we have added experimental_attachments to the handleSubmit() handler of the useChat() React hook.

Sending image and text attachments with useChat

See this example in action and deploy the template.

There are two ways to send attachments with a message: by providing a FileList object or a list of URLs to the handleSubmit function:

FileList

With FileList, you can send multiple files as attachments along with a message using the file input element. The useChat hook automatically converts them into data URLs and sends them to the AI provider.

import { useChat } from 'ai/react';
import { useState } from 'react';

export default function Chat() {
  const { input, handleSubmit, handleInputChange } = useChat();
  const [files, setFiles] = useState<FileList | undefined>(undefined);
  return (
    <form
      onSubmit={(event) => {
        handleSubmit(event, {
          experimental_attachments: files,
        });
      }}
    >
      <input
        type="file"
        onChange={(event) => {
          if (event.target.files) {
            setFiles(event.target.files);
          }
        }}
        multiple
      />
      <input type="text" value={input} onChange={handleInputChange} />
    </form>
  );
}
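Under the hood, the hook encodes each file as a data URL before sending it to the provider. A minimal standalone sketch of that encoding (illustrative only; this is not the SDK's actual implementation):

```typescript
// Encode raw bytes plus a MIME type as a data URL, the format the
// useChat hook sends to the AI provider (illustrative sketch).
function toDataUrl(bytes: Uint8Array, contentType: string): string {
  const base64 = Buffer.from(bytes).toString('base64');
  return `data:${contentType};base64,${base64}`;
}

console.log(toDataUrl(new TextEncoder().encode('hi'), 'text/plain'));
// → data:text/plain;base64,aGk=
```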

URLs

You can also send URLs as attachments along with a message. This is useful for sending links to external resources or media content.

import type { Attachment } from 'ai';
import { useChat } from 'ai/react';
import { useState } from 'react';

export default function Chat() {
  const { input, handleSubmit, handleInputChange } = useChat();
  const [attachments] = useState<Attachment[]>([
    {
      name: 'earth.png',
      contentType: 'image/png',
      url: 'https://example.com/earth.png',
    },
  ]);
  return (
    <form
      onSubmit={(event) => {
        handleSubmit(event, {
          experimental_attachments: attachments,
        });
      }}
    >
      <input type="text" value={input} onChange={handleInputChange} />
    </form>
  );
}

You can learn more in our multi-modal chatbot guide.

useObject Hook

Structured data generation is a common requirement in AI applications, e.g. for extracting information from natural-language input. With the new useObject hook, you can stream structured object generation directly to the client. This experimental feature, available now for React, lets you build dynamic interfaces that display JSON objects as they stream in.

For example, imagine an application where you enter an expense as text for reimbursement. You can use AI to convert the text input into a structured object and stream the structured expense to the user as it is processed:

Extracting and streaming an expense from plain text with useObject

Here's how you could implement this in a Next.js application. First, define a schema for the expenses. The schema is shared between client and server:

import { DeepPartial } from 'ai';
import { z } from 'zod';

export const expenseSchema = z.object({
  expense: z.object({
    category: z
      .string()
      .describe(
        'Category of the expense. Allowed categories: ' +
        'TRAVEL, MEALS, ENTERTAINMENT, OFFICE SUPPLIES, OTHER.',
      ),
    amount: z.number().describe('Amount of the expense in USD.'),
    date: z
      .string()
      .describe('Date of the expense. Format yyyy-mmm-dd, e.g. 1952-Feb-19.'),
    details: z.string().describe('Details of the expense.'),
  }),
});

export type PartialExpense = DeepPartial<typeof expenseSchema>['expense'];
export type Expense = z.infer<typeof expenseSchema>['expense'];
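The DeepPartial type is what makes streaming previews possible: every nested property becomes optional, so the client can render an object before it is complete. A simplified sketch of the idea (the actual type in the ai package handles more cases, such as arrays):

```typescript
// Simplified recursive DeepPartial: every nested property becomes optional.
// (Illustrative sketch, not the ai package's actual implementation.)
type DeepPartial<T> = T extends object
  ? { [K in keyof T]?: DeepPartial<T[K]> }
  : T;

interface ExpenseData {
  expense: { category: string; amount: number; date: string; details: string };
}

// While streaming, any subset of the properties may be present:
const partial: DeepPartial<ExpenseData> = { expense: { category: 'MEALS' } };
console.log(partial.expense?.category);
// → MEALS
```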

Then, you use streamObject on the server to call the language model and stream an object:

import { anthropic } from '@ai-sdk/anthropic';
import { streamObject } from 'ai';
import { expenseSchema } from './schema';

// Allow streaming responses up to 30 seconds
export const maxDuration = 30;

export async function POST(req: Request) {
  const { expense }: { expense: string } = await req.json();
  const result = await streamObject({
    model: anthropic('claude-3-5-sonnet-20240620'),
    system:
      'You categorize expenses into one of the following categories: ' +
      'TRAVEL, MEALS, ENTERTAINMENT, OFFICE SUPPLIES, OTHER.' +
      // provide date (including day of week) for reference:
      'The current date is: ' +
      new Date()
        .toLocaleDateString('en-US', {
          year: 'numeric',
          month: 'short',
          day: '2-digit',
          weekday: 'short',
        })
        .replace(/(\w+), (\w+) (\d+), (\d+)/, '$4-$2-$3 ($1)') +
      '. When no date is supplied, use the current date.',
    prompt: `Please categorize the following expense: "${expense}"`,
    schema: expenseSchema,
    onFinish({ object }) {
      // you could save the expense to a database here
    },
  });
  return result.toTextStreamResponse();
}
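As an aside, the toLocaleDateString/replace chain in the system prompt above turns a US-formatted date like "Fri, Aug 16, 2024" into "2024-Aug-16 (Fri)". Isolated as a small function, it looks like this:

```typescript
// Reformat 'Fri, Aug 16, 2024' into '2024-Aug-16 (Fri)', mirroring the
// transformation used in the system prompt of the route handler above.
function formatReferenceDate(date: Date): string {
  return date
    .toLocaleDateString('en-US', {
      year: 'numeric',
      month: 'short',
      day: '2-digit',
      weekday: 'short',
    })
    .replace(/(\w+), (\w+) (\d+), (\d+)/, '$4-$2-$3 ($1)');
}

console.log(formatReferenceDate(new Date('1952-02-19T12:00:00Z')));
// → 1952-Feb-19 (Tue)
```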

Finally, you consume the expense stream on a client page. While the expense is streaming, we preview the partial expense, and once the generation is finished, we append it to the list of expenses:

'use client';

import { experimental_useObject as useObject } from 'ai/react';
import {
  Expense,
  expenseSchema,
  PartialExpense,
} from '../api/expense/schema';
import { useState } from 'react';

export default function Page() {
  const [expenses, setExpenses] = useState<Expense[]>([]);
  const { submit, isLoading, object } = useObject({
    api: '/api/expense',
    schema: expenseSchema,
    onFinish({ object }) {
      if (object != null) {
        setExpenses(prev => [object.expense, ...prev]);
      }
    },
  });
  return (
    <div>
      <form onSubmit={e => {
        e.preventDefault();
        const input = e.currentTarget.expense as HTMLInputElement;
        if (input.value.trim()) {
          submit({ expense: input.value });
          e.currentTarget.reset();
        }
      }}
      >
        <input type="text" name="expense" placeholder="Enter expense details"/>
        <button type="submit" disabled={isLoading}>Log expense</button>
      </form>
      {isLoading && object?.expense && (
        <ExpenseView expense={object.expense} />
      )}
      {expenses.map((expense, index) => (
        <ExpenseView key={index} expense={expense} />
      ))}
    </div>
  );
}

The expenses are rendered using an ExpenseView component that handles partial objects with undefined properties using optional chaining (?.) and nullish coalescing (??) (styling is omitted for illustration purposes):

const ExpenseView = ({ expense }: { expense: PartialExpense | Expense }) => (
  <div>
    <div>{expense?.date ?? ''}</div>
    <div>${expense?.amount?.toFixed(2) ?? ''}</div>
    <div>{expense?.category ?? ''}</div>
    <div>{expense?.details ?? ''}</div>
  </div>
);

Check out this example in action and deploy the template.

You can use this approach to create generative user interfaces client-side for many different use cases. You can find more details on how to use it in our object generation documentation.

Additional LLM Settings

Calling language models is at the heart of the Vercel AI SDK. We have listened to your feedback and extended our functions to support the following features:

  • JSON schema support for tools and structured object generation: As an alternative to Zod schemas, you can now use JSON schemas directly with the jsonSchema function. You can supply the type annotations and an optional validation function, giving you more flexibility especially when building applications with dynamic tools and structure generation.
  • Stop sequences: Text sequences that stop generations have been an important feature when working with earlier language models that used raw text prompts. They are still relevant for many use cases, allowing you more control over the end of a text generation. You can now use the stopSequences option to define stop sequences in streamText and generateText.
  • Sending custom headers: Custom headers are important for many use cases, like sending tracing information, enabling beta provider features, and more. You can now send custom headers using the headers option in most AI SDK functions.
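A sketch of how these options might be combined in a single generateText call (the model, header values, and tool name are illustrative; consult the AI SDK documentation for exact signatures):

```typescript
import { anthropic } from '@ai-sdk/anthropic';
import { generateText, jsonSchema } from 'ai';

const result = await generateText({
  model: anthropic('claude-3-5-sonnet-20240620'),
  prompt: 'Look up information about a city.',
  // Stop generation when the model emits this sequence:
  stopSequences: ['END'],
  // Forward custom headers to the provider (header name illustrative):
  headers: { 'anthropic-beta': 'some-beta-feature' },
  tools: {
    // Hypothetical tool using a raw JSON schema instead of Zod:
    lookupCity: {
      parameters: jsonSchema<{ city: string }>({
        type: 'object',
        properties: { city: { type: 'string' } },
        required: ['city'],
      }),
      execute: async ({ city }) => `Information about ${city}`,
    },
  },
});
```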

With these additional settings, you have more control and flexibility when working with language models in the Vercel AI SDK.

Conclusion

With new features like OpenTelemetry support, useObject, and support for attachments with useChat, there has never been a better time to start building AI applications.

  • Start a new AI project: Ready to build something new? Check out our multi-modal chatbot guide.
  • Explore our templates: Visit our Template Gallery to see the AI SDK in action and get inspired for your next project.
  • Join the community: Let us know what you’re building with the AI SDK in our GitHub Discussions.

We can't wait to see what you'll build next with Vercel AI SDK 3.3!

Contributors

Vercel AI SDK 3.3 is the result of the combined work of our core team at Vercel and many community contributors.

Special thanks for contributing merged pull requests:

gclark-eightfold, dynamicwebpaige, Und3rf10w, elitan, jon-spaeth, jeasonstudio, InfiniteCodeMonkeys, ruflair, MrMaina100, AntzyMo, samuelint, ian-pascoe, PawelKonie99, BrianHung, Ouvill, gmickel, developaul, elguarir, Kunoacc, florianheysen, rajuAhmed1705, suemor233, eden-chan, DraganAleksic99, karl-richter, rishabhbizzle, vladkampov, AaronFriel, theitaliandev, miguelvictor, jferrettiboke, dhruvvbhavsar, lmcgartland, PikiLee

Your feedback and contributions are invaluable as we continue to evolve the SDK.

Source: dev.to