
Counting the number of Tokens sent to an LLM in Go (Part 1)

Patricia Arquette
Published: 2025-01-02 14:18:39


Introduction

A few weeks ago, I was discussing the implementation of watsonx.ai capabilities with the CFO of one of our business partner companies. While we were going over costs, I used the word "token," and suddenly there was a moment of panic.

Once I had explained what a token is, the question came: "How do we count the tokens we send and receive? How much is it going to cost us?"

The answer was easy. We went to the watsonx.ai Studio Prompt Lab, iterated over a few simple prompts, and saw the token counts. I also showed them some very good websites where a simple input is enough to find out how many tokens we are sending to an LLM.

Later, I said to myself: why don't I build my own token-counter application (and my intent was to write it in Go, since I hadn't used Golang in a long time)? Well, it turned out to be a bit more complicated than that.

First attempt: using regular expressions

My first idea was to use regular expressions, with which I could get more or less acceptable results.

I set up the following Go application.

package main

import (
    "bufio"
    "fmt"
    "log"
    "os"
    "regexp"
    "strings"

    "github.com/sqweek/dialog"
)

// countTokens approximates the number of tokens in a text based on whitespace and punctuation.
func countTokens(text string) int {
    // A simple regex to split text into words and punctuation
    tokenizer := regexp.MustCompile(`\w+|[^\w\s]`)
    tokens := tokenizer.FindAllString(text, -1)
    return len(tokens)
}

func main() {

    // Open a file dialog box and let the user select a text file
    filePath, err := dialog.File().Filter("Text Files", "txt").Load()
    if err != nil {
        if err.Error() == "Cancelled" {
            fmt.Println("File selection was cancelled.")
            return
        }
        log.Fatalf("Error selecting file: %v", err)
    }

    // Output the selected file name
    fmt.Printf("Selected file: %s\n", filePath)

    // Specify the file to read
    //filePath := "input.txt"

    // Open the file
    file, err := os.Open(filePath)
    if err != nil {
        fmt.Printf("Error opening file: %v\n", err)
        return
    }
    defer file.Close()

    // Read the file line by line
    var content strings.Builder
    scanner := bufio.NewScanner(file)
    for scanner.Scan() {
        content.WriteString(scanner.Text())
        content.WriteString("\n")
    }

    if err := scanner.Err(); err != nil {
        fmt.Printf("Error reading file: %v\n", err)
        return
    }

    // Get the text content
    text := content.String()

    // Count the tokens
    tokenCount := countTokens(text)

    // Output the result
    fmt.Printf("The file contains approximately %d tokens.\n", tokenCount)
}

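To make the approximation concrete, here is a minimal, self-contained sketch of what the `\w+|[^\w\s]` regex above actually produces on a short sample sentence (the sample string is mine, not from the original post):

```go
package main

import (
	"fmt"
	"regexp"
)

// countTokens is the same whitespace-and-punctuation approximation used above.
func countTokens(text string) int {
	tokenizer := regexp.MustCompile(`\w+|[^\w\s]`)
	return len(tokenizer.FindAllString(text, -1))
}

func main() {
	sample := "Hello, world! It's 2025."
	tokenizer := regexp.MustCompile(`\w+|[^\w\s]`)

	// Show the individual "tokens" the regex yields:
	// runs of word characters on one side, punctuation marks on the other.
	fmt.Println(tokenizer.FindAllString(sample, -1))
	// [Hello , world ! It ' s 2025 .]
	fmt.Println(countTokens(sample)) // 9
}
```

Note that "It's" is counted as three tokens (`It`, `'`, `s`), which already hints at how sensitive the count is to the regex we pick.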

You will notice that I am a fan of GUIs and dialog boxes, so I implemented a dialog box to select the input text file.

Here is the text file (some random text I found).

The popularity of the Rust language continues to explode; yet, many critical codebases remain authored in C, and cannot be realistically rewritten by hand. Automatically translating C to Rust is thus an appealing course of action. Several works have gone down this path, handling an ever-increasing subset of C through a variety of Rust features, such as unsafe. While the prospect of automation is appealing, producing code that relies on unsafe negates the memory safety guarantees offered by Rust, and therefore the main advantages of porting existing codebases to memory-safe languages.
We instead explore a different path, and explore what it would take to translate C to safe Rust; that is, to produce code that is trivially memory safe, because it abides by Rust's type system without caveats. Our work sports several original contributions: a type-directed translation from (a subset of) C to safe Rust; a novel static analysis based on "split trees" that allows expressing C's pointer arithmetic using Rust's slices and splitting operations; an analysis that infers exactly which borrows need to be mutable; and a compilation strategy for C's struct types that is compatible with Rust's distinction between non-owned and owned allocations.
We apply our methodology to existing formally verified C codebases: the HACL* cryptographic library, and binary parsers and serializers from EverParse, and show that the subset of C we support is sufficient to translate both applications to safe Rust. Our evaluation shows that for the few places that do violate Rust's aliasing discipline, automated, surgical rewrites suffice; and that the few strategic copies we insert have a negligible performance impact. Of particular note, the application of our approach to HACL* results in a 80,000 line verified cryptographic library, written in pure Rust, that implements all modern algorithms - the first of its kind.

After running the code, I get the following output:

The file contains approximately 359 tokens.

This looks fine, but... well... against which model? And there are different ways to implement the regular expression, so this count doesn't really mean anything!

Second attempt: running against a specific model

What I found out is that, unless we use the specific "tokenizer" of a given LLM, the former method is not accurate. So I started looking into how to get accurate results against a model that has been on the market for a while, such as GPT-3.5. After some research on the net, I came up with this application.

package main

import (
    "bufio"
    "bytes"
    "fmt"
    "log"
    "os"
    "os/exec"

    "github.com/sqweek/dialog"
)

func main() {

    // Open a file dialog box and let the user select a text file
    filePath, err := dialog.File().Filter("Text Files", "txt").Load()
    if err != nil {
        if err.Error() == "Cancelled" {
            fmt.Println("File selection was cancelled.")
            return
        }
        log.Fatalf("Error selecting file: %v", err)
    }

    // Output the selected file name
    fmt.Printf("Selected file: %s\n", filePath)

    // Open the file
    file, err := os.Open(filePath)
    if err != nil {
        fmt.Printf("Error opening file: %v\n", err)
        return
    }
    defer file.Close()

    // Read the file content
    var content bytes.Buffer
    scanner := bufio.NewScanner(file)
    for scanner.Scan() {
        content.WriteString(scanner.Text())
        content.WriteString("\n")
    }

    if err := scanner.Err(); err != nil {
        fmt.Printf("Error reading file: %v\n", err)
        return
    }

    // Specify the model (hardcoded for now)
    model := "gpt-3.5-turbo"

    // Execute the Python script, feeding the file content on stdin
    cmd := exec.Command("python3", "tokenizer.py", model)
    cmd.Stdin = bytes.NewReader(content.Bytes())
    output, err := cmd.Output()
    if err != nil {
        fmt.Printf("Error running tokenizer script: %v\n", err)
        return
    }

    // Print the token count
    fmt.Printf("Token count: %s", output)
}

As we can see in the code above, there is a call to a Python script (which I found on a Microsoft site) that uses the "tiktoken" library to determine the number of tokens for GPT models. The model name is also hardcoded.

import sys
from tiktoken import encoding_for_model

def count_tokens(model, text):
    enc = encoding_for_model(model)
    tokens = enc.encode(text)
    return len(tokens)

if __name__ == "__main__":
    # Read the model name from argv and the text from stdin
    model = sys.argv[1]  # e.g., "gpt-3.5-turbo"
    text = sys.stdin.read()
    print(count_tokens(model, text))
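Since the model name is hardcoded in the Go program above, one small improvement is to pass it on the command line. Here is a sketch (the `-model` flag and the fixed sample text are my own choices, not from the original post) that builds the same python3 invocation with a configurable model:

```go
package main

import (
	"flag"
	"fmt"
	"os/exec"
	"strings"
)

// tokenizerCmd builds the same python3 call as above, but with the
// model name passed in instead of hardcoded. It assumes tokenizer.py
// (the script shown earlier) sits in the working directory.
func tokenizerCmd(model string) *exec.Cmd {
	return exec.Command("python3", "tokenizer.py", model)
}

func main() {
	model := flag.String("model", "gpt-3.5-turbo", "model name forwarded to tiktoken")
	flag.Parse()

	cmd := tokenizerCmd(*model)
	cmd.Stdin = strings.NewReader("Hello tokenizer!")
	output, err := cmd.Output()
	if err != nil {
		// python3 and the tiktoken library must be installed for this to work.
		fmt.Printf("Error running tokenizer script: %v\n", err)
		return
	}
	fmt.Printf("Token count: %s", output)
}
```

Run it as, for example, `go run . -model gpt-4` to count tokens against another GPT encoding.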

This works very well. For the same text as before, I now get a count of 366 tokens, which matches all the websites I found, as long as I set the model to GPT-3.5.

What I would like is code written entirely in "Golang"... and I would like to be able to run it for all (or almost all) the models I can find on Hugging Face (such as ibm-granite/granite-3.1-8b-instruct).

That will be the second part of this article (work in progress).

So far, I am exploring the following (very nice) GitHub repositories:

  • tokenizer: https://github.com/sugarme/tokenizer
  • tokenizers: https://github.com/daulet/tokenizers
  • last but not least -> go-huggingface: https://github.com/gomlx/go-huggingface?tab=readme-ov-file

Conclusion

Thank you for reading, and comments are welcome.

Stay tuned until the second application is out...

Source: dev.to