
Beginner Python Project: Build an Augmented Reality Drawing App Using OpenCV and Mediapipe

Linda Hamilton
Published: 2025-01-02 14:47:38


In this Python project, we will build a simple AR drawing app. Using your webcam and hand gestures, you can draw virtually on the screen, customize your brush, and even save your creations!

Setup

First, create a new folder and initialize a new virtual environment with the following commands (shown for Windows; on macOS/Linux, activate with source venv/bin/activate instead):

python -m venv venv
./venv/Scripts/activate

Next, install the required libraries using pip or your installer of choice:

pip install mediapipe
pip install opencv-python

Note

You may run into problems installing the latest version of mediapipe on your Python release. At the time of writing this post, I was using Python 3.11.2. Make sure you are on a Python version that the mediapipe release you install supports.
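As a quick, optional sanity check before installing, you can print the version of the interpreter inside your virtual environment:

import sys

# Should report a Python version that the mediapipe release you install supports, e.g. 3.11.x
print(sys.version)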

Step 1: Capture the Webcam Feed

The first step is to set up the webcam and display the video feed. We will use OpenCV's VideoCapture to access the camera and display frames continuously.

import cv2  

# The argument '0' specifies the default camera (usually the built-in webcam).
cap = cv2.VideoCapture(0)

# Start an infinite loop to continuously capture video frames from the webcam
while True:
    # Read a single frame from the webcam
    # `ret` is a boolean indicating success; `frame` is the captured frame.
    ret, frame = cap.read()

    # Check if the frame was successfully captured
    # If not, break the loop and stop the video capture process.
    if not ret:
        break

    # Flip the frame horizontally (like a mirror image)
    frame = cv2.flip(frame, 1)

    # Display the current frame in a window named 'Webcam Feed'
    cv2.imshow('Webcam Feed', frame)

    # Wait for a key press for 1 millisecond
    # If the 'q' key is pressed, break the loop to stop the video feed.
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release the webcam resource to make it available for other programs
cap.release()

# Close all OpenCV-created windows
cv2.destroyAllWindows()


Did you know?

When using cv2.waitKey() in OpenCV, the returned key code may contain extra bits depending on the platform. To make sure key presses are detected correctly, you can mask the result with 0xFF to isolate the lower 8 bits (the actual ASCII value). Without this, your key comparisons may fail on some systems, so always use & 0xFF for consistent behavior!
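For instance, the comparison can be written with an explicit masked variable (the same pattern already used inline in the loop above):

key = cv2.waitKey(1) & 0xFF   # keep only the low 8 bits (the ASCII value)
if key == ord('q'):           # now the comparison behaves the same on every platform
    break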

Step 2: Integrate Hand Detection

Using Mediapipe's Hands solution, we will detect hands and extract the positions of key landmarks, such as the tips of the index and middle fingers.

import cv2  
import mediapipe as mp

# Initialize the MediaPipe Hands module
mp_hands = mp.solutions.hands  # Load the hand-tracking solution from MediaPipe
hands = mp_hands.Hands(
    min_detection_confidence=0.9,
    min_tracking_confidence=0.9 
)

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break 

    # Flip the frame horizontally to create a mirror effect
    frame = cv2.flip(frame, 1)

    # Convert the frame from BGR (OpenCV default) to RGB (MediaPipe requirement)
    frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

    # Process the RGB frame to detect and track hands
    result = hands.process(frame_rgb)

    # If hands are detected in the frame
    if result.multi_hand_landmarks:
        # Iterate through all detected hands
        for hand_landmarks in result.multi_hand_landmarks:
            # Get the frame dimensions (height and width)
            h, w, _ = frame.shape

            # Calculate the pixel coordinates of the tip of the index finger
            cx, cy = int(hand_landmarks.landmark[mp_hands.HandLandmark.INDEX_FINGER_TIP].x * w), \
                     int(hand_landmarks.landmark[mp_hands.HandLandmark.INDEX_FINGER_TIP].y * h)

            # Calculate the pixel coordinates of the tip of the middle finger
            mx, my = int(hand_landmarks.landmark[mp_hands.HandLandmark.MIDDLE_FINGER_TIP].x * w), \
                     int(hand_landmarks.landmark[mp_hands.HandLandmark.MIDDLE_FINGER_TIP].y * h)

            # Draw a circle at the index finger tip on the original frame
            cv2.circle(frame, (cx, cy), 10, (0, 255, 0), -1)  # Green circle with radius 10

    # Display the processed frame in a window named 'Webcam Feed'
    cv2.imshow('Webcam Feed', frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break  # Exit the loop if 'q' is pressed

# Release the webcam resources for other programs
cap.release()
cv2.destroyAllWindows()


Step 3: Track the Finger Position and Draw

We will track the index finger and only allow drawing when the index and middle fingertips are separated by more than a threshold distance.

We will maintain a list of index-finger coordinates for drawing on the original frame, and each time the middle finger comes close enough we append None to that list to mark a break in the line.

import cv2  
import mediapipe as mp  
import math  

# Initialize the MediaPipe Hands module
mp_hands = mp.solutions.hands
hands = mp_hands.Hands(
    min_detection_confidence=0.9,  
    min_tracking_confidence=0.9   
)

# Variables to store drawing points and reset state
draw_points = []  # A list to store points where lines should be drawn
reset_drawing = False  # Flag to indicate when the drawing should reset

# Brush settings
brush_color = (0, 0, 255)  # Red in BGR format
brush_size = 5             # Line thickness in pixels


cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()  
    if not ret:
        break 

    frame = cv2.flip(frame, 1) 
    frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) 
    result = hands.process(frame_rgb)  

    # If hands are detected
    if result.multi_hand_landmarks:
        for hand_landmarks in result.multi_hand_landmarks:
            h, w, _ = frame.shape  # Get the frame dimensions (height and width)

            # Get the coordinates of the index finger tip
            cx, cy = int(hand_landmarks.landmark[mp_hands.HandLandmark.INDEX_FINGER_TIP].x * w), \
                     int(hand_landmarks.landmark[mp_hands.HandLandmark.INDEX_FINGER_TIP].y * h)

            # Get the coordinates of the middle finger tip
            mx, my = int(hand_landmarks.landmark[mp_hands.HandLandmark.MIDDLE_FINGER_TIP].x * w), \
                     int(hand_landmarks.landmark[mp_hands.HandLandmark.MIDDLE_FINGER_TIP].y * h)

            # Calculate the distance between the index and middle finger tips
            distance = math.sqrt((mx - cx) ** 2 + (my - cy) ** 2)

            # Threshold distance to determine if the fingers are close (used to reset drawing)
            threshold = 40 

            # If the fingers are far apart
            if distance > threshold:
                if reset_drawing:  # Check if the drawing was previously reset
                    draw_points.append(None)  # None means no line
                    reset_drawing = False  
                draw_points.append((cx, cy))  # Add the current point to the list for drawing
            else:  # If the fingers are close together set the flag to reset drawing
                reset_drawing = True

    # Draw the lines between points in the `draw_points` list
    for i in range(1, len(draw_points)):
        if draw_points[i - 1] and draw_points[i]:  # Only draw if both points are valid
            cv2.line(frame, draw_points[i - 1], draw_points[i], brush_color, brush_size)


    cv2.imshow('Webcam Feed', frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release the webcam and close all OpenCV windows
cap.release()
cv2.destroyAllWindows()


Step 4: Improvements

  • Use OpenCV's rectangle() and putText() as on-screen buttons to toggle the brush size and color (see the sketch after this list).
  • Add an option to save the frame.
  • Add an eraser tool that modifies the draw_points array with new coordinates.
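Here is a minimal sketch of the first two ideas, assuming the variables from the Step 3 loop (frame, result, brush_color, cx, cy). The button coordinates, label, and the 's' save key are arbitrary choices for illustration, not part of the original project:

import cv2

# Hypothetical on-screen button region (coordinates chosen arbitrarily)
BUTTON_TL, BUTTON_BR = (10, 10), (110, 60)

def draw_color_button(frame):
    # Draw a filled, labelled rectangle that acts as a "button"
    cv2.rectangle(frame, BUTTON_TL, BUTTON_BR, (50, 50, 50), -1)
    cv2.putText(frame, 'Color', (20, 45),
                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 255, 255), 2)

def finger_on_button(cx, cy):
    # True when the index finger tip lies inside the button rectangle
    return (BUTTON_TL[0] <= cx <= BUTTON_BR[0]
            and BUTTON_TL[1] <= cy <= BUTTON_BR[1])

# Inside the Step 3 loop, after the lines have been drawn:
#     draw_color_button(frame)
#     if result.multi_hand_landmarks and finger_on_button(cx, cy):
#         brush_color = (255, 0, 0)  # e.g. switch the brush to blue
#
# And replace the existing waitKey check with one that can also save the frame:
#     key = cv2.waitKey(1) & 0xFF
#     if key == ord('s'):
#         cv2.imwrite('drawing.png', frame)  # save the current canvas
#     elif key == ord('q'):
#         break

Note that checking the button on every frame would switch the color repeatedly while the finger hovers over it, so in practice you would want a toggle state or a short cooldown before reacting again.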


Source: dev.to