Let's see how good you look! A WeChat official account built with Python

php是最好的语言
Release: 2018-07-25 13:56:17

This post walks through developing a Python-based WeChat official account for appearance detection. We analyze the user's picture through Tencent's AI platform and return the annotated result to the user. Let's try out the official-account beauty test together.

Result screenshots

[Three screenshots of the processed pictures appeared here in the original post.]

## 1. Access the Tencent AI platform

Let’s first take a look at the description of the official face detection and analysis interface:

It detects the location of every face (Face) in a given picture (Image) and the corresponding facial attributes. The location is given as (x, y, w, h); the facial attributes include gender, age, expression, beauty, glasses, and pose (pitch, roll, yaw).

The request parameters include the following:

  • app_id — application identifier; we get it after registering on the AI platform

  • time_stamp — timestamp

  • nonce_str — random string

  • sign — signature, which we have to compute ourselves

  • image — the image to be detected (1 MB limit)

  • mode — detection mode

1. Interface authentication: construct the request parameters

The official documentation gives the following method for computing the interface signature (a condensed Python sketch follows the list):

  1. Sort the request parameter pairs in ascending dictionary order by key to obtain an ordered list of parameter pairs N

  2. Splice the parameter pairs in list N into a string in URL key-value format to obtain string T (for example: key1=value1&key2=value2). The value part must be URL-encoded during splicing, and the URL encoding must use uppercase hex letters (for example %E8 rather than %e8)

  3. Append the application key, using app_key as the key name, as one more URL key-value pair to the end of string T to obtain string S (for example: key1=value1&key2=value2&app_key=Key)

  4. Perform an MD5 operation on string S and convert all characters of the resulting MD5 value to uppercase to obtain the interface request signature
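As a quick reference, here is a condensed sketch of those four steps. The helper name compute_sign and the app_key value are placeholders for illustration; the same logic appears in the get_params() function further below.

import hashlib
from urllib.parse import urlencode


def compute_sign(params, app_key):
    '''Steps 1-4 above: sort, URL-encode, append app_key, MD5 and uppercase.'''
    pairs = sorted(params.items())                         # 1. sort by key
    pairs.append(('app_key', app_key))                     # 3. append app_key at the end
    raw = urlencode(pairs)                                 # 2. URL-encode (Python uses uppercase hex escapes)
    return hashlib.md5(raw.encode()).hexdigest().upper()   # 4. MD5, then uppercase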

2. Request the interface address

We use requests to send the request to the interface URL; the detection result comes back as JSON.

Install requests with pip install requests.

3. Process the returned information

Next we process the returned information, draw it onto the picture, and save the processed picture. For this we use the opencv and pillow libraries.

Install them with pip install pillow and pip install opencv-python.
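The only fiddly part is converting between the two libraries: OpenCV stores images as BGR NumPy arrays, while Pillow works in RGB and can draw Chinese text with a TrueType font. Here is a minimal sketch of the round trip that the access_api function below relies on (input.jpg and output.jpg are placeholder file names; yahei.ttf is the same font file the project uses).

import cv2
import numpy as np
from PIL import Image, ImageDraw, ImageFont

frame = cv2.imread('input.jpg')                                    # OpenCV: BGR ndarray
pil_img = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))  # convert to RGB for Pillow
draw = ImageDraw.Draw(pil_img)
font = ImageFont.truetype('yahei.ttf', 20)                         # a font that contains Chinese glyphs
draw.text((10, 10), '测试', fill=(76, 176, 80), font=font)          # draw Chinese text
cv2img = cv2.cvtColor(np.array(pil_img), cv2.COLOR_RGB2BGR)        # back to BGR for OpenCV
cv2.imwrite('output.jpg', cv2img)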

Now let's write the code. We create a new file, face_id.py, which connects to the AI platform and returns the processed image data.

import time
import random
import base64
import hashlib
import requests
from urllib.parse import urlencode
import cv2
import numpy as np
from PIL import Image, ImageDraw, ImageFont
import os


# 1. Compute the API signature and build the request parameters

def random_str():
    '''Generate the random string nonce_str'''
    letters = 'abcdefghijklmnopqrstuvwxyz'
    r = ''
    for i in range(15):
        index = random.randint(0, 25)
        r += letters[index]
    return r


def image(name):
    with open(name, 'rb') as f:
        content = f.read()
    return base64.b64encode(content)


def get_params(img):
    '''Build the request parameters for the API call, compute the sign
    authentication value, and return the final parameter dict'''
    params = {
        'app_id': '1106860829',
        'time_stamp': str(int(time.time())),
        'nonce_str': random_str(),
        'image': img,
        'mode': '0'
    }

    sort_dict = sorted(params.items(), key=lambda item: item[0], reverse=False)  # sort by key
    sort_dict.append(('app_key', 'P8Gt8nxi6k8vLKbS'))  # append the app_key
    rawtext = urlencode(sort_dict).encode()  # URL-encode
    sha = hashlib.md5()
    sha.update(rawtext)
    md5text = sha.hexdigest().upper()  # the uppercase MD5 digest is the sign value
    params['sign'] = md5text  # add it to the request parameters
    return params

# 2. Call the API URL


def access_api(img):
    frame = cv2.imread(img)
    nparry_encode = cv2.imencode('.jpg', frame)[1]
    data_encode = np.array(nparry_encode)
    img_encode = base64.b64encode(data_encode)  # encode the picture as base64
    url = 'https://api.ai.qq.com/fcgi-bin/face/face_detectface'
    res = requests.post(url, get_params(img_encode)).json()  # request the URL and get the JSON response
    # draw the returned information onto the picture
    if res['ret'] == 0:  # 0 means the request succeeded
        pil_img = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))  # convert from OpenCV to PIL so we can draw Chinese text
        draw = ImageDraw.Draw(pil_img)
        for obj in res['data']['face_list']:
            img_width = res['data']['image_width']  # image width
            img_height = res['data']['image_height']  # image height
            # print(obj)
            x = obj['x']  # x coordinate of the top-left corner of the face box
            y = obj['y']  # y coordinate of the top-left corner of the face box
            w = obj['width']  # width of the face box
            h = obj['height']  # height of the face box
            # build the display text from the returned values (the labels drawn on the picture are Chinese)
            if obj['glass'] == 1:  # glasses: 有 = yes, 无 = no
                glass = '有'
            else:
                glass = '无'
            if obj['gender'] >= 70:  # the gender value runs from 0 (female) to 100 (male)
                gender = '男'  # male
            elif 50 <= obj['gender'] < 70:
                gender = '娘'  # girly
            elif obj['gender'] < 30:
                gender = '女'  # female
            else:
                gender = '女汉子'  # tomboy
            # expression runs from 0 to 100 and measures the degree of smiling;
            # the labels below are Chinese idioms ranging from beaming (100) down to glum (0)
            if 90 < obj['expression'] <= 100:
                expression = '一笑倾城'
            elif 80 < obj['expression'] <= 90:
                expression = '心花怒放'
            elif 70 < obj['expression'] <= 80:
                expression = '兴高采烈'
            elif 60 < obj['expression'] <= 70:
                expression = '眉开眼笑'
            elif 50 < obj['expression'] <= 60:
                expression = '喜上眉梢'
            elif 40 < obj['expression'] <= 50:
                expression = '喜气洋洋'
            elif 30 < obj['expression'] <= 40:
                expression = '笑逐颜开'
            elif 20 < obj['expression'] <= 30:
                expression = '似笑非笑'
            elif 10 < obj['expression'] <= 20:
                expression = '半嗔半喜'
            elif 0 <= obj['expression'] <= 10:
                expression = '黯然伤神'
            delt = h // 5  # vertical spacing between the lines of text
            # write the text onto the picture
            # (labels: 性别 = gender, 年龄 = age, 表情 = expression, 魅力 = charm, 眼镜 = glasses)
            if len(res['data']['face_list']) > 1:  # with several faces, write the info inside each face box
                font = ImageFont.truetype('yahei.ttf', w // 8, encoding='utf-8')  # download the font file in advance
                draw.text((x + 10, y + 10), '性别 :' + gender, (76, 176, 80), font=font)
                draw.text((x + 10, y + 10 + delt * 1), '年龄 :' + str(obj['age']), (76, 176, 80), font=font)
                draw.text((x + 10, y + 10 + delt * 2), '表情 :' + expression, (76, 176, 80), font=font)
                draw.text((x + 10, y + 10 + delt * 3), '魅力 :' + str(obj['beauty']), (76, 176, 80), font=font)
                draw.text((x + 10, y + 10 + delt * 4), '眼镜 :' + glass, (76, 176, 80), font=font)
            elif img_width - x - w < 170:  # if the space right of the box is too narrow, write inside the box so the text fits
                font = ImageFont.truetype('yahei.ttf', w // 8, encoding='utf-8')
                draw.text((x + 10, y + 10), '性别 :' + gender, (76, 176, 80), font=font)
                draw.text((x + 10, y + 10 + delt * 1), '年龄 :' + str(obj['age']), (76, 176, 80), font=font)
                draw.text((x + 10, y + 10 + delt * 2), '表情 :' + expression, (76, 176, 80), font=font)
                draw.text((x + 10, y + 10 + delt * 3), '魅力 :' + str(obj['beauty']), (76, 176, 80), font=font)
                draw.text((x + 10, y + 10 + delt * 4), '眼镜 :' + glass, (76, 176, 80), font=font)
            else:  # otherwise write the info to the right of the face box
                font = ImageFont.truetype('yahei.ttf', 20, encoding='utf-8')
                draw.text((x + w + 10, y + 10), '性别 :' + gender, (76, 176, 80), font=font)
                draw.text((x + w + 10, y + 10 + delt * 1), '年龄 :' + str(obj['age']), (76, 176, 80), font=font)
                draw.text((x + w + 10, y + 10 + delt * 2), '表情 :' + expression, (76, 176, 80), font=font)
                draw.text((x + w + 10, y + 10 + delt * 3), '魅力 :' + str(obj['beauty']), (76, 176, 80), font=font)
                draw.text((x + w + 10, y + 10 + delt * 4), '眼镜 :' + glass, (76, 176, 80), font=font)

            draw.rectangle((x, y, x + w, y + h), outline="#4CB050")  # draw the face box
        # after annotating every face, convert back to OpenCV format and save
        cv2img = cv2.cvtColor(np.array(pil_img), cv2.COLOR_RGB2BGR)
        cv2.imwrite('faces/{}'.format(os.path.basename(img)), cv2img)  # save to the faces folder (it must exist)
        return '检测成功'  # "detection succeeded"
    else:
        return '检测失败'  # "detection failed"

At this point, access to the face detection interface and the image processing are done. Once we receive a picture from a user, we call this function and return the processed picture.
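As a quick local check, you can call the function directly. This assumes a sample picture at images/test.jpg (a placeholder path), the yahei.ttf font file, and an existing faces/ directory.

from face_id import access_api

result = access_api('images/test.jpg')  # path to a sample picture
print(result)  # 检测成功 ("detection succeeded") or 检测失败 ("detection failed")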

## 2. Return the picture to the user

After receiving the user's picture, the following steps are required:

1. Save the picture

After receiving the user's picture, we first need to save it; only then can we call the face analysis interface and pass the image along. We write an img_download function to download the image; see the code below for details.

2. Call the face analysis interface

After downloading the image, call the interface function in the face_id.py file to get the processed image.

3. Upload the picture

The detection result is a new picture. To send a picture to the user we need a Media_ID, and to obtain a Media_ID we first have to upload the picture as temporary material. So we need an img_upload function to upload images, and uploading requires an access_token, which we obtain through a separate function.

To obtain the access_token, we must add our own IP address to the whitelist, otherwise it cannot be retrieved. Log in to "WeChat Public Platform - Development - Basic Configuration" in advance and add the server's IP address to the IP whitelist. You can check your machine's public IP at http://ip.qq.com/...

Let's write the code. We create a new file, utils.py, to download and upload pictures.

import requests
import json
import threading
import time
import os

token = ''
app_id = 'wxfc6adcdd7593a712'
secret = '429d85da0244792be19e0deb29615128'


def img_download(url, name):
    r = requests.get(url)
    with open('images/{}-{}.jpg'.format(name, time.strftime("%Y_%m_%d%H_%M_%S", time.localtime())), 'wb') as fd:
        fd.write(r.content)
    if os.path.getsize(fd.name) >= 1048576:  # pictures over 1 MB are rejected by the face API
        return 'large'
    # print('namename', os.path.basename(fd.name))
    return os.path.basename(fd.name)


def get_access_token(appid, secret):
    '''Fetch the access_token and refresh it every 100 minutes'''

    url = 'https://api.weixin.qq.com/cgi-bin/token?grant_type=client_credential&appid={}&secret={}'.format(appid, secret)
    r = requests.get(url)
    parse_json = json.loads(r.text)
    global token
    token = parse_json['access_token']
    global timer
    timer = threading.Timer(6000, get_access_token, args=(appid, secret))  # re-run with the same credentials
    timer.start()


def img_upload(mediaType, name):
    global token
    url = "https://api.weixin.qq.com/cgi-bin/media/upload?access_token=%s&type=%s" % (token, mediaType)
    files = {'media': open('{}'.format(name), 'rb')}
    r = requests.post(url, files=files)
    parse_json = json.loads(r.text)
    return parse_json['media_id']


get_access_token(app_id, secret)

4. Return to the user

We simply adjust the logic that runs after a picture is received: download it, run face detection, upload the result to obtain a Media_ID, and return the image to the user. Here is the code of connect.py:

import falcon
from wechatpy.utils import check_signature
from wechatpy.exceptions import InvalidSignatureException
from wechatpy import parse_message
from wechatpy.replies import TextReply, ImageReply

from utils import img_download, img_upload
from face_id import access_api


class Connect(object):

    def on_get(self, req, resp):
        # WeChat server verification: echo back echostr if the signature checks out
        query_string = req.query_string
        query_list = query_string.split('&')
        b = {}
        for i in query_list:
            b[i.split('=')[0]] = i.split('=')[1]

        try:
            check_signature(token='lengxiao', signature=b['signature'], timestamp=b['timestamp'], nonce=b['nonce'])
            resp.body = (b['echostr'])
        except InvalidSignatureException:
            pass
        resp.status = falcon.HTTP_200

    def on_post(self, req, resp):
        xml = req.stream.read()
        msg = parse_message(xml)
        if msg.type == 'text':
            reply = TextReply(content=msg.content, message=msg)
            xml = reply.render()
            resp.body = (xml)
            resp.status = falcon.HTTP_200
        elif msg.type == 'image':
            name = img_download(msg.image, msg.source)  # download the picture
            if name == 'large':  # img_download returns 'large' for pictures over 1 MB
                r = '检测失败'
            else:
                r = access_api('images/' + name)
            if r == '检测成功':  # detection succeeded
                media_id = img_upload('image', 'faces/' + name)  # upload the picture to get a media_id
                reply = ImageReply(media_id=media_id, message=msg)
            else:
                # reply text: "Face detection failed, please upload a clear face photo under 1 MB"
                reply = TextReply(content='人脸检测失败,请上传1M以下人脸清晰的照片', message=msg)
            xml = reply.render()
            resp.body = (xml)
            resp.status = falcon.HTTP_200


app = falcon.API()
connect = Connect()
app.add_route('/connect', connect)
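connect.py only defines the WSGI app object, so it still needs a WSGI server in front of it. A minimal way to serve it, assuming falcon, wechatpy, and gunicorn are installed (any WSGI server works) and the official account's server URL is configured as http://your-domain/connect:

pip install gunicorn
gunicorn connect:app -b 0.0.0.0:80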

Now the work is done and our official account can score people's looks. I originally planned to enable it on my own official account, but the problems below remain, so I didn't.

  1. WeChat's mechanism requires our program to respond within 5 seconds, otherwise the user sees "The service provided by the official account is faulty". Image processing, however, is sometimes slow and often exceeds 5 seconds. The correct approach is to return an empty string immediately to acknowledge the request, process the image in a separate thread, and send the result to the user through the customer service interface once it is ready (see the sketch after this list). Unfortunately, unverified official accounts have no access to the customer service interface, so there is nothing to be done: if processing takes more than 5 seconds, an error is reported.

  2. The menu cannot be customized. Once custom development is enabled, the menu also has to be managed programmatically, but unverified official accounts do not have permission to configure the menu through the API and can only configure it in the WeChat back end.
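For reference, here is a rough sketch of the asynchronous pattern described in point 1. It is an illustration of the idea, not code from the original project: it assumes a verified account (so the customer service message API is available), reuses the token from utils.py, and the helpers process_and_push and handle_image_message are hypothetical names.

import json
import threading

import falcon
import requests

import utils
from face_id import access_api
from utils import img_download, img_upload


def process_and_push(image_url, openid):
    '''Download, detect and upload in the background, then push the result
    to the user through the customer service message API.'''
    name = img_download(image_url, openid)
    if name != 'large' and access_api('images/' + name) == '检测成功':
        media_id = img_upload('image', 'faces/' + name)
        url = 'https://api.weixin.qq.com/cgi-bin/message/custom/send?access_token={}'.format(utils.token)
        payload = {'touser': openid, 'msgtype': 'image', 'image': {'media_id': media_id}}
        requests.post(url, data=json.dumps(payload, ensure_ascii=False).encode('utf-8'))


def handle_image_message(msg, resp):
    '''Replacement for the image branch of on_post: acknowledge immediately,
    then do the slow work in a separate thread.'''
    threading.Thread(target=process_and_push, args=(msg.image, msg.source)).start()
    resp.body = ''  # an empty reply acknowledges the message within the 5-second window
    resp.status = falcon.HTTP_200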

So I haven't enabled this program on my own official account, but if you have a verified official account, you can try building all kinds of fun features like this.
