This article introduces, through a worked example, how to develop a Kinect somatosensory game with HTML5. I hope it helps anyone who needs it.
Developing Kinect somatosensory games with HTML5: an example application
1. Introduction
What game are we going to make?
At the TGC2016 exhibition in Chengdu not long ago, we developed a somatosensory game for "Naruto Mobile", simulating the mobile game chapter "Nine-Tails Attack": the player becomes the Fourth Hokage and duels the Nine-Tails. The booth attracted a large number of players. On the surface, this game looks no different from other somatosensory experiences, but it actually runs entirely in the Chrome browser. In other words, we only need to master the corresponding front-end technology to develop a web-based somatosensory game on top of Kinect.
2. Implementation Principle
What is the implementation idea?
The working principle of developing a Kinect-based somatosensory game with HTML5 is actually very simple: Kinect collects player and environment data, such as the human skeleton, and some mechanism makes that data accessible to the browser.
1. Collect data
Kinect has three lenses: the middle one is an ordinary camera that captures color images, while the left and right lenses obtain depth data through infrared. We use the SDK provided by Microsoft to read the following types of data:
Color data: the color image;
Depth data: depth-of-field information;
Skeleton data: human skeleton positions computed from the data above.
2. Make the Kinect data accessible to the browser
The frameworks I have tried or looked into all work basically the same way: a socket connection lets the browser process exchange data with a server:
Kinect-HTML5 builds the server side with C# and provides color, depth, and skeleton data;
ZigFu supports HTML5, U3D, and Flash development; its API is fairly complete, but it appears to be paid;
DepthJS provides data access in the form of a browser plug-in;
Node-Kinect2 builds the server side with Node.js; it provides fairly complete data and many examples.
I finally chose Node-Kinect2. Although it has no documentation, it comes with many examples, it is built on Node.js, which front-end engineers already know, and its author responds quickly to feedback.
Kinect: captures player data, such as depth images and color images;
Node-Kinect2: reads the data from Kinect and post-processes it;
Browser: listens on the port exposed by the Node application, receives the player data, and implements the game logic.
3. Preparation
You must first buy a Kinect
1. System requirements:
These are hard requirements; I wasted a lot of time in environments that did not meet them:
USB 3.0
A graphics card that supports DX11
Windows 8 or above
A browser that supports Web Sockets
And, of course, the Kinect v2 sensor itself
2. Environment setup (a quick smoke test follows these steps):
Connect to Kinect v2
Install KinectSDK-v2.0
Install Node.js
Install Node-Kinect2
npm install kinect2
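Before going further, it is worth verifying that everything is wired up. Below is a minimal smoke-test sketch; the file name check-kinect.js is just an assumption, and it only uses the open/close calls documented in section 5:

// check-kinect.js -- quick sanity check of the environment (hypothetical file name)
var Kinect2 = require('kinect2');
var kinect = new Kinect2();

if (kinect.open()) {
    console.log('Kinect v2 opened successfully');
    kinect.close();
} else {
    console.log('Could not open Kinect v2 -- check USB 3.0, the SDK, and the drivers');
}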
4. Example demonstration
Nothing beats a working example!
As shown in the figure below, this example obtains the human skeleton, highlights the mid-spine joint, and recognizes hand gestures:
1. Server side
Create a web server and send the skeleton data to the browser side. The code is as follows:
var Kinect2 = require('../../lib/kinect2'),
    express = require('express'),
    app = express(),
    server = require('http').createServer(app),
    io = require('socket.io').listen(server);

var kinect = new Kinect2();

// Open the Kinect
if (kinect.open()) {
    // Listen on port 8000
    server.listen(8000);
    // Serve the page at the root path
    app.get('/', function (req, res) {
        res.sendFile(__dirname + '/public/index.html');
    });
    // Send the skeleton data to the browser
    kinect.on('bodyFrame', function (bodyFrame) {
        io.sockets.emit('bodyFrame', bodyFrame);
    });
    // Start reading skeleton data
    kinect.openBodyReader();
}
2. Browser side
The browser side receives the skeleton data and draws it with canvas. The key code is as follows:
var socket = io.connect('/');
var ctx = canvas.getContext('2d');

socket.on('bodyFrame', function (bodyFrame) {
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    var index = 0;
    // Iterate over all bodies in the frame
    bodyFrame.bodies.forEach(function (body) {
        if (body.tracked) {
            for (var jointType in body.joints) {
                var joint = body.joints[jointType];
                ctx.fillStyle = colors[index];
                // Highlight the mid-spine joint
                if (jointType == 1) {
                    ctx.fillStyle = colors[2];
                }
                ctx.fillRect(joint.depthX * 512, joint.depthY * 424, 10, 10);
            }
            // Recognize the left and right hand gestures
            updateHandState(body.leftHandState, body.joints[7]);
            updateHandState(body.rightHandState, body.joints[11]);
            index++;
        }
    });
});
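The snippet above assumes a canvas element, a colors palette, and an updateHandState helper defined elsewhere on the page. Those pieces are not shown in the original example, so here is one possible sketch of them; the element id, the palette, and the drawing logic are our own assumptions:

var canvas = document.getElementById('bodyCanvas'); // assumed <canvas width="512" height="424">
// One color per tracked body (up to 6 players)
var colors = ['#ff0000', '#00ff00', '#0000ff', '#ffff00', '#ff00ff', '#00ffff'];

// Outline a hand joint when the hand is closed (hand state codes are listed in section 5)
function updateHandState(handState, joint) {
    var ctx = canvas.getContext('2d');
    if (handState === 3) { // 3 = closed (fist)
        ctx.beginPath();
        ctx.arc(joint.depthX * 512, joint.depthY * 424, 15, 0, Math.PI * 2);
        ctx.strokeStyle = '#ff0000';
        ctx.stroke();
    }
}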
With just a few lines of code, we have captured the player's skeleton. Anyone with basic JavaScript knowledge should be able to follow it, but some questions remain: what data can we obtain, how do we get it, and what are the joints called? Node-Kinect2 ships no documentation that answers them.
5. Development Documentation
Node-Kinect2 provides no documentation, so I have compiled my own notes from testing, as follows:
1. Data types the server can provide (a usage sketch follows the table):
kinect.on('bodyFrame', function(bodyFrame){}); // what other frame types are there?
Frame type | Description |
bodyFrame | skeleton data |
infraredFrame | infrared data |
longExposureInfraredFrame | similar to infraredFrame, but apparently higher-precision, post-processed data |
rawDepthFrame | unprocessed depth data |
depthFrame | depth data |
colorFrame | color image |
multiSourceFrame | all of the above |
audio | audio data (untested) |
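For example, switching the server from skeleton data to depth data only means swapping the event and the reader. A minimal sketch, assuming the frame arrives as a raw buffer as the node-kinect2 examples suggest:

// Server side: stream depth frames instead of skeleton frames
kinect.on('depthFrame', function (depthFrame) {
    // depthFrame covers the 512x424 depth image; base64-encode it for the socket
    io.sockets.emit('depthFrame', depthFrame.toString('base64'));
});
kinect.openDepthReader();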
2. Joint types (a usage sketch follows the table):
body.joints[11] // which joints are available?
Index | JointType | Joint name |
0 | spineBase | base of the spine |
1 | spineMid | middle of the spine |
2 | neck | neck |
3 | head | head |
4 | shoulderLeft | left shoulder |
5 | elbowLeft | left elbow |
6 | wristLeft | left wrist |
7 | handLeft | left hand |
8 | shoulderRight | right shoulder |
9 | elbowRight | right elbow |
10 | wristRight | right wrist |
11 | handRight | right hand |
12 | hipLeft | left hip |
13 | kneeLeft | left knee |
14 | ankleLeft | left ankle |
15 | footLeft | left foot |
16 | hipRight | right hip |
17 | kneeRight | right knee |
18 | ankleRight | right ankle |
19 | footRight | right foot |
20 | spineShoulder | spine at the base of the neck |
21 | handTipLeft | tip of the left hand (index/middle/ring/little fingers) |
22 | thumbLeft | left thumb |
23 | handTipRight | tip of the right hand |
24 | thumbRight | right thumb |
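With this table, the browser code from section 4 becomes much more readable. For instance, a small sketch that tracks only the head joint (the logging is just for illustration):

var HEAD = 3; // see the table above
socket.on('bodyFrame', function (bodyFrame) {
    bodyFrame.bodies.forEach(function (body) {
        if (body.tracked) {
            var head = body.joints[HEAD];
            // depthX/depthY are normalized to the 512x424 depth image
            console.log('head at', head.depthX * 512, head.depthY * 424);
        }
    });
});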
3. Hand states. In my tests the recognition is not very accurate, so use them only when precision requirements are low (a gesture-trigger sketch follows the table):
Value | HandState | Meaning |
0 | unknown | cannot be recognized |
1 | notTracked | not detected |
2 | open | open palm |
3 | closed | fist |
4 | lasso | "scissors" pose, index and middle fingers held together |
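As a usage sketch, this is roughly how a hand state could trigger the start of the game, as in step 1.1 of section 6. Both the double-open-palm condition and the startGame entry point are our assumptions, not the original game's code:

var OPEN = 2; // open palm, see the table above
socket.on('bodyFrame', function (bodyFrame) {
    bodyFrame.bodies.forEach(function (body) {
        // Require both palms open to reduce false positives
        if (body.tracked && body.leftHandState === OPEN && body.rightHandState === OPEN) {
            startGame(); // hypothetical entry point of the game
        }
    });
});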
4. Skeleton (body) data
body [object] {
    bodyIndex [number]: index; up to 6 people can be tracked
    joints [array]: skeleton joints, each with coordinate and color information
    leftHandState [number]: left hand state
    rightHandState [number]: right hand state
    tracked [boolean]: whether the body is tracked
    trackingId
}
5. The kinect object
Method | Description |
on | listen for data |
open | open the Kinect |
close | close the Kinect |
openBodyReader | start reading skeleton data |
open**Reader | similar methods for reading the other frame types |
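One practical use of close is releasing the sensor when the Node process is interrupted; a small sketch:

// Release the Kinect when the server is stopped with Ctrl+C
process.on('SIGINT', function () {
    kinect.close();
    process.exit();
});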
6. Practical Summary
Lessons from the Naruto somatosensory game
Next, I will summarize some of the problems we ran into while developing the TGC2016 "Naruto Mobile" somatosensory game.
1. Before the walkthrough, let's look at the game flow:
1.1 A hand gesture triggers the start of the game;
1.2 The player, as the Fourth Hokage, runs left and right to dodge the Nine-Tails' attacks;
1.3 Making the "Secret Technique" hand sign triggers the Fourth's ultimate move;
1.4 The player scans a QR code to get their on-site photo.
2. Server side
The game needs the players' skeleton data (movement, gestures) and the color image data (a particular gesture triggers a photo), so we need to send both to the client. Note that the color image data is very large and must be compressed.
var zlib = require('zlib'); // used below to compress the color frame

var emitColorFrame = false;

io.sockets.on('connection', function (socket) {
    socket.on('startColorFrame', function (data) {
        emitColorFrame = true;
    });
});

kinect.on('multiSourceFrame', function (frame) {
    // Send the player's skeleton data
    io.sockets.emit('bodyFrame', frame.body);

    // Take the player's photo
    if (emitColorFrame) {
        var compression = 1;
        var origWidth = 1920;
        var origHeight = 1080;
        var origLength = 4 * origWidth * origHeight;
        var compressedWidth = origWidth / compression;
        var compressedHeight = origHeight / compression;
        var resizedLength = 4 * compressedWidth * compressedHeight;
        var resizedBuffer = new Buffer(resizedLength);
        // ...
        // The photo is too large; compress it to improve transfer performance
        zlib.deflate(resizedBuffer, function (err, result) {
            if (!err) {
                var buffer = result.toString('base64');
                io.sockets.emit('colorFrame', buffer);
            }
        });
        emitColorFrame = false;
    }
});

kinect.openMultiSourceReader({
    frameTypes: Kinect2.FrameType.body | Kinect2.FrameType.color
});
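On the browser side, triggering a photo then only requires emitting the event the server listens for. A sketch (the hand-off to the worker is described in 3.1 below):

// Browser side: request one color frame and pass it on for decompression
socket.emit('startColorFrame');
socket.on('colorFrame', function (compressedData) {
    worker.postMessage({ message: 'processImageData', imageBuffer: compressedData });
});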
3. Client side
The client-side logic is fairly complex; let's walk through the key steps.
3.1 When taking the player's photo, the amount of data to process is large. To keep the page from stuttering, we use a Web Worker:
(function () {
    importScripts('pako.inflate.min.js');

    var imageData;

    function init() {
        addEventListener('message', function (event) {
            switch (event.data.message) {
                case 'setImageData':
                    imageData = event.data.imageData;
                    break;
                case 'processImageData':
                    processImageData(event.data.imageBuffer);
                    break;
            }
        });
    }

    function processImageData(compressedData) {
        // Decompress the base64-encoded, deflated frame
        var imageBuffer = pako.inflate(atob(compressedData));
        var newPixelData = new Uint8Array(imageBuffer);
        var imageDataSize = imageData.data.length;
        for (var i = 0; i < imageDataSize; i++) {
            imageData.data[i] = newPixelData[i];
        }
        // Read each pixel's RGB values (not used further in this excerpt)
        for (var x = 0; x < 1920; x++) {
            for (var y = 0; y < 1080; y++) {
                var idx = (x + y * 1920) * 4;
                var r = imageData.data[idx + 0];
                var g = imageData.data[idx + 1];
                var b = imageData.data[idx + 2];
            }
        }
        self.postMessage({ message: 'imageReady', imageData: imageData });
    }

    init();
})();
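For completeness, the main thread that talks to this worker might look like the following sketch; the file name color-worker.js and the canvas id are assumptions:

var photoCanvas = document.getElementById('photoCanvas'); // assumed 1920x1080 canvas
var photoCtx = photoCanvas.getContext('2d');
var worker = new Worker('color-worker.js');

// Hand the worker an ImageData of the right size, once, up front
worker.postMessage({ message: 'setImageData', imageData: photoCtx.createImageData(1920, 1080) });

// Draw the finished frame when the worker reports back
worker.addEventListener('message', function (event) {
    if (event.data.message === 'imageReady') {
        photoCtx.putImageData(event.data.imageData, 0, 0);
    }
});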
3.2 When the machine is connected to a projector, a large rendering area can cause a white screen; the browser's hardware acceleration needs to be turned off.
3.3 The venue light is dim and other players cause interference, so the tracked motion trail may jitter. We need to filter out this noise: when a sudden, very large displacement appears, the sample is discarded.
var tracks = this.tracks;
var len = tracks.length;
// Filter out noisy samples: drop sudden large jumps
if (tracks[len - 1] !== window.undefined) {
    if (Math.abs(n - tracks[len - 1]) > 0.2) {
        return;
    }
}
this.tracks.push(n);
3.4 When the player stands still, swaying only slightly left and right, we treat them as standing:
// Keep only the last 5 samples
if (this.tracks.length > 5) {
    this.tracks.shift();
} else {
    return;
}
// Total displacement across the window
var dis = 0;
for (var i = 1; i < this.tracks.length; i++) {
    dis += this.tracks[i] - this.tracks[i - 1];
}
if (Math.abs(dis) < 0.01) {
    this.stand();
} else {
    if (this.tracks[4] > this.tracks[3]) {
        this.turnRight();
    } else {
        this.turnLeft();
    }
    this.run();
}
7. Outlook
1. Developing Kinect somatosensory games with HTML5 lowers the technical barrier: front-end engineers can build somatosensory games with ease;
2. A large number of frameworks can be applied, such as jQuery, CreateJS, and Three.js (three different rendering approaches);
3. The room for imagination is unlimited: think of somatosensory games combined with WebAR, Web Audio, or mobile devices... there is so much to explore. Exciting, isn't it?