I've recently been learning modern OpenGL through https://learnopengl-cn.readth... and I'd like to use OpenGL to render Mars together with the relative motion of a ground station on its surface and a satellite near the planet.
For context, I'm on Windows 10, working in VS2015 with the GLFW and GLEW libraries.
First I import a Mars model in OBJ format with Assimp, and through the model matrix I scale Mars up by a factor of 4 and rotate it slowly around the y-axis over time.
glm::mat4 model(1.0f); // start from the identity matrix (newer GLM versions no longer default-construct an identity)
model = glm::translate(model, glm::vec3(0.0f, 0.0f, 0.0f));
model = glm::rotate(model, (GLfloat)glfwGetTime() * 0.050f, glm::vec3(0.0f, 1.0f, 0.0f)); // rotate around the y-axis as time passes
model = glm::scale(model, glm::vec3(4.0f));
Then I import a ground-station model, also in OBJ format. I want the ground station to sit on the surface of Mars and rotate together with it.
My first idea was to look at the vertex data in the imported Mars mesh, pick an arbitrary vertex as a point on the surface, and translate the ground-station model to that vertex's position via its model matrix (also accounting for the fact that Mars was scaled up by 4).
In practice it didn't work out the way I expected (I won't paste that code here); I suspect my whole approach is wrong = =.
Question 1:
Having imported two models, how do I control their relative motion? (I've read the coordinate-transformation chapter several times. I know each model starts in its own local space and is moved into world space by a model matrix, but I can't work out how far each one should be translated.)
Question 2:
Shouldn't a model initially lie within the normalized-device-coordinate range (i.e. x, y, and z all within [-1, 1])? But when I looked at the contents of my imported Mars OBJ file, the vertices exceed that range, which puzzles me.
My understanding was that normalized device coordinates and local-space coordinates are the same thing.
Question 3:
After importing the ground-station model, I want to draw the range of the signal it emits (the purple cone of light in the picture below). Should I do this with a geometry shader, taking some of the ground station's vertices and deforming them?
First of all, for each model it is best to make sure its center sits at the origin of its own model (local) coordinate system.
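If a loaded model is not centered, one option is to recenter its vertices right after import. A minimal sketch, assuming the positions have been gathered into a std::vector<glm::vec3> (the real Assimp mesh layout will differ):
#include <vector>
#include <glm/glm.hpp>

// Shift a mesh so the center of its bounding box lands at the local origin.
void recenterMesh(std::vector<glm::vec3>& positions)
{
    if (positions.empty()) return;

    glm::vec3 minP = positions[0], maxP = positions[0];
    for (const glm::vec3& p : positions) {
        minP = glm::min(minP, p);
        maxP = glm::max(maxP, p);
    }

    glm::vec3 center = 0.5f * (minP + maxP);
    for (glm::vec3& p : positions)
        p -= center; // the model is now centered at its local origin
}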
Question 1:
Relative motion can be achieved through coordinate transformations: make the translation or rotation parameters a function of time, exactly as you already do when rotating Mars. There is no requirement to jump straight from a model's local coordinate system to the world coordinate system in one fixed step; modern OpenGL doesn't even provide a built-in transformation interface, so you can build whatever matrices you like with GLM. For example, you can first transform both objects to some position and then give one of them an additional relative displacement, or you can compute each object's position independently and transform them separately.
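As a rough illustration of "parameters as a function of time" (placeholder values; same GLM/GLFW headers as your snippet, placed inside the render loop): Mars spins slowly while a satellite orbits it, and the relative motion comes purely from the two objects using different time-dependent transforms.
float t = (float)glfwGetTime();

// Mars: slow spin around the y-axis, scaled up 4x (same idea as your code).
glm::mat4 marsModel(1.0f);
marsModel = glm::rotate(marsModel, t * 0.05f, glm::vec3(0.0f, 1.0f, 0.0f));
marsModel = glm::scale(marsModel, glm::vec3(4.0f));

// Satellite: make its orbit angle a function of time, then push it out to the
// orbit radius. Because GLM post-multiplies, the vertex is translated first and
// rotated second, i.e. it circles around Mars's center.
const float orbitRadius = 6.0f; // made-up value
glm::mat4 satModel(1.0f);
satModel = glm::rotate(satModel, t * 0.3f, glm::vec3(0.0f, 1.0f, 0.0f));
satModel = glm::translate(satModel, glm::vec3(orbitRadius, 0.0f, 0.0f));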
Question 2:
The "normalized device coordinate range" you are talking about should really mean the canonical view volume. Because you still apply a projection transformation later, and it is that projection that maps the scene into the canonical view volume, the coordinate range of the model itself does not matter.
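For reference, a minimal sketch of the full chain (placeholder values; the usual <glm/glm.hpp> and <glm/gtc/matrix_transform.hpp> headers): a raw OBJ vertex only ends up in [-1, 1] after the projection matrix and the perspective divide, which the GPU applies after the vertex shader.
glm::vec3 localPosition(12.3f, 4.5f, -6.7f); // a raw OBJ vertex, well outside [-1, 1]

glm::mat4 model(1.0f);                                                      // local -> world
glm::mat4 view = glm::lookAt(glm::vec3(0.0f, 0.0f, 20.0f),
                             glm::vec3(0.0f), glm::vec3(0.0f, 1.0f, 0.0f)); // world -> view
glm::mat4 projection = glm::perspective(glm::radians(45.0f),
                                        800.0f / 600.0f, 0.1f, 100.0f);     // view -> clip

glm::vec4 clip = projection * view * model * glm::vec4(localPosition, 1.0f);
glm::vec3 ndc = glm::vec3(clip) / clip.w; // only here do coordinates land in [-1, 1] (if on screen)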
Question 3:
There are many ways to implement this. Personally I don't think a geometry shader is necessary, but which method to choose depends on the specifics, for example whether the light cone can be occluded by other objects.
A rough demo:
Move the ground station from its own model space into Mars's model space (probably rotate it by some angle, scale it, and then translate it by a distance equal to Mars's radius?).
Then use the same View and Projection matrices as for Mars.
That should be fine? (See the sketch below.)
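A minimal sketch of those steps, assuming the Mars OBJ has roughly unit radius in its own local space and reusing the marsModel, view and projection matrices already built for Mars (all placement values are made up):
// Place the station inside Mars's local space. Because GLM post-multiplies,
// the station is scaled down first, then pushed out to the surface along +y,
// then rotated around x to pick a latitude.
const float marsRadius = 1.0f; // assumed radius of the Mars OBJ in its local space
glm::mat4 stationLocal(1.0f);
stationLocal = glm::rotate(stationLocal, glm::radians(30.0f), glm::vec3(1.0f, 0.0f, 0.0f));
stationLocal = glm::translate(stationLocal, glm::vec3(0.0f, marsRadius, 0.0f));
stationLocal = glm::scale(stationLocal, glm::vec3(0.05f));

// Parent the station to Mars: it now inherits Mars's rotation and 4x scale,
// so it stays glued to the surface as the planet spins.
glm::mat4 stationModel = marsModel * stationLocal;
// Upload stationModel as the "model" uniform; view and projection stay the same as for Mars.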
Question 1: normalized device coordinates (NDC) are not the same thing as local-space (model-space) coordinates.
A model's vertices do not need to have x, y, and z all within [-1, 1]. Those coordinates go through several transformations later on, so whether they start inside or outside that range makes little difference.
Question 3: I don't quite follow how the range would be defined or how you would "deform" the vertices. Judging from the picture you provided, it doesn't have to be done in a shader; it might be just as easy to draw a cone mesh directly.
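For example, a minimal sketch of building the side surface of a cone on the CPU (apex at the origin, opening towards +y; the radius, height, and slice count are made up), which can then be placed with a model matrix like any other object:
#include <cmath>
#include <vector>
#include <glm/glm.hpp>

// Triangles for the lateral surface of a cone: apex at the origin,
// circular rim of radius r at height h. Suitable for GL_TRIANGLES.
std::vector<glm::vec3> buildCone(float r, float h, int slices)
{
    std::vector<glm::vec3> verts;
    const float twoPi = 6.2831853f;
    for (int i = 0; i < slices; ++i) {
        float a0 = twoPi * i / slices;
        float a1 = twoPi * (i + 1) / slices;
        verts.push_back(glm::vec3(0.0f)); // apex
        verts.push_back(glm::vec3(r * std::cos(a0), h, r * std::sin(a0)));
        verts.push_back(glm::vec3(r * std::cos(a1), h, r * std::sin(a1)));
    }
    return verts;
}
Upload the vertices into a VBO/VAO as usual, and draw the cone with glEnable(GL_BLEND), glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), and a low alpha in the fragment shader to get the translucent purple look.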
Thank you very much. The center of the Mars model I loaded is indeed not at the origin of its local coordinate space = =. As for the third question: the light cone can be occluded, so how should I handle that concretely?
My current idea is to take a point on the surface of Mars as the apex from which the cone is emitted, then draw the cone (I don't know how to draw one yet), and then have it meet the cone emitted by the satellite. Is there a good way to implement this?
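A rough sketch of the placement part of that idea, assuming the cone mesh opens along +y as in the sketch above and that surfacePoint is the chosen Mars vertex in Mars's local space (headers as before, plus <cmath>):
// Aim a +y-opening cone so its apex sits at surfacePoint and its axis points
// outward from Mars's center (assumed to be at the local origin).
glm::vec3 surfacePoint(0.3f, 0.9f, 0.3f);         // placeholder vertex on the Mars surface
glm::vec3 outward = glm::normalize(surfacePoint); // radial direction = cone axis
glm::vec3 up(0.0f, 1.0f, 0.0f);

glm::vec3 axis = glm::cross(up, outward);
float angle = std::acos(glm::clamp(glm::dot(up, outward), -1.0f, 1.0f));

glm::mat4 coneLocal(1.0f);
coneLocal = glm::translate(coneLocal, surfacePoint);
if (glm::length(axis) > 1e-6f) // the exactly parallel/anti-parallel cases are not handled here
    coneLocal = glm::rotate(coneLocal, angle, glm::normalize(axis));

// Parent to Mars so the cone follows the planet's rotation, then draw it
// with the same view/projection as everything else.
glm::mat4 coneModel = marsModel * coneLocal;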