FIFE forums

FIFE Development => Framework development => Topic started by: jasoka on November 25, 2007, 10:08:28 am

Title: Matrix transformation
Post by: jasoka on November 25, 2007, 10:08:28 am
Okay, let's use the forums to clarify the matrix transformation issues. Sleek has some ideas on how to improve the code, but communication via IRC seems to be quite difficult (due to time differences). Later, when the issue is solved, we can document this thread properly in the wiki.


Let's take the basic principles first:

A visible game level (elevation) consists of layers. Layers can be positioned against each other via offsets. They can also be rotated. Layers are partitioned into cells via cellgrids (each layer contains one cellgrid).

Cell-related calculations are placed in cellgrids. For different kinds of cells, we have different kinds of cellgrids (e.g. hex, square). You can transform from one layer's coordinate system to another via elevation coordinates, i.e. layer1 -> elevation -> layer2. For a graphical representation of cellgrids, see http://wiki.fifengine.net/index.php?title=Model_Design_Documentation. I've been calling the calculations that perform these transforms model transforms.
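To make the layer1 -> elevation -> layer2 idea concrete, here is a minimal sketch. The function names and the scale-then-rotate-then-shift order are my assumptions for illustration, not the engine's actual code:

```python
import math

def layer_to_elevation(pt, shift, scale, rot_deg):
    """Map a layer coordinate to elevation space: scale, then rotate
    counterclockwise, then shift (the order here is an assumption)."""
    x, y = pt[0] * scale[0], pt[1] * scale[1]
    r = math.radians(rot_deg)
    rx = x * math.cos(r) - y * math.sin(r)
    ry = x * math.sin(r) + y * math.cos(r)
    return (rx + shift[0], ry + shift[1])

def elevation_to_layer(pt, shift, scale, rot_deg):
    """Inverse of layer_to_elevation: unshift, rotate back, unscale."""
    x, y = pt[0] - shift[0], pt[1] - shift[1]
    r = math.radians(-rot_deg)
    rx = x * math.cos(r) - y * math.sin(r)
    ry = x * math.sin(r) + y * math.cos(r)
    return (rx / scale[0], ry / scale[1])

# layer1 -> elevation -> layer2 is just a forward transform followed
# by the inverse transform of the second layer's grid:
p_elev = layer_to_elevation((1, 1), shift=(2, 2), scale=(1, 1), rot_deg=0)
p_layer2 = elevation_to_layer(p_elev, shift=(0, 0), scale=(2, 2), rot_deg=0)
```

The round trip through a single grid's forward and inverse transform returns the original layer coordinate, which is a useful sanity check for any implementation.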

All model activity (e.g. instance movement) happens in logical space. In principle you can think of this as an overhead view of the elevation's layers. There are no camera concepts involved (i.e. the pather doesn't even know a camera exists). The view module contains the visualization for our pseudo-3d look (isometric with 3d calculations). This is just one way to visualize the model data, and it might later be supplemented with other types of views (e.g. full 3d, or nethack-style characters). The way things are shown on screen is calculated in the view. Elevation coordinate <-> screen coordinate transforms (matrix calculations) reside in the camera. I've been calling these view transforms.

The video module doesn't know anything about view concepts; it only renders images on a flat surface (meaning it cannot display proper 3d at the moment). Even the OpenGL backend just renders images on a flat surface.



Now, Sleek had some questions, picked from the channel (my interpretation):

Q1: Camera transforms seem to be model transforms. Why is that?
A1: As stated above, my current terminology for these is mostly based on code/module placement. From a mathematical point of view that may indeed be the case, but I'm still not sure what exactly it would mean for the code. A code snippet to illustrate this would be appreciated.

Q2: Is rotation left or right handed?
A2: Rotation is meant to follow the polar coordinate convention (http://en.wikipedia.org/wiki/Polar_coordinates), i.e. rotation goes counterclockwise. If there is code that doesn't implement this, it may well be a bug.
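As a quick sanity check for the intended convention, a 90-degree polar (counterclockwise) rotation should take the +x axis onto the +y axis. A minimal sketch (hypothetical helper, not engine code):

```python
import math

def rotate_ccw(x, y, deg):
    """Counterclockwise (polar-convention) rotation about the origin."""
    r = math.radians(deg)
    return (x * math.cos(r) - y * math.sin(r),
            x * math.sin(r) + y * math.cos(r))

# With a mathematical y-up axis, +x rotates toward +y:
x, y = rotate_ccw(1.0, 0.0, 90)   # x ~ 0.0, y ~ 1.0
```

One caveat: on a y-down screen axis (as SDL uses), this same matrix visually turns clockwise, which is an easy way for a handedness bug to slip in.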

Q3: Why does the view zoom when the camera is rotated?
A3: The view calculates a reference scale on camera updates. The reference scale is used to calculate grid dimensions based on the given cell width/height (usually the tile image dimensions). When the grid is rotated, the size of the cell's bounding box also changes. This causes the zoom pumping.

My guesses at possible bug points:


Hopefully these points help clarify the code. I have no illusions that the current code is bug-free from any point of view :) So just be critical of what you see. If something is unclear, please use the forums. It also helps a lot to give concrete code examples of what you would like to change, along with textual reasoning to explain what you really mean.

Also, for changes to the codebase, let's try to focus on one thing at a time. There are many spots in the engine that need improvement, but if we do them all in a single commit, tracking them gets hairy.
If it feels like drastic changes are needed to fix the view, a branch might be appropriate. Before that, however, I would like to hear a plan of what would be changed and why.
Title: Re: Matrix transformation
Post by: Sleek on November 25, 2007, 10:36:35 pm
ModelView Transformation
===================

This is the closest thing I could find that explains what I meant earlier:

World/model transformation
http://msdn2.microsoft.com/en-us/library/aa921159.aspx

View transformation
http://msdn2.microsoft.com/en-us/library/aa915179.aspx

In essence, view and model transformations form a pair. Rotate/translate/scale the world, and the render changes. Move the camera (the viewer, also called the eye or render window), and the final render also changes: if we move the camera to the left, the world appears to move to the right relative to us. This is a simple example, so the difference looks small, but be assured the two are exact opposites of each other.
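The camera-left/world-right relationship can be shown with homogeneous 2D matrices. This is a generic illustration of the concept, not FIFE code:

```python
def translate(tx, ty):
    """3x3 homogeneous translation matrix (row-major)."""
    return [[1.0, 0.0, tx],
            [0.0, 1.0, ty],
            [0.0, 0.0, 1.0]]

def apply(m, x, y):
    """Apply a 3x3 homogeneous transform to a 2D point."""
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

# Model transform: move the WORLD 5 units right; the point follows.
print(apply(translate(5, 0), 0.0, 0.0))    # (5.0, 0.0)

# View transform: move the CAMERA 5 units right instead. Relative to
# the camera, the same world point appears 5 units to the LEFT, i.e.
# the view matrix is the inverse of the camera's own movement.
print(apply(translate(-5, 0), 0.0, 0.0))   # (-5.0, 0.0)
```

This is also why an inverse matrix shows up naturally in a camera class: the matrix applied to world points is the inverse of the camera's placement transform.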

Our current transformation code resides in camera.cpp, which implies a view transformation. However, the code and the render result (what we see on the screen) tell me it's a model transformation: rotating the camera clockwise should make the render rotate anticlockwise. This is possibly why jasoka used m_inverse_matrix earlier (maybe to make the render result the opposite of the transformation order). I would like someone to correct me if my understanding of the concept is wrong.

It's possible to look at the current code as a view transformation, but then rotation looks like something else. I will post a picture later when I get home; the possible camera locations should form a hemisphere. Anyway, I think I've gotten to the point where I can see things from jasoka's point of view.

Rotation
======
Code: [Select]
trunk/tests/swig_tests/location_tests.py

        self.squaregrid1.setXShift(2)
        self.squaregrid1.setYShift(2)
        self.squaregrid1.setRotation(90)
        self.squaregrid1.setXScale(5)
        self.squaregrid1.setYScale(5)
        self.loc1.setLayerCoordinates(P(1,1))
        pt = self.loc1.getElevationCoordinates()
        self.assert_(is_near(pt.x, 15))
        self.assert_(is_near(pt.y, -15))

Original coord: 1,1
shiftX 2 , result: 3,1
shiftY 2 , result: 3,3
rotateAntiClockwise 90 , result: -3,3
scaleX 5 , result: -15,3
scaleY 5 , result: -15,15
assert (15,-15) == (-15,15) , result : false
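The walkthrough above can be reproduced in plain Python. The first sequence follows the shift-rotate(CCW)-scale order traced above; the second is an order that happens to reproduce the test's expected (15, -15), namely shift, scale, then a clockwise rotation. Which order and handedness the engine actually intends is exactly what's under discussion here, so treat the second sequence as an inference, not a statement of fact:

```python
import math

def rot(x, y, deg, clockwise=False):
    """Rotate a point about the origin; counterclockwise by default."""
    r = math.radians(-deg if clockwise else deg)
    return (x * math.cos(r) - y * math.sin(r),
            x * math.sin(r) + y * math.cos(r))

# Order traced above: shift, rotate CCW, scale
x, y = 1, 1
x, y = x + 2, y + 2                   # (3, 3)
x, y = rot(x, y, 90)                  # (-3, 3)
x, y = x * 5, y * 5                   # (-15, 15)

# Order that reproduces the test's expectation:
# shift, scale, then rotate CLOCKWISE (an inference, not engine fact)
a, b = 1, 1
a, b = a + 2, b + 2                   # (3, 3)
a, b = a * 5, b * 5                   # (15, 15)
a, b = rot(a, b, 90, clockwise=True)  # (15, -15)
```

So the assert failure comes down to two separate questions: the order in which shift/scale/rotation are composed, and whether rotation is applied clockwise or counterclockwise.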

About updateReferenceScale
====================

I would prefer a constant scale ratio that we can use against real-life values, something like:

1 unit cell : 200px : 1 meter @ zoom==1, rot & tilt == 0
1 unit cell : 400px : 1 meter @ zoom==2, rot & tilt == 0

The zoom pumping behaviour is to be expected if we change the scale every time. Note that this isn't a very big issue, since in a normal game a user wouldn't rotate in every direction the way we did while debugging.
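The proposal boils down to a mapping that depends only on zoom, never on rotation or tilt. A trivial sketch (the 200px base is the example figure from above, not an engine constant):

```python
def pixels_per_unit(zoom, base_px=200):
    """Proposed constant mapping: one cell unit is base_px pixels at
    zoom 1, regardless of rotation or tilt. base_px is the example
    value from the post, not an engine constant."""
    return base_px * zoom

print(pixels_per_unit(1))   # 200
print(pixels_per_unit(2))   # 400
```

Because the result never depends on the camera angle, a scale derived this way cannot pump when the camera rotates.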



Finally, we could do with a basic spec for our engine: something like which coordinate system we use (where the origin is, which direction the x, y, z values increase in, e.g. to the left, into the screen) and the rotation direction (clockwise or anticlockwise). This is beneficial because when the code is buggy, or when we need to implement a new renderer, or when we are writing a unit test, we have a reference for the expected behaviour. This also applies to other parts of the engine; for example, with the VFS case-sensitivity problem there is likewise no documented expected behaviour that a novice programmer can refer to. I would be happy to browse the code, document, ask around and put together a spec like that, but it will take time. It's better if we do it at the phasing stage :)
Title: Re: Matrix transformation
Post by: Sleek on November 26, 2007, 04:49:02 am
Here is the picture as I mentioned :
(http://www.bildhoster.de/uploads/26.11.2007_11:51:19_viewtransformhemisphere.png)

I hope it helps you understand what I am referring to.
Title: Re: Matrix transformation
Post by: jasoka on November 26, 2007, 09:31:15 am
Paste from channel discussion:

<jasoka> from picture I cannot spot the difference between "camera position" and "target vector-model intersection". With camera position, do you mean the actual camera that is hanging in air or position on elevation?
<Sleek_> jasoka, yup we have our own definition of camera position. Yours is where the camera shoots at. Mine is the floating position of the camera.
<jasoka> would you like to change it to floating position definition?
<Sleek_> Not really. I just would like to know if my representation of your model-view transform is correct.
<jasoka> yes, I have tried to make it tilt/rotate around given point on elevation. It felt (to me) like a natural behavior e.g. in use case where you rotate map 90 degrees. In case camera would rotate in place, the current focus would be lost I guess
<jasoka> I guess one common scenario would also be in the editor where user tweaks around with rotation / tilt values. he prolly wants to maintain focus (meaning location where camera is pointing at)

updateReferenceScale
The problem with a cell-width-to-meter mapping is that FIFE tries to support many game types. E.g. a Zero-type game and a Civilization-type game have drastically different real-world scales. Naturally we could compensate with zoom, but I'm not sure that's handy from a game-dev perspective (e.g. having to use a default zoom of 23432344).

Documentation
I agree that there is always room for improvement in this area. Unfortunately it's also the most boring task, and therefore doesn't attract many resources. Also, as people get more familiar with the engine, their motivation to document drops even further. Furthermore, people often don't even read the docs, preferring e.g. IRC to ask questions directly.
Anyway, the plan to take one step at a time applies to documentation as well. When something feels unclear, we can concentrate on that spot (as we're now doing with transforms). A good starting point for additional documentation is to check what is currently there (http://wiki.fifengine.net/index.php?title=Architecture_Documentation) and improve from that.



Title: Re: Matrix transformation
Post by: ConfusedGuy on December 13, 2007, 02:58:58 pm
I have a suggestion for specifying your coordinate system: since you are using a bit of OpenGL functionality, you should probably use the same one OpenGL does.
That means:

right-handed
eye-point at 0 0 0  (after transforming to eye-space)

y goes up
x goes right
-z goes to background

And if I'm not wrong, this is the same system that was taught in my school (and maybe in yours too).
http://www.evl.uic.edu/ralph/508S98/coordinates.html

I think it is easier to cope with one system than to keep converting from one to another (engine <-> OpenGL).

And maybe you should let OpenGL handle all the matrix transformation stuff, or at least part of it; it was designed to do this very efficiently. You have a stack of at least 32 4x4 matrices where you can easily push and pop parts of the transformation.
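The push/pop idea can be sketched as a toy Python model of GL's matrix stack (glPushMatrix/glPopMatrix). Since FIFE must also run on the plain SDL backend, such a stack would have to live engine-side rather than in GL itself; this is a conceptual illustration, not a proposed implementation:

```python
import copy

def identity():
    """3x3 homogeneous identity matrix."""
    return [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]

class MatrixStack:
    """Toy version of OpenGL's matrix stack: save the current
    transform, modify a copy, restore it later with pop()."""
    def __init__(self):
        self._stack = [identity()]

    def current(self):
        return self._stack[-1]

    def push(self):
        self._stack.append(copy.deepcopy(self._stack[-1]))

    def pop(self):
        self._stack.pop()

    def translate(self, tx, ty):
        # Right-multiply the current matrix by a translation,
        # as glTranslate does with the current GL matrix.
        m = self._stack[-1]
        m[0][2] += m[0][0] * tx + m[0][1] * ty
        m[1][2] += m[1][0] * tx + m[1][1] * ty

stack = MatrixStack()
stack.push()
stack.translate(5, 3)   # this transform applies only inside the push/pop
stack.pop()             # back to the untouched parent transform
```

The value of the pattern is that nested transforms (layer offset, then per-instance offset, and so on) can each be applied and cleanly undone without recomputing the parent matrix.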

hope this helps.
regards.
Title: Re: Matrix transformation
Post by: jasoka on December 15, 2007, 12:56:04 pm
On the SDL video screen, the y axis is inverted (it grows downward). FIFE uses SDL to render video, with or without OpenGL. The same x-y logic also applies in tile space. Logically z should grow upward from the ground; if that's not currently the case, it should probably be changed.
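Bridging the two conventions is a single flip. A minimal sketch (hypothetical helper name) for converting a y coordinate from SDL's top-left, y-down convention to OpenGL's bottom-left, y-up convention:

```python
def sdl_to_gl_y(y_sdl, screen_height):
    """Convert a y coordinate from SDL's convention (origin top-left,
    y grows downward) to OpenGL's (origin bottom-left, y grows up)."""
    return screen_height - y_sdl

# On a 600px-tall screen:
print(sdl_to_gl_y(0, 600))     # 600  (top of the SDL screen)
print(sdl_to_gl_y(600, 600))   # 0    (bottom of the SDL screen)
```

The function is its own inverse, so the same helper converts in both directions; this flip is also why rotations that are counterclockwise in math convention appear clockwise on an SDL screen.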

Since the plain SDL backend is supported, the engine cannot use OpenGL matrix calculations directly.