Wednesday, February 16, 2011

More Bump Mapping woes

One item previously missing from the equation was a set of texture coordinates for the second texture unit.  But even after correcting this, the bump mapping still doesn't appear as desired.
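For reference, a sketch of the kind of change involved, assuming vertex arrays are in use (waterUVs is a placeholder name, not the project's actual identifier):

// Each texture unit needs its own client-state texcoord pointer;
// here the same UV set is simply supplied to both units.
glClientActiveTexture(GL_TEXTURE0);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, waterUVs);

glClientActiveTexture(GL_TEXTURE1);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, waterUVs);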

From continued research after trying several methods, it looks like I may be missing a tangent-space per-vertex light calculation that I didn't think was necessary for Dot3 bump mapping.  As you can see in the screenshot below, specular light is included in the scene but is not shown on the water.  Also, detail in the original texture appears to be lost and simply replaced by the bumps in the normal map, rather than the two being combined.
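For context, this is roughly the fixed-function Dot3 setup involved (a sketch with placeholder texture handles, not the exact project code).  The piece I suspect is missing is computing the light vector in tangent space per vertex and writing it into the primary colour before drawing; note also that unit 1 needs GL_MODULATE rather than GL_REPLACE, or the base texture detail is lost exactly as described above.

// Unit 0: dot the normal map against the tangent-space light vector,
// which should be stored in the per-vertex primary colour.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, normalMap);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_DOT3_RGB);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB, GL_PRIMARY_COLOR);

// Unit 1: modulate the Dot3 result with the base water texture so the
// bump lighting and the original detail are combined, not replaced.
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, waterTexture);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_MODULATE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_PREVIOUS);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB, GL_TEXTURE);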

Still, the best result is without bump mapping:

Depending on how much time I have over the break, I may take a look at rendering a bump map using a GLSL shader instead of Dot3. Otherwise, I think it's about time to move on to smoke simulation since I'm reasonably happy with the texture output and appearance for water so far.

Other benefits of GLSL include:
-More straightforward bump mapping, which may also be easier to debug
-Easier reflection mapping onto the water
-Easy to add a cosine function for 3D waves and/or Perlin noise

Tuesday, February 15, 2011

Alpha Blend and Normal improvements

As it turns out, the main reason why textures (and light) weren't looking quite right was that there weren't enough vertices and normals on the plane I was rendering onto.  Recreating the plane as what is essentially a flat 256x256 slab of terrain made a big improvement in lighting, alpha blending and texturing.  The alpha blending now properly shows the terrain below the water and obscures areas of higher velocity.
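A sketch of the kind of grid construction involved (illustrative names, not the project's actual code):

#include <vector>

// Tessellate the water plane into a 256x256 grid of vertices with
// per-vertex normals and texture coordinates, so fixed-function
// lighting and blending are evaluated at many points instead of
// just the four corners of a quad.
void buildWaterGrid(float height,
                    std::vector<float>& verts,
                    std::vector<float>& norms,
                    std::vector<float>& uvs)
{
    const int N = 256;
    for (int z = 0; z < N; ++z) {
        for (int x = 0; x < N; ++x) {
            verts.push_back((float)x);      // flat slab: constant height
            verts.push_back(height);
            verts.push_back((float)z);

            norms.push_back(0.0f);          // every normal points straight up
            norms.push_back(1.0f);
            norms.push_back(0.0f);

            uvs.push_back(x / (float)(N - 1));
            uvs.push_back(z / (float)(N - 1));
        }
    }
    // The grid is then indexed into triangles/quads like the terrain mesh.
}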

Tuesday, February 8, 2011

Colour blend vs. Normal Mapping

Progress this week includes optimizations to the output of rendered textures and a comparison between colour blending and a normal-mapped texture combination.

By storing rendered textures and delaying writing to disk until a key is pressed, the demo now continues to animate at 30 fps.  This also allows better control over the animation and produces more natural-looking animated textures.
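The idea in sketch form (illustrative names; the key handling depends on the windowing toolkit, and header paths may differ per platform):

#include <cstdio>
#include <vector>
#include <GL/gl.h>
#include <SOIL.h>

static std::vector<std::vector<unsigned char> > storedFrames;  // captured RGBA keyframes

// Grab the current rendered texture into CPU memory; this is cheap
// next to a disk write, so the animation keeps running at speed.
void captureKeyframe(GLuint tex, int w, int h)
{
    std::vector<unsigned char> buf(w * h * 4);
    glBindTexture(GL_TEXTURE_2D, tex);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, &buf[0]);
    storedFrames.push_back(buf);
}

// Called from the key handler: write everything out in one go.
void writeStoredKeyframes(int w, int h)
{
    for (size_t i = 0; i < storedFrames.size(); ++i) {
        char name[64];
        std::sprintf(name, "keyframe_%02u.bmp", (unsigned)i);
        SOIL_save_image(name, SOIL_SAVE_TYPE_BMP, w, h, 4, &storedFrames[i][0]);
    }
    storedFrames.clear();
}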

Currently, using colour blending with the rendered texture, rather than multiple textures (the rendered texture plus a normal map), seems to achieve a better visual result.  Also, rendering a proper normal map as the animation occurs proves to be quite difficult.  I encountered texture flickering issues which I'll continue to work out, but for now it appears the best method for continuing with the Fire and Smoke simulation will be to use blended colours with the rendered texture.

Normal-mapped texture combination - appears dark, but additional detail is visible.  I'm not convinced this is working properly yet.

Blended colours - currently gives the best visual result

Tuesday, February 1, 2011

Keyframe Animated Textures

Rendered textures can now be saved to a file using a simple image writing library from http://www.lonesock.net/soil.html.  This takes care of creating the bitmap header info and writing out the texture data.  It works very well with OpenGL since it uses the same RGBA byte ordering.
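The call involved is essentially a one-liner; a sketch (the filename and 1024x1024 size are illustrative): with the texture framebuffer still bound, SOIL_save_screenshot reads back the currently bound framebuffer and writes the BMP header and pixel data itself.

// Capture the off-screen render target that is currently bound
// and save it as a BMP; SOIL handles the header and pixel layout.
SOIL_save_screenshot("keyframe.bmp", SOIL_SAVE_TYPE_BMP, 0, 0, 1024, 1024);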

Below is the output of 16 keyframes over roughly 30 seconds:

16 keyframes each scaled to 256x256

Unfortunately, attempting to write the texture to file once every second (or ideally once every frame) decreases the frame rate to a point where it is difficult to interact with the water.  Next I will focus on allocating and rendering to an array of textures and performing all write operations after rendering is complete. Also, the large white alpha areas showing on the rendered textures will have to be coloured/handled differently. The smaller alpha areas actually seem to appear as a reflection, which is interesting, but I hope to achieve a better effect using a normal map.
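A sketch of that plan, assuming one texture object per keyframe reattached in turn to a single FBO (illustrative names; core FBO entry points, with the EXT-suffixed equivalents behaving the same):

#include <GL/gl.h>   // FBO entry points come via an extension loader in practice

static GLuint keyframeTex[16];
static int nextKeyframe = 0;

// One-time allocation: 16 empty RGBA textures, one per keyframe.
void allocateKeyframes(int w, int h)
{
    glGenTextures(16, keyframeTex);
    for (int i = 0; i < 16; ++i) {
        glBindTexture(GL_TEXTURE_2D, keyframeTex[i]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, 0);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    }
}

// Per keyframe: retarget the FBO's colour attachment and render.
// No disk I/O happens here; the textures are written out after the run.
void renderNextKeyframe(GLuint fbo)
{
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, keyframeTex[nextKeyframe++ % 16], 0);
    // ... draw the velocity grid ...
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}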

Flowing River & Render to Texture vs. Rendering Points

After much manipulation, the velocity grid is now rendering very well to a 1024x1024 texture attached to a plane intersecting the terrain.  Rendering the moving velocity grid as points versus rendering it off-screen to a texture applied to a plane produced the following results:

512x512 grid points to screen: 60+ fps
512x512 grid to 512x512 texture map: 60+ fps

1024x1024 grid points to screen: 20+ fps
1024x1024 grid to 1024x1024 texture map: 30+ fps

As the render-to-texture method is performed off-screen, it seems to be the solution that scales better with resolution.  It also results in a convincing water effect when combined with the terrain:

Wednesday, January 26, 2011

Rendering to a Texture

Maintaining the correct viewport and projection/model matrices proved to be the most difficult part of rendering particles to a texture.  When rendering to a texture, the viewport must be set to the same size as the texture attached to the framebuffer, which causes the perspective to differ from that of the default window framebuffer.  The velocity grid must be aligned so that it intersects the texture framebuffer's viewing frustum perfectly.  Unfortunately, an orthographic projection will not work in this case since z values are needed for the depth renderbuffer attached to the texture framebuffer.
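A sketch of the bookkeeping involved (illustrative names and parameters, GLU for the projection): the viewport and projection are switched to match the texture before rendering off-screen, then restored for the window.

#include <GL/glu.h>

void renderGridToTexture(GLuint fbo, int texSize, int winW, int winH)
{
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glViewport(0, 0, texSize, texSize);        // must match the attached texture

    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    // Perspective rather than orthographic, so the depth renderbuffer
    // receives usable z values; the frustum is framed so the velocity
    // grid fills it exactly.
    gluPerspective(45.0, 1.0, 0.1, 100.0);
    glMatrixMode(GL_MODELVIEW);

    // ... position the camera and draw the velocity grid ...

    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glViewport(0, 0, winW, winH);              // restore the window's viewport
}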

I managed to get the particles rendering to a texture (still animating in real time) and applied the texture to a simple cube:

Next I will focus on rendering and animating this texture in the terrain scene, and on continuing to adjust the colour values of the output textures.  Also, currently only one texture is rendered at the end of the velocity computation, but ideally multiple textures will be rendered and combined later on.

Tuesday, January 25, 2011

Framebuffer Objects

Rather than rendering individual points for each position within our velocity grid, we can render this data into a texture using a Framebuffer Object (FBO).  An additional framebuffer (other than the default OpenGL framebuffer) can be set up as follows:
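(A minimal sketch using the core FBO entry points; the EXT-suffixed equivalents from GL_EXT_framebuffer_object behave the same, and the 1024x1024 size matches the render-to-texture experiments above.)

GLuint fbo, colourTex, depthBuf;

// Colour texture that the off-screen rendering will land in.
glGenTextures(1, &colourTex);
glBindTexture(GL_TEXTURE_2D, colourTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1024, 1024, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// Depth renderbuffer so depth testing still works off-screen.
glGenRenderbuffers(1, &depthBuf);
glBindRenderbuffer(GL_RENDERBUFFER, depthBuf);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, 1024, 1024);

// Tie both to a new framebuffer object.
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colourTex, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depthBuf);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    /* the FBO is unusable in this state; report the error and bail out */;

glBindFramebuffer(GL_FRAMEBUFFER, 0);   // back to the default framebuffer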

Once the FBO is set up and bound during rendering, all draw calls are written to the off-screen buffer backed by the attached texture.  If the particle grid is rendered at each stage of the velocity and pressure calculation, the resulting textures can then be combined to form an animated texture at each time step.  Applying this texture to a surface will make it appear fluid-like even though it is only 2D.  With more room for post-processing, rendering to a texture seems like a better approach than simply rendering the particles as points.