I am beginning a spherical harmonics shader project for an iOS app I am writing, and have started by reading this excellent in-depth paper on the subject (PDF): http://bit.ly/aQmax3.

The paper describes a scene pre-processing step that involves ray-casting. Can someone describe how ray-casting can be performed using GLSL on iOS?


Solution

If you're referring to the ray casting used in that process to determine which surfaces are hit by ambient light (for the ambient occlusion shading), I've done that within my Molecules iOS application. I talk a little about the process in this blog post and in this paper I submitted for the Nanotech conference, but I can expand upon that here.

In my case, I use depth testing to determine the areas on a surface hit by ambient light at various orientations. I take my model and render it at a series of orientations that correspond to someone looking at it from an evenly distributed set of points on a sphere surrounding the object. For each point on the object's surface, I determine whether or not that point is visible from that orientation by testing its transformed Z position against the depth calculated for the visible surface at that point.
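As a rough illustration of that per-point test, here is a minimal GLSL ES fragment shader sketch (not the actual Molecules shader); the `transformedPosition` varying, the `depthTexture` sampler, and the unpacking constants are assumptions for illustration, with the matching depth-packing pass sketched further down:

```glsl
// Sketch: decide whether a surface point is visible from the current
// orientation by comparing its projected depth against the depth that was
// previously rendered into a texture for this same orientation.
precision mediump float;

varying vec4 transformedPosition; // surface point in this orientation's clip space
uniform sampler2D depthTexture;   // nearest-surface depth, packed into RGBA bytes

void main()
{
    // Perspective divide, then remap from [-1, 1] NDC to [0, 1] texture space
    vec3 ndc = transformedPosition.xyz / transformedPosition.w;
    vec2 depthCoordinate = ndc.xy * 0.5 + 0.5;
    float pointDepth = ndc.z * 0.5 + 0.5;

    // Unpack the stored depth (this must mirror the packing in the depth pass)
    vec4 packedDepth = texture2D(depthTexture, depthCoordinate);
    float surfaceDepth = dot(packedDepth,
        vec4(1.0 / (256.0 * 256.0 * 256.0), 1.0 / (256.0 * 256.0), 1.0 / 256.0, 1.0));

    // The point is lit from this direction if nothing sits in front of it
    float visibility = (pointDepth <= surfaceDepth + 0.0001) ? 1.0 : 0.0;
    gl_FragColor = vec4(vec3(visibility), 1.0);
}
```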

I track this visibility across the many orientations of the object using an ambient occlusion intensity texture. I use a custom shader to map between locations on this texture and positions on the surface of my object. If a position is visible at a given orientation, I write out a grey value of 1 / (number of orientations); if it is hidden, I write out black. I then use an additive blend mode to accumulate these values and determine which surfaces were hit the most times by ambient light.
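The accumulation pass for a single orientation might then look like this sketch, assuming a hypothetical `visibility` varying computed as above and an `intensityPerPass` uniform holding 1 / (number of orientations); additive blending would be enabled on the host side with glEnable(GL_BLEND) and glBlendFunc(GL_ONE, GL_ONE):

```glsl
// Sketch: one accumulation pass per orientation, rendered into the ambient
// occlusion texture rather than the screen. Each pass adds an equal slice
// of grey for the points that were visible from that direction.
precision mediump float;

varying float visibility;       // 1.0 if this point was visible, 0.0 if occluded
uniform float intensityPerPass; // 1.0 / (number of orientations)

void main()
{
    // With GL_ONE / GL_ONE blending, these values sum across all orientations
    gl_FragColor = vec4(vec3(visibility * intensityPerPass), 1.0);
}
```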

My ambient occlusion textures look something like the following:

[Image: Ambient occlusion texture]

In this case, I was mapping a series of spheres as part of a molecular model, so each rectangle in that texture is the surface area of a sphere, flattened using a mapping function. When wrapped around the actual model, these values look like this:

[Image: Mapped ambient occlusion values]
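For illustration only, a longitude/latitude flattening is one form such a mapping function could take when rendering into the occlusion texture; this is not the exact function Molecules uses, and `surfaceNormal`, `rectangleOrigin`, and `rectangleSize` are hypothetical inputs:

```glsl
// Sketch: vertex shader that positions each vertex at its flattened location
// in the ambient occlusion atlas, mapping the sphere surface into the
// rectangle reserved for that sphere.
attribute vec3 surfaceNormal; // unit vector from the sphere center to this vertex
uniform vec2 rectangleOrigin; // lower-left corner of this sphere's rectangle
uniform vec2 rectangleSize;   // width and height of that rectangle in the atlas

const float pi = 3.14159265;

void main()
{
    // Longitude/latitude parameterization of the sphere surface
    float longitude = atan(surfaceNormal.x, surfaceNormal.z); // [-pi, pi]
    float latitude = asin(clamp(surfaceNormal.y, -1.0, 1.0)); // [-pi/2, pi/2]

    // Remap to [0, 1] inside the rectangle, then place it within the atlas
    vec2 unitCoordinate = vec2(longitude / (2.0 * pi) + 0.5, latitude / pi + 0.5);
    vec2 atlasPosition = rectangleOrigin + unitCoordinate * rectangleSize;

    // Convert from [0, 1] atlas space to [-1, 1] clip space for rendering
    gl_Position = vec4(atlasPosition * 2.0 - 1.0, 0.0, 1.0);
}
```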

Also, to determine the depth values of my object at the various orientations, I had to create my own depth-writing function that calculates per-pixel depth values and stores them in a custom depth texture. I then read from this texture when determining whether or not ambient light hits a point at a given orientation. If I remember correctly, you can't read from the depth buffer directly on iOS devices, so you may need to do something similar for your model as well.
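A common way to do this under OpenGL ES 2.0 is to spread the normalized depth across the four 8-bit RGBA channels of a color texture. The following is a generic packing sketch, not necessarily the function Molecules uses; the visibility shader sketched earlier unpacks it with the matching dot product:

```glsl
// Sketch: write per-pixel depth into a color texture by packing the
// normalized depth value into the RGBA channels (8 bits each).
precision mediump float;

varying float normalizedDepth; // depth in [0, 1], computed in the vertex shader

vec4 packDepth(float depth)
{
    const vec4 bitShift = vec4(256.0 * 256.0 * 256.0, 256.0 * 256.0, 256.0, 1.0);
    const vec4 bitMask = vec4(0.0, 1.0 / 256.0, 1.0 / 256.0, 1.0 / 256.0);
    vec4 result = fract(depth * bitShift);
    result -= result.xxyz * bitMask; // remove bits already stored in finer channels
    return result;
}

void main()
{
    gl_FragColor = packDepth(normalizedDepth);
}
```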

This doesn't cover the reflected diffuse lighting case, but it at least describes the means by which I did my ambient occlusion shading. The source code for this application is available at the above link, if you wish to dig into it and see how all this works in practice.
