Question

I'm working on a shader that generates little clouds based on some mask images. Right now it works well, but I feel the result is missing something, and I thought a blur would be nice. I remember a basic blur algorithm where you convolve the image with a kernel whose weights sum to 1 (the bigger the kernel, the stronger the blur). The thing is, I don't know how to treat the current output of the shader as an image. So basically I want to keep the shader as is, but get its output blurred. Any ideas? How can I integrate the convolution algorithm into the shader? Or does anyone know of another algorithm?

Cg code:

            float Luminance( float4 Color ){
                return 0.6 * Color.r + 0.3 * Color.g + 0.1 * Color.b;
            }

            struct v2f {
                float4 pos : SV_POSITION;
                float2 uv_MainTex : TEXCOORD0;
            };

            float4 _MainTex_ST;

            v2f vert(appdata_base v) {
                v2f o;
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                o.uv_MainTex = TRANSFORM_TEX(v.texcoord, _MainTex);
                return o;
            }

            sampler2D _MainTex;
            sampler2D _Gradient;
            sampler2D _NoiseO;
            sampler2D _NoiseT;

            float4 frag(v2f IN) : COLOR {

                half4 nO = tex2D (_NoiseO, IN.uv_MainTex);
                half4 nT = tex2D (_NoiseT, IN.uv_MainTex);
                float4 turbulence = nO + nT;
                float lum = Luminance(turbulence);
                half4 c = tex2D (_MainTex, IN.uv_MainTex);
                if (lum >= 1.0f){
                    float pos = lum - 1.0f;
                    if( pos > 0.98f ) pos = 0.98f;
                    if( pos < 0.02f ) pos = 0.02f;
                    float2 texCord = float2(pos, pos);
                    half4 turb = tex2D (_Gradient, texCord);
                    //turb.a = 0.0f;
                    return turb;
                }
                else return c;
            }

Solution

It appears to me that this shader is emulating alpha testing between a backbuffer-like texture (passed via the sampler2D _MainTex) and a generated cloud luminance (represented by float lum) mapped onto a gradient. This makes things trickier because you can't just fake a blur and let alpha blending take care of the rest. You'll also need to change your alpha testing routine to emulate an alpha blend instead or restructure your rendering pipeline accordingly. We'll deal with blurring the clouds first.

The first question you need to ask yourself is whether you need a screen-space blur. Seeing the mechanics of this fragment shader, I would think not -- you want to blur the clouds on the actual model. Given this, it should be sufficient to blur the underlying noise textures to get a blurred result -- except you're emulating alpha clipping, so you'll get rough edges. The question is what to do about those rough edges. That's where alpha blending comes in.

You can emulate alpha blending with a lerp (linear interpolation) between the turb color and the c color, using the lerp() function (or its equivalent in whichever shader language you're using). You'll probably want something like return lerp(c, turb, 1 - pos); instead of return turb; ... and expect to tweak it until you understand the effect and start getting the results you want. (For example, you may prefer lerp(c, turb, 1 - pow(pos, 4)).)

In fact, you can try this last step (just adding the lerp) before modifying your textures to get an idea of what the alpha blending will do for you.
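A minimal sketch of that change, applied to the branch at the end of the frag function above (the exact falloff curve is a matter of taste):

// Untested sketch: blend instead of returning turb outright.
if (lum >= 1.0f){
    float pos = clamp(lum - 1.0f, 0.02f, 0.98f);
    half4 turb = tex2D (_Gradient, float2(pos, pos));
    return lerp(c, turb, 1.0f - pos);   // or 1.0f - pow(pos, 4) for a steeper falloff
}
else return c;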

Edit: I hadn't considered the case where the _NoiseO and _NoiseT samplers change continually, so simply telling you to blur them was minimally useful advice. You can emulate blurring with a multi-tap filter. The simplest way is to take uniformly spaced samples, weight them, and sum them to produce your final color. (Typically you'll want the weights themselves to sum to 1.)

This being said, you may or may not want to do this on the _NoiseO and _NoiseT textures themselves -- you may want to create a screen-space blur instead, which may look more interesting to a viewer. In this case the same concept applies, but you need to calculate the offset coordinates for each tap and then perform the weighted summation.
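The snippets below assume a g_offset variable holding the UV-space distance between taps; that name is just a placeholder, and you could equally set it from script as a material property. In Unity you can derive a one-texel offset by declaring a float4 named after the texture with a _TexelSize suffix, which Unity auto-fills:

// Untested sketch: deriving the tap offset for the _NoiseO texture in Unity.
float4 _NoiseO_TexelSize;   // auto-filled: (1/width, 1/height, width, height)
// Inside frag():
float2 g_offset = _NoiseO_TexelSize.xy;   // one-texel step; multiply to widen the blur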

For example, if we were going with the first case and wanted to sample from the _NoiseO sampler and blur it slightly, we could use this box filter (where all the weights are equal and sum to 1, thus performing an average):

// Untested code. g_offset is the UV-space distance between taps (see above).
half4 nO  = 0.25 * tex2D(_NoiseO, IN.uv_MainTex + float2(         0,          0))
          + 0.25 * tex2D(_NoiseO, IN.uv_MainTex + float2(         0, g_offset.y))
          + 0.25 * tex2D(_NoiseO, IN.uv_MainTex + float2(g_offset.x,          0))
          + 0.25 * tex2D(_NoiseO, IN.uv_MainTex + float2(g_offset.x, g_offset.y));

Alternatively, if we wanted the entire cloud output to appear blurry we'd wrap the cloud generation portion in a function and call it instead of tex2D() for the taps.

// More untested code. Note the taps must use the tc parameter, not IN.uv_MainTex.
half4 genCloud(float2 tc) {
    half4 nO = tex2D (_NoiseO, tc);
    half4 nT = tex2D (_NoiseT, tc);
    float4 turbulence = nO + nT;
    float lum = Luminance(turbulence);
    float pos = lum - 1.0;
    if( pos > 0.98f ) pos = 0.98f;
    if( pos < 0.02f ) pos = 0.02f;
    half4 turb = tex2D (_Gradient, float2(pos, pos));
    // Figure out how you'd generate your alpha blending constant here for your lerp
    turb.a = ACTUAL_ALPHA;
    return turb;
}

And the multi-tap filtering would look like:

// And even more untested code.
half4 cloudcolor = 0.25 * genCloud(IN.uv_MainTex + float2(         0,          0))
                 + 0.25 * genCloud(IN.uv_MainTex + float2(         0, g_offset.y))
                 + 0.25 * genCloud(IN.uv_MainTex + float2(g_offset.x,          0))
                 + 0.25 * genCloud(IN.uv_MainTex + float2(g_offset.x, g_offset.y));
return lerp(c, cloudcolor, cloudcolor.a);

However, doing this will be relatively slow if you make the cloud function too complex. If you're bound by raster operations and texture reads (transferring texture/buffer data to and from memory), chances are this won't matter much unless you move to a much more advanced blurring technique (such as successive downsampling through ping-ponged buffers, useful for blurs/filters that are expensive because they have lots of taps). But performance is an entirely separate consideration from getting the look you want.
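For reference, the shader half of such a technique can still be simple; the sketch below is one horizontal pass of a separable 5-tap Gaussian (weights 1-4-6-4-1 over 16, summing to 1), with _BlurTex and its texel size as assumed names. The vertical pass is identical with the offset along y, and the ping-ponging between render textures would be driven from script.

// Untested sketch: one horizontal pass of a separable 5-tap Gaussian blur.
sampler2D _BlurTex;
float4 _BlurTex_TexelSize;   // auto-filled by Unity: (1/width, 1/height, width, height)

float4 fragBlurH(v2f IN) : COLOR {
    float2 dx = float2(_BlurTex_TexelSize.x, 0);
    return 0.0625 * tex2D(_BlurTex, IN.uv_MainTex - 2.0 * dx)
         + 0.25   * tex2D(_BlurTex, IN.uv_MainTex -       dx)
         + 0.375  * tex2D(_BlurTex, IN.uv_MainTex)
         + 0.25   * tex2D(_BlurTex, IN.uv_MainTex +       dx)
         + 0.0625 * tex2D(_BlurTex, IN.uv_MainTex + 2.0 * dx);
}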

Licensed under: CC-BY-SA with attribution