
I am facing some difficulties with a cube map shadow associated with a point light (represented as the yellow bicone in the picture). The shadow map itself is generated properly (see picture).

I'm using an R32F texture to store Dist = dot(P, P), where P = LightPos - vertex position. This is done per vertex, meaning the value computed in the geometry shader is passed to the pixel shader as a color. Below is part of the GS.

    for( int f = 0; f < 6; ++f )
    {
        f=(f==2)?f+1:f;//don’t render the Y top as I don’t need it
        matrix M = LightViewProjCube[f];
        output.RTIndex = f;
        for( int v = 0; v < 3; v++ )
        {
            float3 P = LightPos.xyz-input[v].Pos.xyz;
            output.Color = dot(P,P)+1;//+1 bias to avoid self-shadowing because I use back-face culling
            output.Pos = mul( input[v].Pos, M );
            CubeMapStream.Append( output );
        }
        CubeMapStream.RestartStrip();
    }
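
For context, the R32F cube render target that the GS writes to (selecting the face via SV_RenderTargetArrayIndex) is created roughly like this. This is a simplified D3D11 sketch, not my exact code: `device`, `size`, and the variable names are illustrative, only one cube is shown (one per point light), and error handling is omitted.

    // Sketch: R32F cube render target (6 slices, one RTV spanning all faces).
    // Assumes #include <d3d11.h>, an ID3D11Device* "device" and a UINT "size".
    D3D11_TEXTURE2D_DESC texDesc = {};
    texDesc.Width = size;
    texDesc.Height = size;
    texDesc.MipLevels = 1;
    texDesc.ArraySize = 6;                                  // six cube faces
    texDesc.Format = DXGI_FORMAT_R32_FLOAT;                 // stores squared distance
    texDesc.SampleDesc.Count = 1;
    texDesc.Usage = D3D11_USAGE_DEFAULT;
    texDesc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
    texDesc.MiscFlags = D3D11_RESOURCE_MISC_TEXTURECUBE;
    ID3D11Texture2D* cubeTex = nullptr;
    device->CreateTexture2D(&texDesc, nullptr, &cubeTex);

    // One RTV covering all six slices, so SV_RenderTargetArrayIndex picks the face.
    D3D11_RENDER_TARGET_VIEW_DESC rtvDesc = {};
    rtvDesc.Format = texDesc.Format;
    rtvDesc.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE2DARRAY;
    rtvDesc.Texture2DArray.MipSlice = 0;
    rtvDesc.Texture2DArray.FirstArraySlice = 0;
    rtvDesc.Texture2DArray.ArraySize = 6;
    ID3D11RenderTargetView* cubeRTV = nullptr;
    device->CreateRenderTargetView(cubeTex, &rtvDesc, &cubeRTV);

    // SRV used by the lighting pass to sample the cube.
    D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
    srvDesc.Format = texDesc.Format;
    srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURECUBE;
    srvDesc.TextureCube.MostDetailedMip = 0;
    srvDesc.TextureCube.MipLevels = 1;
    ID3D11ShaderResourceView* cubeSRV = nullptr;
    device->CreateShaderResourceView(cubeTex, &srvDesc, &cubeSRV);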

In the rendering pass I also calculate Len = dot(P, P), with P = LightPos - vertex position, in the pixel shader (not in the vertex shader as in the RasterTek DX11 tutorial). Below is part of the code.

float Shadow = 1;
// LightDirAmbient.w contains the number of point lights for this object
for ( int n = 0; n < (int)LightDirAmbient.w; n++ )
{
    txProj = LightPos[n].xyz - Input.WorldPos.xyz;
    float Len = dot(txProj, txProj);
    // the squared distance is normalized to [0,1] using 1/(r*r), where r is the radius of light n
    LTNE.y = clamp(1 - LightPos[n].w * Len, 0, 1); // LightPos[n].w contains 1/(r*r) of light n
    txProj = -txProj; // txProj is also used for my lighting; the cube map lookup needs the light-to-pixel direction, hence the inversion
    LTNE.z = txShadowArrayCubeMap[n].Sample(samLinear, txProj).r;
    Shadow *= ( LTNE.z < Len ) ? 1 - LTNE.y : 1; // attenuate shadow intensity over distance
}

In this particular scene I observe that parts of the shadow are missing between the red lines, and I don't understand why. The building is excluded from the shadow-cube rendering to make sure the problem is not distance (depth) fighting.
I'm using the usual view/projection matrices for point lights, with a PIDIV2 (90°) FOV, a near plane of 0 and a far plane of 500. My point lights have a maximum radius of 500 units, and using a larger far plane does not change the problem. It does not seem related to the camera position either. There is apparently a link with the spherical shape of the light.
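
For reference, I build the per-face view/projection matrices roughly along these lines. This is a simplified DirectXMath sketch, not my exact code: the function and variable names are illustrative, and XMMatrixPerspectiveFovLH requires a non-zero near plane, so a small value stands in for the 0 mentioned above.

    #include <DirectXMath.h>
    using namespace DirectX;

    // Sketch: six cube-face view/projection matrices for a point light.
    void BuildCubeFaceMatrices(FXMVECTOR lightPos, float farPlane, XMMATRIX outViewProj[6])
    {
        // D3D cube-face order: 0:+X, 1:-X, 2:+Y, 3:-Y, 4:+Z, 5:-Z
        const XMVECTOR dirs[6] = {
            XMVectorSet( 1, 0, 0, 0), XMVectorSet(-1, 0, 0, 0),
            XMVectorSet( 0, 1, 0, 0), XMVectorSet( 0,-1, 0, 0),
            XMVectorSet( 0, 0, 1, 0), XMVectorSet( 0, 0,-1, 0)
        };
        const XMVECTOR ups[6] = {
            XMVectorSet(0, 1, 0, 0), XMVectorSet(0, 1, 0, 0),
            XMVectorSet(0, 0,-1, 0), XMVectorSet(0, 0, 1, 0),
            XMVectorSet(0, 1, 0, 0), XMVectorSet(0, 1, 0, 0)
        };

        // 90° FOV, square aspect; the near plane must be > 0 here.
        const XMMATRIX proj = XMMatrixPerspectiveFovLH(XM_PIDIV2, 1.0f, 0.1f, farPlane);

        for (int f = 0; f < 6; ++f)
        {
            XMMATRIX view = XMMatrixLookToLH(lightPos, dirs[f], ups[f]);
            // Transposed before upload, as usual for HLSL mul(vector, matrix).
            outViewProj[f] = XMMatrixTranspose(view * proj);
        }
    }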

1 Answer

The problem was in the generation of the shadow map at the vertex level: the geometry shader wrote the squared distance per vertex, and the rasterizer then interpolated that squared distance linearly across the triangle. Since the squared distance is not linear, the interpolated value does not match the per-pixel dot(P, P) computed in the lighting pass, and the comparison fails on parts of large triangles. Passing the vector to the pixel shader and doing the dot calculation there solved the issue. Below is the complete modified code.

 struct VS_INPUT
 {
    float3 Pos : POSITION;
    float3 Norm : NORMAL;
    float4 Col : COLOR0;
    float2 Tex : TEXCOORD0;
 };

 struct GS_INPUT
 {
   float4 Pos       : SV_POSITION;
 };

struct PS_CUBEMAP_IN
{
   float4 Pos : SV_POSITION;
   float3 Color : COLOR0;
   uint RTIndex : SV_RenderTargetArrayIndex;
};

GS_INPUT VS_CubeMap( VS_INPUT input )
{
  GS_INPUT output = (GS_INPUT)0.0f;
  output.Pos = mul( float4(input.Pos,1), World );
  return output;
}

[maxvertexcount(18)]
void GS_CubeMap( triangle GS_INPUT input[3], inout TriangleStream<PS_CUBEMAP_IN> CubeMapStream )
{
    PS_CUBEMAP_IN output;
    for( int f = 0; f < 6; ++f )
    {
       f=(f==2)?f+1:f;//don’t render the Y top as I don’t need it
       matrix M = LightViewProjCube[f];
       output.RTIndex = f;
       for( int v = 0; v < 3; v++ )
       {
           output.Color = LightPos.xyz - input[v].Pos.xyz;
           output.Pos = mul( input[v].Pos, M );
           CubeMapStream.Append( output );
       }
       CubeMapStream.RestartStrip();
    }
}

float PS_CubeMap(PS_CUBEMAP_IN Input): SV_TARGET
{
   // Squared distance computed per pixel from the interpolated vector, with the same +1 bias as before.
   return dot(Input.Color,Input.Color)+1;
}