When transforming textures (drawn as flat 3D objects) to mimic depth, black lines appear randomly

We are developing a top-down RPG using XNA. Recently we hit a setback when writing the code that displays our maps. When drawing the map top-down with a normal transformation matrix, everything is fine. When using a non-flat transformation matrix, such as squeezing the top or bottom to mimic depth, black lines appear (rows when the top or bottom is squeezed, columns when the left or right is squeezed) that move around when the camera changes position. The movement and placement appear to be random. (Image provided further down.)

Background information

The maps consist of tiles. The source texture holds tiles of 32x32 pixels each. We draw a tile by creating 2 triangles and displaying part of the source texture on those triangles; a shader does this for us. There are three layers of triangles. First we draw all the opaque tiles and the opaque pixels of all semi-opaque and partially transparent tiles, then the semi-opaque and partially transparent tiles and pixels. This works fine (although when we zoom by a non-integer factor, color-blended lines sometimes appear between tile rows and/or columns).

Renderstates

We use the same RasterizerState for all tiles, and we switch between two DepthStencilStates depending on whether we are drawing solid or semi-transparent tiles.

_rasterizerState = new RasterizerState();
_rasterizerState.CullMode = CullMode.CullCounterClockwiseFace;

_solidDepthState = new DepthStencilState();
_solidDepthState.DepthBufferEnable = true;
_solidDepthState.DepthBufferWriteEnable = true;

_alphaDepthState = new DepthStencilState();
_alphaDepthState.DepthBufferEnable = true;
_alphaDepthState.DepthBufferWriteEnable = false;
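For context, a minimal sketch of how these states are applied around the solid and transparent passes (the draw helpers are made up, not our actual code):

// Sketch only - the states are the ones defined above, the helpers are hypothetical.
GraphicsDevice device = sb.GraphicsDevice;
device.RasterizerState = _rasterizerState;

device.DepthStencilState = _solidDepthState;   // solid tiles: depth test + depth write
DrawSolidTilePasses();                         // hypothetical helper

device.DepthStencilState = _alphaDepthState;   // semi-transparent tiles: depth test, no write
DrawTransparentTilePasses();                   // hypothetical helper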

In the shader we set the blend states as follows:

The first solid layer uses

AlphaBlendEnable = False; 
SrcBlend = One; 
DestBlend = Zero; 

All the other solid and transparent layers (drawn later) use

AlphaBlendEnable = True; 
SrcBlend = SrcAlpha;
DestBlend = InvSrcAlpha; 

Other shaders use these states too. The SpriteBatch used for the SpriteFonts uses the default settings.

Generated Texture

Some tiles are generated on the fly and saved to file. The file is loaded when the map is loaded. This is done using a RenderTarget created as follows:

RenderTarget2D rt = new RenderTarget2D(sb.GraphicsDevice, 768, 1792, false, 
    SurfaceFormat.Color, DepthFormat.None);
sb.GraphicsDevice.SetRenderTarget(rt);

Once generated, the texture is saved to file and loaded back, so we don't lose it when the device resets (at that point it no longer lives on a RenderTarget). I tried using mipmapping, but it is a spritesheet: there is no information on where tiles are placed, so mipmapping is useless, and it didn't solve the problem.
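For completeness, a rough sketch of what that save/load round-trip can look like in XNA 4.0 (not our literal code; the path is made up):

// Requires System.IO.
using (FileStream stream = File.Create("generatedTiles.png"))
    rt.SaveAsPng(stream, rt.Width, rt.Height);

// Later, when the map is loaded (and after any device reset):
Texture2D generatedTiles;
using (FileStream stream = File.OpenRead("generatedTiles.png"))
    generatedTiles = Texture2D.FromStream(sb.GraphicsDevice, stream);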

Vertices

We loop through every position. There is no floating-point math here yet, but position is a Vector3 (Float3).

for (UInt16 x = 0; x < _width;  x++)
{
    for (UInt16 y = 0; y < _height; y++)
    {
        [...]
        position.Z = priority; // this is a byte 0-5

To position the tiles the following code is used:

tilePosition.X = position.X;
tilePosition.Y = position.Y + position.Z;
tilePosition.Z = position.Z;

As you know, floats are 32 bits wide, with 24 bits of mantissa precision. Z is at most a byte (5 = 00000101), and X and Y fit in 16 and 24 bits respectively, so I assumed nothing could go wrong in terms of floating-point precision.

this.Position = tilePosition;
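A quick sanity check of that assumption (illustrative only, not part of our code):

// Both operands are integers far below 2^24, so the sum is represented exactly.
float y = 65535f;        // largest possible UInt16 tile coordinate
float priority = 5f;     // maximum priority
System.Diagnostics.Debug.Assert(y + priority == 65540f);   // no rounding occurs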

The vertices are then set as follows (so they all share the same tile position):

Vector3[] offsets  = new Vector3[] { Vector3.Zero, Vector3.Right, 
    Vector3.Right + (this.IsVertical ? Vector3.Forward : Vector3.Up), 
    (this.IsVertical ? Vector3.Forward : Vector3.Up) };
Vector2[] texOffset = new Vector2[] { Vector2.Zero, Vector2.UnitX, 
    Vector2.One, Vector2.UnitY };

for (int i = 0; i < 4; i++)
{
    SetVertex(out arr[start + i]);
    arr[start + i].vertexPosition = Position + offsets[i];

    if (this.Tiles[0] != null)
        arr[start + i].texturePos1 += texOffset[i] * this.Tiles[0].TextureWidth;
    if (this.Tiles[1] != null)
        arr[start + i].texturePos2 += texOffset[i] * this.Tiles[1].TextureWidth;
    if (this.Tiles[2] != null)
        arr[start + i].texturePos3 += texOffset[i] * this.Tiles[2].TextureWidth;
}

Shader

The shader can draw animated tiles and static tiles. Both use the following sampler state:

sampler2D staticTilesSampler = sampler_state { 
    texture = <staticTiles> ; magfilter = POINT; minfilter = POINT; 
    mipfilter = POINT; AddressU = clamp; AddressV = clamp;};

The shader doesn't set any other sampler states, and neither do we in our code.

In every pass we clip on the alpha value (so we don't get black pixels) using the following line:

clip(color.a - alpha)

Alpha is 1 for the first solid layer and almost 0 for any other layer. This means that any pixel with even a fraction of alpha will be drawn, except on the bottom layer (because we wouldn't know what to blend it with).

Camera

We use a camera that looks at the tiles from the top down, making them appear flat, and we use the Z value to layer them according to external layering data (the 3 layers are not always in the right order). This also works fine. The camera updates the transformation matrix. If you are wondering why it has a somewhat odd structure like this.AddChange: the code is double buffered (this also works). The transformation matrix is built as follows:

// First get the position we will be looking at. Zoom is normally 32
Single x = (Single)Math.Round((newPosition.X + newShakeOffset.X) * 
    this.Zoom) / this.Zoom;
Single y = (Single)Math.Round((newPosition.Y + newShakeOffset.Y) * 
    this.Zoom) / this.Zoom;

// Translation
Matrix translation = Matrix.CreateTranslation(-x, -y, 0);

// Projection
Matrix obliqueProjection = new Matrix(1, 0, 0, 0,
                                      0, 1, 1, 0,
                                      0, -1, 0, 0,
                                      0, 0, 0, 1);

Matrix taper = Matrix.Identity; 

// Base it off the center of the screen
Matrix orthographic = Matrix.CreateOrthographicOffCenter(
    -_resolution.X / this.Zoom / 2, 
     _resolution.X / this.Zoom / 2, 
     _resolution.Y / this.Zoom / 2, 
    -_resolution.Y / this.Zoom / 2, 
    -10000, 10000);

// Shake rotation. This works fine       
Matrix shakeRotation = Matrix.CreateRotationZ(
    newShakeOffset.Z > 0.01 ? newShakeOffset.Z / 20 : 0);

// Projection is used in Draw/Render
this.AddChange(() => { 
    this.Projection = translation * obliqueProjection * 
    orthographic * taper * shakeRotation; }); 

Reasoning and Flow

There are 3 layers of tile data. Each tile is flagged with IsSemiTransparent. When a tile is IsSemiTransparent, it needs to be drawn after something that is not IsSemiTransparent. Tile data is stacked onto a SplattedTile instance when loaded. So even if layer one of the tile data is empty, layer one of the SplattedTile will contain tile data (given that at least one layer has tile data). The reason is that the Z-buffer wouldn't know what to blend with if the layers were drawn in their original order, since there might be no solid pixels behind a transparent one.

The layers do NOT have a Z value; individual tile data has one. A ground tile has Priority = 0. Tiles with the same Priority will be ordered by layer (draw order) and opaqueness (semi-transparent after opaque). Tiles with a different priority will be drawn according to their priority.

The first solid layer has no destination pixels to blend with, so I set it to DestBlend = Zero. It also doesn't need alpha blending, since there is nothing to alpha-blend with. The other layers (5 in total: 2 solid, 3 transparent) might be drawn where there is already color data and need to blend accordingly.
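In other words, for those later layers the blended result per pixel is the standard alpha-blend equation (my notation, not taken from the shader):

result.rgb = src.rgb * src.a + dest.rgb * (1 - src.a)

while the first solid layer simply overwrites whatever is in the target (result.rgb = src.rgb).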

Before iterating through the 6 passes, the projection matrix is set. When using no taper, this works. When using a taper, it doesn't.
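To make the flow concrete, here is a stripped-down sketch of that loop. The effect parameter viewProjection matches the shader further down; _tileEffect, _vertices and _indices are placeholder names, and the technique/pass layout is simplified compared to our real code.

// Set the camera matrix once, then run the passes (3 solid, then 3 semi-transparent).
_tileEffect.Parameters["viewProjection"].SetValue(this.Projection);

foreach (EffectPass pass in _tileEffect.CurrentTechnique.Passes)
{
    pass.Apply();
    device.DrawUserIndexedPrimitives(
        PrimitiveType.TriangleList,
        _vertices, 0, _vertices.Length,
        _indices, 0, _indices.Length / 3);
}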

The Problem

We want to mimic some more depth by applying a taper matrix. We tried several values, but here is an example:

new Matrix(1, 0, 0, 0,
           0, 1, 0, 0.1f,
           0, 0, 1, 0,
           0, 0, 0, 1);

The screen (everything with height value 0, i.e. all flat stuff) is squeezed; the lower the Y (higher on the screen), the more it is squeezed. (The 0.1 entry feeds Y into the output W, so after the perspective divide every vertex is scaled by 1 / (1 + 0.1 * Y).) This actually works, but now random black lines appear almost everywhere. A few tiles seem to be unaffected, but I don't see the correlation. We think it might have something to do with interpolation or mipmaps.

And here is an image to show you what I am talking about: Screenshot with lines.

The tiles not affected seem to be static tiles NOT on the bottom layer. However, transparent tiles on top of those show other graphical artifacts: they miss lines (rows simply get deleted). I marked this text because I think it is a hint at what's happening. The vertical lines appear if I set the mip, mag and min filters to Linear.

Here is an image zoomed in (in game zoom), showing the artifact on tiles on layer 2 or 3 Screenshot with lines, zoomed in

We already tried

  • mipfilter on Point or Linear
  • Setting GenerateMipMaps on the original textures
  • Setting GenerateMipMaps on the generated textures (passing true for the mipMap flag in the RenderTarget2D constructor)
  • Turning on mipmapping (this only gave more artifacts when zoomed out, because I was mipmapping a spritesheet)
  • Not drawing layer 2 and 3 (this actually makes ALL the tiles have black lines)
  • DepthBufferEnable = false
  • Setting all solid layers to SrcBlend = One; DestBlend = Zero;
  • Setting all solid layers to SrcBlend = SrcAlpha; DestBlend = InvSrcAlpha;
  • Not drawing transparent layer (lines are still there).
  • Removing clip(opacity) in the shader. This only removes some lines. We are investigating this further.
  • Searching for the same problem on msdn, stackoverflow and using google (with no luck).

Does anyone recognize this problem? On a final note, we do call the SpriteBatch AFTER drawing the tiles, and we use another shader for the avatars (these show no problems, because they have height > 0). Does this undo our sampler state? Or...?


Solution 1:

Look at the rock at the bottom of that last image - it's got sandy-colored lines going through it. Presumably, you are drawing the sand first, then the rock on top.

This tells me it's not "black lines being drawn" through the textures, but that parts of the textures are not being drawn. Since it happens when you stretch vertically, this almost certainly means you are creating a mapping from old pixels to new pixels without interpolating the values in between in the new texture.

For instance, using the mapping (x,y) --> (x, 2y), the points get mapped like (0,0) --> (0,0), (0,1) --> (0,2), and (0,2) --> (0,4). Notice that no points in the source texture map to (0,1) or (0,3). This would cause the background to seep through. I bet if you change it to stretch horizontally, you'll see vertical lines.

What you would need to do is map the other way: given each pixel in the target texture, find its value in the source image using the inverse of the above transformation. You will probably get fractional pixel coordinates, so you will want to interpolate values.
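For illustration, a rough CPU-side sketch of that inverse approach in C# (using XNA's Color.Lerp for brevity; this shows the idea, it is not drop-in code):

// For every destination pixel, look up where it came from in the source and
// bilinearly interpolate, so no destination pixel is ever left unwritten.
static Color SampleBilinear(Color[] src, int w, int h, float x, float y)
{
    int x0 = (int)x, y0 = (int)y;
    int x1 = Math.Min(x0 + 1, w - 1), y1 = Math.Min(y0 + 1, h - 1);
    float fx = x - x0, fy = y - y0;
    Color top    = Color.Lerp(src[y0 * w + x0], src[y0 * w + x1], fx);
    Color bottom = Color.Lerp(src[y1 * w + x0], src[y1 * w + x1], fx);
    return Color.Lerp(top, bottom, fy);
}

// dst is src stretched by (scaleX, scaleY):
for (int dy = 0; dy < dstHeight; dy++)
    for (int dx = 0; dx < dstWidth; dx++)
        dst[dy * dstWidth + dx] = SampleBilinear(src, srcWidth, srcHeight, dx / scaleX, dy / scaleY);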

I am not familiar at all with XNA, but there is probably a more convenient way to do this than by hand.

Solution 2:

The problem has to do with numeric types in HLSL.

First, let's clean up that shader; this is how we found the actual problem. Here is a unified diff for SplattedTileShader.fx before the taper fix was put in:

@@ -37,76 +37,31 @@
    data.Position = mul(worldPosition, viewProjection);

-   
     return data;
 }

-float4 PixelLayer1(VertexShaderOutput input, uniform float alpha) : COLOR0
+float4 PixelLayer(VertexShaderOutput input, uniform uint layer, uniform float alpha) : COLOR0
 {
-   if(input.TextureInfo[0] < 1)
+   if(input.TextureInfo[0] < layer)
        discard;

-    float4 color;
+   float4 color;
+   float2 coord;
+   if(layer == 1)
+       coord = input.TexCoord1;
+   else if(layer == 2)
+       coord = input.TexCoord2;
+   else if(layer == 3)
+       coord = input.TexCoord3;

-   switch (input.TextureInfo[1])
+   switch (input.TextureInfo[layer])
    {
        case 0:
-           color = tex2D(staticTilesSampler, input.TexCoord1);
+           color = tex2D(staticTilesSampler, coord);
            break;
        case 1:
-           color = tex2D(autoTilesSampler, input.TexCoord1);
+           color = tex2D(autoTilesSampler, coord);
            break;
        case 2:
-           color = tex2D(autoTilesSampler, input.TexCoord1 + float2(frame, 0) * animOffset) * (1 - frameBlend) + tex2D(autoTilesSampler, input.TexCoord1 + float2(nextframe, 0) * animOffset) * frameBlend;
-           break;
-   }
-
-   clip(color.a - alpha);
-
-   return color;
-}
-
-float4 PixelLayer2(VertexShaderOutput input, uniform float alpha) : COLOR0
-{
-   if(input.TextureInfo[0] < 2)
-       discard;
-
-    float4 color;
-
-   switch (input.TextureInfo[2])
-   {
-       case 0:
-           color = tex2D(staticTilesSampler, input.TexCoord2);
-           break;
-       case 1:
-           color = tex2D(autoTilesSampler, input.TexCoord2);
-           break;
-       case 2: 
-           color = tex2D(autoTilesSampler, input.TexCoord2 + float2(frame, 0) * animOffset) * (1 - frameBlend) + tex2D(autoTilesSampler, input.TexCoord2 + float2(nextframe, 0) * animOffset) * frameBlend;
-           break;
-   }
-
-   clip(color.a - alpha);
-
-   return color;
-}
-
-float4 PixelLayer3(VertexShaderOutput input, uniform float alpha) : COLOR0
-{
-   if(input.TextureInfo[0] < 3)
-       discard;
-
-    float4 color;
-
-   switch (input.TextureInfo[3])
-   {
-       case 0:
-           color = tex2D(staticTilesSampler, input.TexCoord3);
-           break;
-       case 1:
-           color = tex2D(autoTilesSampler, input.TexCoord3);
-           //color = float4(0,1,0,1);
-           break;
-       case 2:
-           color = tex2D(autoTilesSampler, input.TexCoord3 + float2(frame, 0) * animOffset) * (1 - frameBlend) + tex2D(autoTilesSampler, input.TexCoord3 + float2(nextframe, 0) * animOffset) * frameBlend;
+           color = tex2D(autoTilesSampler, coord + float2(frame, 0) * animOffset) * (1 - frameBlend) + tex2D(autoTilesSampler, coord + float2(nextframe, 0) * animOffset) * frameBlend;
            break;
    }
@@ -125,5 +80,5 @@
        DestBlend = Zero;
         VertexShader = compile vs_3_0 VertexShaderFunction();
-        PixelShader = compile ps_3_0 PixelLayer1(1);
+        PixelShader = compile ps_3_0 PixelLayer(1,1);
     }

@@ -134,5 +89,5 @@
        DestBlend = InvSrcAlpha;
        VertexShader = compile vs_3_0 VertexShaderFunction();
-        PixelShader = compile ps_3_0 PixelLayer2(0.00001);
+        PixelShader = compile ps_3_0 PixelLayer(2,0.00001);
    }

@@ -143,5 +98,5 @@
        DestBlend = InvSrcAlpha;
        VertexShader = compile vs_3_0 VertexShaderFunction();
-        PixelShader = compile ps_3_0 PixelLayer3(0.00001);
+        PixelShader = compile ps_3_0 PixelLayer(3,0.00001);
    }
 }
@@ -155,5 +110,5 @@
        DestBlend = InvSrcAlpha;
         VertexShader = compile vs_3_0 VertexShaderFunction();
-        PixelShader = compile ps_3_0 PixelLayer1(0.000001);
+        PixelShader = compile ps_3_0 PixelLayer(1,0.000001);
     }

@@ -164,5 +119,5 @@
        DestBlend = InvSrcAlpha;
        VertexShader = compile vs_3_0 VertexShaderFunction();
-        PixelShader = compile ps_3_0 PixelLayer2(0.000001);
+        PixelShader = compile ps_3_0 PixelLayer(2,0.000001);
    }

@@ -173,5 +128,5 @@
        DestBlend = InvSrcAlpha;
        VertexShader = compile vs_3_0 VertexShaderFunction();
-        PixelShader = compile ps_3_0 PixelLayer3(0.00001);
+        PixelShader = compile ps_3_0 PixelLayer(3,0.00001);
    }
 }


As you can see, there is a new uniform input parameter called layer (type = uint), and there is now one PixelLayer function instead of three.

Next is the unified diff for SplattedTileVertex.cs

@@ -11,5 +11,5 @@
     {
         internal Vector3 vertexPosition;
-        internal byte textures;
+        internal float textures;
         /// <summary>
         /// Texture 0 is static tiles
@@ -17,7 +17,7 @@
         /// Texture 2 is animated autotiles
         /// </summary>
-        internal byte texture1;
-        internal byte texture2;
-        internal byte texture3;
+        internal float texture1;
+        internal float texture2;
+        internal float texture3;
         internal Vector2 texturePos1;
         internal Vector2 texturePos2;
@@ -27,8 +27,8 @@
         (
             new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
-            new VertexElement(12, VertexElementFormat.Byte4, VertexElementUsage.PointSize, 0),
-            new VertexElement(16, VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 0),
-            new VertexElement(24, VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 1),
-            new VertexElement(32, VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 2)
+            new VertexElement(12, VertexElementFormat.Vector4, VertexElementUsage.PointSize, 0),
+            new VertexElement(28, VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 0),
+            new VertexElement(36, VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 1),
+            new VertexElement(44, VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 2)
         );

Yes, we changed the types!

And now the problems come to light. It seems that, because of the way the vertex input is processed on its way to the pixel shader, the floats will never be exactly equal to the integer value we stored. The reasoning behind this goes beyond this thread, but maybe I should make a community wiki on it.

So, what was happening?

Ok, so we used to discard the values that were not on the layer (if input.TextureInfo[0] < layer -> discard). Inside input.TextureInfo[layer] there is a float, and we compare that float to our uint layer value. Here the magic happens: some pixels will just be an exact match (or just above that layer value) and pass the test, which would be fine, code-wise, if the type were a (u)int, but it's not. Other pixels end up a tiny fraction below the integer and get discarded, producing the black lines.
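A tiny illustration of the mismatch (made-up numbers, written as C# for readability):

// We stored the integer 3 per vertex, but after interpolation the pixel
// shader may receive something like this:
float textureInfo = 2.9999995f;
uint layer = 3;
bool discarded = textureInfo < layer;               // true -> pixel discarded, black line
bool kept = !(textureInfo < (float)layer - 0.5f);   // true -> pixel survives with the fix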

So how to fix it? Well, go with the "halfway there is probably there" rule. The rendering code draws a pixel of a tile if it's at least halfway there; we do the same thing with the layers.

Here is the fix (unified diff) for SplattedTileShader.fx

@@ -42,28 +42,24 @@
 float4 PixelLayer(VertexShaderOutput input, uniform uint layer, uniform float alpha) : COLOR0
 {
-   if(input.TextureInfo[0] < layer)
+   if(input.TextureInfo[0] < (float)layer - 0.5)
        discard;

    float4 color;
    float2 coord;
-   if(layer == 1)
+   if(layer < 1.5)
        coord = input.TexCoord1;
-   else if(layer == 2)
+   else if(layer < 2.5)
        coord = input.TexCoord2;
-   else if(layer == 3)
+   else
        coord = input.TexCoord3;

-   switch (input.TextureInfo[layer])
-   {
-       case 0:
-           color = tex2D(staticTilesSampler, coord);
-           break;
-       case 1:
-           color = tex2D(autoTilesSampler, coord);
-           break;
-       case 2:
-           color = tex2D(autoTilesSampler, coord + float2(frame, 0) * animOffset) * (1 - frameBlend) + tex2D(autoTilesSampler, coord + float2(nextframe, 0) * animOffset) * frameBlend;
-           break;
-   }
+   float type = input.TextureInfo[layer];
+
+   if (type < 0.5)
+       color = tex2D(staticTilesSampler, coord);
+   else if (type < 1.5)
+       color = tex2D(autoTilesSampler, coord);
+   else
+       color = tex2D(autoTilesSampler, coord + float2(frame, 0) * animOffset) * (1 - frameBlend) + tex2D(autoTilesSampler, coord + float2(nextframe, 0) * animOffset) * frameBlend;

    clip(color.a - alpha);

Now all the types are correct, the code works as it should, and the problem is solved. It had nothing to do with the discard(...) code, which is what I initially pointed at.

Thanks everyone that participated in helping us solve this.

Solution 3:

I can't figure out why the black lines are appearing, but I can give you another way to render the landscape that could result in it looking correct (and, hopefully, give you a little speed boost).

Sprites

You will need a texture atlas (a.k.a. sprite sheet) for this to work. You could split your atlas into multiple atlases and use multi-texturing.

Vertex Buffer

What I would do is scratch the SpriteBatch; you always know where your sprites are going to be, so create a VertexBuffer at startup (possibly one per layer) and use it to draw the layers. Something like this (this is a 2D buffer, it just 'looks' 3D like yours):

Vertex Buffer Sample

The vertex definition would probably consist of:

  • Position (Vector2)
  • Texture Coordinate (Vector2)
  • Color (Vector4/Color)

Each time the landscape needs to be 'cycled' (more on this later), you would go through the map under the camera and update the texture co-ordinates and/or color in the VertexBuffer. Don't refresh the buffer each frame. I wouldn't send the texture co-ordinates to the GPU in the [0, 1] range, but rather in [0, Number of Sprites] - calculate the [0, 1] range in your vertex shader.
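A rough sketch of such a vertex (the names and exact layout are just an example, adapt them to your setup):

// 20-byte vertex: grid position, atlas coordinate in [0, sprite count], tint.
public struct TileVertex : IVertexType
{
    public Vector2 Position;    // in [0, 1] across the visible block of tiles
    public Vector2 TexCoord;    // in [0, number of sprites]; divided down to [0, 1] in the vertex shader
    public Color Color;

    public static readonly VertexDeclaration Declaration = new VertexDeclaration(
        new VertexElement(0,  VertexElementFormat.Vector2, VertexElementUsage.Position, 0),
        new VertexElement(8,  VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 0),
        new VertexElement(16, VertexElementFormat.Color,   VertexElementUsage.Color, 0));

    VertexDeclaration IVertexType.VertexDeclaration { get { return Declaration; } }
}

// Six vertices per tile, no index buffer (see the note below), created once at startup:
var vertexBuffer = new DynamicVertexBuffer(device, typeof(TileVertex),
    visibleTilesX * visibleTilesY * 6, BufferUsage.WriteOnly);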

Important: Don't share vertices (i.e. don't use an IndexBuffer), because vertices that would be shared by more than two faces need to remain distinct (they have distinct texture co-ordinates) - build up the buffer as though IndexBuffers didn't exist. Using an IndexBuffer is wasteful in this scenario, so just stick with a VertexBuffer alone.

Rendering

The world matrix you use will map [0, 1] to the size of the screen plus the size of a tile (i.e. a simple scale x = Viewport.Width + 32 and y = Viewport.Height + 32). Your projection matrix would be an identity matrix.

The view matrix is tricky. Imagine that your map is looking at the current block of tiles (which it is) at {0,0}; what you need to do is figure out the offset (in pixels) from that to where your camera is looking. So essentially it will be an offset matrix with x = Camera.X - LeftTile.X * (Viewport.Width / NumTiles.X), and similarly for y.

The matrices are the only tricky bit, once you have them set up it's a simple DrawUserPrimitives() call and you are done.
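Roughly, in code (Camera, LeftTile, NumTiles, the effect parameter name and whether you draw from the buffer or a CPU-side array are all placeholders for your own setup):

// Matrices as described above: scale [0, 1] up to screen-plus-one-tile,
// offset by where the camera sits inside the current block, identity projection.
Matrix world = Matrix.CreateScale(viewport.Width + 32, viewport.Height + 32, 1);
Matrix view = Matrix.CreateTranslation(
    -(camera.X - leftTile.X * (viewport.Width / (float)numTilesX)),
    -(camera.Y - topTile.Y * (viewport.Height / (float)numTilesY)),
    0);
Matrix projection = Matrix.Identity;

effect.Parameters["WorldViewProjection"].SetValue(world * view * projection);

foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
    pass.Apply();
    device.SetVertexBuffer(vertexBuffer);
    device.DrawPrimitives(PrimitiveType.TriangleList, 0, vertexCount / 3);
    // (or DrawUserPrimitives if you keep the vertices in a CPU-side array)
}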

Note that this only deals with your landscape, draw your other sprites as you are today.

Landscape Cycling

When the position of your camera changes, you basically need to determine if it's looking at a new block of tiles and update the VertexBuffers appropriately (texture co-ordinates and color - leave the position alone, no need to recalculate it).
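Something along these lines (all names invented):

// Detect when the camera has crossed into a new block of tiles, then refresh
// only the texture coordinates / colors; the positions never change.
Point block = new Point((int)(camera.X / tileSize), (int)(camera.Y / tileSize));
if (block != currentBlock)
{
    currentBlock = block;
    FillVisibleTiles(vertices, currentBlock);   // hypothetical: rewrites TexCoord/Color only
    vertexBuffer.SetData(vertices);
}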

Alternatively

Another option is to render each layer to a RenderTarget2D and apply your current transformation once to the entire layer. This would either resolve your problem or make the real reason very apparent.
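In rough pseudocode (the helpers are placeholders for your existing drawing code):

// Draw one layer, untransformed, into its own render target...
device.SetRenderTarget(layerTarget);
device.Clear(Color.Transparent);
DrawLayerWithoutTaper(layer);                    // hypothetical: your existing tile pass, no taper
device.SetRenderTarget(null);

// ...then draw that single texture with the tapered projection, so the taper
// is applied once to the whole layer instead of to every individual tile.
DrawLayerQuad(layerTarget, taperedProjection);   // hypothetical helper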

Side note: I would provide sample code if it weren't 00:40 here; this question deserves it. I'll see how much time I have tomorrow night.