.
Polygon Scan Conversion 06.july.1996 GFX

by Hin Jang

The process of filling the interior region of a polygon is known as polygon scan conversion. Whereas the standard approach requires two data structures and a sorting procedure, and is device-dependent [1], the critical points method described herein is easy to implement, uses less space and is fast [2]. Assume the polygon has no intersecting edges and that its vertices are represented by two cyclic arrays X and Y.

The scan conversion algorithm begins with the Build_CR( ) procedure, which determines the set CR of critical points of the polygon. A critical point is a vertex whose y coordinate is either a local minimum or is a representative of a set of consecutive vertices that together form a local minimum. After Build_CR( ), the active edge list (AEL) is initialised to empty. From every critical point whose y value is less than scanline S, each of the two polygon edges ascending from it is followed until the edge either turns down before reaching S or passes S. If the edge turns down, it is discarded; otherwise a new element is inserted into the AEL. Each AEL element contains the index of the vertex from which the search advances, the direction of advance along the polygon arrays, and the x coordinate of the intersection of the edge with the scanline.

The polygon is then filled at the current scanline by traversing the AEL and drawing horizontal lines between alternate pairs of AEL elements. At the next scanline, the AEL is updated: for every element currently in the AEL, the edge is followed in the direction specified until it either turns down before the scanline or passes it. If the edge turns down, it is deleted from the AEL; otherwise the information in the element is updated. The procedure MoveUp( ) determines whether an edge ascends to meet the scanline and, if so, computes the intersection of the scanline with the edge.

The entire algorithm for the critical points method is shown below, as pseudocode, where c is the number of critical points and CR is an array of critical points. It is assumed that procedures Insert( ) and Delete( ) are available for the AEL and that Paint( ) is available for drawing horizontal lines between alternate pairs of AEL elements.

Optimisations can take many forms; the pseudocode below presents the basic, unoptimised method.


   CriticalPoints()
   {
     /* m:  the index to array CR
        j and s are passed to MoveUp( ) by reference */

     Build_CR()
     initialise AEL to empty
     m = 1

     for (each scan_line) {
       for (each AEL element k) {
         j  = k->index
         di = k->direction
         MoveUp(scan_line, j, di, s)
         if (j == 0)
            Delete(k)
         else {
            k->index = j
            k->x_val = s
         }
       }

       /* insert new critical points when necessary */

      while (m <= c and Y[CR[m]] <= scan_line) {
         j = CR[m]
         MoveUp(scan_line, j, -1, s)
         if (j > 0) Insert(j, -1, s)
         j = CR[m]
         MoveUp(scan_line, j, 1, s)
         if (j > 0) Insert(j, 1, s)
         ++m
      }

      Paint()

     }
   }

   Build_CR()
   {
      candidate = 0

      for (i=1 to n) {
         if (Y[i] > Y[i+1])
            candidate = 1
         else if (Y[i] < Y[i+1] and candidate) {
            add i to CR
            candidate = 0
         }
      }

      if (candidate and Y[1] < Y[2])
         add 1 to CR
   }

   MoveUp(scan_line, j, di, s)
   {
      /* j:  the current array index from which to
             start the search
         di: indicates the direction of advance from
             j along the polygon arrays X and Y
         s:  the x coordinate of the intersection point */

     j1 = j + di

     while (Y[j1] <= scan_line) {
        if (Y[j1] < Y[j]) {           /* polygon turns down */
           j = 0
           return
        } else {                         /* move up */
           j = j1 
           j1 = j + di
        }
     }

     s = x coordinate of intersection of edge
         ( (X[j], Y[j]), (X[j1], Y[j1]) ) with scan_line
   }
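
Build_CR( ) can be made concrete as follows. This C sketch uses zero-based arrays with modular indexing instead of the 1-based cyclic arrays of the pseudocode, and begins its scan at a descending edge so that no separate wrap-around test is needed; it is a sketch of the technique, not the original implementation.

```c
/* A critical point is taken to be the last vertex of a run of
   consecutive vertices whose y value is a local minimum of the
   polygon boundary.  Indices of the critical points are stored in
   cr and their count is returned. */
int build_cr(const double Y[], int n, int cr[])
{
    int count = 0;
    int candidate = 0;
    int start = -1;

    /* find a strictly descending edge so the scan begins in a known
       state; this replaces the wrap-around test in the pseudocode */
    for (int i = 0; i < n; ++i)
        if (Y[i] > Y[(i + 1) % n]) { start = i; break; }
    if (start < 0)
        return 0;   /* all vertices share one y value: degenerate */

    /* one full cyclic pass starting just after the descent */
    for (int k = 0; k < n; ++k) {
        int i = (start + k) % n;
        int j = (i + 1) % n;
        if (Y[i] > Y[j]) {
            candidate = 1;      /* descending: a minimum may follow */
        } else if (Y[i] < Y[j] && candidate) {
            cr[count++] = i;    /* ascent after descent: i is critical */
            candidate = 0;
        }
        /* equal y values extend a plateau; state is unchanged */
    }
    return count;
}
```

For a unit square traversed from the bottom edge upward, the routine reports one critical point: the last vertex of the bottom edge.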



[1] Foley, J.D., A. van Dam, S.K. Feiner, and J.F. Hughes, Computer Graphics Principles and Practice, Second Edition, Addison-Wesley, Reading, 92-99, 1990

[2] Gordon, D., M.A. Peterson, and R.A. Reynolds, "Fast Polygon Scan Conversion with Medical Applications," IEEE Computer Graphics and Applications, 14(6):20-27, November 1994


.
Perspective Projection 04.july.1996 GFX

by Hin Jang

The strategy of transforming 3D objects onto a 2D projection plane consists of the following steps [1]: translate the world so that the camera lies at the origin, rotate the world so that the viewing direction coincides with a coordinate axis, and apply a perspective divide to project each point onto the view plane.

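The original source listing is not reproduced here; the following C fragment is a minimal sketch of that pipeline, with the rotation step omitted for brevity (the camera is assumed to look down the world z axis). The names world_to_screen, eye and d are illustrative, not taken from the original listing.

```c
/* The camera sits at `eye`; d is the distance from the camera to the
   projection plane.  Returns 0 when the point is behind the camera. */
typedef struct { double x, y, z; } Vec3;

int world_to_screen(Vec3 p, Vec3 eye, double d, double *sx, double *sy)
{
    /* 1. translate the world so the camera is at the origin */
    double x = p.x - eye.x;
    double y = p.y - eye.y;
    double z = p.z - eye.z;

    /* 2. a rotation into the view frame would normally go here */

    /* 3. perspective divide: project onto the plane z = d */
    if (z <= 0.0)
        return 0;
    *sx = d * x / z;
    *sy = d * y / z;
    return 1;
}
```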



[1] Bourke, P., World to Screen Projection Transformation, School of Architecture Property and Planning, University of Auckland, New Zealand, December 1994

[2] Catmull, E., and A.R. Smith, "3-D Transformations of Images in Scanline Order," Computer Graphics, SIGGRAPH 1980 Proceedings, 279-285


.
Projective Texture Mapping 17.december.1996 GFX

by Hin Jang
revised on 14.april.1999

Mapping a set of parameters onto a surface of a three-dimensional scene enhances the realism of the image. Parameters include surface normal vectors, transparency, specularity, and illumination [1]. The most widely known and applied surface parameter is colour. Mapping of this sort is known as texture mapping. For scenes viewed with perspective projections, proper foreshortening of texture images is achieved through projective mapping: a projection of one plane through a point onto another plane. The source and destination points are expressed in terms of homogeneous coordinates, a system used extensively in projective geometry. A point in destination space is represented by the homogeneous vector pd = (x', y', w) where w is an arbitrary nonzero number. To recover the actual coordinates we simply divide by the homogeneous component, as in (x, y) = (x'/w, y'/w). A point in source space is denoted by ps = (u', v', q).

computing homogeneous texture coordinates
The transformation of ps to pd, known as forward projective mapping, is easily written in homogeneous matrix notation:

         pd = ps Msd

                         | a d g |
(x', y', w) = (u', v', q)| b e h |
                         | c f i |

where i = 1 without loss of generality. Given texture coordinates (u, v), the screen coordinates are found by computing

    au + bv + c        du + ev + f
x = -----------;   y = -----------
    gu + hv + i        gu + hv + i
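
Written as code, the forward mapping is a direct transcription of these two formulas; the flat coefficient layout below is an illustrative choice, not part of the original text.

```c
/* Forward projective mapping of texture (u, v) to screen (x, y).
   The nine coefficients are stored in reading order a..i.  Returns 0
   when the denominator gu + hv + i vanishes (the point maps to
   infinity). */
int forward_map(const double m[9], double u, double v,
                double *x, double *y)
{
    double w = m[6] * u + m[7] * v + m[8];     /* gu + hv + i */
    if (w == 0.0)
        return 0;
    *x = (m[0] * u + m[1] * v + m[2]) / w;     /* (au + bv + c) / w */
    *y = (m[3] * u + m[4] * v + m[5]) / w;     /* (du + ev + f) / w */
    return 1;
}
```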

The transformation of pd to ps, known as inverse projective mapping, is
         ps = pd Mds

                         | A D G |
(u', v', q) = (x', y', w)| B E H |
                         | C F I |

                         |ei - fh  fg - di  dh - eg|
            = (x', y', w)|ch - bi  ai - cg  bg - ah|
                         |bf - ce  cd - af  ae - bd|
Notice that the inverse mapping matrix Mds is the adjoint of the forward mapping matrix Msd. Given screen coordinates (x, y), the texture coordinates are found by computing
    Ax + By + C        Dx + Ey + F
u = -----------;   v = -----------
    Gx + Hy + I        Gx + Hy + I
Finding the coefficients a-h of the forward mapping matrix, assuming i = 1, requires solving eight equations in eight unknowns. With the vertices of the source and destination quadrilaterals numbered cyclically, these equations have the form
     auk + bvk + c
xk = -------------   ===>   uka + vkb + c - ukxkg - vkxkh = xk
     guk + hvk + i



     duk + evk + f
yk = -------------   ===>   ukd + vke + f - ukykg - vkykh = yk
     guk + hvk + i

where k = 0, 1, 2, 3. Written as an 8 x 8 system:
| u0 v0 1 0  0  0 -u0x0 -v0x0 | |a|   |x0|
| u1 v1 1 0  0  0 -u1x1 -v1x1 | |b|   |x1|
| u2 v2 1 0  0  0 -u2x2 -v2x2 | |c|   |x2|
| u3 v3 1 0  0  0 -u3x3 -v3x3 | |d| = |x3|
| 0  0  0 u0 v0 1 -u0y0 -v0y0 | |e|   |y0|
| 0  0  0 u1 v1 1 -u1y1 -v1y1 | |f|   |y1|
| 0  0  0 u2 v2 1 -u2y2 -v2y2 | |g|   |y2|
| 0  0  0 u3 v3 1 -u3y3 -v3y3 | |h|   |y3|
This linear system can be solved for the coefficients a-h using Gaussian elimination. In speed-critical situations there is a more efficient way of computing the coefficients of the forward mapping matrix [2]. Given a vertex correspondence as follows
   x  y  u v
   ---------
   x0 y0 0 0
   x1 y1 1 0
   x2 y2 1 1
   x3 y3 0 1
the eight equations reduce to
                       c = x0
             a + c - gx1 = x1
   a + b + c - gx2 - hx2 = x2
             b + c - hx3 = x3
                       f = y0
             d + f - gy1 = y1
   d + e + f - gy2 - hy2 = y2
             e + f - hy3 = y3
If
   dx1 = x1 - x2    dx2 = x3 - x2   sx = x0 - x1 + x2 - x3
   dy1 = y1 - y2    dy2 = y3 - y2   sy = y0 - y1 + y2 - y3

then
   g = | sx  dx2 |  /  | dx1  dx2 |
       | sy  dy2 |     | dy1  dy2 |

   h = | dx1  sx |  /  | dx1  dx2 |
       | dy1  sy |     | dy1  dy2 |

   a = x1 - x0 + gx1
   b = x3 - x0 + hx3
   c = x0
   d = y1 - y0 + gy1
   e = y3 - y0 + hy3
   f = y0
The desired inverse mapping is the adjoint of the matrix generated by the following routine, which maps a square to a quadrilateral. The adjoint of such a mapping is used to calculate the texture coordinates (u, v) given screen coordinates (x, y). The concatenation of two matrices Msm and Mmt yields Mst, a general quadrilateral-to-quadrilateral projective mapping [2].
                  |
      ---         |-----+           -----
     /   \        |     |           \    \
    /     \       |     |            \    \
    -------       ------+---          -----

           =====>              =====>

            Msm                 Mmt

Given the following declaration

   float  quad[4][2],
          sm[3][3],
          mt[3][3],
          st[3][3];

the algorithm to compute Mst (a screen space to texture space mapping) is

   quad = screen coordinates of quadrilateral
   adjoint(square_to_quad(quad, sm))

   quad = texture coordinates of texture
   square_to_quad(quad, mt)

   st = matrix_multiply(sm, mt)

The square_to_quad routine constructs such a matrix from the reduced equations given earlier.
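
Since the routine itself is not reproduced above, here is a C sketch of square_to_quad following the reduced equations, with the affine case (g = h = 0) handled when the quadrilateral is a parallelogram. The row-major layout of m matches the matrix | a d g ; b e h ; c f i | in the text.

```c
/* Build Msd mapping the unit square (0,0)(1,0)(1,1)(0,1) onto the
   quadrilateral quad[0..3].  Returns 0 for a degenerate
   quadrilateral. */
int square_to_quad(const double quad[4][2], double m[3][3])
{
    double x0 = quad[0][0], y0 = quad[0][1];
    double x1 = quad[1][0], y1 = quad[1][1];
    double x2 = quad[2][0], y2 = quad[2][1];
    double x3 = quad[3][0], y3 = quad[3][1];

    double sx = x0 - x1 + x2 - x3;
    double sy = y0 - y1 + y2 - y3;
    double g = 0.0, h = 0.0;

    if (sx != 0.0 || sy != 0.0) {          /* true projective case */
        double dx1 = x1 - x2, dx2 = x3 - x2;
        double dy1 = y1 - y2, dy2 = y3 - y2;
        double den = dx1 * dy2 - dx2 * dy1;
        if (den == 0.0)
            return 0;
        g = (sx * dy2 - dx2 * sy) / den;
        h = (dx1 * sy - sx * dy1) / den;
    }
    /* when sx = sy = 0 the mapping is affine and g = h = 0 */

    m[0][0] = x1 - x0 + g * x1;  m[0][1] = y1 - y0 + g * y1;  m[0][2] = g;
    m[1][0] = x3 - x0 + h * x3;  m[1][1] = y3 - y0 + h * y3;  m[1][2] = h;
    m[2][0] = x0;                m[2][1] = y0;                m[2][2] = 1.0;
    return 1;
}
```

Mapping the unit square onto itself yields the identity matrix, which is a convenient sanity check.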

an optimisation
Moreton and Heckbert devised a far more efficient alternative to computing the homogeneous texture coordinates as described above [3]. Moreton observed that a given parameter p suitable for linear interpolation across a polygon, in screen space, can be found by dividing the parameter by screen w, linearly interpolating p / w, and dividing the quantity p / w by 1 / w at each pixel to recover the desired parameter. The algorithm can be generalised as follows [3]

  1. associate n parameters with each vertex of the polygon
  2. at each vertex, transform object space coordinates to homogeneous screen space
  3. clip polygon to view frustum and linearly interpolate all parameters for any new vertices
  4. at each vertex, divide the homogeneous screen coordinates, all parameters, and the constant 1 by the quantity w.
  5. scan convert in screen space by linearly interpolating all parameters and at each pixel compute (p / w) / (1 / w) for each parameter. use these values for texture mapping.
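
Step 5 is the heart of the method. The following C sketch performs perspective-correct interpolation of a single parameter between two vertices; a scanline rasteriser applies the same per-pixel recovery along each span.

```c
/* p0, p1 are the parameter values and w0, w1 the homogeneous w at
   the endpoints; t in [0, 1] is the screen-space interpolation
   fraction. */
double perspective_lerp(double p0, double w0,
                        double p1, double w1, double t)
{
    double pw = (1.0 - t) * (p0 / w0) + t * (p1 / w1);   /* lerp p/w */
    double iw = (1.0 - t) * (1.0 / w0) + t * (1.0 / w1); /* lerp 1/w */
    return pw / iw;                  /* recover p: (p/w) / (1/w) */
}
```

When w0 = w1 the function reduces to ordinary linear interpolation; otherwise the result is pulled toward the vertex that is nearer the camera, which is exactly the foreshortening effect.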

parameter interpolation
If the polygon is a triangle, interpolation of texture parameters can be accomplished as follows. In three-dimensional space, a parameter p varies linearly across a triangle such that

   p  =  ax + by + cz

If the point (x, y, z) is represented in two-dimensional homogeneous coordinates where x' = x, y' = y and w = z, then
   p  =  ax' + by' + cw

Division by w yields a function that is linear in screen space
   p/w  =  ax + by + c

and is called the perspective-correct interpolation function for p / w. Given a value at each vertex of a triangle, as denoted by the parameter vector [p0, p1, p2], the coefficients of the function are

                            | x0 x1 x2 |^-1
    [a b c]  =  [p0 p1 p2]  | y0 y1 y2 |
                            | w0 w1 w2 |

             =  [p0 p1 p2]  M^-1

To recover the actual parameter, it is also necessary to interpolate 1 / w. The coefficients of the interpolation function for 1 / w are found using the parameter vector [1 1 1]. One possible algorithm is
   M^-1 = inverse of the homogeneous vector matrix
   for (each parameter) interpolation function = parameter vector * M^-1

   interpolate parameter functions
   w = 1/(1/w)
   for (each parameter) perspective-correct parameter = parameter * w
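
As a concrete illustration of the setup step, this C sketch computes [a b c] for one parameter by inverting M through its cofactors; the function name and array layout are illustrative, not from [3].

```c
/* The triangle's homogeneous screen vertices (xk, yk, wk), k = 0..2,
   form the columns of M = | x0 x1 x2 ; y0 y1 y2 ; w0 w1 w2 |.
   abc[j] is the dot product of p with row j of the cofactor matrix,
   divided by det(M), which realises [a b c] = [p0 p1 p2] M^-1.
   Returns 0 when det(M) = 0 (triangle seen edge-on or passing
   through the camera). */
int interp_coeffs(const double x[3], const double y[3], const double w[3],
                  const double p[3], double abc[3])
{
    double cof[3][3] = {
        {  y[1]*w[2] - y[2]*w[1], -(y[0]*w[2] - y[2]*w[0]),  y[0]*w[1] - y[1]*w[0]  },
        { -(x[1]*w[2] - x[2]*w[1]), x[0]*w[2] - x[2]*w[0], -(x[0]*w[1] - x[1]*w[0]) },
        {  x[1]*y[2] - x[2]*y[1], -(x[0]*y[2] - x[2]*y[0]),  x[0]*y[1] - x[1]*y[0]  }
    };
    double det = x[0] * cof[0][0] + x[1] * cof[0][1] + x[2] * cof[0][2];
    if (det == 0.0)
        return 0;
    for (int j = 0; j < 3; ++j)
        abc[j] = (p[0] * cof[j][0] + p[1] * cof[j][1] + p[2] * cof[j][2]) / det;
    return 1;
}
```

The determinant computed on the way is the det(M) discussed below, so the edge-on test and the area measure come for free.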

Note that M^-1 is computed only once for a given triangle, regardless of the number of parameters that will be interpolated. Although the setup may seem costly, the matrix inversion yields some useful information. If the inverse of M does not exist (i.e., det(M) = 0), the triangle is viewed edge-on or passes through the camera; in both cases, the triangle should not be rendered. Also, det(M) is positive for triangles that are front-facing. If the w coordinates are all one, det(M) is twice the signed screen area of the triangle. This information can be used for level-of-detail management. In summary, parameter interpolation of homogeneous coordinates is valid for any point representation and can be incorporated easily into parallel implementations and/or incremental scan line evaluators.



[1] Carey, R.J., and D.P. Greenberg, "Textures for Realistic Image Synthesis", Computers and Graphics, 9(2):125-138, 1985

[2] Heckbert, P.S., Fundamentals of Texture Mapping and Image Warping, UCB/CSD 89/516, Computer Science Division, University of California at Berkeley, 19-20, June 1989

[3] Heckbert, P.S., and H.P. Moreton, Interpolation for Polygon Texture Mapping and Shading, Department of Electrical Engineering and Computer Science, University of California at Berkeley, 1991

[4] Maillot, J., H. Yahia, and A. Verroust, "Interactive Texture Mapping," Computer Graphics, SIGGRAPH 1993 Proceedings, 27-34

[5] Litwinowicz, P., and G. Miller, "Efficient Techniques for Interactive Texture Placement," Computer Graphics, SIGGRAPH 1994 Proceedings, 119-122