Why are pixels square?
Pixels in screens are square, but I'm not sure why.
They aren't (necessarily) square.
Some would argue that they are never square ("A pixel is a point sample. It exists only at a point.").
So what's the advantage of squares in an LCD / CRT display?
- Other arrangements (such as triangles, hexagons or other space-filling polygons) are more computationally expensive.
- Every image format is based on pixels (whatever shape they are) arranged in a rectangular array.
- If we were to choose some other shape or layout, a lot of software would have to be rewritten.
- All the factories currently manufacturing displays with a rectangular pixel layout would have to be retooled for some other layout.
Practicalities of Using a Hexagonal Coordinate System
There are generally four major considerations when using a hexagonal coordinate system:
- Image Conversion – Hardware capable of capturing images from the real world directly onto a hexagonal lattice is highly specialised and not generally available, so an efficient means of converting a standard square-latticed image into a hexagonal one is required before any processing can be performed.
- Addressing and Storage – Any manipulation performed on an image must be able to index and access individual pixels (in this case hexagons rather than squares), and any image converted to hexagonal form should be storable in that form (otherwise the conversion would have to be repeated every time the image was accessed). Moreover, an indexing system that is simple to follow and simplifies the arithmetic of common operations would be very valuable (see the addressing sketch after this list).
- Image Processing Operations – To make effective use of the hexagonal coordinate system, operations must be designed, or existing ones converted, to exploit the strengths of the system, and particularly the strengths of the addressing system used for indexing and storage.
- Image Display – As with obtaining the image in the first place, display devices do not generally use hexagonal lattices. Therefore the converted image must be returned to a form that can be sent to an output device (whether a monitor, a printer or some other entity) with the resultant display appearing in natural proportions and scale. The exact nature of this conversion depends on the indexing method used; it could be a simple reversal of the original conversion process, or something considerably more involved.
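As a rough illustration of the addressing, storage and display points above, here is a minimal sketch of one common scheme, axial coordinates (the scheme and all names are my own illustrative choices, not from the source):

```python
import math

# Axial coordinates (q, r): each hexagon is addressed by two integers, much
# like (x, y) on a square lattice. The six neighbours are fixed offsets.
HEX_NEIGHBOURS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def neighbours(q: int, r: int):
    """Index the six hexagons adjacent to (q, r)."""
    return [(q + dq, r + dr) for dq, dr in HEX_NEIGHBOURS]

def hex_centre(q: int, r: int, size: float = 1.0):
    """Image display step: map hex (q, r) to Cartesian coordinates so the
    lattice can be drawn on an ordinary square-pixel output device
    (standard pointy-top layout formulas)."""
    x = size * math.sqrt(3) * (q + r / 2)
    y = size * 1.5 * r
    return x, y

# Storage: a dict keyed by (q, r) holds the lattice; simple and direct,
# at the cost of more overhead than a packed rectangular array.
image = {(q, r): 0.0 for q in range(4) for r in range(4)}
```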
Issues with Hexagonal Coordinate Systems
There are some problems with hexagonal coordinate systems, however. One issue is that people are very used to the traditional square lattice.
Reasoning in hexes can seem unnatural and therefore a little difficult. While it could be argued that people can become used to it if they have to, it is still the case that they will be naturally inclined towards reasoning with the traditional Cartesian coordinate system by default, with hexagonal systems merely a secondary choice.
The lack of input devices that map onto hexagonal lattices, and the lack of output devices that display as such is also an obstacle:
The necessity of converting from squares to hexagons and back again detracts from the usefulness of operating on hexagonal lattices.
As such lattices are denser than square lattices of the same apparent size (see the density sketch below), unless images are fed in at a deliberately higher resolution than is to be operated on, converted images will have to interpolate some pixel locations (which is generally less desirable than having every pixel provided directly by the source).
The conversion back to a square lattice collapses some pixel locations into one another, which results in a loss of apparent detail (and can produce a lower-quality image than the one originally fed in).
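To put a number on "denser": at equal nearest-neighbour spacing, a hexagonal lattice packs roughly 15% more sample points into the same area (a quick sketch, using the standard lattice-density formulas):

```python
import math

s = 1.0                                  # nearest-neighbour spacing
density_square = 1 / s**2                # one sample per s x s cell
density_hex = 2 / (math.sqrt(3) * s**2)  # one sample per hexagonal cell
print(density_hex / density_square)      # -> 1.1547... (about 15% denser)
```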
Anyone seeking to use hexagonal coordinate systems in their own vision work should first determine whether these problems are outweighed by the inherent advantages of operating with hexagons.
Source Hexagonal Coordinate Systems
Has any other shape or layout been tried?
The XO-1 display provides one color for each pixel. The colors align along diagonals that run from upper right to lower left. To reduce the color artifacts caused by this pixel geometry, the color component of the image is blurred by the display controller as the image is sent to the screen.
Comparison of the XO-1 display (left) with a typical liquid crystal display (LCD). The images show 1×1 mm of each screen. A typical LCD addresses groups of 3 locations as pixels. The OLPC XO LCD addresses each location as a separate pixel:
Source OLPC XO
Other displays (especially OLEDs) employ different layouts, such as PenTile:
The layout consists of a quincunx comprising two red subpixels, two green subpixels, and one central blue subpixel in each unit cell.
It was inspired by biomimicry of the human retina which has nearly equal numbers of L and M type cone cells, but significantly fewer S cones. As the S cones are primarily responsible for perceiving blue colors, which do not appreciably affect the perception of luminance, reducing the number of blue subpixels with respect to the red and green subpixels in a display does not reduce the image quality.
This layout is specifically designed to work with, and be dependent upon, subpixel rendering that uses only one and a quarter subpixels per pixel, on average, to render an image. That is, any given input pixel is mapped to either a red-centered logical pixel or a green-centered logical pixel.
Source PenTile matrix family
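As a rough illustration of that mapping (the checkerboard assignment and all names below are my own assumptions, not a description of any real display driver):

```python
# Toy sketch of the idea above: each input pixel maps to a logical pixel
# centred on a red or a green subpixel; the blue subpixel is shared.
def logical_pixel(x: int, y: int) -> str:
    """Assign input pixel (x, y) to a red- or green-centred logical pixel
    (a simple checkerboard assignment, assumed here for illustration)."""
    return "red-centred" if (x + y) % 2 == 0 else "green-centred"

# The "one and a quarter subpixels per pixel" figure follows from the
# quoted unit cell: 2 red + 2 green + 1 shared blue = 5 subpixels
# serving 4 logical pixels.
print((2 + 2 + 1) / 4)  # -> 1.25
```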
Simple Definition of pixel
Any one of the very small dots that together form the picture on a television screen, computer monitor, etc.
Source http://www.merriam-webster.com/dictionary/pixel
Pixel
In digital imaging, a pixel, pel, or picture element is a physical point in a raster image, or the smallest addressable element in an all points addressable display device; so it is the smallest controllable element of a picture represented on the screen.
...
A pixel does not need to be rendered as a small square. The accompanying image (not reproduced here) shows alternative ways of reconstructing an image from a set of pixel values, using dots, lines, or smooth filtering.
Source Pixel
Pixel aspect ratio
Most digital imaging systems display an image as a grid of tiny, square pixels. However, some imaging systems, especially those that must be compatible with standard-definition television motion pictures, display an image as a grid of rectangular pixels, in which the pixel width and height are different. Pixel Aspect Ratio describes this difference.
Source Pixel aspect ratio
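As a concrete example (the numbers are the standard NTSC DV case; the code itself is just an illustrative sketch): a 720×480 frame stored with a PAR of 10:11 must be shown narrower than its storage width to appear in correct proportions:

```python
# Sketch: mapping a storage width of non-square pixels to a display width
# of square pixels using the pixel aspect ratio (PAR).
def display_width(storage_width: int, par: float) -> int:
    return round(storage_width * par)

# NTSC DV 4:3: 720x480 storage, PAR = 10/11 (pixels are taller than wide).
print(display_width(720, 10 / 11))  # -> 655 square display pixels wide
```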
A Pixel is Not A Little Square!
A pixel is a point sample. It exists only at a point.
For a color picture, a pixel might actually contain three samples, one for each primary color contributing to the picture at the sampling point. We can still think of this as a point sample of a color. But we cannot think of a pixel as a square or anything other than a point.
There are cases where the contributions to a pixel can be modeled, in a low order way, by a little square, but not ever the pixel itself.
Source A Pixel is Not A Little Square! (Microsoft Technical Memo 6, Alvy Ray Smith, July 17, 1995)
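To make the point-sample view concrete, here is a small one-dimensional sketch (my own illustration, not from the memo): the same point samples can be reconstructed with different filters, of which the "little square" (box filter) is only one choice:

```python
# Sketch: a pixel as a point sample. The same samples can be reconstructed
# with different filters; the "little square" is just one (crude) choice.
samples = [0.0, 1.0, 0.5, 0.8]  # point samples taken at x = 0, 1, 2, 3

def reconstruct_box(x: float) -> float:
    """Box filter: value of the nearest sample ('little square' model)."""
    i = min(max(round(x), 0), len(samples) - 1)
    return samples[i]

def reconstruct_tent(x: float) -> float:
    """Tent filter: linear interpolation between neighbouring samples."""
    i = min(max(int(x), 0), len(samples) - 2)
    t = x - i
    return (1 - t) * samples[i] + t * samples[i + 1]

print(reconstruct_box(1.4))   # -> 1.0 (nearest sample)
print(reconstruct_tent(1.4))  # -> 0.8 (smooth blend of 1.0 and 0.5)
```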
I would like to offer an alternative to David Postill's well-thought-out answer. In it, he approached the question of pixels being square, just as the title suggested. However, he made a very insightful comment:
Some would argue that they are never square ("A pixel is a point sample. It exists only at a point.").
This position can actually spawn a whole different answer. Instead of focusing on why each pixel is a square (or not), it can focus on why we tend to organize these point samples into rectangular grids. It actually wasn't always that way!
To make this argument, we're going to play back and forth between treating an image as abstract data (such as a grid of points), and the implementation thereof in hardware. Sometimes one view is more meaningful than the other.
To start, let's go quite far back. Traditional film photography had no "grid" at all, which is one reason why its pictures looked so crisp compared to modern digital ones. Instead, it had a "grain": a random distribution of crystals on the film, roughly uniform but not a nice rectilinear array. The organization of these grains arose from the chemistry of the film production process. As a result, film really didn't have a "direction" to it; it was just a 2d spattering of information.
Fast forward to the TV, specifically the old scanning CRTs. CRTs needed something different from photos: they needed to represent their content as data, and in particular as data that could stream, in analog, over a wire (typically as a continuously changing voltage). The photo was 2d, but we needed to turn it into a 1d structure so that it could vary in only one dimension (time). The solution was to slice the image up into lines (not pixels!). The image was encoded line by line; each line was an analog stream of data, not a digital sampling, but the lines were separated from each other. Thus, the data was discrete in the vertical direction but continuous in the horizontal direction.
TVs had to render this data using physical phosphors, and a color TV required a grid to divide them into pixels. Each TV could do this differently in the horizontal direction, offering more pixels or fewer, but they all had to have the same number of lines. In theory, they could have offset every other row of pixels, exactly as you suggest. In practice, however, this wasn't needed. In fact, they went even further: it was quickly realized that the human eye handled movement in a way that allowed them to send only half the image every frame! On one frame they'd send the odd-numbered lines, and on the next frame they'd send the even-numbered lines, and stitch them together.
Since that time, digitizing these interlaced images has been a bit of a trick. If I have a 480-line image, I actually only have half the data in each frame due to interlacing. The result is very visible when something moves fast across the screen: each line is temporally shifted one field from the next, creating horizontal streaks on fast-moving objects. I mention this because it's rather amusing: your suggestion offsets every other row in the grid by half a pixel to the right, while interlacing shifts every other row in the grid by half in time!
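As a small illustration (the names and toy data are mine, not from the answer), "weaving" two interlaced fields back into one frame looks like this, and it is exactly the temporal mismatch between the fields that produces the combing streaks:

```python
# Sketch: interlacing splits a frame into two half-height fields captured at
# different times; "weaving" them back together interleaves the lines.
frame_t0 = [f"t0 line {i}" for i in range(6)]  # scene at time t0
frame_t1 = [f"t1 line {i}" for i in range(6)]  # scene one field later

odd_field = frame_t0[0::2]   # lines 0, 2, 4 transmitted first
even_field = frame_t1[1::2]  # lines 1, 3, 5 transmitted next

def weave(odd, even):
    """Rebuild a full frame by interleaving the two fields."""
    frame = []
    for o, e in zip(odd, even):
        frame.extend([o, e])
    return frame

# Adjacent lines now come from different instants -> combing on motion.
print(weave(odd_field, even_field))
```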
Frankly, it is easier to make these nice rectangular grids for things, and with no technical reason to do better, the layout stuck. Then we hit the computer era. Computers needed to generate these video signals, but they had no analog capability to write out a continuous line. The solution was natural: the data was split into pixels. Now the data was discrete in both the vertical and the horizontal directions. All that was left was to pick how to make the grid.
Making a rectangular grid was extremely natural. First off, every TV out there was already doing it. Second, the math for drawing lines on a rectangular grid is much simpler than on a hexagonal one. You might say, "but you can draw smooth lines in 3 directions on a hexagonal grid, and only 2 on a rectangular one." However, rectangular grids make it easy to draw both horizontal and vertical lines, while hexagonal grids can only be made to draw one or the other. In that era, few people were using hexagonal shapes for any of their non-computing efforts (rectangular paper, rectangular doors, rectangular houses...). The ability to draw smooth horizontal and vertical lines far outstripped the value of smoother full-color imagery, especially given that the first displays were monochrome and it would be a long time before smoothness of imagery played a major part in anyone's thinking.
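For a sense of just how cheap line drawing is on a rectangular grid, here is a minimal sketch of the classic Bresenham algorithm (a standard technique, not something from this answer), which needs nothing but integer additions and comparisons:

```python
# Sketch: Bresenham's line algorithm on a rectangular grid.
def bresenham(x0: int, y0: int, x1: int, y1: int):
    """Yield the grid cells of a line from (x0, y0) to (x1, y1)."""
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy  # running error between the ideal line and the grid
    while True:
        yield (x0, y0)
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:   # step horizontally
            err += dy
            x0 += sx
        if e2 <= dx:   # step vertically
            err += dx
            y0 += sy

print(list(bresenham(0, 0, 5, 2)))
# -> [(0, 0), (1, 0), (2, 1), (3, 1), (4, 1), (5, 2)]
```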
From there, you have a very strong precedent for a rectangular grid. The graphics hardware supported what the software was doing (rectangular grids), and the software targeted the hardware (rectangular grids). In theory some hardware might have tried to make a hexagonal grid, but the software just didn't reward it, and nobody wanted to pay for twice as much hardware!
This fast-forwards us to today. We still want nice smooth horizontal and vertical lines, but with high-end "retina" displays, that's getting easier and easier. However, developers are still trained to think in terms of the old rectangular grid. We are seeing some new APIs support "logical coordinates" and do anti-aliasing to make it seem like there's a full continuous 2d space to play with rather than a grid of rigid 2d pixels, but it's slow going. Eventually, we might see hexagonal grids.
We actually do see them, just not on screens. In print, it is very common to use a hexagonal grid. The human eye accepts a hexagonal grid much more readily than a rectangular one. It has to do with the way lines "alias" in the two systems: hexagonal grids alias less harshly, because when a hex grid needs to step one row up or down, it can do so smoothly over a diagonal transition, whereas a rectangular grid has to skip, creating a very clear discontinuity.