What's the difference between Bilinear, Trilinear, and Anisotropic texture filtering?
The order you have them listed in (Bilinear -> Trilinear -> Anisotropic) is the correct order from lowest to highest image quality, and also from least to most demanding in processing power.
In the simplest terms, moving from bilinear to trilinear avoids visible jumps where the texture switches between sizes (i.e., while walking towards a wall, the texture won't appear to change abruptly at certain distances as you approach it). Moving from trilinear to anisotropic makes textures on surfaces that stretch away from you (floors, roads, walls seen at an angle) look sharper than they otherwise would.
A bit more detailed explanation follows, but note that this is a very technical topic, and a full treatment is probably beyond the scope of Gaming.StackExchange.
The core problem is that artists working on 3D textures create a set of 2D pictures of fixed sizes. These pictures are then "painted" onto 3D objects. However, once that texture is applied to a 3D object, it can be rotated and viewed from many different angles and distances. Texture filtering attempts to map the discrete steps of the art available to the continuous domain of how you can view it.
For instance, an artist might create a 64x64 image to be used as the texture for a simple object. However, when you view that object in the game world, you get really close to it, and it fills your entire screen, which might be several thousand pixels wide. Now the engine has to take a simple, low-res 2D picture and make it much, much larger without sacrificing quality.
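To make the problem concrete, here is a minimal sketch (in Python, with a hypothetical `texture` array) of what happens with no filtering at all, i.e. nearest-neighbour sampling, which just picks the closest texel and produces the blocky look that the techniques below are designed to avoid:

```python
# Minimal sketch: nearest-neighbour sampling of a small texture.
# "texture" is a hypothetical 2D list of colour values, e.g. 64x64.

def sample_nearest(texture, u, v):
    """Return the texel closest to texture coordinates (u, v) in [0, 1]."""
    height = len(texture)
    width = len(texture[0])
    # Map the continuous (u, v) coordinates to a discrete texel index.
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return texture[y][x]

# Blowing a 64x64 texture up to a 2000-pixel-wide wall repeats each texel
# over a block of roughly 31x31 screen pixels, which is why the result
# looks blocky without filtering.
```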
Bilinear and Trilinear are "isotropic" filtering techniques for mipmapping, in contrast to "anisotropic." Wikipedia has a decent article on this subject, but I'll attempt to summarize.
Essentially, as you zoom in on a texture (i.e., by getting the player/camera close to it), each pixel of the texture (a "texel") has to cover multiple pixels of the output image. Bilinear filtering computes each output pixel's colour by taking the four texels nearest to the sample point and blending them, weighted by how close each one is.
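In (very simplified) code, bilinear filtering looks roughly like this. It is only a sketch for illustration: `texture` is assumed to be a plain 2D list of single-channel values, and a real GPU does this in hardware, per colour channel, with proper wrap/clamp modes:

```python
import math

def sample_bilinear(texture, u, v):
    """Blend the four texels surrounding (u, v), weighted by distance."""
    height = len(texture)
    width = len(texture[0])
    # Continuous texel-space position (offset by 0.5 so texel centres align).
    x = u * width - 0.5
    y = v * height - 0.5
    x0, y0 = math.floor(x), math.floor(y)
    # Fractional distance from the lower texel centre: the blend weights.
    fx, fy = x - x0, y - y0
    # Clamp indices so samples near the edge of the texture stay in range.
    x0c, x1c = max(x0, 0), min(x0 + 1, width - 1)
    y0c, y1c = max(y0, 0), min(y0 + 1, height - 1)
    # Interpolate horizontally along the two rows of texels...
    top = texture[y0c][x0c] * (1 - fx) + texture[y0c][x1c] * fx
    bottom = texture[y1c][x0c] * (1 - fx) + texture[y1c][x1c] * fx
    # ...then vertically between those two results (hence "bi-linear").
    return top * (1 - fy) + bottom * fy
```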
Trilinear filtering takes into account the fact that textures are usually stored at several sizes, and which size gets used depends on how far you are from the textured object. Our artist from earlier might make several different texture sizes, so that objects close to the camera can have higher-resolution textures applied to them, making them look better. In addition to interpolating between the texels of the current texture size, trilinear filtering also interpolates between the two nearest texture sizes, so the switch from one size to the next is a smooth blend rather than a visible line.
(A "mipmap" is this set of progressively smaller copies of an image; see this Wikipedia article.)
Anisotropic filtering takes into account that, because of the camera angle, the patch of texture covered by a single screen pixel is often not square: on a surface viewed nearly edge-on it becomes a long, stretched shape. This filter method does some additional math to work out that stretched footprint and takes multiple samples along it, instead of the single square-ish sample the isotropic filters use.
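The sketch below shows the general idea only, not real hardware behaviour: take several trilinear samples spread along the stretched direction of the pixel's footprint and average them. The `(du, dv)` direction and the sample count are assumptions passed in as parameters here; an actual GPU works them out from screen-space derivatives of the texture coordinates.

```python
def sample_anisotropic(mipmaps, u, v, du, dv, level, num_samples=4):
    """Average several trilinear samples along the stretched footprint.

    (du, dv) is the direction/extent of the footprint's long axis in
    texture space -- an assumption in this sketch; a GPU derives it from
    how the texture coordinates change across the screen pixel.
    """
    total = 0.0
    for i in range(num_samples):
        # Spread samples evenly from one end of the footprint to the other.
        t = (i + 0.5) / num_samples - 0.5
        total += sample_trilinear(mipmaps, u + t * du, v + t * dv, level)
    return total / num_samples
```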
Again, Wikipedia has a good illustrated example of this in the anisotropic filtering article. You can also see the difference for yourself by playing with the graphics settings in Google Earth.