What is data oriented design?
I was reading this article, and this guy goes on talking about how everyone can greatly benefit from mixing data-oriented design in with OOP. He doesn't show any code samples, however.
I googled this and couldn't find any real information on what it is, let alone any code samples. Is anyone familiar with this term who can provide an example? Is it maybe a different name for something else?
First of all, don't confuse this with data-driven design.
My understanding of Data-Oriented Design is that it is about organizing your data for efficient processing. Especially with respect to cache misses etc. Data-Driven Design on the other hand is about letting data control a lot of the behavior of your program (described very well by Andrew Keith's answer).
Say you have ball objects in your application with properties such as color, radius, bounciness, position, etc.
Object Oriented Approach
In OOP you would describe balls like this:
class Ball {
    Point position;
    Color color;
    double radius;
    void draw();
};
And then you would create a collection of balls like this:
vector<Ball> balls;
Data-Oriented Approach
In Data Oriented Design, however, you are more likely to write the code like this:
class Balls {
    vector<Point> position;
    vector<Color> color;
    vector<double> radius;
    void draw();
};
As you can see there is no single unit representing one Ball anymore. Ball objects only exist implicitly.
This can have many advantages, performance-wise. Usually, we want to do operations on many balls at the same time, and the hardware works best on large, contiguous chunks of memory.
Secondly, you might do operations that affect only some of the properties of a ball. For example, if you combine the colors of all the balls in various ways, then you want your cache to contain only color information. However, when all ball properties are stored in one unit, you will pull in all the other properties of a ball as well, even though you don't need them.
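As a rough sketch of that second point (the Color fields and the fadeToBlack pass here are made up for illustration), an operation that only touches colors walks a tightly packed color array and never drags positions or radii through the cache:
#include <cstdint>
#include <vector>

struct Color { std::uint8_t r, g, b, a; };

struct Balls {
    std::vector<Color> color;
    // position, radius, ... live in their own vectors
};

// Only the color array is read and written, so the cache fills up
// with colors and nothing else.
void fadeToBlack(Balls& balls, float factor) {
    for (Color& c : balls.color) {
        c.r = static_cast<std::uint8_t>(c.r * factor);
        c.g = static_cast<std::uint8_t>(c.g * factor);
        c.b = static_cast<std::uint8_t>(c.b * factor);
    }
}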
Cache Usage Example
Say each ball takes up 64 bytes and a Point takes 4 bytes. A cache slot takes, say, 64 bytes as well. If I want to update the position of 10 balls, I have to pull 10 x 64 = 640 bytes of memory into the cache and get 10 cache misses. If, however, I can work on the positions of the balls as separate units, that will only take 4 x 10 = 40 bytes. That fits in one cache fetch, so we only get 1 cache miss to update all 10 balls. These numbers are arbitrary; a real cache block may well be bigger.
But the example illustrates how memory layout can have a severe effect on cache hits and thus performance. This will only increase in importance as the gap between CPU and RAM speed widens.
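To make the access pattern concrete, here is a sketch of the position update against both layouts (the struct sizes are illustrative and won't match the 64/4-byte figures above exactly):
#include <vector>

struct Point { float x, y; };

// Array-of-structs: each iteration drags a whole Ball through the
// cache just to touch its small position member.
struct Ball { Point position; char otherData[56]; };

void moveAoS(std::vector<Ball>& balls, Point velocity) {
    for (Ball& b : balls) {
        b.position.x += velocity.x;
        b.position.y += velocity.y;
    }
}

// Struct-of-arrays: the positions are packed back to back, so one
// cache line serves several iterations in a row.
void moveSoA(std::vector<Point>& positions, Point velocity) {
    for (Point& p : positions) {
        p.x += velocity.x;
        p.y += velocity.y;
    }
}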
How to lay out the memory
In my ball example, I simplified the issue a lot, because in any normal app you will usually access several variables together. For example, position and radius will probably be used together frequently. Then your structure should be:
class Body {
    Point position;
    double radius;
};

class Balls {
    vector<Body> bodies;
    vector<Color> color;
    void draw();
};
The reason you should do this is that if data used together are placed in separate arrays, there is a risk that they will compete for the same slots in the cache. Thus loading one will throw out the other.
So compared to Object-Oriented programming, the classes you end up making are not related to the entities in your mental model of the problem. Since data is lumped together based on data usage, you won't always have sensible names to give your classes in Data-Oriented Design.
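A small sketch of how that grouping pays off (the Point and Color definitions and the ground-overlap check are placeholders, not part of the example above): a pass that consumes position and radius together walks only the bodies array, and the colors never come along for the ride.
#include <vector>

struct Point { float x, y; };
struct Color { unsigned char r, g, b, a; };

struct Body {
    Point position;
    double radius;
};

struct Balls {
    std::vector<Body> bodies;   // hot data that is used together
    std::vector<Color> color;   // cold data in its own array
    void draw();
};

// Position and radius are consumed together, so keeping them in one
// Body struct means each cache line holds exactly what this loop needs.
bool anyBallTouchesGround(const Balls& balls, float groundY) {
    for (const Body& b : balls.bodies) {
        if (b.position.y - b.radius <= groundY) {
            return true;
        }
    }
    return false;
}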
Relation to relational databases
The thinking behind Data-Oriented Design is very similar to how you think about relational databases. Optimizing a relational database can also involve using the cache more efficiently, although in this case, the cache is not CPU cache but pages in memory. A good database designer will also likely split out infrequently accessed data into a separate table rather than creating a table with a huge number of columns where only a few of the columns are ever used. He might also choose to denormalize some of the tables so that data don't have to be accessed from multiple locations on disk. Just like with Data-Oriented Design these choices are made by looking at what the data access patterns are and where the performance bottleneck is.
Mike Acton gave a public talk about data-oriented design recently.
My basic summary of it would be: if you want performance, then think about data flow, find the storage layer that is most likely to screw with you and optimize for it hard. Mike is focusing on L2 cache misses, because he's doing realtime, but I imagine the same thing applies to databases (disk reads) and even the Web (HTTP requests). It's a useful way of doing systems programming, I think.
Note that it doesn't absolve you from thinking about algorithms and time complexity; it just focuses your attention on figuring out the most expensive type of operation, which you then must target with your mad CS skills.
I just want to point out that Noel is talking about some of the specific needs we face in game development. I suppose other sectors doing real-time soft simulation would benefit from this, but it is unlikely to be a technique that will show noticeable improvement in general business applications. This setup is about ensuring that every last bit of performance is squeezed out of the underlying hardware.
If you want to take advantage of modern processor architecture, you need to lay out your data in memory in a certain way. CPUs are really good at processing simple types that are laid out sequentially in memory. Any other layout has a much higher processing cost.
In the object-oriented approach, you always think about one instance first and then extend it to several instances by grouping objects into collections. But from the hardware's point of view, this comes with an added cost.
In the data-oriented approach, you don't have an "instance" in the same way you do in object-oriented programming. Your instance can have an identifier, similar to data in relational databases, but apart from that, the data related to your instance can be split over several tables (implemented as vectors) to allow efficient processing.
An example: imagine you have class Student { int id; std::string name; float average; bool graduated; }. In the OOP case, you would put all your students in a single vector.
In data-oriented design, you will first ask yourself what kind of processing you want to do on this data. Say you want to calculate the average mark of all students that haven't graduated yet. So you will create one table containing only the students that have graduated and another for those that haven't. You won't keep the student name in that table, since it is not used for this processing, but you will keep the student ID and the average mark.
Now calculating the average mark for non-graduated students means iterating through the non-graduated table and performing the calculation. Since the average marks are neighboring in memory, the compiler can vectorize the loop with SIMD and the CPU processes the data very efficiently. And since we never have to touch the bool graduated or the name during the calculation, we don't pollute the cache with data we never read.
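A rough sketch of that layout (the struct and function names here are invented for illustration; the name column would live in a separate table keyed by id):
#include <numeric>
#include <vector>

// One table per access pattern: only the fields the calculation needs.
struct NonGraduatedStudents {
    std::vector<int>   id;
    std::vector<float> average;
};

// The averages sit back to back in memory, so the loop streams through
// the cache and is easy for the compiler to vectorize.
float overallAverage(const NonGraduatedStudents& students) {
    if (students.average.empty()) return 0.0f;
    float sum = std::accumulate(students.average.begin(),
                                students.average.end(), 0.0f);
    return sum / static_cast<float>(students.average.size());
}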
This sounds nice in theory, but I have never done this kind of development on a real-world project. If anybody has any experience, please contact me; I have many questions.