MySQL: Working With 192 Trillion Records... (Yes, 192 Trillion)

Here's the question...

Considering 192 trillion records, what should my considerations be?

My main concern is speed.

Here's the table...

    CREATE TABLE `ref` (
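  -- note: INTEGER(13) is only a display width; a 4-byte INT holds at most
  -- ~2.1 billion signed values (~4.3 billion unsigned), so an id meant to
  -- reach 192 trillion rows would need to be BIGINT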
  `id` INTEGER(13) NOT NULL AUTO_INCREMENT,
  `rel_id` INTEGER(13) NOT NULL,
  `p1` INTEGER(13) NOT NULL,
  `p2` INTEGER(13) DEFAULT NULL,
  `p3` INTEGER(13) DEFAULT NULL,
  `s` INTEGER(13) NOT NULL,
  `p4` INTEGER(13) DEFAULT NULL,
  `p5` INTEGER(13) DEFAULT NULL,
  `p6` INTEGER(13) DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY (`s`),
  KEY (`rel_id`),
  KEY (`p3`),
  KEY (`p4`)
    );

Here are the queries...

    SELECT id, s FROM ref WHERE rel_id="$rel_id" AND p3="$p3" AND p4="$p4"

    SELECT rel_id, p1, p2, p3, p4, p5, p6 FROM ref WHERE id="$id"

    INSERT INTO ref (rel_id, p1, p2, p3, s, p4, p5, p6)
    VALUES ("$rel_id", "$p1", "$p2", "$p3", "$s", "$p4", "$p5", "$p6")

Here are some notes...

  • The SELECTs will run much more frequently than the INSERTs. However, occasionally I want to add a few hundred records at a time (a batched INSERT sketch follows these notes).
  • Load-wise, there will be nothing for hours, then maybe a few thousand queries all at once.
  • I don't think I can normalize any further (the p values are needed in combination).
  • The database as a whole is very relational.
  • This will be the largest table by far (next largest is about 900k)
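For those occasional batches, I'd use a single multi-row INSERT rather than hundreds of single-row statements (the values below are illustrative):

    INSERT INTO ref (rel_id, p1, p2, p3, s, p4, p5, p6)
    VALUES (1, 2, 3, 4, 5, 6, 7, 8),
           (9, 10, 11, 12, 13, 14, 15, 16);
    -- append one (...) tuple per record, a few hundred per statement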

UPDATE (08/11/2010)

Interestingly, I've been given a second option...

Instead of 192 trillion I could store 2.6 * 10^16 (that's 26 followed by 15 zeros - 26 quadrillion)...

But in this second option I would only need to store one bigint(18) as the index in a table. That's it - just the one column. So I would just be checking for the existence of a value. Occasionally adding records, never deleting them.
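To make that concrete, it would be something like this (the table and column names here are made up):

    CREATE TABLE `num` (
      `val` BIGINT(18) NOT NULL,
      PRIMARY KEY (`val`)
    );

    -- existence check: returns a row if and only if "$val" is present
    SELECT 1 FROM num WHERE val = "$val" LIMIT 1;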

So that makes me think there must be a better solution than MySQL for simply storing numbers...

Given this second option, should I take it or stick with the first?

[edit] Just got news of some testing that's been done - 100 million rows with this setup returns the query in 0.0004 seconds [/edit]


pQd's estimate of 7PB seems reasonable, and that's a lot of data for an RDBMS. I'm not sure I've ever heard of anyone doing 7PB with any shared-disk system, let alone MySQL. Querying this volume of data with any shared-disk system is going to be unusably slow. The fastest SAN hardware maxes out at 20GB/sec even when tuned for large streaming queries. If you can afford SAN hardware of this spec, you can afford something better suited to the job than MySQL.

In fact, I'm struggling to conceive of a scenario where you could have a budget for a disk subsystem of this spec but not for a better DBMS platform. Even using 600GB disks (the largest 15K 'enterprise' drive currently on the market) you're up for something like 12,000 physical disk drives to store 7PB (7PB / 600GB ≈ 11,700 spindles, before any RAID overhead). SATA disks would be cheaper (and with 2TB disks you would need around a third as many), but quite a bit slower.

A SAN of this spec from a major vendor like EMC or Hitachi would run to many millions of dollars. Last time I worked with SAN equipment from a major vendor, the transfer cost of space on an IBM DS8000 was over £10k/TB, not including any capital allowance for the controllers.

You really need a shared-nothing system like Teradata or Netezza for this much data. Sharding a MySQL database might work, but I'd recommend a purpose-built VLDB platform. A shared-nothing system also lets you use much cheaper direct-attach disk on the nodes - take a look at Sun's X4500 (Thumper) platform for one possibility.

You also need to think of your performance requirements.

  • What's an acceptable run time for a query?
  • How often will you query your dataset?
  • Can the majority of the queries be resolved using an index (i.e. are they going to look at a small fraction - say less than 1% - of the data), or do they need to do a full table scan? (The EXPLAIN sketch after this list shows how to check.)
  • How quickly is data going to be loaded into the database?
  • Do your queries need up-to-date data or could you live with a periodically refreshed reporting table?
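On the index question: for the first SELECT in the question, EXPLAIN will show whether MySQL can resolve it from an index alone. A sketch only - the index name and literal values are made up, and the covering behaviour described assumes InnoDB:

    -- composite index matching the WHERE clause; InnoDB stores the primary
    -- key (id) in every secondary index, so this also covers the selected
    -- columns (id, s) and the query becomes index-only
    ALTER TABLE ref ADD KEY idx_rel_p3_p4_s (rel_id, p3, p4, s);

    -- 'Using index' in the Extra column confirms no table rows are read
    EXPLAIN SELECT id, s FROM ref WHERE rel_id = 12345 AND p3 = 1 AND p4 = 2;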

In short, the strongest argument against MySQL is that you would be doing backflips to get decent query performance over 7PB of data, if it is possible at all. This volume of data really puts you into shared-nothing territory to make something that will query it reasonably quickly, and you will probably need a platform that was designed for shared-nothing operation from the outset. The disks alone are going to dwarf the cost of any reasonable DBMS platform.

Note: If you do split your operational and reporting databases you don't necessarily have to use the same DBMS platform for both. Getting fast inserts and sub-second reports from the same 7PB table is going to be a technical challenge at the least.

Given that your comments suggest you can live with some latency in reporting, you might consider separate capture and reporting systems; that way you may not need to keep all 7PB of data in your operational capture system. Consider an operational platform such as Oracle (MySQL may do this with InnoDB) for data capture (again, the cost of the disks alone will dwarf the cost of the DBMS unless you have a lot of users), and a VLDB platform like Teradata, Sybase IQ, RedBrick, Netezza (note: proprietary hardware) or Greenplum for reporting.


Shard it. At this size having one large instance is suicide - think about possible backup restores, tablespace corruption, adding new columns or any other 'housekeeping' process; none of those can be done in a reasonable time at this scale.
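Sharding proper means separate MySQL instances with routing logic in the application. Short of that, MySQL's built-in partitioning at least splits the table into smaller physical pieces - a sketch only, with a partition count picked arbitrarily (note that every unique key, including the primary key, must contain the partitioning column):

    CREATE TABLE `ref` (
      `id`     BIGINT NOT NULL AUTO_INCREMENT,
      `rel_id` BIGINT NOT NULL,
      `s`      BIGINT NOT NULL,
      -- p1..p6 omitted for brevity
      PRIMARY KEY (`id`, `rel_id`)   -- must include the partition key
    )
    PARTITION BY HASH(`rel_id`)
    PARTITIONS 64;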

Simple back-of-the-envelope calculation - assuming 32-bit integers for all columns except the 64-bit id, and with no indexes included:

8 × 4B + 8B = 40B per row [and this is very optimistic]

192 trillion rows × 40B each gives us almost 7PB

Maybe you can rethink the whole thing: summarize information for quick reporting, and store compressed records for given time intervals for when someone needs to dig into deeper detail.
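A minimal sketch of that idea - the summary table and its refresh query are hypothetical, since the question doesn't say what the reports look like:

    -- small aggregate table for the frequent SELECTs to hit instead of
    -- the 192-trillion-row table; rebuilt periodically from the raw data
    CREATE TABLE `ref_summary` (
      `rel_id`    BIGINT NOT NULL,
      `row_count` BIGINT NOT NULL,
      PRIMARY KEY (`rel_id`)
    );

    -- periodic refresh (REPLACE overwrites rows whose rel_id already exists)
    REPLACE INTO ref_summary
    SELECT rel_id, COUNT(*) FROM ref GROUP BY rel_id;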

Questions to answer:

  • What is the acceptable downtime in case the system crashes or gets rebooted?
  • What is the acceptable downtime when you need to restore a backup or pull the server out of production for planned maintenance?
  • How often, and to where, do you want to back up?

Random links - speed of inserts:

  • http://highscalability.com/blog/2010/5/20/strategy-scale-writes-to-734-million-records-per-day-using-t.html
  • https://stackoverflow.com/questions/654594/