Definitely Relational. Unlimited flexibility and expansion.

Two corrections, both in concept and application, followed by an elevation.

Correction

  1. It is not "filtering out the unneeded data"; it is selecting only the needed data. Of course, if you have an Index that supports the columns identified in the WHERE clause, it is very fast, and the query does not depend on the size of the table (grabbing 1,000 rows from a 16-billion-row table is effectively instantaneous).

  2. Your table has one serious impediment. Given your description, the actual PK is (Device, Metric, DateTime). (Please don't call it TimeStamp; that means something else, but that is a minor issue.) The uniqueness of the row is identified by:

       (Device, Metric, DateTime)
    
    • The Id column does nothing; it is completely redundant.

      • An Id column is never a Key (duplicate rows, which are prohibited in a Relational database, must be prevented by other means).
      • The Id column requires an additional Index, which obviously impedes the speed of INSERT/DELETE, and adds to the disk space used.

      • You can get rid of it. Please.
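A minimal sketch of the corrected table, using SQLite for illustration (the Value column and the column types are my assumptions; only (Device, Metric, DateTime) comes from the answer). The compound PK alone enforces row uniqueness, so no surrogate Id column and no second index are needed:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# The compound PK (Device, Metric, DateTime) identifies the row directly;
# no Id column, therefore only one index to maintain on INSERT/DELETE.
conn.execute("""
    CREATE TABLE Reading (
        Device    TEXT  NOT NULL,
        Metric    TEXT  NOT NULL,
        DateTime  TEXT  NOT NULL,
        Value     REAL  NOT NULL,   -- hypothetical payload column
        PRIMARY KEY (Device, Metric, DateTime)
    )
""")

conn.execute(
    "INSERT INTO Reading VALUES ('srv01', 'cpu_busy', '2024-01-01 00:00:01', 42.0)")

# A duplicate (Device, Metric, DateTime) is rejected by the PK itself --
# duplicate rows are prevented "by other means" than an Id column.
try:
    conn.execute(
        "INSERT INTO Reading VALUES ('srv01', 'cpu_busy', '2024-01-01 00:00:01', 99.0)")
except sqlite3.IntegrityError as e:
    print("duplicate rejected:", e)
```

A query on the leading PK columns (e.g. `WHERE Device = ? AND Metric = ?`) is then served by that single index.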

Elevation

  1. Now that you have removed the impediment (you may not have recognised it), your table is in Sixth Normal Form. Very high speed, with just one Index on the PK. For understanding, read this answer from the What is Sixth Normal Form? heading onwards.

    • (I have one index only, not three; on the Non-SQLs you may need three indices).

    • I have the exact same table (without the Id "key", of course). I have an additional column Server. I support multiple customers remotely.

      (Server, Device, Metric, DateTime)

    The table can be used to Pivot the data (i.e. Devices across the top and Metrics down the side, or the reverse) using exactly the same SQL code (yes, just switch the cells). I use the table to produce an unlimited variety of graphs and charts for customers re their server performance.

    • Monitor Statistics Data Model.
      (Too large for inline display; some browsers cannot load it inline; click the link. Also, that is the obsolete demo version; for obvious reasons, I cannot show you the commercial product DM.)

    • It allows me to produce Charts Like This, six keystrokes after receiving a raw monitoring stats file from the customer, using a single SELECT command. Notice the mix-and-match; OS and server on the same chart; a variety of Pivots. Of course, there is no limit to the number of stats matrices, and thus the charts. (Used with the customer's kind permission.)

    • Readers who are unfamiliar with the Standard for Modelling Relational Databases may find the IDEF1X Notation helpful.
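The Pivot described above can be sketched with conditional aggregation; here is a toy version in SQLite (table and column names are my assumptions, following the (Server, Device, Metric, DateTime) key from the answer). Swapping Metric and Device between the CASE branches and the GROUP BY yields the opposite pivot from the same code shape:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Reading (
        Server   TEXT NOT NULL,
        Device   TEXT NOT NULL,
        Metric   TEXT NOT NULL,
        DateTime TEXT NOT NULL,
        Value    REAL NOT NULL,
        PRIMARY KEY (Server, Device, Metric, DateTime)
    )
""")
conn.executemany("INSERT INTO Reading VALUES (?,?,?,?,?)", [
    ("s1", "disk0", "busy_pct",  "2024-01-01 00:00", 12.0),
    ("s1", "disk0", "queue_len", "2024-01-01 00:00",  1.0),
    ("s1", "disk1", "busy_pct",  "2024-01-01 00:00", 48.0),
    ("s1", "disk1", "queue_len", "2024-01-01 00:00",  3.0),
])

# Devices across the top, Metrics down the side: a single SELECT with
# one CASE branch per device column.
rows = conn.execute("""
    SELECT Metric,
           MAX(CASE WHEN Device = 'disk0' THEN Value END) AS disk0,
           MAX(CASE WHEN Device = 'disk1' THEN Value END) AS disk1
      FROM Reading
     GROUP BY Metric
     ORDER BY Metric
""").fetchall()
for r in rows:
    print(r)
```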

One More Thing

Last but not least, SQL is an IEC/ISO/ANSI Standard. The freeware is actually Non-SQL; it is fraudulent to use the term SQL when they do not provide the Standard. They may provide "extras", but they lack the basics.


I found the above answers very interesting. Let me add a couple more considerations here.

1) Data aging

Time-series management usually needs aging policies. A typical scenario (e.g. monitoring server CPU) requires storing:

  • 1-sec raw samples for a short period (e.g. for 24 hours)

  • 5-min detail aggregate samples for a medium period (e.g. 1 week)

  • 1-hour aggregate samples beyond that (e.g. up to 1 year)

Although relational models can certainly manage this appropriately (my company implemented massive centralized databases for some large customers with tens of thousands of data series), the new breed of data stores adds interesting functionality worth exploring, such as:

  • automated data purging (see Redis' EXPIRE command)

  • multidimensional aggregations (e.g. map-reduce jobs à la Splunk)
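The aging tiers above can be sketched as a roll-up plus a purge; this is a toy version in SQLite (table names, a 300-second bucket, and the retention cut-off are all my assumptions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_sample (device TEXT, metric TEXT, ts INTEGER, value REAL,
                             PRIMARY KEY (device, metric, ts));
    CREATE TABLE agg_5min  (device TEXT, metric TEXT, ts INTEGER, avg_value REAL,
                            PRIMARY KEY (device, metric, ts));
""")

# 10 minutes of hypothetical 1-sec CPU samples
conn.executemany("INSERT INTO raw_sample VALUES (?,?,?,?)",
                 [("srv01", "cpu", t, float(t % 100)) for t in range(600)])

# Roll the 1-sec samples up into 5-min (300 s) buckets...
conn.execute("""
    INSERT INTO agg_5min
    SELECT device, metric, (ts / 300) * 300, AVG(value)
      FROM raw_sample
     GROUP BY device, metric, ts / 300
""")

# ...then purge raw rows older than the retention cut-off
# (a hand-rolled analogue of the automated purging Redis' EXPIRE gives you).
retention_cutoff = 300
conn.execute("DELETE FROM raw_sample WHERE ts < ?", (retention_cutoff,))
```

In production this would run on a schedule; the point is that the RDBMS needs explicit jobs for what some of the newer stores provide as built-ins.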

2) Real-time collection

Even more importantly, some non-relational data stores are inherently distributed and allow for much more efficient real-time (or near-real-time) data collection. This can be a problem for an RDBMS because of the creation of hotspots (managing indexing while inserting into a single table). In the RDBMS space this problem is typically solved by reverting to batch import procedures (we managed it this way in the past), while NoSQL technologies have succeeded at massive real-time collection and aggregation (see Splunk, for example, mentioned in previous replies).
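The batch-import workaround mentioned above amounts to amortising per-statement overhead by loading rows in one transaction; a minimal sketch in SQLite (the schema and row count are my assumptions):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE sample (device TEXT, metric TEXT, ts INTEGER, value REAL,
                         PRIMARY KEY (device, metric, ts))
""")

rows = [("srv01", "cpu", t, 0.0) for t in range(10_000)]

# Row-by-row autocommit inserts pay per-statement commit and index-update
# overhead; a single batched transaction amortises it -- the classic
# "batch import" fix for insert hotspots.
t0 = time.perf_counter()
with conn:  # one transaction for the whole batch
    conn.executemany("INSERT INTO sample VALUES (?,?,?,?)", rows)
elapsed = time.perf_counter() - t0
print(f"batched insert of {len(rows)} rows took {elapsed:.3f}s")
```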


Your data sits in a single table, so relational vs. non-relational is not really the question. Basically, you need to read a lot of sequential data. If you have enough RAM to store a year's worth of data, then nothing beats using Redis/MongoDB etc.

Most NoSQL databases will store your data in the same location on disk, and in compressed form, to avoid multiple disk accesses.

NoSQL does the same thing as creating an index on device id and metric id, but in its own way. With a relational database, even if you do this, the index and the data may be in different places, which means a lot of disk IO.

Tools like Splunk use NoSQL backends to store time-series data and then use map-reduce to create aggregates (which might be what you want later). So, in my opinion, NoSQL is a viable option, as people have already tried it for similar use cases. But will a million rows bring a relational database to a crawl? Maybe not, with decent hardware and proper configuration.
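The map-reduce style of aggregation mentioned above can be illustrated with a toy, single-process sketch in plain Python (the event data is hypothetical; real systems distribute the map and reduce phases across nodes):

```python
from collections import defaultdict

# Hypothetical raw monitoring events: (device, metric, value)
events = [
    ("srv01", "cpu", 40.0), ("srv01", "cpu", 60.0),
    ("srv02", "cpu", 10.0), ("srv02", "mem", 70.0),
]

# Map: emit (key, value) pairs keyed by (device, metric)
mapped = (((device, metric), value) for device, metric, value in events)

# Shuffle: group values by key
groups = defaultdict(list)
for key, value in mapped:
    groups[key].append(value)

# Reduce: aggregate each group (here, an average per device/metric)
aggregates = {key: sum(vs) / len(vs) for key, vs in groups.items()}
print(aggregates)
```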