Incremental JSON Parsing in C#

Solution 1:

I have to admit I'm not as familiar with the JavaScriptSerializer, but if you're open to using JSON.net, it has a JsonReader that acts much like a DataReader.

using (var jsonReader = new JsonTextReader(myTextReader))
{
    while (jsonReader.Read())
    {
        //evaluate the current token and whether it's the property name you want
        if (jsonReader.TokenType == JsonToken.PropertyName && (string)jsonReader.Value == "add")
        {
            //do what you want
        }
        else
        {
            //break out of loop.
        }
    }
}

Solution 2:

Here are the generic and simple methods I use to parse, load and create very large JSON files. The code uses the now pretty much standard JSON.Net library. Unfortunately the documentation isn't very clear on how to do this, but it's not very hard to figure out either.

The code below assumes a scenario where you have a large number of objects that you want to serialize as a JSON array, and vice versa. We want to support very large files whose size is limited only by your storage device (not memory). So when serializing, the method takes an IEnumerable<T>, and when deserializing it returns the same. This way you can process the entire file without being limited by memory.

I've used this code on file sizes of several GBs with reasonable performance.

//These helpers live in a static class (the serialize overloads are extension methods).
//Namespaces used: System, System.Collections.Generic, System.IO, Newtonsoft.Json

//Serialize a sequence of objects as a JSON array into the specified file
public static void SerializeSequenceToJson<T>(this IEnumerable<T> sequence, string fileName)
{
    using (var fileStream = File.CreateText(fileName))
        SerializeSequenceToJson(sequence, fileStream);
}

//Deserialize the specified file into an IEnumerable, assuming it contains an array of JSON objects
public static IEnumerable<T> DeserializeSequenceFromJson<T>(string fileName)
{
    using (var fileStream = File.OpenText(fileName))
        foreach (var responseJson in DeserializeSequenceFromJson<T>(fileStream))
            yield return responseJson;
}

//Utility methods to operate on streams instead of files
public static void SerializeSequenceToJson<T>(this IEnumerable<T> sequence, TextWriter writeStream, Action<T, long> progress = null)
{
    using (var writer = new JsonTextWriter(writeStream))
    {
        var serializer = new JsonSerializer();
        writer.WriteStartArray();
        long index = 0;
        foreach (var item in sequence)
        {
            if (progress != null)
                progress(item, index++);

            serializer.Serialize(writer, item);
        }
        writer.WriteEnd();
    }
}
public static IEnumerable<T> DeserializeSequenceFromJson<T>(TextReader readerStream)
{
    using (var reader = new JsonTextReader(readerStream))
    {
        var serializer = new JsonSerializer();
        if (!reader.Read() || reader.TokenType != JsonToken.StartArray)
            throw new Exception("Expected start of array in the deserialized json string");

        while (reader.Read())
        {
            if (reader.TokenType == JsonToken.EndArray) break;
            var item = serializer.Deserialize<T>(reader);
            yield return item;
        }
    }
}
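
For completeness, a usage sketch (Measurement is a hypothetical item type, and the methods above are assumed to be in a static class that is in scope):

//Measurement is a made-up item type for illustration
var measurements = Enumerable.Range(0, 10_000_000)
    .Select(i => new Measurement { Id = i, Value = i * 0.5 });

//Write the whole sequence as one JSON array without materializing it in memory
measurements.SerializeSequenceToJson("measurements.json");

//Read it back lazily; items are deserialized one at a time as you enumerate
foreach (var m in DeserializeSequenceFromJson<Measurement>("measurements.json"))
    Console.WriteLine(m.Id);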

Solution 3:

If you take a look at Json.NET, it provides a non-caching, forward-only JSON parser that will suit your needs.

See the JsonReader and JsonTextReader classes in the documentation.
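
As a rough sketch of what forward-only reading looks like (the file name is just a placeholder; requires the Newtonsoft.Json, System and System.IO namespaces):

using (var reader = new JsonTextReader(File.OpenText("big.json")))
{
    while (reader.Read())
    {
        //The reader only ever holds the current token; nothing behind it is cached.
        //TokenType is StartObject, PropertyName, String, Integer, EndArray, etc.,
        //and Value carries the property name or scalar value (null for structural tokens).
        Console.WriteLine("{0}\t{1}", reader.TokenType, reader.Value);
    }
}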

Solution 4:

I'm currently in hour 3 of an unknown timespan, watching 160GB of JSON get deserialized into class objects. My memory use has been hanging tight at ~350MB, and when I inspect memory objects it's all stuff the GC can take care of. Here's what I did:

    using (FileStream fs = File.Open("F:\\Data\\mysuperbig150GB.json", FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
    using (StreamReader sr = new StreamReader(fs))
    using (JsonReader reader = new JsonTextReader(sr))
    {
        JsonSerializer serializer = new JsonSerializer();

        MyJsonToClass result = serializer.Deserialize<MyJsonToClass>(reader);
    }

The problem is the deserialization. That 160GB of data is way bigger than what my PC can handle at once.

  1. I grabbed a small snippet of the JSON (which is tough, even just opening a 160GB file) and generated a class structure via json2csharp.

  2. I made a specific class for the big collection in the auto-generated-via-json-tool class structure, and subclassed System.Collections.ObjectModel.ObservableCollection instead of List. They both implement IEnumerable, which I think is all the Newtonsoft JSON deserializer cares about.

  3. I went in and overrode InsertItem, like this:

     protected override void InsertItem(int index, Feature item)
     {
       //do something with the item that just got deserialized
       //stick it in a database, etc.

       //insert, then immediately remove, so the collection never grows
       base.InsertItem(index, item);
       RemoveItem(0);
     }
    

Again, my problems were partly about JSON deserialization speed, but beyond that I couldn't fit ~160GB of JSON data into a collection. Even tightened up, it would be in the dozens-of-gigs area, way bigger than what .NET is going to be happy with.

InsertItem on ObservableCollection is the only method I'm aware of that lets you hook in as each item is deserialized; you can't do that with List.Add(). I know this solution isn't "elegant", but it's working as I type this; a fuller sketch follows.
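
Put together, the collection subclass looks roughly like this (a sketch: Feature is the auto-generated item type, and SaveToDatabase is a placeholder for whatever you do with each item):

    using System.Collections.ObjectModel;

    //Used on the property that holds the huge array, in place of List<Feature>
    public class StreamingFeatureCollection : ObservableCollection<Feature>
    {
        protected override void InsertItem(int index, Feature item)
        {
            //process the item that was just deserialized
            SaveToDatabase(item);

            //insert, then immediately remove, so the collection never grows
            base.InsertItem(index, item);
            RemoveItem(0);
        }

        private void SaveToDatabase(Feature item)
        {
            //stand-in for sticking the item in a database, a queue, etc.
        }
    }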

Solution 5:

You'd be wanting a SAX-type parser for JSON.

http://en.wikipedia.org/wiki/Simple_API_for_XML

http://www.saxproject.org/event.html

SAX raises an event as it parses each piece of the document.

Doing something like that in JSON would (should) be pretty simple, given how simple the JSON syntax is.
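
For instance, on top of Json.NET's reader you could hand-roll something SAX-flavored along these lines (just a sketch; ParseWithEvents and the callback shape are made up for illustration; requires System, System.IO and Newtonsoft.Json):

//Minimal SAX-style wrapper: invokes a callback for every scalar value it encounters
public static void ParseWithEvents(TextReader input, Action<string, object> onValue)
{
    using (var reader = new JsonTextReader(input))
    {
        string currentProperty = null;
        while (reader.Read())
        {
            if (reader.TokenType == JsonToken.PropertyName)
                currentProperty = (string)reader.Value;   //remember the current property name
            else if (reader.Value != null)
                onValue(currentProperty, reader.Value);   //raise an "event" for each value
        }
    }
}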

This question might be of help: Is there a streaming API for JSON?

And another link: https://www.p6r.com/articles/2008/05/22/a-sax-like-parser-for-json/