Volley out of memory error, weird allocation attempt
Sometimes, seemingly at random, Volley crashes my app on startup. It crashes in the application class, and the user cannot open the app again until they go into settings and clear the app data:
java.lang.OutOfMemoryError
at com.android.volley.toolbox.DiskBasedCache.streamToBytes(DiskBasedCache.java:316)
at com.android.volley.toolbox.DiskBasedCache.readString(DiskBasedCache.java:526)
at com.android.volley.toolbox.DiskBasedCache.readStringStringMap(DiskBasedCache.java:549)
at com.android.volley.toolbox.DiskBasedCache$CacheHeader.readHeader(DiskBasedCache.java:392)
at com.android.volley.toolbox.DiskBasedCache.initialize(DiskBasedCache.java:155)
at com.android.volley.CacheDispatcher.run(CacheDispatcher.java:84)
The "diskbasedbache" tries to allocate over 1 gigabyte of memory, for no obvious reason
how would I make this not happen? It seems to be an issue with Volley, or maybe an issue with a custom disk based cache but I don't immediately see (from the stack trace) how to 'clear' this cache or do a conditional check or handle this exception
Insight appreciated
In streamToBytes(), Volley first allocates a byte array sized to the cache file's length. Could your cache file be larger than the application's maximum heap size?
private static byte[] streamToBytes(InputStream in, int length) throws IOException {
    byte[] bytes = new byte[length]; // allocates a buffer as large as the cache file
    ...
}
public synchronized Entry get(String key) {
    CacheHeader entry = mEntries.get(key);
    File file = getFileForKey(key);
    // the allocation size comes straight from the file's length on disk
    byte[] data = streamToBytes(..., (int) file.length());
}
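To confirm that hypothesis, you can compare the cache files on disk against the heap limit before Volley touches them. A minimal diagnostic sketch, assuming cacheDir is the directory you passed to DiskBasedCache (the "CacheCheck" tag and the quarter-of-heap threshold are arbitrary choices of mine):
long maxHeap = Runtime.getRuntime().maxMemory();
File[] files = cacheDir.listFiles();
if (files != null) {
    for (File f : files) {
        // flag anything that would eat a large fraction of the heap when read
        if (f.length() > maxHeap / 4) {
            Log.w("CacheCheck", "Suspiciously large cache file: "
                    + f.getName() + " (" + f.length() + " bytes)");
        }
    }
}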
If you want to clear the cache, you could keep a reference to the DiskBasedCache; when it's time to clear it, use ClearCacheRequest and pass that cache instance in:
File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);
DiskBasedCache cache = new DiskBasedCache(cacheDir);
Network network = new BasicNetwork(new HurlStack());
RequestQueue queue = new RequestQueue(cache, network);
queue.start();

// clear all volley caches.
queue.add(new ClearCacheRequest(cache, null));
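The second constructor argument is an optional Runnable invoked once the cache has been emptied; a small sketch if you want a completion hook (the log tag is just for illustration):
queue.add(new ClearCacheRequest(cache, new Runnable() {
    @Override
    public void run() {
        // posted back to the main thread once the cache has been cleared
        Log.d("Volley", "Disk cache cleared");
    }
}));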
This clears all caches, so use it carefully. Of course, you can also do a conditional check instead: iterate over the files in cacheDir and delete whichever are too large.
File[] cacheFiles = cacheDir.listFiles(); // may be null if cacheDir doesn't exist yet
if (cacheFiles != null) {
    for (File cacheFile : cacheFiles) {
        if (cacheFile.isFile() && cacheFile.length() > 10000000) cacheFile.delete();
    }
}
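Since the crash happens in DiskBasedCache.initialize() (see the stack trace above), this sweep has to run before the RequestQueue is started. A sketch, assuming a custom Application class (MyApplication is a hypothetical name; "volley" is Volley's default cache directory name):
public class MyApplication extends Application {
    @Override
    public void onCreate() {
        super.onCreate();
        File cacheDir = new File(getCacheDir(), "volley");
        File[] cacheFiles = cacheDir.listFiles();
        if (cacheFiles != null) {
            for (File cacheFile : cacheFiles) {
                if (cacheFile.isFile() && cacheFile.length() > 10000000) cacheFile.delete();
            }
        }
        // only now create and start the RequestQueue
    }
}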
Volley wasn't designed as a cache for big data; it's an ordinary request cache. Don't store large data in it.
------------- Update at 2014-07-17 -------------
In fact, clearing all caches is a last resort, not a wise default. We should stop large requests from using the cache when we know in advance that the response will be big; and when we're not sure, we can still check the response data's size and call setShouldCache(false) to disable caching for that request:
public class TheRequest extends Request<String> {
    @Override
    protected Response<String> parseNetworkResponse(NetworkResponse response) {
        // if the response data is too large, there is still time to disable caching
        if (response.data.length > 10000) setShouldCache(false);
        ...
    }
}
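For reference, the elided part would typically follow Volley's own StringRequest; a sketch (not necessarily your exact code) of the full method:
@Override
protected Response<String> parseNetworkResponse(NetworkResponse response) {
    if (response.data.length > 10000) setShouldCache(false);
    String parsed;
    try {
        parsed = new String(response.data, HttpHeaderParser.parseCharset(response.headers));
    } catch (UnsupportedEncodingException e) {
        parsed = new String(response.data);
    }
    return Response.success(parsed, HttpHeaderParser.parseCacheHeaders(response));
}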
I experienced the same issue.
We knew we didn't have files that were GBs in size on initialization of the cache. It also occurred when reading header strings, which should never be GBs in length.
So it looked like the length was being read incorrectly by readLong.
We had two apps with roughly identical setups, except that one app created two independent processes on startup: the main application process and a 'SyncAdapter' process following the sync adapter pattern. Only the app with two processes would crash. These two processes would independently initialize the cache.
However, the DiskBasedCache uses the same physical location for both processes. We eventually concluded that concurrent initializations were resulting in concurrent reads and writes of the same files, leading to bad reads of the size parameter.
I don't have definitive proof that this is the issue, but I'm planning to build a test app to verify it.
In the short term, we've just caught the overly large byte allocation in streamToBytes and thrown an IOException, so that Volley catches the exception and simply deletes the file. However, it would probably be better to use a separate disk cache for each process, as sketched after the code below.
private static byte[] streamToBytes(InputStream in, int length) throws IOException {
    byte[] bytes;
    // this try-catch is a change added by us to handle a possible multi-process issue when reading cache files
    try {
        bytes = new byte[length];
    } catch (OutOfMemoryError e) {
        throw new IOException("Couldn't allocate " + length
                + " bytes to stream. May have parsed the stream length incorrectly");
    }
    int count;
    int pos = 0;
    while (pos < length && ((count = in.read(bytes, pos, length - pos)) != -1)) {
        pos += count;
    }
    if (pos != length) {
        throw new IOException("Expected " + length + " bytes, read " + pos + " bytes");
    }
    return bytes;
}
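For the per-process cache idea, one option is to derive the cache directory from the current process name, so the two processes never read or write the same files. A sketch (the helper name getPerProcessCacheDir and the "volley-" prefix are mine):
private static File getPerProcessCacheDir(Context context) {
    String processName = "default";
    ActivityManager am = (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
    List<ActivityManager.RunningAppProcessInfo> procs = am.getRunningAppProcesses();
    if (procs != null) {
        for (ActivityManager.RunningAppProcessInfo info : procs) {
            if (info.pid == android.os.Process.myPid()) {
                processName = info.processName; // e.g. "com.example.app" or "com.example.app:sync"
                break;
            }
        }
    }
    return new File(context.getCacheDir(), "volley-" + processName.replace(':', '-'));
}
Each process then constructs its DiskBasedCache with its own directory, so concurrent initialization can't corrupt shared files.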