Overcoming Rate Limiting in Facebook Marketing API

Solution 1:

Add this to your code and you shouldn't have to worry about FB's rate limiting. Your script will automatically sleep as soon as it approaches the limit, then pick up where it left off after the cool-down. Enjoy :)

import logging
import time

import requests as rq

# Function to find the string between two markers (returns "" if either marker is missing)
def find_between(s, first, last):
    try:
        start = s.index(first) + len(first)
        end = s.index(last, start)
        return s[start:end]
    except ValueError:
        return ""

# Function to check how close you are to the FB rate limit.
# It makes a cheap insights call and reads the 'x-business-use-case-usage'
# response header; the three values are percentages of your allowed usage (0-100).
# account_number and my_access_token must already be defined in your script.
def check_limit():
    check = rq.get('https://graph.facebook.com/v3.3/act_' + account_number +
                   '/insights?access_token=' + my_access_token)
    usage_header = check.headers['x-business-use-case-usage']
    call = float(find_between(usage_header, 'call_count":', ','))
    cpu = float(find_between(usage_header, 'total_cputime":', ','))
    total = float(find_between(usage_header, 'total_time":', ','))
    usage = max(call, cpu, total)
    return usage

# If you have reached 75% of the limit, back off for 5 minutes
# (put this chunk inside your loop, e.g. every 200-500 iterations)
if check_limit() > 75:
    print('75% Rate Limit Reached. Cooling Time 5 Minutes.')
    logging.debug('75% Rate Limit Reached. Cooling Time 5 Minutes.')
    time.sleep(300)
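
To show what "put this chunk in your loop" looks like in practice, here is a minimal sketch; campaign_ids and fetch_insights() are hypothetical placeholders for whatever objects you iterate over and however you actually call the API.

# Sketch of the back-off check inside a download loop.
# campaign_ids and fetch_insights() are placeholders -- swap in your own objects and API calls.
for i, campaign_id in enumerate(campaign_ids):
    data = fetch_insights(campaign_id)
    # Only poll the usage header every few hundred calls;
    # check_limit() itself costs one extra API request.
    if i % 200 == 0 and check_limit() > 75:
        print('75% Rate Limit Reached. Cooling Time 5 Minutes.')
        logging.debug('75% Rate Limit Reached. Cooling Time 5 Minutes.')
        time.sleep(300)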

Solution 2:

If you're only reading data at this point, why not make a batch request? I was doing the same thing as you, but ended up requesting more data per call (I had to fiddle with it, since there is such a thing as too much data in one request and FB won't allow that either) and then looping through the results.
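
In case it helps, here is a rough sketch of a batched read using the Graph API batch endpoint; AD_ACCOUNT_ID, my_access_token and the relative URLs are placeholders, not values from your setup.

import json
import requests as rq

# Bundle several GET requests into a single HTTP call via the batch endpoint.
# AD_ACCOUNT_ID and my_access_token are placeholders you must define yourself.
batch = [
    {"method": "GET", "relative_url": "act_" + AD_ACCOUNT_ID + "/campaigns?fields=name,status"},
    {"method": "GET", "relative_url": "act_" + AD_ACCOUNT_ID + "/insights?date_preset=last_7d"},
]
resp = rq.post(
    "https://graph.facebook.com/v3.3",
    data={"access_token": my_access_token, "batch": json.dumps(batch)},
)
# Each element in the response maps to one request in the batch and carries
# its own HTTP status code and JSON body (it can be null if not completed).
for item in resp.json():
    if item and item.get("code") == 200:
        print(json.loads(item["body"]))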

For my purposes, I went with batched async requests plus a 10-second sleep whenever I hit my limit. Works well for me.
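
If you want to try the async route, something along these lines with the facebook_business SDK is the general shape of it; MY_ACCESS_TOKEN and AD_ACCOUNT_ID are placeholders and the 10-second sleep mirrors the back-off above. Treat it as a sketch, not a drop-in solution.

import time

from facebook_business.api import FacebookAdsApi
from facebook_business.adobjects.adaccount import AdAccount
from facebook_business.adobjects.adreportrun import AdReportRun
from facebook_business.exceptions import FacebookRequestError

# Placeholders -- fill in your own credentials and account id.
FacebookAdsApi.init(access_token=MY_ACCESS_TOKEN)
account = AdAccount('act_' + AD_ACCOUNT_ID)

try:
    # Start the insights report as an async job instead of a blocking GET.
    job = account.get_insights(
        params={'level': 'ad', 'date_preset': 'last_30d'},
        is_async=True,
    )
    # Poll until the report is ready (you may also want to bail out
    # if the status comes back as 'Job Failed').
    job.api_get()
    while job[AdReportRun.Field.async_status] != 'Job Completed':
        time.sleep(10)
        job.api_get()
    for row in job.get_result():
        print(row)
except FacebookRequestError:
    # If the API pushes back (e.g. a rate-limit error), back off briefly
    # and retry on the next pass of your own loop.
    time.sleep(10)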