How to prevent python requests from percent encoding my URLs?
Solution 1:
It is not an ideal solution, but you can pass the query string directly as a string:
r = requests.get(url, params='format=json&key=site:dummy+type:example+group:wheel')
BTW: here is the code that converts the payload dictionary to that string:
payload = {
'format': 'json',
'key': 'site:dummy+type:example+group:wheel'
}
payload_str = "&".join("%s=%s" % (k,v) for k,v in payload.items())
# 'format=json&key=site:dummy+type:example+group:wheel'
r = requests.get(url, params=payload_str)
EDIT (2020):
You can also use urllib.parse.urlencode(...) with the parameter safe=':+' to build the string without encoding the characters : and +.
As far as I know, requests also uses urllib.parse.urlencode(...) for this internally, but without safe=.
import requests
import urllib.parse
payload = {
'format': 'json',
'key': 'site:dummy+type:example+group:wheel'
}
payload_str = urllib.parse.urlencode(payload, safe=':+')
# 'format=json&key=site:dummy+type:example+group:wheel'
url = 'https://httpbin.org/get'
r = requests.get(url, params=payload_str)
print(r.text)
I used the page https://httpbin.org/get to test it.
Solution 2:
The solution, as designed, is to pass the URL directly.
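A minimal sketch of that approach, reusing the httpbin test URL from Solution 1 (this illustration is mine, not part of the original answer). Characters such as : and + are already valid in a query string, so requests should leave them as written when you build the full URL yourself:
import requests

# Build the full URL by hand so the query string is not re-encoded.
url = 'https://httpbin.org/get?format=json&key=site:dummy+type:example+group:wheel'
r = requests.get(url)
print(r.url)  # the query string should come back exactly as written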
Solution 3:
In case someone else comes across this in the future: you can subclass requests.Session, override the send method, and alter the raw URL to undo unwanted percent encodings and the like. Corrections to the code below are welcome.
import requests
import urllib.parse

class NoQuotedCommasSession(requests.Session):
    def send(self, *a, **kw):
        # a[0] is the PreparedRequest; undo the percent encoding of commas
        a[0].url = a[0].url.replace(urllib.parse.quote(","), ",")
        return requests.Session.send(self, *a, **kw)
s = NoQuotedCommasSession()
s.get("http://somesite.com/an,url,with,commas,that,won't,be,encoded.")
Solution 4:
The answers above didn't work for me.
I was trying to make a GET request where a parameter contained a pipe character, but Python requests would percent encode the pipe as well. So instead I used urlopen:
# python3
from urllib.request import urlopen
base_url = 'http://www.example.com/search?'
query = 'date_range=2017-01-01|2017-03-01'
url = base_url + query
response = urlopen(url)
data = response.read()
# response data valid
print(response.url)
# output: 'http://www.example.com/search?date_range=2017-01-01|2017-03-01'