Squid over i2p, tor and localhost resources [duplicate]
So, here's my dilemma. I had the bright idea of using DNS to create my own TLD for my computer. I want all services to run on localhost, since I will be creating sites that can modify my computer, and all of these sites run under the .senor TLD. I currently have this TLD working with my Jekyll server at http://nic.senor/.
The problem comes in trying to get domains such as .onion.senor and .i2p.senor to work with their respective proxies. Obviously I had an oversight when planning: proxying data from my proxy to Tor's or I2P's proxy wasn't part of the original plan when I created the .senor TLD. My current setup is dnsmasq on 127.0.0.1:53, which forwards any domain not listed in the /etc/dnsmasq.hosts file to dnscrypt-proxy, hosted on 127.0.0.1:52.
I tried searching for answers on how to set up Squid like this, as the configuration confuses me (I will keep searching). I even found questions such as Squid over i2p, tor and localhost resources that ask for the same thing I want to do, but nobody has answered it since it was asked two years ago, so I am still stuck. Any help or pointer in the right direction would be greatly appreciated!
Edit 3: I am still working on the full answer, but at least I have the proxy itself down with Squid; I just need to figure out how to get Tor and I2P to work with it. (I also have problems getting the proxy to work outside the browser when testing on my Android, but that is outside the scope of this question. Plus, I have to disable my data connection to get the server connections to work on my Android without a DNS-not-found error for names like nic.senor and mailpile.senor.)
Edit 3 (continued): I have modified the config so I can connect to Tor on .onion and I2P on .i2p. I have tested I2P, but since Tor is currently blocked on my connection, I will have to work around that later to see whether Tor works. This is currently good enough for me, and maybe in the future, if I get Tor unblocked, I will add my own separate proxy for Tor to serve as a Tor "address book" (since dnsmasq does not support CNAME, sadly). I got help from https://serverfault.com/questions/198806/squid-selects-parent-depending-on-requested-url. This is good enough for me to accept this answer, so all that's left are tweaks and testing! :)
Basically, in my implementation of my private network, my browser goes through my proxy, which resolves the DNS requests internally (that is, using my laptop's DNS server) and then connects me to whatever site I want that is accessible to my laptop.
As for the DNS resolver, I use dnsmasq with a hosts file located at /etc/dnsmasq.hosts; anything not cached or found in that file is forwarded to dnscrypt-proxy. dnsmasq resides on 127.0.0.1:53 while dnscrypt-proxy resides on 127.0.0.1:52.
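For reference, a minimal dnsmasq.conf sketch of that split (the ports and hosts-file path are from my setup; treat the rest as an illustration, not my exact config):

```
# Serve the .senor zone from a dedicated hosts file
no-hosts
addn-hosts=/etc/dnsmasq.hosts
# Listen on localhost only
listen-address=127.0.0.1
port=53
# Forward anything not answered locally to dnscrypt-proxy on port 52
server=127.0.0.1#52
# Never send the private TLD upstream
local=/senor/
```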
The websites listed in dnsmasq.hosts are served on ports 80 and 443 by nginx, which routes incoming connections to various other servers: blog.senor routes to a Jekyll server at 127.0.0.2:4000 (I should probably block direct access to these backends using Squid), nic.senor just points to https://mailpile.senor/ as it doesn't currently have a proper site, and mailpile.senor goes to Mailpile on 127.0.0.1:33411. TLS works because I have my own root CA, which I imported into both my phone and laptop (I generated it with OpenSSL using the instructions from https://datacenteroverlords.com/2012/03/01/creating-your-own-ssl-certificate-authority/).
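The root-CA part, condensed from those instructions, looks roughly like this (the file names match my nginx config below; the subject names and validity periods are just placeholders, not the exact values I used):

```shell
# 1. Create the root CA key and self-signed certificate
openssl genrsa -out rootCA.key 2048
openssl req -x509 -new -key rootCA.key -sha256 -days 3650 \
    -subj "/CN=senor Root CA" -out rootCA.pem
# 2. Create the server key and a signing request for mailpile.senor
openssl genrsa -out mailpile.senor.priv.pem 2048
openssl req -new -key mailpile.senor.priv.pem \
    -subj "/CN=mailpile.senor" -out mailpile.senor.csr
# 3. Sign the request with the root CA
openssl x509 -req -in mailpile.senor.csr -CA rootCA.pem -CAkey rootCA.key \
    -CAcreateserial -days 825 -sha256 -out mailpile.senor.pub.pem
# 4. Check that the chain verifies
openssl verify -CAfile rootCA.pem mailpile.senor.pub.pem
```

rootCA.pem is what gets imported into the phone and laptop trust stores.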
A sample nginx config file that I use for my sites (with minor modifications):
/etc/nginx/conf.d/mailpile.conf
## our http server at port 80
server {
    listen 127.0.0.3:80 default;
    server_name mailpile.senor;
    ## redirect http to https ##
    rewrite ^ https://$server_name$request_uri? permanent;
}
## our https server at port 443
server {
    # IP address, port, and protocol to use
    listen 127.0.0.3:443 ssl;
    # Server URL
    server_name mailpile.senor;
    # Certs
    ssl_certificate certs/public/mailpile.senor.pub.pem;
    ssl_certificate_key certs/private/mailpile.senor.priv.pem;
    # Only use "safe" TLS protocols, not SSLv3 (POODLE attack)
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    # Prefer the server's cipher order over the client's
    ssl_prefer_server_ciphers on;
    ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
    # Don't know how secure this elliptic curve is, so it needs more research!
    #ssl_ecdh_curve secp384r1; # Requires nginx >= 1.1.0
    # Basically reuses TLS sessions to speed up page loads
    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off; # Requires nginx >= 1.5.9
    # Stapling sends OCSP info (may require a resolver too)
    #ssl_stapling on; # Requires nginx >= 1.3.7
    #ssl_stapling_verify on; # Requires nginx >= 1.3.7
    #resolver $DNS-IP-1 $DNS-IP-2 valid=300s;
    #resolver_timeout 5s;
    # Remember HSTS? Well, have FUN!
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
    # Prevents this site from being loaded in an iframe
    add_header X-Frame-Options DENY;
    # Prevents the browser from overriding the MIME type the server sends
    add_header X-Content-Type-Options nosniff;
    # Diffie-Hellman parameters for DHE ciphers (a revocation list would be ssl_crl)
    #ssl_dhparam /etc/nginx/ssl/dhparam.pem;
    location / {
        access_log /var/log/nginx/mailpile_access.log;
        error_log /var/log/nginx/mailpile_error.log info;
        proxy_pass http://127.0.0.1:33411;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
/etc/dnsmasq.hosts
127.0.0.1 nic.senor
127.0.0.2 blog.senor
127.0.0.3 mailpile.senor
Edit 1: I just Wiresharked my proxy and realized that basic auth does not encrypt my credentials at all, so I am now also working on fixing that!
Edit 2: I found http://patchlog.com/security/squid-digest-authentication/, which helped me learn how to create the digest authentication system and the new password file, where each entry is just an MD5 hash of $user:$realm:$pass. I also learned from https://bbs.archlinux.org/viewtopic.php?id=152346 that digest_pw_auth has been renamed to digest_file_auth.
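In other words, each line of the password file is user:realm:MD5(user:realm:password). A quick sketch of generating one line (the user name and password here are made up):

```python
import hashlib

def digest_line(user: str, realm: str, password: str) -> str:
    """Build one line of Squid's digest password file:
    user:realm:MD5(user:realm:password)."""
    ha1 = hashlib.md5(f"{user}:{realm}:{password}".encode()).hexdigest()
    return f"{user}:{realm}:{ha1}"

# Append the result to /etc/squid/passwords
print(digest_line("alice", "Proxy", "hunter2"))
```

The realm here must match the `auth_param digest realm` value in squid.conf, since the hash binds the password to that realm.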
/etc/squid/squid.conf
auth_param digest program /usr/lib/squid/digest_file_auth -c /etc/squid/passwords
auth_param digest realm Proxy
auth_param digest child 5
auth_param digest nonce_garbage_interval 5 minutes
auth_param digest nonce_max_duration 30 minutes
auth_param digest nonce_max_count 50
acl authenticated proxy_auth REQUIRED
# http_access rules are checked top-down, so this allow fires before the
# Safe_ports/CONNECT denies below; move it after them to keep those limits.
http_access allow authenticated
http_port 3128
# External Proxies
# Format is: cache_peer hostname type http-port icp-port [options]
# no-query/no-digest because these parents don't speak ICP or cache digests.
# Note: Tor's 9050 is a SOCKS port, not HTTP; Squid needs an HTTP parent
# (e.g. Privoxy, or Tor's HTTPTunnelPort), so the .onion peer may need changing.
cache_peer 127.0.0.1 parent 4444 0 no-query no-digest
cache_peer 127.0.0.2 parent 9050 0 no-query no-digest
# Determines When to Use External Proxies
cache_peer_domain 127.0.0.1 .i2p
cache_peer_domain 127.0.0.2 .onion
# Force these domains through the parents instead of resolving them locally
acl hidden_services dstdomain .i2p .onion
never_direct allow hidden_services
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12 # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7 # RFC 4193 local private network range
acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 1025-65535 # unregistered ports
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
#http_access allow localnet
#http_access allow localhost
http_access deny all
coredump_dir /var/spool/squid
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320