Expose IIS SMTP Pickup folder as a file share - bad idea?

Environment:

  • IIS Web Farm
    • 5 servers
    • Windows Server 2008 R2
    • IIS 7.5
    • ASP.NET 3.5 and 4.0 web application

Our web app, like many, needs to send mail. Send only, no receiving.

In the past, we've enabled the IIS 6 SMTP service on each web server and used the .NET SMTP classes to drop mail files in the pickup folder. Works fine.
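
For illustration, the per-server setup we have today looks roughly like this in C# (the pickup path shown is the default IIS SMTP location; adjust as needed):

    using System.Net.Mail;

    // Each web server writes the .eml into its own local IIS SMTP pickup
    // folder; no network connection is involved.
    var message = new MailMessage("app@example.com", "someone@example.com",
                                  "Test subject", "Test body");

    var client = new SmtpClient
    {
        DeliveryMethod = SmtpDeliveryMethod.SpecifiedPickupDirectory,
        PickupDirectoryLocation = @"C:\inetpub\mailroot\Pickup" // default pickup folder
    };

    client.Send(message); // writes the message as an .eml file, nothing is transmitted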

To simplify the environment and run fewer services on the web servers, I'm considering the viability of running the SMTP service on just one utility Windows server and exposing its pickup directory to the rest of the web servers in the farm as a file share. The ASP.NET web applications would simply drop their mail files in the shared pickup folder, and the single SMTP service would handle the outgoing mail flow for all the web servers. I'm not worried about volume or the ability of the SMTP service to keep up; that won't be a problem.
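
The only change on the web servers would be pointing the pickup location at the share (the UNC path below is made up; the app pool identities would need write access to it):

    using System.Net.Mail;

    // Proposed setup: all 5 web servers drop their .eml files into the
    // shared pickup folder on the single utility box.
    var client = new SmtpClient
    {
        DeliveryMethod = SmtpDeliveryMethod.SpecifiedPickupDirectory,
        PickupDirectoryLocation = @"\\utilitybox\pickup" // hypothetical share name
    };

    client.Send(new MailMessage("app@example.com", "someone@example.com",
                                "Test subject", "Test body"));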

The pros:

  • Simplified administration and configuration
  • Lower load and a smaller attack surface on the web servers
  • Single point of troubleshooting if there are mail problems

The cons:

  • Single point of failure
  • If I need to reboot the utility box, I lose outbound mail capabilities. This could potentially be mitigated by using offline folder caching for the file share on the web servers. Haven't tested it, but possibly the web servers, sensing no connection to the file share, could drop their files locally, to be automatically synced to the SMTP server once it's back online.
  • Email file name clashing: we need to make sure that ASP.NET, when it writes the .eml files, uses a name guaranteed to be unique across the entire web farm. The SmtpClient source code indicates a GUID is used to name the files, but MS could change that in a future implementation (see the sketch below).
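
If we ended up writing the files ourselves instead of relying on SmtpClient's internal naming, something like this sketch would guarantee farm-wide uniqueness (the message building is simplified here, and this is not how SmtpClient itself does it):

    using System;
    using System.IO;

    // Sketch only: prefix the GUID with the machine name so two web servers
    // can never produce the same file name in the shared pickup folder.
    string fileName = string.Format("{0}-{1:N}.eml",
                                    Environment.MachineName, Guid.NewGuid());

    // A complete RFC 2822 message would normally be built elsewhere.
    string rawMessage =
        "From: app@example.com\r\n" +
        "To: someone@example.com\r\n" +
        "Subject: Test subject\r\n" +
        "\r\n" +
        "Test body\r\n";

    File.WriteAllText(Path.Combine(@"\\utilitybox\pickup", fileName), rawMessage);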

Will this work?

Edit: Thinking more about my idea of using Offline Files, I'm not sure it's the greatest approach. Offline Files requires a scheduled task to trigger synchronization, and every time it runs, I'll be pulling all the other web servers' mail files into every other server's local cache. Maybe a better idea is to just let the files pile up in a local folder and have some other job (a scheduled robocopy) attempt to copy them to the remote file share whenever it's up; a sketch of that job is below. I had thought about DFSR, but again, I'd be shuffling all the mail files around to all the web servers, and that's wasteful.
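
Roughly what I have in mind for that job, as a small C# console app run from Task Scheduler (paths are hypothetical; a robocopy /MOV of *.eml would do much the same thing):

    using System.IO;

    // Sketch of the "pile up locally, push to the share when it's reachable"
    // job. Copy first, delete only on success, so a failed push just leaves
    // the message in the local folder for the next scheduled run.
    class MailPush
    {
        const string LocalDrop = @"C:\MailDrop";             // hypothetical local folder
        const string SharedPickup = @"\\utilitybox\pickup";  // hypothetical share

        static void Main()
        {
            foreach (string file in Directory.GetFiles(LocalDrop, "*.eml"))
            {
                try
                {
                    File.Copy(file, Path.Combine(SharedPickup, Path.GetFileName(file)), false);
                    File.Delete(file);
                }
                catch (IOException)
                {
                    // Share unreachable (or name clash): leave the file for the next run.
                }
            }
        }
    }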


A better bet would be to use SMTP over TCP/IP, i.e. an SMTP server listening on port 25 (or any other port), and then use System.Net.Mail's MailMessage and SmtpClient.

You could then put a load balancer in front of the SMTP server and have each web server connect to that load-balanced name/IP.
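
Something along these lines, where the hostname is whatever your load balancer answers on (names and addresses here are placeholders):

    using System.Net.Mail;

    // Each web server speaks plain SMTP to the load-balanced address instead
    // of writing files anywhere.
    var client = new SmtpClient("smtp.internal.example.com", 25);

    client.Send(new MailMessage("app@example.com", "someone@example.com",
                                "Test subject", "Test body"));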


I can't see your point here. You're trying to simplify your setup by introducing more complexity and security issues.

SMTP is the solution to your problem; you only have to look for a good implementation. SMTP comes with built-in queue handling (no file name collisions), is failure-resistant, is redundancy-aware thanks to multiple MX records, and is an ancient, well-tested protocol. It is also operating-system independent.

You want to exchange this for a single-point-of-failure, scheduler-dependent, operating-system-bound, undocumented, untested, reinvent-the-wheel procedure. You would also be exposing a drop-in box to everybody, not just to these 5 servers.

Use a simple store-and-forward server (like E-MailRelay) on the web servers and a central SMTP server that the mail is forwarded to; the central server then handles final delivery. You can secure the central SMTP server to accept connections only from these 5 servers and do sender-based rejection, or whatever filtering/blocking you want, to reduce the potential spam risk to the public.
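
With a relay on every web server, the application code stays trivial: it hands the message to localhost and lets the relay queue and forward it (port 25 here is an assumption; use whatever the local relay actually listens on):

    using System.Net.Mail;

    // The local store-and-forward relay queues the message and forwards it to
    // the central SMTP server, which performs the final delivery.
    var client = new SmtpClient("localhost", 25);

    client.Send(new MailMessage("app@example.com", "someone@example.com",
                                "Test subject", "Test body"));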

This solution has all the benefits of transparency (logging + monitoring), simplicity (one process), conformity (a standard Internet protocol) and integrity (delivery confirmation). There's no need to replace it with self-made workarounds.