Is it OK to leave a small endless loop script running on a server? [closed]

It looks like I need to lay out the full scenario here for everyone.

Our users need to pick whatever they need from our file server and sync it to a remote location, but they have limited permissions on the file server to move files around. So here is my task:

Create a tool that users can use to pick data and sync it to the remote location. DFS and third-party tools are not options; it must be code written by ourselves, and everything must run in the background.

Here is my way of doing it, and it is working now. I have made three components:

**A.** An HTA application with VBS sitting on the user's PC, providing a file browser to pick data.

**B.** A shared location where the HTA writes data paths to a txt file. Any path in this text file will be turned into a softlink in a final location.

**C.** A final location on the file server that holds all the softlinks.

Here is basically how it works:

A user picks data from the file server using the HTA I made, which writes the full data path to the 000.txt file in the shared location. My endless-loop script monitors this shared location; if a 000.txt file is created by any user in this shared folder, it calls another script that reads all the data paths in 000.txt, uses mklink to make softlinks based on the paths the user provided, outputs the softlinks to the final location, and then deletes the 000.txt file. All softlinks in the final location are synced by robocopy on a schedule during the night. There are more functions required in my HTA application, but there is no need to cover them here.
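For illustration, the link-creation script is roughly along these lines. This is only a sketch, not my actual code; the UNC path, final location, and script details are placeholders:

```
@echo off
rem Sketch of the link-creation script: read each path from 000.txt,
rem create a softlink in the final location, then delete the request file.
rem The paths below are placeholders for the real shared and final locations.
setlocal
set "REQUEST=\\fileserver\requests\000.txt"
set "FINAL=D:\FinalLocation"

for /f "usebackq delims=" %%P in ("%REQUEST%") do (
    if exist "%%~P\" (
        rem Directory: create a directory symbolic link
        mklink /d "%FINAL%\%%~nxP" "%%~P"
    ) else (
        rem File: create a file symbolic link
        mklink "%FINAL%\%%~nxP" "%%~P"
    )
)

del "%REQUEST%"
endlocal
```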

Since no one here is talking about the code itself, I deleted my endless-loop code. The loop script starts with Windows and runs as a service; I can start/stop it any time I want. It basically just monitors that shared folder: if any user creates a 000.txt file in there, it calls mklink.bat to make the softlinks, and mklink.bat deletes 000.txt once the softlinks are made. The reason I use an endless loop instead of Task Scheduler is that users need to see results in the final location right after they submit a data path. I thought the minimum interval for Task Scheduler was one minute (@MikeAWood said it can be 1 second. Thanks!), so I made this 2-second-interval endless loop to monitor that shared folder.
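For completeness, the deleted loop is nothing more than something like the following rough reconstruction (the share path and script path are placeholders):

```
@echo off
rem Rough reconstruction of the 2-second polling loop described above.
rem The UNC path and script location are placeholders.
:loop
if exist "\\fileserver\requests\000.txt" call "C:\Scripts\mklink.bat"
timeout /t 2 /nobreak >nul
goto loop
```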

My question was the following:

Is it a good idea to run an endless loop on a server, essentially forever, to monitor a folder?

I monitored resource usage on the server while this script was running and don't see any significant consumption, so I guess it is harmless, right?

If Task Scheduler can handle a 1-second interval, I guess my question is solved. Thanks to you all.

Or let me know if you have a better way to do this, or any opinion on the way I'm doing it.


Solution 1:

As a general alternative to this: put your script in Task Scheduler and trigger it every minute, two minutes, whatever. This is more reliable, since your process will survive reboots and script errors. Using Scheduled Tasks also makes your task deployable to a large number of servers via Group Policy Preferences. Your current solution is an enemy of both scalability and reliability.
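As a sketch, registering your check script as a task that runs every minute could look something like this (the task name, script path, and account are placeholders for whatever fits your environment):

```
rem Hypothetical example: register the folder-check script as a scheduled
rem task that runs every minute under the SYSTEM account and survives reboots.
schtasks /create /tn "CheckSyncRequests" /tr "C:\Scripts\check000.bat" ^
         /sc minute /mo 1 /ru SYSTEM /f
```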

As for the actual script you're talking about - it seems like you're re-inventing a Frankenstein's Monster of DFS-R and/or Robocopy.


DFS-R is a scalable, mature file replication tool that is built into Windows Server. You should see if you can use it for this situation. Microsoft has put way more engineering brain-power into DFS-R than you could ever put into a script that does the same thing.

Also, robocopy has a /MIR switch, which mirrors directories. If you really can't use DFS-R for some reason, at least use something like that in a script.
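For example, a nightly mirror of the final location to the remote side might look roughly like this (source, destination, and log path are placeholders):

```
rem Hypothetical nightly mirror job: /MIR makes the destination an exact
rem mirror of the source, including deletions; /R and /W limit retries.
robocopy "D:\FinalLocation" "\\remoteserver\sync$" /MIR /R:2 /W:5 /LOG:"C:\Logs\nightly-sync.log"
```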

Solution 2:

You should be commended for asking about your approach. It's easy to run with the first idea you have, but it's better to validate it with others.

Several issues with your approach:

  • It has to be restarted manually every time the server restarts
  • It requires you to stay logged onto the console with credentials that have access to source & destination
  • There are already tools that do this (DFS, scheduled tasks, etc.)

(Some of these issues can be addressed, as you've mentioned, by running it as a service.) That said, only you can assess the validity of any particular solution to the problem you face. At least now you have options.

Solution 3:

There are surely better ways to do this than running an endless loop. Endless loops are a pain and cause frustration for everyone at all levels. Please don't do that.

Solution 4:

I'm curious why you are asking. Several people have offered alternative solutions and the response seems to be that you were ordered to do it this way. Are you looking for alternatives or are you looking for ammunition to go back to your manager and protest doing it this way?

The reasons for not doing it this way have been enumerated in other answers:

  • it is prone to failure, in that it won't survive a crash, a system restart, or any kind of processing error.

  • it requires effort on your part (as opposed to a vendor's) to maintain

  • there are security risks to this method

  • it is relatively inefficient

As far as alternatives go, I too had a bad experience with DFS and used DoubleTake Replication with great results. However, a subsequent release of DFS resolved my issues, and we now use DFS for DR replication across a WAN.