Is this idea about distributed database server with centralized storage feasible?
You could actually build a workable distributed database environment if you partition (or target) your reads and writes across different databases. We do this kind of work, and the trick is simple: keep the master database on a file server and direct all writes to it, while every user's computer holds a local copy of the database that serves all reads.

You then need a synchronization mechanism between the master database and the local copies. This can be done in several ways. One is to keep a "delta" table in the master database that records the transactions applied there. Whenever the user's application performs a read or write, it first checks the delta on the master and updates the local copy, applying only the transactions not yet applied locally (which can be determined from a timestamp). A background process could even do this continuously. The delta can be flushed on a daily or weekly cycle; if a user has not logged on within that window, you simply copy the whole database over to their machine.

The advantage of the local copy is that users can still query data while offline, and - believe it or not - reads stay fast even while you are online applying updates.
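A minimal sketch of that delta-pull idea, using SQLite since that is what the question is about. All schema and function names here (`pages`, `delta`, `sync_state`, `master_write`, `pull_delta`) are invented for illustration, and I use a monotonically increasing sequence number rather than the timestamp mentioned above, to sidestep clock skew between machines:

```python
import sqlite3

def init_schema(conn):
    # Example application table plus the sync bookkeeping tables.
    conn.execute("CREATE TABLE IF NOT EXISTS pages (title TEXT PRIMARY KEY, body TEXT)")
    conn.execute("CREATE TABLE IF NOT EXISTS delta ("
                 "seq INTEGER PRIMARY KEY AUTOINCREMENT, stmt TEXT)")
    conn.execute("CREATE TABLE IF NOT EXISTS sync_state (last_seq INTEGER)")
    conn.commit()

def master_write(master, stmt):
    # Every write applied on the master is also logged in the delta table.
    master.execute(stmt)
    master.execute("INSERT INTO delta (stmt) VALUES (?)", (stmt,))
    master.commit()

def pull_delta(master, local):
    # Replay on the local copy only the delta entries it has not seen yet.
    row = local.execute("SELECT last_seq FROM sync_state").fetchone()
    last = row[0] if row else 0
    for seq, stmt in master.execute(
            "SELECT seq, stmt FROM delta WHERE seq > ? ORDER BY seq", (last,)):
        local.execute(stmt)
        last = seq
    local.execute("DELETE FROM sync_state")
    local.execute("INSERT INTO sync_state (last_seq) VALUES (?)", (last,))
    local.commit()
```

Note that this replays raw SQL text, so non-deterministic statements (anything using `datetime('now')`, `random()`, and so on) would diverge between copies; a real implementation would log row-level changes instead.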
Is this idea feasible?
No.
Does it already exist?
Not that I know of.
What kind of database could support such an architecture?
See above.
Honestly, this is a really bad idea on many levels. There's a reason companies keep critical data within the datacenter. You don't want business applications to be dependent on X number of desktop machines to be up and running. Another issue would be firewalls - in all but small environments, there would be no guarantees that Desktop X would be able to communicate with Desktop Y, and good luck getting that firewall change past your network team.
Is there any reason your company doesn't have a central well-maintained database server that this app can use? There's no reason a company wiki should need its own database server.
This question isn't related to system administration but when I read it so many warning alarms went off that I just have to answer.
I really have to tell you that your entire concept is so far off the mark that you won't find anyone else doing it. For starters, SQLite is unsuited to such jobs, and the fact that you've had some success with it is due more to good luck than anything else.
Your plan has so many holes in it that I really don't know where to start, but I will tell you that it will be an overly complex system that proves incredibly unreliable and performs poorly.
Your comment that the "time to set up something like a company Wiki or similar can be reduced from several months to just days" tells me much. Setting up a wiki normally takes only minutes, and any decent wiki system has aids to speed up importing data from other systems.
I suggest you abandon your current design ideas and have a look at how such things are being done by others. Use any one of the common wiki systems (I prefer MediaWiki) with a regular database system (MySQL being very popular) and you'll not only save massive amounts of time but you'll end up with a system that is both more usable and more robust, plus much cheaper to implement.
In short, stop trying to reinvent the wheel, because your current design is going to end up more like a square with a hole roughly in the middle.
As mentioned, this question is outside the scope of systems administration. That said, distributed databases and distributed data stores are being used in some very recognizable places. While SQLite's strengths do not generally lend themselves to this type of application, it is not unheard of. Look, for example, at the Fossil project. Even though this is a distributed source control system based on SQLite, it also provides a distributed wiki and a blogging application, and might actually do the trick for you. While you should probably look beyond SQLite, this does not mean you need to abandon open source. Consider implementing your project in Apache CouchDB or a Hadoop-based data store. An even more novel approach is to create applications in a distributed user-space virtual environment like Inferno.
Your description sounds a lot like what POS (Point of Sale) systems use. One master terminal is declared on startup that does the database processing. A copy of the database is synced between the master and all the slave terminals for backup.
If the master was to fail, all the other terminals pop up a message saying "Make me the new master?". You press yes and everything continues on. This could continue until there was one terminal standing.
It works, and is sort of idiot-proof, but ending the day with a corrupted database is common. Luckily, the terminals only store that day's sales, so your daily totals may be off a little because some order didn't get saved right. This is preferred over the system going down for a few hours and losing sales.
In a big network/power outage, the end-of-day cleanup is what overtime is for, as the current day's sales can get spread across several different terminals and you have to sort it all out. I am glad I no longer do that work.
Stick to one big database server with good backups.