Why is client-side validation not enough?

Client-side validation - I assume you are talking about web pages here - relies on JavaScript.

JavaScript-powered validation can be turned off in the user's browser, fail due to a scripting error, or be maliciously circumvented without much effort.
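
To illustrate how little effort that takes, here is a minimal sketch (the form id, field name, and validation rule are made up for this example); pasting one line into the browser's developer console skips the check entirely:

```javascript
// Hypothetical client-side check attached to a form with id="signup"
const form = document.getElementById('signup');
form.addEventListener('submit', (event) => {
  const email = form.elements['email'].value;
  if (!email.includes('@')) {
    event.preventDefault();  // block "invalid" submissions
    alert('Please enter a valid email address.');
  }
});

// An attacker just pastes this into the developer console:
// HTMLFormElement.submit() does not fire submit event listeners and skips
// HTML5 constraint validation, so the "invalid" value goes straight to the server.
document.getElementById('signup').submit();
```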

Also, the whole process of form submission can be faked.

Therefore, there is never a guarantee that the data arriving server-side is clean and safe.
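
So the server has to repeat every check itself. A minimal sketch of what that could look like, assuming a Node.js/Express endpoint (the route and field names are illustrative, not anything from the question):

```javascript
const express = require('express');
const app = express();

app.use(express.urlencoded({ extended: true })); // parse HTML form bodies

app.post('/signup', (req, res) => {
  // Re-run every check here; the client-side checks may never have happened.
  const email = typeof req.body.email === 'string' ? req.body.email.trim() : '';
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email)) {
    return res.status(400).send('Invalid email address.');
  }
  // Only now treat the data as usable.
  res.send('OK');
});

app.listen(3000);
```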


There is a simple rule when writing server applications: never trust user data.

You always need to assume that a malicious user will access your server in a way you didn't intend (e.g., in this case via a manual request with curl instead of the intended web page). For example, if your web page tries to filter out SQL commands, that already gives an attacker a good hint that passing input containing SQL commands might be a promising attack vector.
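
As a sketch of both halves of that point, assuming a login form and the node-postgres (`pg`) client (none of these names come from the question): the attacker's request never touches your page at all, and the server-side defense is to bind the value as a query parameter rather than trying to filter SQL out of it:

```javascript
// The attacker never loads your page, e.g. something like:
//   curl -d "username=x' OR '1'='1" https://example.com/login
// so any client-side filtering of SQL keywords simply never runs.

// Server side, don't try to strip SQL from the input; pass it as a
// bound parameter instead (sketch using the node-postgres 'pg' client,
// with connection settings taken from environment variables).
const { Pool } = require('pg');
const pool = new Pool();

async function findUser(username) {
  // $1 is a placeholder; the driver sends the value separately from the
  // SQL text, so it can never be interpreted as SQL.
  const result = await pool.query(
    'SELECT id, username FROM users WHERE username = $1',
    [username]
  );
  return result.rows[0];
}
```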


Anyone who knows basic JavaScript can get around client-side validation.

Client-side validation is just there to improve the user experience (no need to reload the page to validate).
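
For example, a typical piece of client-side validation just gives instant feedback while the user types (the element ids and the rule here are made up); the same rule still has to be enforced again on the server:

```javascript
// Show feedback as the user types, without a round trip to the server.
const input = document.getElementById('age');
const hint = document.getElementById('age-hint');

input.addEventListener('input', () => {
  const value = Number(input.value);
  hint.textContent =
    Number.isInteger(value) && value >= 18
      ? ''
      : 'Please enter a whole number of at least 18.';
});
```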


The client you're talking to may not be the client you think you're talking to, so it may be ignoring whatever validation you're asking it to do.

In the web context, it's not only possible that a user could have JavaScript disabled in their browser - you may not be talking to a browser at all. The form submission could come from a bot which POSTs to your submission URL without ever having seen the form.
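
For instance, nothing prevents a script like the following (URL and field names are hypothetical) from submitting data without ever rendering the form or running its JavaScript:

```javascript
// A "bot" that never loads the page: it just sends the POST request
// the form would have sent, with whatever values it likes.
fetch('https://example.com/submit', {
  method: 'POST',
  headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
  body: new URLSearchParams({
    email: 'not-an-email',
    comment: 'spam spam spam',
  }),
}).then((res) => console.log(res.status));
```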

In the broader context, you could be dealing with a hacked client which sends data that the real client never would (e.g., aim-bots for FPS games), or even with a completely custom client, created by someone who reverse-engineered your wire protocol, that knows nothing about any validation you expect it to perform.