Does "argument list too long" restriction apply to shell builtins?
Solution 1:
In bash, the OS-enforced limit on command-line length that causes the "argument list too long" error does not apply to shell builtins. The error is triggered when the `execve()` syscall fails with the error code `E2BIG`. There is no `execve()` call involved when invoking a builtin, so the error cannot occur.
Thus, both of your proposed operations are safe: `cmd <<< "$string"` writes `$string` to a temporary file, which does not require that it be passed as an argv element (or as an environment variable, which is stored in the same pool of reserved space); and `printf '%s\n' "$cmd"` takes place internal to the shell, unless the shell's configuration has been modified (as with `enable -n printf`) to use an external `printf` implementation.
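Both points can be checked interactively; here is a minimal bash sketch, assuming a stock bash where `printf` is enabled as a builtin and an external `/usr/bin/printf` exists:

```shell
#!/bin/bash
big=$(head -c 300000 /dev/zero | tr '\0' 'x')

# Here-string: the data reaches wc via a temporary file, not as an argv element.
wc -c <<< "$big"      # 300001 (the here-string appends a newline)

type -t printf        # builtin
enable -n printf      # switch to the external implementation
type -t printf        # file
enable printf         # restore the builtin
```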
Solution 2:
"I can't figure out whether the length restriction applies to shell builtins or not."
Probably not, but you should check the source code of your particular version of bash (since it is free software). However, there obviously is some, hopefully larger, limit (in particular because a `malloc` done inside bash could fail), but then you'll get a different error message or behavior.
AFAIK, the "argument list too long" error is produced when execve(2) fails with `E2BIG`, and bash builtins don't fork then execve (as invocations of external programs do).
In practice, `E2BIG` may appear at a few hundred thousand bytes (the exact limit depends on the kernel and system), but I would guess that builtins could handle several dozen megabytes on today's desktops. YMMV, since you could use `ulimit` to have your shell call setrlimit(2). I would not recommend handling gigabytes of data through shell builtins.
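You can inspect the current limit without recompiling anything; on Linux since kernel 2.6.23 the total argument-plus-environment budget is roughly a quarter of the stack ulimit, so raising `ulimit -s` raises it too:

```shell
getconf ARG_MAX   # total budget in bytes, e.g. 2097152 with an 8 MiB stack
ulimit -s         # stack limit in KiB, e.g. 8192
```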
BTW, xargs(1) can be helpful, and you could even raise the `E2BIG` limit by recompiling your kernel (and also through other means on recent kernels). A few years ago that was a strong motivation for me to recompile kernels.
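For illustration, xargs packs as many arguments as fit under the limit into each invocation, so a large input is simply split across several execs (GNU xargs defaults to a 128 KiB command buffer; that figure is a findutils detail, not a kernel one):

```shell
# ~1.2 MB of numbers cannot fit on one command line, so xargs
# runs echo several times -- one output line per invocation.
seq 1 200000 | xargs echo | wc -l
```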