Oracle PL/SQL UTL_FILE.PUT buffering
I'm writing a large file (over 7 MB) from an Oracle stored procedure, and the requirement is that there be no line termination characters (no carriage return/line feed) at the end of each record.
I've written a stored procedure that uses UTL_FILE.PUT, and I follow each UTL_FILE.PUT call with a UTL_FILE.FFLUSH. The procedure fails with a write error once I've written more than the buffer size (set to the maximum of 32767), even though I'm making the FFLUSH calls. The procedure works fine if I replace the PUT calls with PUT_LINE calls.
Is it not possible to write more than the buffer size without a newline character? If so, is there a workaround?
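Roughly, the pattern looks like this (DATA_DIR, big_file.dat, and the dummy records below are placeholders, not the actual code):

DECLARE
  l_file   UTL_FILE.FILE_TYPE;
  l_record VARCHAR2(2000);
BEGIN
  l_file := UTL_FILE.FOPEN('DATA_DIR', 'big_file.dat', 'w', 32767);
  FOR i IN 1 .. 100000 LOOP
    l_record := RPAD('record_' || i, 200, 'x');  -- dummy record, no terminator wanted
    UTL_FILE.PUT(l_file, l_record);              -- no newline written
    UTL_FILE.FFLUSH(l_file);                     -- flush after every PUT
  END LOOP;
  UTL_FILE.FCLOSE(l_file);
END;
/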
Solution 1:
Dustin,
The Oracle documentation here: http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/u_file.htm#i1003404
states: "FFLUSH physically writes pending data to the file identified by the file handle. Normally, data being written to a file is buffered. The FFLUSH procedure forces the buffered data to be written to the file. The data must be terminated with a newline character."
The last sentence being the most pertinent.
Could you not write the data using UTL_FILE.PUT_LINE, and then search the resulting file for the line terminators and remove them?
Just a thought....
Solution 2:
(Deleted quote from the docs; see Ollie's answer.)
Another possible way to do this is a Java stored procedure, where you can use the more full-featured Java API for creating and writing to files.
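A minimal sketch of that approach, assuming the database can write to the target directory and the schema has been granted the necessary java.io.FilePermission (e.g. via DBMS_JAVA.GRANT_PERMISSION); the names FileAppender and append_raw_text are made up for illustration:

-- Java class that appends text to a file verbatim (no newline added).
CREATE OR REPLACE AND COMPILE JAVA SOURCE NAMED "FileAppender" AS
import java.io.FileWriter;
import java.io.IOException;

public class FileAppender {
    // Open in append mode, write the text exactly as given, then close.
    public static void append(String path, String text) throws IOException {
        FileWriter fw = new FileWriter(path, true);
        try {
            fw.write(text);
        } finally {
            fw.close();
        }
    }
}
/

-- PL/SQL call specification so the Java method can be called like any procedure.
CREATE OR REPLACE PROCEDURE append_raw_text (p_path IN VARCHAR2,
                                             p_text IN VARCHAR2) AS
LANGUAGE JAVA NAME 'FileAppender.append(java.lang.String, java.lang.String)';
/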
Solution 3:
Although it is less than desirable, you could always PUT until you have detected that you are nearing the buffer size. When this occurs, you can FCLOSE the file handle (flushing the buffer) and re-open that same file with FOPEN using 'a' (append) as the mode. Again, this technique should generally be avoided, especially if other processes are also trying to access the file (for example, closing a file usually revokes any locks the process had placed upon it, freeing up any other processes that were trying to acquire a lock).
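A hedged sketch of that workaround (again, DATA_DIR, big_file.dat, and the dummy records are placeholders): track the bytes written since the handle was last opened, and close/re-open in append mode before the 32767-byte buffer would overflow.

DECLARE
  l_file    UTL_FILE.FILE_TYPE;
  l_record  VARCHAR2(2000);
  l_written PLS_INTEGER := 0;
  c_limit   CONSTANT PLS_INTEGER := 32000;       -- stay safely under 32767
BEGIN
  l_file := UTL_FILE.FOPEN('DATA_DIR', 'big_file.dat', 'w', 32767);
  FOR i IN 1 .. 100000 LOOP
    l_record := RPAD('record_' || i, 200, 'x');  -- dummy record, no terminator
    -- If this record would push the un-terminated output past the buffer
    -- limit, close the handle (which flushes it) and re-open in append mode.
    IF l_written + LENGTHB(l_record) > c_limit THEN
      UTL_FILE.FCLOSE(l_file);
      l_file := UTL_FILE.FOPEN('DATA_DIR', 'big_file.dat', 'a', 32767);
      l_written := 0;
    END IF;
    UTL_FILE.PUT(l_file, l_record);
    l_written := l_written + LENGTHB(l_record);
  END LOOP;
  UTL_FILE.FCLOSE(l_file);
END;
/

Note that the running count uses LENGTHB (bytes) rather than LENGTH (characters), since the UTL_FILE buffer limit is a byte limit and the two differ for multi-byte data.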