Recently a colleague came to me asking if I knew of a way to clear a file that was in use. She had tried waiting for the file to become "free" (unused) so it could be cleared, but it was continuously being used by various IBM i jobs and intranet applications.
I think all IBM i developers soon discover that you cannot use the CLRPFM command to clear a file while it is open in another program. Even if you write a program to delete the records from the file, you still cannot delete any records that are locked by another job.
So what to do?
It is possible using a DELETE statement in SQL.
DELETE FROM MYLIB/TESTPF
If you omit the library name it will use the library list to find the file.
If the file is being used for input this statement will delete all the records from the file (or rows from the table) even though the file may be opened by another program or job.
If the file is being used for update and a record is locked by another program, the SQL statement will "hang" for a time, give up the delete, and return an SQL code (field SQLCOD) of -913. If I add a call to QCMDEXC to execute the DLYJOB command, I can wait until the locked record is released, see below:
dcl-pr Command extpgm('QCMDEXC') ;
  *n char(20) options(*varsize) const ;
  *n packed(15:5) const ;
end-pr ;

dow (1 = 1) ;
  exec sql DELETE FROM MYLIB/TESTPF ;
  if (SQLCOD = -913) ;               // Record lock
    Command('DLYJOB DLY(30)':15) ;
  else ;
    leave ;
  endif ;
enddo ;
If you are unable to use free format definitions, the equivalent procedure prototype in fixed format would be:
D Command         PR                  extpgm('QCMDEXC')
D                               20    options(*varsize) const
D                               15  5 const
In my colleague's case locked records were not an issue in the wee small hours. The file was cleared using DELETE and the records added using SQL's INSERT statement. Fortunately the INSERT statement will insert rows/records into a file even when it is open in another program or job.
The simplest insert would be if the two files were identical:
INSERT INTO MYLIB/TESTPF2 SELECT * FROM MYLIB/TESTPF
If they are not identical then the statement could look like:
INSERT INTO MYLIB/TESTPF2
  SELECT FLD1, FLD3, FLD5, FLD21
    FROM MYLIB/TESTPF
   WHERE FLD1 = 'A'
You can learn more about this on IBM's website.
This article was written for IBM i 7.1.
Feel free to use this RPG:

FFILE UP   E             DISK
C                   DELETREC
C  LR               RETRN
Yes, you could use RPG to do the same, but if the file is opened for update and a record is locked by another program you will get an error message.
I could use the MONITOR op code to trap the error; see here for how to use MONITOR.
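A minimal sketch of that MONITOR idea, reusing the QCMDEXC prototype from the article. This is an assumption-laden illustration, not the author's code: it assumes TESTPF is defined for update/delete in this program, that free-form declarations are available on your release, and that status code 1218 ("record already locked") is the error to trap.

```
dcl-f TESTPF usage(*update:*delete) ;

dcl-pr Command extpgm('QCMDEXC') ;
  *n char(20) options(*varsize) const ;
  *n packed(15:5) const ;
end-pr ;

dou %eof(TESTPF) ;
  monitor ;
    read TESTPF ;                      // the read acquires the record lock
    if not %eof(TESTPF) ;
      delete TESTPF ;
    endif ;
  on-error 1218 ;                      // 1218 = record locked by another job
    Command('DLYJOB DLY(30)':15) ;     // wait 30 seconds, then retry the read
  endmon ;
enddo ;

*inlr = *on ;
```

As the article notes, the SQL DELETE was still noticeably faster than this record-at-a-time approach.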
In my testing I found that using SQL to delete & insert all the records from the file was a lot faster than using RPG to do the same.
Why not use embedded SQL? It ignores locking.
Or just SQL directly.
SQL will ignore a lock, but leaves the record in the file.
Deletehola, puedes usar una rutina SQL y realizar un DELETE FROM FILE. para mejorar el rendimiento del archivo puedes colocar en su definiciĆ³n que reutilice los registro borrados. es una sugerencia de como procesar. espero que sirva de algo saludos.
ReplyDeletehi, you can use a SQL routine and perform a DELETE FILE. to improve the performance of file you can place on your definition reuse deleted record. is a suggestion of how to process. I hope you serve something greetings.
A very good point about "reuse deleted records".
Hi. Create an identical temporary file, fill it, and then CPYF with *UPDADD.
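A hedged CL sketch of that sequence. The file and library names follow the article's examples and are illustrative only; MBROPT(*UPDADD) updates records with matching keys and adds the rest.

```
CRTDUPOBJ  OBJ(TESTPF) FROMLIB(MYLIB) OBJTYPE(*FILE) +
             TOLIB(QTEMP) NEWOBJ(TESTPF2)
/* ... fill QTEMP/TESTPF2 with the new data ... */
CPYF       FROMFILE(QTEMP/TESTPF2) TOFILE(MYLIB/TESTPF) +
             MBROPT(*UPDADD)
```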
I agree with the other comments. SQL DELETE will clear a file when CLRPFM won't, assuming you have no record locks.
Yes, SQL DELETE would be my choice of operation for such a requirement, especially when I create a work file and want to use and clear it; this allows me to keep it open. In the rare instance where a file is used in a single program only, and you want to clear it, you can (in RPG) close the file, execute a CL command to CLRPFM, and reopen it, but I assume that is not what you are referring to.
SQL it to death.
You can define the file with USROPN, and when you need to refresh the data, close the file, clear it, reopen it, and continue processing.
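A sketch of that USROPN pattern, reusing the QCMDEXC prototype from the article. WORKPF is a hypothetical work-file name, and note that CLRPFM will still fail if another job holds the file open; the point of the comment is refreshing a file used only by this job.

```
dcl-f WORKPF usage(*output) usropn ;

dcl-pr Command extpgm('QCMDEXC') ;
  *n char(20) options(*varsize) const ;
  *n packed(15:5) const ;
end-pr ;

open WORKPF ;
// ... write the work records ...
close WORKPF ;

Command('CLRPFM FILE(WORKPF)':19) ;    // file is closed in this job
open WORKPF ;
// ... continue processing against the cleared file ...
```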
Shake your server and say to it: f... file, I want you to die!
Use SQL to clear locked files, and minimize record-lock issues with an override to the record wait time.
ReplyDeleteE.g. to set this wait-time to 60 seconds, issue command OVRDBF WAITRCD(60)
I like the SQL option. You can also use the error indicator on the READ or DELETE in RPG to ignore any locked records as you attempt to delete them.
From a different perspective, you would hope that the never-ending program would be designed not to lock records unless it is updating them, to prevent or limit conflict :)!
Nicely stated, clear and yet a complete solution.
My approach was always the same as Chris's: skip the locked records, and only lock for add or update in never-ending programs. An example of this type of program is a velocity-check verification used by the POS systems at store level.
This does not only relate to never-ending programs.
Programs that do not close down upon returning control to their caller (in RPG terms, they do not set on *INLR) will keep a lock on output files too. This is usually done for performance reasons: by keeping the program open, the files it uses are not opened and closed on each program call.
You must define the file with "reuse deleted records". Then run SQL or RPG to delete the older records.
I rarely need to clear an entire file. If it is a work file, the program first does a CRTDUPOBJ into QTEMP and uses the QTEMP version of the file. Hence the only program that would ever need to clear the entire file is the very program that would ever have a lock on it.
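A sketch of that QTEMP pattern in CL; all object names here (WORKPF, MYLIB, MYPGM) are illustrative, not from the comment.

```
CRTDUPOBJ  OBJ(WORKPF) FROMLIB(MYLIB) OBJTYPE(*FILE) +
             TOLIB(QTEMP) DATA(*NO)
OVRDBF     FILE(WORKPF) TOFILE(QTEMP/WORKPF)
CALL       PGM(MYPGM)     /* MYPGM opens WORKPF, getting the QTEMP copy */
DLTOVR     FILE(WORKPF)
```

Because every job gets its own QTEMP library, no other job can ever hold a lock on this copy of the file.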
I try.
Usually, to refresh all the data in a table, the CPYF command is used with the *REPLACE option, or CLRPFM is run first and CPYF after.
With this technique, I suppose, the system must place an implicit allocation on the table, and if it is in use by another program the commands above fail.
Using the SQL statement (DELETE FROM MySchema/MyTable) instead, the system works at record level and, if no records are locked, the table is cleared. After this you can copy the new data into the table.