As part of the IBM i 7.1 release IBM notified us all of its intention not to make any future enhancements to DDS for data (physical and logical) files, and encouraged us to migrate those files to their SQL equivalents (tables, views, and indexes). For many of us this is an overwhelming proposition, as the applications we support contain many thousands of files, and to convert these by hand would take a very long time. Fortunately IBM has an API, 'Generate Data Definition Language' (QSQGNDDL), to assist.
The QSQGNDDL API has been around since V5R1, and is simple to use. I am going to give examples of RPG programs to call the API: one in all free RPG, and a version with fixed format definitions. I will then show how to convert a simple file and two logical files built over it. When the API runs it looks at the file object and updates a source member with the equivalent SQL statements. It does not create the source member; you will have to do that before the API is called. I would have a CL program that creates the desired source member and then calls the example RPG programs.
A member can be added to a source file using the 'Add Physical File Member' command:
ADDPFM FILE(MYLIB/DEVSRC) MBR(TESTFILE2)
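For example, a minimal CL wrapper along these lines could create the member, carry on if it already exists, and then call the conversion program. TESTRPG1 is the name used for the example RPG program below; adjust the names to suit:

  PGM        PARM(&DDSLIB &DDSFILE &DDSTYPE &SQLLIB &SQLSRCF &SQLMBR)
    DCL        VAR(&DDSLIB)  TYPE(*CHAR) LEN(10)
    DCL        VAR(&DDSFILE) TYPE(*CHAR) LEN(10)
    DCL        VAR(&DDSTYPE) TYPE(*CHAR) LEN(10)
    DCL        VAR(&SQLLIB)  TYPE(*CHAR) LEN(10)
    DCL        VAR(&SQLSRCF) TYPE(*CHAR) LEN(10)
    DCL        VAR(&SQLMBR)  TYPE(*CHAR) LEN(10)
    /* Create the target source member; carry on if it already exists */
    ADDPFM     FILE(&SQLLIB/&SQLSRCF) MBR(&SQLMBR)
    MONMSG     MSGID(CPF7306)
    /* Call the RPG program that invokes the QSQGNDDL API */
    CALL       PGM(TESTRPG1) PARM(&DDSLIB &DDSFILE &DDSTYPE +
                 &SQLLIB &SQLSRCF &SQLMBR)
  ENDPGM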
I am going to describe the all free program here, and just include the fixed format version at the bottom of this post. If you are not familiar with the free form definitions you should read my earlier posts:
- File definition in RPG all free
- Defining variables in RPG all free
- Defining Procedures in RPG all free
- Mixing it up with RPG all free
- Getting off the RPG cycle
Here is the code for the all free RPG program:
01  ctl-opt actgrp(*new) dftactgrp(*no) main(Main) ;

02  dcl-pr Main extpgm('TESTRPG1') ;
03    *n char(10) ;
04    *n char(10) ;
05    *n char(10) ;
06    *n char(10) ;
07    *n char(10) ;
08    *n char(10) ;
09  end-pr ;

10  dcl-proc Main ;
11    dcl-pi *n ;
12      DDS_Library char(10) ;
13      DDS_File char(10) ;
14      DDS_Type char(10) ;
15      SQL_Library char(10) ;
16      SQL_SourceFile char(10) ;
17      SQL_Member char(10) ;
18    end-pi ;

19    dcl-pr CreateSqlSource extpgm('QSQGNDDL') ;
20      *n char(583) const ;
21      *n int(10) const ;
22      *n char(8) const ;
23      *n char(256) ;
24    end-pr ;

25    dcl-ds InputTemplate qualified ;
26      InFile char(258) ;
27      InLibrary char(258) ;
28      InFileType char(10) inz('TABLE') ;
29      OutSourceFile char(10) ;
30      OutLibrary char(10) ;
31      OutMember char(10) ;
32      SeverityLevel uns(10) inz(30) ;
33      *n char(16) inz('10ISO ISO SYS.00') ;
34      MessageLevel uns(10) inz(0) ;
35      *n char(3) inz('111') ;
36    end-ds ;

37    dcl-ds ErrorDs qualified ;
38      Size uns(10) inz(%size(ErrorDs)) ;
39      ReturnedSize uns(10) ;
40      Id char(7) ;
41      *n char(1) inz(x'00') ;
42      Data char(240) ;
43    end-ds ;

44    InputTemplate.InFile = DDS_File ;
45    InputTemplate.InLibrary = DDS_Library ;
46    InputTemplate.OutSourceFile = SQL_SourceFile ;
47    InputTemplate.OutLibrary = SQL_Library ;
48    InputTemplate.OutMember = SQL_Member ;

49    if ((DDS_Type = 'INDEX') or (DDS_Type = 'VIEW')) ;
50      InputTemplate.InFileType = DDS_Type ;
51    endif ;

52    CreateSqlSource(InputTemplate : %len(InputTemplate) : 'SQLR0100' : ErrorDs) ;

53    return ;
54  end-proc ;
As this program uses a 'Main' procedure, see line 1, I have defined the prototype for the Main procedure in lines 2 – 9. I could have given names to the parameters in the prototype, but there is no need to do so; therefore, I use *N in place of a name on lines 3 – 8.
The Main procedure starts at line 10 and ends at line 54. The procedure interface is given in lines 11 – 18; this is the equivalent of an *ENTRY plist.
The prototype for the QSQGNDDL API is given in lines 19 – 24. I have chosen to use the name CreateSqlSource rather than the name of the API. Again I have not named the parameters, so I use *N in their place.
The InputTemplate data structure is the equivalent of a parameter list to be passed to the API. An explanation of the various subfields can be found in the IBM documentation, linked at the bottom of this post. I have tried to use names that, in my opinion, describe the function of the subfields. The two exceptions are on lines 33 and 35: these are unnamed subfields I created that each cover several of the API's fields at once, and as I am going to use the default values I did not see the point in giving each its own subfield. I have qualified the data structure; therefore, when I use the subfields I need to give the data structure name too.
The error data structure, ErrorDs, is filled by the API and returned to this program. I would normally check the Id subfield to see whether the API found any errors. As this is just a simple example I have not included the error checking I would normally perform.
Lines 44 – 48 just move the input parameters from the procedure interface to the equivalent subfields in the InputTemplate data structure. I have designed this program to convert only to SQL tables, views, and indexes; the API can perform many other types of conversion too. On lines 49 – 51 I change the type of conversion when it is something other than a table.
The call to the API is on line 52. The four parameters are:
- The input parameters in the InputTemplate data structure.
- The length of the input template.
- The input template format, which will always be SQLR0100.
- The error data structure, which will be returned to this program and contains information about any errors encountered.
As this is a procedure I can just use RETURN to exit it, line 53, and the procedure ends on line 54.
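Once compiled, the program can be called like any other. For example, to convert the physical file shown below into the member created earlier (TESTRPG1 being the example program name used here):

  CALL PGM(TESTRPG1) PARM('MYLIB' 'TESTFILE' 'TABLE' 'MYLIB' 'DEVSRC' 'TESTFILE2')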
What does this produce? Below I show first the DDS for the file, followed by the SQL the API produced. Notice that the SQL does not contain the key that the DDS has, it could not translate the DATFMT or VALUES keywords in the DDS, and it incorporates the referenced field (FLD010) into the SQL DDL rather than keeping it as a reference to the field reference file (TESTFREF).
DDS:
A                                  UNIQUE
A          R TESTFILER
A            FLD001       1A       TEXT('Alphanumeric field 1')
A                                  COLHDG('Alpha' '1')
A            FLD002       3P 0     TEXT('Packed numeric field 2')
A                                  COLHDG('Pack' '2')
A            FLD003       5S 2     TEXT('Signed numeric field 3')
A                                  COLHDG('Sign' '3')
A            FLD004        L       DATFMT(*USA)
A                                  TEXT('Date field in *USA format 4')
A                                  COLHDG('Date' '4')
A            FLD005        T       TEXT('Time field 5')
A                                  COLHDG('Time' '5')
A            FLD006        Z       TEXT('Timestamp field 6')
A                                  COLHDG('Timestamp' '6')
A            FLD007       1A       VALUES('Y' 'N')
A                                  TEXT('Alphanumeric field with value-
A                                  s 7')
A                                  COLHDG('Alpha' '7')
A            FLD008     200A       VARLEN
A                                  TEXT('Alphanumeric field with varyi-
A                                  ng length 8')
A                                  COLHDG('Alpha' '8')
A            FLD009      30P 0     TEXT('Packed numeric field 9')
A                                  COLHDG('Pack' '9')
A            FLD010        R       REFFLD(REF001 TESTFREF)
A          K FLD001
SQL table generated:
--  Generate SQL
--  Version:              V7R1M0 100423
--  Generated on:         DD/DD/DD TT:TT:TT
--  Relational Database:
--  Standards Option:     DB2 for i

CREATE TABLE MYLIB/TESTFILE (
--  SQL1506  30  Key or attribute for TESTFILE in MYLIB ignored.
      FLD001 CHAR(1) CCSID 37 NOT NULL DEFAULT '' ,
      FLD002 DECIMAL(3, 0) NOT NULL DEFAULT 0 ,
      FLD003 NUMERIC(5, 2) NOT NULL DEFAULT 0 ,
      FLD004 DATE NOT NULL DEFAULT CURRENT_DATE ,
--  SQL150D  10  DATFMT in column FLD004 ignored.
      FLD005 TIME NOT NULL DEFAULT CURRENT_TIME ,
      FLD006 TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ,
      FLD007 CHAR(1) CCSID 37 NOT NULL DEFAULT '' ,
--  SQL150D  10  VALUES in column FLD007 ignored.
      FLD008 VARCHAR(200) CCSID 37 NOT NULL DEFAULT '' ,
      FLD009 DECIMAL(30, 0) NOT NULL DEFAULT 0 ,
      FLD010 CHAR(10) CCSID 37 NOT NULL DEFAULT '' )
   RCDFMT TESTFILER ;

LABEL ON TABLE MYLIB/TESTFILE
   IS 'Test file' ;

LABEL ON COLUMN MYLIB/TESTFILE (
   FLD001 IS 'Alpha 1' ,
   FLD002 IS 'Pack 2' ,
   FLD003 IS 'Sign 3' ,
   FLD004 IS 'Date 4' ,
   FLD005 IS 'Time 5' ,
   FLD006 IS 'Timestamp 6' ,
   FLD007 IS 'Alpha 7' ,
   FLD008 IS 'Alpha 8' ,
   FLD009 IS 'Pack 9' ,
   FLD010 IS 'Ref 1' ) ;

LABEL ON COLUMN MYLIB/TESTFILE (
   FLD001 TEXT IS 'Alphanumeric field 1' ,
   FLD002 TEXT IS 'Packed numeric field 2' ,
   FLD003 TEXT IS 'Signed numeric field 3' ,
   FLD004 TEXT IS 'Date field in *USA format 4' ,
   FLD005 TEXT IS 'Time field 5' ,
   FLD006 TEXT IS 'Timestamp field 6' ,
   FLD007 TEXT IS 'Alphanumeric field with values 7' ,
   FLD008 TEXT IS 'Alphanumeric field with varying length 8' ,
   FLD009 TEXT IS 'Packed numeric field 9' ,
   FLD010 TEXT IS 'Reference field 1' ) ;
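The generated member is ordinary SQL source. Assuming the member created above, it could then be executed with the Run SQL Statements command to actually create the table:

  RUNSQLSTM SRCFILE(MYLIB/DEVSRC) SRCMBR(TESTFILE2) COMMIT(*NONE)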
The first logical file I converted is very simple; its DDS is shown below. I ran the API twice: once to generate the SQL for a view, and a second time to generate the SQL for an index.
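For example, using the hypothetical TESTRPG1 program again (the target members must already exist, and the member names are arbitrary):

  CALL PGM(TESTRPG1) PARM('MYLIB' 'TESTFILEL' 'VIEW' 'MYLIB' 'DEVSRC' 'TESTFILEL')
  CALL PGM(TESTRPG1) PARM('MYLIB' 'TESTFILEL' 'INDEX' 'MYLIB' 'DEVSRC' 'TESTFILELX')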
DDS:
A          R TESTFILER                 PFILE(TESTFILE)
A          K FLD002
A          K FLD001
SQL view generated:
--  Generate SQL
--  Version:              V7R1M0 100423
--  Generated on:         DD/DD/DD TT:TT:TT
--  Relational Database:
--  Standards Option:     DB2 for i

CREATE VIEW MYLIB/TESTFILEL (
--  SQL1506  30  Key or attribute for TESTFILEL in MYLIB ignored.
      FLD001 , FLD002 , FLD003 , FLD004 , FLD005 ,
      FLD006 , FLD007 , FLD008 , FLD009 , FLD010 )
   AS
   SELECT
      FLD001 ,
      FLD002 ,
      FLD003 ,
      FLD004 ,
--  SQL150D  10  DATFMT in column FLD004 ignored.
      FLD005 ,
      FLD006 ,
      FLD007 ,
--  SQL150D  10  VALUES in column FLD007 ignored.
      FLD008 ,
      FLD009 ,
      FLD010
   FROM MYLIB/TESTFILE
   RCDFMT TESTFILER ;

LABEL ON TABLE MYLIB/TESTFILEL
   IS 'Test logical' ;

LABEL ON COLUMN MYLIB/TESTFILEL (
   FLD001 IS 'Alpha 1' ,
   FLD002 IS 'Pack 2' ,
   FLD003 IS 'Sign 3' ,
   FLD004 IS 'Date 4' ,
   FLD005 IS 'Time 5' ,
   FLD006 IS 'Timestamp 6' ,
   FLD007 IS 'Alpha 7' ,
   FLD008 IS 'Alpha 8' ,
   FLD009 IS 'Pack 9' ,
   FLD010 IS 'Ref 1' ) ;

LABEL ON COLUMN MYLIB/TESTFILEL (
   FLD001 TEXT IS 'Alphanumeric field 1' ,
   FLD002 TEXT IS 'Packed numeric field 2' ,
   FLD003 TEXT IS 'Signed numeric field 3' ,
   FLD004 TEXT IS 'Date field in *USA format 4' ,
   FLD005 TEXT IS 'Time field 5' ,
   FLD006 TEXT IS 'Timestamp field 6' ,
   FLD007 TEXT IS 'Alphanumeric field with values 7' ,
   FLD008 TEXT IS 'Alphanumeric field with varying length 8' ,
   FLD009 TEXT IS 'Packed numeric field 9' ,
   FLD010 TEXT IS 'Reference field 1' ) ;
SQL index generated:
--  Generate SQL
--  Version:              V7R1M0 100423
--  Generated on:         DD/DD/DD TT:TT:TT
--  Relational Database:
--  Standards Option:     DB2 for i

CREATE INDEX MYLIB/TESTFILEL
   ON MYLIB/TESTFILE (
      FLD002 ASC ,
      FLD001 ASC ) ;

LABEL ON INDEX MYLIB/TESTFILEL
   IS 'Test logical' ;
The second logical file contained a record selection statement:
DDS:
A          R TESTFILER                 PFILE(TESTFILE)
A          K FLD001
A          S FLD007                    COMP(EQ 'Y')
SQL view generated:
--  Generate SQL
--  Version:              V7R1M0 100423
--  Generated on:         DD/DD/DD TT:TT:TT
--  Relational Database:
--  Standards Option:     DB2 for i

CREATE VIEW MYLIB/TESTFILEA (
--  SQL1506  30  Key or attribute for TESTFILEA in MYLIB ignored.
      FLD001 , FLD002 , FLD003 , FLD004 , FLD005 ,
      FLD006 , FLD007 , FLD008 , FLD009 , FLD010 )
   AS
   SELECT
      FLD001 ,
      FLD002 ,
      FLD003 ,
      FLD004 ,
--  SQL150D  10  DATFMT in column FLD004 ignored.
      FLD005 ,
      FLD006 ,
      FLD007 ,
--  SQL150D  10  VALUES in column FLD007 ignored.
      FLD008 ,
      FLD009 ,
      FLD010
   FROM MYLIB/TESTFILE
   WHERE FLD007 = 'Y'
   RCDFMT TESTFILER ;

LABEL ON TABLE MYLIB/TESTFILEA
   IS 'Test logical with select' ;

LABEL ON COLUMN MYLIB/TESTFILEA (
   FLD001 IS 'Alpha 1' ,
   FLD002 IS 'Pack 2' ,
   FLD003 IS 'Sign 3' ,
   FLD004 IS 'Date 4' ,
   FLD005 IS 'Time 5' ,
   FLD006 IS 'Timestamp 6' ,
   FLD007 IS 'Alpha 7' ,
   FLD008 IS 'Alpha 8' ,
   FLD009 IS 'Pack 9' ,
   FLD010 IS 'Ref 1' ) ;

LABEL ON COLUMN MYLIB/TESTFILEA (
   FLD001 TEXT IS 'Alphanumeric field 1' ,
   FLD002 TEXT IS 'Packed numeric field 2' ,
   FLD003 TEXT IS 'Signed numeric field 3' ,
   FLD004 TEXT IS 'Date field in *USA format 4' ,
   FLD005 TEXT IS 'Time field 5' ,
   FLD006 TEXT IS 'Timestamp field 6' ,
   FLD007 TEXT IS 'Alphanumeric field with values 7' ,
   FLD008 TEXT IS 'Alphanumeric field with varying length 8' ,
   FLD009 TEXT IS 'Packed numeric field 9' ,
   FLD010 TEXT IS 'Reference field 1' ) ;
SQL index generated:
--  Generate SQL
--  Version:              V7R1M0 100423
--  Generated on:         DD/DD/DD TT:TT:TT
--  Relational Database:
--  Standards Option:     DB2 for i

CREATE INDEX MYLIB/TESTFILEA
   ON MYLIB/TESTFILE (
      FLD001 ASC )
   WHERE FLD007 = 'Y' ;

LABEL ON INDEX MYLIB/TESTFILEA
   IS 'Test logical with select' ;
If you are running IBM i 7.1 TR9 or 7.2 there is what I think is an easier method, which I will describe in next week's post.
This is the version of the example program using fixed format definitions, and not using the Main procedure. You can still use an *ENTRY plist instead, but it has to be coded in fixed format.
H actgrp(*new) dftactgrp(*no)

D CreateSqlSource...
D                 PR                  extpgm('QSQGNDDL')
D                              583    const
D                               10I 0 const
D                                8    const
D                              256

D InputTemplate   DS                  qualified
D  InFile                      258
D  InLibrary                   258
D  InFileType                   10    inz('TABLE')
D  OutSourceFile                10
D  OutLibrary                   10
D  OutMember                    10
D  SeverityLevel                10U 0 inz(30)
D                               16    inz('10ISO ISO SYS.00')
D  MessageLevel                 10U 0 inz(0)
D                                3    inz('111')

D ErrorDs         DS                  qualified
D  Size                         10U 0 inz(%size(ErrorDs))
D  ReturnedSize                 10U 0
D  Id                            7
D                                1    inz(x'00')
D  Data                        240

D DDS_Library     S             10
D DDS_File        S             10
D DDS_Type        S             10
D SQL_Library     S             10
D SQL_SourceFile  S             10
D SQL_Member      S             10

C     *entry        plist
C                   parm                    DDS_Library
C                   parm                    DDS_File
C                   parm                    DDS_Type
C                   parm                    SQL_Library
C                   parm                    SQL_SourceFile
C                   parm                    SQL_Member

 /free
  InputTemplate.InFile = DDS_File ;
  InputTemplate.InLibrary = DDS_Library ;
  InputTemplate.OutSourceFile = SQL_SourceFile ;
  InputTemplate.OutLibrary = SQL_Library ;
  InputTemplate.OutMember = SQL_Member ;

  if ((DDS_Type = 'INDEX') or (DDS_Type = 'VIEW')) ;
    InputTemplate.InFileType = DDS_Type ;
  endif ;

  CreateSqlSource(InputTemplate : %len(InputTemplate) :
                  'SQLR0100' : ErrorDs) ;

  *inlr = *on ;
Go to the all free version here.
You can learn more about the QSQGNDDL API on the IBM website here.
This article was written for IBM i 7.1, and should work for release V5R1 and greater too.
Simon, such tools are very helpful, but if they churn out DDL that is not indented, nor easy to read, they can generate a real mess. If a tool can generate pretty, indented DDL, all the better. And once you have automatically generated DDL, you should go back in and manually add constraints, or perhaps the tool can do that to some degree: PRIMARY KEY, FOREIGN KEY, CHECK, REFERENCE, etc. I cannot say how robust IBM's tool is, however. Additionally, if the tool creates DDL with field names that are 10 or fewer bytes long, that would be a shame. It would be worth it to manually go in and add long names.
To my mind, such tools should not be expected to do more than just create a starting DDL. Usually that DDL is going to need some manual tweaking, and more than one might expect.
Nice topic!
I can see two good reasons to use this tool:
1. Use this tool to learn DDL; I can then move on and learn the elements you mentioned after understanding the basics.
2. If I am presented with a very large number of programs, the new DDL tables and indexes will work without having to change the programs. Yes, it would be a long term goal to use longer column names, etc. But for the moment I would be using tables in place of files.
Be very careful with the tools. DDS logicals and SQL views are very different animals. I strongly suggest avoiding VIEWS in a DB2 database. They can produce unexpected results and introduce potential performance penalties if not carefully designed. Additionally, you really don't need them and should be able to work directly with your tables. I would let the utilities generate the CREATE TABLE DDL, but question why you need to do this at all?
All physical files on an IBM i system are database tables! You can query them and you can interrogate the QSYS tables to get their definition. You can even run an ALTER TABLE command against them to change the definition of the tables.
What are you gaining by running the APIs?
@Bob, what is gained is DDL.
Logicals convert to indexes, and I agree not all logicals should be converted. Most should just be views. Too many indexes make for slow writes. Too few make for slow performance. There's a happy medium.
Dan, not all logicals are indexed. Keep in mind that a DDS logical is very different from a View or an Index. I personally would create the Indexes I knew I required and run my queries using Visual Explain (available with the tools for IBM i) and let it help me build the correct set of indexes for the database. The tools have been able to generate DDL from SQL or Physical files for years. You do not have to convert DDS to DDL... You can extract the DDL from any file or table.
What most people never knew is that all PHYSICAL FILES are DB2 tables.
@Bob, I know. A logical that is not converted to an index smells and acts like an index, but agreed is not an index. A view is just a stored query; it contains no rows, no data. It materializes only when it is used.
Physical files are not really DB2 tables. Sure they act like them, smell like them, you can use them as tables, but they are not true DB2 tables. One gets better performance from the SQE (not SQL) engine if a PF is converted to a table. Same for LF's to indexes, or stored views.
I would not advise anyone to use those Navigator tools you just mentioned to create tables and indexes on the fly, because although they do create the tables, they do not save the DDL source in a source member. Same for indexes. Sure, the source is found in the catalog, and it can be extracted, but that wastes time.
Sure, you can use ALTER TABLE, but I prefer to have the table DDL source in a source member.
I prefer using the tools that are described by Simon in this article. They create the DDL as a starting point. One can always go back and delete the DDL for indexes that should be views. This also lets the developer tweak the DDL, adding constraints, etc.
Dan,
Why do you need DDL stored in a source member? The reality is the database defines itself to itself, and you can query its definition in the tables in QSYS. Here is a link that explains how to access the definitions and query them: http://preview.tinyurl.com/qy7trnt
DDL is not meant to be saved like a source file. There is absolutely no use for CREATE TABLE DDL after you create the table. In fact, if you need to change the table you should use ALTER TABLE, which will handle the data that exists in the table without forcing a backup and reload.
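For example, a query along these lines lists a file's column definitions straight from the catalog (QSYS2/SYSCOLUMNS is the standard catalog view):

  SELECT COLUMN_NAME, DATA_TYPE, LENGTH, NUMERIC_SCALE
    FROM QSYS2/SYSCOLUMNS
   WHERE TABLE_SCHEMA = 'MYLIB'
     AND TABLE_NAME = 'TESTFILE' ;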
> Why do you need DDL stored in a source member?
Change management?
Propagating changes from one environment to another?
> In fact, if you need to change the table you should use ALTER TABLE, which will handle the data that exists in the table without forcing a backup and reload.
ALTER TABLE works great in a source file, and in conjunction with change management tooling it can provide a history of exactly who made a particular change to a table, and when.
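For example, a change captured as its own source member might be nothing more than this (FLD011 is an invented column used for illustration):

  -- Hypothetical change: add a new column to the test file
  ALTER TABLE MYLIB/TESTFILE
    ADD COLUMN FLD011 VARCHAR(50) NOT NULL DEFAULT '' ;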
Bob
I agree, PFs are DB2 tables with a twist: a PF has edit codes and edit words, and that is probably the reason they perform slightly slower than DDL defined DB2 tables. Besides that, many of the DDS definition commands actually run SQL under the hood; the same goes for Query.
I also agree that DDL definitions are rather useless after they have been initially used. This doesn't, however, apply to DDS, for one simple reason: the CHGPF command, which in a very simple way can convert a PF from one design to a new design by simply altering the old DDS to the new.
@Dan, I recently attended a session that contradicts your comment regarding performance of PFs vs tables. If the presenter is correct, the method you use to access the data matters more than whether you use CRTPF or CREATE TABLE. If you use CREATE TABLE but access with RPG record level I/O, you still use the original engine, but if you use CRTPF and access with embedded SQL, you use the SQE. These are recent changes in V7R1 and later, implemented incrementally over recent TRs.
Regardless, I appreciate the value of DDL vs DDS. Can anybody confirm or deny this information on SQE and DDS files?
Ed, I think Henrik is 100% on the mark with the details, but you have always been able to access PF's via SQL and there is no performance difference.
@Bob, I know that since Day One you've been able to access PFs or SQL tables using either SQL or legacy methods. That was one of many pluses back in the day. But there used to be a penalty, as SQL over DDS defined files didn't always use the SQE. SQL was just easier to program than the OPNQRYF crap, but not any faster. Using the SQE more is what's supposed to have changed over time.
ReplyDelete@Ed
The use of SQE vs CQE on a DDS PF has something to do with the QAQQINI option IGNORE_DERIVED_INDEX.
As far as I remember it had something to do with how the SQL SELECT statement was coded in SQL embedded RPG. If you used a data structure in the SQL statement it would choose CQE; if you specified all fields it would change to SQE. But it is only a bell that rings in my head, so I may be wrong.
From IBM i 7.1 Technical Overview with Technology Refresh Updates
ReplyDelete"SQE supports simple logical files in IBM i 7.1.
SQE support of simple logical files has the following restrictions:
* SQE support of OmniFind using logical files.
* SQE support of multi-data space logical files.
* SQE support of logical files over a partition table.
* supports only read-only queries. There is no SQE support of insert, update, or delete
using logical files.
Over time, it appears that IBM is using SQE more over the CQE when using SQL over legacy files.
@Ed has corrected me, and he is right. There was a TR for V7R1 that removed the performance hit when using SQL over PFs vs SQL tables. Before this TR, PFs provided smaller cache memory than DDL tables, so the penalty was in fact there.
@Henrik, you are so far out of touch regarding DDS vs DDL. As usual, you talk about DB2 from a very uninformed position. Your comments about CHGPF and the "failings" of DDL vs DDS are laughable, and again exceedingly misinformed. And regarding SELECT INTO a data structure, your information is either wrong or was never true: SELECT INTO a data structure in fact uses the newer SQE engine.
DDL provides a MASSIVE WEALTH of features that DDS cannot even begin to provide. DDL can give you these that DDS cannot:
(1) CHECK constraint where you can specify elaborate clauses for each field. The check with DDS is a joke.
(2) PRIMARY KEY constraint
(3) FOREIGN KEY constraint
(4) REFERENCE constraint
(5) DDL allows you to specify Temporal table attributes, that allow an app to do "time travel" when accessing that table.
(6) DDL allows you to specify RCAC (Row & Column Access Control) in such a way that the calling programs that use that table do not even know about these constraints.
(7) You specify column masks via a CASE clause so that you can format numeric data.
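To illustrate the first few of these, here is a sketch (every object and constraint name is invented, and CUSTMAST is assumed to have a primary or unique key on CUST_ID):

  -- Invented example: a primary key, a check, and a foreign key constraint
  CREATE TABLE MYLIB/ORDERHDR (
    ORDER_ID INTEGER NOT NULL ,
    CUST_ID  INTEGER NOT NULL ,
    STATUS   CHAR(1) NOT NULL DEFAULT 'N' ,
    CONSTRAINT ORDERHDR_PK PRIMARY KEY (ORDER_ID) ,
    CONSTRAINT ORDERHDR_STATUS CHECK (STATUS IN ('N', 'S', 'C')) ,
    CONSTRAINT ORDERHDR_CUST FOREIGN KEY (CUST_ID)
      REFERENCES MYLIB/CUSTMAST (CUST_ID) ) ;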
Additionally Henrik, you make way too much of CHGPF. SQL provides the ALTER TABLE statement, and it allows you to do a LOT MORE than CHGPF.
Henrik, like DDS PFs, SQL tables also have source DDL, and the ALTER TABLE statement, so your argument again is uninformed.
Ed, the only performance advantage that I know of was using the RPG Cycle which used some very low level machine code to process files sequentially with incredible performance. Well crafted SQL should not see any performance difference over PF's... The phrase "well crafted" is a critical factor. Many folks wrote some SQL that would retrieve data but had terrible performance. I strongly recommend running your SQL under Visual Explain to validate performance.
ReplyDelete@Dan
I will not go into a deep discussion about that, since I'm fully aware of ALTER and you have no experience in large scale change management.
But if you have a structure of DDS and a field reference file, you are able to make changes to either of them and simply run a CHGPF command on the PF, without any programming.
In large scale use you have a CL program that runs a CHGPF on all PFs. Files that have not changed the CHGPF will automatically skip; files that have changed it will upgrade.
So the DDS actually just holds a new image of each PF, and whatever back level the PF had before, it will end up with the new format.
Let me give you an example:
PF A may be at format 1, 2, 3 or 4, and now you want to upgrade PF A to format 5. Between formats 1, 2, 3 and 4, five columns have changed size, two columns have been dropped, and three new columns have been added. Format 5 introduces a new column to format 4 and drops one.
Tell me how you can do that in one SQL statement, upgrading PF A from any of formats 1, 2, 3 or 4 to format 5.
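With DDS that upgrade is one command per file, something like this (library, file, and member names are examples only):

  CHGPF FILE(MYLIB/PFA) SRCFILE(MYLIB/DEVSRC) SRCMBR(PFA)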
@Henrik, I have worked in very large scale change management for most of my 36 year career.
You are describing a straw man scenario, which simply doesn't exist, to support your thoughts on CHGPF.
There are MANY IBM i shops that use change management packages which handle DDL tables just fine.
Henrik, you are grasping at and holding on tight to the past: DDS. The fact that in 2015 you are still singing the praises of DDS is very telling. ;-)
The recently announced 7.1 TR10 and 7.2 TR2 have added the OR REPLACE clause to the CREATE TABLE SQL statement (CREATE OR REPLACE TABLE). According to the documentation, when a table is replaced it in essence acts similarly to the CHGPF command, so the example Henrik gave of a table adding and dropping fields can now be done in one SQL statement.
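For example, a sketch only (FLD011 stands in for an invented new column; the ON REPLACE PRESERVE ROWS clause keeps the existing data):

  CREATE OR REPLACE TABLE MYLIB/TESTFILE (
    FLD001 CHAR(1) NOT NULL DEFAULT '' ,
    FLD002 DECIMAL(3, 0) NOT NULL DEFAULT 0 ,
    FLD011 VARCHAR(50) NOT NULL DEFAULT '' )
    ON REPLACE PRESERVE ROWS ;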
The other advantage of SQL tables over physical files is that for tables the data is validated when inserting or updating records, whereas files have the data validated at read time. You can never get a decimal data error on an SQL table like you could with a physical file.
Dan, you bring an interesting aspect to this discussion, namely who owns changes to the database: the DBA or the programmer... In my view data is a corporate resource and is managed by the DBA or DBA team. You need change documentation and authorization to make changes to the database, but I do not feel you need to automate it in the same fashion that you would program changes.
On the other hand, your argument becomes more meaningful if the organization allows programmers to make changes to the database. In that case you really don't have a database, you have files, even if they happen to be DB2 tables...
@Dan and Bob
You simply need to understand the 'P10' IBM i business.
Many of these customers may have one or two programmers (if any) who develop and maintain their legacy system. But they also rely on different ISVs' products, like my financial system, my web framework, others' archive systems, etc.
They don't have large change management systems, because their legacy setup most of the time has two partitions, a test and a production system, and they can manage change management manually quite easily.
These customers move forward at different speeds. Their main focus is not running their systems on the latest OS version but bringing business functionality to their systems. Most know that current version minus two is an acceptable level, and without gaining anything substantial they will most likely only upgrade their systems when they buy more power, and thereby new hardware.
If you try to sell them a move from DDS to DDL they will simply ask you to justify the cost/benefit in a short term business plan. If they gain nothing, they won't make the move.
As an ISV you can't afford to lose these customers. They pay SWMA and support willingly, and they pay it because we as ISVs are able to deliver "the product" value that suits their chosen upgrade pace. Remember, "the customer is always right"!
Henrik,
I totally agree with you. The number of shops with a significant number of developers and actual in-house development is very small. You have hit the nail on the head: the vast majority of IBM i shops use third party package software and have very few people to maintain the systems...
1) The API can generate DDL for other SQL objects too, such as SQL functions, SQL procedures, triggers, aliases, etc.
2) It is fairly simple to create a CL command interface to the QSQGNDDL API (a "wrapper"). This article shows an example partially written in the C language, but a few years ago I wrote one in CL and RPG: http://www.ibmsystemsmag.com/ibmi/developer/general/Generating-DDL-Source-Using-a-CL-Command/
I came upon this blog in an effort to understand the difference between how to apply changes to DDL versus DDS defined files, and was horrified by the disrespectful and condescending treatment of Henrik by Dan. So sorry, Henrik, that you had to endure that; I was impressed to see you did not respond with the same attitude but instead justified your position. Bravo Henrik!