There's no way to batch SELECT statements in JDBC, but I thought I could perhaps generate one big SELECT statement that includes all the keys I needed:
```sql
SELECT 1 AS OLD_KEY, MYSEQUENCE.NEXTVAL FROM SYSIBM.SYSDUMMY1
UNION ALL
SELECT 2 AS OLD_KEY, MYSEQUENCE.NEXTVAL FROM SYSIBM.SYSDUMMY1
```
The problem with this approach is that when you reference NEXTVAL multiple times in the same SELECT statement, it returns the same value each time. So I ended up with one new key assigned to all of my old keys. No good, so I had to try something else. I decided to insert into a GLOBAL TEMPORARY TABLE:
```groovy
db.execute('''DECLARE GLOBAL TEMPORARY TABLE SESSION.GEN_IDS (
    ORIG_ID DECIMAL(12,0),
    NEW_ID DECIMAL(12,0)
) ON COMMIT PRESERVE ROWS''')

db.withBatch('INSERT INTO SESSION.GEN_IDS VALUES (?, MYSEQ.NEXTVAL)') { BatchingPreparedStatementWrapper ps ->
    oldKeys.each { ps.addBatch([it]) }   // one parameter list per old key
}

def mappings = db.rows('SELECT * FROM SESSION.GEN_IDS').collectEntries { Map data ->
    [(data.ORIG_ID): data.NEW_ID]
}

db.execute('DROP TABLE SESSION.GEN_IDS')
```
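To illustrate what the collectEntries step is doing, here is a rough plain-Java equivalent: it folds a list of row maps (the shape Groovy's rows() returns) into a single old-key-to-new-key lookup map. The row data is hypothetical, just standing in for the contents of SESSION.GEN_IDS.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class CollectEntriesDemo {
    // Java analogue of Groovy's collectEntries: fold row maps into one
    // ORIG_ID -> NEW_ID lookup map.
    static Map<Object, Object> toMappings(List<Map<String, Object>> rows) {
        return rows.stream()
            .collect(Collectors.toMap(r -> r.get("ORIG_ID"), r -> r.get("NEW_ID")));
    }

    public static void main(String[] args) {
        // Hypothetical rows, shaped like the result of
        // SELECT * FROM SESSION.GEN_IDS (column name -> value).
        List<Map<String, Object>> rows = List.of(
            Map.<String, Object>of("ORIG_ID", 1L, "NEW_ID", 101L),
            Map.<String, Object>of("ORIG_ID", 2L, "NEW_ID", 102L)
        );
        System.out.println(toMappings(rows).get(1L)); // prints 101
    }
}
```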
Problem solved. The collectEntries method builds a map from each old key to its new key. Of course, I then learned about the JDBC getGeneratedKeys method, which makes this approach much less necessary, but I'm still glad I learned how to use GLOBAL TEMPORARY TABLEs.
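For reference, the getGeneratedKeys approach looks roughly like this in plain JDBC. This is only a sketch: the connection, MYTABLE, and its NAME column are illustrative stand-ins, and it needs a live database to actually run.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class GeneratedKeysSketch {
    // Sketch only: `conn`, MYTABLE, and NAME are hypothetical names.
    static long insertAndGetKey(Connection conn, String name) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO MYTABLE (NAME) VALUES (?)",
                Statement.RETURN_GENERATED_KEYS)) {   // ask the driver to return new keys
            ps.setString(1, name);
            ps.executeUpdate();
            try (ResultSet keys = ps.getGeneratedKeys()) {
                keys.next();
                return keys.getLong(1); // the key the database generated
            }
        }
    }
}
```

With this, the database hands back each generated key as part of the insert, so there's no need to pre-allocate keys from the sequence at all.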