I have a data flow which results in the addition of a record to a table. This table has an identity key.
I now wish to use this identity key to create records in other tables. So I am doing this in another data flow which follows the previous data flow in the control flow.
So ... how do I access the new identity key and have that assigned to a column?
I can use an OLE DB Source to SELECT IDENT_CURRENT('table_name') AS TableName, for instance. But then how do I assign that to a variable, or add that value to a column in each row of the table I want to save to?
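For reference, the sort of query described above (the table name `dbo.Orders` is just a placeholder). Note that IDENT_CURRENT is not scoped to your session, so a concurrent insert by anyone else can change the value it returns:

```sql
-- Hypothetical table name; replace with your own.
-- IDENT_CURRENT returns the last identity value generated for the table
-- by ANY session, so it is only safe when no concurrent inserts occur.
SELECT IDENT_CURRENT('dbo.Orders') AS LastOrderId;
```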
I have tried creating a column with the value 1 in the two data sources, then using a Sort on them both and doing a Merge Join to get access to the identity key, but that is a bit of a mess really ... very awkward.
There must be a far better way ...
Your help is very much appreciated.
Mark
|||
This might not be the best solution, but this is what I am doing:
In your first data flow, don't use Ole DB Destination for inserting rows.
Instead, use an OLE DB Command transformation.
Write a stored procedure with parameters that will do these steps:
Insert into table 1 values (<values from parameter_list>)
-- table 1 has the identity column
Insert into table 2 (col1, col2, identity_col)
values ('foo', 'bar', @@IDENTITY)
Call this stored procedure from the OLE DB Command transformation with the appropriate parameters.
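A minimal sketch of such a procedure, assuming hypothetical tables `Table1(Name)` (which owns the identity column) and `Table2(Col1, Col2, Table1Id)`. SCOPE_IDENTITY() is generally safer here than @@IDENTITY, because @@IDENTITY also picks up identity values generated by triggers:

```sql
CREATE PROCEDURE dbo.InsertParentAndChild
    @Name NVARCHAR(100),
    @Col1 NVARCHAR(100),
    @Col2 NVARCHAR(100)
AS
BEGIN
    SET NOCOUNT ON;

    -- Insert into the table that owns the identity column.
    INSERT INTO dbo.Table1 (Name) VALUES (@Name);

    -- SCOPE_IDENTITY() returns the identity generated in this scope,
    -- unaffected by triggers (unlike @@IDENTITY).
    INSERT INTO dbo.Table2 (Col1, Col2, Table1Id)
    VALUES (@Col1, @Col2, SCOPE_IDENTITY());
END;
```

The OLE DB Command transformation would then execute `EXEC dbo.InsertParentAndChild ?, ?, ?` once per row, mapping pipeline columns to the parameters.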
HTH
thanks,
Nitesh
|||
A few options come to mind:
1. Call a stored procedure that inserts the value and returns the identity key back to you. This would be done through an OLE DB Command transformation, but as a result it will be slow, since it is called once for each row.
2. Generate the keys in the pipeline itself and write them to the identity column. You can initialize the seed in a variable before the data flow starts and get the increment into another variable.
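One way to initialize those variables is an Execute SQL Task run before the data flow, along these lines (the table name is a placeholder, and this only works if nothing else inserts into the table while the package runs):

```sql
-- Hypothetical table name. Map the single-row result to two SSIS
-- variables; the pipeline can then compute Seed + RowNumber * Increment
-- for each row it generates.
SELECT IDENT_CURRENT('dbo.Table1') AS CurrentSeed,
       IDENT_INCR('dbo.Table1')    AS Increment;
```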
3. Nitesh provided another option: have a stored procedure insert into both tables. This will be slow as well, for the same per-row reason.
regards,
ash
|||
Thank you very much, Nitesh and Ash - that has given me some options. :)
|||
The other option is to insert all the data into a temp table, and then in another task have an OLE DB source that performs the insert into the proper table, using the new SELECT syntax to return the rows with the identities that were generated. This does mean splitting your flow into multiple data flow tasks, but it should work. Even with the overhead of inserting the data twice, it is likely to be more performant than individual SQL calls.
You can take this to another level and use events to have the processing of the data from the temporary table into the final table handled on a buffer-by-buffer basis.
I am working on something that would wrap this up in a single component.
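The "new SELECT syntax" mentioned above is presumably the OUTPUT clause introduced in SQL Server 2005. A sketch with placeholder names (`#Staging` is the temp table loaded by the first data flow, `dbo.Target` owns the identity column `TargetId`):

```sql
-- Insert from the staging table into the real table, and return the
-- generated identity values (plus the natural columns needed to match
-- them back to the source rows) as a result set.
INSERT INTO dbo.Target (Col1, Col2)
OUTPUT inserted.TargetId, inserted.Col1, inserted.Col2
SELECT Col1, Col2
FROM #Staging;
```

The OLE DB source in the second data flow reads this OUTPUT result set, so each row arrives in the pipeline already carrying its new identity. Note that OUTPUT cannot return results directly to the client if the target table has enabled triggers.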