Force a commit

Albertoim


Hi,

My data flow has several transformations:

1. Search for an employee: if the employee already exists, update it; otherwise insert it.

2. Once the new employee is created, I have to get its id (with another lookup transformation) to update another table with it. This id is an identity (autonumber) column, which is why I can only get it after the record has been inserted.
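
In case it helps, step 1 is roughly this kind of upsert (just a sketch: dbo.Employee, EmployeeID and EmployeeCode are made-up names, and the ?s are the OLE DB Command parameter markers):

    -- Sketch of step 1: dbo.Employee is a hypothetical table with an
    -- identity column EmployeeID and a business key EmployeeCode.
    IF EXISTS (SELECT 1 FROM dbo.Employee WHERE EmployeeCode = ?)
        UPDATE dbo.Employee
           SET Name = ?
         WHERE EmployeeCode = ?
    ELSE
        INSERT INTO dbo.Employee (EmployeeCode, Name)
        VALUES (?, ?)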

At the moment this second lookup transformation doesn't find the newly inserted employee... I suppose it's because the new data has not been committed to the database yet...

The question is: is it possible to force a commit?

Thanks!




Re: Force a commit

ggciubuc


Try using the T-SQL function IDENT_CURRENT.

"IDENT_CURRENT returns the last identity value generated for a specific table in any session and any scope." (from Books Online)







Re: Force a commit

Albertoim

Thanks for your answer, ggciubuc.

I have read the IDENT_CURRENT documentation, and I think it could work if the flow processed rows one at a time (i.e. inside a For Each loop), but in my case the output of the OLE DB Command transformation is a whole batch of rows, so I think that with this function I would only get the last id generated... am I wrong?
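
For example, with a made-up Employee table, if the batch inserts two rows:

    INSERT INTO dbo.Employee (EmployeeCode, Name) VALUES ('E1', 'Ana');
    INSERT INTO dbo.Employee (EmployeeCode, Name) VALUES ('E2', 'Luis');
    SELECT IDENT_CURRENT('dbo.Employee'); -- only the id generated for 'E2'

I would get back only the last id, not one per inserted row.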

Any other suggestions?

Thanks






Re: Force a commit

ggciubuc

After your last post I suppose there are many clients running your package, so there are many IDs.

But let's think about these IDs: at time t0 you capture into a variable a unique ID value from IDENT_CURRENT.

If someone runs the same package, the variable gets another IDENT_CURRENT value at time t1, with t0 < t1,

because I think the package runs are consecutive, one after another.

Try this idea.






Re: Force a commit

ggciubuc

Another idea is to use 2 Data Flow tasks.

In the first you update/insert the employee (source - transform - destination), so a commit is made, and in the second

you update the other tables. Link them with a precedence constraint arrow set to "Success".
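
Once the first Data Flow has committed, the second one (or even an Execute SQL Task) can join back on the business key, for example (table and column names are only an example):

    UPDATE o
       SET o.EmployeeID = e.EmployeeID
      FROM dbo.OtherTable AS o
      JOIN dbo.Employee   AS e
        ON e.EmployeeCode = o.EmployeeCode;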






Re: Force a commit

Albertoim

Thanks ggciubuc,

That's a good solution; the problem is that retrieving the source data takes too long. If I split it into two data flows, I have to fetch that data twice...