Question

My application reads data from different sections of a large file and updates database tables. Each section of the data is associated with a separate table, and the update process for a section can contain multiple insert, update, and delete statements.

I am trying to split the file into multiple files and run 3, 4, or n instances of the application to achieve parallel execution, hoping this will improve performance.

Question:

  1. What is the maximum (optimal) number of processes that unixODBC can handle in parallel?

Environment: Red hat Linux 64 bit, C++, unixODBC (32 bit), OTL, Oracle 10,11


Solution

unixODBC is just a library that provides an ODBC-compliant API; it is not a separate process. To use it, you link unixODBC into your application as a static or dynamic library.

Many factors can affect performance: hardware (CPU, memory), the database design, and so on. This issue is not related to unixODBC itself. You can run as many processes that use unixODBC as you like; each process loads the library and manages its own connections independently.
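A minimal sketch of the approach described above, assuming a POSIX fork-per-section model: `process_section` and `run_workers` are hypothetical names standing in for the real per-section work, and in the actual application each child would open its own ODBC connection (via unixODBC/OTL) before applying its inserts, updates, and deletes.

```cpp
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>

// Hypothetical worker: in the real application this would open an
// independent ODBC connection in the child process and run the
// insert/update/delete statements for one section of the file.
static int process_section(int section_id) {
    std::printf("worker %d: processing section %d\n",
                static_cast<int>(getpid()), section_id);
    return 0;  // 0 = success
}

// Fork one child per file section and wait for all of them.
// Returns the number of children that exited successfully.
int run_workers(int n_sections) {
    for (int i = 0; i < n_sections; ++i) {
        pid_t pid = fork();
        if (pid == 0) {
            // Child: do the section work, report result via exit status.
            _exit(process_section(i));
        }
        // Parent continues the loop; fork failure (pid < 0) is skipped here.
    }
    int ok = 0;
    int status = 0;
    while (wait(&status) > 0) {
        if (WIFEXITED(status) && WEXITSTATUS(status) == 0) ++ok;
    }
    return ok;
}
```

Because each child is a separate process with its own copy of the unixODBC library state, the number of workers is bounded by the database server's connection limits and the host's CPU/memory, not by unixODBC.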

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow