We are developing a large system with a SQL Server database, ASP.NET Web API 2.2 services, and other external services.

We need to load more data into a table while we process the current data in it. To load data in the background, we call our ASP.NET Web API from a SQL CLR stored procedure.

Then our ASP.NET Web API retrieves more data and inserts that data into SQL Server itself.

In more detail, the process is the following:

  1. A SQL Server stored procedure detects that we need more data, so it calls a SQL CLR stored procedure.
  2. The SQL CLR stored procedure calls the ASP.NET Web API (a sketch of this call follows the list).
  3. The ASP.NET Web API retrieves more data and inserts it into SQL Server.
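
For illustration, here is a minimal sketch of step 2: a SQL CLR stored procedure making an outbound HTTP call. The URL and names here are hypothetical, and the assembly would have to be deployed with PERMISSION_SET = EXTERNAL_ACCESS for SQL Server to allow network access:

    using System.IO;
    using System.Net;
    using Microsoft.SqlServer.Server;

    public partial class StoredProcedures
    {
        // Sketch: a SQL CLR stored procedure that calls out to a Web API.
        [SqlProcedure]
        public static void RequestMoreData()
        {
            // Hypothetical endpoint; replace with your Web API route.
            HttpWebRequest request =
                (HttpWebRequest)WebRequest.Create("http://myserver/api/data/load");
            request.Method = "POST";
            request.ContentLength = 0;

            using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
            using (StreamReader reader = new StreamReader(response.GetResponseStream()))
            {
                // Surface the API's reply on SQL Server's messages channel.
                SqlContext.Pipe.Send(reader.ReadToEnd());
            }
        }
    }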

What do you think about this 'architecture'? Do you know a better approach?

My doubt here is the second step: a SQL Server database calling an ASP.NET Web API. But we need that work to happen in the background.

Also, the new data that we need to insert into SQL Server could come from a WCF SOAP service.


Solution

I've worked in places where CLR assemblies are used in this fashion. In my opinion it is a bad idea.

Here's my reasoning.

1: Embedding logic in the database is always bad. This is a 'programming philosophy' position, but I feel it is justified as a general rule.

2: The CLR will be limited to .NET 2.0 on MS SQL Server.

3: The CLR may time out or error due to its dependency on the external resource. This will produce different result sets from the sproc.

4: The DB selecting, then calling the API, which then inserts, is going to give you locking and transaction problems with no easy solution.

5: You already have a .NET layer in your solution (the Web API), so you have the infrastructure to support a Windows service or similar application which can fulfil the business role.

I guess the thing missing from your question which would enable me to suggest a better pattern is: what is calling the sproc in the first place?

Just a quick expansion on point 1, as I know it can be contentious. I've worked at a lot of companies, and I tend to see two types of system.

Systems initially created by DBAs: these tend to have lots of SQL Agent jobs, big sprocs with case statements, loops and business logic, SSIS packages, etc.

Systems initially created by programmers: these tend to have no indexes or keys, XML columns, etc., with the DB just used for persistence of objects.

The DBA-created systems fall over because they don't scale.

The programmer-created systems get corrupt data but struggle on, because you can always add another web box.

Other tips

Based on Ewan's response, I can propose a way to do your work without using the CLR (a sketch of the job follows these steps):

1) You create a job (a Windows service, an application run by a scheduled task, etc.) that checks whether extra data is required.

2) Fetch the data from the job by calling the Web API.

3) Do your processing in the job. If the logic is convoluted, C# is way better than SQL.

4) Persist the data from the job. Persistence can be done using bulk insert (e.g. SqlBulkCopy) for better performance.
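
A minimal sketch of such a job as a console application run by a scheduled task; the endpoint, connection string, and table name are placeholders, and the Transform step is where your C# processing would live:

    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.Net.Http;
    using System.Threading;

    class DataLoaderJob
    {
        // Placeholders: point these at your own database and Web API.
        const string ConnectionString = "Server=.;Database=MyDb;Integrated Security=true";
        const string ApiUrl = "http://myserver/api/data/pending";

        static void Main()
        {
            using (var http = new HttpClient())
            {
                while (true)
                {
                    if (ExtraDataRequired())                                 // step 1
                    {
                        string payload = http.GetStringAsync(ApiUrl).Result; // step 2
                        DataTable rows = Transform(payload);                 // step 3
                        BulkInsert(rows);                                    // step 4
                    }
                    Thread.Sleep(TimeSpan.FromMinutes(1));
                }
            }
        }

        // Step 1: e.g. query a control table or check row counts.
        static bool ExtraDataRequired() { return false; }

        // Step 3: convoluted logic is much easier here than in T-SQL.
        static DataTable Transform(string payload) { return new DataTable(); }

        // Step 4: SqlBulkCopy is far faster than row-by-row inserts.
        static void BulkInsert(DataTable rows)
        {
            using (var bulk = new SqlBulkCopy(ConnectionString))
            {
                bulk.DestinationTableName = "dbo.TargetTable"; // assumed name
                bulk.WriteToServer(rows);
            }
        }
    }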

High-level languages like C++, C#, and Java are almost always a better fit for defining business logic. They perform much better in iterative cases, and they allow OOP, advanced logging, exception management, debugging, etc.

If migrating the stored procedure is not an option, you can still do your job (a sketch follows these steps):

3) Persist the fetched data into buffer tables which carry a processing-session identifier. If the fetched data volume is high, bulk insert is your friend.

4) Call the procedure with the processing-session identifier. The procedure will get its data from your buffer tables.

5) If the data volume is very high, you can empty (truncate) the buffer tables during a low-usage period, as deletes have a great impact on performance.
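
A minimal sketch of steps 3 and 4; the buffer table, column, and procedure names are assumptions for illustration:

    using System;
    using System.Data;
    using System.Data.SqlClient;

    class BufferTableLoader
    {
        public static void LoadAndProcess(string connectionString, DataTable fetched)
        {
            Guid sessionId = Guid.NewGuid(); // processing-session identifier

            // Tag every fetched row with the session id before persisting.
            fetched.Columns.Add("SessionId", typeof(Guid));
            foreach (DataRow row in fetched.Rows)
                row["SessionId"] = sessionId;

            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();

                // Step 3: bulk insert into the buffer table (assumed name;
                // column order must match the table definition).
                using (var bulk = new SqlBulkCopy(connection))
                {
                    bulk.DestinationTableName = "dbo.BufferTable";
                    bulk.WriteToServer(fetched);
                }

                // Step 4: the procedure reads its rows by session id.
                using (var command = new SqlCommand("dbo.ProcessBufferedData", connection))
                {
                    command.CommandType = CommandType.StoredProcedure;
                    command.Parameters.AddWithValue("@SessionId", sessionId);
                    command.ExecuteNonQuery();
                }
            }
        }
    }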
