Fast, efficient method of assigning large array of data to array of clusters?

StackOverflow https://stackoverflow.com/questions/23663209

  •  22-07-2023

Question

I'm looking for a faster, more efficient method of assigning data gathered from a DAQ to its proper location in a large cluster containing arrays of subclusters.

My current method relies heavily on the OpenG cluster manipulation tools, but with a large data set the performance is far too slow.

The array and cluster location of each element of data from the DAQ is determined during an initialization phase and doesn't change during acquisition.

Because the data element origin and end points are the same throughout acquisition, I would think an array of memory locations could be created and the data directly assigned to its proper place. I'm just not sure how to implement such a thing.
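The idea described above — compute each element's destination once during initialization, then write data straight to it on every acquisition cycle — can be sketched in a textual language. The following is a hypothetical Python analogue, not LabVIEW; the cluster names (AMC, ANLG_PM, PA) come from the question, but the shapes and the routing table are made-up placeholders:

```python
# Sketch: precompute each DAQ channel's destination once, then assign
# samples directly on every acquisition cycle. Shapes and routing are
# illustrative assumptions, not the asker's actual configuration.

import numpy as np

# Destination "clusters", modeled here as named arrays.
clusters = {
    "AMC": np.zeros((4, 8)),   # assumed: 4 subclusters x 8 elements
    "ANLG_PM": np.zeros(16),   # assumed: flat array of 16 elements
    "PA": np.zeros((2, 8)),    # assumed: 2 subclusters x 8 elements
}

# Initialization phase: map each flat DAQ index to (array name, index).
# In practice this table would be built from the DAQ configuration.
routing = [
    ("AMC", (0, 3)),
    ("ANLG_PM", (5,)),
    ("PA", (1, 2)),
]

def distribute(samples, routing, clusters):
    """Write each sample straight to its precomputed location."""
    for sample, (name, idx) in zip(samples, routing):
        clusters[name][idx] = sample

# Acquisition loop body: no searching, just direct assignment.
distribute([1.0, 2.0, 3.0], routing, clusters)
```

The key point, mirroring the question, is that the (potentially expensive) lookup happens only once; the per-cycle work is a plain indexed write.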



Solution

The following code does what you want:
For each of your cluster elements (AMC, ANLG_PM, and PA) you should add a case in the string case structure; for the elements AMC and PA you will also need to place a second, nested case structure.

OTHER TIPS

This is really more of a comment, but I do not have the reputation to leave those yet, so here it is:

Regarding adding cases for every possible value of Array name, is there any reason why you cannot use an enum here? Since you are placing it into a cluster anyway, I would suggest making a type-defined enum of your possible array names. That way, when you want to add or remove one, you only have to do it in one place.

You will still need to right-click on your case structures that use this enum and select Add item for every value if you are adding a value, or manually delete the obsolete value if you are removing one. I suppose some maintenance is required either way...
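The single-point-of-definition benefit of a type-defined enum has a direct analogue in textual languages. A hypothetical Python sketch, using the array names from the question (the dispatch bodies are placeholders):

```python
from enum import Enum

# Analogue of a LabVIEW type-defined enum: the valid array names are
# defined in exactly one place and reused wherever a case is selected.
class ArrayName(Enum):
    AMC = "AMC"
    ANLG_PM = "ANLG_PM"
    PA = "PA"

def handle(name: ArrayName) -> str:
    """Equivalent of a case structure keyed on the enum."""
    if name is ArrayName.AMC:
        return "route to AMC subcluster"
    if name is ArrayName.ANLG_PM:
        return "route to ANLG_PM array"
    if name is ArrayName.PA:
        return "route to PA subcluster"
    # An unhandled value fails loudly instead of silently falling through,
    # much like a case structure with no default case.
    raise ValueError(f"unhandled case: {name}")
```

Adding or removing an array name then means editing the `ArrayName` definition and fixing the resulting unhandled cases, rather than hunting for every string literal.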

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow