Question

UPDATE: It turns out it was just LabView messing up. Even ints weren't coming through properly. Deleting and recreating some of the nodes solved the problem.

I wrote a .NET 3.5 assembly which is being consumed by a LabView engineer. It's at least LabView 7, but I think higher. A method in my assembly returns an array of objects where each instance has a property of type decimal (among other things). The LabView engineer is doing nothing particularly fancy, just dumping the sequence to the front panel of the VI, and each of these decimal properties shows up as a very tiny floating-point number. The actual decimal might be 740.0, but LabView sees it as a double with a value like 8.12345E-315. That's off by quite a few orders of magnitude!

The string and boolean properties are coming through just fine.

Any idea why this is happening?

EDIT: We tested this using a very simple class with some decimal fields and properties, and it worked perfectly fine in LabView. There's something fishy going on with this one DLL, so we're trying some other tests to see if we can replicate the issue using a different DLL.

Here is a screenshot of some endianness-swapping tests. Swapping the endianness of the properties of our simple test class produced the same values. Swapping the endianness of the decimals from the real class library just produced different tiny floats.

http://i.imgur.com/WpZ8bYX.jpg


Solution 4

Try deleting and recreating the property access nodes. Sometimes LabView gets confused and messes up the data.

OTHER TIPS

A LabVIEW double is a 64-bit float stored big-endian (because of LabVIEW's Mac heritage). A .NET decimal is something different entirely: a 128-bit structure, not an IEEE float. If you want to fix it on the LabVIEW side, you can use the following code: [image: LabVIEW code snippet for endianness swap]
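The image above is a LabVIEW block-diagram snippet, so it doesn't reproduce well in text. Purely as a reference for what the swap does, here is the same byte-reversal sketched in C# (the helper name SwapEndianness is made up for illustration):

using System;

static class EndianDemo
{
    // Reverse the byte order of a 64-bit float.
    static double SwapEndianness(double value)
    {
        byte[] bytes = BitConverter.GetBytes(value);
        Array.Reverse(bytes);
        return BitConverter.ToDouble(bytes, 0);
    }

    static void Main()
    {
        // 740.0 read back with its bytes reversed.
        Console.WriteLine(SwapEndianness(740.0));
    }
}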

Perhaps he needs to swap the conversion constants around.

It smells like a bad cast: LabVIEW believes the decimal is a double- or single-precision floating-point number. You should explicitly convert the decimal to a standard floating-point type before passing it to LabVIEW. Note that you'll lose digits of precision. Alternatively, find a numeric type in LabVIEW that matches the precision of decimal and make the proper conversion.
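To make the bad-cast theory concrete, here's a guess at the failure mode (an assumption, not a confirmed diagnosis): a .NET decimal stores a 96-bit integer plus sign/scale flags, so if a consumer reinterprets its leading bytes as an IEEE 754 double, it lands deep in the denormal range and produces exactly this kind of tiny value:

using System;

class BadCastDemo
{
    static void Main()
    {
        decimal d = 740.0m;

        // decimal.GetBits returns [lo, mid, hi, flags]:
        // here a 96-bit integer of 7400 with a scale of 1.
        int[] bits = decimal.GetBits(d);

        // Reinterpret the first 8 bytes as an IEEE 754 double,
        // the way a confused consumer might.
        long raw = ((long)(uint)bits[1] << 32) | (uint)bits[0];
        double misread = BitConverter.Int64BitsToDouble(raw);

        // Prints roughly 3.7E-320: a tiny denormal, the same
        // flavor of garbage as the 8.12345E-315 in the question.
        Console.WriteLine(misread);
    }
}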

Here's a work-around I could do:

Make an adapter class that changes the interface to use a well-supported data type like double, and then use this adapter class in LabView instead of the original class.
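A minimal sketch of such an adapter, assuming a hypothetical original class Measurement with a decimal Value property (both names invented for illustration):

// Hypothetical original class exposed by the assembly.
public class Measurement
{
    public decimal Value { get; set; }
    public string Name { get; set; }
}

// Adapter that re-exposes the decimal as a double, which
// LabView handles natively. Anything beyond a double's
// ~15-16 significant digits is lost in the conversion.
public class MeasurementAdapter
{
    private readonly Measurement _inner;

    public MeasurementAdapter(Measurement inner)
    {
        _inner = inner;
    }

    public double Value
    {
        get { return (double)_inner.Value; }
    }

    public string Name
    {
        get { return _inner.Name; }
    }
}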
