Question

So, I have been reading through some code and saw an interesting pattern for parsing data from a data reader. It looks like:

public static long ParseLong(IDataReader reader, string field, long defaultValue)
{
    long result;
    var value = GetValue(reader, field);
    if (value == DBNull.Value) return defaultValue;

    // is this next line bad?
    if (Int64.TryParse(value.ToString(), out result)) return result;
    return defaultValue;
}

This pattern doesn't sit right with me because it hides type casting exceptions, and it seems like converting to a string and then parsing it again would be slower than just calling "Convert.ToInt64(value)".

Am I off base in this? Is there a better pattern for parsing data from a data reader?

Note - In my case, the type of the column is "NUMBER(20)". Even if this wasn't the case though, it would be better to throw the exception so the developer will fix the issue before going to production, right?


Solution

Yes, it's bad.

If the column is already one of the SQL number types, this is not only the slow way to do it, but it can also introduce bugs if the machine's number format uses unusual separator characters. If you know the column is already a number, you should just cast it to long.

On the other hand, there's no context available to this method. From the method's perspective, that column could be anything. In fact, it's fairly common to find databases designed by people who want to use string types (nvarchar, etc.) for everything, however misguided that may be. This code does have a small advantage in that it still works if the column uses a string type to hold numeric data, and it would be fair to allow a method that wanted to be truly generic to use this technique (though I personally would try casting first and fall back to the string technique only if the cast fails).
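A cast-first version with a string fallback might look like the sketch below. `ParseLongFlexible` is a hypothetical name, and it reads the value directly via `GetOrdinal`/`GetValue` rather than through the question's `GetValue` helper:

```csharp
using System;
using System.Data;
using System.Globalization;

static class FlexibleParse
{
    // Sketch: prefer the direct cast, and fall back to string parsing
    // only when the provider hands back something other than a long.
    public static long ParseLongFlexible(IDataRecord record, string field, long defaultValue)
    {
        object value = record.GetValue(record.GetOrdinal(field));
        if (value == DBNull.Value) return defaultValue;

        // Fast path: the column really is a 64-bit integer.
        if (value is long l) return l;

        // Fallback: a string-typed (or otherwise odd) column holding numeric data.
        return long.TryParse(Convert.ToString(value, CultureInfo.InvariantCulture),
                             out long parsed) ? parsed : defaultValue;
    }
}
```

Using the invariant culture in the fallback also sidesteps the separator-character problem mentioned above.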

However, there is another issue in this code: the DBNull check. If you're going to convert to a string and parse it, the null check is redundant. DBNull.Value.ToString() returns an empty string, which will fail the TryParse() call and result in returning the default long, exactly as the check would. The check is simply not needed.
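This is easy to confirm in isolation:

```csharp
using System;

class DbNullDemo
{
    static void Main()
    {
        // DBNull.Value.ToString() is documented to return the empty string.
        Console.WriteLine(DBNull.Value.ToString() == string.Empty); // True

        // TryParse on an empty string fails, so the caller gets the default.
        bool ok = long.TryParse(DBNull.Value.ToString(), out long result);
        Console.WriteLine(ok);     // False
        Console.WriteLine(result); // 0
    }
}
```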

Therefore, we're now in a situation where it's either bad because of the string conversion, or if you allow the string conversion it's bad because of the null check. It fails either way. The only reason I would accept this code is if you can show that you do need the string conversion, and after profiling you find that the program is spending a lot of time needlessly converting DBNull to a string. Then adding the faster null check would be okay.

I do have one point in defense of this code, based on your edit. A quick check of the docs for the long type shows that the max value for a long tops out at 19 digits. You're storing up to 20. So this code could be there in an attempt to protect against very large values between 9,223,372,036,854,775,807 and 99,999,999,999,999,999,999 that fit in the column but don't fit within the long type... though even in this case, my preference would be to throw an exception for those values, rather than just returning the default.
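A sketch of that preference, assuming the provider surfaces NUMBER(20) as decimal: Convert.ToInt64 throws an OverflowException for values outside the long range instead of silently handing back a default. `GetLongOrThrow` is a hypothetical name:

```csharp
using System;
using System.Data;

static class StrictRead
{
    public static long GetLongOrThrow(IDataRecord record, string field)
    {
        object value = record.GetValue(record.GetOrdinal(field));
        if (value == DBNull.Value)
            throw new InvalidOperationException($"Column '{field}' was NULL.");

        // Throws OverflowException for 20-digit values above long.MaxValue,
        // so the problem surfaces during development instead of becoming a silent 0.
        return Convert.ToInt64(value);
    }
}
```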

One last thought. This code works because IDataReader implements the IDataRecord interface. The method should just ask for an IDataRecord, to make it more flexible.

OTHER TIPS

There is no reason to parse the value through a string, because it can hide data errors. You can use a generic variant of the method:

public static T Parse<T>(IDataReader reader, string field, T defaultValue)
{
    int columnIndex = reader.GetOrdinal(field);
    if (reader.IsDBNull(columnIndex)) return defaultValue;
    return (T)reader.GetValue(columnIndex);
}

The IDataRecord interface provides type-specific readers, so you can read the column with the IDataRecord.GetInt64 method, but you still have to check for DBNull yourself.
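Combined with the IDataRecord suggestion from the solution, that pattern can be wrapped in a small extension method (a sketch; `GetInt64OrDefault` is a hypothetical name):

```csharp
using System.Data;

static class DataRecordExtensions
{
    // Type-specific read with an explicit DBNull check.
    public static long GetInt64OrDefault(this IDataRecord record, string field, long defaultValue)
    {
        int ordinal = record.GetOrdinal(field);
        return record.IsDBNull(ordinal) ? defaultValue : record.GetInt64(ordinal);
    }
}
```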

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow