InterSystems Caché - Maintaining Object Code to Ensure Data Is Compliant with the Object Definition

StackOverflow https://stackoverflow.com/questions/8569520

  •  21-03-2021

Question

I am new to using InterSystems Caché and I am facing an issue where the data stored in Caché is exposed through classes that do not seem to accurately represent the data in the underlying system. The data stored in the globals is almost always longer than what the object code declares.

As a result, I get errors like the one below very frequently.

Msg 7347, Level 16, State 1, Line 2
OLE DB provider 'MSDASQL' for linked server 'cache' returned data that does not match expected data length for column '[cache]..[namespace].[tablename].columname'. The (maximum) expected data length is 5, while the returned data length is 6.

Does anyone have any experience implementing some type of quality process to ensure that the object definitions (SQL mappings) are maintained in such a way that they can accommodate the data being persisted in the globals?

Property columname As %String(MAXLEN = 5, TRUNCATE = 1) [ Required, SqlColumnNumber = 2, SqlFieldName = columname ];

In this particular example the column is defined with a MAXLEN of 5, but the data stored in the system is 6 characters long.

How can I proactively monitor and repair such situations?

/*

I did not create these object definitions in cache

*/


Solution

It's not completely clear what "monitor and repair" would mean for you, but:

How much control do you have over the database side? When converting a value from a global to its ODBC representation, Caché runs the LogicalToODBC method of the property's data-type class. If you change the property types from %String to your own class, say AppropriatelyNamedString, you can override that method to truncate automatically, if that's what you want to do. It is possible to change all the %String property types programmatically using the %Library.CompiledClass class.
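A minimal sketch of that data-type approach, under some assumptions: the class name App.Types.TruncatedString is hypothetical, and LogicalToODBC is written here as a generator method (as in the %Library data-type classes) so that each property's own MAXLEN value is available at compile time via the %parameter array:

```objectscript
Class App.Types.TruncatedString Extends %Library.String
{

/// Generator method: emits the code compiled into each property that
/// uses this type, truncating the stored value to that property's
/// MAXLEN before it is handed to the ODBC layer.
ClassMethod LogicalToODBC(%val As %String) As %String [ CodeMode = generator ]
{
    If %parameter("MAXLEN") '= "" {
        $$$GENERATE(" Quit $EXTRACT(%val,1,"_%parameter("MAXLEN")_")")
    } Else {
        $$$GENERATE(" Quit %val")
    }
    Quit $$$OK
}

}
```

A property would then be declared as Property columname As App.Types.TruncatedString(MAXLEN = 5), and oversized stored values would reach ODBC clients already truncated.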

It is also possible to run code within Caché to find records with properties that exceed the (somewhat theoretical) maximum length. This obviously requires full table scans. It is even possible to expose that code as a stored procedure.

Again, I don't know exactly what you are trying to do, but those are some options. They probably do require getting deeper into the Caché side than you would prefer.

As for preventing the bad data in the first place, there is no general answer. Caché allows programmers to write directly to the globals, bypassing any object or table definitions. If that is happening, the code doing so must be fixed directly.
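To illustrate the difference (the class and global names here are hypothetical): a direct global write bypasses all property validation, while saving through the object layer enforces it.

```objectscript
    // Hypothetical class App.Person with Property Name As %String(MAXLEN = 5),
    // stored in the data global ^App.PersonD.

    // Direct global write: no validation runs, so an oversized value
    // lands in storage even though it violates the class definition.
    Set ^App.PersonD(42) = $ListBuild("toolong")

    // Object save: %Save() runs property validation and returns an
    // error status when Name exceeds MAXLEN (unless TRUNCATE=1 is set,
    // in which case the value is silently shortened instead).
    Set person = ##class(App.Person).%New()
    Set person.Name = "toolong"
    Set sc = person.%Save()
    If $$$ISERR(sc) Write $System.Status.GetErrorText(sc)
```

Code that takes the first path is the code that has to be found and fixed.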

Edit: Here is code that might work for detecting bad data. It might not work if you are doing certain funny stuff, but it worked for me. It's kind of ugly because I didn't want to break it up into methods or tags. It is meant to be run from a command prompt, so it would probably have to be modified for your purposes.

{
    S ClassQuery=##CLASS(%ResultSet).%New("%Dictionary.ClassDefinition:SubclassOf")
    I 'ClassQuery.Execute("%Library.Persistent")  b  q
    While ClassQuery.Next(.sc) {
    If $$$ISERR(sc) b  Quit
        S ClassName=ClassQuery.Data("Name")
        I $E(ClassName)="%"   continue
        S OneClassQuery=##CLASS(%ResultSet).%New(ClassName_":Extent")
        I '$IsObject(OneClassQuery) continue  //may not exist
        try {
        I 'OneClassQuery.Execute()  D OneClassQuery.Close() continue
        }
        catch
        {

            D OneClassQuery.Close()
            continue
        }
        S PropertyQuery=##CLASS(%ResultSet).%New("%Dictionary.PropertyDefinition:Summary")
        K Properties
        s sc=PropertyQuery.Execute(ClassName)  I 'sc D PropertyQuery.Close()  continue
        While PropertyQuery.Next()
        {
            s PropertyName=$G(PropertyQuery.Data("Name"))
            S PropertyDefinition=""
            S PropertyDefinition=##CLASS(%Dictionary.PropertyDefinition).%OpenId(ClassName_"||"_PropertyName)
            I '$IsObject(PropertyDefinition)  continue
            I PropertyDefinition.Private  continue
            I PropertyDefinition.SqlFieldName=""    
            {
                S Properties(PropertyName)=PropertyName
            }
            else
            {
                I PropertyName'="" S Properties(PropertyDefinition.SqlFieldName)=PropertyName
            }
        }
        D PropertyQuery.Close()

        I '$D(Properties) continue

        While OneClassQuery.Next(.sc2)  {
            B:'sc2
            S ID=OneClassQuery.Data("ID")
            Set OneRowQuery=##class(%ResultSet).%New("%DynamicQuery:SQL")
            S sc=OneRowQuery.Prepare("Select * FROM "_ClassName_" WHERE ID=?") continue:'sc
            S sc=OneRowQuery.Execute(ID) continue:'sc
            I 'OneRowQuery.Next() D OneRowQuery.Close() continue
            S PropertyName=""
            F  S PropertyName=$O(Properties(PropertyName))  Q:PropertyName=""  d
            . S PropertyValue=$G(OneRowQuery.Data(PropertyName))
            . I PropertyValue'="" D
            .. S PropertyIsValid=$ZOBJClassMETHOD(ClassName,Properties(PropertyName)_"IsValid",PropertyValue)
            .. I 'PropertyIsValid W !,ClassName,":",ID,":",PropertyName," has invalid value of "_PropertyValue
            .. //I PropertyIsValid W !,ClassName,":",ID,":",PropertyName," has VALID value of "_PropertyValue
            D OneRowQuery.Close()
        }
        D OneClassQuery.Close()
    }   
    D ClassQuery.Close()
}

OTHER TIPS

The simplest solution is to increase the MAXLEN parameter to 6 or larger. Caché only enforces MAXLEN and TRUNCATE when saving. Within other Caché code this is usually fine, but unfortunately ODBC clients tend to expect the limit to be enforced more strictly. The other option is to write your SQL with an explicit truncation, like SELECT LEFT(columnname, 5)...
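Spelled out as a full statement (using the column and table names from the question for illustration), the LEFT workaround looks like this:

```sql
-- Truncate on the Caché side so the ODBC driver never receives a value
-- longer than the declared MAXLEN of 5:
SELECT LEFT(columname, 5) AS columname
FROM namespace.tablename
```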

The simplest solution, which I use for all Integration Services packages for example, is to create a query that casts all nvarchar or char data to the correct length. That way, my data never fails for truncation. Optionally, first run a query like: SELECT MAX(DATALENGTH(mycolumnname)) FROM cachenamespace.tablename

Your new query: SELECT CAST(mycolumnname AS VARCHAR(6)) AS mycolumnname, CONVERT(VARCHAR(8000), memo_field) AS memo_field FROM cachenamespace.tablename

This lessens, but does not eliminate, the pain of getting the data out. If you use any type of OLE DB provider, or an OPENQUERY in SQL Server, the casts must occur in the query sent to the InterSystems Caché database, not in the outer query that retrieves data from the OPENQUERY.
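The distinction matters because OPENQUERY is a pass-through: only the inner string is executed by Caché, while everything outside it runs in SQL Server after the provider has already fetched the rows. A sketch, assuming the linked-server name "cache" and the table names from the question:

```sql
-- Wrong place for the cast: SQL Server applies it only after MSDASQL has
-- already fetched (and failed on) the oversized column.
SELECT CAST(columname AS VARCHAR(6)) AS columname
FROM OPENQUERY(cache, 'SELECT columname FROM namespace.tablename')

-- Right place: the cast is part of the query sent to Caché, so the value
-- already has a consistent declared length before it crosses the ODBC
-- boundary.
SELECT *
FROM OPENQUERY(cache, 'SELECT CAST(columname AS VARCHAR(6)) AS columname
                       FROM namespace.tablename')
```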

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow