Question

I have a database file that I believe was created with Clipper, but I can't say for sure (I have .ntx files for the indexes, which I understand is what Clipper uses). I am trying to create a C# application that will read this database using the System.Data.OleDb namespace.

For the most part I can successfully read the contents of the tables, but there is one field that I cannot. This field, called CTRLNUMS, is defined as CHAR(750). I have read various articles found through Google searches suggesting that fields larger than 255 characters have to be read through a different process than the normal assignment to a string variable, but so far none of the approaches I have found has worked.

The following code snippet shows how I read the table, including the two options I tried for reading the CTRLNUMS field. Both options returned only 238 characters, even though 750 characters are stored in the field.

Here is my connection string:

Provider=Microsoft.Jet.OLEDB.4.0;Data Source=c:\datadir;Extended Properties=DBASE IV;

Can anyone tell me the secret to reading larger fields from a DBF file?

using (OleDbConnection conn = new OleDbConnection(connectionString))
{
    conn.Open();

    using (OleDbCommand cmd = new OleDbCommand())
    {
        cmd.Connection = conn;
        cmd.CommandType = CommandType.Text;
        cmd.CommandText = string.Format("SELECT ITEM,CTRLNUMS FROM STUFF WHERE ITEM = '{0}'", stuffId);

        using (OleDbDataReader dr = cmd.ExecuteReader())
        {
            if (dr.Read())
            {
                stuff.StuffId = dr["ITEM"].ToString();

                // OPTION 1: direct assignment - truncates at 238 characters
                string ctrlNums = dr["CTRLNUMS"].ToString();

                // OPTION 2: chunked reads via GetChars - also stops at 238 characters
                char[] buffer = new char[750];
                int ordinal = dr.GetOrdinal("CTRLNUMS");
                int index = 0;
                int readSize = 5;
                while (index < 750)
                {
                    long charsRead = dr.GetChars(ordinal, index, buffer, index, readSize);
                    index += (int)charsRead;

                    if (charsRead < readSize)
                    {
                        break;
                    }
                }
            }
        }
    }
}

Solution

You can find a description of the DBF structure here: http://www.dbf2002.com/dbf-file-format.html

What I think Clipper used to do was modify the field descriptor so that, for Character fields, the Decimal Places byte held the high-order byte of the size; a Character field's real size was therefore 256 * Decimals + Size.
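Under that scheme the real length can be decoded straight from the field descriptor bytes. A minimal sketch in Python, assuming the standard 32-byte dBASE field descriptor layout (field type at offset 11, length at offset 16, decimal count at offset 17):

```python
# Minimal sketch: recover Clipper's extended Character-field length from a
# standard 32-byte dBASE field descriptor (type byte at offset 11, length
# byte at offset 16, decimal-count byte at offset 17).
def field_length(descriptor: bytes) -> int:
    ftype = chr(descriptor[11])
    length, decimals = descriptor[16], descriptor[17]
    if ftype == "C":
        # Clipper hack: the decimals byte is the high-order byte of the length
        return decimals * 256 + length
    return length

# A CHAR(750) field is stored as length=238, decimals=2 (2 * 256 + 238 = 750),
# which would explain why a naive reader returns exactly 238 characters.
desc = b"CTRLNUMS\x00\x00\x00C" + b"\x00" * 4 + bytes([238, 2]) + b"\x00" * 14
assert field_length(desc) == 750
```

Note that 238 is exactly 750 mod 256, which matches the truncation the question describes.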

I may have a C# class that reads DBFs natively (not via ADO/DAO); it could be modified to handle this case. Let me know if you're interested.

OTHER TIPS

Are you still looking for an answer? Is this a one-off job or something that needs doing regularly?

I have a Python module that is primarily intended to extract data from all kinds of DBF files ... it doesn't yet handle the length_high_byte = decimal_places hack, but it's a trivial change. I'd be quite happy to (a) share this with you and/or (b) get a copy of such a DBF file for testing.

Added later: Extended-length feature added, and tested against files I've created myself. Offer to share code with anyone who would like to test it still stands. Still interested in getting some "real" files myself for testing.

3 suggestions that might be worth a shot...

1 - use Access to create a linked table to the DBF file, then use .NET to hit the table in the Access database instead of going directly to the DBF.

2 - try the FoxPro OLEDB provider
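For option 2, assuming the Visual FoxPro OLE DB provider (VFPOLEDB) is installed, the connection string would look something like this (note the provider is 32-bit only, so the application would need to target x86):

Provider=VFPOLEDB.1;Data Source=c:\datadir;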

3 - parse the DBF file by hand. Example is here.

My guess is that #1 should work the easiest, and #3 will give you the opportunity to fine-tune your cussing skills. :)

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow