Question

I would like to access csv files in scala in a strongly typed manner. For example, as I read each line of the csv, it is automatically parsed and represented as a tuple with the appropriate types. I could specify the types beforehand in some sort of schema that is passed to the parser. Are there any libraries that exist for doing this? If not, how could I go about implementing this functionality on my own?


Solution

product-collections appears to be a good fit for your requirements:

scala> val data = CsvParser[String,Int,Double].parseFile("sample.csv")
data: com.github.marklister.collections.immutable.CollSeq3[String,Int,Double] = 
CollSeq((Jan,10,22.33),
        (Feb,20,44.2),
        (Mar,25,55.1))

product-collections uses opencsv under the hood.

A CollSeq3 is an IndexedSeq[Product3[T1,T2,T3]] and also a Product3[Seq[T1],Seq[T2],Seq[T3]] with a little sugar. I am the author of product-collections.

Here's a link to the io page of the scaladoc

Product3 is essentially a tuple of arity 3.
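As a small stdlib illustration of that last point: any `Tuple3` in Scala is already a `Product3`, so rows like the ones above behave like ordinary tuples.

```scala
// A Tuple3 is a Product3: element access via _1/_2/_3, arity via productArity.
val row: Product3[String, Int, Double] = ("Jan", 10, 22.33)

val month = row._1           // first element, "Jan"
val arity = row.productArity // 3
```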

Other tips

If your content uses double quotes to enclose fields containing other double quotes, commas, or newlines, I would definitely use a library like opencsv that deals properly with special characters. Typically you end up with an Iterator[Array[String]]. Then you use Iterator.map or collect to transform each Array[String] into your tuples, dealing with type-conversion errors there. If you need to process the input without loading it all into memory, keep working with the iterator; otherwise you can convert it to a Vector or List and close the input stream.

So it may look like this:

import com.opencsv.CSVReader
import java.io.FileReader
import scala.jdk.CollectionConverters._

val reader = new CSVReader(new FileReader(filename))
try {
  val iter = reader.iterator().asScala
  val typed = iter collect {
    case Array(double, int, string) => (double.toDouble, int.toInt, string)
  }
  // do more work with typed (force the iterator before the reader is closed)
} finally reader.close()

Depending on how you need to deal with errors, you can return Left for errors and Right for success tuples to separate the errors from the correct rows. Also, I sometimes wrap all of this using scala-arm for closing resources. My data may then be wrapped in the resource.ManagedResource monad so that I can use input coming from multiple files.
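The Left/Right idea above can be sketched with plain Scala; here `rows` stands in for the Iterator[Array[String]] an opencsv reader would give you, and the column types are made up for illustration:

```scala
// Hypothetical raw rows: the second one has a non-numeric first field.
val rows = Iterator(
  Array("1.5", "10", "foo"),
  Array("oops", "20", "bar"),
  Array("2.5", "30", "baz")
)

// Left carries an error description, Right a successfully typed tuple.
val parsed = rows.map {
  case Array(d, i, s) =>
    try Right((d.toDouble, i.toInt, s))
    catch { case e: NumberFormatException => Left(s"bad row: ${e.getMessage}") }
  case other => Left(s"wrong arity: ${other.length}")
}.toList

// Split errors from correct rows (partitionMap is Scala 2.13+).
val (errors, ok) = parsed.partitionMap(identity)
```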

Finally, although you want to work with tuples, I have found that it is usually clearer to have a case class that is appropriate for the problem and then write a method that creates that case class object from an Array[String].
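A minimal sketch of that case-class approach (the class and field names here are made up):

```scala
// A domain case class plus a small constructor from the Array[String]
// that a CSV library hands you for each line.
case class Reading(station: String, hour: Int, temp: Double)

object Reading {
  def fromFields(fields: Array[String]): Reading =
    Reading(fields(0), fields(1).toInt, fields(2).toDouble)
}

val r = Reading.fromFields("KSEA,14,22.5".split(","))
```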

You can use kantan.csv, which is designed with precisely that purpose in mind.

Imagine you have the following input:

1,Foo,2.0
2,Bar,false

Using kantan.csv, you could write the following code to parse it:

import java.io.File
import kantan.csv.ops._

new File("path/to/csv").asUnsafeCsvRows[(Int, String, Either[Float, Boolean])](',', false)

And you'd get an iterator where each entry is of type (Int, String, Either[Float, Boolean]). Note that the last column in your CSV can be of more than one type, which is conveniently handled with Either.

This is all done in an entirely type safe way, no reflection involved, validated at compile time.

Depending on how far down the rabbit hole you're willing to go, there's also a shapeless module for automated case class and sum type derivation, as well as support for scalaz and cats types and type classes.

Full disclosure: I'm the author of kantan.csv.

I've created a strongly-typed CSV helper for Scala, called object-csv. It is not a fully fledged framework, but it can be adjusted easily. With it you can do this:

val peopleFromCSV = readCSV[Person](fileName)

Where Person is case class, defined like this:

case class Person (name: String, age: Int, salary: Double, isNice:Boolean = false)

Read more about it in GitHub, or in my blog post about it.

Edit: as pointed out in a comment, kantan.csv (see other answer) is probably the best as of the time I made this edit (2020-09-03).

This is made more complicated than it ought to be because of the nontrivial quoting rules for CSV. You probably should start with an existing CSV parser, e.g. OpenCSV or one of the projects called scala-csv. (There are at least three.)

Then you end up with some sort of collection of collections of strings. If you don't need to read massive CSV files quickly, you can just try to parse each line into each of your types and take the first one that doesn't throw an exception. For example,

import scala.util._

case class Person(first: String, last: String, age: Int)
object Person {
  def fromCSV(xs: Seq[String]) = Try(xs match {
    case s0 +: s1 +: s2 +: _ => Person(s0, s1, s2.toInt)
  })
}

If you do need to parse them fairly quickly and you don't know what might be there, you should probably use some sort of matching (e.g. regexes) on the individual items. Either way, if there's any chance of error you probably want to use Try or Option or somesuch to package errors.
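One hedged way to package the errors as suggested, sketched with plain `Try` on hypothetical rows:

```scala
import scala.util.{Try, Success, Failure}

// Hypothetical rows; the second has a non-numeric age field.
val rows = Seq(Seq("Grace", "Hopper", "85"), Seq("Bad", "Row", "n/a"))

// Package each conversion in Try instead of letting it throw.
val attempts = rows.map(r => Try((r(0), r(1), r(2).toInt)))

// Separate successes from failures.
val successes = attempts.collect { case Success(t) => t }
val failures  = attempts.collect { case Failure(e) => e }
```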

I rolled my own approach, which strongly types the final product rather than the reading stage itself. As pointed out above, the reading might be better handled as stage one with something like Apache Commons CSV, and stage two could be what I've done here. You're welcome to the code below. The idea is to parameterize CsvReader[T] with a type T; on construction you must also supply the reader with a factory object of type CsvFactory[T]. The class itself (or, in my example, a helper object) decides the construction details, which decouples them from the actual reading. You could use implicit objects to pass the helper around, but I've not done that here. The only downside is that each row of the CSV must be of the same class type, but you could expand this concept as needed.

/**
 * @param factory : builds a T from one row of fields
 * @param fname   : file name
 * @param delim   : "\t", etc.
 */
class CsvReader[T](factory: CsvFactory[T], fname: String, delim: String) {

  private val f = Source.fromFile(fname)
  private var lines = f.getLines  //iterator
  private var fileClosed = false

  if (lines.hasNext) lines = lines.dropWhile(_.trim.isEmpty) //skip white space

  def hasNext = (if (fileClosed) false else lines.hasNext)

  lines = lines.drop(1) //drop header , assumed to exist


 /**
 * also closes the file 
 * @return the line
 */
def nextRow ():String = {  //public version
    val ans = lines.next
    if (ans.isEmpty) throw new Exception("Error in CSV, reading past end "+fname)
    if (lines.hasNext) lines = lines.dropWhile(_.trim.isEmpty) else close()

    ans 
  }

  //def nextObj[T](factory:CsvFactory[T]): T = past version

  def nextObj(): T = {  //public version

    val s = nextRow()
    val a = s.split(delim)        
    factory makeObj a
  }

  def allObj() : Seq[T] = {

    val ans = scala.collection.mutable.Buffer[T]()
    while (hasNext) ans+=nextObj()

    ans.toList
  }

  def close() = {
    f.close()
    fileClosed = true
  }

} //class 

Next, the example helper factory and an example `main`:

trait CsvFactory[T] {  //handles all serial controls (in and out)   

  def makeObj(a:Seq[String]):T  //for reading 

  def makeRow(obj:T):Seq[String]//the factory basically just passes this duty 

  def header:Seq[String]    //must define headers for writing 
}



/**
 * Each class implements this as needed, so the object can be serialized by the writer
 */


case class TestRecord(name: String, addr: String, zip: Int) {

  def toRow(): Seq[String] = List(name, addr, zip.toString) //handle conversion to CSV

}


object TestFactory extends CsvFactory[TestRecord] {

  def makeObj (a:Seq[String]):TestRecord =  new TestRecord(a(0),a(1),a(2).toDouble.toInt)
  def header = List("name","addr","zip")
  def makeRow(o:TestRecord):Seq[String] = {
    o.toRow.map(_.toUpperCase())
  }

}

object CsvSerial {

  def main(args: Array[String]): Unit = {

    val whereami = System.getProperty("user.dir")
    println("Begin CSV test in "+whereami) 

    val reader = new CsvReader(TestFactory,"TestCsv.txt","\t")


    val all = reader.allObj() //read the CSV into a list of objects
    println(all)
    reader.close()

    val writer = new CsvWriter(TestFactory, "TestOut.txt", "\t")

    for (x <- all) writer.printObj(x)
    writer.close()

  } //main  
}

Example CSV (tab separated; you may need to re-insert the tabs if you copy from an editor):

Name	Addr	Zip
"Sanders, Dante R."	4823 Nibh Av.	60797.00
"Decker, Caryn G."	994-2552 Ac Rd.	70755.00
"Wilkerson, Jolene Z."	3613 Ultrices. St.	62168.00
"Gonzales, Elizabeth W."	"P.O. Box 409, 2319 Cursus. Rd."	72909.00
"Rodriguez, Abbot O."	Ap #541-9695 Fusce Street	23495.00
"Larson, Martin L."	113-3963 Cras Av.	36008.00
"Cannon, Zia U."	549-2083 Libero Avenue	91524.00
"Cook, Amena B."	Ap #668-5982 Massa Ave	69205.00

And finally the writer (notice the factory must supply `makeRow` for this as well):

import java.io._

class CsvWriter[T](factory: CsvFactory[T], fname: String, delim: String, append: Boolean = false) {

  private val out = new PrintWriter(new BufferedWriter(new FileWriter(fname, append)))
  if (!append) out.println(factory.header mkString delim)

  def flush() = out.flush()

  def println(s: String) = out.println(s)

  def printObj(obj: T) = println(factory.makeRow(obj).mkString(delim))
  def printAll(objects: Seq[T]) = objects.foreach(printObj(_))
  def close() = out.close()
}

If you know the number and types of the fields, maybe like this:

case class Friend(id: Int, name: String) // 1,  Fred

val friends = scala.io.Source.fromFile("friends.csv").getLines.map { line =>
  val fields = line.split(',')
  Friend(fields(0).toInt, fields(1))
}
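A hedged variant of the same idea, skipping malformed lines instead of throwing (input inlined here so the sketch is self-contained):

```scala
import scala.util.Try

// Same Friend case class as above, repeated so this snippet stands alone.
case class Friend(id: Int, name: String)

// Try(...).toOption drops lines whose id doesn't parse as an Int.
val lines = List("1,Fred", "two,Barney", "3,Wilma")
val friends = lines.flatMap { line =>
  val fields = line.split(",")
  Try(Friend(fields(0).trim.toInt, fields(1).trim)).toOption
}
```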
License: CC BY-SA (attribution required)