Question

Hi, I'm new here and a beginner in R.

My problem: when I have more than one file (test1.dat, test2.dat, ...) to work with in R, I use this code to read them in:

filelist <- list.files(pattern = "*.dat")

df_list <- lapply(filelist, function(x) read.table(x, header = FALSE, sep = ",",
                                                   colClasses = "factor", comment.char = "",
                                                   col.names = "raw"))

Now I have the problem that my data is big. I found a solution to speed things up using the sqldf package:

sql <- file("test2.dat")
df <- sqldf("select * from sql", dbname = tempfile(),
                    file.format = list(header = FALSE, row.names = FALSE, colClasses = "factor", 
                                       comment.char = "", col.names ="raw"))

It works well for one file, but I am not able to change the code to read in multiple files as in the first code snippet. Can someone help me? Thank you! Momo


Solution

This seems to work (but I assume there is a quicker pure-SQL way to do this):

sql.l <- lapply(filelist, file)

df_list2 <- lapply(sql.l, function(i) sqldf("select * from i",
    dbname = tempfile(), file.format = list(header = TRUE, row.names = FALSE)))
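The same can be written a bit more directly with sqldf's read.csv.sql(), which opens and closes the file connection itself, so the lapply over file() is not needed. A sketch, assuming the same filelist and comma-separated files with a header row:

```r
library(sqldf)

# read.csv.sql() wraps the connection handling; one call per file
df_list2 <- lapply(filelist, function(f)
  read.csv.sql(f, sql = "select * from file",
               header = TRUE, sep = ",", dbname = tempfile()))
```

The sql argument also lets you filter rows in SQL while reading, which avoids loading unwanted rows into R at all.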


A look at the speeds - partially taken from mnel's post Quickly reading very large tables as dataframes in R:

library(data.table)
library(sqldf)

# test data
n=1e6
DT = data.table( a=sample(1:1000,n,replace=TRUE),
                 b=sample(1:1000,n,replace=TRUE),
                 c=rnorm(n),
                 d=sample(c("foo","bar","baz","qux","quux"),n,replace=TRUE),
                 e=rnorm(n),
                 f=sample(1:1000,n,replace=TRUE) )

# write 5 files out
lapply(1:5, function(i) write.table(DT,paste0("test", i, ".dat"), 
                                 sep=",",row.names=FALSE,quote=FALSE))

read: data.table

filelist <- list.files(pattern = "*.dat")

system.time(df_list <- lapply(filelist, fread))

#  user  system elapsed 
# 5.244   0.200   5.457 

read: sqldf

sql.l <- lapply(filelist, file)

system.time(df_list2 <- lapply(sql.l, function(i) sqldf("select * from i",
    dbname = tempfile(), file.format = list(header = TRUE, row.names = FALSE))))

#    user  system elapsed 
#  35.594   1.432  37.357 

Check - the results seem identical except for attributes:

all.equal(df_list , df_list2)
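If only attributes differ (fread returns data.tables, sqldf returns data.frames), the comparison can be made explicit with the standard check.attributes argument of all.equal:

```r
# compare values only, ignoring class and other attribute differences
all.equal(df_list, df_list2, check.attributes = FALSE)
```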

OTHER TIPS

Somehow the lapply() approach doesn't work for me.

map_dfr() works for me for combining 7000+ .dat files. It also skips the first row of each file and filters on the column "V1":

library(dplyr)
library(purrr)

rawDATfile.list <- list.files(pattern = "*.DAT")

data <- rawDATfile.list %>%
  map_dfr(~ read.delim(.x, header = FALSE, sep = ";", skip = 1, quote = "\"'") %>%
            mutate_all(as.character)) %>%
  filter(V1 == "B")
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow