Question

I’ve been working on a Perl script for my master’s thesis that extracts a small piece of text (CAE) from a 10-K (a company’s annual report). After a lot of work I managed to finish that script. Now I need to write a new one, but with a deadline next week I’m afraid I won’t finish in time. I was wondering whether someone could help me with the following problem:

I have almost 52,000 .txt files, each containing a small piece of text. I need a script that writes down the name of each .txt file and the number of words and/or characters it contains, and collects these results for all the files into one text file.

Is there someone who could help me please? I would really appreciate it!

This is what I got so far:

#!/usr/bin/perl -w
use strict;
use warnings;

my $folder;                     #Base directory for the 10K filings
my $subfolder="2012";           #Subdirectory where 10K filings are placed (Default is ./10K/10K_Raw/2012/*.txt)
my $folder10kcae="10K_CAE";     #Name of subdirectory for output (CAE)
my $folderwc="10K_WC";          #Name of subdirectory for output (WordCount)
my $target_cae;                 #Name of target directory for output (CAE)
my $target_wc;                  #Name of target directory for output (WordCount)
my $slash;                      #Declare slash (dependent on operating system)
my $file;                       #Filename
my @allfiles;                   #All files in directory, put into an array
my $allfiles;                   #Total files in directory
my $data;                       #Input file contents
my $cae;                        #Results of the search query (CAE)
my $wc;                         #Results of the search query (WordCount)
my $output_cae;                 #Output file with CAE
my $output_wc;                  #Output file with WordCount
my $log;                        #Log file (also used to determine point to continue progress)
my $logfile="$subfolder".".log";#Filename of log file
my @filesinlog;                 #Files that have been processed according to log file

{
#Set folders for Windows. Put raw 10K filings in folder\subfolder
$slash="\\";
$folder="C:\\10KK\\";                    ###specify correct base-map###
}


#Open source folder and read all files
opendir(DIR,"$folder$slash$subfolder") or die $!;
@allfiles=grep { /\.txt$/ } readdir DIR;   #Match only files ending in .txt
closedir DIR;


#Create destination folder
$target_wc="$folder$slash$folderwc$slash$subfolder";

mkdir "$folder$slash$folderwc";
mkdir $target_wc;


#Count lines, words and characters of each file; write one summary line per file
open $output_wc, ">", "$target_wc$slash$subfolder.txt" or die $!;

foreach $file (@allfiles) {
    my ($lines, $words, $chars) = (0,0,0);
    open my $in, "<", "$folder$slash$subfolder$slash$file" or die $!;
    while (my $line = <$in>) {
        $lines++;
        $chars += length($line);
        my @tokens = split ' ', $line;  #Split on any whitespace, ignoring leading blanks
        $words += @tokens;
    }
    close $in;
    print $output_wc "$file lines=$lines words=$words chars=$chars\n";
}
close $output_wc;

Solution

I'd say you have a bit of a wheel-reinvention problem here, and I wouldn't use a Perl script at all. There's a Unix command-line tool called wc (short for "word count") that does everything you want with no programming required.

On Unix:

$ wc /path/to/my/folder/* > /path/to/my/output/file.txt
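By default wc prints lines, words, and bytes for each file plus a grand total; since the question only needs words and characters, the -w and -m flags narrow the output. A small sketch (the sample.txt name is just for illustration):

```shell
# Default wc output per file is: lines, words, bytes, filename.
# -w selects the word count, -m the character count.
printf 'hello world\nfoo bar baz\n' > sample.txt
wc -w -m sample.txt          # columns: words, characters, filename
rm sample.txt
```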

On Windows, you can download the wc program as part of the GNU Coreutils for Windows package, then run the same command in Windows style:

C:\ > wc \path\to\my\folder\* > \path\to\my\output\file.txt
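One caveat with roughly 52,000 files: expanding a single glob can exceed the operating system's argument-length limit. On Unix, find can pass the names to wc in batches itself; a hedged sketch (the demo directory and file names are made up):

```shell
# find -exec ... {} + invokes wc with batches of file names that
# respect ARG_MAX, so it scales to tens of thousands of files.
mkdir -p demo
printf 'one two\n' > demo/a.txt
printf 'three\n'   > demo/b.txt
find demo -name '*.txt' -exec wc -w -m {} + > output.txt
cat output.txt
rm -r demo output.txt
```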
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow