Question

I'd like to generate dummy files in bash. The content doesn't matter; random content would be nice, but a file consisting of a single repeated byte is also acceptable.

My first attempt was the following command:

rm dummy.zip;
touch dummy.zip;
x=0;
while [ $x -lt 100000 ];
do echo a >> dummy.zip;
  x=`expr $x + 1`;
done;

The problem was its poor performance. I'm using Git Bash on Windows, so it might run faster under Linux, but the script is clearly not optimal.

Could you suggest a quicker and nicer way to generate dummy (binary) files of a given size?

Solution

You can try the head command:

$ head -c 100000 /dev/urandom >dummy
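A minimal check of the result, assuming the wc utility (available in Git Bash as well):

```shell
# Create a 100,000-byte file of random bytes and verify its size.
head -c 100000 /dev/urandom > dummy
wc -c < dummy    # 100000
```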

OTHER TIPS

You may use dd for this purpose:

dd if=/dev/urandom bs=1024 count=5 of=dummy
  • if = input file
  • of = output file
  • bs = block size
  • count = number of blocks to copy
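The resulting size is bs × count, so the command above writes 5 × 1024 = 5120 bytes. A sketch producing exactly the 100,000 bytes from the question:

```shell
# 100 blocks of 1000 bytes each = 100,000 bytes
dd if=/dev/urandom bs=1000 count=100 of=dummy 2>/dev/null
wc -c < dummy    # 100000
```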

Note that

 x=`expr $x + 1`

isn't an efficient way to do arithmetic in bash: it forks an external expr process on every iteration. Do integer arithmetic with arithmetic expansion, i.e. double round parentheses:

 x=$((x + 1))
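Arithmetic expansion is POSIX and runs entirely inside the shell, with no forked process. A tiny demonstration:

```shell
x=0
x=$((x + 1))    # arithmetic expansion, no external process
x=$((x + 1))
echo "$x"       # prints 2
```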

But for an incremented counter in a loop, the for-loop was invented:

x=0;
while [ $x -lt 100000 ];
do echo a >> dummy.zip;
  x=`expr $x + 1`;
done;

in contrast to:

for ((x=0; x<100000; ++x))
do
    echo a 
done >> dummy.zip 

Here are three things to note:

  • unlike the [ test syntax, you don't need spaces inside the parentheses.
  • you may use the prefix (or postfix) increment operator here: ++x
  • the redirection to the file is pulled out of the loop. Instead of 100,000 open-and-close operations, the file is opened only once.
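A quick way to confirm what the redirected loop produces (the C-style for requires bash, not plain sh):

```shell
# 100,000 iterations of "echo a" -> 100,000 lines of 2 bytes each
bash -c 'for ((x = 0; x < 100000; ++x)); do echo a; done' > dummy.txt
wc -l < dummy.txt    # 100000
```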

But there is a still simpler form of the for-loop:

for x in {0..100000}
do
    echo a 
done >> dummy.zip 
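One caveat with brace expansion: both endpoints are included, so {0..100000} loops 100,001 times; use {1..100000} for exactly 100,000 iterations. This is easy to verify:

```shell
bash -c 'printf "%s\n" {0..100000}' | wc -l    # 100001
bash -c 'printf "%s\n" {1..100000}' | wc -l    # 100000
```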

This will generate a text file 100,000 bytes in size (10,000 lines, each 10 bytes: "123456789" plus a newline):

yes 123456789 | head -10000 > dummy.file

Another option is openssl's random-byte generator:

$ openssl rand -out random.tmp 1000000
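The yes variant's size is easy to check, assuming GNU tools:

```shell
yes 123456789 | head -10000 > dummy.file
wc -c < dummy.file    # 100000
```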

If your file system is ext4, btrfs, xfs, or ocfs2, and you don't care about the content, you can use fallocate. It's the fastest method if you need big files.

fallocate -l 100KB dummy_100KB_file
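If fallocate isn't available (it is Linux-specific), a related sketch is truncate from GNU coreutils, which creates a sparse file of the requested size on essentially any filesystem:

```shell
# Sparse file: reports 100,000 bytes but occupies no data blocks yet
truncate -s 100000 dummy_sparse
wc -c < dummy_sparse    # 100000
```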

See "Quickly create a large file on a Linux system?" for more details.

To print the words of a file in sequence with a counter:

 c=1
 for w in $(cat file); do
     echo "$c. $w"
     c=$((c + 1))
 done

Easy way:

Create a file named test containing the single line "test".

Then execute:

 cat test >> test

Ctrl+C after a minute will leave you with plenty of gigabytes :) (Note that some implementations, including GNU cat, detect this case and refuse with "input file is output file".)

Another possibility:

dd if=/dev/zero of=/dummy10MBfile bs=1M count=10
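Here bs=1M (GNU dd syntax) times count=10 gives 10 × 1,048,576 = 10,485,760 bytes; a sketch writing to the current directory rather than / (which would need root):

```shell
dd if=/dev/zero of=dummy10MBfile bs=1M count=10 2>/dev/null
wc -c < dummy10MBfile    # 10485760
```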
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow