You can export data from your database as CSV, tab-delimited, pipe-delimited, or Ctrl-A (Unicode 0x0001) delimited files. Then you can copy those files into HDFS and run a very simple MapReduce job, perhaps consisting of just a mapper, configured to read the file format you used and to write out sequence files.
This lets you distribute the work of creating the sequence files across the servers of the Hadoop cluster.
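As a rough sketch of what the mapper side could look like: with Hadoop Streaming you can write the mapper as a plain script that reads the exported records from stdin and emits key/value pairs, then point the job at a SequenceFile output format. The field layout below (first column as the key, remaining columns as the value) is just an assumption for illustration; adapt it to your actual export.

```python
import sys

# Assumed delimiter: Ctrl-A (0x0001), as used in the export step above.
DELIM = "\x01"

def map_line(line):
    """Split one exported record into (key, value).

    Here we assume the first field is a suitable key and re-join the
    remaining fields with tabs; your schema may call for something else.
    """
    fields = line.rstrip("\n").split(DELIM)
    return fields[0], "\t".join(fields[1:])

if __name__ == "__main__":
    # Hadoop Streaming feeds input records on stdin and reads
    # tab-separated key/value pairs from stdout.
    for line in sys.stdin:
        key, value = map_line(line)
        print(f"{key}\t{value}")
```

You would then run this as a map-only streaming job (zero reducers) with a SequenceFile output format, so each mapper converts its share of the input files in parallel.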
Also, most likely this will not be a one-time deal: you will probably need to load data from the Postgres database into HDFS on a regular basis. Then you could tweak your MapReduce job to merge the new data in.
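One hedged way to think about that merge step: if each export batch is tagged with a batch number (a hypothetical scheme, not something the export gives you for free), a reduce-side merge can keep the latest value per key. The helper below sketches that logic on plain Python tuples.

```python
from itertools import groupby
from operator import itemgetter

def merge_sorted_records(records):
    """Merge old and new extracts, keeping the newest value per key.

    records: iterable of (key, batch_no, value) tuples, sorted by key,
    mimicking the sorted stream a reducer would see. For each key, the
    value from the highest batch_no wins.
    """
    merged = []
    for key, group in groupby(records, key=itemgetter(0)):
        latest = max(group, key=itemgetter(1))
        merged.append((key, latest[2]))
    return merged
```

In an actual MapReduce job the framework does the sorting and grouping for you; the reducer body would only need the per-key "keep the newest" decision.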