If you are setting up your pipeline through the DynamoDB console's Import/Export button, you will have to create a separate pipeline per table. If you are using Data Pipeline directly (either through the Data Pipeline API or through the Data Pipeline console), you can export multiple tables in the same pipeline. For each additional table, simply add another DynamoDBDataNode and an EmrActivity that links that data node to the output S3DataNode.
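For illustration, here is a sketch of the two extra pipeline objects a second table would need, in the field layout accepted by the Data Pipeline API's `put_pipeline_definition` call. The object IDs, table name, and the referenced output/cluster node IDs are hypothetical placeholders, not names from your pipeline:

```python
# Extra objects to export a second table from the same pipeline.
# All IDs and the table name below are placeholders -- adapt to your definition.
extra_objects = [
    {
        "id": "DDBSourceTable2",
        "name": "DDBSourceTable2",
        "fields": [
            {"key": "type", "stringValue": "DynamoDBDataNode"},
            {"key": "tableName", "stringValue": "my_second_table"},  # hypothetical
        ],
    },
    {
        "id": "TableBackupActivity2",
        "name": "TableBackupActivity2",
        "fields": [
            {"key": "type", "stringValue": "EmrActivity"},
            {"key": "input", "refValue": "DDBSourceTable2"},
            {"key": "output", "refValue": "S3BackupLocation"},      # existing S3DataNode
            {"key": "runsOn", "refValue": "EmrClusterForBackup"},   # existing EMR cluster
            # a "step" field would carry the same export step as the first activity
        ],
    },
]

# These would be appended to the existing object list and uploaded with
# datapipeline.put_pipeline_definition(pipelineId=..., pipelineObjects=all_objects)
```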
Regarding your year_month prefix use case, using the Data Pipeline SDK to change the table names periodically seems like the best approach. Alternatively, you could make a copy of the script that the export EmrActivity runs (you can see the script location under the activity's "step" field) and modify it so that the Hive script derives the table name from the current date. You would then host the modified script in your own S3 bucket and point the EmrActivity at that location instead of the default. I have not tried either approach before, but both are theoretically possible.
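A minimal sketch of the SDK approach, using boto3 and assuming your tables are named like `2016_01_mytable`: fetch the pipeline definition, patch this month's table name into the DynamoDBDataNode's `tableName` field, and re-upload it. `get_pipeline_definition` and `put_pipeline_definition` are real Data Pipeline API calls; the pipeline ID, node ID, and naming scheme here are assumptions you would replace with your own:

```python
from datetime import date

def monthly_table_name(base, today=None):
    """Build a year_month-prefixed table name, e.g. '2016_01_mytable'.
    The prefix scheme is an assumption -- adjust to your convention."""
    today = today or date.today()
    return "%04d_%02d_%s" % (today.year, today.month, base)

def update_export_table(pipeline_id, node_id, base_name):
    """Repoint the pipeline's DynamoDBDataNode at this month's table.
    pipeline_id and node_id are placeholders for your actual pipeline."""
    import boto3  # imported here so the name helper above runs without AWS set up
    dp = boto3.client("datapipeline")
    objects = dp.get_pipeline_definition(pipelineId=pipeline_id)["pipelineObjects"]
    for obj in objects:
        if obj["id"] == node_id:
            for field in obj["fields"]:
                if field["key"] == "tableName":
                    field["stringValue"] = monthly_table_name(base_name)
    dp.put_pipeline_definition(pipelineId=pipeline_id, pipelineObjects=objects)
```

Running `update_export_table` on a monthly schedule (e.g. from cron, before the pipeline's next scheduled run) would keep the export pointed at the current month's table.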
More general information about exporting DynamoDB tables can be found in the DynamoDB Developer Guide, and more detailed information can be found in the AWS Data Pipeline Developer Guide.