The AWR report has a section called IO Stats. Not unreasonably, this is where it reports the statistics for IO activity, and within that section there are figures for each tablespace. You need to run these reports against the target database. Ideally you want several different runs, each run being no more than twenty minutes long; the longer the run, the more likely it is that averaging will drain the meaning from the figures you get.
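The averaging point is worth seeing with numbers. This is a toy illustration (plain Python, not Oracle code, with made-up wait figures) of how a burst of slow IO that would dominate a twenty-minute report almost disappears in a three-hour one:

```python
# Per-minute average IO wait times (ms) over a 3-hour run,
# with a 20-minute burst of slow IO buried in the middle.
spike_window = range(60, 80)    # minutes 60-79: a 20-minute burst
samples = [25.0 if i in spike_window else 5.0 for i in range(180)]

def avg(xs):
    return sum(xs) / len(xs)

long_run = avg(samples)          # one 3-hour window: spike diluted
short_run = avg(samples[60:80])  # a 20-minute window over the burst

print(f"3-hour average: {long_run:.2f} ms")   # ~7.22 ms
print(f"20-min average: {short_run:.2f} ms")  # 25.00 ms
```

The three-hour average barely moves off the quiet-system baseline, while the twenty-minute window shows the problem plainly; that is exactly why short, targeted report intervals are preferable.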
So, the principle is: you run this report and get a benchmark. Then you deploy your changes and re-run the report. The difference in the various columns is the performance benefit of the change. Ideally you want requests and data per second to go up, and average times and waits to go down.
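The before/after comparison itself is simple arithmetic. A minimal sketch, where the statistic names and values are illustrative stand-ins loosely modelled on the AWR IO Stats tablespace figures rather than real report columns:

```python
# Hypothetical before/after figures for one tablespace.
before = {"reads_per_s": 120.0, "mb_per_s": 9.5, "avg_read_ms": 8.4, "buffer_waits": 310}
after  = {"reads_per_s": 155.0, "mb_per_s": 12.8, "avg_read_ms": 5.1, "buffer_waits": 140}

def compare(before, after):
    """Return (absolute change, percentage change) for each statistic."""
    out = {}
    for key in before:
        delta = after[key] - before[key]
        pct = 100.0 * delta / before[key]
        out[key] = (delta, pct)
    return out

for stat, (delta, pct) in compare(before, after).items():
    print(f"{stat:>14}: {delta:+8.1f} ({pct:+.1f}%)")
```

With these sample numbers, throughput figures come out positive and the wait figures negative, which is the pattern you are hoping to see after deploying a change.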
To make these figures more convincing you should run the same workload in the before and after tests. This is where something like Real Application Testing really comes into its own.