I use rerun extensively, and yes, it does output the correct features into the rerun.txt file. I have a cucumber.yml file that defines a bunch of "profiles". Note the rerun profile:
<%
rerun = File.file?('rerun.txt') ? IO.read('rerun.txt') : ""
rerun_opts = rerun.to_s.strip.empty? ? "--format #{ENV['CUCUMBER_FORMAT'] || 'progress'} features" : "--format #{ENV['CUCUMBER_FORMAT'] || 'pretty'} #{rerun}"
%>
<% standard_opts = "--format html --out report.html --format rerun --out rerun.txt --no-source --format pretty --require features --tags ~@wip" %>
default: <%= standard_opts %>
rerun: <%= rerun_opts %> --format junit --out junit_format_rerun --format html --out rerun.html --format rerun --out rerun.txt --no-source --require features
core: <%= standard_opts %> --tags @core
jenkins: <%= standard_opts %> --tags @jenkins
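To make the ERB at the top concrete, here is a small standalone Ruby sketch of the same ternary (the `rerun_opts` helper name and the file argument are mine, not part of the actual cucumber.yml): if rerun.txt is missing or empty, run the whole suite; otherwise, feed the recorded scenario locations back to Cucumber.

```ruby
# Mirrors the ERB ternary in cucumber.yml (a sketch, not the real config):
# an empty or missing rerun file means "run everything under features/",
# a non-empty one means "run only these failed scenario locations".
def rerun_opts(rerun_file = 'rerun.txt')
  rerun = File.file?(rerun_file) ? IO.read(rerun_file) : ""
  if rerun.to_s.strip.empty?
    "--format #{ENV['CUCUMBER_FORMAT'] || 'progress'} features"
  else
    "--format #{ENV['CUCUMBER_FORMAT'] || 'pretty'} #{rerun}"
  end
end
```

So after a run that leaves `features/login.feature:12` in rerun.txt, the rerun profile expands to something like `--format pretty features/login.feature:12 ...`, and Cucumber executes only that scenario.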
What happens here: I run cucumber, and during the initial run the rerun formatter writes every failed scenario into rerun.txt. Afterwards, I rerun only those failed tests with the following command:
cucumber -p rerun
The only downside is that it requires an additional command (which you can automate, of course) and that it can clutter up your test metrics if you have any in place.
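The automation mentioned above could look something like the wrapper below (a sketch of mine, not something Cucumber ships; it assumes `cucumber` is on your PATH and that the profiles are named as in the cucumber.yml shown earlier). It runs the full suite once, then re-runs only the failures when rerun.txt came out non-empty:

```shell
#!/bin/sh
# Hypothetical two-pass wrapper: first pass records failures in rerun.txt
# via the rerun formatter, second pass replays only those scenarios.

run_suite() {
  cucumber "$@"            # assumes cucumber is installed and on PATH
}

cucumber_with_rerun() {
  run_suite || true        # don't abort the script on first-pass failures
  if [ -s rerun.txt ]; then
    run_suite -p rerun     # rerun.txt is non-empty: replay the failures
  fi
}

# Usage: call cucumber_with_rerun from your CI step instead of bare cucumber.
```

Note that the second pass rewrites rerun.txt itself, so anything still failing after the rerun remains recorded for a further manual pass.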