Question

I am currently developing a website that relies heavily on the JavaScript (JS) History API.

The problem is that this history is based on the pathname of the URL. So, for example, with S3 as static hosting, if the very first request is for a URL like www.example.com/about, S3 tries to serve the /about key (folder) instead of loading the root (www.example.com) and letting the JS handle the /about path. Of course, this behavior is perfectly normal for static hosting, which is what makes it tricky.
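To make it concrete, here is a minimal sketch of the kind of History API routing I mean; renderRoute is just a placeholder for whatever view logic the app really uses:

    // Minimal History API routing sketch; renderRoute is a stand-in
    // for the app's real view-rendering logic.
    function renderRoute(path) {
      document.body.textContent = 'Rendering view for ' + path;
    }

    function navigate(path) {
      history.pushState({}, '', path);   // change the URL without a server request
      renderRoute(path);
    }

    window.addEventListener('popstate', function () {
      renderRoute(location.pathname);    // handle back/forward buttons
    });

    // This initial call is what breaks on S3: a first request for /about
    // has to return index.html itself before this script can run.
    renderRoute(location.pathname);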

I am just wondering whether this can be made to work in my case with S3, so that every link under the root (www.example.com/) always resolves to the root website. In short, I'm expecting S3 to ignore the pathname entirely: a request for www.example.com/about would simply load the index.html located at www.example.com/index.html, while keeping the URL (www.example.com/about) in the address bar.

If this is still not clear, think of JS's location.hash (#). The browser ignores the hash when making the request to the static web server; once the page has loaded, the JS it contains handles the given hash. So my case could be solved exactly by using "#" in my URLs, but this time I want clean URLs, without "#".
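For comparison, the hash-based version of the same idea, which works on any static host because the fragment never reaches the server:

    // Hash routing sketch: the server only ever sees "/", so index.html is
    // always served, and the JS routes on the fragment.
    function renderRoute(hash) {
      document.body.textContent = 'Rendering view for ' + (hash || '#/');
    }

    window.addEventListener('hashchange', function () {
      renderRoute(location.hash);
    });

    renderRoute(location.hash);   // route for the hash the page was opened with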

Thank you for your help. :)


Solution

There's no way to have S3 ignore the path and just return the root file (index.html). The closest thing to that is to use an error document that catches all requests: although that loads your HTML file, it always returns a 404 status.
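If you do go the error-document route, it is part of the bucket's website configuration. Here is a sketch using the AWS SDK for JavaScript v3; the bucket name and region are placeholders, and the caveat above still applies (deep links come back with a 404 status):

    // Sketch: point both the index and error documents at index.html so every
    // unknown path still serves the SPA, but with a 404 status code.
    const { S3Client, PutBucketWebsiteCommand } = require('@aws-sdk/client-s3');

    const s3 = new S3Client({ region: 'us-east-1' });   // placeholder region

    async function enableSpaFallback() {
      await s3.send(new PutBucketWebsiteCommand({
        Bucket: 'www.example.com',                      // placeholder bucket name
        WebsiteConfiguration: {
          IndexDocument: { Suffix: 'index.html' },
          ErrorDocument: { Key: 'index.html' },         // catch-all for missing keys
        },
      }));
    }

    enableSpaFallback().catch(console.error);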

If you don't want to use hashes, you could use the URL query string for paths instead, like this:

www.yoursite.com/?folder/page.html

This way the request loads your site's index file in the browser, and your JS can read the query string to decide which view to render.
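A minimal sketch of how the client side could pick that up (the parsing here is just one way of reading the part after "?"):

    // Sketch: route from the query string instead of the pathname or hash.
    // location.search includes the leading '?', so strip it first.
    function renderRoute(path) {
      document.body.textContent = 'Rendering view for ' + (path || 'the home page');
    }

    var route = location.search.replace(/^\?/, '');   // e.g. "folder/page.html"
    renderRoute(route);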

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow