Question

For a project I need asynchronous loading of HTML content using XMLHttpRequest or jQuery.

In concrete terms, I make a request and load the HTML response into a div container.

(This is just some example code:)

<!DOCTYPE html>
<html>
<head>
<script>
function loadXMLDoc()
{
    var xmlhttp;
    if (window.XMLHttpRequest)
    {// code for IE7+, Firefox, Chrome, Opera, Safari
        xmlhttp=new XMLHttpRequest();
    }
    else
    {// code for IE6, IE5
        xmlhttp=new ActiveXObject("Microsoft.XMLHTTP");
    }
    xmlhttp.onreadystatechange=function()
    {
        if (xmlhttp.readyState==4 && xmlhttp.status==200)
        {
            document.getElementById("myDiv").innerHTML=xmlhttp.responseText;
        }
    };
    xmlhttp.open("GET","testFile.php",true);
    xmlhttp.send();
}

</script>
</head>
<body onload="loadXMLDoc()">

<div id="myDiv"></div>

</body>
</html>

Now the question: As far as I know, search engines don't see the content in #myDiv, am I right? What are my options for making this content crawlable?

Appendix: The request may also be cross-domain!


Solution

@Evgeniy's answer is the best solution for something search engines are not designed to do, which is crawl dynamic pages.

You should clarify with your stakeholder whether the content actually needs to be loaded dynamically, or whether it only needs to be shown/hidden dynamically. If it's the latter, you can include all the content in the initial markup and then show/hide it with CSS display:none or jQuery toggle (same effect). If your content is bandwidth-heavy, like photos or videos, this may be a non-starter, but if it's just text, then text is cheap. Preloading and showing/hiding will let search engines crawl the content and save you a heap of trouble.
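A minimal sketch of that approach, assuming the element already contains its content in the initial HTML (the helper name and markup are illustrative, not from the original post):

```javascript
// The content lives in the initial markup, e.g.:
//   <div id="myDiv">Crawlable text that search engines can index.</div>
// JavaScript only flips its visibility, so nothing is loaded asynchronously.
function toggleVisibility(el) {
    // Flip between hidden and the default display value.
    el.style.display = el.style.display === 'none' ? '' : 'none';
    return el.style.display;
}
```

With jQuery, `$('#myDiv').toggle()` does the same thing; either way the text is present in the server-rendered HTML, which is what the crawler sees.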

OTHER TIPS

Good question!

As far as I know, AJAX Crawling can help you, but it needs serious updates on both the server and client side.
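For context, Google's AJAX Crawling scheme (since deprecated) worked by having the crawler rewrite `#!` fragment URLs into an `_escaped_fragment_` query parameter, which the server was expected to answer with a static HTML snapshot. A rough sketch of that URL mapping, for illustration only:

```javascript
// Sketch of the deprecated AJAX Crawling URL rewrite:
// "http://example.com/page#!state" becomes
// "http://example.com/page?_escaped_fragment_=state".
function toEscapedFragmentUrl(url) {
    const i = url.indexOf('#!');
    if (i === -1) return url; // no hash-bang fragment, nothing to rewrite
    const base = url.slice(0, i);
    const fragment = encodeURIComponent(url.slice(i + 2));
    const sep = base.includes('?') ? '&' : '?';
    return base + sep + '_escaped_fragment_=' + fragment;
}
```

The server-side part (serving a pre-rendered snapshot for `_escaped_fragment_` requests) is the "serious update" mentioned above.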

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow