An introduction to web analytics and an insight into why you need it.
This post explains what the W3C validation tool is for and how to use it. It also highlights the importance of validating your HTML code.
Even though the terms bounce rate and exit rate sound the same, they are different web analytics metrics. Bounce rate: the percentage of visitors who leave the site from the landing page. Exit rate: the exit rate of a page is the percentage of visitors who leave the site from that particular page. Now.. [...]
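The distinction above can be made concrete with a small sketch. This is a hypothetical illustration, not code from the post: the session format (a list of page paths per visit) and the function names are assumptions.

```python
# Hypothetical sketch: bounce rate vs. exit rate from session data.
# A "session" here is just the ordered list of pages one visitor viewed.

def bounce_rate(sessions, page):
    """Of the sessions that LANDED on `page`, the percentage that
    left without viewing any other page (a bounce)."""
    landings = [s for s in sessions if s[0] == page]
    if not landings:
        return 0.0
    bounces = [s for s in landings if len(s) == 1]
    return 100.0 * len(bounces) / len(landings)

def exit_rate(sessions, page):
    """Of ALL views of `page`, the percentage that were the last
    page of a session (an exit)."""
    views = sum(s.count(page) for s in sessions)
    if views == 0:
        return 0.0
    exits = sum(1 for s in sessions if s[-1] == page)
    return 100.0 * exits / views

sessions = [["/home"], ["/home", "/about"], ["/blog", "/home"]]
print(bounce_rate(sessions, "/home"))  # 50.0  (1 bounce of 2 landings)
print(exit_rate(sessions, "/home"))    # ~66.7 (2 exits of 3 views)
```

Note the different denominators: bounce rate only counts sessions that started on the page, while exit rate counts every view of it.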
How do you personally feel when you end up on a page that says ‘Page Not Found’ on a site you were browsing? Do you like that? I’m sure there is no reason for you to like it.. So would you purposely keep any link on your site that can end up in ‘Page [...]
Comparing a website optimized for SEO with a default installation of WordPress, one can conclude that WordPress lacks SEO friendliness, apart from the fact that it uses feeds, proper navigation and linking, pings search engines, etc. Some of the things I observed are: you can’t have a separate title other than the post title, [...]
I assume that you have gone through the initial parts of my SEO Tutorial. Let’s get into the topic of how to do on-page optimization. Search engine optimization basically consists of two stages: on-page optimization and off-page optimization. On-page optimization refers to the SEO activities we perform on a page in our [...]
When you hear the name WordPress, what comes to mind is a blog, as it has become widely used blogging software. But it is much more advanced software than a simple blogging tool, where you can have a blog post live in a flash. WordPress has support for pages as well as posts. Pages, well [...]
In an affiliate tracking system, there is usually a referring URL containing the unique ID of the referrer, used to credit the commission. If affiliates use their referring URLs as links from their sites, each one is counted as a backlink pointing to the site’s URL carrying the affiliate’s unique ID. Now think of [...]
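To make the referring-URL idea concrete, here is a minimal sketch of pulling the affiliate ID out of such a link. The query parameter name `aff_id` is an assumption for illustration; real affiliate systems use all sorts of parameter names.

```python
# Sketch: extract the affiliate's unique id from a referring URL.
# The "aff_id" parameter name is hypothetical.
from urllib.parse import urlparse, parse_qs

def affiliate_id(url):
    """Return the affiliate id embedded in a referring URL, or None."""
    query = parse_qs(urlparse(url).query)
    ids = query.get("aff_id")
    return ids[0] if ids else None

print(affiliate_id("https://example.com/?aff_id=1234"))  # 1234
print(affiliate_id("https://example.com/plain-link"))    # None
```

From a search engine's point of view, each such link is a backlink to `example.com/?aff_id=...` rather than to the plain site URL, which is exactly the situation the post goes on to discuss.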
In an earlier post I mentioned that using a robot directive file, robots.txt, you can control whether a document can be crawled by a search engine or not. Generally, if there is no robots.txt file in the root of your website, or it has no instructions written in it, it is interpreted by the [...]
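You can see how a well-behaved crawler interprets these directives using Python's standard `urllib.robotparser`. The rules below are illustrative; note that an empty rule set, like a missing robots.txt, disallows nothing.

```python
# Sketch: how a crawler applies robots.txt rules, using the
# standard library's robots.txt parser. The rules are illustrative.
from urllib.robotparser import RobotFileParser

rules = [
    "User-agent: *",
    "Disallow: /private/",
]

rp = RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("*", "https://example.com/private/page.html"))  # False
print(rp.can_fetch("*", "https://example.com/public/page.html"))   # True

# No instructions at all: everything is allowed.
empty = RobotFileParser()
empty.parse([])
print(empty.can_fetch("*", "https://example.com/anything.html"))   # True
```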
What a search engine needs is to fetch information about your site. For that reason, the crawlers keep visiting your site. Now, how do you keep your site crawler friendly? The crawler first comes to one of your pages and picks up its content, from which it can take out information such as your [...]
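As a toy illustration of the first step above, here is a sketch of parsing a fetched page and pulling out one basic piece of information, the `<title>`, with the standard library's `html.parser`. A real crawler extracts far more than this.

```python
# Toy sketch: extract the <title> from a fetched page's HTML,
# the kind of basic information a crawler picks up first.
from html.parser import HTMLParser

class TitleExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

page = "<html><head><title>Crawler Friendly Pages</title></head><body>...</body></html>"
parser = TitleExtractor()
parser.feed(page)
print(parser.title)  # Crawler Friendly Pages
```

Keeping such elements clean and descriptive is a large part of what "crawler friendly" means.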