Kentico Xperience 13 documentation and ASP.NET Core

Most documentation about running Xperience applications under ASP.NET Core can be found in a dedicated section: Developing Xperience applications using ASP.NET Core. The rest of the documentation still applies, but some code samples and scenarios might need slight modifications for Core projects.

All major differences between the MVC 5 and Core platforms are summarized in Migrating to ASP.NET Core.


Managing search engines for web analytics

Search engine tracking allows you to monitor the number of page views received from visitors who found the website using a search engine.

To correctly log incoming traffic from search engines, you need to define objects representing individual search engines. You can register search engines in the Search engines application. By default, the list contains some of the most commonly used search engines, but any additional ones that you wish to track must be added manually.

When creating or editing a search engine object, you can specify the following properties:

Display name

Sets the name of the search engine used in the administration and web analytics interface.

Code name

Sets a unique identifier for the search engine. You can leave the default (automatic) option to have the system generate an appropriate code name.

Domain rule

The system uses this string to determine whether website traffic originates from the given search engine. To work correctly, this string must always be present in the URLs of the search engine's pages, for example, google. for the Google search engine.

Keyword parameter

Specifies the name of the query string parameter that the given search engine uses to store the keywords entered when a search request is submitted. The system needs to know this parameter to log which search keywords visitors used to find your website.

Crawler agent

Sets the user agent value that identifies which web crawlers (robots) belong to the search engine.

Examples of common crawler agents are Googlebot for Google, or msnbot for Bing.

This property is optional. It is recommended to specify the crawler agent only for major search engines, or those that are relevant to your website.

If the site is accessed from an external website, the system parses the URL of the page that generated the request. First, the URL is checked for the presence of a valid Domain rule that matches the value specified for a search engine object. Then the query string of the URL is searched for the parameter defined in the corresponding Keyword parameter property to confirm that the referring link was actually generated by search results, not by a banner or other type of link. This allows the system to accurately track user traffic that is gained from search engine results.
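The actual tracking is implemented inside Xperience in C#, but the two-step check described above can be sketched in Python. The engine definitions below are illustrative assumptions that mirror the Domain rule and Keyword parameter properties, not the product's real data model or API:

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical registry of search engine objects. The domain rules and
# keyword parameters here are examples, not values read from Xperience.
SEARCH_ENGINES = [
    {"display_name": "Google", "domain_rule": "google.", "keyword_parameter": "q"},
    {"display_name": "Bing", "domain_rule": "bing.", "keyword_parameter": "q"},
]

def analyze_referrer(referrer_url):
    """Return (engine name, keywords) when the referring URL matches a
    registered search engine and carries search keywords, otherwise None."""
    parsed = urlparse(referrer_url)
    for engine in SEARCH_ENGINES:
        # Step 1: check the referring URL for the engine's Domain rule.
        if engine["domain_rule"] in parsed.netloc:
            # Step 2: confirm the Keyword parameter is present in the query
            # string, i.e. the link really came from search results rather
            # than a banner or some other link on the engine's domain.
            params = parse_qs(parsed.query)
            keywords = params.get(engine["keyword_parameter"])
            if keywords:
                return engine["display_name"], keywords[0]
    return None

print(analyze_referrer("https://www.google.com/search?q=kentico+xperience"))
# → ('Google', 'kentico xperience')
print(analyze_referrer("https://www.google.com/maps"))
# → None (matching domain, but no keyword parameter, so no search hit)
```

Note that both checks must pass: a referrer from the engine's domain without the keyword parameter is not counted as search traffic.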

Whenever a page is accessed (indexed) by a web robot with a user agent matching the Crawler agent property of one of the search engines registered in the system, a hit is logged in the Search crawlers web analytics statistic.
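The crawler check is a simple user agent match. As a hedged illustration (the agent-to-engine mapping below is an assumption, not Xperience's internal storage), the lookup could look like this:

```python
# Hypothetical mapping from Crawler agent values to registered engines,
# based on the example agents mentioned above.
CRAWLER_AGENTS = {"Googlebot": "Google", "msnbot": "Bing"}

def detect_crawler(user_agent):
    """Return the search engine whose Crawler agent value appears in the
    request's user agent string, or None for ordinary visitors."""
    for agent, engine in CRAWLER_AGENTS.items():
        if agent.lower() in user_agent.lower():
            # A match would log a hit in the Search crawlers statistic.
            return engine
    return None

print(detect_crawler("Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"))
# → Google
```

A case-insensitive substring match is used because real crawler user agent strings embed the agent name among version and URL details.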
