SEO is a big topic for many people doing online business today, and learning all the ins and outs of search engine optimization vocabulary is an uphill task. However, you don’t need to worry about this anymore since we’ve got your back with this SEO glossary, which contains useful definitions of all the major terms used in SEO.
So, let’s delve deeper into the SEO glossary.
200 OK Status
This is an HTTP response sent by a server to a browser, indicating that the requested resource, for instance a web page, was found on that server. The server sends the 200 OK status behind the scenes; the browser then receives the page and displays it to the user.
301 Redirect (Permanent Redirect)
301 Redirects are commands carried out by web servers, such as IIS or Apache, that redirect browsers requesting a particular URL to another URL. One of the most common reasons for a 301 Redirect is that a page's URL has changed. To make sure that external and internal links to the original URL continue sending traffic to the page, a 301 redirect is created, which instructs the web server to redirect traffic from the old URL to the new one. John Mueller from Google says that 301 redirects transfer PageRank to the new URLs.
302 Redirect (Temporary Redirect)
This is an instruction issued by web servers that redirects web browsers from one URL to another. 302 Redirects are mostly used in scenarios where the redirect is considered temporary. If you want the content to remain permanently on the new URL, you should use a 301 (permanent) Redirect instead. Google says that any 30x redirect transfers link equity to the new URL.
404 Error
When you request the URL of a page that doesn't exist, whether it never existed, was moved to another URL, or the URL contains a typo, the web server returns a 404 error, indicating that the web page you are looking for cannot be found.
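The status codes above can be summarized in a small sketch. This is an illustrative lookup, not part of any real library; the one-line descriptions are paraphrases of the definitions in this glossary.

```python
# Illustrative lookup of the HTTP status codes discussed above and what
# each one means from an SEO point of view.
STATUS_MEANINGS = {
    200: "OK - the resource was found and served",
    301: "Permanent redirect - link equity passes to the new URL",
    302: "Temporary redirect - the move is not meant to be permanent",
    404: "Not found - the requested page does not exist",
}

def describe_status(code: int) -> str:
    """Return a short description of an HTTP status code."""
    return STATUS_MEANINGS.get(code, "Unknown status code")

print(describe_status(301))  # Permanent redirect - link equity passes to the new URL
```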
Above the Fold
This is the part of a web page that a visitor sees before they start scrolling down. The content that appears in this area is considered pivotal to the visitor's decision: the visitor can choose to stay on the page, scroll down, or leave.
Affiliate
This is a common term in online marketing for a site that promotes the products of another business and earns a commission for every sale or referral originating from its promotions. As a seller, you will need to provide the affiliate with a creative, that is, a banner or text ad that will be used to promote your services or products.
Certain affiliate relationships are created through personal communication between the involved parties. Some companies, like Amazon, allow sites to register for affiliate programs using a registration form on Amazon’s website. In this case, Amazon provides the affiliate website with a code to insert in their site, and the code serves the ads.
Aggregator Sites
These are websites that provide links to the latest sources or articles on particular topics like financial news, world news, or other content niches. Some aggregation sites have editors whose sole responsibility is to manually select the articles or sources to link to, while others rely on bots to find and post the links.
Aggregation sites that post headlines automatically may be viewed by Google as having negative or negligible value, while sites where the titles are original, the content is curated, and editorial content is added have the potential to rank positively.
Alexa Rank
Alexa ranks the top domains according to the amount of traffic they attract, measured in page views and unique visitors. This rank is based on data Alexa receives from its browser toolbar, as well as other sources it does not identify. Alexa ranks the site with the most page views and unique visitors number 1, and less trafficked websites receive higher (worse) rank values.
ALT Text
This is short for alternative text, which is displayed in place of an image that did not load. The main aim of ALT Text is to give a brief description of an image, allowing a visitor to get an idea of the missing image's content.
Anchor Text (link text)
This is the clickable text of a hyperlink on a web page. Search engines place importance on anchor text when evaluating the relevance of a linked page to the keywords appearing in that text.
API
This is short for Application Programming Interface, an interface a business develops to enable computers to access data and tools that the business hosts. Every API is normally programmed to support certain commands, protocols, and syntax. Most APIs are gated, and interested parties are required to apply for access. An example of using an API might be developing an app that enables online users to update the status of their accounts by connecting to Facebook's Graph API.
App Indexing
This refers to search engines indexing content found inside mobile apps, which allows them to offer search results from those apps in addition to results from desktop or mobile websites. If the content the user searches for exists in an application installed on the user's phone, Google will provide a "deep link" that leads the user straight to that content. If the user doesn't have the application, Google will show an install card for the application in the search results.
App Packs (App Box)
This is a feature on Google's mobile search results page that displays applications relevant to the user's search query. Every app listing contains the app's logo, rating, name, cost, and number of downloads. Clicking one of the listings takes the user to the application's product page in the appropriate app store.
Autocomplete
These are the search terms displayed by search engines in real time as a user starts typing a query into the search box. Autocomplete takes into account the frequency of the different search term options and the user's browsing history.
Backlinks (inlink, incoming link, inbound link)
These are hyperlinks on third-party websites, that is, a page on site A pointing to a web page on site B. The term dates back to the early years of SEO, when it was common practice to link to another website that would, in return, link back to yours. Today, backlinks remain one of the factors search engines use to determine the authority and quality of a website, and they can influence its ranking.
Black Hat SEO
This is the use of deceptive or manipulative techniques to obtain better search engine rankings. These practices include automatic content generation, participation in link-building schemes, and placing hidden links or text on web pages, all of which Google disapproves of. Websites that use black hat SEO tactics can be demoted automatically by Google algorithms like Penguin, or the search engine's quality team can take manual action against their pages. Both Google and Bing have categorized a number of SEO practices as black hat.
Bookmarks
Also known as Favorites, bookmarks are shortcuts to web pages that users save in their browsers so they can access those pages quickly in the future. Some web browsers allow users to organize their bookmarks into folders, and users can reach them through the browser's menu or a dedicated bookmarks bar that keeps them always visible.
Bot (crawler, spider, robot)
This is a script or software application programmed to perform a series of tasks over the internet automatically. Common examples are the bots developed by search engines that crawl sites on the World Wide Web, gathering and indexing the content of web pages. Another common type is the chatbot, which many companies use to answer customer service queries.
Note that certain bots are created for malicious purposes, such as performing DDoS attacks that flood web servers with requests; the aim of these malicious bots is to prevent the web servers from handling real traffic.
Bounce Rate
This is the percentage of visits to a web page that end with the visitor leaving the website without viewing any other page. To calculate it, divide the number of visits in which the user both entered and left the site through that page by the total number of visits to the page. Bounce rate is one of the essential metrics for measuring the quality of a web page's content.
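The calculation described above can be sketched in a few lines. The numbers are made up for illustration.

```python
# Bounce rate as described above: single-page visits divided by total
# visits to the page, expressed as a percentage.
def bounce_rate(single_page_visits: int, total_visits: int) -> float:
    """Return the bounce rate as a percentage (0.0 when there are no visits)."""
    if total_visits == 0:
        return 0.0
    return single_page_visits / total_visits * 100

# 40 of 200 visits ended without viewing another page.
print(bounce_rate(40, 200))
```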
Breadcrumbs
This is an element on a SERP or website page that displays the location of a web page in the website's hierarchy. Breadcrumbs can be purely informational, or they can let users click the hierarchical elements for navigation. A typical breadcrumb structure looks like: Home Page > Type > Web Page Being Viewed
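A breadcrumb trail like the one above is simply the page's position in the hierarchy joined by a separator. A minimal sketch (the page names are placeholders):

```python
# Render a breadcrumb trail from a page's position in the site hierarchy,
# using the conventional " > " separator shown above.
def breadcrumb_trail(hierarchy: list) -> str:
    """Join the hierarchy levels, top level first."""
    return " > ".join(hierarchy)

print(breadcrumb_trail(["Home Page", "Type", "Web Page Being Viewed"]))
# Home Page > Type > Web Page Being Viewed
```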
This is a web page that’s been saved by a user’s browser on a smartphone or computer, or by a search engine on its servers. Search engines usually cache pages to give access to them, even at times when the site’s services cannot be accessed. Web browsers, on the other hand, cache pages to increase the loading speed for the paged that the user had previously accessed.
Call to Action (CTA)
A CTA is an online marketing term that refers to an audible or visual message encouraging users to take an immediate action, like "Start Your Free Trial," "Buy Now," or "Call Now." Imperative phrasing is typically used to prompt users to act. Calls to action worded to stress the benefit to the user are considered more effective, while generic ones, like "Click Here," usually lead to a lower percentage of users taking the desired action.
Canonical Tag
A canonical tag is a link element placed in the header section of a web page, used when a URL has different query string parameters or when the URL's content might be duplicated on other websites or internal pages. If a URL has query string parameters, a canonical URL is added in the header section of the web page; this prevents search engines from treating each variation of the URL as a unique web page with duplicate content. Canonical tags inform search engines that the URL in the canonical tag should be treated as the sole source of the content to be included in search results.
For example: <link rel="canonical" href="https://article.example.com/shirts/navy-blue-shirts-are-awesome" />.
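The "query string parameters" problem above can be illustrated with a small normalization sketch. This is an assumption about one possible policy (dropping the query string and fragment entirely), not a universal rule; real sites often keep parameters that change the content.

```python
# Collapse URL variations onto one canonical form by dropping the query
# string and fragment, keeping only scheme, host, and path.
from urllib.parse import urlsplit, urlunsplit

def canonical_url(url: str) -> str:
    """Return the URL without its query string or fragment."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

# Both variations below map to the same canonical URL.
print(canonical_url("https://example.com/shirts?utm_source=news&sessionid=42"))
print(canonical_url("https://example.com/shirts#reviews"))
```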
Carousel (SERP feature)
This is a sliding row of images that mostly appears at the top of Google's search engine results page, though it can also appear further down. Every item in the carousel shows an image along with a caption. Examples of search queries that can trigger carousels include "best Ivy League colleges," "Chicago Bears roster," and "best movies list."
CCTLD
CCTLD stands for Country Code Top Level Domain. These are two-letter internet domains reserved for individual countries and managed by organizations appointed by the Internet Assigned Numbers Authority (IANA). Examples include .us (USA), .au (Australia), .uk (United Kingdom), and .fr (France).
Citations
These are online references to a business's NAP (name, address, and phone number) on third-party websites or directories. They are essential for local SEO, since Google relies on citations when evaluating a local business's authority. Citations, reviews, and backlinks are key factors that determine whether a business appears in Google's Local Pack, and they also affect its position in Google's Local Finder. For a citation to boost your SEO, the NAP in the citation should exactly match the NAP on your Google My Business listing and your business's website.
Click Fraud
This is a black hat technique used to artificially increase the number of clicks on PPC (Pay Per Click) advertising. There are two main categories of such fraudulent ad clickers:
- Website owners who sell PPC adverts and want to increase their income from the ads on their website.
- The advertiser's competitors, who click on the ads to waste the advertising budget on useless clicks that do not create leads.
It can be a big challenge to identify and block click fraud, since the users clicking on these ads are anonymous, and evasion techniques such as using a VPN or deleting cookies are very simple.
Cloaking
This is a technique where a website presents one version of a page to online users and a different version to search engine crawlers. Both Google and Bing prohibit this practice, as it is used for deceptive purposes. In some cases of cloaking, sites present search engines with harmless content, under which the sites get listed in search results, whereas users get gambling, porn, or other content when they land on these pages. All forms of cloaking are risky, even when the content isn't deceptive.
CMS (Content Management System)
A CMS is a software program used to upload, organize, and modify content published on the internet. One of the most commonly used CMSs today is WordPress, which is used by millions of people across the globe. Other common CMSs include Joomla, Blogger, and Drupal. Certain CMSs, like WordPress, support third-party SEO plugins that help website owners optimize their content for search engines.
Comment Spam
Comment spam is content posted in the comment sections of blogs, forums, and other online communities by black hat advertisers or bots. The comments are mostly randomly generated or repeated text intended to propagate SEO links, advertising, or other spammy content.
Conversion
This is an online advertising term for the completion of a desired action by a visitor to an app or website. One of the most common conversion actions is the purchase of a service or product. Other conversion actions include registering for an event or newsletter, signing up for a free trial, and other desired actions. The total number of conversions is used to measure the success of a landing page or of the advertising campaign that attracted visitors to the website.
Conversion Rate
A conversion rate is a metric that measures the effectiveness of an advertising campaign or landing page. To get the conversion rate of a page, divide the total number of conversions by the total number of visitors to the landing page. In PPC, you divide the total number of conversions by the number of visitors who clicked on the ad promoting the landing page.
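Both calculations above share one formula; only the denominator changes. A sketch with made-up numbers:

```python
# Conversion rate: conversions divided by the relevant visitor count,
# expressed as a percentage.
def conversion_rate(conversions: int, visitors: int) -> float:
    """Return 0.0 when there are no visitors, otherwise a percentage."""
    return conversions / visitors * 100 if visitors else 0.0

# Landing page: 30 conversions out of 1,000 visitors.
print(conversion_rate(30, 1000))
# PPC: the denominator is the number of visitors who clicked the ad.
print(conversion_rate(30, 600))
```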
Cookie
A cookie is a small file that a site saves to a user's computer when they visit a web page. Cookies store information such as what the user added to a shopping cart on an eCommerce website or the history of pages they visited on the site. One of the major functions of cookies is to determine whether a user is logged into an account on the website: when a user logs in, a cookie is saved recording that fact, which allows them to move from page to page without having to sign in again.
CPC (Cost Per Click)
CPC is a measure used in online advertising to express the cost incurred by an advertiser every time a user clicks on the advertiser's ad. In Cost Per Click advertising, an advertiser is not charged for views of an ad, only for clicks.
CPM (Cost Per Mille)
This is an acronym for Cost Per Mille, where mille is Latin for 1,000. Cost Per Mille is the cost per 1,000 views of an online advert; the measure is also used in TV, radio, and other media. It is the standard metric used in the online advertising industry to compare the prices of different ad options.
Crawl Budget
Crawl budget is the number of URLs that a search engine's spiders crawl on a site in a given period. A website's crawl budget is influenced by the crawl rate that Googlebot determines to be optimal for the website, and by crawl demand, which is determined by the site's popularity and by the need to keep Google's search results fresh for a given query.
Crawl Depth
This refers to how far down into a site's page hierarchy a search engine spider crawls. The homepage of a website is at the top of the hierarchy. Pages linked directly from the homepage are at the first level, pages linked from first-level pages are at the second level, and so on. The closer a page is to the homepage, the more important it is considered, and thus the more significant it is to crawl. Large sites with many pages and levels require strong domain authority for the crawl depth to cover the entire website. Typically, the crawl depth of the initial and subsequent crawls will differ.
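The "levels" described above are just each page's distance from the homepage, which a breadth-first traversal computes directly. The link graph below is made up for illustration.

```python
# Compute each page's depth (distance from the homepage) in a link graph
# with a breadth-first traversal, matching the "levels" described above.
from collections import deque

def page_depths(links: dict, homepage: str) -> dict:
    """Return {url: depth}, where the homepage has depth 0."""
    depths = {homepage: 0}
    queue = deque([homepage])
    while queue:
        page = queue.popleft()
        for target in links.get(page, []):
            if target not in depths:  # first time this page is reached
                depths[target] = depths[page] + 1
                queue.append(target)
    return depths

site = {
    "/": ["/about", "/products"],
    "/products": ["/products/shirts"],
}
print(page_depths(site, "/"))
# {'/': 0, '/about': 1, '/products': 1, '/products/shirts': 2}
```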
Crawl Errors
These are unsuccessful attempts by search engine spiders to crawl a site. Google Search Console divides crawl errors into the following categories:
- Site errors – these affect the entire site. They comprise problems with server connectivity, DNS, and the robots.txt file.
- URL errors – these affect particular URLs. They include 404 errors, soft 404 errors, Not Followed, and Access Denied.
If crawl errors are significant, they can be an indication of poor website health, and they can negatively affect the user experience as well as the crawl frequency and crawl depth.
Crawl Frequency
This refers to how often search engine spiders crawl the pages of a website to update the search engine's index with new and updated web pages. The search engine's algorithms determine which sites to crawl, and how often, depending on each site's domain authority and how often its content is updated. Websites with higher domain authority and more frequently updated content are crawled more often.
Crawl Rate
Crawl rate represents the number of parallel connections Googlebot uses to crawl a site, together with the time it waits between fetches. If a site is fast, Googlebot increases the crawl rate; if a site is slow or returns a considerable number of 5xx errors, Googlebot reduces it.
CSS (Cascading Style Sheets)
These are instructions to web browsers that determine many characteristics of a web page's elements, such as their position on the page, text size, and the effects that occur on mouse-over. CSS code can be placed in the header section of an HTML page or in a separate file. Using separate CSS files allows several web pages to call up the styles defined in one CSS file.
CTR (Click Through Rate)
CTR is a metric that measures the percentage of users who viewed a link or an advert and then clicked on it. Click Through Rate is calculated by dividing the number of users who clicked on the link or ad by the number of users who viewed it.
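The calculation is a single division, sketched here with made-up numbers:

```python
# Click Through Rate: clicks divided by views, expressed as a percentage.
def click_through_rate(clicks: int, views: int) -> float:
    """Return 0.0 when there are no views, otherwise a percentage."""
    return clicks / views * 100 if views else 0.0

# 25 clicks out of 1,000 views of the ad.
print(click_through_rate(25, 1000))
```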
Cybersquatting
This is the practice of registering domain names related to popular companies or brands with the intention of reselling them at a higher price. In the US, it is illegal to register, traffic in, or use an internet domain name with the intent of profiting from a trademark that belongs to someone else.
Deep Link
This term describes links that point to content found inside smartphone applications. For Google to index the content inside these applications, the app developers must sign up with Google and allow it to index their applications. Google crawls the applications and then shows search results with deep links to particular "pages" inside them. Users who have the relevant application on their smartphone and click such a link are taken directly to the content; otherwise, they are asked to install the application.
Disavow Links
Disavow links is a tool in Google Search Console that allows users to request that Google not take particular spammy links into consideration when assessing their websites. According to Google, disavowing links is an advanced feature that should be used with caution, as it can negatively affect a website's ranking. Google recommends using this tool only when you believe you have a considerable number of artificial, spammy, or low-quality links pointing to your website, and when you are sure those links are causing issues for your site.
Discover more places
This is a carousel-type Google search engine results page feature that appears together with the Local Pack when a user searches for local restaurants. The carousel appears towards the bottom of the search engine results page, displaying images, each associated with a different category. Clicking one of the items opens Google's Local Finder with a list of restaurants in that category, together with a map.
DNS (Domain Name System)
The DNS is a large network of servers spread across the globe that acts as a giant directory for translating domain names (the human-friendly addresses users type into their browsers) into the IP addresses that provide the route to the servers hosting the sites.
DNS server (Domain Name System Server)
A DNS server hosts a directory that maps domain names, the human-friendly addresses, to their associated IP addresses, for example, yourdomain.com to 184.108.40.206. When a user types in a URL, their web browser sends a query to their Internet Service Provider's (ISP) DNS server, which returns the IP address of the server hosting the requested site, along with the route to access that server.
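At its core, the mapping a DNS server performs is a name-to-address lookup. The sketch below is a toy in-memory stand-in for that directory; the domain names and addresses are illustrative placeholders, not real records.

```python
# A toy stand-in for a DNS directory: map human-friendly domain names to
# IP addresses, as a real DNS server does at a vastly larger scale.
DNS_DIRECTORY = {
    "yourdomain.com": "184.108.40.206",
    "example.org": "93.184.216.34",
}

def resolve(domain: str) -> str:
    """Look up the IP address for a domain, mimicking a DNS query."""
    try:
        return DNS_DIRECTORY[domain]
    except KeyError:
        # Real DNS answers a failed lookup with NXDOMAIN ("no such domain").
        raise LookupError(f"NXDOMAIN: {domain} not found")

print(resolve("yourdomain.com"))  # 184.108.40.206
```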
Domain Name
This is the address that acts as the human-friendly location of a site or resource on the internet. When you type a domain name into your browser's address bar, the browser queries a DNS server to find the location of the domain's website on the internet. A DNS lookup retrieves the IP address associated with the domain name, and the browser then navigates to the site's server and requests the resource from it.
Domain Name Registrars
This is an organization, usually commercial, that manages and sells the ownership records of domain names; these organizations are licensed by top-level domain registries. Domain owners, in turn, set the name servers for their domains by notifying the registrar, usually via the registrar's website.
Doorway Page
Also known as a gateway page, bridge page, or jump page, a doorway page is a web page optimized for one or several keywords with the objective of funneling users to another page. Sometimes the redirect is automatic, using a meta-refresh script; sometimes the page tricks users into clicking a link that leads to the intended page. Search engines consider this misleading and prohibit the practice; sites caught doing it are penalized heavily or removed from the search engine's index.
Duplicate Content
This is content that appears on more than one web page, either within one site or across several websites. The major problem with duplicate content is that search engines don't want to display more than one listing with identical content in their search results. Duplicate content can occur on the same site by mistake or through the use of URL parameters: the parameters change the URL, but since the landing page is the same, the content is identical, yet search engines see two URLs with duplicate content.
There are several ways to solve duplicate content on a site: create a 301 redirect from the duplicate page (if it exists by mistake), add a canonical tag pointing to the original version, or use the parameter-handling tools in Google Search Console and Bing Webmaster Tools. When the content comes from other sites, the pages carrying the duplicate content should include a canonical tag pointing to the original source page.
Favicon
This is an icon associated with a web page, usually displayed in the address bar or on the browser tab together with the page's title. Typically, the favicon resembles the website's logo, and it allows users to visually identify sites when they have several browser tabs open. A favicon should be 16 by 16 or 32 by 32 pixels, use 8- or 24-bit color, and be in GIF, ICO, or PNG format.
Featured Snippet (SERP Feature)
A featured snippet is a short answer that shows up just above the organic search results on the first page of Google after a user enters a query seeking specific information. Featured snippets can appear in the format of a numbered or bulleted list, a table, a YouTube video player, or a paragraph.
Fetch as Google
This is a tool in Google's Search Console that allows you to test whether Googlebot can fetch a URL on your website, as well as how it renders the page. After the fetch attempt completes, you will see a status such as:
- Error message
If you receive a status other than Completed, you can try to troubleshoot the issue. The render option shows you how the fetched page looks once Googlebot crawls it. When a page is successfully fetched, you can request that Google re-crawl or re-index it. Google limits fetches to 10 per day.
FTP (File Transfer Protocol)
This is a protocol used to transfer files between computers. Users who want to transfer files typically use software with a user-friendly interface that moves the data over FTP. FTP is also supported natively on Unix systems, where it uses textual commands at the command prompt. Two common FTP programs are Core FTP LE and FileZilla.
Google Analytics
This is a free online website analytics service, which also offers premium features. Google Analytics has several functions, like reporting on web traffic and tracking and reporting on goal completions such as newsletter signups and online purchases, provided you have installed the tracking script on your site. Google Analytics can also give you PPC reports if you have connected your AdWords account to your Google Analytics account.
Google Analytics Tracking Code
This is the JavaScript snippet that Google Analytics provides for insertion into a website's pages. Once installed, the code collects visitor data and sends it to the associated Google Analytics account, enabling the reports described above.
Google Cached Links
Google usually caches, or saves, the latest version of a web page, so that when the page doesn't load quickly or is unavailable, you can go back to the search results page and choose to view a cached version of that page. You can check whether a page has a cached version by clicking the green down arrow that appears to the right of the URL in a search result, then selecting "Cached" to view the cached version of the page.
Google Dance
This term was coined to describe the colossal rank variations that would happen in the past, when Google updated its ranking algorithm each month. Rank changes became less dramatic once Google started implementing smaller, more frequent algorithm updates, and the term has largely fallen out of use in SEO.
Google Fonts
Previously known as Google Web Fonts, Google Fonts is a cloud-based repository of open-source fonts used to display text on websites. Currently, there are over 800 Google Fonts available in the database. When browsing Google Fonts (https://fonts.google.com/), you can filter the fonts by category (like Handwriting or Serif), by style (bold, italic, etc.), and by language.
Google for Jobs
Google for Jobs is a SERP feature that displays three job listings in response to search queries explicitly or implicitly looking for job opportunities. The job listings in this feature come from different sources, such as job listing sites.
Google Home Services Ads
Local service providers like locksmiths, plumbers, house cleaners, electricians, and other professionals can use Google Home Services Ads to promote their services in a format that’s similar to Google Local 3 Pack on the search engine results page. When users search for these services, the ads about the service providers from their area will appear on top of the Google SERPs.
According to Google, certain factors affect the ranking of home services adverts, including location, proximity to customers, the business's review score, the number of reviews the business has received, responsiveness to customer requests and inquiries, business hours, and whether there are any repeated or serious complaints about the business.
Next to every home services ad, there's a green shield with a checkmark, accompanied by the words "Google Guaranteed." Google prescreens the service providers to ensure that they meet certain standards and are entitled to run a home services ad.
Google Hummingbird
Google Hummingbird is an algorithm released by Google in August 2013. It interprets user intent based on the context of the search term and then returns the search results that best match that intent.
Google Juice
This is an informal term for the rank authority passed from one web page to another by linking from the first page to the second.
Google Map Maker
This was a tool previously used to suggest edits to Google Maps, either updating the map itself or a business shown on it. Google Map Maker was retired in March 2017, and the majority of its functionality is now found in Google Maps.
Google My Business Directory
Also known as the GMB Directory, this is a global index of all the businesses managed through Google. Businesses can create listings for themselves, and users can access that information on Google Maps and Google Search. Generally, a listing includes information like the business's name, address, and phone number (NAP) and its business hours, among other details. Google's Local Finder and Local Pack rely on this information to determine the exact location of a business.
Google Optimize
Digital marketers can use Google Optimize, a free tool, to increase the conversion rates of their landing pages. For the tool to make the necessary changes to a page's CSS and HTML, a small script must be added to that page. The tool's interface is a visual editor, so users don't need to know how to code. Once you have modified your page, Google Optimize performs an A/B test of the modified and original pages and then reports on the two versions. Google Optimize can be integrated with Google Analytics, Google Firebase, and Google BigQuery when aggregating customer insights.
Google Panda
The Google Panda algorithm, first rolled out in February 2011, demotes the rankings of sites with poor-quality content or sites attempting to manipulate Google's ranking system. Notably, Google Panda affects the ranking of the entire website, not just individual pages. In March 2013, Google made Panda an integral part of its core algorithm and announced that Panda updates would continue on a regular basis. Websites whose rankings have been affected by Google Panda can recover if they rectify the issues that caused them to be penalized in the first place.
Google Penguin
This is an algorithm first released by Google on April 24, 2012, whose major role is to detect and filter spammy web pages out of search results. Google Penguin targets sites using manipulative ranking techniques, primarily link schemes and keyword stuffing. The algorithm was updated four times before Google incorporated it into its core algorithm in September 2016. Once Penguin became part of the core algorithm, Google announced that it would assess websites in real time; previously, penalized sites had to wait until the next Penguin update was released.
Google Pigeon
Google Pigeon is an algorithm released in July 2014 with the aim of improving local search results. Before its release, searches would return different results on Google Search and Google Maps. Google Pigeon solved this problem and also boosted the rankings of local directories like OpenTable, TripAdvisor, and Yelp.
Google Play Store
This is Google’s online market for applications designed for smartphones and tablets running Android OS. The Google Play Store has its own search engine, which locates applications by category, keyword, popularity, and so on.
Possum is the nickname the local search community gave an algorithm Google released in September 2016. Its main purpose was to deal with the increasing spam in Google Maps and the Local Finder.
Google Posts is a feature that enables business owners to publish content directly to the Knowledge Panel of the business that shows on Google Maps and SERPs. These posts can include images, text, and CTA buttons.
Google Related Searches
These are lists of related or similar search queries that appear at the bottom of search engine results pages. Their main purpose is to help users who didn’t find their desired results to refine their search query so that it describes whatever they are searching for more accurately. The average number of related searches that appear on the SERPs is just under eight.
Allegedly, Google Sandbox is a virtual probation zone where Google places new sites to check whether they observe Google’s Quality Guidelines. It is said that sites in the Sandbox don’t rank well, even after they publish compelling content and receive a good number of backlinks. However, the existence of Google Sandbox remains one of the most disputed topics in the SEO industry.
Google Search Console
Formerly known as Webmaster Tools, this is a free online service that enables website owners to monitor the technical health of their sites and the search analytics that affect their search traffic.
This is an online tool that offers information on the comparative popularity of search terms. Results can be filtered by time period, country, interest category, and search type (image, news, Google web, Google Shopping, and YouTube). Google Trends allows users to compare the popularity of different search terms, with the data presented in two graphs (interest by region and interest over time). The tool also provides related queries for each search term.
Google Webmaster Guidelines
This is an online document found on Google’s support website that provides basic recommendations to help websites get found and ranked. The most important guidelines in the document are the quality guidelines, which outline practices Google considers illicit. Sites that violate any of Google’s quality guidelines can be de-indexed or have their rank demoted, depending on the severity of the violation.
This is Google’s web crawler bot, responsible for finding pages on the internet to update the index used by its search engine. As the bot crawls websites, it discovers new pages through sitemaps and links and adds them to its crawl queue. Googlebot’s crawl is guided by a sophisticated algorithm that decides which sites to crawl, at what frequency, and how many pages to fetch.
Also known as Generic Top-Level Domains, gTLDs are top-level domains managed by the IANA (Internet Assigned Numbers Authority). They were named “generic” to differentiate them from Country Code Top-Level Domains (ccTLDs). The initial gTLDs were .com, .org, .net, .gov, .int, and .edu.
This is a navigation icon consisting of three parallel horizontal lines, usually positioned at the top of a mobile application or website. Tapping or clicking the button opens a navigation menu with a list of items to choose from. The name derives from the icon’s resemblance to the three layers of a hamburger.
Head Tag (in HTML)
This is an element of an HTML file that contains metadata—data about data—as well as script calls. The head tag is inserted between the opening <HTML> tag and the <body> tag at the beginning of an HTML file. Although the metadata in the head tag is not displayed, the information is used by both search engines and browsers. From an SEO perspective, the most important tags that appear in the head define the page’s title, styles, description, robot instructions like nofollow and noindex, and social media sharing information.
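To illustrate, a minimal head section might look like this (the file name and text are placeholders, not from a real page):

```html
<!DOCTYPE html>
<html>
<head>
  <!-- Shown as the clickable headline in search results -->
  <title>Example Page Title</title>
  <!-- Often shown as the snippet under the title in search results -->
  <meta name="description" content="A short summary of the page content.">
  <!-- Styles referenced from the head -->
  <link rel="stylesheet" href="styles.css">
  <!-- Social media sharing information (Open Graph) -->
  <meta property="og:title" content="Example Page Title">
</head>
<body>...</body>
</html>
```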
This is an attribute, typically placed on link elements in the HTML head of a web page, that specifies alternate URLs carrying the same or similar content for other languages or regions. A link element of this type has the following syntax: <link rel="alternate" hreflang="en" href="http://en.example.com/" />, where "en" stipulates the language of the targeted link. The information in the hreflang attributes tells Google that a translated web page mirrors the original and is not plagiarized. As a result, it helps Google serve the URL in the language it was requested in. The attributes can also be supplied in a sitemap, or in HTTP headers for non-HTML files like PDFs.
HyperText Markup Language (HTML), developed by Tim Berners-Lee in the early 1990s, is the standard language used by web developers to build web pages on the internet. Once an HTML page is created and stored on a web server, web browsers can fetch it and render the page by reading and executing the HTML tags (instructions) in the document. Besides defining a web page’s structure, HTML markup was originally used to control the appearance, fonts, and layout of a page; however, the World Wide Web Consortium (W3C) has recommended that developers use CSS for this purpose since the beginning of 1997.
HTML headings are tags that define the hierarchy of the different sections on a page—the title, sections, and subsections—and they range from <H1> to <H6>. Typically, the <H1> tag represents the title of the page. For SEO, every page should have an H1 tag that includes the primary keyword targeted by the page’s content.
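For example, a page’s heading hierarchy might be marked up like this (the heading text is illustrative):

```html
<h1>SEO Glossary</h1>          <!-- page title: one per page -->
<h2>Google Algorithms</h2>     <!-- a major section -->
<h3>Google Panda</h3>          <!-- a subsection -->
<h2>Technical Terms</h2>       <!-- another major section -->
```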
This stands for HyperText Transfer Protocol, the underlying communication protocol used by web servers and browsers to transfer web pages and their associated files. Both the HTTP protocol and the HTML coding language were developed to support interconnectivity between web pages on the internet, as the name HyperText suggests.
These are the initial requests and responses passed between web servers and browsers. HTTP headers include information such as the requested page, the client browser, and the server type; they can also carry instructions telling a search engine not to index the requested page, or telling a web browser to redirect to another web page.
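As a sketch, Python’s standard library can parse a raw block of response headers; the header values below are made up for illustration:

```python
from email.parser import HeaderParser

# A raw HTTP response header block as a server might send it
# (status line omitted; the values are illustrative).
raw_headers = """Content-Type: text/html; charset=utf-8
Server: Apache
X-Robots-Tag: noindex
Location: https://www.example.com/new-page
"""

headers = HeaderParser().parsestr(raw_headers)

# "X-Robots-Tag: noindex" asks search engines not to index the page;
# "Location" is how a server redirects the browser to another URL.
print(headers["X-Robots-Tag"])  # noindex
print(headers["Location"])
```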
This is an encrypted protocol for transmitting HTML pages. HTTPS encrypts standard HTTP communications using the TLS (Transport Layer Security) protocol. There are different levels of encryption, and 256-bit is among the most secure. In 2014, Google announced that HTTPS would be one of the signals used by its ranking algorithm; therefore, assuming all other factors are equal, a page served over HTTPS will rank better than one that is not. The main benefit of HTTPS over plain HTTP is that it protects users from eavesdropping and man-in-the-middle attacks, which put financial and private information at risk.
Short for inline frame, an Iframe is an HTML tag that allows external web pages or widgets to be embedded and displayed inside a web page. Iframes are mostly used to embed Google Maps and YouTube videos, but they can be used in many other ways. The tag has several parameters—such as height, width, and frameborder—that modify its display attributes and how the embedded widget behaves.
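For example, a YouTube video is typically embedded with markup like this (the video ID is a placeholder):

```html
<!-- Embeds a YouTube player inside the current page -->
<iframe src="https://www.youtube.com/embed/VIDEO_ID"
        width="560" height="315"
        frameborder="0" allowfullscreen></iframe>
```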
This is a small image that appears on the left side of an organic search result. Typically, the image originates from the web page linked to from that result, and it gives users a visual preview of the content on that page. Image thumbnails are counted among organic SERP features, as they are attached to organic search results.
This is a row of images that appears in response to search queries where the searcher can benefit from a collection of images on a particular topic. Examples of search queries that can trigger this SERP feature include “sports cars,” “skyscrapers,” and “Golden Gate Bridge.”
Incognito mode, or private browsing, is a feature that allows users to open browser windows that do not save cookies, site data, or browsing history. Major browsers such as Google Chrome, Internet Explorer, and Firefox offer an incognito feature.
These are the pages of a site that a search engine has crawled, analyzed, and added to its database of web pages. Pages get indexed either because the site owner has requested that search engines index them, or because search engine bots discovered the pages through links pointing to them.
This is a hyperlink that links one web page to another page on the same site. According to Google, internal links are ranking signals, depending on the relevance of the destination page and the keyword appearing in the anchor text. Note, however, that internal links do not change the rank of the destination page relative to similar pages on other sites; rather, they signal to Google which page on the site is the authoritative source for the keyword. As with inbound links, Google doesn’t count internal links carrying the rel=nofollow attribute as ranking signals.
Also known as an internet protocol address, this is a unique series of four decimal numbers separated by periods, serving as a distinct address for a computer connected to the internet, to which web traffic can be routed to and from.
iTunes App Store
This is Apple’s online store for mobile applications for tablets and smartphones. App owners usually optimize the rank of their listings in order to drive more downloads and sales.
This is an object-oriented programming language used to develop client-server web applications, as well as Java applets that run in major web browsers. Java is somewhat unique in that it runs on any operating system and hardware, so long as a Java Virtual Machine is installed.
In an SEO context, a keyword is a word or short phrase that people search for on the internet. Keywords of one or two words are called short-tail keywords, whereas keywords of three or more words are known as long-tail keywords. Typically, the higher the monthly search volume for a keyword, the stiffer the competition to rank for it.
This is the situation where several pages on a site target the same keyword or keywords by using the keyword in their titles and content. The term cannibalization fits because, just as cannibals eat their own, pages targeting the same keywords cannibalize each other, preventing any of them from ranking for that keyword. Such cases make it hard for search engines to determine which page is authoritative for the keyword. Therefore, to optimize rankings, each page should concentrate on different keywords.
This is a metric that quantifies the number of times a keyword appears on a page as a percentage of that page’s total word count. Search engines examine the keyword density of a term when determining a page’s relevance for that keyword, and this relevance score is one of the factors search engine algorithms use when ranking a page for a specific keyword.
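The calculation itself is straightforward; here is a minimal sketch in Python (the sample sentence is invented):

```python
import re

def keyword_density(text, keyword):
    """Percentage of the page's words that match `keyword`."""
    # Split into lowercase word tokens (letters, digits, apostrophes).
    words = re.findall(r"[a-z0-9']+", text.lower())
    if not words:
        return 0.0
    return 100.0 * words.count(keyword.lower()) / len(words)

# "SEO" appears 3 times out of 12 words -> 25% density.
page = "SEO tools help with SEO audits and SEO reporting for any site."
print(keyword_density(page, "SEO"))  # 25.0
```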
This is a free tool in Google AdWords that allows advertisers to search for new keyword ideas and get statistics such as average cost per click and average monthly search volume. However, only AdWords accounts with a substantial number of ad campaigns running can see exact average monthly search volumes in the tool; if your account doesn’t have sufficient ad spend, you will see search volumes as ranges rather than exact figures.
Keyword research is the process of discovering and investigating the keywords with the highest potential to drive traffic to a site. These keywords should be relevant to the website’s topic and have sufficient search volume to be worth the time invested in ranking for them. There are several keyword research tools; the most common is Google’s Keyword Planner.
This is a blackhat SEO technique in which website owners artificially increase the number of times a keyword appears on a page in an attempt to boost the page’s rank. Major search engines discourage this practice and can penalize a website for it. One of the major reasons search engines stopped considering meta keywords in ranking is that site owners used to repeat keywords in the meta tags, and it is also why Google developed the Penguin algorithm to identify and penalize sites using this method.
This is a knowledge base launched by Google in May 2012 that contains large stores of structured data about places, events, objects, and people. Google describes the Knowledge Graph as an intelligent model—a graph that understands real-world entities and how they relate to one another: things, not strings. The Knowledge Graph stores pointers between objects, so Google understands how the facts it holds are connected. It is the source of all the information displayed in Google’s Knowledge Panel, as well as other SERP features.
The Knowledge Panel is a Google SERP feature that displays just above the organic search results on mobile devices and on the right side of the desktop SERP. It provides additional information about the topic of the search query, including a brief description, images, factoids, links related to the topic, Google Posts, reviews, and more.
Also known as a hyperlink, a link is a reference on a page which, when clicked (or tapped on mobile), opens the URL referenced by that link. To add a link, you wrap an HTML anchor tag containing the target URL around text or an image. The text inside the tag that the user clicks is referred to as the anchor text.
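In HTML, a link looks like this (the URL is a placeholder); the visible text “SEO glossary” is the anchor text:

```html
<a href="https://www.example.com/seo-glossary/">SEO glossary</a>
```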
This is dramatic, entertaining, or extraordinary content created with one intention—that it will make people link to the page that hosts it. The objective of attracting a large number of inbound links can be to boost a site’s rank on search engines as well as to increase referral traffic. Linkbait can benefit your site, especially when you create unusual, useful content that serves users; common examples include videos, online games, infographics, and case studies. Note, though, that link bait can also be used negatively—for example, repackaged content that lacks real added value.
This refers to the methods used to acquire backlinks from other sites to your own. There are two types of link building: whitehat and blackhat. Examples of genuine whitehat link building include promoting your content on social media platforms or contacting influencers and customers to let them know about your content. If your content is relevant, compelling, and of high quality, it will attract natural backlinks.
Blackhat link building, on the other hand, includes tactics like paying third-party sites for links or building your own sites for the sole purpose of creating backlinks.
This is a jargon term some SEO experts use for the NoFollow link attribute, which is added to a hyperlink’s code to indicate that linking to a certain site or page should not be taken as an endorsement by search engines.
A link exchange, also known as reciprocal linking, is when two sites agree to link to each other with the aim of boosting their page ranks. Note, however, that excessive link exchanges can negatively impact a site’s ranking in search results.
This is a site or a group of sites developed with the intention of promoting the PageRank of other sites by linking to them from many pages in the link farm. Google’s webmaster guidelines, like those of other search engines, prohibit link farms, which is why search engines penalize them by de-indexing. Sites that conspire with link farms to improve their rankings can also be de-indexed or have their rank lowered.
This is the practice of building many incoming links to one’s own site while refusing to give outbound links. Some site owners believe link hoarding maximizes their rankings, on the theory that outbound links drain their own pages’ rank. However, Google’s John Mueller has dismissed this approach, saying that outbound links to other people’s sites are not a ranking factor—but they can add value to your content.
Also known as Link Love, Link Juice is a term commonly used in the SEO industry when referring to the rank value that’s passed on to a target web page by a link from another site.
This refers to the quality and quantity of the links pointing to a site. The quality of a link is mostly determined by the authority level of the linking site and its relevance to the site it points to. Note that link popularity is a major factor influencing PageRank.
This is a large listing of local businesses that appear when you click on the “More Places” link found at the bottom of Google’s Local Pack. The Local Finder contains a list of several pages of local businesses that are relevant to a particular search query. In addition to this, you will also find a map of the local area, with a number of businesses appearing on the map.
This is a group of three local business listings that appears in response to search queries for products or services offered by local businesses. The Local Pack differs from the organic search results drawn from Google’s website index, as its listings come from the Google My Business (GMB) directory. The Local Pack also includes a map of the relevant area and a “More Places” link that opens the full list of local businesses in the Google Local Finder. The main ranking factors for the Local Pack include the business’s relevance to the search query, determined by its Google My Business listing and website; the distance of the business from the user making the query; and the business’s reputation, based on links to its site, Google reviews, local directory listings, and mentions in third-party reviews.
Local SEO focuses on improving the ranking of websites in organic search and of businesses in the Google Local Pack, and on making sure businesses have accurate, up-to-date listings in the major business directories such as Google My Business, YellowPages, LinkedIn, Facebook, and Yelp. Local SEO also helps businesses gather local reviews and citations, which are essential for ranking as well as for building brand reputation.
Long Tail Keyword
These are keyword phrases consisting of three or more words. The term “long tail” comes from the graph of search volume versus keyword length: typically, the shorter the keyword phrase, the higher its search volume, and conversely, the longer the phrase, the lower its search volume.
Machine learning is a branch of AI (Artificial Intelligence) that allows computers to learn and act without being explicitly programmed to do so. Its main goal is to create algorithms that enable computers to make data-driven decisions and predictions. For instance, Google’s RankBrain relies on machine learning to adjust rankings based on historical signals, while Google’s sentence-compression algorithms use machine learning to improve the extraction of content needed for Featured Snippets.
Manual Action Penalty
This is a disciplinary action taken by human reviewers at Google against sites that break Google’s webmaster quality guidelines. Penalties range from dropping a site’s rank to removing it from Google’s index so that it no longer appears in SERPs. The Manual Actions page in Google Search Console lists any manual actions taken against a site.
Meta descriptions are attributes in the header section of a web page that allow the website owner to describe the contents of that page. Search engines normally display the meta description alongside the page’s title as the page’s entry in their search results. Google limits the length of meta descriptions by capping the width—in pixels—of every line, as well as the total number of lines.
These are the lists of terms that appear in the meta keywords element in the HTML header section of a web page. Meta keywords were originally used by search engines as signals to determine which keywords were relevant to a page. However, they were dropped as ranking signals after search engines discovered that site owners stuffed them with keywords unrelated to the page’s content. Matt Cutts announced that meta keywords stopped affecting rankings in Google’s algorithm as of September 2009.
Meta tags are the structured data elements that appear in the header section of web pages. The meta tags are structured since they are standardized tags used to define the specific attributes relating to the web page. Some of the meta tags that are commonly used include the description and title tags since, without these tags, the listing of a web page on search engines would not be complete.
This is the data that describes or gives information about other data. When it comes to meta tags, the data in the tags give information about the content found on a certain page.
This is a copy of a site hosted on another server. It can use the same URL as the main site, a subdomain, or an entirely different domain. Mirror sites are mostly used to distribute traffic load across servers and to locate servers closer to users, thus reducing latency. If a subdomain or a different domain is used for the same content, canonical tags must be used to prevent the duplicate content from becoming a negative ranking factor for the main domain.
This is the process of making sure website content displays properly on the various screen sizes of mobile devices. The process also ensures that the overall user experience—which includes usability, page speed, and images, among other things—is optimized.
This term indicates that Google uses the mobile version, rather than the desktop version, of web pages to rank and index URLs. The main purpose of mobile-first indexing, according to Google, is to help mobile users find what they are searching for. Unresponsive sites, and sites whose desktop and mobile versions show different content, are most likely to see their rankings affected by the migration to mobile-first indexing.
Natural Language Processing (NLP)
NLP is a branch of artificial intelligence and computer science that works to help computers process and analyze human language. Natural Language Processing is expected to make computers better at natural language understanding, natural language generation, and speech recognition.
A NoFollow attribute, which can be included in a link tag (a href), tells search engine robots not to consider the link a recommendation that should boost the rank of the target page. The attribute doesn’t strictly command the search engine never to follow the link—it is only a hint—and most SEO experts have concluded that search engines can and do follow NoFollow links.
This is a meta tag that instructs search engines not to include certain pages in their search results. You can place the Noindex tag in the <head> section of a web page’s HTML, or in an HTTP header returned by the web server. If a page is already indexed by Google and a Noindex tag is then added, the page is dropped from Google’s search results the next time Google recrawls it.
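Placed in a page’s head section, the tag looks like this; it asks search engines to keep the page out of their results:

```html
<head>
  <meta name="robots" content="noindex">
</head>
```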
Online Reputation Management
Also known as ORM, online reputation management is the field that aims to positively influence the online reputation of a person, product, service, or organization. It includes optimizing the organization’s site and social media accounts for keywords related to the brand using SEO, and proactively working to secure positive mentions on online review sites and social media.
The correct definition of organic links has two opposing views. According to some leading SEO experts, organic links are entirely unsought backlinks that a site receives from other sites. Other experts hold that organic links can include solicited links, so long as no compensation was given to receive them. Either way, the links ought to be natural for both the receiving site and the linking site; that is, there should be proper relevance and context to justify them.
Organic Search Results
These are the results displayed according to their relevance to a particular search query, as determined by a search engine’s algorithm—as opposed to paid search results or the additional SERP features that can appear on the SERPs.
Also known as organic search traffic, organic traffic refers to site visits that come from users clicking on unpaid search results. Other sources of traffic include direct traffic, referral traffic from other sites, and paid (PPC) traffic. SEO aims to increase organic traffic by optimizing a site’s content and ensuring it meets the technical standards set by the search engines.
PageRank (PR) is a score that Google assigns a web page based on the quality and quantity of backlinks pointing to it from other sites. The quality of an incoming link depends on the PageRank and relevance of the domain or site providing the link, among other factors. Previously, Google let people view the PageRank of any page by installing its toolbar, but PageRank has not been publicly visible since mid-2016.
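The idea behind the original algorithm can be sketched with a toy power-iteration implementation; this is a simplified illustration with an invented three-page site, not Google’s actual code:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy PageRank: `links` maps each page to the pages it links to."""
    pages = list(links)
    # Every page starts with an equal share of rank.
    rank = {page: 1.0 / len(pages) for page in pages}
    for _ in range(iterations):
        # Base rank every page receives regardless of links.
        new_rank = {page: (1.0 - damping) / len(pages) for page in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue
            # A page splits its rank evenly among the pages it links to.
            share = rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

# A and C both link to B, so B accumulates the highest score.
ranks = pagerank({"A": ["B"], "B": ["C"], "C": ["B"]})
print(max(ranks, key=ranks.get))  # B
```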
This refers to the loading speed of a web page together with its elements. Page speed is a major factor search engines use when determining a page’s rank: all else being equal, slower pages normally rank lower than pages that load faster. You can use various free online tools to check your page speed.
Also known as impressions, page views are a metric used to measure the amount of traffic flowing to a site. The total number of page views represents the total number of times the site’s pages were viewed by visitors within a given period. However, the metric gives no indication of unique visitors, since each user can view several pages.
People Also Ask
This is a SERP feature comprising a group of four questions directly related to the original search query. The questions appear in a table; when you click one, its row expands to show a text snippet resembling a Featured Snippet, and another suggested question is added to the table at the same time.
This is a Google mobile SERP feature that displays a telephone icon together with the relevant business’s phone number in particular search results. Tapping the phone number opens your default phone app with the number already entered and ready to dial.
Pay Per Click (PPC)
PPC refers to the most common form of online marketing, where advertisers pay every time a user clicks on their text ad or banner. This form of advertising is a method of increasing web traffic to a site, other than the traffic that comes from different sources like a referral, direct, or organic traffic. The most common PPC advertising platforms are found on search engines like Bing Ads and Google AdWords, on social media platforms like Twitter and Facebook, as well as content recommendation platforms such as Taboola or Outbrain.
Also known as the answer box or direct answer, the Quick Answer is a Google SERP feature that appears at the top of the results page in response to explicit or implied queries for information that can be answered briefly using publicly available information. Quick Answers can be broken into the following categories: Dictionary, Weather, Unit Conversions, Sports, Quick Facts, and Calculations.
This is a machine learning AI system that interprets search queries to understand user intent and then serve the appropriate search results. RankBrain can handle words or phrases it has never encountered before by predicting which words or phrases have a similar meaning, based on its analysis of historical search data.
There are different types of URLs redirects that a webmaster can use for URLs that are either permanently or temporarily unavailable. These include:
- 301 redirect – indicates that the original page content has been moved permanently to a different URL.
- 302 redirect – indicates that the content has been moved temporarily.
Both 301 and 302 redirects are implemented by the web server. There is also an on-page technique for redirecting URLs, known as a meta refresh, but it introduces extra latency and is therefore regarded as a poor user experience.
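On an Apache server, for instance, both redirect types can be set up in an .htaccess file using mod_alias (the paths and domain are placeholders):

```apache
# 301: permanent move; passes ranking signals to the new URL
Redirect 301 /old-page https://www.example.com/new-page

# 302: temporary move; search engines keep the original URL indexed
Redirect 302 /sale https://www.example.com/holiday-sale
```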
This is data transmitted by a browser when a user navigates from one site to another; it tells the destination web server which site was visited previously. The information in the referrer string is usually recorded in the web server’s log files, as well as by analytics systems like Google Analytics, and is mostly used to analyze referral trends.
Responsive Web Design
Responsive web design is an approach to designing and programming web pages that optimizes them for the screen size and orientation of the user’s device.
Also known as stars, the Reviews SERP feature shows the total number of reviews and the average rating in search results for pages that allow users to rate products or services. The feature appears just below the page’s URL and, in addition to the numerical data, displays a graphic representation of the average rating on a five-star scale.
Rich cards are a mobile search result format that presents a carousel of results in card form—white rectangles against the SERP’s grey background. A rich card’s results include a title, reviews (if any), an image, and other information such as calories and preparation time (for recipes). For content to appear in rich cards, sites must implement schema.org structured markup on their pages.
Robots.txt
This is a text file placed in a site's root directory that tells search engine bots which pages they may crawl and which they may not (note that blocking crawling does not by itself keep a page out of the index). The robots.txt file has three elements: User-agent, Allow, and Disallow, and its syntax is simple.
The User-agent element defines which robot the Allow and Disallow rules apply to. Google has several user-agents, including Googlebot, which handles Google Search crawling, and Googlebot-Image, which handles Google Image Search crawling. All user-agents can be addressed at once with an asterisk (wildcard), like this: User-agent: *
The Allow element can be used to permit access to a subdirectory under a parent directory that has been disallowed.
The order and syntax of the robots.txt file look like this:
- User-agent: [name of the robot the instruction applies to]
- Allow: [the URL path of a subdirectory, within a blocked parent directory, that should be unblocked]
- Disallow: [the URL path that needs to be blocked]
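Python's standard library can evaluate these rules, which makes it easy to check how a crawler would interpret a robots.txt file. The file contents below are illustrative, not a real site's rules:

```python
from urllib.robotparser import RobotFileParser

# An illustrative robots.txt: block /private/ but unblock one report inside it.
rules = """\
User-agent: *
Allow: /private/public-report.html
Disallow: /private/
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

rp.can_fetch("*", "https://example.com/private/secret.html")         # False
rp.can_fetch("*", "https://example.com/private/public-report.html")  # True
rp.can_fetch("*", "https://example.com/index.html")                  # True
```

Note that Python's parser applies the first matching rule, which is why the Allow line is listed before the Disallow line here.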
Return on Investment (ROI)
Return on Investment (ROI) is a metric used to measure the efficiency of an investment, expressed as a percentage. ROI is calculated by dividing the net return (the value of your goal achievements minus the amount invested) by the total amount of money you invested to achieve those goals.
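The calculation can be sketched in a couple of lines; the figures below are made up for illustration:

```python
def roi_percent(gain, cost):
    """Net return divided by cost, expressed as a percentage."""
    return (gain - cost) / cost * 100

# Hypothetical campaign: $1,000 invested, $1,500 in goal value generated
roi_percent(1500, 1000)  # 50.0 (percent)
```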
RSS Feed
Also known as Really Simple Syndication or Rich Site Summary, an RSS feed is an XML file that details the content on a website (mostly news stories and blog posts). You can monitor updates from different sites by subscribing to their RSS feeds through a news aggregator program.
Service Area Business (SAB)
SABs are professionals and tradesmen, such as locksmiths, electricians, and plumbers, who travel to their clients to offer services but don't have business premises. Google has been working to prevent SABs from creating spammy and duplicate entries in its GMB directory by requiring these businesses to go through an offline vetting process.
Searchbox
This is a Google SERP feature that lets users search within a target site directly from the search engine results page, using the search box that appears just below a result's description. For some sites, such a search shows the results on Google, while for others it takes you to the search results on the target site itself.
Search Operators
These are single characters or strings of characters embedded in a search engine query that enable advanced filtering or give users access to extra information. Search operators allow users to narrow the focus of their search.
SEM
Also known as search engine marketing, SEM is the part of online marketing that drives traffic to sites by boosting their visibility on search engines through search engine optimization and paid search such as PPC.
SEO Friendly URL
SEO-friendly URLs are web addresses in which the domain and the slash are followed by a few words, usually separated by hyphens or underscores, that briefly describe the content of the page. A good example of an SEO-friendly URL is https://www.example.com/seo-best-practices-2019/. Non-SEO-friendly URLs have just a collection of numbers and parameters following the slash. SEO-friendly URLs usually resemble the title of the page and contain keywords used in the content on the page. These URLs are beneficial for both search engines and internet users.
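As an illustration of how such URLs are typically produced, here is a small, hypothetical slug-generating helper in Python (not any particular CMS's implementation):

```python
import re

def slugify(title):
    """Turn a page title into an SEO-friendly URL slug (illustrative only)."""
    slug = title.lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)  # runs of non-alphanumerics -> hyphen
    return slug.strip("-")                   # trim stray leading/trailing hyphens

slugify("SEO Best Practices 2019")  # 'seo-best-practices-2019'
```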
SEO Toolbars
These are extensions for Google Chrome and other web browsers that let users access SEO metrics and information about the pages they visit, or about the sites listed on SERPs. The information shown by different toolbars includes on-page SEO metrics such as the number of ranked pages, keyword density, and other relevant data.
SERP
Also known as the search engine results page, a SERP is the page of search results that a search engine displays for a certain keyword or search phrase. In addition to organic search results, SERPs can also display ads, related searches, and other items related to the search phrase.
SERP Features
These are the different elements that appear alongside the organic search results on SERPs. Some of these features, like image thumbnails and sitelinks, appear together with individual search listings, while others, like Quick Answers and the Knowledge Panel, are distinct, stand-alone elements.
Server Log Files
These are files located on a server that store data about web page requests. Web servers create server logs and keep them updated. The data stored in log files includes the time of each request, the page requested, the visitor's IP address, errors, and referrers, among other details. Before online analytics services existed, site owners relied on programs that aggregated log file data to create web traffic reports. Log files also serve as forensic tools in investigating hack attacks against sites.
Similar Link (on Google)
This is another SERP feature, found on the right side of a particular search result's URL, that links to another SERP listing web pages resembling the current one. You can access this feature by clicking the green down arrow next to the URL and then selecting "Similar" in the menu.
Sitelinks
These are the additional links that appear just below some search results on Google. Sitelinks, which usually number between 4 and 6, point to other pages on that site. According to Google, the main purpose of these links is to help users navigate sites and quickly find what they are searching for. Google's algorithms generate sitelinks automatically, but the search engine recommends that webmasters check whether their navigational links are compact to avoid repetition. Websites with sitelinks have a higher CTR, which is why many sites try to optimize their navigational links for this SERP feature.
Sitemap
This is a listing of a site's publicly accessible pages, usually arranged in hierarchical order. The most common use of a sitemap is to create an XML file and place it in the website's root directory to help search engine crawlers find all the web pages.
SMM
This is an acronym for social media marketing, which is the use of social media platforms to market or promote products and services. SMM includes promotional events, paid advertising, contests, and other marketing initiatives that use social media platforms to target an audience.
Social Bookmarking
Through social bookmarking, users can discuss, share, and rate web pages with the aim of creating online communities of people with similar interests. There are many social bookmarking services; three common ones are Digg, StumbleUpon, and Reddit.
Soft 404 Error
Soft 404 errors happen when a URL cannot be found on a web server, but the server returns a 200-level (success) status code to the browser rather than a 404 error or a 301 redirect to a different URL. These errors are a big problem for search engines, since the success code can cause pages that don't exist to be listed in search results.
Spamdexing
This SEO term combines the words "spam" and "indexing" and refers to the manipulation of search engine indexing through dubious methods.
Spider Trap
This is a set of web pages that trap web crawlers in an endless loop, consuming the spider's resources or making it crash; it can be intentional or unintentional.
Structured Data
Structured data is on-page markup code added to a page's HTML to define the different content elements on a web page. Google uses this markup to identify and display relevant search content, like recipe ingredients, cooking temperature, and cooking time, in SERP features such as rich cards.
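A common way to add such markup is a JSON-LD block in the page's HTML. The snippet below is a purely illustrative schema.org Recipe example (the recipe details are made up):

```json
{
  "@context": "https://schema.org",
  "@type": "Recipe",
  "name": "Simple Pancakes",
  "cookTime": "PT15M",
  "recipeIngredient": ["2 eggs", "1 cup flour", "1 cup milk"]
}
```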
Structured Data Markup Helper
The Structured Data Markup Helper is a Google tool that enables webmasters to create structured data markup code to embed in their pages. Today, certain WordPress plugins make the process of adding structured data markup to a site pretty simple.
This is a SERP feature that’s attached to distinct search results that display facts and specifications extracted from target landing pages. Structured snippets usually show product specifications, details about music and events, as well as information about movies like film runtime, genre, and rating, among others.
Subdomain
This is an additional level of specification that can be added before a second-level domain (SLD). Subdomains are mostly used to direct traffic to certain sections of a site, and they can also point users to a different IP address than the second-level domain's. Many SEO experts believe that Google considers a domain and its subdomain to be entirely different things when it comes to PageRank.
TF*IDF (Term Frequency-Inverse Document Frequency)
TF*IDF is an information retrieval technique used to measure the relative importance of a particular keyword to a certain web page, based on how frequently the keyword appears on that page compared with how frequently it appears across a group of similar web pages.
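A plain version of the formula can be sketched in Python. The weighting variants that real search systems use differ, so treat this as the textbook definition only:

```python
import math

def tf_idf(term_count, doc_length, num_docs, docs_with_term):
    """Textbook TF-IDF: term frequency times inverse document frequency."""
    tf = term_count / doc_length               # how dense the term is on the page
    idf = math.log(num_docs / docs_with_term)  # how rare the term is in the corpus
    return tf * idf

# A term used 5 times in a 100-word page, found in 10 of 1,000 documents
tf_idf(5, 100, 1000, 10)  # ~0.23
```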
Thing to Do/Top Sights
This is a Google search engine results page feature that displays a box with 3 or 4 places or attractions to see in the area mentioned in the search. The feature is mostly triggered by queries that mention certain locations or cities. On mobile devices, tapping a particular attraction opens that attraction's Knowledge Panel, while on desktop it takes you to the Local Finder, which displays a map of the area as well as the Knowledge Panel for the place. Clicking the "More Sights" link opens the Top Sights tab for that location in Google's Travel Guide.
Time on Page
This is the total amount of time that a visitor spends on a particular web page before leaving it, either by visiting another web page or by closing the browser window.
Title Tag
Also known as the Page Title, this is an HTML tag placed in the header section of a web page to define the page's title. When search engine spiders crawl the page, they collect the title and display it, alongside the page description, on their SERPs. In the majority of web browsers, the title also appears on the tab showing the page, which helps users identify the pages they have opened in their browser.
Top-Level Domain (TLD)
These are the domains that appear to the right of the far-right dot of a URL. There are several categories of top-level domains, the common ones being the generic .com, .org, and .net, and the ccTLDs .uk, .au, .us, and .de, among others.
Top Stories
Also known as the News Box, Top Stories is a SERP feature that displays news stories relating to a certain search query.
Twitter Box
This is a SERP feature, displayed mostly for movies, brands, musicians, and other celebrities, that appears directly under the search result for the brand's website. The feature links to the brand's Twitter feed and provides a carousel of the brand's most recent tweets.
URIs (Uniform Resource Identifiers)
A URI is a sequence of characters that identifies a specific resource on the internet. URIs are similar to URLs, but URLs additionally provide information about how to navigate to the resource, such as via the FTP or HTTP protocol.
URL (Uniform Resource Locator)
URLs are URIs that not only identify resources on the internet but also specify how to reach them. An example of a URI is images.twitter.com, while the URL for that URI could be http://images.twitter.com; the latter indicates that the resource can be reached via the HTTP protocol.
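Python's standard library can split a URL into these pieces, which makes the distinction concrete (the path below is hypothetical):

```python
from urllib.parse import urlparse

parts = urlparse("http://images.twitter.com/logos/bird.png")
parts.scheme  # 'http' -- how to reach the resource
parts.netloc  # 'images.twitter.com' -- where the resource lives
parts.path    # '/logos/bird.png' -- which resource is requested
```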
URL Parameter
A URL parameter is an element added to a base URL, used to serve a particular variation of a landing page or to track traffic from the particular source that was given the URL with parameters. The question mark at the end of the base URL shows that the elements that follow are parameters and not part of the base URL. Here is an example of a URL with a parameter: https://example.com/url/parameter-base?source=google.
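Parameters after the question mark can be extracted programmatically, for instance with Python's urllib.parse (the URL below is an illustrative example):

```python
from urllib.parse import urlparse, parse_qs

url = "https://example.com/url/parameter-base?source=google"
query = parse_qs(urlparse(url).query)  # everything after the '?'
query  # {'source': ['google']}
```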
UGC (User Generated Content)
UGC is any form of content created by the visitors of an app or website. The idea of developing web apps around user-generated content gained popularity in the early 2000s, a concept known as Web 2.0. The advent of social media has made user-generated content explode.
Video Thumbnails
This is a SERP feature, appearing on the left side of individual search results, that offers video clips related to the user's search query. The thumbnails can point either to videos on YouTube, in which case clicking the thumbnail opens the clip on YouTube, or to web pages hosting their own videos, in which case the link opens the web page with the embedded video player.
Web 2.0
Web 2.0 is a term that gained popularity in the early 2000s, referring to sites that allowed user-generated content, like comments and forums, and that focused on developing easy-to-use user interfaces. The Web 2.0 concept later led to the development of blogs and social media platforms like YouTube, MySpace, and later Facebook, Instagram, Twitter, and other related platforms.
Web Scraping
This is a technique for collecting data from web pages, mostly carried out by a bot. Scraping usually involves downloading (fetching) a web page, analyzing its elements to extract the desired data, and then saving that information in a database for later analysis or reuse. Search engines use web scraping when indexing sites, and it can also be used to monitor online price variations or to mine online data.
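The parsing step can be sketched with nothing but Python's standard library. The fetch step is omitted here, and the HTML snippet is made up for illustration:

```python
from html.parser import HTMLParser

class TitleExtractor(HTMLParser):
    """Collects the text inside a page's <title> tag."""

    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

html = "<html><head><title>SEO Glossary</title></head><body></body></html>"
parser = TitleExtractor()
parser.feed(html)
parser.title  # 'SEO Glossary'
```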
White Hat SEO
This is the practice of optimizing content, creating organic backlinks, and applying technical best practices to boost a website's rank while following the rules laid down by search engines. Content optimization includes identifying the target audience, doing keyword research, and creating excellent content that meets the audience's needs. Link building efforts, on the other hand, include making bloggers, influencers, and website owners aware of the site's unique content so that they are willing to create backlinks. Technical aspects of White Hat SEO include boosting page speed and ensuring that hreflang and canonical tags are in place, among other practices.
WHOIS
This is a directory of registered domains. WHOIS directories can be used to look up details about a certain domain: the name of the domain owner, the dates when the domain was originally registered and last renewed, the owner's contact information, and more. Most domain registrars provide a paid service that keeps the name and contact information of the domain owner private.