Mark van Berkel - Schema App CTO
https://www.schemaapp.com/author/vberkel/
End-to-End Schema Markup and Knowledge Graph Solution for Enterprise SEO Teams.
Tue, 13 Aug 2024 18:38:49 +0000

Knowledge Graphs: The Value of Schema Markup Beyond Rich Results
https://www.schemaapp.com/schema-markup/knowledge-graphs-value-of-schema-markup-beyond-rich-results/
Wed, 11 Oct 2023 17:42:54 +0000

The post Knowledge Graphs: The Value of Schema Markup Beyond Rich Results appeared first on Schema App Solutions.

For years, SEOs have primarily associated Schema Markup with its ability to enhance the visibility of web pages on search engine results pages (SERPs), by enabling rich results that capture users’ attention.

However, it’s important to recognize that while rich results are a nice benefit of Schema Markup, they don’t fully capture its true value.

The real value of Schema Markup lies in its capacity to provide search engines with a deeper, more semantic understanding of your website’s content. When implemented correctly, Schema Markup allows you to develop your content knowledge graph and take better control of how your content appears in search.

This article will explore how Schema Markup enhances website visibility and search engines’ understanding of your content through robust knowledge graphs, which in turn helps your content surface for relevant queries with greater accuracy and helpfulness to the user.

Why Rich Results Are Not Enough

Measuring the return on investment from your SEO efforts can be tough. That’s why many SEOs like implementing Schema Markup: the ROI of their efforts can be measured easily through the performance of rich results.

However, implementing Schema Markup solely for the purpose of achieving rich results can be risky due to their ever-changing criteria and eligibility.

Rich Result Volatility

Over the past few years, we’ve seen the performance of rich results fluctuate based on Google’s algorithm changes. This year, Google has also made substantial changes to the rich results available on the SERP and the criteria for achieving certain rich results.

They’ve ceased awarding video rich results to pages that lack video as their primary content and deprecated How-to rich results entirely from the SERP. Similarly, FAQ rich results have been curtailed for most websites, now reserved only for authoritative government and health websites.

These volatile fluctuations and changes can be unsettling for businesses and SEOs who have come to rely heavily on rich results to drive traffic and engagement.

The True Purpose of Schema Markup

While rich results offer visual enhancements and additional SERP information, they play a secondary role to Schema Markup’s core objective.

The main purpose of Schema Markup is to enable search engines to clearly understand and contextualize the content on a page. That way, search engines can better match the content on a page to the searcher’s query, and provide more accurate search results.

Think of Schema Markup as a tool to assist search engines in content comprehension, with rich results being a bonus feature for publishers using specific markups.

By structuring your content with Schema Markup, you’re not just chasing rich results; you’re preparing your content for the future of AI-driven search.

What Else Can You Do With Schema Markup?

By now it’s been made clear that Schema Markup has much greater potential than most have given it credit for. Let’s dive into some of the powerful ways Schema Markup can drive results for your organization and keep you competitive in search as it continues to evolve.

Integrate Your Schema Markup

Once implemented, you can also seamlessly integrate your Schema Markup with other external data sources. This flexibility enables you to provide richer, more comprehensive data experiences in the applications and platforms your business chooses to integrate with.

In addition to integrating it with external data sources, you can also integrate your Schema Markup with internal tools, platforms, or systems. This allows for a more cohesive data management strategy within your organization.

Your Schema Markup can be integrated using APIs or Linked Open Data. For example, an e-commerce website might integrate Schema Markup with their inventory management system via APIs. This would allow the product details (like price, availability, and ratings) to be dynamically updated in real-time based on the Schema Markup.
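As a rough sketch of the e-commerce case, the function below maps an inventory record onto Product markup so price and availability stay in sync. The record fields, currency, and inventory API are all invented for illustration, not a real integration:

```python
import json

# Sketch: regenerating Product markup from an inventory record so price and
# availability stay in sync. Field names and values are hypothetical.
def build_product_markup(record: dict) -> dict:
    return {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": record["title"],
        "offers": {
            "@type": "Offer",
            "price": str(record["price"]),
            "priceCurrency": "USD",
            "availability": ("https://schema.org/InStock"
                             if record["stock"] > 0
                             else "https://schema.org/OutOfStock"),
        },
    }

# A record as it might come back from the inventory system's API:
record = {"title": "Widget", "price": 19.99, "stock": 3}
print(json.dumps(build_product_markup(record), indent=2))
```

Regenerating the markup whenever the inventory record changes is what keeps the Schema Markup from drifting out of date.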

Another example is integrating through Linked Open Data. A cultural institution, like a museum, might use Schema Markup to describe their exhibits and then integrate this information with global datasets like Wikidata. This would help in providing richer context about the exhibits and potentially drive more visitors.

Reuse Your Schema Markup

Your Schema Markup can be reused in various scenarios. One prime example is with our WordPress plugin feature. By appending ?format=application/ld+json to URLs, you can retrieve the schema for a particular page. This facilitates:

  • Mobile Apps: Developers could pull this Schema Markup to display rich content snippets in a mobile app about the company’s services or products.
  • Chatbots: Businesses could leverage the schema to answer user queries more accurately, providing detailed information pulled directly from the website.
  • Partner Websites: If a business has partnerships with other websites or platforms, they can share the Schema Markup, ensuring consistent and updated information across platforms.
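A minimal sketch of how a consumer might use that plugin feature: build the `?format=application/ld+json` URL, fetch it, and parse the JSON-LD. The page URL and the simulated response body below are hypothetical; a real client would perform an HTTP GET against the live site.

```python
import json
from urllib.parse import urlencode, urlparse, urlunparse

def schema_url(page_url: str) -> str:
    """Append the plugin's ?format=application/ld+json parameter to a page URL."""
    parts = urlparse(page_url)
    extra = urlencode({"format": "application/ld+json"})
    query = parts.query + ("&" if parts.query else "") + extra
    return urlunparse(parts._replace(query=query))

# Hypothetical page on a site running the plugin:
url = schema_url("https://example.com/services/")

# A mobile app, chatbot, or partner site would GET that URL and parse the
# JSON-LD it returns. Simulated response body:
body = '{"@context": "https://schema.org", "@type": "Service", "name": "Consulting"}'
schema = json.loads(body)
print(schema["@type"])  # Service
```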

Build Your Knowledge Graph

A knowledge graph is a collection of relationships between entities, defined using a standardized vocabulary, from which new knowledge can be gained through inferencing.

For additional clarity, an entity is a thing that has specific attributes. For example, your postal address is a thing that can be described by the country, region, postal code and street address.

When you implement Schema Markup on your site, you are essentially using the Schema.org Types and properties to describe the entities on your site. Each entity is then identifiable through a Uniform Resource Identifier (URI), so it can be connected to other items in your graph.

You can develop a knowledge graph by using the Schema.org vocabulary to connect the entities on your site to other entities on your site and other external authoritative knowledge bases like Wikidata or Wikipedia. By doing so, you are establishing your entity and defining how it connects to other things that exist in the world.
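As an illustrative sketch of these ideas, here is an Organization entity with a nested PostalAddress, identified by a URI and linked out to an external knowledge base. Every name, URI, and the Wikidata identifier below is an invented placeholder, shown as a Python dict for readability:

```python
import json

# Illustrative only: the organization, its @id URI, and the Wikidata link
# are invented placeholders, not real identifiers.
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "@id": "https://example.com/#organization",  # URI identifying this entity
    "name": "Example Co.",
    "address": {  # a nested entity with its own attributes
        "@type": "PostalAddress",
        "addressCountry": "CA",
        "addressRegion": "ON",
        "postalCode": "N1H 0A0",
        "streetAddress": "1 Example St.",
    },
    # Connecting the entity to an external authoritative knowledge base:
    "sameAs": ["https://www.wikidata.org/entity/Q00000000"],  # placeholder ID
}
print(json.dumps(org, indent=2))
```

Because each entity carries an `@id`, other pages on the site can reference the same organization instead of redefining it, which is what turns per-page markup into a graph.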

Download our guide to learn how to connect the entities on your site using Schema Markup.

What Makes Knowledge Graphs So Valuable?

At Schema App, we leverage Schema Markup to enable you to present your data in the form of a semantic knowledge graph, but the real magic lies in how you choose to use this connected data.

Your knowledge graph is a versatile resource that opens up a world of possibilities tailored to your specific business objectives.

For instance, you can harness the power of SPARQL Queries to extract precise data and information from your knowledge graph. This capability enables tasks such as generating insightful reports, counting the number of pages related to a particular topic, or tracking external entities linked to your Schema Markup.
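As a toy illustration of the "count pages about a topic" report, the snippet below applies the same filter a SPARQL COUNT query would, over an in-memory list of triples. The page URLs, prefixes, and topic IDs are hypothetical:

```python
# Toy triples standing in for a knowledge graph; all URIs are illustrative.
triples = [
    ("https://example.com/blog/a", "schema:about", "wd:Q000001"),
    ("https://example.com/blog/b", "schema:about", "wd:Q000001"),
    ("https://example.com/blog/c", "schema:about", "wd:Q000002"),
]

# Roughly equivalent SPARQL against a real endpoint:
#   SELECT (COUNT(?page) AS ?n) WHERE { ?page schema:about wd:Q000001 . }
pages_about_topic = [s for (s, p, o) in triples
                     if p == "schema:about" and o == "wd:Q000001"]
print(len(pages_about_topic))  # 2
```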

These reports not only offer valuable insights but also serve as a foundation for identifying content gaps within your domain. By analyzing your existing content against your knowledge graph, you can determine which topics are well-covered and which areas require further exploration.

This strategy helps you build your authority by pinpointing opportunities for content expansion.

Enhance User Experience with Better Content-Query Alignment

When left to their own devices, search engines rely on natural language processing to parse the information on a site, which can lead to inaccuracies. When the information on your site is organized in a structured knowledge graph using the schema.org vocabulary, it makes it easier for search engines to understand and contextualize your site content.

This leads to more precise matches between your content and search queries, ultimately improving user experience and the quality of traffic you are getting to your site.

Our Customer Success team has even experimented with linking entities on a page to external authoritative knowledge bases like Wikidata and Google’s knowledge graph. This approach has yielded positive results, increasing click-through rates for queries related to those entities.

While it might not necessarily boost the visibility of your pages like a rich result, it does ensure that the clicks are from users who are genuinely interested in your content.

Integrate Your Knowledge Graph

Your knowledge graph can also seamlessly integrate into your workflow, serving as a backbone for various tools and applications.

At Schema App, for instance, our Editor tool relies on the knowledge graph to provide a comprehensive experience. All of the information in that interface is part of our knowledge graph. Any changes made to data items in our tool directly impact and update the knowledge graph.

Additionally, you can leverage your content knowledge graph to build custom web applications. This is accomplished by providing data for new apps and enabling developers to create user interfaces that utilize the wealth of information within your knowledge graph.

Ground and Train Your Internal LLMs

In the realm of AI search engines, one significant challenge is the potential for incorrect inferences leading to hallucinations. Hallucinations occur when Large Language Models (LLMs) make up false information that is not based on real data.

You have the power to mitigate this major risk by using your knowledge graph as a control point to define your content more precisely to AI search engines. 

Although major search engines have yet to officially confirm this, there’s potential to train AI search engines to provide more accurate results by grounding their understanding with your knowledge graph.

Another interesting use case for knowledge graphs is that you can reuse them to train your own internal LLMs. An example of this is the use of AI chatbots on your site to address common customer queries. 

Grounding your LLMs with a knowledge graph enhances the performance of customer queries. It also ensures the accuracy of the information provided, since the LLM is restricted to the statements (RDF triples) expressed in your knowledge graph. 

You can clearly define entities in your content knowledge graph to ground it with factual and accurate information about your organization.

Learn the fundamentals of Content Knowledge Graphs and actionable steps to develop your own using Schema Markup.

Leveraging the True Power of Schema Markup

As search engines become more sophisticated and semantic, they attempt to grasp the nuances of human language, meaning and intention.

Schema Markup serves as a bridge between your content and these semantic search engines. It enables your content to be interpreted more accurately, leading to improved relevance in search results.

While rich results undoubtedly hold distinctive value and can elevate your content’s visibility, they should be seen as a bonus rather than the sole objective of Schema Markup.

Schema Markup’s true value lies in its ability to help search engines understand your content’s context and intent. When you implement Schema Markup with machine comprehension in mind, you not only enhance your chances of securing rich results but also ensure your content remains resilient and relevant in an ever-changing search landscape.

Looking to develop your very own marketing knowledge graph through the power of Schema Markup?

Get started today to learn about our solution.

How to Leverage Your Content Knowledge Graph for LLMs Like ChatGPT
https://www.schemaapp.com/schema-markup/how-to-leverage-your-content-knowledge-graph-for-llms-like-chatgpt/
Tue, 04 Jul 2023 16:59:54 +0000

The post How to Leverage Your Content Knowledge Graph for LLMs Like ChatGPT appeared first on Schema App Solutions.

It’s no secret that the AI revolution is well underway. According to a report by Accenture, 42% of companies want to make a large investment in ChatGPT in 2023.

Most organizations are trying to stay competitive by embracing the AI changes in the market and identifying ways to leverage “off-the-shelf” Large Language Models (LLMs) to optimize tasks and automate business processes.

However, as the adoption of generative AI accelerates, companies will need to fine-tune their Large Language Models (LLM) using their own data sets to maximize the value of the technology and address their unique needs. There is an opportunity for organizations to leverage their content Knowledge Graphs to accelerate their AI initiatives and get SEO benefits at the same time.

What is an LLM? 

A Large Language Model (LLM) is a type of generative artificial intelligence (AI) that relies on deep learning and massive data sets to understand, summarize, translate, predict and generate new content.

LLMs are most commonly used in natural language processing (NLP) applications like ChatGPT, where users can input a query in natural language and generate a response. Businesses can utilize these LLM-powered tools internally to provide employees with Q&A support or externally to deliver a better customer experience.

Despite the efficiency and benefits it offers, however, LLMs also have their challenges.

LLMs are known for their tendency to ‘hallucinate’: producing erroneous outputs that are not grounded in the training data or that stem from misinterpretations of the input prompt. They are expensive to train and run, hard to audit and explain, and often provide inconsistent answers.

Thankfully, you can use knowledge graphs to help mitigate some of these issues and provide structured and reliable information for the LLMs to use.

What is a Knowledge Graph?

Gartner’s “30 Emerging Technologies That Will Guide Your Business Decisions” report, published in February 2024, highlighted Generative AI and Knowledge Graphs as critical emerging technologies companies should invest in within the next 0-1 years. 

A Knowledge Graph is a collection of relationships between things defined using a standardized vocabulary, from which new knowledge can be gained through inferencing. When knowledge is organized in a structured format, it enables efficiencies in the retrieval of information and improves accuracy.

For instance, most organizations have websites that contain extensive information about the business, such as its products and services, locations, blogs, events, case studies, and more. However, this information is unstructured because it exists as free text on the website.

You can use Structured Data, also known as Schema Markup, to describe the content and entities on each page, as well as the relationships between these entities across your site and beyond. Implementing semantic Schema Markup can:

  • Help search engines better understand and contextualize your content, thereby providing users with more relevant results on the SERP
  • Help your organization develop a reusable content knowledge graph. This graph can provide valuable structured information to enhance your business’s capabilities with LLMs.

Learn the fundamentals of Content Knowledge Graphs and actionable steps to develop your own using Schema Markup.

Using an LLM to Generate your Schema Markup

To develop your content knowledge graph, you can create Schema Markup to represent your content. One of the new ways SEOs can achieve this is to use an LLM to generate the Schema Markup for a page. This sounds great in theory; however, there are several risks and challenges associated with this approach.

One such risk is property hallucination, which happens when the LLM makes up properties that don’t exist in the Schema.org vocabulary. Secondly, the LLM is likely unaware of Google’s required and recommended structured data properties, so it may guess at them and jeopardize your chances of achieving a rich result. To overcome this, you need a human to verify the structured data properties generated by the LLM.
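One way to catch property hallucinations before publishing is to check the generated keys against an allow-list. The sketch below uses a tiny illustrative allow-list and a made-up LLM output; in practice you would validate against the full Schema.org vocabulary and Google’s documented properties:

```python
# A tiny illustrative allow-list; the real Schema.org vocabulary defines
# hundreds of types and properties.
KNOWN_PROPERTIES = {"@context", "@type", "name", "description", "url"}

# Hypothetical LLM output containing a made-up property:
llm_output = {
    "@context": "https://schema.org",
    "@type": "Article",
    "name": "My Post",
    "articleSentiment": "positive",  # not in the Schema.org vocabulary
}

unknown = [key for key in llm_output if key not in KNOWN_PROPERTIES]
print(unknown)  # ['articleSentiment'] -- flag these for human review
```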

LLMs are good at identifying entities on Wikidata. However, they lack knowledge of entities defined elsewhere on your site. This means the markup created by an LLM will create duplicate entities, disconnected across pages on your site or even within a page, making it even more difficult for you to manage your entities.

In addition to duplicating entities, LLMs lack the ability to manage your Schema Markup at scale. They can only produce static Schema Markup for each page. If you make changes to the content on your site, your Schema Markup will not update dynamically, which results in schema drift.

With all the risks and challenges of this piecemeal approach, the Schema Markup created by the LLM is static and unconnected for a page—it doesn’t help you develop your content knowledge graph.

Instead, you should create your Schema Markup in a connected, scalable way that updates dynamically. That way, you’ll have an up-to-date knowledge graph that can be used not only for SEO but also to accelerate your AI experiences and initiatives.

Synergy Between Knowledge Graphs and LLMs

There are three main ways of leveraging the content knowledge graph to enhance the capabilities of LLMs for businesses.

  1. Businesses can train their LLMs using their content knowledge graph.
  2. Businesses can use LLMs to query their content knowledge graphs.
  3. Businesses can structure their information in the form of a knowledge graph to help the LLM function more effectively.

Training the LLM Using Your Content Knowledge Graph

For a business to thrive in this technological age, connecting with customers through their preferred channel is crucial. LLM-powered AI experiences that answer questions in an automated, context-aware manner can support multi-channel digital strategies. By leveraging AI to support multiple channels, businesses can serve their customers through their preferred channels without having to hire more employees.

That said, if you want to leverage an AI chatbot to serve your customers, you want it to provide the right answers at all times. However, LLMs don’t have the ability to fact-check; they generate responses based on patterns and probabilities. This results in issues such as inaccurate responses and hallucinations.

To mitigate this issue, businesses can use their content knowledge graphs to train and ground the LLM for specific use cases. In the case of an AI chatbot, the LLMs would need an understanding of what entities and relations you have in your business to provide accurate responses to your customers.

Using the Schema.org Vocabulary to Define Entities

The Schema.org vocabulary is robust, and by leveraging the wide range of properties available in the vocabulary, you can describe the entities on your website and how they are related with more specificity. The collection of website entities forms a content knowledge graph that is a comprehensive dataset that can ground your LLMs. The result is accurate, fact-based answers to enhance your AI experience.

Let’s illustrate how your content knowledge graph can train and inform your AI Chatbot.

A healthcare network in the US has a website with pages on their physicians, locations, specializations, services, etc. The physician page has content relating to the specific physician’s specialties, ratings, service areas and opening hours.

Suppose the healthcare network has a content knowledge graph that captures all the information on its site. When a user asks the AI Chatbot, “I want to book a morning appointment with a neurologist in Minnesota this week,” the Chatbot can deduce the answer by accessing the healthcare network’s content knowledge graph. The response would be the names of the neurologists who serve patients in Minnesota and have morning appointments available, along with their booking links.
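A sketch of the Physician markup behind this example, with the kind of fields a grounded chatbot would filter on. Every name, URL, and opening-hours value here is invented for illustration:

```python
# Hypothetical Physician entity; names, URLs and hours are invented.
physician = {
    "@context": "https://schema.org",
    "@type": "Physician",
    "@id": "https://example-health.com/doctors/jane-doe#physician",
    "name": "Dr. Jane Doe",
    "medicalSpecialty": "Neurology",
    "areaServed": "Minnesota",
    "openingHours": "Mo-Fr 08:00-12:00",  # morning availability
    "url": "https://example-health.com/book/jane-doe",  # booking link
}

# A chatbot grounded in the knowledge graph can answer the query by
# filtering on exactly these fields:
matches = (physician["medicalSpecialty"] == "Neurology"
           and physician["areaServed"] == "Minnesota"
           and physician["openingHours"].endswith("08:00-12:00"))
print(matches)  # True
```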

The content knowledge graph is also readily available, so you can quickly deploy your knowledge graph and train your LLM. If you are a Schema App customer, we can easily export your content knowledge graph for you to train your LLM.

Using LLMs to Query Your Knowledge Graph

Instead of training the LLM, you can use the LLM to generate the queries to get the answers directly from your content knowledge graph.

This approach of generating answers through the LLM is less complicated, less expensive and more scalable. All you need is a content knowledge graph and a SPARQL endpoint. (Good news, Schema App offers both of these.)

  1. The Schema App application loads the content model from your content knowledge graph, which would be all the Schema.org data types and properties that exist within your website knowledge graph.
  2. Then the user would ask the Schema App application a question.
  3. The Schema App application combines the question with the content model and asks the LLM to write a SPARQL query. Note: The only thing the LLM does is transform the question into a query.
  4. The Schema App application then executes the SPARQL query against your content knowledge graph and displays the results, using the LLM to format the response.
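The four steps above might be sketched like this. `call_llm` is a stand-in for a real LLM API call, and the SPARQL execution is stubbed rather than hitting a live endpoint, so the flow, not the output, is the point:

```python
def call_llm(prompt: str) -> str:
    """Stub standing in for a real LLM call that translates the question
    plus content model into SPARQL -- the only thing the LLM does here."""
    return "SELECT ?name WHERE { ?s a schema:Service ; schema:name ?name . }"

def answer(question: str, content_model: str) -> str:
    # Steps 1-3: combine the user's question with the content model and
    # ask the LLM to do one thing only -- write a SPARQL query.
    prompt = (f"Content model:\n{content_model}\n\n"
              f"Question: {question}\n\nWrite a SPARQL query.")
    sparql = call_llm(prompt)
    # Step 4: execute the query against the knowledge graph (stubbed here)
    # and return a formatted response.
    return f"Query sent to the SPARQL endpoint:\n{sparql}"

print(answer("What services do you offer?", "schema:Service, schema:name"))
```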

This method is possible because the LLMs have a great understanding of SPARQL and can help translate the question from natural language to a SPARQL query.

By doing this, the LLM doesn’t have to hold the data in memory or be trained on the data because the answers exist within the content knowledge graph, which makes it stateless and a less resource-intensive solution. Furthermore, companies can avoid providing all their data to the LLM as this method introduces a control point to the knowledge graph owner to only allow questions on their data that they approve.

Overcoming LLM Restrictions

This approach also overcomes some of the restrictions of the LLMs.

For example, LLMs have token limits, which restrict how much text can be included in the input and output. This approach eliminates the problem by using the LLM only to build the query and using the knowledge graph to answer it. Since SPARQL queries can run over gigabytes of data, they don’t have any token limitations. This means you can use an entire content knowledge graph without worrying about the word limit.

By using the LLM for the sole purpose of querying the knowledge graph, you can achieve your AI outcomes in an elegant, cost-effective manner and have control of your data while also overcoming some of the current LLM restrictions.

Optimizing LLMs by Managing Data in the form of a Knowledge Graph

“You can machine learn Obama’s birthplace every time you need it, but it costs a lot and you’re never sure it is correct.” – Jamie Taylor, Google Knowledge Graph

One of the most considerable costs of running an LLM is the inference cost (aka the cost of running a query through the LLM).

In comparison to a traditional query, LLMs like ChatGPT have to run on expensive GPUs to answer queries ($0.36 per query according to research), which can eat into profits in the long run.

Businesses can reduce the inference cost of the LLM by storing the historical responses or knowledge generated by the LLM in the form of a knowledge graph. That way, if someone asks the question again, the LLM does not have to exhaust resources to regenerate the same answer. It can simply look up the answer stored in the knowledge graph.
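The lookup idea can be sketched in a few lines. Here a plain dict stands in for the knowledge graph store, and `expensive_llm` for a paid inference call; both are illustrative stand-ins, not a real system:

```python
calls = {"n": 0}  # counts how many times we actually hit the LLM

def expensive_llm(question: str) -> str:
    """Stand-in for a costly LLM inference call."""
    calls["n"] += 1
    return f"generated answer to: {question}"

cache = {}  # in a real system: statements stored in the knowledge graph

def ask(question: str) -> str:
    if question not in cache:
        cache[question] = expensive_llm(question)  # pay inference cost once
    return cache[question]  # repeat questions become a cheap lookup

ask("Where was Obama born?")
ask("Where was Obama born?")
print(calls["n"])  # 1 -- the second ask never touched the LLM
```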

Unstructured data that the LLM is trained on can also cause inefficiencies in the retrieval of information and high inference costs. Therefore, converting unstructured data such as documents and web pages into a knowledge graph can reduce information retrieval time and produce more reliable facts.

As the volume of data in the hybrid cloud environment continues to grow exponentially, knowledge graphs play a crucial role in data management and organization. They contribute to the ‘Big Convergence,’ which combines data management and knowledge management to ensure efficient information organization and retrieval.

Build Your Knowledge Graph Through Schema App

In summary, the integration of knowledge graphs with LLMs can significantly enhance decision-making accuracy, especially in the realm of Marketing.

The content knowledge graph is an excellent foundation to leverage schema data in LLM tools, leading to more AI-ready platforms. It’s an investment that could pay off handsomely, especially in a world increasingly reliant on AI and knowledge management.

At Schema App, we can help you quickly implement your Schema Markup data layer and develop a semantically relevant and ready-to-use content knowledge graph to prepare your organization for AI.

Regardless of whether you use Schema App to author your Schema Markup, we can produce a content knowledge graph for you. Schema App can capture the Schema.org data from your existing implementation using our Schema App Analyzer to develop your marketing knowledge graph.

Get in touch with our team to find out more about how Schema App can help you build your marketing knowledge graph to enhance your LLM.

4 Basic SEO Factors to Consider Before Doing Schema Markup
https://www.schemaapp.com/schema-markup/4-basic-seo-factors-to-consider-before-doing-schema-markup/
Thu, 02 Mar 2023 16:31:27 +0000

The post 4 Basic SEO Factors to Consider Before Doing Schema Markup appeared first on Schema App Solutions.

If you’ve been looking for an SEO tactic that will help your organization drive more organic traffic to your site, Schema Markup is an effective solution. 

By adding Schema Markup (also known as Structured Data) to your page, your page can show up as a rich result on the search engine results pages (SERPs). Some rich results claim a prime location at the top of SERPs, while others include additional information about your page, such as images, pricing, reviews, and snippets of content.

Even though Google says Schema Markup does not have a direct impact on your rankings, investing in it can definitely help your pages stand out in the SERP and drive an increase in Click-Through Rates. 

However, before you start implementing Schema Markup on your page, there are a few basic technical SEO factors that can have an impact on your search visibility. Here are the 4 SEO factors you should consider before doing Schema Markup: 

1. Ensure Your Pages Are Indexable

Indexing is the process that search engines use to understand what your page is about. For example, Google’s algorithm achieves this by analyzing textual content, key content tags, alt attributes, and other elements of your page. 

However, some sites mistakenly embed a no-index rule in their site’s code, which will prevent certain content from showing up on SERPs. If your page is not indexed, search engines cannot read your Schema Markup or your content, which defeats the purpose of implementing Schema Markup.

To find out whether your site is indexed, launch Google’s URL inspection tool, paste in a URL, and initiate your query. Within seconds, Google will reveal if your URL is indexed. If it isn’t, you need to adjust your robots.txt file or meta tags to fix the issue.
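As a simplified sketch of what to look for, the check below scans a page’s HTML for a robots meta tag carrying a noindex directive. It is intentionally minimal: real crawlers also honor `X-Robots-Tag` response headers and robots.txt rules, and attribute order can vary:

```python
import re

def has_noindex(html: str) -> bool:
    """Return True if the HTML contains a robots meta tag with 'noindex'.
    Simplified: assumes name= appears before content=, and ignores
    X-Robots-Tag headers and robots.txt, which real crawlers also check."""
    m = re.search(
        r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']*)',
        html, re.IGNORECASE)
    return bool(m) and "noindex" in m.group(1).lower()

print(has_noindex('<meta name="robots" content="noindex, nofollow">'))  # True
print(has_noindex('<meta name="robots" content="index, follow">'))      # False
```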

2. Have a Fast Page Load Speed

You can implement Schema Markup regardless of page load speed. However, page load speed impacts your ranking on search engines like Google, which is why you should strive to optimize it before investing in more extensive technical SEO like Structured Data. 

Google considers page load speed so important that it’s one of Google’s Core Web Vitals – a metric that also considers your page’s visual stability and interactivity. If your page speed is slow, it will negatively impact the user experience and your “crawl budget,” which refers to the resources Google allocates to crawl your page.

Additionally, if rendering your page and loading its JavaScript takes too long, Google might index the site before the JavaScript is finished executing and not even see some of your content or Schema Markup. As a result, you might miss your chance to be featured as a rich result on the SERP. Therefore, it is vital for you to address your page load speed before implementing Schema Markup. 

To assess and optimize load speed, we suggest following Google’s recommendations.

3. Have a Mobile-Friendly Site

As of Q4 of 2022, mobile devices accounted for over 59% of global web traffic. With these numbers climbing daily, it’s no surprise that Google has adopted a mobile-first indexing philosophy. 

This means that the search engine giant will index websites based on their mobile versions rather than the desktop alternatives. Therefore, the mobile version of your site must be indexable and optimized for speed if you want to achieve a strong ranking.

Your website should be responsive, which means its format seamlessly adapts to the screen size and resolution of the user’s mobile device. This is so important to the user experience that Google incorporated this concept into one of its most significant algorithm updates in 2020. 

The bottom line is this: If your page does not rank well in mobile searches, Schema Markup will do little to improve your visibility. Before you start doing any Schema Markup, test your site’s mobile-friendliness with Google’s free tool to ensure your site functions well on any device.

4. Have More Helpful Content

Google recently launched its Helpful Content Update, a system designed to reward sites with people-first content that provide visitors with a satisfying, valuable experience. Conversely, content that falls short of visitor expectations will perform poorly in rankings.

This update is another in a long string of algorithm adjustments to discourage sites from keyword-stuffing their content and producing content that lacks any real substance for readers. Instead, it gives more weight to sites that produce quality content written for people, not algorithms.

As you start investing in Schema Markup, evaluate the quality of your site’s content and ensure it answers the user’s queries and meets their needs. On top of that, you should make sure that your page has the content needed to meet the required properties to be eligible for the relevant rich results. 

Google has also provided some great tips for generating helpful content, so be sure to put these tips to use.

Learn how to optimize your content to achieve Google’s rich results.  

Run an Effective Schema Markup Strategy by Mastering the Basics

Rich results can help differentiate your website from the competition. But if your website doesn’t load quickly, lacks quality on-page content, and isn’t optimized for mobile devices, it will not rank well even if you invest heavily into Structured Data.

With that in mind, we recommend mastering these basic tenets of SEO, so you have a solid groundwork upon which to build. To be clear, while your site does not have to run at lightning-fast speeds and host content written by world-class copywriters, it should function reliably on desktop and mobile alike and feature a good mix of content that provides value to readers. 

If you’re ready to start investing in a sophisticated Schema Markup strategy that will drive more organic traffic to your website, get in touch with our team to learn more about our end-to-end Schema Markup Solution.

The post 4 Basic SEO Factors to Consider Before Doing Schema Markup appeared first on Schema App Solutions.

]]>
Schema Drift: The Divergent Schema Markup! https://www.schemaapp.com/schema-markup/schema-drift-the-divergent-schema-markup/ Thu, 19 May 2022 15:22:34 +0000 https://www.schemaapp.com/?p=13038 Change is constant, and on the world wide web, it is even more true. For a website product owner and digital teams, many things happen that may be out of your control.   For example: Google Features are introduced and updated Schema.org versions change Content is published, updated or moved.  JavaScript and 3rd party components get...

The post Schema Drift: The Divergent Schema Markup! appeared first on Schema App Solutions.

]]>
Change is constant, and on the World Wide Web this is even more true. For website product owners and digital teams, many things happen that may be out of your control.

For example:

  • Google Features are introduced and updated
  • Schema.org versions change
  • Content is published, updated or moved. 
  • JavaScript and 3rd party components get updated
  • Syndicated content changes structure or content
  • CMS switches
  • Websites merge or are re-architected
  • Digital team members change (SEO, Content)
  • Companies have mergers and acquisitions, centralize and decentralize

When these things happen, digital teams, and SEO strategists in particular, are ready to navigate and diagnose the issues. When it comes to managing your schema markup through these changes, ask: is the schema markup reflective of these changes? How do you know? To what extent? When changes happen and the schema markup falls out of sync with what is on your website, it is called “Schema Drift”.

Schema Drift is a complicated problem that Schema App’s Highlighter resolves. In this article, we’ll define what Schema Drift is, when it tends to happen and how to calculate the magnitude of the issue. 

What is Schema Drift?

Schema Drift is the divergence between web content and its schema markup.

Typically this shows up as static, hard-coded schema.org markup that does not change along with updated page content. Drift is a measure of the distance between the new content and the original schema.org markup. Schema markup drifts over time through a change in either the content or the markup without a corresponding change in its counterpart.

Schema Drift was recently mentioned by Google’s Martin Splitt in the podcast “Search Off the Record: Structured Data: What’s it all about?” At 19:29, he says:

“[How] to ensure that there’s no drift between what is on the page and what is in the structured data [is] not necessarily easy.” – Martin Splitt

Is Schema Drift a Data Quality problem?

Yes, at its core, schema drift is a data quality problem.

Schema.org markup is a machine-readable representation, a data layer, for content that is presented to human readers. Data Quality, meanwhile, is the measure of how well-suited a data set is to serve its specific purpose. Any gap between the data layer and the content it represents therefore shows up as a data quality problem.

Data Quality is an established IT discipline with a framework, the Data Management Body of Knowledge (DMBOK), developed over 30 years by a community of experts. DMBOK describes data quality as having the following characteristics:

  • Completeness – Does the schema data describe the whole content, and is it connected to the adjacent data items?
  • Validity – Is the schema data syntactically correct, semantically correct per the schema.org model, and valid per the Google structured data guidelines?
  • Accuracy – Degree to which the schema data represents the content
  • Consistency – Degree to which the data is equal within and between datasets
  • Uniqueness – Degree to which data is unique and cannot be mistaken for other entries
  • Timeliness – Degree to which the data is available at the time it is needed

Content-based Schema Drift

Primary schema drift occurs when content on the page is updated but the corresponding schema.org markup does not get updated. This is typical when schema.org markup uses static data elements and users copy/paste content into the schema.

Configuration-based Schema Drift

Inversely, Schema Drift can also occur when the schema.org markup is changed without a corresponding change in the content. Perhaps there is a change in mappings, and a setting changed for a group of pages accidentally affects the properties of a subgroup of pages. While not intended, this kind of drift, arising from variable configurations, can be more difficult to detect.

External Schema Drift

A more subtle version of schema drift occurs when a page’s connected data items change in ways that are not directly observable in the page content. “External” in this case means outside the webpage container, such as other webpages or third-party providers.

Example 1: A Physician’s primary webpage is likely connected to its Service availability; when the business hours change, the hoursAvailable should also be updated.

Example 2: If an Event’s schema markup is correct initially, but the venue changes (Event > location > name) or the price goes up due to high demand (Event > offers > price), those values must change too. These properties of connected data items may not explicitly appear in the page content, but they are certainly relevant and a requirement of the Google feature.

Other times there are 3rd-party plugin providers, e.g. product review platforms, which publish schema markup for products without connecting it to the rest of the schema markup. While we can use additive schema markup methods with @id, the approach is brittle and a form of external schema drift.

Schema.org Vocabulary Drift

Terminology Changes

Throughout the year, the Schema.org community releases several updates to the vocabulary (https://schema.org/docs/releases.html). Over the past few years, there have been several significant changes to terms and the organization of extensions, and each change can create Schema Drift. In particular, in v0.91 a large number of plural properties were made singular; for example, maps became map and members became member. The vocabulary records that members is supersededBy member, telling you that if your markup uses the old property, you should update it.

With the schema.org data model loaded into an RDF graph database, we can retrieve the superseded terms using a simple SPARQL query:
# Find supersededBy terms
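One way this query might look (a sketch, assuming the schema.org vocabulary release file has been loaded into a triple store under the http://schema.org/ namespace):

```sparql
# List every term flagged as superseded, with its replacement.
PREFIX schema: <http://schema.org/>
SELECT ?old ?new
WHERE { ?old schema:supersededBy ?new . }
ORDER BY ?old
```

Running this against the current vocabulary yields pairs like the following: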

Old term New term
schema:Code schema:SoftwareSourceCode
schema:DatedMoneySpecification schema:MonetaryAmount
schema:Dermatologic schema:Dermatology
schema:Season schema:CreativeWorkSeason
schema:Taxi schema:TaxiService
schema:UserBlocks schema:InteractionCounter
schema:UserCheckins schema:InteractionCounter
schema:UserComments schema:InteractionCounter
schema:UserDownloads schema:InteractionCounter
schema:UserInteraction schema:InteractionCounter
schema:UserLikes schema:InteractionCounter
schema:UserPageVisits schema:InteractionCounter
schema:UserPlays schema:InteractionCounter
schema:UserPlusOnes schema:InteractionCounter
schema:UserTweets schema:InteractionCounter
schema:actors schema:actor
schema:albums schema:album
schema:application schema:actionApplication
schema:area schema:serviceArea

Vocabulary is removed

In some vocabulary updates, such as v7.0, several largely unused medical terms were removed. If you were a company that used these, you could query an RDF database to look for them. The v7.0 release notes state:

“Removed several largely unused medical health properties whose names were inappropriately general: action, background, cause, cost, function, indication, origin, outcome, overview, phase, population, purpose, source, subtype. Note that we do not remove terms casually, but in the current case the usability consequences of keeping them in the system outweighed the benefits of retaining them, even if flagged as archived/superseded.”

Using SPARQL, we can query for data items that use properties no longer in the vocabulary:
# Find data items using removed properties
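A sketch of such a query, assuming your crawled markup is stored as RDF under the http://schema.org/ namespace, using the property names listed in the v7.0 release notes above:

```sparql
# Flag triples that still use the medical properties removed in v7.0.
PREFIX schema: <http://schema.org/>
SELECT ?item ?property ?value
WHERE {
  ?item ?property ?value .
  VALUES ?property {
    schema:action schema:background schema:cause schema:cost
    schema:function schema:indication schema:origin schema:outcome
    schema:overview schema:phase schema:population schema:purpose
    schema:source schema:subtype
  }
}
```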

Schema.org is versioned, but Google does not appear to support versioning the markup via the context, and we haven’t seen schema.org providers (including us) specify which version of Schema.org they are implementing.

Calculating the Magnitude of Schema Drift

To estimate how big a drift problem is, think of it as an area with two dimensions:

  • Vertical distance is a measure of time: the number of hours the schema markup has been incorrect, x
  • Horizontal distance is a measure of incorrect properties: a simple measure is the number of properties that are no longer correct, y

Drift = x hours * y properties 

If you know what day the schema went adrift, you can calculate the total area as the risk profile of the drift. If you compare that date against crawl data in Google Search Console, you can hope that Google hasn’t indexed the drifted markup yet.
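As a rough sketch, the calculation above could be expressed like this (the function name and dates are hypothetical illustrations, not part of any tool):

```python
# Drift "risk area": hours of incorrect markup (x) multiplied by the
# number of properties that no longer match the page content (y).
from datetime import datetime, timezone

def drift_score(went_adrift_at: datetime, now: datetime, incorrect_properties: int) -> float:
    """Drift = x hours of incorrect markup * y incorrect properties."""
    hours_adrift = (now - went_adrift_at).total_seconds() / 3600
    return hours_adrift * incorrect_properties

# Example: markup drifted 3 days (72 hours) ago and 2 properties are now wrong.
adrift = datetime(2022, 5, 16, tzinfo=timezone.utc)
now = datetime(2022, 5, 19, tzinfo=timezone.utc)
print(drift_score(adrift, now, 2))  # 72 hours * 2 properties = 144.0
```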

How might you find Schema Drift on your site?

If and when there is divergence, evaluate the page: if y > 0, you will want to fix the schema markup. Address schema drift as quickly as possible, ideally before Google indexes it.

Schema Monitoring

Toolkits that monitor your website can and should detect schema drift. Often these tools will inform you about what is discovered on the page and what errors or warnings it has. Most, however, do not understand schema drift, because they do not compare the content against the schema markup. At scale, this is a difficult endeavour, which is why the problem is persistent.

Schema App’s crawler lets you query the resulting database for outdated properties, allowing us to monitor vocabulary-level Schema Drift.

Can I use Microdata & RDFa to avoid schema drift?

Microdata and RDFa are inline HTML syntaxes that directly connect the schema scope and properties to the raw content. While not without their limitations, these syntaxes are a good way to avoid schema drift. For more complicated graphs of schema content, interlinking data items on the page and across pages can be done with itemref, but it may point to a broken link or items that are no longer valid.
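For illustration, here is a minimal microdata sketch (reusing the Super Suds example from below; attribute values are hypothetical) showing how the markup is bound to the visible content itself:

```html
<!-- Because the schema properties annotate the rendered text, editing the
     visible name or price also updates the machine-readable data. -->
<div itemscope itemtype="https://schema.org/Product">
  <h1 itemprop="name">Super Suds</h1>
  <div itemprop="offers" itemscope itemtype="https://schema.org/Offer">
    <span itemprop="price" content="5.99">€5.99</span>
    <meta itemprop="priceCurrency" content="EUR" />
  </div>
</div>
```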

Why is Schema Drift important to BI and Data Analytics?

“Data Management is the development, execution, and supervision of plans, policies, programs, and practices that deliver, control, protect, and enhance the value of data and information assets throughout their lifecycles.” (DAMA DMBOK, via https://dataninjago.com/2021/09/15/what-is-data-management-actually-dama-dmbok-framework/)

In the broadest sense, Data Quality is a building block of Data Management. The contribution of the DMBOK pyramid is to reveal the logical progression of steps for constructing a data system. Whether or not you use this exact approach, data quality is a necessity for building data analytics projects. If the organization starts to feel the pain of bad data quality, you may need to revisit data quality, establish reliable metadata, and enforce a consistent data architecture.

Ensuring the data is fit for use comes in Phase 2 of an organization’s data management journey. For the data to be of service to higher-order functions, its quality must be reliable enough to base decisions on. If you’re like some of our customers, schema.org data is supplied not only to Google but also to other data consumers in the marketing tech stack. There, the problem of schema drift and data quality is magnified.

Schema App’s Solution to Schema Drift

Schema App manages Schema Drift through the following solutions.

The Schema App Highlighter is built to dynamically generate schema markup based on the content on the page. So if your teams are changing the content, it is dynamically updated. In addition, if templates within a site are changed, the configuration in Schema App can be updated in minutes.

The Schema App Analyzer provides periodic crawls of your website to report on your schema data in totality. In addition to validating for Google Features, you can visualize the results and query the data (RDF triples) for deprecated properties.

Schema App’s dynamic Editor and Highlighter libraries import the latest schema.org vocabulary, mapping old definitions to new ones, so that they are updated dynamically in our customers’ markup. 

Lastly, Customer Success at Schema App reviews and resolves errors and warnings, working with our customers to manage content, schema, and component changes. 

If you don’t want to worry about Schema Drift, reach out, we’d love to work with you.

Resources & Links


]]>
Schema Markup for Product Models https://www.schemaapp.com/schema-markup/schema-org-variable-products-productmodels-offers/ https://www.schemaapp.com/schema-markup/schema-org-variable-products-productmodels-offers/#respond Mon, 08 Jun 2020 14:42:03 +0000 https://www.schemaapp.com/?p=6024 Creating schema markup for a single product is straightforward and well documented. But things get more complicated when you’re creating markup for many variations of a product. There are several ways to create schema markup for complex products. This article will describe three common strategies for modeling product variations so you can optimize your markup...

The post Schema Markup for Product Models appeared first on Schema App Solutions.

]]>
Creating schema markup for a single product is straightforward and well documented. But things get more complicated when you’re creating markup for many variations of a product. There are several ways to create schema markup for complex products. This article will describe three common strategies for modeling product variations so you can optimize your markup for search engines.

These strategies are:

  1. Simplified and Aggregate Product Offers
  2. Each Variant as an Individual Offer
  3. Each Variant as a Product Model

What is a Product Variant?

Generally, variants are identified as having their own Stock Keeping Units (SKUs), which are unique within the product group and used in eCommerce and supply chain information systems. Below is what WooCommerce and Shopify, two popular eCommerce platforms, say about product variants.

WooCommerce Variable Products are a product type that lets you offer a set of variations on a particular product such as price, stock, size and more. For example, they may be used on a shirt that’s offered in large, medium and small sizes and in different colours.

Shopify Product Variants are used on products that come with more than one option, such as color or size. Each combination of options is a variant of that product. For example, you might sell a t-shirt with two options, such as size and color. The size option might have three option values: small, medium, or large. The color option might have two option values: blue or green. A variant of these options could be a small, blue t-shirt.

1. Simplified and Aggregate Product Offers

For situations where you don’t have all the data readily available, or want to start with something basic, you can simplify the product models. With this approach, your Product markup would only use the properties that are shared across all variants, such as name, image, and description. The Product type would then use the offers property to connect to either an Offer, if there is no variation in pricing, or an AggregateOffer, if pricing varies among the product variants.

For example, if you’re selling shoes, there may be variations in sizing and colour, but all of them are the same price. You could create markup for a single Product, excluding all sizing and colour information, and connect it to an Offer data item with the price shared across all product models. This is what the markup would look like:

{
  "@context": "http://schema.org/",
  "@type": "Product",
  "name": "Clarks Falalala Shoes for Men",
  "image": "https://example.net/shoes/clarks-falalala.jpeg",
  "description": "A great comfortable walking shoe, carried in sizes 9-11, but you wouldn’t really know that unless you applied fancy NLP to this string",
  "offers": {
    "@type": "Offer",
    "price": 45.99,
    "priceCurrency": "EUR",
    "availability": "InStock"
  }
}

If you were selling something that varied in price, for instance, soap that comes in 250 ml, 500 ml and 1000 ml bottles, then you could call out the lowest price and highest price using AggregateOffer:

{
  "@context": "http://schema.org/",
  "@type": "Product",
  "name": "Super Suds",
  "image": "https://example.net/soap/super-suds.jpeg",
  "offers": {
    "@type": "AggregateOffer",
    "lowPrice": 5.99,
    "highPrice": 17.99,
    "priceCurrency": "EUR",
    "availability": "InStock"
  }
}

2. Each Variant as an Individual Offer

This first option doesn’t tell the machine channel anything about the variation of products you carry, nor does it provide stock information at the granularity of the individual SKU. The next level of detail is to include each variant’s price and availability as a separate Offer. Each Offer should have (as Google recommends) a sku to differentiate it from other variants, along with its price and availability. Using the same example as before, we might generate:

{
  "@context": "http://schema.org/",
  "@type": "Product",
  "name": "Clarks Falalala Shoes for Men",
  "image": "https://example.net/shoes/clarks-falalala.jpeg",
  "description": "A great comfortable walking shoe, carried in sizes 9-11, but now size 11 isn’t in stock",
  "offers": [ {
    "@type": "Offer",
    "sku": "QWERTYSHOE-9",
    "price": 45.99,
    "priceCurrency": "EUR",
    "availability": "InStock"
  },{
    "@type": "Offer",
    "sku": "QWERTYSHOE-10",
    "price": 45.99,
    "priceCurrency": "EUR",
    "availability": "InStock"
  },{
    "@type": "Offer",
    "sku": "QWERTYSHOE-11",
    "price": 45.99,
    "priceCurrency": "EUR",
    "availability": "OutOfStock"
  } ]
}

The Soap Suds example shows the varying Offer properties sku, name, price, priceCurrency (in ISO 4217 format), and availability:

{
  "@context": "http://schema.org/",
  "@type": "Product",
  "name": "Super Suds",
  "image": "https://example.net/soap/super-suds.jpeg",
  "offers": [{
    "@type": "Offer",
    "sku": "egsoapsuds-250",
    "name": "Soap Suds 250 ml",
    "price": 5.99,
    "priceCurrency": "EUR",
    "availability": "InStock"
  },{
    "@type": "Offer",
    "sku": "egsoapsuds-500",
    "name": "Soap Suds 500 ml",
    "price": 10.99,
    "priceCurrency": "EUR",
    "availability": "OutOfStock"
  },{
    "@type": "Offer",
    "sku": "egsoapsuds-1000",
    "name": "Soap Suds 1000 ml",
    "price": 17.99,
    "priceCurrency": "EUR",
    "availability": "InStock"
  }]
}

3. Each Variant as a Product Model

If your products have significant variations among their critical properties, you may want to use the Product Model approach. Essentially, you define a schema.org/Product as the base product, adding properties that are common across all variations. Then, to express properties that are variable, use the ProductModel type. For example, the iPhone 11 is a Product with certain consistent characteristics, but there are different options for GB of memory, colour, and pricing. Each combination of these properties would be a different instance of ProductModel:

{
  "@context": "http://schema.org/",
  "@type": "Product",
  "name": "iPhone 11",
  "description": "A great device, loads of memory, 1 million different apps preloaded, outstanding camera, and even makes phone calls!",
  "image": "https://example.net/phones/apple-iphone11.jpeg",
  "offers": {
    "@type": "AggregateOffer",
    "lowPrice": 599.00,
    "highPrice": 899.00,
    "priceCurrency": "USD",
    "availability": "InStock"
  },
  "additionalProperty": {
    "@type": "PropertyValue",
    "name": "Memory",
    "unitCode": "E34", 
    "unitText": "GB",
    "value": "64"
  },
  "model": [ {
    "@type": "ProductModel",
    "name": "iPhone 11 with 64GB",
    "color": "White",
    "offers": {
      "@type": "Offer",
      "price": 599.00,
      "name": "White iPhone 11",
      "availability": "InStock"
    }
  },{
    "@type": "ProductModel",
    "name": "iPhone 11 with 64GB",
    "color": "Red",
    "offers": {
      "@type": "Offer",
      "price": 649.00,
      "name": "red usually costs slightly more because it's faster",
      "availability": "InStock"
    }
  },{
    "@type": "ProductModel",
    "name": "iPhone 11 with 128GB",
    "color": "White",
    "offers": {
      "@type": "Offer",
      "price": 899.00,
      "name": "White iPhone 11",
      "availability": "InStock"
    },
    "additionalProperty": {
      "@type": "PropertyValue",
      "name": "Memory",
      "unitCode": "E34",
      "unitText": "GB",
      "value": "128"
    }
  }]
}

Note that ProductModels themselves may contain other ProductModels. This relationship can be defined using the isVariantOf property.
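For example, a variant could point back to its base model like this (a minimal sketch reusing the iPhone example above; just one way to express the relationship):

```json
{
  "@context": "http://schema.org/",
  "@type": "ProductModel",
  "name": "iPhone 11 with 128GB",
  "isVariantOf": {
    "@type": "ProductModel",
    "name": "iPhone 11"
  }
}
```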

ProductModel Examples in the Wild

If you’d like to see more ProductModel examples in the wild, you can use PublicWWW to search for any schema class: see example.

unitCode Lookup Values

If you’re wondering where the unitCode “E34” comes from, then you’ll want to look up UN/CEFACT Common Codes for specifying the unit of measurement. Here are some common codes for various units of measurement. A spreadsheet is available to download here.

UN/CEFACT Common Code Unit of Measurement
28 kg/m²
2N dB
4H µm
4K mA
4P N/m
A24 cd/m²
A86 GHz
A94 g/mol
B22 kA
B32 kg • m2
B43 kJ/(kg.K)
B49 kΩ
B61 lm/W
BAR bar
C16 mm/s
C24 mPa.s
C26 ms
C45 nm
C62 1
C65 Pa.s
C91 1/K
C94 min-1
CDL cd
CEL °C
CMQ cm³
CMT cm
D33 T
D52 W/K
D74 kg/mol
DAY d
DD °
E01 N/cm²
E32 l/h
FAR F
GM g/m²
GRM g
HTZ Hz
HUR h
KEL K
KGM kg
KGS kg/s
KHZ kHz
KL kg/m
KMQ kg/m³
KVT kV
KWT kW
L2 l/min
LTR l
LUM lm
LUX lx
MBR mbar
MHZ MHz
MIN min
MMK mm²
MMQ mm³
MMT mm
MPA MPa
MQH m3/h
MQS m³/s
MTK m²
MTQ m³
MTR m
MTS m/s
NEW N
NU N • m
OHM Ω
P1 %
PAL Pa
SEC s
VLT V
WTT W

We want your schema markup to be successful! Schema markup can be time-consuming and complicated. That’s why we’re always looking for ways to make things easier for customers through our comprehensive solutions. Book a strategy call with our technical experts today!

Start reaching your online business goals with structured data.

 


]]>
https://www.schemaapp.com/schema-markup/schema-org-variable-products-productmodels-offers/feed/ 0
Interview with Nick Wilsdon – Data Portability & Google/Amazon Friend or Foe [Podcast & Transcript] https://www.schemaapp.com/schema-app-news/nick-wilsdon-data-portability-search/ Tue, 26 Mar 2019 17:25:21 +0000 https://www.schemaapp.com/?p=7659 Schema App’s CEO Martha van Berkel interviews Nick Wilsdon, search product owner from Vodafone on the topic of, “Data Portability and its role in Search”. Nick shares how he is seeing the search landscape change and how data will be the foundation for companies moving forward as the customer experience becomes fractured. Finally, they have...

The post Interview with Nick Wilsdon – Data Portability & Google/Amazon Friend or Foe [Podcast & Transcript] appeared first on Schema App Solutions.

]]>
Schema App’s CEO Martha van Berkel interviews Nick Wilsdon, search product owner at Vodafone, on the topic of “Data Portability and its role in Search”.

Nick shares how he is seeing the search landscape change and how data will be the foundation for companies moving forward as the customer experience becomes fractured. Finally, they have a conversation about Amazon and how they are disrupting the ecosystem.

Some of my favourite moments in the conversation with Nick include:

“There’s so many different places we are finding now to interact with the web, and I think that’s the big change that we are having to deal with. It’s not so much voice itself that is the biggest change. It’s the fact that the internet has leapt from being something contained on your laptop or your phone to being something that just surrounds us, and it will be on every billboard, every bus stop, every screen.”

“[In the EU],  there are pretty stringent controls on Google in terms of how much news, how much of a snippet, how much is fair use, how much is just taking advantage of the publisher and then taking their information. I think this is, again this is going to be the fight. It’s going to be the fight that the affiliates have had for years. They have been dealing with this for a very long time, they have been producing the content, they have been doing the work and they have a pull/push, good/bad relationship with Google where they kind of, cede some control to get more visibility. Now brands, they are kind of engaged in that same kind of you know, fight with Google, you know, how much can we give Google, how much can we still retain control? This will be something that we will have to think about a lot, and this will feed into the whole data issue if you lose control of your content, you also lose control of the data surrounding your content.”

If you’d like to listen to this interview in Podcast form, check out Connecting the Digital Dots, Interview with Nick Wilsdon on Spreaker or search for it on Google Podcasts. Enjoy the conversation!

__

Martha: Hi and welcome to Schema Stories. It’s Martha van Berkel, the CEO here at Schema App, and I am delighted today to be joined by my friend Nick Wilsdon. Welcome, Nick.

Nick: Hello Martha, I am glad to be here as well. Thank you.

Martha: Nick and I had the pleasure of speaking at Tech Retail in London, U.K. in September, and we had many fun conversations about where we thought this world of structured data was changing and how it played a role in data architecture, so we thought we’d share some of that conversation with everyone today. So, Nick, to start off, why don’t you introduce yourself and tell us a little about what you do in this area of SEO?

Nick: Yeah, absolutely Martha. My background is primarily SEO. It’s been nearly 20 years in this field. My current role is across Vodafone Group, so I work across all markets globally, 27+ markets, in terms of search products and any kind of innovation project around search and SEO.

Martha: Excellent. So, today we’re going to talk a little bit about the changing landscape and specifically the role that structured data is going to play, and maybe a little background on your journey with structured data, Nick: when did you first learn about it and start using it?

Nick: Oh, that’s a very long time ago. Yeah, it’s hard to put my finger on when it all started.

Martha: I know it was like 2012-2013 when we were starting to deal with it. So, I imagine you were in the same boat.

Nick: Yeah, it must have been around then. It was a very interesting idea, the fact that we could mark up certain bits of information to give it away in a way Google understood, and I remember back then Yandex was looking at it in Russia too. So, it makes sense. I think we looked at it initially for locations, for addresses; if I had to guess, my first usage of Schema would be for addresses and business names. It seemed to make a lot of sense, and I think we initially saw it as something where you get a clear advantage in SEO over your competitors because your information was marked up in that clearly understood way by the search engine. So, it’s very much search engine optimization in the very traditional sense.

Martha: And do you think about it differently now? Was 2018 an interesting year? In fact, I was talking to someone who reached out to us this past year saying, you know, 2019 is going to be the year of Schema Markup and structured data, and I sort of chuckled and said, “I was thinking 2018 was going to be the year of structured data and Schema Markup!” How did you see things change from when you first started using it, as it accelerated this past year? Could you talk a little about your perception of that and its relevance to the business?

Nick: Absolutely. It has certainly changed. It’s gone from being something where you are marking something up for advantage in Google, to organizing your data and your information. That’s a dramatic change. I think we’ve seen that especially with voice coming into the picture, where we are starting to see how to mark up specific information for voice. Schema is going to take on much wider usage, I think, across websites, across all information; it is now an information management technique, a categorization technique, linking between those different entities and making sure they make sense to search engines. So, it’s taking on a much wider appeal. And it’s certainly crossing many more fields, from just location and store pages to something that you use across the entire site, for products, for any kind of information you can think of, really, the way that it’s expanding.

Martha: So, it kind of brings me to a question I often have which is like, is structured data really just an SEO strategy now, right? Like with in Vodafone like who else is involved in these discussions because of the changes it has started to make?

Nick: Yes, it’s not just an SEO thing at all; it is sort of becoming a proxy CMS for data, isn’t it. It involves many, many different teams now in terms of getting that Schema implemented, and these are requests I’m seeing coming from the development side as well, because they want to write semantically, in a way that makes sense. So, the questions are not only coming from SEO teams. I think SEO is still primarily the driver of Schema, because you need to have a value proposition behind it, and certainly we understand that it’s making website information more understandable to Google. That’s great, but you need to have a business aspect to this as well, and I think SEO gives you that, in the sense that if we add this to our websites we will get more visibility and more traffic. Actually, not just more traffic, but a better kind of traffic, because we define our information in a better way.

Martha: Yes, quality traffic!

Nick: Exactly it’s quality. I think SEO still kind of drives schema adoption, but certainly, it has a wider appeal now than it used to.

Martha: One of the topics you and I spoke about at Tech Retail that was a little disruptive to the people who attended was when I made the blanket statement, “You’ve lost control of the customer experience.” Let me clarify. What we were talking about was how people are finding answers in search, or going through other channels like voice. As a result, people are getting some of the information they are looking for without ever getting to your website. This makes me think of schema as a data strategy, right? I think about the change in search as almost supporting that it is a data strategy, which then leads to the next big question: are websites still relevant? Or are people just going to consume the data, with context and understanding, through the channels they choose?

Nick: Yeah, you are right. I think this kind of underlines the dilemma that publishers have with Schema. How much do I mark up my information and just give it away to other services, give it away to Google? Because once I have marked up everything in a way that Google can understand, they can simply take those snippets and put them into the SERPs, and then no longer need to send traffic to my website. This is the dilemma that everyone has: if I categorize too much, have I given everything away to third-party platforms? This is a dilemma for a lot of publishers. But I am in a similar boat to you, Martha. I think you can't really think in that way, because SEO is about being discovered in lots of different mediums and ways, not only search results. Speakable markup is very much in line with that. You need your information and data to be found in many other places. It needs to be portable, and this portability provides the value, so you can't hold on too tightly to the fact that people aren't going to come to your website. You need to be focused entirely on: am I getting the sales, am I getting the conversions that matter? If I am getting these conversions through partnerships with third parties, who are taking part of my data, somehow the sale is still coming through to the business, and that's what really matters.

Martha: Revenue becomes the ultimate measure, right? And a lot of these other things really truly become vanity metrics, right?

Nick: Absolutely, it has to be revenue, has to be about being discovered in all these different platforms and mediums, you know from voice to all forms of search. Yes, it has to be revenue and not simply traffic.

Martha: It comes back to SEO getting tough on those ROI numbers, right? Because the actual endgame is key. We mentioned voice, and you know, Amazon has come in like a ten-ton truck into the voice space. They also own the distribution channel from a retail space, which I think is interesting. When we start thinking about how you order through your Alexa, that revenue is going through that one channel. How do you think about Amazon? Amazon doesn't necessarily have a search engine, although they have been partnering with Bing. How do you see Amazon disrupting the search landscape?

Nick: They are certainly leading in terms of product discovery, and when we see surveys from Jumpshot (a survey last year), the majority of product searches are actually carried out on Amazon, not on Google. And that doesn't really come as a surprise when you look at how Google's product offering is clearly inferior to Amazon's. So, I think product discovery happens on Amazon, and that is the way they can really disrupt this with voice. They dominate at the moment for voice tech; they clearly have the most distributed devices. Even though, you know, Google technically has more, because they have the phones thrown into those numbers, Amazon is way ahead. So, I think Amazon is going to disrupt search a lot, because they are going to be focused on the product, and if they focus on the product, that's where the money is. If anyone knows how the web develops, you follow the money. So, that's the threat that Amazon poses. But I can see that they're incredibly interested in Schema; they are incredibly interested in owning a knowledge graph around products, and they can probably do that in a better way than Google can. At the moment they have far more data to work with, far more historical data to work with.

Martha: Yeah, that's interesting. We've seen them hiring semantic intelligence and knowledge engineers, since we play in that world. So, it's interesting starting to see more knowledge engineers coming from the Amazon side. I think it will be really interesting; right now we don't see them necessarily publishing a lot on how to adopt structured data around skills. Can you see those worlds merging, or do you see Amazon starting to publish more about how they are using it?

Nick: They are using a lot of data in different places. Amazon is an incredibly exciting company at the moment; they fascinate me. I think they are looking at the crossovers between these. I mean, I saw something literally the other day that I thought was brilliant. Amazon was releasing advertising for relaxing sounds and relaxing sleep albums that they have now released, because they can sense there is a demand for them. I find it absolutely fascinating that it coincides with the demand that you can clearly see in terms of the top skills for relaxing sounds and for sleep-related skills that are available in the ecosystem. You are left wondering whether one is informing data on the other.

Martha: It's that circle.

Nick: Yeah, it's that circle. It's like what they are doing with the skills. They have done this before in terms of sales. When something sells particularly well on Amazon, Amazon then releases it as an Amazon product, so you can find the Amazon USB cables and all these things available, because Amazon can clearly see a need. They are a very commercial company, and they step in to fill that need. So, I think they will do that in a much better way, and it comes back to the point about following the money. Amazon is very good at following the money. That's where they are quite a big threat to Google, who would probably do this more for a sort of wider, more educational kind of piece or…

Martha: … or to make money in ads

Nick: I shouldn’t give them that much credit…yes money in ads, exactly, Martha.

Martha: I believe they follow the money too. It will just be really interesting how that business model gets disrupted with these new channels consuming information. So, the question I like to ask is you know, will websites be relevant in three to five years?

Nick: Yes, I think they will. They'll still be there, but they will just be one place you update your data to. I think websites themselves will be more likely to follow a kind of methodology of being more database-driven. So, all of your data is held in a central CMS (i.e. a headless CMS). You will concentrate more on the data, and the website will just be one place, one container, that you port your data to. But you will have your data in a fluid way that can really be ported around to every other screen that will be there – from voice tech to the screens that will be around different rooms or in the back of your car. There are so many different places we are finding now to interact with the web, and I think that's the big change that we are having to deal with. It's not that voice itself is the biggest change. It's the fact that the internet has leapt from being something contained on your laptop or your phone to being something that just surrounds us, and it will be on every billboard, every bus stop, every screen. The internet is just going to surround us now. I think when you have an environment like that, your data has to be portable to survive in it.

Martha: … and control how it's understood, right? I think that's a lot of how we look at it. How do you actually build in those control points to add context, especially as it becomes globally relevant? And the portability is also really interesting from the standpoint of licensing. So, that's something else we've been looking at: when your website stops being the primary place people are consuming, and Google is then reusing that data in different ways, how do you also put in control points for understanding who can license it and who can use it?

Nick: Yeah, that's the issue that the EU is having, particularly with news. So, there are pretty stringent controls on Google in terms of how much news, how much of a snippet, how much is fair use, and how much is just taking advantage of the publisher and taking their information. I think this is going to be the fight. It's the fight that the affiliates have had for years. They have been dealing with this for a very long time: they have been producing the content, they have been doing the work, and they have a push/pull, good/bad relationship with Google where they cede some control to get more visibility. Now brands are engaged in that same kind of fight with Google: how much can we give Google, and how much can we still retain control of? This will be something that we will have to think about a lot, and it feeds into the whole data issue. If you lose control of your content, you also lose control of the data surrounding your content. You are not collecting that data on the users who are engaging with your content, and often that's the biggest value that publishers have.

Martha: Understanding that audience.

Nick: Reselling that data.

Martha: Absolutely. Nick, this has been awesome. Thank you. We will leave it at that sort of scary thought of the future and Google or Amazon friend or foe and how do you take control of your data. Nick if people want to follow you or find you online, where do they look?

Nick: Yeah, absolutely. I am on Twitter a lot. So, feel free to follow me at Nick Wilsdon on Twitter, or you can find me fairly easily on LinkedIn or at NickWilsdon.com.

Martha: Excellent. Thanks, so much Nick. Hope you have way less snow in the U.K. than we do here in Canada today and look forward to continuing the conversation.

Nick: Brilliant, thanks Martha. Take care. Cheers.

At Schema App, one of our core values is to always be learning and teaching. That’s why we love talking with other structured data experts!

Are you ready to unleash the power of structured data?

 

The post Interview with Nick Wilsdon – Data Portability & Google/Amazon Friend or Foe [Podcast & Transcript] appeared first on Schema App Solutions.

How to Get Review Snippet Rich Results for Local Business using Third Party Reviews https://www.schemaapp.com/schema-markup/get-rating-rich-results-for-local-business-with-third-party-reviews/ Sat, 07 Jul 2018 18:35:33 +0000 https://www.schemaapp.com/?p=13076 Review Snippets are awesome! When they show up under a search result, you can get significantly higher click-through rates, ranging from 20-82% (as per Google’s latest case studies). While these coveted rich results are commonly found for products, and local businesses, there has been much debate as to whether you can achieve them using third-party...

The post How to Get Review Snippet Rich Results for Local Business using Third Party Reviews appeared first on Schema App Solutions.

Review Snippets are awesome!

When they show up under a search result, you can get significantly higher click-through rates, ranging from 20-82% (as per Google's latest case studies). While these coveted rich results are commonly found for products and local businesses, there has been much debate as to whether you can achieve them using third-party review sites (such as Google, Facebook, etc.).

Local business schema markup Google search

 

Well, we have good news! Based on evidence over the last few years, we are here to say that YES, you can get Review Snippet rich results for schema.org/LocalBusiness using third-party reviews. While there are some caveats and guidelines you need to follow, it bucks the prevailing advice in the marketplace.  Why trust us on this? We’ve helped clients get out of penalties with this recommendation using third-party reviews and wanted to share the learning with you. Let’s dig in.

Google has public documentation for each search engine results page feature; for reviews, you can find it here:

https://web.archive.org/web/20210302200820/https://developers.google.com/search/docs/data-types/review

Within it, there are two types of Review features.

  1. Critic Reviews: these come from movie critics or publishing companies with professional authors.
  2. Review Snippets: this is the most common feature coveted by SEO professionals. It generates a 5-star rating on the SERP pages and reliably provides a 10%+ boost in click-through rates. How does one qualify for these? As my ninth-grade teacher used to quip, "If all else fails, read the instructions".

Given we are talking about LocalBusiness, let’s see what kind of Schema.org markup qualifies.

We recommend you review the general structured data policies in Google's Search documentation. Next, have a look at the fine print in the guidelines.

  • Aggregate Rating is recommended for grouping multiple Review Ratings. This is clear.
  • Refer clearly to a specific product or service. Simply put, you must relate the Aggregate Rating to the thing being reviewed, either by nesting it under schema.org/aggregateRating or by pointing back with schema.org/itemReviewed.
  • "Make sure the reviews and ratings you mark up are readily available to users from the marked-up page. It should be immediately obvious to users that the page has review or rating content." This is where we see the Structured Data Penalties coming from.

Show Third Party Reviews

If you are marking up third-party reviews, e.g. from Google My Business or Yelp, you need to show a sample of those reviews on the page to the user. GMB even has an API which you can use to keep the list fresh, rolling through the most recent few reviews. So, in the page content we would show each review with its reviewBody, the rating number out of the scale used, along with the author. For each of those review samples, provide the schema markup and include the URL linking to the rating on the third-party site. For good measure, you can add an href link for the customer to follow to read more.

Here is what it would look like on your page:

 

Equally important, we would show the Aggregate Rating: its ratingCount (or reviewCount) plus the ratingValue. In plain text in this example, you can see 4.7/5 across 20 reviews. By showing this information you are giving the user all the information, in addition to informing Google via the schema.org markup. Add the URL to guide web users to the source of the ratings so there is no question as to where the data resides.
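As an illustrative sketch of the approach described above, here is how that markup might be generated. The business name, review text, URLs and figures below are hypothetical placeholders, not a real implementation:

```javascript
// Build LocalBusiness JSON-LD that mirrors the reviews shown on the page.
// All literal values (business name, review text, the 4.7/20 figures and
// the URLs) are illustrative assumptions only.
function buildLocalBusinessMarkup() {
  return {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Plumbing Co.",
    "aggregateRating": {
      "@type": "AggregateRating",
      "ratingValue": 4.7,
      "bestRating": 5,
      "reviewCount": 20,
      // Point back to the third-party source of the ratings.
      "url": "https://www.google.com/maps/place/example-plumbing"
    },
    "review": [{
      "@type": "Review",
      "reviewBody": "Fast, friendly service. Highly recommended.",
      "reviewRating": { "@type": "Rating", "ratingValue": 5, "bestRating": 5 },
      "author": { "@type": "Person", "name": "Jane D." },
      // Link each sample review to its rating on the third-party site.
      "url": "https://www.google.com/maps/contrib/example-review"
    }]
  };
}

// Serialize into the JSON-LD script block you would place in the page.
const jsonLd = JSON.stringify(buildLocalBusinessMarkup(), null, 2);
```

The key point is that every value in the markup is also visible on the page, so the structured data reflects what the user can actually read.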

Continuing with the guidelines:

  • "Provide review and/or rating information about a specific item, not about a category or a list of items." This is straightforward: match the aggregateRating up to the actual content. It does raise a point of clarification, though. The LocalBusiness you're marking up must be the primary topic of the third-party review site. You cannot use GMB ratings for all your Products, and one Facebook page rating should not be used for numerous subOrganizations plus the parent company. If its scope is too broad, don't use it. If it's too narrow – the rating is for a specific Product or Service – you should instead use that rating on the page that speaks to the Product or Service.
  • "No reviews are shown for adult-related products or services." These are against the guidelines.
  • "Single reviewer name needs to be valid": reviews need to be from real people.
  • "Ratings that don't use a 5-point scale" must include the scale of the rating with bestRating & worstRating.
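For that last point, a minimal sketch of declaring a non-5-point scale might look like this (the 8.6/10 figures are hypothetical):

```javascript
// An AggregateRating collected on a 10-point scale. bestRating and
// worstRating declare the scale explicitly so consumers can normalize
// it to stars. All values here are illustrative only.
const tenPointRating = {
  "@type": "AggregateRating",
  "ratingValue": 8.6,
  "bestRating": 10,
  "worstRating": 1,
  "ratingCount": 42
};
```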

Now, given all that, the misunderstanding for LocalBusiness Review Snippets for third party reviews comes from the LocalBusiness guidelines, in the second tab.

 


Here it says, "Google may display information from aggregate ratings markup in the Google Knowledge Cards. The following guidelines apply to review snippets in knowledge cards for local businesses:" The emphasis should be on Google Knowledge Cards here. This criterion applies to aggregate rating information in the right-hand-side Knowledge Cards, which display differently from the review rich results in the primary organic results. These are two different features; the requirement for knowledge cards is more strict, and SEOs commonly misattribute the restriction to both features.

Basically, the rule is: if you are using third-party reviews, you are ONLY eligible for review rich results, not the star rating in Knowledge Cards. You still have to comply with the other guidelines described above, but it's totally feasible.

I can also attest to this working for several structured data manual penalties we have fixed for clients. In each case, we updated the content to show the reviews, linked to the sources, and double-checked that the third-party reviews were actually about the business.

Given the interest in the topic, I'm surprised by the number of people in the SEO industry who continue to say you cannot use third-party ratings for LocalBusiness. Hopefully, the word gets out and we can all move forward with well-presented, well-sourced and representative schema markup. It would also be nice if Google tweaked the language in the docs.

Reflect in Schema Markup


Product Example

As another example, we have our WordPress Plugin reviews, which get a rich result in search and validate in the Structured Data Testing Tool (SDTT).

Schema app WordPress plugin in Google

 

Sample reviews of Schema App's WordPress Plugin

 

Schema App plugin validates in Structured Data Testing Tool

 

Schema Markup on the page is about the product and has the nested Aggregate Rating schema markup that reflects what is on the page.


 

 


Interview with Schema App Creator Mark van Berkel: Schema at Scale Now and in the Future https://www.schemaapp.com/schema-app-news/interview-with-schema-app-creator-mark-van-berkel-schema-at-scale-now-and-in-the-future/ https://www.schemaapp.com/schema-app-news/interview-with-schema-app-creator-mark-van-berkel-schema-at-scale-now-and-in-the-future/#respond Fri, 06 Jul 2018 19:43:33 +0000 https://www.schemaapp.com/?p=6765 Martha: Hi and welcome to schemas stories. My name is Martha van Berkel and here we interview people who are thought leaders in the schema markup and structured data world and today I’m absolutely delighted to welcome my co-founder Mark van Berkel. Welcome Mark. Mark: Hi Martha. Martha: So, let’s kick off and talk a...

The post Interview with Schema App Creator Mark van Berkel: Schema at Scale Now and in the Future appeared first on Schema App Solutions.


Martha: Hi and welcome to Schema Stories. My name is Martha van Berkel, and here we interview thought leaders in the schema markup and structured data world. Today I'm absolutely delighted to welcome my co-founder, Mark van Berkel. Welcome, Mark.

Mark: Hi Martha.

Martha: So, let's kick off. Can you start by just telling us a little bit about yourself, a little bit about your background?

Mark: Sure. So, I started off as a developer and spent a few years doing custom development. In 2005 I started a Master of Engineering, where I began to learn about semantic technologies way back in the day – that was 13 years ago – and did a proof of concept for SP research labs, which was really interesting and kind of whet my appetite for getting into it. But we were a little early, even in those days, with the tools that were available. I spent a few more years doing consulting and other kinds of IT projects, and then in 2012 started Hunch Manifest and started rolling through some different ideas, eventually landing on the Schema App idea. So, yeah, I guess my background: developer first, then technical team lead and architecture, really interested in information architecture especially, and the interplay between semantic technologies and the rest of the world.

Martha: Fantastic and so tell us why did you build schema app? Where did that come from?

Mark: Good question. Well, while we started the business in 2012, we hadn't yet determined what we were going to actually build and sell, and what was going to take off, so we had tried a couple of ideas. One of those was in 2013: I had built a little gadget which would help some of our marketing clients get found on the web, and it was specifically very narrow. We were looking at home and construction businesses – this was back in 2013 – so we were trying to create kind of templates for schema markup and put them in the hands of those people who needed help getting found online. So, those were very early days and it was just kind of an interest of mine. But it was about a year later, when I was at the SEMTECH BIZ conference in San Francisco, where there were a few different thought leaders at the intersection of semantic technology and SEO. I believe J Myers from Best Buy was there, Barbara Starr, and some other person on the panel whom I'm forgetting at this moment, but it was an interesting intersection because they were even articulating how few people there were that lived at that intersection. So, coming from the semantic technology background, I was like, "Oh well, this is something that's very interesting to me and something that we can work on." It was from that I introduced a JSON-LD schema generator. This was in 2014, but it was still a bit premature. We were at a conference later, probably a year after that first one, where I met Aaron Bradley, and he was like, "Oh, that's a great little tool," but he wondered whether Google was using it to reward companies with Rich Snippets, and the answer at that point was, "No, I haven't actually seen evidence of that yet." So, we were a little early with the JSON-LD product, but then Google did support it, and it was like, "Okay well, it's game on.
Let's really get this schema product and schema vocabulary into the hands of a lot more people." For me it was just: how can I make it a lot easier for adoption? For marketers who maybe don't have the IT resources to build it into their site, how do we start getting some robust schema markup into their hands? And then also for the experts: how do we enable them to do the more complex things, but at a scale that is otherwise quite difficult to achieve? To us, that's really where we're providing a lot of value and where we continue to build out our products.

Martha: Now, you kind of skipped over the fact that you also built it for yourself to use, you know, when we were a digital marketing agency. Do you want to talk a little bit about the challenges when you first started doing schema markup, and then we'll transition to talk about some of the biggest challenges of doing it at scale?

Mark: Yeah. So, we had a period of roughly two years where we were a marketing agency providing SEO and email marketing strategies and services, and this was something that we often recommended clients take a look at. We did want to include schema markup among their tactics, and so for me it was kind of a productivity tool: how can I generate the schema markup a little quicker? At the time, there were no generators out there – well, there was one actually, by Raven Tools, but it was very limited in its scope. So, I wanted something for the whole vocabulary, so I could better describe all those businesses that we were doing marketing services for: describing all their services and all their products with their additional attributes and properties. All those things are where the juicy details live, where you can actually really articulate something, and again Google was moving quickly – they were introducing new features. So, for us it was a productivity tool: how can I generate it and then also maintain it? That's a big part that I think is overlooked. It's great to generate and copy-and-paste code in, but what happens next month when the rules change or the vocabulary changes? What do you do to go back and update all that stuff? I'm not really interested in doing a lot of maintenance, as my co-founder can attest, in terms of doing work over and over again. I'm really happy to figure out how to solve it once for a bunch of people, so that with this database-driven generating tool we can actually query the data, update the data on the fly, and make the maintenance a breeze rather than a pain that gets overlooked.

Martha: You talked a lot about maintenance being one of the big challenges when you're doing schema markup in more and more detail, and at scale. Can you talk a little bit more about other challenges you've seen working with global clients doing schema markup at scale, and perhaps how you've overcome those challenges?

Mark: So, I guess primarily there's a bit of a divide between marketing and IT. Marketing wants to adopt all these tools and features so that they can get the latest from Google, and yet IT wants to be in control of the technology. They want to be the ones calling the shots and implementing, and sometimes there's a bit of tension between the desire to go fast and maintaining the stability of the system, which IT does often and does well. So there's a bit of this tension, and one thing that we often deal with is how to speak to the IT team about how they still play a role among the different ways in which they may want to think about this. There's a whole lifecycle approach to this – there are probably four different steps to the lifecycle of your schema markup. First, there's determining your strategy: what are the things you're going to mark up, and what are the content things you may want to adjust in your HTML? Secondly, there's the generating part: how do you actually map the data that you have into the schema markup? Then there's a measurement aspect, the third step: how do you know, and how do you measure, that it's been implemented? How do you know the depth or breadth of your content and schema on your website? And then there's the reporting and leveraging of all this: once you have all this schema markup, you can repurpose it for analytics or repurpose it for chatbots. There are so many things you can add on, and it's this kind of expansion opportunity that IT doesn't look at. So, IT looks at the second step.
Marketing might look at the first step, so marketing might say, "Okay, here's what we want to mark up," and then IT says, "Okay, well, here's your generated code," but then they wave their hands and they're done. So, you know, how does marketing keep on top of the quality of that content – is it meeting the needs they want, and are they actually repurposing it to make the best use of all this rich information? So that's, I guess, a pattern we see over and over: this kind of limited thinking in terms of the lifecycle. But more specifically, microdata has for a long time been the method of choice. Adding properties within HTML elements has been a great way of templating those repeated data items within a web page, but the challenge with that has been, again, the maintenance. Designers might go in and adjust the display of an image and forget to retain the itemprop, and there goes your image feature for Google. And while marketing may also be wanting to get things in there, the interplay of the developer, the designer and the marketing team means there are too many hands in the pot with microdata, so it also has some challenges with all of those changing requirements and busy landscapes. In simpler, smaller teams maybe that's not a challenge – maybe those roles are all one and the same person and everything's hunky-dory. JavaScript implementations have also become quite popular in the last year, but they're not a golden bullet either, I would say. Basically, with JavaScript there are a couple of different approaches you could use.
I think predominantly they use data scraping: you use JavaScript to inspect elements on the page – some ID or some class within the HTML that's unique to the page – you grab that, and then you pull it in as the name of the product or the name of the article, or whatever the case may be. But again, this is sensitive to changes in design. If you have a team that's a bit distributed, and either the designers have adjusted something again, or perhaps there's a third party on that site which does reviews and puts in widgets for all the reviews it has generated, or the aggregate score – if they change something, that can throw off your JavaScript. So, again, there's quite a bit of maintenance to it; that's one of the challenges with JavaScript. And then there's also the moody Structured Data Testing Tool that, month to month, maybe works and maybe doesn't, so you have to really know whether or not your JavaScript implementation is valid. Check it to see if it's compliant with Googlebot – that's Chrome version 41 – making sure there's no JavaScript in there that's going to have a problem with that. That helps assure you that Google is actually going to pick it up. And then there are the other consumers who don't see it: Google is great at supporting rendered JavaScript, but some of the other consumers do not provide that level of JavaScript support. So, this is also one of the challenges with JavaScript, but it's often great for, let's say, bootstrapping things, because if you have a tag manager it's pretty easy to set up and pretty quick to get done. The other solution is maybe templated JSON-LD. If you have some sort of template for products or for articles, then you may want to have the developer create schema markup in a JSON-LD block.
At least there you're taking the design of the page out of it; you just have that information layer translated into schema markup, and you maybe have the marketing person asking for changes in that schema markup. That can take a couple of weeks, though, or months, before you have those cycles where the development team actually gets the changes implemented. Also, as a developer myself, I had to learn the lesson that I'm not a schema expert. Even asking a developer to just put schema markup in there doesn't mean it's going to be right – and it probably won't be, unless they have also had their hand in the schema markup pot for a while and understand some of the best practices.
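The data-scraping approach Mark describes can be sketched as a tag-manager snippet. The CSS selectors, currency, and page structure here are assumptions for illustration, not any real site's markup:

```javascript
// Hypothetical tag-manager snippet: scrape the product name and price from
// page elements, build JSON-LD, and inject it. The ".product-title" and
// ".product-price" selectors and the USD currency are assumptions. As noted
// above, this breaks if a designer renames the classes it depends on.
function injectProductMarkup(doc) {
  const name = doc.querySelector(".product-title");
  const price = doc.querySelector(".product-price");
  if (!name || !price) return null; // design changed: fail quietly

  const markup = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": name.textContent.trim(),
    "offers": {
      "@type": "Offer",
      "price": price.textContent.replace(/[^0-9.]/g, ""),
      "priceCurrency": "USD" // assumption: currency is not on the page
    }
  };

  // Inject the generated JSON-LD into the document head.
  const script = doc.createElement("script");
  script.type = "application/ld+json";
  script.textContent = JSON.stringify(markup);
  doc.head.appendChild(script);
  return markup;
}
```

The early return is the fragile part: if the selectors stop matching after a redesign, the markup silently disappears, which is exactly the maintenance risk described above.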

Martha: And so, I'll just interrupt you here to ask: does Schema App solve some of those problems? Was part of the reason you built it to try to take the best of those worlds? Can you speak briefly about that, since I also want to talk about schema ownership as another topic?

Mark: Yeah. So, yes, we solve a number of those things, especially the IT challenge. We provide the marketing team the tools to mark up these things without having to touch a line of code, so the time to change is much faster. They're able to deploy markup on a whole site within a couple of hours, even for big sites, if it's templated. So, yeah, our highlighter does quite a bit of the work for these large sites, and our editor is quite robust as well at creating the detailed markup that they may want to do. We also provide some of the add-ons that bootstrap you up to a minimum level of quality for all your schema.org vocabulary on your site: for WordPress you can install our plugin, or for Shopify our add-on, and things like that. That gets you to, I'll say, table stakes, and after that we want you to optimize even further yet. So, we give them that flexibility. It's not bulletproof by any stretch of the imagination – maybe the designs could have an impact on how we roll out our schema markup – but as long as it's in the single hands, or single control, of one person, then I think it's at least something that can be solved.

Martha: Can you talk a little bit about schema ownership? This is something I think is starting to evolve in the market, as we see datacommons.org come out and talk about claim reviews, and even some of the releases we saw today about how you can have your site’s URLs crawled for job postings, or tell Google which sites have job postings. Can you talk a little bit about schema ownership, how you see it today, and where you see it going?

Mark: Yeah. The recent stuff is all around sdPublisher and sdLicense. “SD” stands for structured data, so structured data publisher and structured data license. Those are attributes you can put into your schema markup to say, “I’m the owner, and I license anybody under the Creative Commons XYZ license to use this data.” This is definitely useful for instructing consumers like Googlebot or Bing or Yandex or Schema App, giving them the instructions for how the data may be used. I think that would allow you to open-source your data, so if you want it to be shared through the Data Commons, you can include that publishing license to say, yes, put all my data, or certain parts of my data, into this broader web repository. It’s a very practical step for actually acknowledging the license under which you’re sharing this information. So, there’s that kind of ownership of the data, but even within an organization I think there are questions around who has the responsibility for the schema markup. Is there a data architecture team, or is it a marketing function? Who owns schema quality, and even schema quantity? Who’s responsible site-wide, or is it segregated by sub-site? I don’t know if people really have a clear sense of who takes ownership of that. In some businesses I think it often falls to marketing, but it’s not really a question we hear very often.

Martha: Very cool! Thanks for sharing. So, one last question and we’ll be out of time. Where do you see this evolving? Where do you see this going over the next, I’ll say, one, two, maybe five years?

Mark: So, where are things going? Good question. As a segue from the last one, Data Commons is an interesting possibility. Today it has the ClaimReview data, and you can download all this schema markup from the Data Commons, then use it to do some analysis to determine whether or not claim reviews should be true when they’re actually stated as false, and build some interesting add-ons from that. I’d say this is probably an early signal of what’s to come. I really do think the opportunity is hidden in these kinds of add-ons and other ways you can look at your data. I have a list of things I’m thinking about, or things I think we’re starting to see emerge, and some of them aren’t even new, just reimagined ways of doing things, because now you have a common vocabulary for a lot of things across a lot of sites. There’s been talk for a couple of years about augmenting your analytics data with your business information, or semantic analytics: how do you segment the data for the blog posts on your site by author, or by tag, or by category, to provide additional insight into which types of content perform best? I still don’t think that has enough legs yet, so it’s got a long way to run, because it’s still kind of difficult to get set up. What else? If you have a knowledge graph, let’s say you’ve done a good job with the schema.org markup on your site, what else can you do? You could think of it as a data repository for informing a chatbot, for instance; I think that would be a very common one that people can look at. There are lots of interesting Actions in the vocabulary. So, maybe there’s a potential Action to read a white paper.
For instance, if you have all your white papers, those downloadable things, in the vocabulary, then you can provide that to some sort of chatbot, or translate it for other consumers of that list of white papers. It could be other forms too, like contact forms; there are a bunch of different Actions, like view actions and watch actions, for different tools. We see a bit of the Google Assistant stuff helping you link up with podcasts, where you can play a podcast based on the schema markup, so why couldn’t it be a TV show or a VideoObject? And why does it have to be limited to Google, when Alexa is also coming around doing similar things? Or it could be your own experience: you could repurpose that same information into your own internal assistant services or chatbots, maybe through the Facebook chatbot infrastructure. Other things I’ve thought about are different browser extensions, ways in which consumers can just leverage the schema markup, or maybe on-site search, or AdWords custom targeting. I personally haven’t seen too much of that implemented, but I’ve heard people talking about it, and I think it’s an interesting opportunity.
Otherwise, I should just mention the schema.org community generally: I’m sure that vocabulary is going to continue to expand. We continue to see two or three releases per year, so we’re going to see more and more specific classes and enumerations, and maybe more extensions like GS1’s, or maybe we’ll actually get a definition of Google’s own extension for the things they keep putting out. I think that can also be motivated by these other add-ons. Say the strategy is driven by this kind of augmented analytics: knowing which content articles perform the best can inform your content strategy, and maybe there’s a way to segment the data that you could expose with schema.org but that isn’t in the vocabulary, so you may want to create your own extension. I think that’s got a long way to go, but this is a good place to start. Otherwise, I’d just add one more thing: Google’s adding so many interesting things lately that it’s going to be a lot of fun to see where they continue to expand their feature set.

Martha: Lots to think about. I feel like we could have a whole podcast or interview on each of those add-ons, exploring how else you can use your schema markup. So, thank you, Mark, so much for joining us today. If people want to find you online, where should they look?

Mark: So, I’m semi-active in a number of communities, including the Google+ Semantic Search Marketing group from Aaron and Jarno, so I participate there. I’m also on Twitter @vberkel, and on LinkedIn as Mark van Berkel, pretty easy to find, and through schemaapp.com, our main website, where somewhere in there you’ll find my handiwork as well.

Martha: Yeah, Mark at schemaapp.com, it’s easy to find him. Thank you, Mark, for joining us today, for helping us understand where this is evolving to, and for sharing where schema started. Thank you for joining us, and have a great day.

At Schema App, one of our core values is to always be learning and teaching. That’s why we love talking with other structured data experts!

Are you ready to unleash the power of structured data?

 

The post Interview with Schema App Creator Mark van Berkel: Schema at Scale Now and in the Future appeared first on Schema App Solutions.

Additive Schema.org Data for Local Inventory Advertising https://www.schemaapp.com/schema-markup/additive-schema-org-data-local-inventory-advertising/ https://www.schemaapp.com/schema-markup/additive-schema-org-data-local-inventory-advertising/#respond Wed, 08 Nov 2017 15:38:43 +0000 https://www.schemaapp.com/?p=6019 Scenario We recently had a project that started as a National Retailer wanted to pilot Google’s Local Inventory Advertising (LIA) program. The Advertising program bridges the online and offline world. The retailer, which has 500+ stores nationwide, could advertise products that are in stock locally. If you search for “Bauer Excaliber hockey skates” you would...

The post Additive Schema.org Data for Local Inventory Advertising appeared first on Schema App Solutions.

Scenario

We recently had a project that started when a national retailer wanted to pilot Google’s Local Inventory Advertising (LIA) program. The advertising program bridges the online and offline worlds. The retailer, which has 500+ stores nationwide, could advertise products that are in stock locally. If you search for “Bauer Excaliber hockey skates”, you would not only get an advertisement from the retailer, but also see whether it’s schema.org/InStock at a location, and the distance to that store. It’s a powerful conversion opportunity: by exploiting the retailer’s vast network of stores and inventory, consumers get actionable information to go pick the product up physically.

Local Inventory Ads Example

Part of the conditions of the LIA program is to have schema.org data on Product detail pages at a level that details the availability in each store. For each schema.org/Product and its schema.org/ProductModel, we need to expose their schema.org/Offer and its schema.org/availableAtOrFrom StoreID.

Approaches considered

Like many large businesses, the Marketing team has numerous initiatives dependent on IT for delivery. They had an aggressive timeline of 8-10 weeks. When we started, we provided them with several schema.org-at-scale options to consider. To maximize the number of consumers of the schema.org data, we wanted to render it server side, so that it is available in the HTML Document Object Model on page load.

  1. Custom Programming: Typically this meant encoding data into the Product Detail Page template, using either Microdata or JSON-LD. This approach, however, relies heavily on the IT team, which couldn’t meet the aggressive schedule due to other commitments.
  2. Bulk Data Transformation: Our next best option was to implement a bulk data transformation, which would take the same Google AdWords feed and supplement it with BazaarVoice review data.
  3. JavaScript Rendering: The third approach was to implement JavaScript which builds the JSON-LD dynamically after page load.

Ultimately, we chose #3, JavaScript Rendering, because it involved no IT, it could meet Google’s LIA requirements in a short time, and the data/code remained within the control of the business.

Problems with SDTT & GTM Datalayer

The initial implementation by the team was to provide schema.org data mapped from Google Tag Manager data layer variables. The data layer had product information, Offers for the different SKUs at the selected store, and the BazaarVoice review data (AggregateRating). The only catch was that this data, while convenient, was only available after 5-10 seconds, because of all the JavaScript on the page preparing it. To add complexity, the retailer has different pricing for different stores. For example, rural stores, which incur more shipping costs, could have higher prices than urban centers. Therefore, each page load first needs to geolocate the user and suggest the closest store before the prices can be shown. Similarly, the BazaarVoice element loads using JavaScript, and its data arrives after a few seconds as well.

When we tested the schema.org data, we found the <script type="application/ld+json"> element when we inspected the HTML. However, when we tested with Google’s Structured Data Testing Tool (SDTT), the tool, which otherwise does a good job processing JavaScript, simply cut off the page after several seconds of JavaScript processing and wouldn’t show the data.

The Additive Experiment method

We set up a workshop to go through, step by step, testing the limits of what could be done. The relevant product information was spread throughout the webpage, so we wanted to explore the extent of what was possible. For this to be successful, we relied heavily on Google’s ability to reconcile JSON-LD data by its @id values. For example, the two sample JSON-LD inputs below:

{
  "@context": "http://schema.org/",
  "@type": "Product",
  "@id": "#product",
  "name": "Blue Widget"
}

{
  "@context": "http://schema.org/",
  "@type": "Product",
  "@id": "#product",
  "image": "http://cdn.amazon.com/TheEnterprise/product_blue_widget.jpeg"
}

would be reconciled by Google as:

{
  "@context": "http://schema.org/",
  "@type": "Product",
  "@id": "#product",
  "name":"Blue Widget",
  "image": "http://cdn.amazon.com/TheEnterprise/product_blue_widget.jpeg"
}

Therefore, we would deconstruct the data into logical parts; the additive data elements available to the SDTT at the time it cuts off would reveal the problematic data by what’s not shown. The first element is the basic product data available in the HTML DOM that comes from the server, before any JavaScript runs. When viewing a webpage, if you right-click and View Page Source, that HTML is what comes from the server. This data is immediately available to JavaScript for parsing and could be published immediately. There, we found the product name, one image in the meta og:image, and the product description. We coded some JavaScript selectors, e.g. document.querySelector, to build out a basic Product schema JSON-LD data block. As an example, to get the product image URL:

document.querySelector('meta[property="og:image"]').content

We implemented this as a GTM tag that triggers on DOM Ready and published it, ready for our first live test.
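
Pulled together, a tag along these lines could build and inject that first data block. This is a sketch, not the retailer’s actual tag: the helper names and the fallback selectors (document.title, the description meta) are assumptions for illustration, and GTM Custom HTML tags expect ES5, hence `var`.

```javascript
// Build the basic Product object, skipping missing fields so the later
// additive blocks (Offer, AggregateRating) can fill them in by @id.
function buildProductJsonLd(fields) {
  var data = { "@context": "http://schema.org/", "@type": "Product", "@id": "#product" };
  if (fields.name) { data.name = fields.name; }
  if (fields.description) { data.description = fields.description; }
  if (fields.image) { data.image = fields.image; }
  return data;
}

// Serialize the object into a <script type="application/ld+json"> element.
function injectJsonLd(doc, data) {
  var script = doc.createElement("script");
  script.type = "application/ld+json";
  script.text = JSON.stringify(data);
  doc.head.appendChild(script);
}

// Runs when the tag fires on DOM Ready (guarded so the helpers can be
// exercised outside a browser as well):
if (typeof document !== "undefined") {
  var imageMeta = document.querySelector('meta[property="og:image"]');
  var descMeta = document.querySelector('meta[name="description"]');
  injectJsonLd(document, buildProductJsonLd({
    name: document.title,
    description: descMeta ? descMeta.content : null,
    image: imageMeta ? imageMeta.content : null
  }));
}
```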

Testing & Validation

When testing on Google we found the data was being discovered.

{
  "@context": "http://schema.org/",
  "@type": "Product",
  "@id": "#product",
  "name": "Blue Widget",
  "description": "Lots of great features!",
  "image": "http://cdn.amazon.com/TheEnterprise/product_blue_widget.jpeg"
}

While it was a good first step, we were still missing necessary aspects, including Offers and Reviews. Our next attempt was to retrieve the Offers. We looked backward from the data layer to see how it was populated, and discovered a custom JavaScript event that published the Offers once a store was selected. We set up the trigger, and once we found the pricing and availability information, we were able to add the Offer. We appended this JSON-LD data block by referring back to the #product defined earlier, using the additive information.

{
  "@context": "http://schema.org/",
  "@type" :"Product",
  "@id": "#product",
  "offers": {
    "@type": "Offer",
    "price": 45.00,
    "priceCurrency": "USD",
    "availability": "http://schema.org/InStock"
  }
}

If the store is selected, we could include schema.org/availableAtOrFrom: “StoreID123”. This StoreID has to match those in the Google Business Profile (GBP) account. These are unique identifiers the retailer assigns to its stores, reused both for LIA and in the schema.org markup.
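
The event-driven tag could look roughly like the sketch below. The event name "storeOffersLoaded" and its payload shape are hypothetical, standing in for whatever custom event the page actually fires once store pricing resolves; because the second block shares @id "#product", it merges with the first.

```javascript
// Build only the Offer fragment; the shared @id ties it back to the
// Product block emitted earlier.
function buildOfferJsonLd(payload) {
  var offer = {
    "@type": "Offer",
    price: payload.price,
    priceCurrency: payload.currency,
    availability: payload.availability
  };
  if (payload.storeId) {
    // Must match the store code in the Google Business Profile account.
    offer.availableAtOrFrom = payload.storeId;
  }
  return {
    "@context": "http://schema.org/",
    "@type": "Product",
    "@id": "#product",
    offers: offer
  };
}

// Emit the additive block once the page's (hypothetical) pricing event fires.
if (typeof document !== "undefined") {
  document.addEventListener("storeOffersLoaded", function (e) {
    var script = document.createElement("script");
    script.type = "application/ld+json";
    script.text = JSON.stringify(buildOfferJsonLd(e.detail));
    document.head.appendChild(script);
  });
}
```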

This Offer data works well for Products with no variants, or for multi-variant Products when a ProductModel (SKU) is selected. In scenarios where a Product has multiple variants/SKUs, instead of an Offer we would show an AggregateOffer with a lowPrice.

Rich Snippet Using AggregateOffer

When testing this data block, we were less certain that the data would be available in time for the SDTT’s JavaScript rendering. In this test it was, and the data automatically merged into the earlier data block.

Next, we wanted to get the BazaarVoice data for the AggregateRating. We found the JavaScript code that loads BazaarVoice and looked for a way to hook into the function. Because the BazaarVoice JavaScript was included in the page-load DOM, our GTM tag loaded too late to mutate the JavaScript function. We instead attached a JavaScript change listener to the review DIV element. When it changed, we would get the rating count and rating value from the DOM elements’ innerHTML. We could then emit a third JSON-LD data block and tie it in with the earlier data using the same @id: "#product".

{
  "@context": "http://schema.org/",
  "@type": "Product",
  "@id": "#product",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingCount": 12,
    "ratingValue": 4.6
  }
}
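
A sketch of that third tag follows. Where the text describes a change listener on the review DIV, a MutationObserver is a common way to watch for that kind of DOM change; the container ID and rating-element class names below are placeholders, since BazaarVoice’s actual markup varies by theme.

```javascript
// Build only the AggregateRating fragment, again keyed to "#product".
function buildRatingJsonLd(ratingValue, ratingCount) {
  return {
    "@context": "http://schema.org/",
    "@type": "Product",
    "@id": "#product",
    aggregateRating: {
      "@type": "AggregateRating",
      ratingValue: ratingValue,
      ratingCount: ratingCount
    }
  };
}

// Watch the review container until BazaarVoice injects the rating elements,
// then emit the block once and stop observing.
if (typeof document !== "undefined" && typeof MutationObserver !== "undefined") {
  var target = document.querySelector("#BVRRContainer"); // placeholder ID
  if (target) {
    var observer = new MutationObserver(function () {
      var valueEl = target.querySelector(".bv-rating-value"); // placeholder class
      var countEl = target.querySelector(".bv-rating-count"); // placeholder class
      if (valueEl && countEl) {
        var script = document.createElement("script");
        script.type = "application/ld+json";
        script.text = JSON.stringify(buildRatingJsonLd(
          parseFloat(valueEl.innerHTML),
          parseInt(countEl.innerHTML, 10)
        ));
        document.head.appendChild(script);
        observer.disconnect(); // emit the block only once
      }
    });
    observer.observe(target, { childList: true, subtree: true });
  }
}
```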

When we published the tags and ran the page through Google, the data was available in time, and Google rendered the compiled JSON-LD data blocks. The Testing Tool also showed us the Preview button and the Product rich result. Had the JavaScript had too many delays and missed the cutoff, we would have known which data source was the culprit. A decision would then be made whether to include only what was discoverable; for the data that doesn’t show, we would revisit the JavaScript event model to discover alternative methods.

SDTT != Search Console report. In our experience, if schema data shows in the Structured Data Testing Tool, you will find it in the Search Console Structured Data report after 3-5 days (unless it’s a large site and the pages haven’t yet been reindexed). The opposite, however, isn’t always true. There have been times where we have not seen data in the SDTT but do find it in Search Console. So, even if you’re at wit’s end trying to get the data to show in the SDTT, I would leave the GTM tag in place until the Search Console report is confirmed.

Future Work

I really like the additive capability of composing schema.org markup data block by data block. Sometimes all the information isn’t available at runtime, and to meet the needs of an early adopter you need to experiment to make it work. In the future, our work with the client will involve adding how-to videos, cross-sell products, upsell products, related links, and additional images, all using the additive method.

Of course, this is all great for the retailer: they now get Google rich snippets and LIA in time for the holiday shopping season. Other schema.org consumers, however, are less fortunate, as they less often handle JavaScript rendering the way Google does. For those consumers, we need to include the data from the server as part of the HTML sent back to the client. That is a business decision, however: the additional cost of the IT project to implement it must be weighed against the provable ROI of supporting other data consumers.

If you need a hand getting started with your structured data strategy, we’ve helped customers such as SAP and Keen Footwear drive more quality search traffic to their websites. 

Start reaching your online business goals with structured data.

 


How Does Google Reward You for Using JSON-LD? https://www.schemaapp.com/schema-markup/how-does-google-reward-you-for-using-json-ld/ https://www.schemaapp.com/schema-markup/how-does-google-reward-you-for-using-json-ld/#respond Thu, 10 Nov 2016 02:33:16 +0000 https://www.schemaapp.com/?p=4722 Schema markup is most easily represented in JSON-LD format. In fact, it is the format which Google recommends using when adding schema.org markup to your website. Additionally, many of Google’s search results page features (including rich snippets and Knowledge Graph cards) are enabled by JSON-LD markup. Using JSON-LD makes your content eligible to be presented...

The post How Does Google Reward You for Using JSON-LD? appeared first on Schema App Solutions.

Schema markup is most easily represented in JSON-LD format. In fact, it is the format Google recommends using when adding schema.org markup to your website. Additionally, many of Google’s search results page features (including rich snippets and Knowledge Graph cards) are enabled by JSON-LD markup. Using JSON-LD makes your content eligible to be presented in these creative ways, but it does not guarantee that Google will present your content this way. That being said, since Google recommends JSON-LD, it is your best choice of structured data format.

Google Features Enabled by JSON-LD

  • Rich Results: Schema markup for things like recipes, articles, and videos usually appears in the form of rich cards, as either a single element or a list of items. Other kinds of schema markup can enhance the appearance of your site in Search, such as Breadcrumbs or a Sitelinks search box. A sitelinks box is usually found underneath a website’s main webpage result; it shows popular pages on the site as well as an in-site search box.
  • Product Reviews: If you’re a merchant, you can give Google detailed product information that they can use to display rich results. This can include adding information about product ratings from your website. Google may then take this rating information and display it within the search engine results page.
  • Knowledge Graph Cards: If your website is reputable enough to be considered an authority on a subject, Google may treat the content on your site as factual and import it into the Knowledge Graph, where it can power prominent answers in Search and across Google platforms. This is most easily displayed when you search for definitions of complex concepts such as “structured data”. Knowledge Graph cards appear for authoritative data about organizations and events. As you can see in this example, the knowledge card appears on the right side of the search engine results page when you search for “Home Depot Company”. Simply searching “Home Depot” usually gives you a knowledge card of the location nearest you, and not necessarily a company overview.
  • Actions in Gmail: JSON-LD markup can be used to enhance interaction with your customers directly in Gmail. It presents users with call-to-actions such as an event RSVP, subscription renewals, social media actions, etc. It can even prompt a product review that can be written without leaving Gmail. These actions enhance user experience by integrating some of the most common actions in one place, creating a more engaging experience for new and existing customers.
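
As an illustration of the detailed product information mentioned above, a merchant page might embed a block like the following inside a `<script type="application/ld+json">` tag. The product name and values here are invented, purely for illustration:

```json
{
  "@context": "http://schema.org/",
  "@type": "Product",
  "name": "Blue Widget",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": 4.6,
    "reviewCount": 12
  },
  "offers": {
    "@type": "Offer",
    "price": 45.00,
    "priceCurrency": "USD",
    "availability": "http://schema.org/InStock"
  }
}
```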

 

JSON-LD is very useful; however, it is very difficult and time-consuming to create your markup manually. Schema App is the most powerful JSON-LD creation tool in the world, allowing you to quickly and easily mark up entire websites using the full schema.org vocabulary. Other major search engines like Yahoo, Bing, and Yandex have features similar to those listed above, which can all be enabled by implementing JSON-LD markup. Schema App is your one-stop shop for JSON-LD creation and deployment, making it the simple solution for your website’s search result needs.

If you need a hand getting started with your structured data strategy, we’ve helped customers such as SAP and Keen Footwear drive more quality search traffic to their websites. 

Start reaching your online business goals with structured data.

 

