[[{"@type":["BlogPosting"],"@id":"https:\/\/www.schemaapp.com\/schema-markup\/how-to-leverage-your-content-knowledge-graph-for-llms-like-chatgpt\/#BlogPosting","@context":{"@vocab":"http:\/\/schema.org\/","kg":"http:\/\/g.co\/kg"},"url":"https:\/\/www.schemaapp.com\/schema-markup\/how-to-leverage-your-content-knowledge-graph-for-llms-like-chatgpt\/","publisher":[{"@id":"https:\/\/www.schemaapp.com\/#Organization"}],"audience":"https:\/\/schema.org\/PeopleAudience","inLanguage":[{"@type":"Language","@id":"https:\/\/www.schemaapp.com\/schema-markup\/how-to-leverage-your-content-knowledge-graph-for-llms-like-chatgpt\/#BlogPosting_inLanguage_Language","name":"English"}],"mentions":[{"@id":"https:\/\/www.schemaapp.com\/entity#Thing3"},{"@id":"https:\/\/www.schemaapp.com\/entity#Thing14"},{"@id":"https:\/\/www.schemaapp.com\/entity#Thing13"},{"@id":"https:\/\/www.schemaapp.com\/entity#Thing5"}],"dateModified":"2024-08-13T18:38:49+00:00","headline":"How to Leverage Your Content Knowledge Graph for LLMs Like ChatGPT","datePublished":"2023-07-04T16:59:54+00:00","image":[{"@type":"ImageObject","@id":"https:\/\/www.schemaapp.com\/schema-markup\/how-to-leverage-your-content-knowledge-graph-for-llms-like-chatgpt\/#BlogPosting_image_ImageObject","url":"https:\/\/www.schemaapp.com\/wp-content\/uploads\/2023\/07\/How-to-Leverage-Your-Content-Knowledge-Graph-for-LLMs-Like-ChatGPT-1.png"}],"mainEntityOfPage":"https:\/\/www.schemaapp.com\/schema-markup\/how-to-leverage-your-content-knowledge-graph-for-llms-like-chatgpt\/","name":"How to Leverage Your Content Knowledge Graph for LLMs Like ChatGPT","articleBody":"It\u2019s no secret that the AI revolution is well underway. According to a report by Accenture, 42% of companies want to make a large investment in ChatGPT in 2023.\nMost organizations are trying to stay competitive by embracing the AI changes in the market and identifying ways to leverage \u201coff-the-shelf\u201d Large Language Models (LLMs) to optimize tasks and automate business processes.\nHowever, as the adoption of generative AI accelerates, companies will need to fine-tune their Large Language Models (LLM) using their own data sets to maximize the value of the technology and address their unique needs. There is an opportunity for organizations to leverage their content Knowledge Graphs to accelerate their AI initiatives and get SEO benefits at the same time.\nWhat is an LLM? \nA Large Language Model (LLM) is a type of generative artificial intelligence (AI) that relies on deep learning and massive data sets to understand, summarize, translate, predict and generate new content.\nLLMs are most commonly used in natural language processing (NLP) applications like ChatGPT, where users can input a query in natural language and generate a response. Businesses can utilize these LLM-powered tools internally to provide employees with Q&A support or externally to deliver a better customer experience.\nDespite the efficiency and benefits it offers, however, LLMs also have their challenges.\nLLMs are known for their tendencies to \u2018hallucinate\u2019 and produce erroneous outputs that are not grounded in the training data or based on misinterpretations of the input prompt. They are expensive to train and run, hard to audit and explain, and often provide inconsistent answers.\nThankfully, you can use knowledge graphs to help mitigate some of these issues and provide structured and reliable information for the LLMs to use.\nWhat is a Knowledge Graph?\nGartner\u2019s \u201c30 Emerging Technologies That Will Guide Your Business Decisions\u201d report, published in February 2024, highlighted Generative AI and Knowledge Graphs as critical emerging technologies companies should invest in within the next 0-1 years. \nA Knowledge Graph is a collection of relationships between things defined using a standardized vocabulary, from which new knowledge can be gained through inferencing. When knowledge is organized in a structured format, it enables efficiencies in the retrieval of information and improves accuracy.\nFor instance, most organizations have websites that contain extensive information about the business, such as its products and services, locations, blogs, events, case studies, and more. However, the information is unstructured, because it exists as text on the website.\nYou can use Structured Data, also known as Schema Markup, to describe the content and entities on each page, as well as the relationships between these entities across your site and beyond. Implementing semantic Schema Markup can:\n\nHelp search engines better understand and contextualize your content, thereby providing users with more relevant results on the SERP\nHelp your organization develop a reusable content knowledge graph. This graph can provide valuable structured information to enhance your business\u2019s capabilities with LLMs.\n\nLearn the fundamentals of Content Knowledge Graphs and actionable steps to develop your own using Schema Markup.Enroll in our Course\nUsing an LLM to Generate your Schema Markup\nTo develop your content knowledge graph, you can create your Schema Markup to represent your content. One of the new ways SEOs can achieve this is to use the LLM to generate Schema Markup for a page. This sounds great in theory however, there are several risks and challenges associated with this approach.\nOne such risk includes property hallucinations. This happens when the LLM makes up properties that don\u2019t exist in the Schema.org vocabulary. Secondly, the LLM is likely unaware of Google\u2019s required and recommended structured data properties, so it will predict them and jeopardize your chances of achieving a rich result. To overcome this, you need a human to verify the structured data properties generated by the LLM.\nLLMs are good at identifying entities on Wikidata. However, it lacks knowledge of entities defined elsewhere on your site. This means the markup created by the LLM will create duplicate entities, disconnected across pages on your site or even within a page, making it even more difficult for you to manage your entities.\nIn addition to duplicate entities, LLMs lack the ability to manage your Schema Markup at scale. It can only produce static Schema Markup for each page. If you make changes to the content on your site, your Schema Markup will not update dynamically, which results in schema drift.\nWith all the risks and challenges of this piecemeal approach, the Schema Markup created by the LLM is static and unconnected for a page\u2014it doesn\u2019t help you develop your content knowledge graph.\nInstead, you should create your Schema Markup in a connected, scalable way that updates dynamically. That way, you\u2019ll have an up-to-date knowledge graph that can be used not only for SEO but also to accelerate your AI experiences and initiatives.\nSynergy Between Knowledge Graphs and LLMs\nThere are three main ways of leveraging the content knowledge graph to enhance the capabilities of LLMs for businesses.\n\nBusinesses can train their LLMs using their content knowledge graph.\nBusinesses can use LLMs to query their content knowledge graphs.\nBusinesses can structure their information in the form of a knowledge graph to help the LLM function more effectively.\n\nTraining the LLM Using Your Content Knowledge Graph\nFor a business to thrive in this technological age, connecting with customers through their preferred channel is crucial. LLM-powered AI experiences that answer questions in an automated, context-aware manner can support multi-channel digital strategies. By leveraging AI to support multiple channels, businesses can serve their customers through their preferred channels without having to hire more employees.\nThat said, if you want to leverage an AI chatbot to serve your customers, you want it to provide your customers with the right answers at all times. However, LLMs don\u2019t have the ability to perform a fact check. They generate responses based on patterns and probabilities. This results in issues such as inaccurate responses and hallucinations.\nTo mitigate this issue, businesses can use their content knowledge graphs to train and ground the LLM for specific use cases. In the case of an AI chatbot, the LLMs would need an understanding of what entities and relations you have in your business to provide accurate responses to your customers.\nUsing the Schema.org Vocabulary to Define Entities\nThe Schema.org vocabulary is robust, and by leveraging the wide range of properties available in the vocabulary, you can describe the entities on your website and how they are related with more specificity. The collection of website entities forms a content knowledge graph that is a comprehensive dataset that can ground your LLMs. The result is accurate, fact-based answers to enhance your AI experience.\nLet\u2019s illustrate how your content knowledge graph can train and inform your AI Chatbot.\nA healthcare network in the US has a website with pages on their physicians, locations, specializations, services, etc. The physician page has content relating to the specific physician\u2019s specialties, ratings, service areas and opening hours.\nIf the healthcare network has a content knowledge graph that captures all the information on their site, when a user searches on the AI Chatbot \u201cI want to book a morning appointment with a neurologist in Minnesota this week\u201d, the AI Chatbot can deduce the information by accessing the healthcare network\u2019s content knowledge graph. The response would be the names of the neurologists who service patients in Minnesota and have morning appointments available with their booking link.\nThe content knowledge graph is also readily available, so you can quickly deploy your knowledge graph and train your LLM. If you are a Schema App customer, we can easily export your content knowledge graph for you to train your LLM.\nUsing LLMs to Query Your Knowledge Graph\nInstead of training the LLM, you can use the LLM to generate the queries to get the answers directly from your content knowledge graph.\nThis approach of generating answers through the LLM is less complicated, less expensive and more scalable. All you need is a content knowledge graph and a SPARQL endpoint. (Good news, Schema App offers both of these.)\n\nThe Schema App application loads the content model from your content knowledge graph, which would be all the Schema.org data types and properties that exist within your website knowledge graph.\nThen the user would ask the Schema App application a question.\nThe Schema App application combines the question with the content model and asks the LLM to write a SPARQL query. Note: The only thing the LLM does is transform the question into a query.\nSchema App application then executes the SPARQL against your content knowledge graph and displays the results or requests as a formatted response using the LLM.\n\nThis method is possible because the LLMs have a great understanding of SPARQL and can help translate the question from natural language to a SPARQL query.\nBy doing this, the LLM doesn\u2019t have to hold the data in memory or be trained on the data because the answers exist within the content knowledge graph, which makes it stateless and a less resource-intensive solution. Furthermore, companies can avoid providing all their data to the LLM as this method introduces a control point to the knowledge graph owner to only allow questions on their data that they approve.\nOvercoming LLM Restrictions\nThis approach also overcomes some of the restrictions of the LLMs.\nFor example,  LLMs have token limits, which restrict the input and output number of words that can be included. This approach eliminates this problem by using the LLMs to build the query\/prompt and using the knowledge graph to query. Since SPARQL queries can query gigabytes of data, they don\u2019t have any token limitations. This means you can use an entire content knowledge graph without worrying about the word limit.\nBy using the LLM for the sole purpose of querying the knowledge graph, you can achieve your AI outcomes in an elegant, cost-effective manner and have control of your data while also overcoming some of the current LLM restrictions.\nOptimizing LLMs by Managing Data in the form of a Knowledge Graph\nYou can machine learn Obama\u2019s birthplace every time you need it, but it costs a lot and you\u2019re never sure it is correct.\u201d \u2013 Jamie Taylor, Google Knowledge Graph\nOne of the most considerable costs of running an LLM is the inference cost (aka the cost of running a query through the LLM).\nIn comparison to a traditional query, LLMs like ChatGPT have to run on expensive GPUs to answer queries ($0.36 per query according to research), which can eat into profits in the long run.\nBusinesses can reduce the inference cost of the LLM by storing the historical responses or knowledge generated by the LLM in the form of a knowledge graph. That way, if someone asks the question again, the LLM does not have to exhaust resources to regenerate the same answer. It can simply look up the answer stored in the knowledge graph.\nUnstructured data that the LLM is trained on can also cause inefficiencies in the retrieval of information and high inference costs. Therefore, converting unstructured data such as documents and web pages into a knowledge graph can reduce information retrieval time and produce more reliable facts.\nAs the volume of data in the hybrid cloud environment continues to grow exponentially, knowledge graphs play a crucial role in data management and organization. They contribute to the \u2018Big Convergence,\u2019 which combines data management and knowledge management to ensure efficient information organization and retrieval.\nBuild Your Knowledge Graph Through Schema App\nIn summary, the integration of knowledge graphs with LLMs can significantly enhance decision-making accuracy, especially in the realm of Marketing.\nThe content knowledge graph is an excellent foundation to leverage schema data in LLM tools, leading to more AI-ready platforms. It\u2019s an investment that could pay off handsomely, especially in a world increasingly reliant on AI and knowledge management.\nAt Schema App, we can help you quickly implement your Schema Markup data layer and develop a semantically relevant and ready-to-use content knowledge graph to prepare your organization for AI.\nRegardless of whether you use Schema App to author your Schema Markup, we can produce a content knowledge graph for you. Schema App can capture the Schema.org data from your existing implementation using our Schema App Analyzer to develop your marketing knowledge graph.\nGet in touch with our team to find out more about how Schema App can help you build your marketing knowledge graph to enhance your LLM.","description":"As the adoption of generative AI grows, companies can leverage their Schema.org content knowledge graph to enhance their large language models (LLMs)."},{"@context":"http:\/\/schema.org","@type":"Organization","address":{"@type":"PostalAddress","streetAddress":"201 - 412 Laird Road","postalCode":"N1G 3X7","addressRegion":"Ontario","addressLocality":"Guelph","addressCountry":"https:\/\/www.schemaapp.com\/#Country","name":"Schema App Address","@id":"https:\/\/www.schemaapp.com\/#PostalAddress"},"logo":{"@type":"ImageObject","width":"290","height":"93","url":"https:\/\/ezk8caoodod.exactdn.com\/wp-content\/uploads\/2020\/07\/SA_Logo_Main_Orange_w300-1.png?strip=all&lossy=1&ssl=1","@id":"https:\/\/ezk8caoodod.exactdn.com\/wp-content\/uploads\/2020\/07\/SA_Logo_Main_Orange_w300-1.png?strip=all&lossy=1&ssl=1"},"potentialAction":{"@type":"ScheduleAction","name":"Schedule a Demo","url":"https:\/\/www.schemaapp.com\/book-a-demo\/","@id":"https:\/\/www.schemaapp.com\/#ScheduleAction"},"image":{"@type":"ImageObject","width":"1350","height":"650","url":"https:\/\/ezk8caoodod.exactdn.com\/wp-content\/uploads\/2021\/04\/Schema-App-Featured-Image.png?strip=all&lossy=1&ssl=1","@id":"https:\/\/ezk8caoodod.exactdn.com\/wp-content\/uploads\/2021\/04\/Schema-App-Featured-Image.png?strip=all&lossy=1&ssl=1"},"description":"Schema App is an end-to-end Schema Markup solution that helps enterprise SEO teams develop a knowledge graph and drive search performance.","knowsAbout":["http:\/\/www.wikidata.org\/entity\/Q1891170","https:\/\/www.wikidata.org\/wiki\/Q6108942","https:\/\/www.wikidata.org\/wiki\/Q26813700","https:\/\/www.wikidata.org\/wiki\/Q180711","http:\/\/www.wikidata.org\/entity\/Q33002955"],"keywords":["Structured Data","Knowledge Graph","Rich Results","Semantic Search","Search Engine Optimization","Schema Markup","Semantic Technology"],"location":"http:\/\/www.wikidata.org\/entity\/Q504114","sameAs":["https:\/\/www.instagram.com\/lifeatschemaapp\/","https:\/\/www.linkedin.com\/company\/2480720\/","https:\/\/twitter.com\/schemaapptool","https:\/\/www.youtube.com\/channel\/UCqVBXnwZ3YNf2BVP1jXcp6Q"],"legalName":"Hunch Manifest Inc","name":"Schema App","telephone":"+18554448624","url":"https:\/\/www.schemaapp.com\/","email":"support@schemaapp.com","knowsLanguage":"http:\/\/www.wikidata.org\/entity\/Q1860","areaServed":"http:\/\/www.wikidata.org\/entity\/Q13780930","@id":"https:\/\/www.schemaapp.com\/#Organization"},{"@context":"http:\/\/schema.org","@type":"Thing","sameAs":["https:\/\/www.wikidata.org\/wiki\/Q117246174","https:\/\/en.wikipedia.org\/wiki\/Generative_AI","kg:\/g\/11vyrwhhl2"],"name":"Generative AI","description":"Generative artificial intelligence is artificial intelligence capable of generating text, images, videos, or other data using generative models, often in response to prompts.","@id":"https:\/\/www.schemaapp.com\/entity#Thing3"},{"@context":"http:\/\/schema.org","@type":"Thing","name":"Knowledge Graph","sameAs":["https:\/\/en.wikipedia.org\/wiki\/Knowledge_graph","http:\/\/www.wikidata.org\/entity\/Q33002955","kg:\/g\/11jtynfm6d"],"description":"information repository structured as a graph","@id":"https:\/\/www.schemaapp.com\/entity#Thing13"},{"@context":"http:\/\/schema.org","@type":"Thing","name":"AI","sameAs":["http:\/\/www.wikidata.org\/entity\/Q11660","kg:\/m\/0mkz","https:\/\/en.wikipedia.org\/wiki\/Artificial_intelligence"],"description":"field of computer science that develops and studies intelligent machines","@id":"https:\/\/www.schemaapp.com\/entity#Thing5"},{"@context":"http:\/\/schema.org","@type":"Thing","alternateName":"LLM","sameAs":["http:\/\/www.wikidata.org\/entity\/Q115305900","kg:\/g\/11kc9956b3","https:\/\/en.wikipedia.org\/wiki\/Large_language_model"],"description":"language model built with large amounts of texts","name":"Large Language Model","@id":"https:\/\/www.schemaapp.com\/entity#Thing14"}],{"@context":"https:\/\/schema.org\/","@type":"BreadcrumbList","itemListElement":[{"@type":"ListItem","position":1,"name":"Schema Markup","item":"https:\/\/www.schemaapp.com\/schema-markup\/#breadcrumbitem"},{"@type":"ListItem","position":2,"name":"How to Leverage Your Content Knowledge Graph for LLMs Like ChatGPT","item":"https:\/\/www.schemaapp.com\/schema-markup\/how-to-leverage-your-content-knowledge-graph-for-llms-like-chatgpt\/#breadcrumbitem"}]}]