AI is a vital enabler of the digitalisation of industrial processes; many industrial applications require AI systems that can be analysed or scrutinised by human users. Progress in explainable AI is supporting the development of safe and effective AI methods for these real-world challenges.
AI could be the trigger for a wave of innovation across industry, but a lack of explainability remains one barrier to deep learning techniques revolutionising industrial practices. To be deployed successfully in industrial contexts, AI systems often need some element of interpretability or explainability. Dr Volker Tresp, LMU professor and Siemens Distinguished Research Scientist, notes that while interpretability is often desired, “it is difficult to get explainability in a meaningful way” to meet people’s differing needs and expectations.
However, he says that knowledge graphs “are by nature interpretable,” because they present contextual information in terms of human-understandable concepts and ways of representing information. In this way, they help make AI more transparent. A knowledge graph presents relationships between interconnected objects: data is extracted from different sources, and interrelationships within that data are identified. Machine learning can then be employed to infer additional relationships and extract the meaning behind them. The result is a tool that combines data analysis with contextual information, enabling knowledge graphs to be applied across industries and to unlock new business benefits in the process.
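The basic idea can be sketched in a few lines. A knowledge graph stores facts as subject-predicate-object triples, and new relationships can be inferred from existing ones. The entities, relations, and the simple composition rule below are purely illustrative assumptions, not Siemens data or methods:

```python
# A knowledge graph as a set of subject-predicate-object triples.
# All entity and relation names here are hypothetical examples.
triples = {
    ("Turbine_A", "has_component", "Bearing_X"),
    ("Bearing_X", "made_of", "Steel_1045"),
    ("Turbine_A", "located_in", "Plant_Berlin"),
}

def infer_part_materials(kg):
    """Infer new 'contains_material' edges by composing two existing
    relations: has_component followed by made_of."""
    inferred = set()
    for (s1, p1, o1) in kg:
        if p1 != "has_component":
            continue
        for (s2, p2, o2) in kg:
            if s2 == o1 and p2 == "made_of":
                inferred.add((s1, "contains_material", o2))
    return inferred

print(infer_part_materials(triples))
# {('Turbine_A', 'contains_material', 'Steel_1045')}
```

In practice, machine learning approaches learn such inference patterns from the data rather than relying on hand-written rules, but the interpretability comes from the same place: every fact, original or inferred, is a human-readable statement about named entities.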
Wider use of knowledge graphs is part of Siemens’ digitalisation strategy, which seeks to drive progress towards intelligent engineering and manufacturing, and they are of personal interest to Dr Tresp, who leads a research team working on machine learning approaches that operate at the human abstraction level, where the world is described by entities, concepts, and their mutual relationships. “We cover the machine learning aspects of knowledge graphs and connect them to text and visual data,” he says.
Siemens AG is interested in industrial applications for knowledge graphs, which range from turbine predictive maintenance to smart expert recommendation tools. Research activities focus on integrating the solutions across the large and complex businesses of Siemens, which operates in the fields of automation, electrification, and digitalisation.
One of Siemens’ success stories in deploying knowledge graphs is its TIA Selection Tool, which allows project planners to rapidly set up new digitalisation projects by performing hardware selection, planning and configuration tasks within one tool, without a manual or any detailed portfolio knowledge. The result is a streamlined, automated process “for error-free configuration and ordering”. The company says it is one of the most prominent examples of tools for supporting engineers who are configuring an industrial automation solution, which can easily consist of hundreds of components with thousands of configurable parameters.
Planning and engineering an automated process or service is a challenging task that requires time, experience and a lot of specific knowledge. The tool uses knowledge graph technology to support engineers in selecting the right components to solve their problems. “It’s a bit like a recommendation system, but it uses more technical background about the entities that can be bought,” Dr Tresp explains.
Knowledge graphs are helpful for users because they can integrate and evaluate, in fractions of a second, more knowledge than a human brain can process, enabling rapid data analysis; ultimately, they are designed to assist us in making better decisions. For this purpose, the tool uses data built from an extensive repository of anonymised automation solutions, together with a product knowledge graph that incorporates information about product types, variants, and technical features. This enables it to compare components at the technical level and recommend the best-fitting component to an engineer. Siemens AG says the tool aids knowledge transfer between experienced and less experienced engineers, while its 60,000 active users save time when completing the configuration process.
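Comparing components at the technical level can be pictured as matching a requirement profile against feature sets drawn from a product knowledge graph. The feature names, product identifiers, and the simple matching score below are illustrative assumptions only, a minimal sketch rather than the tool's actual method:

```python
# Hypothetical requirement profile an engineer might specify.
requirements = {"voltage": "24V", "io_channels": 16, "protocol": "PROFINET"}

# Hypothetical candidate components with technical features,
# as they might be extracted from a product knowledge graph.
catalogue = {
    "Module_S1": {"voltage": "24V", "io_channels": 16, "protocol": "PROFINET"},
    "Module_S2": {"voltage": "24V", "io_channels": 8, "protocol": "PROFINET"},
    "Module_S3": {"voltage": "230V", "io_channels": 16, "protocol": "Modbus"},
}

def match_score(features, required):
    """Fraction of required technical features the component satisfies."""
    hits = sum(1 for k, v in required.items() if features.get(k) == v)
    return hits / len(required)

# Rank candidates by how well they fit the requirements.
ranked = sorted(catalogue,
                key=lambda p: match_score(catalogue[p], requirements),
                reverse=True)
print(ranked[0])  # the best-fitting component
```

A real system would weight features by importance and exploit the graph structure (such as compatibility relations between components), but the principle is the same: recommendations are grounded in explicit, inspectable technical facts.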
The team behind the tool has extended the idea of a knowledge graph further by allowing it to accommodate changes happening over time. This ‘temporal knowledge graph’ leverages additional contextual information hidden in the data, including temporal dependencies between the actions performed by the user, which improves the quality of recommendations.
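The extension from a knowledge graph to a temporal one amounts to attaching a timestamp to each fact, turning triples into quadruples so that the order of a user's actions can inform what is suggested next. The entities, timestamps, and the exponential recency weighting below are all assumptions for illustration:

```python
import math

# A temporal knowledge graph stores quadruples: each fact carries a
# timestamp. All names and timestamps here are hypothetical.
quadruples = [
    ("User_1", "configured", "PLC_Model_A", 1),
    ("User_1", "added", "IO_Module_B", 2),
    ("User_1", "added", "Power_Supply_C", 3),
]

def recency_weights(facts, now, half_life=2.0):
    """Weight each fact's object by how recently it occurred: a simple
    way to let recent actions count more towards the next suggestion."""
    return {obj: math.exp(-(now - t) * math.log(2) / half_life)
            for (_, _, obj, t) in facts}

weights = recency_weights(quadruples, now=4)
# The most recent action carries the highest weight.
print(max(weights, key=weights.get))
```

Actual temporal knowledge graph models learn such time dependencies from data rather than using a fixed decay, but the sketch shows why timestamps add signal: two users with the same set of past actions, performed in a different order, can sensibly receive different recommendations.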
Robustness and explainability
Though recommendation tools may not need to be as robust as, for example, plant management tools, users still want their tools to be reliable and effective. The TIA Selection Tool is part of Siemens’ end-to-end Totally Integrated Automation (TIA) approach, which is designed to deliver maximum consistency and transparency, and the company says the tool “makes error-free configuration possible”.
Dr Tresp believes language models such as ChatGPT could be useful in adding a level of explainability to tools used by businesses. For example, language models could explain how AI arrived at a decision or recommendation in a way that is understandable to users.
However, there are risks in relying on AI-generated explanations: ChatGPT has already been shown to give plausible, but incorrect, information. A serious effort is underway to make it more robust and accurate, Dr Tresp says, as well as to find useful new applications, and “we are not in a completely different situation” with ChatGPT and other language models. “One challenge is how to get the methods we’re developing in our programme to be effective in the context of Large Language Models and foundation models, because there are new challenges and novel use cases,” he says.
Looking into the future
Siemens AG believes smart recommendations are going to be an essential feature of all future engineering tools, helping users to navigate the ever-more-complex engineering landscape. Dr Tresp and Siemens are passionate about the importance and value of world-class research, including knowledge graphs, but they also appreciate that potentially profitable applications for new AI technologies are crucial. The business case for AI can be difficult, but needs to be established, Dr Tresp says. As well as testing and integrating new AI technologies, his team is also focused on using them to develop new applications. “It’s difficult to have business models with AI,” he explains. “If you’re solving one problem for one customer, that’s great, but a company like Siemens needs to have a multiplication factor so that the same solution can serve several customers.”