Delivering trustworthy AI requires a collection of responsible research and deployment practices, which include both technical and organisational interventions. Building understanding of the risks and ethical concerns associated with AI – and the organisational capability to respond to those concerns – is vital.
As businesses harness the power of artificial intelligence, the race is on to regulate it effectively. The EU’s AI Act establishes a legal framework for high-risk use cases, with the aim of ensuring AI is developed and deployed safely. Translating these legal frameworks into practice will require clear obligations for developers. However, regulations may seem intimidating and time-consuming for companies, particularly start-ups and small and medium-sized enterprises (SMEs).
To address this challenge, Saidot helps businesses adopt AI governance and transparency best practices for trust and compliance, guiding them to apply a systematic AI governance process across any AI tools they build or use.
A tool for start-ups
The company has developed an AI governance model built to consider ethical challenges from the perspective of AI SMEs. “The spectrum of risks is really broad and context-dependent,” says Meeri Haataja, CEO and Co-Founder of Saidot. These include privacy concerns and built-in biases, which could be particularly impactful in contexts such as healthcare. “We need to recognise that the impacts of AI technologies go beyond their intended purposes,” she says.
Saidot’s B2B software as a service (SaaS) platform is designed to simplify AI governance workflows and empower companies to create and operate safe, equal, and transparent AI systems. An important part of this is helping product teams themselves govern their systems in line with legal requirements, while providing quick access to expertise whenever needed. “A lot of AI teams in previously non-regulated industries are facing strict new requirements. We need to simplify compliance to make this work,” Haataja says.
The platform includes a guided AI ethics self-assessment. “It guides product teams through all ethical and legal requirements that they need to comply with when developing or using AI products in a given context, and helps involve stakeholders in the process,” Haataja explains.
Typically, users register on the platform when they begin developing an AI product, or plan to purchase a third-party product that uses AI, to check how the technology and their use of it can comply with standards and regulations. The tool can also be used to share this information with external stakeholders. “Sharing transparent information on how their technology works and what risks are involved is a key part of responsible AI for AI SMEs,” says Haataja. “It’s already appearing in ESG reports and will only grow as regulations come into force.”
Companies must also keep their records up to date throughout the product lifecycle as their system, and their use of it, evolve. Saidot wants to automate this by monitoring systems and their risk environment, and nudging teams when updates or reviews are needed. Saidot does not audit the data entered into its platform, but it helps customers get their systems audit-ready.
Building the business
Saidot joined the ELISE programme to help European SMEs cope with AI-related ethical questions. Through Open Calls, Saidot’s ethics self-assessment process has already guided more than 100 AI products. “ELISE has been an incredible opportunity to work with a lot of different kinds of AI product companies from different sectors, helping us dive into the specific problems and risks they face,” Haataja says.
Users also include those in the heavily regulated AI healthcare space. “Working with teams developing promising AI tools for mental health and breast cancer detection, for example, we recognise the importance of supporting SMEs on their responsible AI journey to ensure that the full potential of AI for healthcare is realised,” she adds.
The company’s main platform for systematic AI governance and transparency is used by some 450 customers including the City of Amsterdam and the Scottish Government, along with hundreds of AI-focused SMEs.
The need to heed AI regulations is only set to increase as more governments, organisations and traditional businesses embed AI into their processes. Saidot is currently focusing on helping companies and governments transition to meet the EU’s GDPR, Digital Services Act and AI Act, and expects demand for its platform to continue growing as more regulations are introduced worldwide and more companies adopt third-party AI products. Getting ready for AI regulations will concern every company that uses AI, particularly those in high-risk domains with additional regulatory obligations, such as financial services, healthcare, education or recruiting, Haataja warns. “Everyone needs to know what kind of AI technology they are purchasing or using and ensure they can buy and use it responsibly. In the ongoing boom of generative AI applications, we’ve all learned that a lot of responsibility lies in the hands of the user.”