Measuring trust: The necessity of a FICO score for every AI model

The Rise of Generative AI: Building Trust and Coexistence

According to the 2023 State of IT research conducted by Salesforce, generative AI is on the brink of becoming mainstream. In fact, 9 out of 10 IT leaders believe that generative AI will play a prominent role in their organizations in the near future. This aligns with McKinsey’s report, which states that 50% of organizations were already utilizing AI in 2022, and IDC predicts a staggering 26.9% increase in global AI spending in 2023 alone. The adoption of AI has also seen a substantial rise in customer service, with an 88% increase between 2020 and 2022.

However, despite the enthusiasm surrounding generative AI, IT leaders have reservations. A recent survey found that 64% of IT leaders are worried about the ethics of generative AI, and 62% have concerns about its impact on their careers. The primary concerns include security risks, bias, and carbon footprint. There is also a trust deficit among customers: 23% say they do not trust AI, and another 56% remain neutral. That trust can swing either way depending on how companies build and deliver AI-powered services.

These concerns highlight the importance of building trust in generative AI. To gain a better understanding of how AI solution providers can achieve this, we spoke with Richie Etwaru, an expert in data privacy, advanced analytics, AI, and digital transformation. Etwaru, the co-founder and chief creative officer of Mobeus, shared his insights on how trust can be established.

Arthur C. Clarke once stated that “Any sufficiently advanced technology is indistinguishable from magic.” This sentiment holds true in the case of generative AI. The recent unveiling of OpenAI’s ChatGPT, a highly advanced language model, blurred the lines between technology and magic. While captivating, it also triggered a sense of unease and raised concerns about the limits of comprehension and the potential of AI. Our fear of AI stems from our lack of understanding of how it works and achieves what it does, leading us to imagine all the additional capabilities it could possess.

To demystify AI and alleviate these fears, we need to separate performance from competence, as Rodney Brooks argues in his article “Just Calm Down About GPT-4 Already.” We tend to overestimate the general competence of AI systems beyond their specific applications. Dr. Michael Wu also sheds light on the inner workings of generative models, emphasizing that their responses are based on mathematical foundations rather than conscious intelligence. By revealing the mathematical underpinnings of AI responses, we can confirm that these systems lack human-like awareness.

While our understanding of AI is improving, there is still much to learn. The progress of AI is comparable to being three steps into a 10K race, as AWS CEO Adam Selipsky puts it. As AI continues to evolve, models will become more advanced, necessitating enhanced data mastery, improved model management, greater ecosystem integration, human upskilling, and ongoing mathematical and statistical innovation. Trust is crucial to AI's growth and adoption; its absence is the biggest obstacle holding back AI's potential.

Coexistence with AI should be our guiding principle. Instead of framing the future as a battle between humans and AI, we must find ways to durably and sustainably coexist with this technology. To evaluate how well AI models align with coexistence, a scoring system is necessary: one that signals whether an AI model can be trusted to serve human needs and support cooperative human-AI coexistence. The European Union's AI Act has taken initial steps in this direction by requiring a CE marking and a unique model number for each AI model, but it falls short of signaling trustworthiness. We need a framework that goes beyond technical metrics and explicitly evaluates human benefit, transparency, and coexistence potential.

Companies like Google and OpenAI have started using “model cards” to provide information about their models’ design, data, training, performance, and limitations. However, these cards are often lengthy and hard for the general public to understand. Therefore, it is necessary to develop a simple, easy-to-understand scoring system, such as the Human & AI Coexistence score (HAICO). This score would evaluate AI models against attributes aligned with human-AI coexistence and distill the result into a single signal of trustworthiness. The HAICO score could range from Non-Coexistent to Coexistent to Very Coexistent, indicating how well the model aligns with serving human needs.
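To make the idea concrete, here is a minimal sketch of how a HAICO-style score might be computed: per-attribute ratings weighted into a single number, then mapped onto the three tiers named above. The attribute names, weights, and tier thresholds are purely illustrative assumptions; no such rubric has been standardized.

```python
# Hypothetical HAICO-style aggregation. Attributes, weights, and
# thresholds below are illustrative assumptions, not a published standard.

HAICO_WEIGHTS = {
    "transparency": 0.25,
    "fairness": 0.25,
    "security": 0.20,
    "human_benefit": 0.20,
    "sustainability": 0.10,
}

def haico_score(ratings: dict) -> float:
    """Weighted average of per-attribute ratings (each on a 0-100 scale)."""
    return sum(HAICO_WEIGHTS[k] * ratings[k] for k in HAICO_WEIGHTS)

def haico_tier(score: float) -> str:
    """Map a 0-100 score onto the three tiers named in the text."""
    if score >= 80:
        return "Very Coexistent"
    if score >= 50:
        return "Coexistent"
    return "Non-Coexistent"

# Example ratings for a hypothetical model, one value per attribute.
ratings = {
    "transparency": 85, "fairness": 70, "security": 90,
    "human_benefit": 75, "sustainability": 60,
}
score = haico_score(ratings)
print(f"{score:.1f} -> {haico_tier(score)}")  # 77.8 -> Coexistent
```

In practice, the weights themselves would have to come out of the inclusive, multi-stakeholder process the article calls for, since they encode value judgments about what coexistence requires.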

Creating such a scoring framework is not impossible but will require inclusive development and continuous refinement. Tools like TensorFlow Data Validation, CleverHans, Adversarial Robustness Toolbox, Google’s Fairness Indicators, AI Fairness 360, and others can aid in measuring and monitoring AI models. The goal is to establish a trust score for every AI model, similar to a FICO score for determining financial trustworthiness. This framework would help build public confidence in AI and subsequently leverage its power to improve the human condition.
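As one example of what those toolkits measure, the disparate-impact ratio (the rate of favorable outcomes for an unprivileged group divided by the rate for a privileged group) is a standard fairness metric; libraries such as AI Fairness 360 compute it directly. The sketch below recomputes it from scratch on made-up data, so the group labels and numbers are illustrative only:

```python
# Minimal sketch of a check that fairness toolkits automate: the
# disparate-impact ratio. The widely used "four-fifths rule" flags
# ratios below 0.8 as potentially discriminatory. Data is made up.

def selection_rate(outcomes: list) -> float:
    """Fraction of favorable (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(unprivileged: list, privileged: list) -> float:
    """Ratio of selection rates; values near 1.0 indicate parity."""
    return selection_rate(unprivileged) / selection_rate(privileged)

# Example: loan approvals (1 = approved) for two groups.
group_a = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0]  # unprivileged: 3/10 approved
group_b = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]  # privileged:   7/10 approved

ratio = disparate_impact(group_a, group_b)
print(f"disparate impact = {ratio:.2f}")  # 0.30 / 0.70 = 0.43, below 0.8
```

A trust score like the one proposed here could fold metrics of this kind, computed continuously rather than once at release, into its fairness attribute.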

It is crucial that we move past the debate over who will win, humans or AI, and focus on coexistence: AI is here to stay. Establishing trust and coexistence requires collaborative effort from all stakeholders in the AI ecosystem. Without a scoring framework like the HAICO score, distrust will keep growing and undermine the potential benefits of AI. The pieces are gradually falling into place, and the time is ripe to implement a scoring system that serves as a North Star for coexistence.

In conclusion, as the adoption of generative AI continues to rise, it is important to address concerns around ethics, security, and trust. Building trust requires demystifying the inner workings of AI and establishing scoring systems that evaluate the alignment of AI models with coexistence. By doing so, we can foster public confidence in AI and harness its power to improve the human condition.