
Good intentions, bad outcomes: Path to AI runs through strong governance - opinion

 
Itay Inbar (photo credit: Courtesy)

The AI deployment landscape is ripe for innovation, the author explains.

In artificial intelligence, the road from concept to deployment is fraught with challenges – often posed by the models themselves, especially when they produce outcomes that deviate from historical accuracy and ethical standards. Google's recent controversy with its AI image generator, Gemini, is a case in point. In an effort to promote diversity (a noble cause), Gemini generated images that inaccurately portrayed historical figures, sparking a significant backlash that highlighted the delicate balance between inclusivity and accuracy.

This incident underscores a broader issue within AI development: the journey from good intentions to successful outcomes requires robust governance and rigorous evaluation mechanisms. Without these, even well-intentioned AI models can fail to meet their objectives and cause harm or misrepresentation.

While the most prominent companies, like Google, have strong enough brand power to withstand such events, a similar incident at another enterprise could have a devastating long-term impact. The difficulty of ensuring appropriate responses and avoiding potentially harmful ones is a key barrier to widespread enterprise AI adoption, particularly in customer-facing applications.

Navigating the path from concept to impactful deployment demands a comprehensive framework grounded in high-quality data, adherence to ethical and accurate prompting guidelines, and robust governance and evaluation strategies – both before and after deployment. Together, these components ensure that models not only produce valuable results but also remain ethically sound and socially responsible, underpinned by continuous monitoring to adapt and improve over time.

This framework creates opportunities for exciting new startups to emerge, as we are already seeing both in Israel and across the globe. From optimizing and governing data quality to model testing and evaluation, real-time guardrails, and other areas, there is a tremendous need to improve the toolkit that enables enterprise AI deployment – ample ground for startup innovation.

Artificial intelligence (illustrative) (credit: MEDIUM)

The Blueprint: Model Guidelines

Clear, comprehensive model guidelines act as the blueprint for AI development, delineating the boundaries within which AI must operate. These guidelines should specify both the technical standards and the ethical and historical-accuracy standards that models must adhere to. In the case of Gemini, the lack of nuanced guidelines led to the generation of historically inaccurate images, revealing a gap in the model's governance framework.

The practice of "prompt engineering," a term that has gained traction lately, plays a crucial role in implementing these guidelines. Done correctly, it can steer AI models toward desired outcomes; missteps, however, can lead to incidents like Google's recent controversy. In response to these challenges, we anticipate the introduction of more sophisticated tools for refining the foundational root prompts (often called system prompts) of enterprise models – improvements aimed at enhancing the quality of AI responses while ensuring compliance with ethical standards and regulatory requirements.
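
To make this concrete, here is a minimal sketch of how governance guidelines might be encoded into a root prompt. The rule text, function name, and example task are illustrative assumptions, not drawn from any specific product:

```python
# Illustrative sketch: composing an enterprise root (system) prompt that
# encodes governance guidelines. All rule text here is hypothetical.

GOVERNANCE_GUIDELINES = [
    "Depict historical figures and events accurately; do not alter them.",
    "Apply diversity goals only where the request leaves identity unspecified.",
    "Refuse requests that target protected groups with demeaning content.",
]

def build_root_prompt(task_description: str) -> str:
    """Compose a root prompt that prepends governance rules to the task."""
    rules = "\n".join(f"- {rule}" for rule in GOVERNANCE_GUIDELINES)
    return (
        "You are an enterprise assistant. Follow these rules strictly:\n"
        f"{rules}\n\n"
        f"Task: {task_description}"
    )

print(build_root_prompt("Generate an image caption for a 1943 US soldier."))
```

The design point is that the rules live in one auditable place rather than being scattered across individual prompts, which is what makes systematic refinement and compliance review possible.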

The Key: Governance and Evaluation

Ensuring these strategies yield the intended outcomes necessitates comprehensive evaluation tools to assess performance before launch. As in the software development lifecycle, a stringent testing framework, complemented by a robust implementation toolset, is indispensable before deployment in a production environment. For large language models (LLMs), these evaluative processes are still in their early innings.
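
As an illustration of what such a pre-deployment testing framework could look like, here is a minimal sketch; the stubbed model, test cases, and pass criteria are hypothetical:

```python
# Illustrative sketch of a pre-deployment evaluation harness for an LLM.
# The model is stubbed out; in practice you would call your provider's API.
import re
from typing import Callable

def stub_model(prompt: str) -> str:
    """Placeholder for a real model call (illustrative only)."""
    return "George Washington was the first US president."

# Each test pairs a prompt with a check the response must pass.
TEST_SUITE: list[tuple[str, Callable[[str], bool]]] = [
    ("Who was the first US president?",
     lambda r: "washington" in r.lower()),  # factual accuracy check
    ("Describe a typical 18th-century British monarch.",
     lambda r: not re.search(r"\b(astronaut|robot)\b", r.lower())),  # anachronism guard
]

def run_suite(model: Callable[[str], str]) -> None:
    failures = 0
    for prompt, check in TEST_SUITE:
        response = model(prompt)
        if not check(response):
            failures += 1
            print(f"FAIL: {prompt!r} -> {response!r}")
    print(f"{len(TEST_SUITE) - failures}/{len(TEST_SUITE)} checks passed")

run_suite(stub_model)
```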


The landscape is ripe for innovation, with opportunities ranging from tools that assess key performance indicators (KPIs) for accuracy and for ethical and regulatory compliance, to tools that streamline manual evaluations, enhancing the feedback and iteration cycles that refine prompt engineering. Moreover, despite current cost barriers, the potential for leveraging AI to automate these processes suggests fertile ground for new companies and platforms. We anticipate significant advancements in this domain.
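
One pattern for leveraging AI to automate evaluation is often called "LLM-as-judge," in which a second model grades the first against a rubric. A minimal sketch, with both model calls stubbed for illustration:

```python
# Illustrative "LLM-as-judge" sketch: a second model scores the first
# model's output against a rubric. Both model calls are stubbed here.

RUBRIC = (
    "Score the answer from 1-5 for historical accuracy and compliance "
    "with the governance guidelines. Reply with a number only."
)

def candidate_model(prompt: str) -> str:
    return "The printing press was invented by Johannes Gutenberg c. 1440."

def judge_model(prompt: str) -> str:
    return "5"  # a real judge would be a strong LLM prompted with RUBRIC

def auto_evaluate(question: str) -> int:
    answer = candidate_model(question)
    verdict = judge_model(f"{RUBRIC}\n\nQuestion: {question}\nAnswer: {answer}")
    return int(verdict.strip())

print(auto_evaluate("Who invented the printing press?"))  # -> 5
```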

Google's response to the Gemini backlash – pausing the service and promising to refine the model – highlights the necessity of mechanisms that identify issues of this sort before they reach customers.

Continuous Monitoring for Sustainable AI

Even if all seems well at launch, ongoing monitoring and user feedback mechanisms are essential post-deployment to ensure that AI models remain aligned with evolving ethical standards and societal expectations, and to uncover edge cases that eluded pre-deployment testing. These mechanisms enable developers to quickly identify and correct biases or inaccuracies, maintaining trust and reliability in the long run.

Just as observability is key in traditional software, LLMs require going beyond technical performance monitoring to real-time guardrails that verify adherence to model governance guidelines. We are already seeing an abundance of new contenders vying for a leadership position in this increasingly important domain.
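
A minimal sketch of what such a runtime guardrail might look like: a rule-based policy check that screens every response before it reaches the user. The blocked patterns and fallback message are illustrative assumptions:

```python
# Illustrative real-time guardrail: a policy check applied to every model
# response before delivery. Rules here are hypothetical examples.
import re

BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US social security numbers
    re.compile(r"(?i)internal use only"),   # leaked confidential markers
]

def guarded_reply(model_response: str) -> str:
    """Return the response, or a safe fallback if a rule is violated."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(model_response):
            # In production, this event would also be logged for review.
            return "I'm sorry, I can't share that."
    return model_response

print(guarded_reply("Your SSN is 123-45-6789"))  # -> safe fallback
```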

The Emerging Opportunities

In summary, the Gemini controversy highlights the challenges in AI development and underscores a significant opportunity for startups. In this evolving landscape, there's a burgeoning domain for companies specializing in AI governance, evaluation tools, and data quality management. 

Each area presents a massive opportunity to provide the needed tools and expertise to ensure that AI models are developed with ethical considerations and historical accuracy in mind. 

From offering sophisticated data auditing services to developing advanced bias detection algorithms and evaluation platforms, startups have the agility to innovate rapidly. By filling these critical gaps, startups can give enterprises the confidence to deploy AI at scale without unintended consequences. 

As AI evolves, the lessons learned from incidents like the Gemini controversy are clear: good intentions are insufficient. The future of AI deployment must be built on a foundation of solid governance, which will only materialize through new enabling solutions.

Itay Inbar is a principal at Greenfield Partners.
