AI Topics for Your Management and Board of Directors – Part 1 – Yogi Schulz



By Yogi Schulz

AI is sweeping through most organizations. It’s out of control, like the Wild West. AI output is showing up in reports and presentations. The Apple App Store and Google Play offer many free AI apps of varying quality. Every AI software vendor offers a web-based prompt interface. AI output is part of search results. AI capabilities are integrated into desktop software.

Management and board members see dramatic headlines in the media about AI fiascos. The articles describe disastrous outcomes that all boards want to avoid. These outcomes include:

  • Significant organizational disruption and high recovery costs.
  • Loss of reputation and revenue.
  • Distracting regulator investigations and fines.

On the other hand, the excitement around AI points to an incredible opportunity that no organization can afford to ignore. Improved organizational performance benefits include:

  • Accelerated product development.
  • Enhanced employee productivity and innovation.
  • Improved customer service with richer personalization.
  • Reduced capital and operating costs.
  • Optimized supply chain operations.

For example, SLB, the global oil and natural gas service company formerly known as Schlumberger, announced a 20% increase in digital revenue, totalling US$2.44 billion. SLB attributed the success to the company’s strategic use of AI to optimize operations and drive efficiency for its producer customers, including Aramco.

The board-level challenge is to provide corporate governance guidance and accountability that builds trust and fosters innovation while managing the risks. Effective oversight mechanisms address risks such as:

  • Biased output that leads to discrimination.
  • Privacy infringement that leads to lawsuits.
  • Misuse by staff that leads to Intellectual Property (IP) loss and erroneous recommendations.
  • Regulatory noncompliance that leads to investigations and fines.
  • Poor data management that fails to exploit this valuable corporate asset.

The board’s governance role suggests that future meetings should include a discussion of the following more specific topics:

  • Acceptable AI usage.
  • AI risk management.
  • AI hallucinations.
  • AI project best practices.
  • Cybersecurity for AI applications.
  • AI for cybersecurity defences.

Discussing these topics should lead to policies that form the basis for staff accountability. This first article explores the first three of these board-level AI topics, which implement governance-by-design policies for AI. A second, upcoming article will describe the remaining topics.

Acceptable AI usage

Staff are routinely experimenting with generative AI, often oblivious to the risks. The board and the CEO should sponsor the development of an acceptable AI usage policy that encourages AI usage while reducing risk. Here’s what makes AI appealing:

  • Access to AI is easy. No approval is required.
  • AI software vendor websites are free or incredibly cheap to use.
  • AI technology is new and fascinating, and it triggers Fear of Missing Out (FOMO).
  • AI promises to make employees look smarter with less effort.
  • AI apps are readily available for every laptop and smartphone.

A corporate acceptable AI usage policy articulates guidelines and expectations of staff behaviour. The policy:

  • Educates staff about generative AI opportunities and risks.
  • Raises awareness of organization policies.
  • Avoids constraining innovation.
  • Ensures the responsible use of generative AI.
  • Reduces the risks associated with this technology.
  • Describes consequences of policy violations.
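
To make these expectations actionable, some organizations also capture the policy in a machine-readable form that onboarding reminders and tool-approval workflows can reference. The following Python sketch is purely illustrative; the class name, fields and example values are assumptions made for this article, not taken from any published policy:

    from dataclasses import dataclass, field

    # Hypothetical, minimal machine-readable form of an acceptable AI usage
    # policy. Field names and example values are illustrative only.

    @dataclass
    class AcceptableAIUsagePolicy:
        approved_tools: list[str] = field(default_factory=lambda: [
            "Internally hosted chat assistant",
            "Vendor copilot under an enterprise agreement",
        ])
        prohibited_inputs: list[str] = field(default_factory=lambda: [
            "Customer personal information",
            "Unreleased financial results",
            "Third-party confidential documents",
        ])
        required_behaviours: list[str] = field(default_factory=lambda: [
            "Fact-check AI output before it appears in reports or presentations",
            "Disclose AI assistance where the organization requires it",
        ])
        violation_consequences: str = (
            "Escalation to the employee's manager and HR per the disciplinary process"
        )

    policy = AcceptableAIUsagePolicy()
    print(len(policy.prohibited_inputs), "categories of prohibited input")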

One example is IBM’s internal implementation of its AI Ethics policy. The company delivered in-depth training programs and created an internal platform to guide employees on the responsible use of AI, focusing on its benefits and potential risks. This hands-on approach cultivated a workplace culture of responsibility and innovation in leveraging AI technologies.

Related reading: Why You Need a Generative AI Policy

AI risk management

Organizations and AI application projects are dealing with AI risk haphazardly. They are shooting from the hip. The board and the CEO should sponsor an AI risk management process to reduce AI risk.

The fastest and easiest way to implement an AI risk management process is to adopt one of the existing AI risk frameworks. For example, the MIT AI risk framework starts with these risk domains:

  • Discrimination and toxicity.
  • Privacy and security.
  • Malicious actors and misuse.
  • Human-computer interaction.
  • Socioeconomic and environmental.
  • AI system safety, failures and limitations.

The MIT AI risk framework elaborates on these risk domains with multiple sub-domains.

Organizations can establish a policy requiring every AI application to perform risk management and implement necessary mitigations repeatedly during every project phase. In this way, risk management becomes an integral part of the innovation process rather than an afterthought.
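
As an illustration of how such a policy could operate in practice, a project team might keep a lightweight risk register keyed to the framework’s domains and review it at each stage gate. The Python sketch below is a hypothetical example; the MIT framework supplies only the domain names, while the phase list, register structure and stage-gate check are assumptions:

    # Hypothetical per-phase AI risk register; the MIT framework supplies the
    # domain names, but this structure, the phase list and the stage-gate
    # check are illustrative assumptions, not part of the framework.

    MIT_RISK_DOMAINS = [
        "Discrimination and toxicity",
        "Privacy and security",
        "Malicious actors and misuse",
        "Human-computer interaction",
        "Socioeconomic and environmental",
        "AI system safety, failures and limitations",
    ]

    PROJECT_PHASES = ["Initiation", "Design", "Build", "Test", "Deploy", "Operate"]

    def blank_register():
        """One entry per project phase per risk domain."""
        return {
            phase: {
                domain: {"likelihood": None, "impact": None, "mitigation": ""}
                for domain in MIT_RISK_DOMAINS
            }
            for phase in PROJECT_PHASES
        }

    def unresolved_risks(register, phase):
        """Domains still lacking an assessment or a mitigation at a stage gate."""
        return [
            domain
            for domain, entry in register[phase].items()
            if entry["likelihood"] is None or not entry["mitigation"]
        ]

    register = blank_register()
    register["Design"]["Privacy and security"] = {
        "likelihood": "Medium",
        "impact": "High",
        "mitigation": "Pseudonymize training data; restrict prompt logging",
    }
    print(unresolved_risks(register, "Design"))   # five domains remain open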

Related reading: What are the risks of Artificial Intelligence?

AI hallucinations

In their rush to complete AI projects, teams often do not pay enough attention to AI hallucinations, which occur when AI applications produce erroneous, biased or misleading output. To reduce their frequency and impact, the board and the CEO should sponsor the adoption of processes that curb hallucinations in AI applications. Considerations to reduce AI hallucinations include the following (a brief sketch of the prompt-side controls follows the list):

  • Clear model goal.
  • Balanced training data.
  • Accurate training data.
  • Sufficient model tuning.
  • Precision prompts.
  • Fact-checked outputs.
  • Limited scope of responses.
  • Comprehensive model testing.
  • Adversarial fortification.
  • Ongoing human oversight.
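
Several of these considerations, notably precision prompts, limiting the scope of responses and fact-checking outputs, can be built directly into how an application calls a model. The Python sketch below is a minimal, hypothetical illustration; call_model() is a placeholder for whichever model API the project actually uses, and the grounding documents, refusal rule and fact-check heuristic are assumptions:

    # Hypothetical sketch of prompt-side hallucination controls: a precise,
    # scope-limited prompt plus a crude grounding check on the output.
    # call_model() is a placeholder for whichever model API the project uses.

    GROUNDING_DOCUMENTS = {
        "doc-001": "Facility A produced 1,200 barrels per day in Q1.",
        "doc-002": "Facility B was shut in for maintenance during March.",
    }

    REFUSAL = "Not found in the provided documents."

    def build_prompt(question):
        """Precision prompt that limits the scope of responses to known sources."""
        context = "\n".join(GROUNDING_DOCUMENTS.values())
        return (
            "Answer using ONLY the context below. If the context does not "
            f"contain the answer, reply exactly: '{REFUSAL}'\n\n"
            f"Context:\n{context}\n\nQuestion: {question}"
        )

    def call_model(prompt):
        # Placeholder: a real application would call its chosen model API here.
        return REFUSAL

    def answer_with_guardrails(question):
        """Fact-check the output: flag answers not traceable to the sources."""
        answer = call_model(build_prompt(question))
        if answer.strip() == REFUSAL:
            return answer
        source_text = " ".join(GROUNDING_DOCUMENTS.values())
        grounded = any(word in source_text for word in answer.split() if len(word) > 6)
        return answer if grounded else "FLAGGED FOR HUMAN REVIEW: " + answer

    print(answer_with_guardrails("What did Facility C produce in Q1?"))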

The organization can adopt a policy requiring every AI application to demonstrate, before it is promoted to production status, that the relevant hallucination-reduction processes have been incorporated into its design and planned operations.

Related reading: How can engineers reduce AI model hallucinations?

Conclusions

Every board of directors should sponsor the development and use of AI governance policies that clarify staff accountability and build AI trust while controlling AI risks. The policies will help drive innovation and deliver measurable business results.

Every organization can develop AI governance policies at a modest cost through the collaboration of staff and external consultants. The operation and enforcement of the AI governance policies are typically assigned to HR and IT staff.


Yogi Schulz has over 40 years of experience in information technology in various industries. He writes for Engineering.com, EnergyNow.ca, EnergyNow.com and other trade publications. Yogi works extensively in the petroleum industry to select and implement financial, production revenue accounting, land & contracts, and geotechnical systems. He manages projects that arise from changes in business requirements, the need to leverage technology opportunities, and mergers. His specialties include IT strategy, web strategy, and systems project management.
