5 Facts About ChatGPT and Generative AI That CEOs Should Know

If you’ve attended any industry conferences this year, you’re likely aware that ChatGPT, Generative AI, and artificial intelligence in general have taken center stage in the discussions.

However, much of the content in these conferences tends to be vague and overly optimistic, with statements like “AI is going to be disruptive” or “AI is a game changer.”

CEOs and other senior executives need, and are actively seeking, more specific insights into the impact of these new technologies and how to navigate them effectively.

Here are five crucial insights that CEOs should have regarding ChatGPT and Generative AI:

  1. Generative AI Isn’t Primarily About Cost Reduction

When deploying Generative AI tools and technologies, the initial focus should be on enhancing productivity, particularly by accelerating processes. Estimates of potential staff reductions vary widely depending on the type of role, ranging from 20% to as much as 80%. While there are instances of companies replacing employees with Generative AI to some extent, these cases are rare and have often yielded less than stellar results.

The true impact of Generative AI on businesses isn’t centered around replacing staff, but rather on expediting human productivity and creativity. According to Charles Morris, Microsoft’s Chief Data Scientist for Financial Services, Generative AI should be viewed as a co-pilot, aiding humans in performing tasks more efficiently. Whether it’s executing marketing campaigns, website development, coding, or creating data models, the value of Generative AI lies in reducing time-to-market, not merely reducing costs.

  2. Evaluating the Risks of Large Language Models (LLMs) Is Essential

While ChatGPT might be the best-known large language model (LLM) at the moment, other major technology players are developing or have recently launched their own LLMs (such as Gorilla, from UC Berkeley and Microsoft Research, and Meta’s Llama).

By the end of the decade, it’s anticipated that businesses will rely on anywhere from 10 to 100 LLMs, depending on their industry and size. There are two certainties in this landscape: first, technology vendors will claim to incorporate Generative AI technology even when they don’t, and second, they often won’t disclose the weaknesses and limitations of their LLMs.

As a result, companies will need to independently assess the strengths, weaknesses, and risks associated with each LLM. Chris Nichols, Director of Capital Markets at South State Bank, emphasizes the importance of thorough evaluation:

“Companies should adhere to specific standards when evaluating each model. Risk management teams must meticulously track and assess models based on criteria such as accuracy, potential bias, security, transparency, data privacy, audit frequency, and ethical considerations (e.g., intellectual property infringement, deep fake generation, etc.).”
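The evaluation criteria above can be turned into a simple internal scorecard. A minimal sketch follows; the criterion names come from the quote, but the 1-to-5 scale, the equal weighting, and the model names are illustrative assumptions, not an established methodology.

```python
from dataclasses import dataclass

# Criteria drawn from the evaluation standards described above.
CRITERIA = ("accuracy", "bias", "security", "transparency",
            "data_privacy", "audit_frequency", "ethics")

@dataclass
class LLMRiskScorecard:
    """One vendor model, rated 1 (poor/high risk) to 5 (excellent/low risk)."""
    model_name: str
    accuracy: int
    bias: int
    security: int
    transparency: int
    data_privacy: int
    audit_frequency: int
    ethics: int  # e.g., IP infringement and deep-fake exposure

    def overall_score(self) -> float:
        """Average the criterion ratings into a single comparable number."""
        return sum(getattr(self, c) for c in CRITERIA) / len(CRITERIA)

# Hypothetical candidates, compared side by side.
candidates = [
    LLMRiskScorecard("vendor-model-a", 4, 3, 4, 2, 3, 2, 3),
    LLMRiskScorecard("vendor-model-b", 3, 4, 3, 4, 4, 3, 4),
]
for card in sorted(candidates, key=lambda c: c.overall_score(), reverse=True):
    print(f"{card.model_name}: {card.overall_score():.2f}")
```

In practice a risk management team would weight the criteria differently per industry, but even a flat average like this makes vendor comparisons explicit and auditable.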

  3. ChatGPT in 2023: A Parallel to Lotus 1-2-3 in 1983

Cast your memory back to the early days of personal computing in 1983 when the spreadsheet software Lotus 1-2-3 burst onto the scene. Although not the first PC-based spreadsheet, it ignited a revolution in personal computer adoption and earned the title of the “killer app” for PCs.

Lotus 1-2-3 significantly boosted employee productivity by enabling users to manage numerical data in ways previously unimaginable. Those who recall that era may remember the reliance on HP calculators for calculations and manual record-keeping.

However, despite the productivity gains, challenges emerged:

  1. Users often introduced errors into calculations, leading to issues for some companies.
  2. Assumption documentation within spreadsheets was scarce, resulting in a lack of transparency.
  3. There was a dearth of consistency and standardization in spreadsheet design and use.

Interestingly, these same challenges that companies faced four decades ago with Lotus 1-2-3 persist today in the realm of ChatGPT and other Generative AI tools. Issues include overreliance on ChatGPT’s sometimes incorrect output, the absence of proper documentation or a “paper trail” for tool usage, and a lack of uniformity in tool utilization among employees within the same department, let alone across the company.

In its heyday, Lotus 1-2-3 gave rise to numerous plugins that enhanced its functionality. Similarly, ChatGPT benefits from a wealth of plugins, many of which generate non-text output such as audio, video, and programming code, further augmenting its capabilities.

  4. The Critical Role of Data Quality in Generative AI

Consultants have long advised organizations to ensure data quality, and the stakes become evident when employing Generative AI tools. The age-old adage “garbage in, garbage out” applies perfectly to Generative AI.

For open-source LLMs trained on public Internet data, vigilance regarding data quality is essential. While the Internet is a goldmine of information, it often resembles a data landfill: a place where you may find valuable nuggets but can just as easily end up with worthless debris.

Companies have grappled with granting employees access to the data required for informed decision-making and job performance for many years. Part of this challenge involves deploying tools to access data and training employees to use them effectively.

Generative AI tools simplify some of the complexities associated with data access and reporting software applications, contributing to enhanced human performance. However, the quality of the underlying data remains paramount.

It’s also important to stop referring to “data” in generic terms. Instead, companies should assess the quality, availability, and accessibility of specific data types, such as customer data, customer interaction data, transaction data, financial performance data, operational performance data, and more. Each of these data categories serves as fodder for Generative AI tools.
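One way to make this assessment concrete is a simple data inventory. The sketch below uses the category names from the paragraph above; the ratings and the readiness rule are made-up placeholders, not a standard framework.

```python
# Per-category assessment: category names come from the article;
# the quality/availability/accessibility ratings are illustrative only.
data_inventory = {
    "customer data":                {"quality": "high",   "availability": "broad",   "accessibility": "easy"},
    "customer interaction data":    {"quality": "medium", "availability": "partial", "accessibility": "hard"},
    "transaction data":             {"quality": "high",   "availability": "broad",   "accessibility": "medium"},
    "financial performance data":   {"quality": "high",   "availability": "narrow",  "accessibility": "easy"},
    "operational performance data": {"quality": "low",    "availability": "partial", "accessibility": "hard"},
}

# A hypothetical readiness rule: low-quality or hard-to-access categories
# should not yet feed Generative AI tools.
not_ready = [name for name, rating in data_inventory.items()
             if rating["quality"] == "low" or rating["accessibility"] == "hard"]
print("Categories needing remediation:", not_ready)
```

Even a rough inventory like this replaces vague talk about “data” with a per-category remediation list that IT and business owners can act on.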

  5. New Approaches Required for Generative AI

Banning the use of Generative AI tools is not a practical solution. Instead, it’s imperative to establish clear guidelines for their utilization. Consider implementing the following requirements for employees:

  1. Document Prompts: Mandate that employees document the prompts they use to generate results.
  2. Proofread Output: Ensure that Generative AI output is thoroughly proofread, and employees must demonstrate their proofreading efforts.
  3. Internal Document Standards: Enforce internal document guidelines encompassing the use of keywords, clear headings, graphics with alt tags, concise sentences, and formatting requirements.
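The first two requirements can be enforced with very lightweight tooling. Below is a minimal sketch of a shared prompt log; the `log_prompt` helper, the CSV layout, the file name, and the example entries are all hypothetical, shown only to illustrate the kind of paper trail the guidelines call for.

```python
import csv
import datetime
from pathlib import Path

# Hypothetical shared log file and column layout.
LOG_FILE = Path("genai_prompt_log.csv")
FIELDS = ["timestamp", "employee", "tool", "prompt", "proofread_by"]

def log_prompt(employee: str, tool: str, prompt: str, proofread_by: str) -> None:
    """Append one audited Generative AI interaction to the shared CSV log."""
    is_new = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()  # write column headers on first use
        writer.writerow({
            "timestamp": datetime.datetime.now().isoformat(timespec="seconds"),
            "employee": employee,
            "tool": tool,
            "prompt": prompt,
            "proofread_by": proofread_by,  # records the proofreading step
        })

# Example entry: both the prompt and the names are invented.
log_prompt("j.doe", "ChatGPT",
           "Draft a first-pass Q3 marketing email for small-business customers",
           "a.smith")
```

A spreadsheet or ticketing system would serve equally well; the point is that every prompt and its human review leave a record that auditors can inspect.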

This may seem like a substantial undertaking, but according to Chris Nichols at South State Bank, inaccuracies in Generative AI output often stem from poorly structured source documents.

Management’s focus is evolving over the course of this decade. In the past decade, businesses embarked on a “digital transformation” journey primarily centered on digitizing high-volume transactional processes like account opening and customer support.

This focus is now expanding to enhance the productivity of knowledge workers across various organizational functions, such as IT, legal, and marketing.

In the short term, entrusting Generative AI tools to run a company autonomously, without human intervention and oversight, would be imprudent. The presence of inaccurate data can lead to undesirable outcomes and errors.

However, in the long run, Generative AI will indeed be “disruptive” and a “game changer.” CEOs must adopt a proactive approach and take substantial measures to ensure that these disruptions and changes yield positive outcomes for their organizations.
