Risk management is a critical concern for corporate leaders when it comes to generative A.I. Rapid technology advancements drive a fear of missing out on innovation and falling behind the competition, while also raising safety concerns. Reid Blackman, CEO of Virtue and author of the book Ethical Machines, argues that this anxiety is justified when organizations lack the systems and structures needed to account for the risks of generative A.I. The incorporation of large language models like ChatGPT into everyday operations has made generative A.I. accessible to a wider range of employees, increasing opportunities for innovation but also introducing new risks.
Blackman highlights several risks unique to generative A.I.: the hallucination problem (chatbots generating plausible but inaccurate or irrelevant responses), the deliberation problem (generative A.I. cannot actually deliberate and may fabricate the reasons behind its outputs), the sleazy-salesperson problem (the potential for A.I. to manipulate people and undermine trust), and the problem of shared responsibility (because a small number of companies build these models, organizations must analyze them carefully when sourcing and fine-tuning them). He suggests diligence processes, monitoring, human intervention, and the establishment of metrics and KPIs as remedies for these risks.
While challenges to implementing generative A.I. exist, PwC's survey shows that many companies plan to invest in new technologies, with CFOs prioritizing generative A.I. and advanced analytics. Despite the anxieties surrounding generative A.I., Blackman advises against a ban, emphasizing that training employees on safe usage is crucial. By implementing responsible A.I. programs and governance frameworks, companies can mitigate risks while taking advantage of the innovative potential of generative A.I.
What is generative A.I.?
Generative A.I. refers to technology that can process and generate text, producing human-like responses and outputs.
What are the risks of generative A.I.?
Risks unique to generative A.I. include the potential for generating inaccurate or irrelevant responses, the inability to deliberate and the fabrication of reasons behind outputs, the manipulation of people for sales purposes, and the shared responsibility involved in building and fine-tuning these models.
Should companies ban generative A.I.?
Banning generative A.I. is not recommended. Instead, organizations should focus on training employees on how to use it safely and develop responsible A.I. programs and governance frameworks to mitigate risks.
What are some solutions to managing the risks of generative A.I.?
Solutions include implementing diligence processes, monitoring systems, human intervention, and establishing metrics and key performance indicators (KPIs) to track compliance and the impact of generative A.I. programs.