Liz Hecht, August 2024
“AI is far deeper and more powerful than just another technology. The risk isn’t in overhyping it; it’s rather in missing the magnitude of the coming wave … We really are at a turning point in the history of humanity.” ―From The Coming Wave

The “21st century’s greatest dilemma” in this book’s subtitle is how we humans can allow new technology such as AI to realize its vast potential for good while not being destroyed by its vast potential for evil.

On the good side of the ledger, The Coming Wave notes new medical advances and clean energy breakthroughs: expedited drug discovery, faster and more accurate medical diagnoses, farm robots maximizing yield while minimizing waste, an enzyme that can break down ocean-clogging plastics, drones that allow countries like Ukraine to defend against an aggressor such as Russia, and adaptive education systems building bespoke curricula for individual students. In sum, exponential improvements in human life.

On the evil side of the ledger, the book describes innumerable “tail risks on a deeply concerning scale”: disinformation as a surgical strike, election tampering via deepfake videos, Russian bots designed to intensify pandemics, large numbers of people with populist leanings out of work, and wars that “might be sparked accidentally for reasons that forever remain unclear.” Oh, and one might also add: the potential for intellectual property theft on a grand scale,[1] the outsized energy requirements of AI[2] and the human cost of building AI systems.[3]

This book sends a starkly simple message: AI and other new technologies represent a tsunami of change, with potential for great good but also great evil. Fully containing the tsunami is not desirable (or even possible), but we still must do everything in our power to avoid all that can go wrong.
A highly valuable part of The Coming Wave is the summary of 10 interrelated, reinforcing steps toward containment―from technical safety and audits to a more proactive role for government and a stronger culture of learning from mistakes.
Mustafa Suleyman is the co-founder of two AI companies, DeepMind and Inflection, and now serves as the CEO of Microsoft AI. He knows the world of technology―its potential and its perils―from the inside out. And he is worried.

He is worried about what he calls “pessimism aversion,” or “the tendency for people, particularly elites, to ignore, downplay, or reject narratives they see as overly negative.” Why “particularly elites”? Because elites―CEOs of companies, heads of state, leaders in their field―are used to being in control. But the coming wave is not easily susceptible to control. “Properly addressing this wave,” Suleyman writes, “containing technology, and ensuring that it always serves humanity means overcoming pessimism aversion. It means facing head-on the reality of what’s coming.”

As Suleyman sees it, confronting this 21st century dilemma successfully requires navigating “a narrow path” between “techno-authoritarian dystopia on the one hand” and “openness-induced catastrophe on the other.” Think China’s hyper-surveillance of all its citizens versus a misanthropic loner in his parents’ basement engineering a global cyberattack.

Lessons for Investment Marketing Professionals

The concept of investment marketing seems mundane in light of futuristic, existential concerns of this nature. The Coming Wave nonetheless inspired me to think about how investment marketers can excel in the complex world of AI. A few important lessons emerge:

Encourage the use of case studies and examples. Financial journalists and potential investors are increasingly pressing AI company executives and investment managers for specific examples of how AI works. I have listened to many interviews with CEOs of AI companies and come away with zero sense of what problem the company solves or even what it is selling. The same is often true of companies supposedly using AI to enhance their product offerings. And advertisements for AI companies are often similarly opaque.
Many companies unworthy of the acronym now boast about being “powered by AI” without bothering to define what this might mean. Marketers can help their companies stand out with a few specific examples (or “use cases,” in the lingo of this world). What exactly is the product being put in customers’ hands? How does it make people’s lives better? What is the AI component, precisely? Without such information, companies are vulnerable to accusations of “AI washing.”

Counter perceptions of AI washing. Techopedia offers an excellent and comprehensive definition of AI washing. Essentially, AI washing is similar to greenwashing―i.e., falsely claiming to invest for the public good so as to capitalize on investors’ social and environmental concerns. Techopedia provides guidance on how to avoid companies engaged in AI washing, including a short list of pointed questions designed to reveal precisely how a company defines AI.

Get ready for challenging questions. In a market that is often skeptical yet hungry for knowledge, investment company professionals should be prepared to answer a number of defining questions about the role of AI in their own businesses and portfolios.
All the acronyms (AI, AGI, LLMs). Million- and billion-dollar funding rounds. Trading performed mainly by algorithms … The world of AI is still complex and confusing to many asset allocators. Investment company professionals who help navigate the complexity will likely find favor with audiences starved for specificity and clarity.