How can we use the “love your neighbor” principle to navigate and shape a world with AI?
With every new technology comes a host of benefits and drawbacks. The introduction of the automobile allowed people to have a wider radius of daily travel, but it also impacted the intimacy and cooperation of local neighbors. Cell phones have allowed us to stay connected with people who aren’t physically near us, but they have also distracted us from what’s immediately in front of us.
What is Artificial Intelligence?
Artificial Intelligence, or AI, is the ability of computers to perform tasks that seem human, and it holds great promise to improve many aspects of our lives. Yet it’s important to understand AI’s limitations and drawbacks.
AI encompasses a wide variety of types and approaches with present or potential uses in different environments, drawing on different kinds of large data sets to predict outcomes or automate tasks across many scenarios. Specific applications of AI include automating back-office functions, helping teachers grade homework, updating routes for logistics companies, and assisting with medical diagnostics. Experts debate whether a general, all-encompassing form of AI is decades away, or generations.
What is a Large Language Model AI?
A Large Language Model (LLM) is an AI algorithm trained on a massive data set of written language, often one essentially scraped from the internet. LLMs like the one that powers ChatGPT are one way of achieving artificial intelligence and have seen an acute spike in popularity in recent years. A large language model takes a written input, like a prompt or a question, and generates a response. Each response is based on the statistical likelihood of which word follows another, learned by reading millions of texts.
Essentially, LLMs are a glorified and vastly more complex ‘autocomplete’ function, similar to the feature that suggests short replies to the emails you receive.
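To make the ‘autocomplete’ idea concrete, here is a minimal, purely illustrative sketch in Python. Real LLMs use neural networks trained on vast corpora; this toy model simply counts which word most often follows another in a tiny made-up text, which is the same statistical intuition in miniature.

```python
from collections import Counter, defaultdict

# A tiny made-up corpus; real models are trained on millions of texts.
corpus = "the cat sat on the mat . the cat saw the dog .".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most likely to follow, based on observed counts."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

The sketch also hints at why hallucinations happen: the model outputs whatever is statistically plausible given its training text, with no notion of whether the result is true.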
What are positive uses of AI?
Knowing that a Large Language Model AI produces its response based on statistical predictions about word patterns, not truth, can help us understand where it is most useful and where it can be counterproductive or even harmful. LLMs have tremendous implications for productivity: drafting the text of various communications, shortening research time, reading legal documents, and even generating images or videos (using more complex text-to-image models).
This increase in productivity will have a massive impact on investing—creating foundational shifts in how industries operate and enhancing processes for nearly every individual company.
Hallucinations and fictions: the present negatives of AI
There are, however, ways that LLMs can be counterproductive and even harmful if we don’t recognize their drawbacks. They can produce responses that are complete fiction, known as “hallucinations”. In June of this year, a federal judge imposed fines on two lawyers for “submitting non-existent judicial opinions with fake quotes and citations created by Chat GPT.”1 These lawyers used ChatGPT to help find opinions they could cite for a case they were working on, and because it relied on its language-prediction process, it fabricated cases that sounded real; the cases, and even the quotes within them, were entirely fictional.
Another example of a hallucination: you can ask ChatGPT what books to read to better understand economics, and it may give you the names of real authors paired with book titles that sound like the types of books those authors would have written, but that don’t actually exist.
What are some of the risks of AI?
Understanding the drawbacks of LLMs should help us maximize their use in business and personal productivity, but caution us in how much or in what ways we depend on them.
The Risk of Deception: Language has power over how we understand the world and engage with each other. If AI models can beat humans at chess or generate a more accurate and faster report than any person, then what might a world look like in which the most compelling orators who teach and inspire us on how to live aren’t human? In recent years we have had much discussion over “fake news” and misinformation that has spread widely on the internet and social media, with very real implications for our culture, politics, and social fabric.
We also risk deception in any scenario where an end user relies on an AI that is fed faulty data or that draws faulty conclusions. These include scenarios where input data reflects human biases and assumptions, placing a veneer of objectivity over outcomes that reinforce pre-existing problems.
The Risk of Identity Loss: The recent emergence of AI has re-sparked age-old conversations about what it means to be sentient and what it means to be human. Here the New York Times contemplates, “Can Intelligence Be Separated from the Body?”2 At Eventide, one of our core values is the intrinsic and immense worth of every human being. This value is rooted in the Christian concept of the imago Dei, the idea that every human is in some way a reflection of the image of God.
The risks of identity loss and the persuasive powers of LLMs combine dangerously if we don’t have an unshifting anchor grounding us in the understanding that every human being is imbued with intrinsic dignity and worth. One can imagine a scenario in which an LLM forms an instrumental valuation of humanity, fed by texts reflecting utilitarian beliefs about human value, and then persuades masses of people to judge other humans by how productive they can be, a conclusion devastating for anyone ill, disabled, or too young or old to be sufficiently “useful.”
The Risks of a Changing Economy: Just as many jobs previously done by humans were taken over by machines during the Industrial Revolution, a similar revolution lies before us. Today, we have the opportunity to imagine and plan for a situation that maximizes human flourishing; if we do not, we risk a scenario in which a small number of people benefit but society as a whole suffers.
Jeff Van Duzer, former Dean of the Business School at Seattle Pacific University, writes that the purpose of business is twofold: “To provide a community with goods and services to enable it to flourish. To provide people with opportunities for meaningful work.”3
Thinking about business in this way provides a useful guide for integrating AI into a future economy: offloading work that is unsatisfying, and finding ways to financially reward the types of work that are uniquely human.
Our Responsibility as Investors
As investors, we are at the helm of fueling and shaping the businesses that influence how our society functions. At Eventide, we take this role seriously and look for ways to allocate capital that promote the best uses of each new technology while avoiding or minimizing its potential downsides. For large language models in particular and artificial intelligence in general, it is increasingly important to anchor our approach in a proper anthropology, informed by the intrinsic and infinite value of every human being and promoting the flourishing of all.