I’ve spent some time dabbling with large language models like ChatGPT and Bard.
I’m absolutely convinced that AI and large language models, despite their challenges, are the future.
I thought I would try putting into words what it is I think they offer, and also recognize some of their limitations.
What Are LLMs and Generative AI?
It might help to level-set a little by providing a simple description of what a large language model is. Think of it as a large data set paired with a self-supervised learning algorithm that finds and summarizes the statistical relationships in that data. The result is a program that can generally understand language and generate probably accurate responses.
I’m adding extra emphasis on probably because it’s important to understand that the algorithm is a probability engine, which means it will sometimes get things wrong, leading to some interesting results.
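To make “probability engine” concrete, here’s a toy sketch of what generating text by sampling from learned probabilities looks like. The table of numbers below is made up purely for illustration; a real LLM learns billions of such relationships from its training data:

```python
import random

# Toy "language model": hand-written next-word probabilities.
# A real LLM learns these relationships from its training data;
# the numbers here are invented for illustration.
next_word_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "moon": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.6, "barked": 0.4},
}

def generate(start, max_words=5):
    words = [start]
    while len(words) < max_words:
        dist = next_word_probs.get(words[-1])
        if dist is None:  # no data for this word, so stop
            break
        choices, weights = zip(*dist.items())
        # Sample the next word by probability: usually sensible,
        # occasionally odd. That's the "probably" part.
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat", but it varies run to run
```

Scale that hand-written table up to everything learned from a web-sized corpus and you have the intuition: output that is usually sensible, but only ever probably right, and only as good as the data behind it.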
The other challenge is the quality of the data set. Humans are prone to errors and bias, which can show up in data and influence the outcome. Of course, people are also influenced by poor data and the biases of others. That doesn’t change the fact that an AI algorithm is very good at learning the processes and systems that people create to make repetitive tasks efficient.
Tasks that can potentially be augmented
AI will shine as an augmentation tool in any kind of repetitive process. I suggest this includes fields with defined “best practices”, including my own field of design and development. I emphasize augmentation because there is a fallacy in the argument that AI will negate the need for people: the data to train AI has to come from experienced people, and experienced people have to start from the bottom.
AI is already used in the market in places like chatbots and fraud detection. But perhaps surprisingly, it is also being tested and used in the fields below.
Medical
The pattern-recognition capabilities of AI have been helpful in diagnostic imaging and cancer care. Recently, a drug for idiopathic pulmonary fibrosis that was discovered and designed by a generative AI entered clinical trials.
In another study, AI was found to provide clinical accuracy comparable to that of human doctors, while offering safer triage recommendations.
Law, Specifically Paralegal Work
AI is already used in law for processes like e-discovery: collecting, storing, reviewing, and exchanging case-related information in digital format. It’s also useful for expediting searches through case documents. Contract-related work is another area where AI is useful, since it can draw on market-standard, publicly filed examples.
Education
AI is a capable tutor. Its ability to generate natural-reading language makes it useful for explaining complicated math problems.
Graphic Design and Web Development
I have to admit, this one is a little scary for me, as it’s my wheelhouse. There are several image-generating engines available online, such as DALL·E. There are potential issues with using tools like these (see “What About Copyright?”). AI can also write code in just about any language; I have personal experience using it to write complicated regular expressions (see “Personal Experience”).
What About Copyright?
The challenge with LLMs is that, because they are generative, they produce content based on probable statistical relationships. Unless a check is performed afterward, there is no way to know whether the output reproduces a copyright-protected work.
This happens in the creative world too, but there are measures for accountability in those instances. We don’t yet have a good understanding of accountability for the reproduction and use of copyrighted works generated by an AI.
Personal Experience
I have some personal experience using LLMs. I find ChatGPT is great for less volatile scenarios that rely on a mature knowledge base; in my case, that has included writing complicated regular expressions and summarizing video captions into a blog post.
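To give a flavour of the regex use case, here’s the kind of pattern I’d ask for. The pattern and test string below are hypothetical examples for illustration, not my actual requests:

```python
import re

# Hypothetical example: find dates written either as "2023-05-17"
# or as "May 17, 2023" in free-form text.
pattern = re.compile(
    r"\b(?:\d{4}-\d{2}-\d{2}"                       # ISO style: 2023-05-17
    r"|(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[a-z]*"
    r"\s+\d{1,2},\s*\d{4})\b"                       # long style: May 17, 2023
)

text = "Posted May 17, 2023 (revised 2023-06-01)."
print(pattern.findall(text))  # ['May 17, 2023', '2023-06-01']
```

But there’s one particular example that I want to dive into: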
“Patient Symptom Game”
In May 2023, I decided to tinker with ChatGPT to test a hypothesis: that, if instructed to do so, ChatGPT could be a useful tool for diagnosing difficult medical cases based on seemingly isolated symptoms.
The cool thing about tools like ChatGPT is that if you don’t think its responses are leading in the right direction, you can iterate on your prompt to steer it. After a few iterations, I arrived at this prompt:
"I'm going to give you a patient's symptoms, one at a time. You will present a possible list of diagnoses in order or most likely to least likely. You will also ask me any clarifying questions to aid in your diagnoses. With each additional symptom I present, you will revise your diagnoses from most likely to least likely, and ask further clarifying questions to aid in your diagnoses."
I began to input the symptoms my son Toby was having over the course of a year, before we later learned he was sick with cancer: fibrolamellar hepatocellular carcinoma. This cancer is rare enough that there is little data on how many people have it; it is thought to make up anywhere between 1 and 5% of all liver cancers. Until the tumour gets larger, many people with this cancer do not experience symptoms. When they do, they can include:
- Pain in the abdomen, shoulder, or back
- Nausea and vomiting
- Loss of appetite and weight
- Malaise (having less energy or just feeling unwell)
Toby did have these symptoms, but not at the same time. Complicating this was the fact that his family doctor at the time was dismissive of Toby’s symptoms, so we started taking him directly to the hospital. Then, COVID-19 locked everything down.
Around the summer of 2019, Toby was often unusually sweaty.
In February of 2020, he experienced night nausea.
That summer, he appeared to be losing weight and had less energy.
In August, he looked thin and pale.
In August/September, his back was hurting. His liver hadn’t failed yet, but at this point ChatGPT was correctly diagnosing a liver issue, cancer, or both. By October, Toby’s liver had failed and he received his cancer diagnosis.
It is said that with cancer, early diagnosis is key to maximizing chances of survival. Toby’s case was challenging because even after his liver failed, diagnostic imaging was having trouble locating the tumour that they knew was there. So, it’s possible that they would not have been as persistent in scanning for a liver tumour if his liver hadn’t failed yet.
But there’s also a chance that, with enough compelling data, they might have known where to look and found the tumour. Then they might have gotten Toby onto immunotherapy sooner, before his liver failed. That might have been enough to shrink the tumour to the point where it could have been resected, and he might still be alive today, three years later: long enough to trial a promising vaccine that might be viable for managing the tumour.
Diagnosing him a month or two sooner could have given Toby years more of life. AI-augmented diagnostics will absolutely make a difference for countless cancer patients in the future.