The AI Revolution Is Here, and I’m All in… Maybe
If you only have a minute:
In this blog post, I talk about the use of AI tools for writing, including grammar and spelling checks, and a program for creating one-minute summaries. I express concern about relying too heavily on AI for communication, and emphasize the importance of human experience and wisdom in conveying complex concepts. Despite my apprehensions, I’ll continue to use AI for tasks such as lecture preparation and inspiration for writing, but will not pass off AI-generated content as my own without editing it. And, when it is all AI-generated, I’ll let you know.
If you have more than a minute:
Congratulations if you’ve noticed that my last couple of blog posts had a one-minute summary at the top. I’ve been creating these summaries with an artificial intelligence (AI) program and then editing the summaries to fit my writing style. I’ve also edited them for accuracy, as there are times when the AI makes mistakes.
The rest of my posts are edited for grammar, with another AI similar to Grammarly. That technology has been around in word processors for a while now. As a non-native English speaker (and writer), I rely on those technologies to ensure that what you read is comprehensible in English… and Spanish. Yes, I also use grammar and spelling checks on my writing in Spanish, because I grew up not writing Spanish.
What can I say? I’m a mutt when it comes to my education. I was in Mexico until fifth grade, when I was ten years old, and then in the US at a primarily Hispanic school district in El Paso, Texas. While I learned the basic rules of grammar in Spanish, I didn’t get to practice them as much. Then, once in the States, the influence of Spanish got in the way of fully practicing the English rules.
Thank God for Mrs. Wilson, though. She was my English teacher in my junior (and last) year in high school. She had a good way to teach the rules of how to write a sentence, and it’s something I’ve carried with me since. I still remember my last day of high school. Part of my check-out process was to take the final exam for her class ahead of time. She asked me to write a 500-word essay about the importance of being clear and concise in writing.
I went slightly over the limit.
I remember her looking at me over her reading glasses and asking me how I could write that essay so easily. “Words just come to me,” I told her. “Sometimes, they come at inopportune moments.” And they did. Once in college, I was diagnosed with hypergraphia, an uncontrollable urge to write. Or, rather, an uncontrollable urge to be creative, and my creativity comes out in writing.
If you only knew how many draft blog posts exist.
Later in life, I used that love of writing to try to win the affection of women. It was hit or miss. Not all of them liked the long and drawn-out missives I wrote to them, explaining how wonderful they were and how I wanted to share my life with them. Others, like my wife, appreciated it. She has kept all the love letters I’ve written to her, digitally and on paper.
Now that we have these AI tools for writing, I’m wondering if creative writing will go by the wayside. If a machine can write something for us, what’s the point of writing anything at all? It’s like making an actual phone call now. Those of us with the option of texting use it instead of calling, and we would rather email a business or find an online form than ask our question over the phone.
My worry as an educator and public health practitioner is that we will rely on AI to communicate complex concepts without the experience. While an AI may have all the knowledge, it will likely never have the wisdom. For example, we know that vaccines are necessary to control many communicable diseases. We know that there are people out there who do not want to get their vaccines, or vaccinate their children. AI can grab all the information from research on how to reach those anti-vaccine or vaccine-hesitant parents, but only our experience of gut-checking discussions with them — our reading of their reactions to us speaking to them — will allow us to filter that knowledge into something actionable.
Then again, it could be that AI is programmed well enough to read human emotion in the future, and I can just walk away from that fight… Only to find some other public health fight.
In the meantime, I’m looking forward to using AI to help me prepare my lectures, correct my writing, and offer inspiration on what to write about next. It can help me create an outline of what my lecture should be about, or offer resources for me to read about an issue and learn more. However, I do have one rule for myself (and my students) that I don’t think I’ll be breaking any time soon… I will not pass off something an AI writes as my own unless it is based on what I wrote (like the summaries) and I edit it to be mine. Doing anything other than that is tantamount to plagiarism in my book.
Color me dubious. Many of the technical samples I’ve reviewed tended to be wildly off, missing critical current research information, or full of popular results that weren’t evidence based. Overall, they needed as much work as if one had accepted a grade school student’s work and dressed it up, necessitating a complete re-write.
In one instance, the results were not only not right, but not even wrong. The rest were missing critical information that was far more up to date. When queried repeatedly, the AI initially denied that further information existed, then finally admitted the updated information and qualified its incorrect response with it.
It was an essay on dentistry.
So, Crayola is going to have to introduce a new color just for me – dubious.
Because currently, AI really stands for Artificial Idiocy. Maybe it’ll be ready in time to greet fusion power…
Huh, a discussion on YouTube on AI chatbots yielded this response from someone, which literally just arrived in my inbox: “Just ask for sources before you make a decision. The AI sometimes uses bad sources for its answer so just make sure it doesn’t hallucinate the answer.”
I suggested that hallucinate is a poor choice of word; confabulate is more accurate, reflecting the confabulation seen in people with brain dysfunction from injury or disease.
As near as I can tell, in human brains, it occurs when the neural networks in the brain signal that a memory was found, but when retrieval occurs, nothing is returned due to dysfunction, so a memory is confabulated from related(ish) memories, transparent to the patient.
That’s an interesting take on it. As I grow older, I keep wondering how many of my first memories are true, or just memories of memories. For example, I remember being two years old and having my grandmother go through the alphabet on fridge magnets with me. It’s how I learned to read at an early age. But is it how I remember, or is that the construct of memories of memories and stories I’ve heard?