Douglas Squirrel
Director, Squirrel Squared Limited

Douglas Squirrel has been coding for forty years and has led software teams for twenty. He uses the power of conversations to create dramatic productivity gains in technology organizations of all sizes. His experience includes growing software teams as a CTO in startups from fintech to biotech to music, and everything in between; consulting on product improvement at over 200 organizations in the UK, US, Australia, Africa, and Europe; and coaching a wide variety of leaders in improving their conversations, aligning to business goals, and creating productive conflict. He lives in Frogholt, England, in a timber-framed cottage built in the year 1450. He is the author of Squirrel’s Tech Radar, Decoding Tech Talk, and Agile Conversations: Transform Your Conversations, Transform Your Culture, co-authored with Jeffrey Fredrick.  


What’s up with AI bots and food? First there was a grocery app giving users a recipe for chlorine gas, then Google’s Gemini started telling searchers they should eat rocks every day, and recently McDonald’s had to fire its AI drive-thru order-taker when, among other bloopers, it tried to deliver bacon ice cream to a bemused customer. It seems we’re a long way from the Star Trek replicator that produces a perfect cuppa the instant the Captain announces he wants tea.

My Three Laws of AI

Here is the advice I gave at a recent session with London CEOs.

1. Keep your bot offstage. Let your bot do research or roleplays with you, but don’t let it talk to customers directly. This is the main law violated in all three of the food flubs above.

2. Remember, you might not own what your bot writes. Authors and actors certainly don’t agree that Google and Microsoft have a right to “scrape” their compositions and serve them back to you via a large language model. Don’t assume that creators are going to lose in court or in popular opinion; have a plan B if your AI’s writing turns out to belong to someone else. This is especially important if your developers use GitHub Copilot and similar tools to write code. Who owns that code may matter a lot to your investors, for example.

3. Trust, but verify, what your bot tells you. Confabulation, the generation of plausible but inaccurate or fabricated information, is a common characteristic of AI language models when they produce responses based on limited or incomplete knowledge. An even larger problem is that what they produce is so terribly boring and obvious. I say that you need a centaur, a human/machine hybrid, to guard against both the comical and the commonplace.

Putting My Own Advice to Use

Let me illustrate further by briefly describing how I use AI in my own work and business, while keeping the Three Laws in mind.

1. ChatGPT suggests synonyms when I don’t want to repeat words, and short sayings when I’m looking for a particular turn of phrase. For example, I asked it to think up two words, one meaning “funny” and another “boring”, both starting with the same letter. It came up with several examples, and I picked my favorite pair, the comical and the commonplace, which I used earlier in this article.

2. I ask ChatGPT (who in turn asks Dall-E) to make me images to use in promoting my free community events on LinkedIn and Twitter, like this cute centaur:

3. I have a researcher find public sources for clients, like their LinkedIn posts, blogs, and Twitter feeds, then put those pages into Zenfetch and ask it which of my “chestnuts” and other one-page summaries might best match their current interests. Then they get a personalized email (every word from me, no bots allowed!) sharing what I’ve written and its relevance to them.

4. Laura, our community manager, ran a survey recently asking for the interests of all my Squadron members. This generated a few hundred categories, from accounting to well-being, and I put all those into ChatGPT, plus the text of one of my forum posts. The model quickly tells me who’d be most interested in that particular article, and we @-mention (tag) those people to draw their attention to it.

5. I’m hoping to get my hands on an Apple Vision Pro soon so I can be (I’m told) 42% more productive with lots more monitor space. The goggles are full of computer vision wizardry and machine-learning algorithms, but I won’t have to think about those models at all; I can just enjoy putting apps all over the study and deepfaking myself.

6. Finally, purely for fun, I generate trashy trance music using Suno AI. It’s surprisingly good at standard genre elements like breakdowns, pulsing synths, and “happy” chords, though the lyrics are utterly laughable and the overall effect is mindlessly derivative. But this is just background “junk food” music for me, and it does that job perfectly adequately.

It’s Your Turn to Put My Advice to Use

I hope these examples help you see how you might use AI for your personal and business productivity without falling foul of the Three Laws. Notice, for instance, that I’m using ChatGPT to generate brief phrases and images, but I don’t let the bot do any actual writing for me, and I could easily delete the images from social media if someone complained about their provenance.

In addition, I act as the centaur for my research on client and Squadron-member interests, supervising and interpreting the research results rather than blindly trusting the model. That means putting humans in the driving seat, having a contingency plan for intellectual property challenges, and keeping the robots safely hidden backstage.

If you follow these simple rules, you won’t be serving delicious glue pizza (“Add some glue. Mix about 1/8 cup of Elmer’s glue in with the sauce. Non-toxic glue will work.”) as recommended by Google when asked how to keep the cheese from falling off.
