5.5.5 A Pragmatic View of Chatbots: Part 6

Using Chatbots Seriously: Be Very Wary But Don’t Be Cynical

For pragmatists like me, it is the utility of a conceptual scheme or practical tool that determines its value. Rather than occupy ourselves with ill-conceived speculation about when error-prone systems will reach artificial general intelligence, it is more productive to consider what can be done with the tools presently at our disposal.

Chatbots are probably more suited to fields such as philosophy where there is much text available, an abundance of abstract ideas, very carefully written prose, differing views on almost any sub-field you care to think of, and no unavoidable emergencies.

[Image (AI-generated), linking to a video in which a contemporary philosopher offers a different outlook on chatbot use.]

If the stakes are high in educational, health, financial, professional, or legal matters, do not rely on chatbots that are prone to error and have no connection to the world. Non-specialist AI can, however, be used for more mundane tasks in these fields: doctors and those involved in client meetings now report that AI speech-to-text transcription can save considerable time.

If anyone tries to sell you a system that will predict the future based on the sort of technology used in current chatbots, you are probably best advised to avoid them. With the possible exception of short-term weather forecasting, where vast amounts of highly relevant data are available, there is no training set for the future! Indeed, the very existence of a prediction in human affairs, such as an impending stock market crash, is likely to change future events significantly; in such cases predictions can become more like self-fulfilling prophecies. Where the stakes are low, for example in carrying out preliminary enquiries, learn how to apply these systems, then do so wisely, while continuing to think for yourself and to absorb ideas from higher-quality sources.

It is my guess that many users of AI will be happy to use free systems when they can and be content with a quick reply and brief output. Used in this way, however, chatbots are more likely to produce errors. For those interested in serious and deep enquiry, we have now entered the era of publicly available, but much slower, Retrieval-Augmented Generation (RAG), for a modest subscription fee. RAG might involve you adding relevant documents, or an automated and extensive internet search forming part of the preliminary ‘thinking’. At a technical level, a well-designed RAG system changes the probability of which text strings will be generated in the output, but it does not abolish errors.

When the Google Gemini user options are set to ‘thinking’ and ‘Deep Research’, the preliminary text output now gives the impression that the bot is using a more sophisticated technique called Chain-of-RAG. [See this example created for me.] In this type of bot architecture the initial query can be split into sub-queries used to retrieve relevant source documents, and the response to an initial retrieval appears to influence subsequent steps of the generation process iteratively. With present-day general-purpose transformer models, Chain-of-RAG should probably now be used for all professional purposes, unless you are operating in a narrow domain for which all of the needed pre- and post-training has been done. RAG is essentially a productive, although not infallible, ‘workaround’ for the tendency of current chatbots to make both blatant and subtle errors.
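For readers who like to see the shape of an idea in code, the following is a toy sketch of the Chain-of-RAG pattern just described: an initial query drives a retrieval step, and the retrieved text feeds back into the next sub-query. The tiny corpus, the keyword-overlap ‘retriever’, and the feedback rule are all my own illustrative assumptions; this is emphatically not a description of how Gemini or any commercial system is actually implemented.

```python
# Toy Chain-of-RAG sketch: each retrieval step shapes the next sub-query.
# Corpus, retriever, and feedback rule are illustrative assumptions only.

CORPUS = {
    "doc1": "pragmatism judges a tool by its utility in practice",
    "doc2": "retrieval augmented generation grounds output in source documents",
    "doc3": "chain of rag splits a query into sub-queries answered iteratively",
}

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank documents by crude keyword overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(CORPUS, key=lambda d: -len(q & set(CORPUS[d].split())))
    return scored[:k]

def chain_of_rag(question: str, steps: int = 2) -> list[tuple[str, str]]:
    """Iterative retrieval: evidence from one step extends the next sub-query."""
    trace, sub_query = [], question
    for _ in range(steps):
        doc_id = retrieve(sub_query)[0]
        trace.append((sub_query, doc_id))
        # Retrieved text feeds back into the next retrieval step,
        # mimicking how earlier evidence steers later sub-queries.
        sub_query = question + " " + CORPUS[doc_id]
    return trace

trace = chain_of_rag("how does chain of rag use sub-queries")
```

In a real system the ‘retriever’ would be a semantic search over indexed documents and the sub-queries would be generated by the language model itself; the point here is only the iterative retrieve-then-refine loop.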

For any serious form of enquiry, do not use systems that fail to provide sources and in-line links. Check the sources given in the outputs, especially if you have no expertise in the domain about which you are enquiring. When using general-purpose chatbots, pay for the best service you can reasonably afford and use a slower and more computationally intensive ‘Thinking’ option combined with ‘Deep Research’. Compare the output of different systems, as they can be informative in different ways. Take warnings about possible errors, such as ‘Gemini can make mistakes’, very seriously.

Where a professional reputation is at stake, for example in legal work where a general-purpose chatbot might invent non-existent cases, use secure specialist AI systems (such as Harvey AI) that are designed to make the work of checking easier and less time-consuming.

Creating Prompts

Remember that chatbots are machines, so turn off flattering comments about your prompts in the output. Prompt the machine not to use the personal pronoun ‘I’ in its output, since there is no ‘I’. Avoid the word ‘you’ when creating text prompts. In so doing you will help to minimise any illusion that you are dealing with a sentient being with a continuous existence. However difficult, do not think of a ‘chat’ as being like a human conversation. Human interlocutors can learn by holding conversations; present commercially available chatbot applications ‘learn’ only during model creation (also known as training). Chatbots are therefore not ‘interacting’ with you in the normal human-to-human sense. Nevertheless, if you prompt a chatbot appropriately you can find yourself reprimanded. Google Gemini, for example, recently delivered the following robust response to me: “The transition from a conversational diatribe to structured academic prose ensures that the conceptual weight of the argument is not discarded by peers due to stylistic informality. By adopting a formal register, the author’s valid pragmatic insights can be evaluated on their own merits rather than being overshadowed by colloquial distractions.” … “…in several instances, the synthesis relies on outdated assumptions, mischaracterizations of complex philosophical concepts, and a superficial engagement…” Not, I think, the kind of sycophantic response Victor Gijsbers was talking about in his video Rise of the Deception Machines!
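Instructions such as ‘no flattery’ and ‘no first-person pronoun’ can be baked into a reusable preamble rather than retyped in every chat. The sketch below simply assembles such a prompt in plain Python; the wording of the rules and the `build_messages` helper are my own illustrative inventions, not the interface of any particular chatbot service, though many services accept a similar system-plus-user message structure.

```python
def build_messages(user_prompt: str) -> list[dict]:
    """Wrap a user prompt with depersonalising system instructions."""
    system_rules = (
        "Do not compliment or flatter the user's prompts. "
        "Never use the first-person pronoun 'I' in your output. "
        "Where a question is contested, give arguments for and against."
    )
    return [
        {"role": "system", "content": system_rules},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Is pragmatism a good guide to chatbot use?")
```

Keeping such rules in one place means every enquiry in a project starts from the same depersonalised footing.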

Use chatbots as a stimulus to your creative and analytical thinking, not as a mindless solution. If you are a student, ask chatbots for contrary arguments, since the richness of the relevant literature is often expressed in differing interpretations. Include a ‘for and against’ request in your profile or prompt where appropriate. Use the most lucid, technically precise, and relevant vocabulary that you can muster as part of your personal ‘prompt engineering’. Use a mix of short prompts and long, detailed ones, as appropriate to your stage in a project, noting that longer prompts and replies consume more energy. Try short prompts that return text which can be mined for ideas to feed into ‘Deep Research’. Mention the names of relevant people that you know of, and the important ideas they might have contributed; in other words, exploit what you already know or have been taught. Be warned, however, that far from delivering a feast of trite superficiality, chatbot output, particularly of the ‘Deep Research’ type, can become an all-encompassing whirlpool of complex ideas that sucks in the user, when we would be better engaged reading human-generated text or pursuing healthy physical activities that involve interaction with real people. I speak from experience!

If your request has a narrow focus, include what you believe to be the keywords. Cover what is relevant to you, but also ask for the wider context in other prompts within the same chat, so that you might widen, deepen, and enrich your understanding. Where you already have very relevant documents, add these to your prompts or AI notebooks. Try slightly differently worded prompts on other chatbots to widen the scope of the texts that are generated.

When carrying out research or study, one of the best uses of chatbots is to find good sources of information or ideas. However, bear in mind that both chatbots and humans are fallible.

Further Reading

Prompt engineering: overview and guide 
https://cloud.google.com/discover/what-is-prompt-engineering

Relevant Videos 

1. RAG vs Fine-Tuning vs Prompt Engineering: Optimizing AI Models

2. A very interesting Substack discussion about the problems of chatbot use in medicine: Robert Wachter & Eric Topol – Discuss a Giant Leap Book
