Let AI Do the Heavy Knowledge Lifting

Much has been written about the avalanche of AI content bombarding us in the digital realm. Copywriters and editors on LinkedIn, for instance, lament the myth that an em-dash means “AI did it,” while YouTube videos and podcasts [1,2] help viewers recognise when “AI did it.”

Bloggers draw attention to the AI slop served up in the marketing world [3], and those who work online express irritation at how intrusive it is, with a “How can I help you?” at every click. Now Firefox has “Kit.”

So, what does AI like ChatGPT actually do? Basically, it gathers information from a very large digital data pool (very quickly) and synthesises that encoded information for a purpose. That purpose, however, rests entirely on the instruction given or the question asked. So, bullshit in, bullshit out: I learned that in the very early 1980s while processing survey data in a corporate environment.

AI’s capacity to synthesise information is incredible, but its thinking process, the putting together of that information, is purely (and only) rational. AI does not have “aha” moments, flashes of brilliance, creative spurts, or a sense of wonder when patterns are recognised. These non-rational thinking processes have generated some of the most outstanding art and scientific breakthroughs. For example, the chemist August Kekulé dreamed of a snake biting its own tail, which led to the understanding of the benzene molecule’s cyclic structure [4], and René Descartes imagined the world as a grand machine, laying the conceptual groundwork for the idea of machine consciousness [5].

AI is also not inspired by angels, nor does it struggle with demons. Nor does AI respond with its instincts, as all flesh-and-blood creatures do. AI is a machine; by comparison, non-rational and irrational thinking processes remain exclusively human.

So, considering that the various types of knowledge (14 of them, according to some [6]) can be loosely categorised as personal, social, and digital, we know the following: First, AI has no access at all to the personal level. Even if it collects your health data via your smartwatch, that is data, and data is always second-hand. Second, the knowledge we share about our experiences is already detached once it is encoded in language (words, numbers, symbols, images) to become social knowledge. AI must then digitise that second-hand social knowledge, making what it spits out third-hand information. Third, AI is limited in its thinking process. It is incapable of being unpredictable, following a hunch, or taking an imaginative leap: those are human strengths.

So, let AI do the heavy knowledge lifting with its logical, systematic, and methodical processes, and focus instead on the blessing of being human: unpredictable, gutsy, and imaginative.

  1. NOVA PBS Official. (2025, Oct. 12). How to Detect Deepfakes: The Science of Recognizing AI Generated Content. https://www.youtube.com/watch?v=GMoOCKkcd_w
  2. NOVA PBS Official. (2025, Aug. 26). The Deepfake Detective | Particles of Thought. https://www.youtube.com/watch?v=nG2_GhNdTek
  3. Robinson, Stephan. (2025, Oct. 21). AI Slop is Creating New Freelance Work: Why Businesses Still Need Human Experts in 2025. https://www.peopleperhour.com/discover/guides/ai-slop-is-creating-new-freelance-work-why-businesses-still-need-human-experts-in-2025/
  4. Read, John. (1957). From Alchemy to Chemistry. Courier Corporation.
  5. Sanderson, Daniel. (2025, Oct. 11). The Role of Imagination in Scientific Hypotheses and Memory and Imagination. https://www.planksip.org/the-role-of-imagination-in-scientific-hypothesis-and-memory-and-imagination-1760233400612/
  6. Drew, C. (2023, March 2). The 14 Types of Knowledge. https://helpfulprofessor.com/types-of-knowledge/
