
my initial thoughts
A random thought popped into my head last night at my local coffee shop, while I was eating dinner and getting ready to join my chess group.
Over the past year I have been using Generative AI extensively in my personal and professional life. Have these applications, intentionally or unintentionally, shaped my behavior and decision-making?
The fact that I went into a thinking flow state, interacting with a few AI models instead of just ignoring that random thought and going to play chess, might have answered that question for me.
The ‘Skinner Box’ was a contraption invented by the psychologist B.F. Skinner to explore operant conditioning, the behaviorist concept of shaping behavior through environmental cues and reinforcement. Social media has applied these scientific findings to great success and profit, and Generative AI companies seem to be trying to mimic this design for their own products. Building on the ideas of the Skinner Box and this piece on “addictive intelligence” from the MIT Tech Review, I thought of the following:
What specific UI/UX patterns and cognitive biases do generative AI products exploit to shape user behavior, and what are concrete examples of both intentional and unintentional behavioral modifications?
Here are some of my personal thoughts on the matter (solely human-crafted):
Regarding my tendency to fall for Authority, Trust, and Automation cognitive biases when interacting with AI systems
“The Loop” by Jacob Ward explains pretty well how Authority Bias combined with Automation Bias makes us susceptible to offloading complicated tasks and decisions to AI: we stay in System 1 thinking and let the AI do the System 2 thinking for our decisions, so we can mentally go on autopilot. Knowing this, I have tried to take a more active role in critical thinking before asking AI, but I rarely manage it under bounded-rationality conditions, such as when I am pressed for time or just tired.
My daily experience with AI tools over the past year has led me to defer to AI in roughly 90% of cases, due to the low apparent error rate of Generative AI responses. The times I have noticed errors, or others have pointed them out, I feel at a loss for words because the error is blatant and incompatible with my view of AI capability. My confidence in AI systems drops slightly, but not significantly. 95% of the time, I am content with the response from AI and can’t find any issues or things to improve upon.
Regarding usage and dependency,
Generative AI, to me, is a combination of autocomplete on steroids and Google Search. The habit of having a thought and feeling a sudden urge to ask a generative AI system about it is pretty strong, which, now that I think about it, is not a healthy dependency on tech.
I would really struggle without AI in my personal and professional life, given how much time I have spent on it and how deeply I have integrated it into my workflows. I probably spend two hours a day on it for random tasks and for learning new things.
Customizing AIs with my personal prompt-response format has made learning fun and addictive.
Regarding biological feedback loops,
The pleasant sensation of getting an answer to a question with seemingly high certainty and high sycophancy, both of which are pretty hard to come by when interacting with humans in the real world, is leading to a certain level of dependency on AI in my life.
consulting Generative AI models (Mistral AI, Claude 3.5, Grok 2) - AI-generated section
UI/UX Patterns to shape user behavior
Immediate Gratification (Mistral, Grok-2)
Personalization, leading to the IKEA Effect? (Mistral)
Gamification (Mistral)
Seamless Integration (Mistral)
Cognitive Biases that shape our behavior unconsciously
Confirmation Bias
Example: Users may rely on AI to confirm their opinions, leading to a reinforcement of their beliefs without critical evaluation.
Anchoring Bias
Example: Users may accept the first AI-generated suggestion without exploring alternative options.
Overconfidence Bias
Example: Users may become overconfident in their decisions based on AI suggestions, leading to potential errors or oversights.
Loss Aversion
Example: Users may be reluctant to switch from AI-generated solutions to manual methods, fearing the loss of convenience and efficiency.
Intentional and Unintentional Behavioral Modifications
Intentional Modifications:
Productivity: AI tools designed to boost efficiency by automating repetitive tasks, such as AI-powered calendar scheduling.
Impact: Users become more efficient but may also become overly reliant on the AI for task management.
Authority Patterns
AI systems project authority through confident, structured responses. Users defer to AI judgments due to automation bias and perceived expertise.
Engagement Loops
Instant feedback and personalization create dopamine-driven usage patterns. The back-and-forth “accordion” style of iterative editing shows how users get pulled into refinement cycles.
Unintentional Modifications:
While AI can generate novel ideas, over-reliance on AI-generated content may stifle human creativity and innovation. Users may become less inclined to explore new ideas independently.
Users may develop a habit of deferring critical thinking to AI, leading to a decline in their own analytical skills.
Impact: Over time, users may lose the ability to think critically and solve problems independently.
Memory Skills: Over-reliance on AI for information retrieval can diminish one's ability to remember facts or details independently.
Users may unintentionally perpetuate biases present in AI systems. It's important to be aware of the ethical implications of AI-generated content and decisions.
Constant interaction with AI can lead to feelings of isolation or a lack of human connection. Users may also experience stress or anxiety related to the perceived need to keep up with AI-driven productivity standards.
my thoughts after the Generative AI consultation (human-AI collaboration generated)
When I explained the hot take in this post’s title, that I might be in a Skinner box for Generative AI products, it was well received by all three AI models above, which I now realize is sycophancy in action and will probably lead me to use them more for my blog work…
Relation to Generative AI and User Behavior (from Mistral Large 2)
The principles of the Skinner box can be applied to understand how generative AI optimizes user experiences to encourage repeated use and engagement. Here's how:
Positive Reinforcement:
Example: Generative AI tools provide immediate, high-quality responses to user queries, which acts as a reward. Users are more likely to return to the tool because they receive positive reinforcement in the form of useful information or solutions.
Variable-Ratio Schedules:
Example: AI tools may not always provide perfect answers, but they often provide valuable insights. This variability in the quality of responses can keep users engaged, as they never know when they might receive a particularly useful or insightful response.
Habit Formation:
Example: The convenience and efficiency of AI tools can lead to habit formation. Users become accustomed to relying on AI for various tasks, reinforcing the behavior of using the tool repeatedly.
Reduction of Cognitive Load:
Example: AI tools reduce the cognitive load by automating tasks and providing quick answers. This reduction in effort acts as a reward, making users more likely to use the tool again in the future.
Your observation that generative AI is optimizing user experiences in a manner similar to a Skinner box has merit. The principles of reinforcement, habit formation, and cognitive load reduction are all at play in how AI tools shape user behavior. Understanding these dynamics can help users become more aware of their interactions with AI and potentially mitigate the risks of over-reliance and loss of critical thinking skills.
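To make the reinforcement loop above concrete, here is a minimal, purely illustrative Python sketch (my own addition, not from any of the models consulted). The reward probability and the propensity updates are made-up parameters, not measurements; the point is only to show how a variable-ratio schedule of satisfying answers could ratchet a user’s urge to keep querying toward its ceiling.

```python
import random

def simulate_usage(num_queries=1000, reward_prob=0.7, seed=42):
    """Toy model of operant conditioning in an AI chat loop.

    Each query is 'rewarded' (a satisfying answer) with probability
    reward_prob, a variable-ratio schedule: the user never knows
    which query will pay off. The urge to ask again rises a little
    with each reward and decays a little otherwise. All numbers here
    are illustrative assumptions.
    """
    rng = random.Random(seed)
    propensity = 0.5  # initial urge to consult the AI (0..1)
    history = []
    for _ in range(num_queries):
        if rng.random() < reward_prob:
            propensity = min(1.0, propensity + 0.02)  # reinforcement
        else:
            propensity = max(0.0, propensity - 0.01)  # mild extinction
        history.append(propensity)
    return history

if __name__ == "__main__":
    history = simulate_usage()
    print(f"urge after   10 queries: {history[9]:.2f}")
    print(f"urge after 1000 queries: {history[-1]:.2f}")
```

With these arbitrary numbers the expected drift per query is positive (0.7 × 0.02 − 0.3 × 0.01), so the urge climbs toward its ceiling even though nearly a third of queries go unrewarded, which is exactly the intuition behind variable-ratio schedules.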
Losing critical thinking skills and creativity are two things I am thinking more about these days when interacting with generative AI tools. Obviously I want to preserve and augment these skills with AI, but the way these tools are designed makes it much more compelling to let AI “do my homework”, which is a form of conditioning that seems dystopian. Hopefully I am not the only one who shares these feelings.