Tuesday, 19 August 2025

AI: How May AI Tools Impact Our Thinking

I've spent some time reading and summarizing a couple of papers that discuss how AI is impacting our thinking. Afterwards, I also had an AI agent summarize the papers and discuss my conclusions.

Your Brain on ChatGPT

This paper from MIT compared three small groups of students who were to write essays with different levels of assistance: no assistance at all, search engines, and large language models.

The term self-efficacy reflects a student's confidence in their own ability to learn. The report mentions that students low in self-efficacy may use LLMs to a larger extent than self-confident students.

Cognitive Load versus Engagement

There is a difference between high-competence and low-competence use of LLMs. High-competence users apply LLMs strategically for active learning, revisiting and synthesizing information to create coherent knowledge structures. This reduces cognitive strain while keeping them engaged with the material.

LLM bots such as an Instructor Bot or an Emotional Support Bot can improve performance and reduce stress.

Cons of LLMs: laziness, a single answer compared with a range of web search results, no person-to-person discussions, and more superficial, effortless learning.

The Illusion of Thinking

Large Reasoning Models (LRMs) are LLMs that can perform some kind of "reasoning" in multiple steps. Apple has investigated some models on simple puzzles. Depending on task complexity, LLMs are better (simple tasks), LRMs are better (moderately complex tasks), or both collapse (complex tasks).

Put differently: will future employers allow employees to spend several hours actually reading about and understanding complex topics, or will employees be expected to use LLMs to quickly generate convincing TL;DR results that are more or less factual? The latter option may make employees lazy, and worse at problem solving and independent thinking.

Thursday, 17 July 2025

Exploring AI Tools for Coding

As I work in tech, the developments in AI will have a serious impact on my work, even if I'm not a software developer. 

My summer project (on the rare occasions when I'm not focusing on my family, house and geopolitics) will be to investigate tools for coding and working in tech using AI tools that are available today. Maybe I'll reboot my train project.

Initial Youtube videos for exploring AI

Of the tools below, I'll focus on GitHub Copilot, Gemini and Cursor. I have some experience with the former two. I'll keep an open mind and try to avoid the inevitable flamewars that come with any new technology. Many of the cool tools will vanish when the AI world enters the next AI winter, so it will be hard to pick winners; I'll stick to focusing on a few tools.

This is an overview of some current tools for developers.
Tools to check: React, Express, Tailwind, Redux and Deno for web development.

Vibe coding fundamentals

Vibe coding is a bit like having a junior developer on hand who can help with some basic, imperfect coding. Still, there will be a need for coding, design thinking and debugging.

This one summarizes the Google AI Essentials course.

Break down complex problems into specific tasks.

Four levels of thinking for vibe coding: Logical, Analytical, Computational and Procedural

Tools to check: Replit - Windsurf - Cursor

Fundamental skills: The Friendly Cat Dances Constantly.

  • Thinking - have a clear description of the problem. PRD - Product Requirements Document
  • Framework - help the AI help you find a framework that solves your problem
  • Checkpoints - use Git
  • Debugging
  • Context 
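The checkpoint step above can be sketched as a small Git workflow. This is a minimal example, not part of the course; the directory, branch and file names are invented for illustration:

```shell
# Demo repo in a throwaway directory (names are just examples)
mkdir -p /tmp/vibe-demo && cd /tmp/vibe-demo
git init -q -b main
git config user.email you@example.com && git config user.name "You"

echo "initial code" > app.txt
git add -A && git commit -qm "initial commit"

# Checkpoint: branch off before letting the AI change things
git checkout -q -b ai-experiment
echo "AI-generated change" >> app.txt
git add -A && git commit -qm "checkpoint: AI change works"

# If the next AI suggestion breaks things, roll back to the last checkpoint
git reset --hard -q HEAD

# Keep the result once you are happy with it
git checkout -q main && git merge -q ai-experiment
```

Committing after every working AI-generated change makes `git reset --hard` a cheap undo button when the next suggestion breaks something.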

Gemini has some support for advanced research. To be added to my backlog.

Prompt engineering: Tiny Crabs Ride Enormous Iguanas

  • Task
  • Context
  • References
  • Examples
  • Iterations
    • Revisit prompting framework
    • Separate prompts into shorter sentences
    • Try different phrasing or switch to analogous task
    • Introduce constraints
    • Check the following in prompt responses
      • Is the output accurate and unbiased?
      • Does the output contain sufficient information?
      • Is the output relevant?
      • Is the output consistent across repeated runs?
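As a concrete illustration, the first four parts of the framework above can be assembled into a single prompt string. This is my own minimal sketch; the topic and wording are invented examples, not from the course:

```shell
# Build a prompt following the Task/Context/References/Examples pattern
build_prompt() {
  # $1 = task, $2 = context, $3 = references, $4 = examples
  printf 'Task: %s\nContext: %s\nReferences: %s\nExamples: %s\n' \
    "$1" "$2" "$3" "$4"
}

build_prompt \
  "Summarize the attached paper in five bullet points." \
  "The audience is non-expert blog readers." \
  "Your Brain on ChatGPT (MIT)" \
  "Bullet style: short, plain sentences."
```

If the first answer misses the mark, the Iterations step kicks in: rephrase, shorten, or add constraints, then run the prompt again.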

Glossary:

Shot = Example
Persona - ask the AI to act as an expert in a specific field
Context
Task

Links

https://aistudio.google.com/prompts/new_chat

https://github.com/i-am-bee    Bee agent framework

Brilliant.org

https://grow.google/prompting-essentials/