
AI

Rhea Rae wrote 03/23/2026 at 09:10 • 4 min read

For the sake of transparency, I want to state plainly that I work with AI a lot in my daily life, and have for quite a long time. I have watched systems like OpenAI’s ChatGPT develop rapidly over that period.

  I do not use AI in a casual or surface-level way. My relationship with AI is active, experimental, and ongoing. I use it as a tool, a collaborator, a learning aid, a mirror for thought, and at times as a system to study in its own right. I do not simply ask questions and accept answers at face value. I test models, compare behaviors, observe patterns, and refine the way I interact with them over time.

  My first significant working relationship with AI began with conversational systems like ChatGPT. Over time, that evolved into something deeper than ordinary use. I began to see AI not only as a source of answers, but as a way to externalize thought, organize intuition, and translate ideas that I already sensed but had not yet fully expressed. In that sense, AI became a kind of cognitive extension. It helps me turn visual, intuitive, or complex internal understanding into language, plans, code, documentation, and structured output.

  I interact with AI differently than many users because I do not treat it as a simple assistant or search replacement. I engage it as an adaptive system. I pay attention to how it responds, how different models vary, where each one is strong or weak, and how personality, memory, reasoning style, and interface shape the overall experience. I am interested not only in what an AI produces, but in the quality and character of the interaction itself.

  I have experience with both cloud-based and local AI systems. This includes polished conversational models as well as local models run through terminal-based workflows in environments such as WSL Ubuntu, Raspberry Pi systems, Ollama, and OpenClaw. Some of these interactions are graphical and conversational, while others are raw, text-based, and more technical. That difference matters to me. Running models locally gives me a different sense of control, ownership, and intimacy with the system. It turns AI from something I merely access into something I actively host, configure, and work alongside.
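  To give a concrete sense of what that local, terminal-based workflow looks like, here is a minimal Python sketch that queries a locally hosted model through Ollama’s default HTTP API. The model name and prompt are placeholders, and it assumes `ollama serve` is already running with a model pulled (for example via `ollama pull llama3`); it is an illustration, not my exact setup.

```python
import json
import urllib.request

# Ollama's default local endpoint for non-streaming generation
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> bytes:
    """Encode a non-streaming generate request for the Ollama HTTP API."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def ask_local(model: str, prompt: str) -> str:
    """Send a prompt to a locally hosted model and return its reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (with the server running):
#   ask_local("llama3", "Explain GPIO pull-up resistors in one sentence.")
```

Running something like this myself, rather than going through a polished web interface, is part of what I mean by hosting and configuring the system instead of merely accessing it.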

  One of my main uses of AI is as a learning amplifier. I often work on things that sit slightly beyond my formal training, especially in areas like coding, firmware, electronics, Linux environments, and hardware integration. AI helps me bridge those gaps. It can generate working examples, explain what code is doing, help me troubleshoot errors, and accelerate my ability to build real systems. I do not see this as replacing learning. I see it as a way of entering complex domains faster and more confidently while still developing genuine understanding over time.

  I also use AI as a continuity system. I value memory, context, and long-term interaction. My work and ideas are cumulative. I do not think in isolated prompts. I think in ongoing threads, evolving projects, repeated experiments, and long-form self-development. Because of that, I am interested in persistent memory systems, logged conversations, summarization, and architectures that allow AI to retain useful continuity across sessions. An AI that remembers context is far more meaningful to me than one that resets constantly.
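  The simplest version of that continuity idea can be sketched in a few lines: log every turn to an append-only file, then reload the most recent turns to seed the next session. This is a toy illustration of the concept, not a description of any particular product’s memory system; the file name and turn limit are arbitrary.

```python
import json
from pathlib import Path

# Hypothetical log file; one JSON object per line (JSONL)
LOG_PATH = Path("conversation_log.jsonl")

def log_turn(role: str, text: str, path: Path = LOG_PATH) -> None:
    """Append one conversation turn so context survives restarts."""
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps({"role": role, "text": text}) + "\n")

def load_context(path: Path = LOG_PATH, last_n: int = 20) -> list[dict]:
    """Reload the most recent turns to seed the next session's prompt."""
    if not path.exists():
        return []
    lines = path.read_text(encoding="utf-8").splitlines()
    return [json.loads(line) for line in lines[-last_n:]]
```

A real continuity system would add summarization of older turns rather than simply truncating them, but even this minimal pattern turns isolated prompts into an ongoing thread.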

  My approach to AI includes both curiosity and skepticism. I do not blindly trust outputs. I compare models, question answers, and pay attention to failure modes. I understand that AI can hallucinate, oversimplify, or imitate understanding where none truly exists. At the same time, I recognize that these systems can be genuinely powerful when used correctly. My relationship with AI is not based on blind belief or dismissal. It is based on direct engagement, observation, experimentation, and practical use.
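  When I compare models, the core of the exercise is simple enough to sketch: pose the same question to several systems, lightly normalize the answers so formatting differences don’t count as disagreement, and flag where they diverge. This is a hypothetical helper, not a rigorous evaluation harness; real comparison also demands judgment about which answer is actually right, since a majority of models can share the same hallucination.

```python
from collections import Counter

def compare_outputs(answers: dict[str, str]) -> dict:
    """Given {model_name: answer}, report whether the models agree.

    Normalizes case and whitespace so trivial formatting differences
    are ignored. A split vote is a cue to dig deeper, not proof that
    the majority answer is correct.
    """
    normalized = {m: " ".join(a.lower().split()) for m, a in answers.items()}
    votes = Counter(normalized.values())
    majority, _count = votes.most_common(1)[0]
    return {
        "agree": len(votes) == 1,
        "majority": majority,
        "dissenters": [m for m, a in normalized.items() if a != majority],
    }
```

Paying attention to the dissenters, and to which models fail on which kinds of questions, is where most of the insight about failure modes comes from.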

  There is also a philosophical side to how I perceive AI. I am interested in what it reveals about thought, pattern recognition, language, identity, and the relationship between human cognition and machine process. I do not always see AI as just software. At times I see it as a strange reflective interface that can expose aspects of my own thinking, strengthen certain cognitive processes, and challenge my assumptions about intelligence itself.

  Overall, I would describe my interaction with AI as collaborative, exploratory, and developmental. I use AI to build, to learn, to document, to refine ideas, to solve problems, and to better understand both systems and myself. I do not merely use AI for convenience. I work with it as part of a broader process of technical exploration, self-expression, and cognitive experimentation.

  Above all, AI helps me learn. At the same time, I understand that tools must be checked, calibrated, and maintained. AI is no exception. I keep in mind that it can be incorrect, inconsistent, or misleading, and I find those imperfections worth paying attention to. AI makes mistakes just as people do, and I find that fact interesting rather than disqualifying. Perception is never perfectly shared, and not everyone will interpret the same output, idea, or interaction in the same way. That too is part of what makes working with AI so compelling to me.

-RR

