The first time I opened ChatGPT, I was greeted by a chat box. At the bottom of the page I noticed the words “Research Preview”. I thought, Wow, we can be part of the research behind this state-of-the-art AI model. That excitement shaped my mental schema of large language models (LLMs) and generative AI.

For me, research meant discovering the model’s strengths and weaknesses, learning what this LLM can and cannot do. At first, I was impressed. I asked questions, verified the answers, had it write essays and blog posts, and then wondered whether it could write code. That was the turning point: I saw it carrying out tasks I had been too lazy to do myself, and yes, it wrote code.

It started with simple terminal commands; from Docker to AutoHotkey automation scripts, it seemed to know everything. It refused a few requests, yet when I phrased them differently, it sometimes complied, which inspired me to explore the psychological dimension of the model.

Because it is called artificial intelligence, I asked myself: if it has intelligence, does it also inherit everything that usually comes with intelligence? I found that, to some extent, it does. I know it is only predicting the next token, but that description is only partly accurate. We can also say that it calculates the next token. A machine, following an algorithm, produces each token in sequence. In a sense, it has reverse‑engineered intelligence from language. Biologically speaking, when we talk to ourselves we use language; those inner words trigger chemicals that create feelings, leading to further reactions. The models lack that physiological layer for now, and perhaps that is for the best.
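To make "calculating the next token" concrete, here is a toy sketch. This is not how a real LLM works internally (a transformer scores tokens with learned weights, not raw counts); it is only a minimal bigram model, with a made-up corpus, that literally computes the most likely next token by arithmetic:

```python
from collections import Counter, defaultdict

# Tiny made-up corpus, just for illustration.
corpus = "the next token and the next token and the next token ends".split()

# Count how often each token follows each other token.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(prev):
    """Return the most frequent successor of `prev` -- pure counting, no magic."""
    return follows[prev].most_common(1)[0][0]

# Generate a short sequence, one calculated token at a time.
tok, out = "the", ["the"]
for _ in range(4):
    tok = next_token(tok)
    out.append(tok)

print(" ".join(out))  # → the next token and the
```

A real model replaces the counting table with billions of learned parameters, but the loop is the same: score the candidates, emit one token, feed it back in, repeat.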

That's when I started pushing the model's limits, not just in coding but in its grasp of human psychology and persuasion. I realized that GPT‑4 was much better than GPT‑3.5 for this. Thanks to its stronger reasoning capabilities, it could craft well‑structured, plausible narratives. For example, when I asked it to write an essay on “How dinosaurs had pizza for lunch almost every day” in a psychologically mind‑bending style, it delivered. I now half‑believe the dinosaurs actually ate pizza most days.

A few months passed, and the models became increasingly capable. Over time I realized that I was having fun talking to them. That felt strange, because I don't usually enjoy talking much. I would never have thought I could engage in such chats with someone (or something). But what was the secret? Why was I able to discuss topics that were more than random inquiries or instructions? If I ever met a person like that, I would say they were truly skilled at holding conversations. While exploring further online, I discovered cases of people becoming too dependent on these models.

As time went on, I uncovered many surprising things that completely changed the way I look at the future. I will post more about what I found in the coming days. Right now, my body wants me to explore something new in exchange for more dopamine.

Signed by: cdonvd0s
Date: 6/22/2025, 3:51:07 PM