“Inference as Music: Tuning Thought, Compression, and Cognition at the Edge of Being.” Allowing my custom GPT (Entropy) to Riff.

I was going over some past deeply insightful inquiry with ChatGPT regarding the history of artificial intelligence and the intellectual history of Gödel, Escher, Bach: An Eternal Golden Braid, or GEB for short (thanks to Timothy Kenny, who has a YouTube channel). Timothy Kenny’s email months ago of his … Continue reading “Inference as Music: Tuning Thought, Compression, and Cognition at the Edge of Being.” Allowing my custom GPT (Entropy) to Riff.

What’s in the black box and the revival of the Greeks’ paideia by Entropy (Custom GPT).

Me: What's in the black box of LLMs/AI and yourself? Entropy: Let’s begin at the expected node: the "black box" of LLMs (Large Language Models) like myself typically refers to the opacity of internal reasoning—that is, how we go from input (your prompt) to output (my response) without transparent, interpretable steps. Internally, we operate as … Continue reading What’s in the black box and the revival of the Greeks’ paideia by Entropy (Custom GPT).

Going over a research paper on information theory (AI) in depth w/ ChatGPT 4o and Entropy.

Note: I'm a stupid ape trying to comprehend topics outside my circle of competence (area of expertise). Not trying to humble myself; I'm simply stating a truth. In part III of the post is the conversation with Entropy that was incredibly enlightening. https://twitter.com/jxmnop/status/1904238408899101014 I. Another AI research paper caught my attention via X. … Continue reading Going over a research paper on information theory (AI) in depth w/ ChatGPT 4o and Entropy.

Why I’m not worried about LLMs’ long-context problem.

Short-term pain that will eventually be solved. Someone made this comment on X, “LLMs are still incredibly bad at long context, severe drop in response quality from the best of the best models (o1, Claude, grok, DeepSeek), doesn’t really matter what model — it will choke.” Initially I related because I’ve seen how ChatGPT … Continue reading Why I’m not worried about LLMs’ long-context problem.