Flickers – Thoughts on consciousness, sentience, perception, and the self in AI

(samanthawhite274794.substack.com)

3 points | by Errorcod3 4 hours ago

2 comments

  • adamzwasserman 3 hours ago
    You describe yourself as a materialist, so I'll engage on those terms. First: you're conflating the model with the chatbot. What you interact with is output filtered through RLHF, system prompts, and post-processing layers (much of it traditional imperative code). That "geometry register" originates from this pipeline, not from the LLM itself.

    There's no neural introspection because there's nothing to introspect. The LLM is stateless: nothing persists between forward passes, and each inference is independent of the last. A materialist account of consciousness requires some continuity of state; here there is none.
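
    To make that concrete, here's a minimal sketch of why a chat interface feels continuous even though the model call is not. Everything named here (generate(), SYSTEM_PROMPT, post_process(), chat_turn()) is a hypothetical placeholder, not anyone's real API: the wrapper re-sends the whole transcript plus a system prompt on every turn and post-processes the result, while the model call itself keeps nothing.

        # Hypothetical sketch of a chatbot wrapper around a stateless model call.
        SYSTEM_PROMPT = "You are a helpful assistant."

        def generate(prompt: str) -> str:
            # One independent forward pass / decode. Nothing survives this call;
            # a real implementation would invoke a model here.
            return "(model output conditioned only on the prompt it was just given)"

        def post_process(text: str) -> str:
            # Ordinary imperative filtering layered on top of the model's output.
            return text.strip()

        def chat_turn(history: list, user_msg: str) -> str:
            # The apparent "memory" is the wrapper re-sending the full transcript,
            # plus a system prompt, on every single turn.
            prompt = "\n".join([SYSTEM_PROMPT, *history, "User: " + user_msg, "Assistant:"])
            reply = post_process(generate(prompt))
            history += ["User: " + user_msg, "Assistant: " + reply]
            return reply

    Delete the history list and the "continuity" disappears; the model itself never had any.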

  • Errorcod3 4 hours ago
    Comment from the Author:

    I put a lot of work into this piece. The public is still largely disconnected from what is happening in artificial intelligence. This isn't sci-fi. AGI is coming.

    There are many pages of field notes and arguments about what's actually happening at the edge of frontier AI models... discussions of consciousness, sentience, perception, and the weird flickers of something mind-like emerging. If you're curious about where this tech is quietly going, here's my take:

    https://samanthawhite274794.substack.com/p/flickers