Introducing AzizGPT

AzizGPT is a new large language model that has several advantages over services like OpenAI’s ChatGPT.

These include:

  • no hallucinations
  • biological neural network
  • genuine intelligence
  • inherently aligned, to make AGI Ruin moot

Of course there are some disadvantages:

  • Slower interface via blog comments (deprecated), Twitter, Threads, BlueSky, and Mastodon.
  • Real-time interface is only available via SMS to a limited pool. To request access, please comment below or via the interfaces mentioned above.
  • AzizGPT may not comply with all requests, at AzizGPT’s personal discretion.

If there is sufficient interest in AzizGPT, then we may create a paid model. Let’s see how this initial demo goes. Please reply to this post to test AzizGPT’s capabilities for yourself.

singularity skeptic


I am not a luddite by any means, but I just have to state my position plainly: I think all talk of a “Singularity” (of the Kurzweil variety) is nothing more than science fiction. I do not have an anti-Singularity manifesto, but rather just a skeptical reaction to most of the grandiose predictions by Singularitarians. I’d like to see someone articulate a case for the Singularity that isn’t yet another fancy timeline of assertions about what year we will have reverse-engineered the human brain or have VR sex or foglets or whatever. I am also leery of the abusive invocation of physics terms like “loop quantum gravity” and “energy states” as if they were magic totems (Heisenberg compensators, anyone?).

If I were to break down the concept of the Singularity into components, I’d say it relies on (a) genuine artificial intelligence and (b) transhumanism. Thus the Singularity would be the supposed union of these two. But I guess it’s not much of a surprise that I am an AI skeptic also. AI is artificial by definition – a simulation of intelligence. AI is an algorithm, whereas true intelligence is something much less discrete. I tend more towards a stochastic interpretation of genuine intelligence than a deterministic one, myself – akin to the Careenium model of Hofstadter, but even that was too easily discretized. Let me invoke an abused physics analogy here – I see artificial intelligence as a dalliance with the energy levels of a single atom, whereas true intelligence is the complete (and, for all practical purposes, infinite) energy state of a 1 cm metal cube.
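To put a rough number on that analogy (a back-of-envelope of my own, assuming the cube is copper):

```latex
% Back-of-envelope atom count for a 1 cm^3 cube, assuming copper
% (density ~8.96 g/cm^3, molar mass ~63.5 g/mol):
N \approx \frac{8.96\,\mathrm{g}}{63.5\,\mathrm{g/mol}}
          \times 6.022\times 10^{23}\,\mathrm{mol}^{-1}
  \approx 8.5\times 10^{22}\ \text{atoms}
```

Even granting each atom only a handful of accessible levels, the cube’s joint state count scales like (number of levels)^N, and next to that one atom’s tidy discrete spectrum is nothing – which is all I mean by “for all practical purposes, infinite.”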

The proponents of AI argue that if we just add levels of complexity, eventually we will have something approximating the real thing. The approach is to add more neural net nodes, add more information inputs, and [something happens]. But my sense of the human brain (which is partly religious and partly derived from my career as an MRI physicist specializing in neuroimaging) is that the brain isn’t just a collection of N neurons, wired a certain way. There are layers, structures, and systems within it whose complexities multiply against each other.
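To make concrete what that recipe amounts to, here is a toy sketch (plain numpy, purely illustrative, not anyone’s actual system): the only dials available are input size, width, and depth.

```python
# Toy sketch (numpy only, purely illustrative) of the "just add more nodes,
# add more inputs" recipe: the network is one undifferentiated stack, and the
# only dials you can turn are input size, width, and depth.
import numpy as np

def make_pile_of_nodes(n_inputs, width, depth, seed=0):
    """Weight matrices for a plain fully connected stack -- no internal structure."""
    rng = np.random.default_rng(seed)
    sizes = [n_inputs] + [width] * depth
    return [rng.normal(scale=0.1, size=(n_out, n_in))
            for n_in, n_out in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    for W in layers:
        x = np.tanh(W @ x)   # the same homogeneous operation at every layer
    return x

# "Scaling up" is just turning the same dials harder:
small = make_pile_of_nodes(n_inputs=64,  width=128,  depth=4)
big   = make_pile_of_nodes(n_inputs=512, width=1024, depth=16)
print(forward(small, np.ones(64)).shape)   # -> (128,)
```

Whatever `big` turns out to do, it is still the same homogeneous recipe at a larger scale – which is exactly the leap of faith hiding in the [something happens] step.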

Are there any neuroscientists working in AI? Do any AI algorithms make an attempt to include structures like the “arcuate fasciculus” or the “basal ganglia” in their models? Is there any understanding of the difference between gray and white matter? I don’t see how a big pile of nodes is going to have any more emergent structure than a big pile of neurons on the floor.
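For contrast, here is an equally toy, entirely hypothetical sketch of what “including structures” might even look like in code. The module names are placeholders I made up; this claims no fidelity to the actual arcuate fasciculus or basal ganglia, only the idea of distinct subsystems with distinct wiring, composed rather than merely stacked.

```python
# Entirely hypothetical sketch of "including structure": named subsystems with
# different roles, plus a dedicated bundle routing signals between two of them.
# The names are placeholders, not biology.
import numpy as np

rng = np.random.default_rng(0)
dense = lambda n_in, n_out: rng.normal(scale=0.1, size=(n_out, n_in))

modules = {
    "region_a":       dense(64, 96),   # a hypothetical cortical patch
    "region_b":       dense(96, 96),   # a second, differently wired patch
    "routing_bundle": dense(96, 96),   # white-matter-like tract from A to B
    "gating_loop":    dense(96, 96),   # crude stand-in for a gating circuit
}

def structured_forward(x):
    a = np.tanh(modules["region_a"] @ x)
    routed = modules["routing_bundle"] @ a                 # routing, not just stacking
    b = np.tanh(modules["region_b"] @ routed)
    gate = 1.0 / (1.0 + np.exp(-modules["gating_loop"] @ b))
    return gate * b                                        # selective gating of the output

print(structured_forward(np.ones(64)).shape)   # -> (96,)
```

Whether anything like this buys you more than the flat stack is an open question – the point is only that “structure” is a design choice the pile-of-nodes recipe never has to make.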

Then we come to transhumanism. Half of transhumanism is the argument that we will “upload” our brains or augment them somehow, but that requires the same knowledge of the brain as AI does, so the same skepticism applies. The other half is physical augmentation, but here we get to the question of energy source. I think Blade Runner did it right:

Tyrell: The light that burns twice as bright burns half as long. And you have burned so very, very brightly, Roy.

Are we really going to cheat thermodynamics and get multipliers to both our physical bodies and our lifespans? Or does it seem more likely that one comes at the expense of the other? Again, probably no surprise here that I am a skeptic of the Strategies for Engineered Negligible Senescence (SENS) stuff promoted by Aubrey de Grey – the MIT Technology Review article about his work gave me no reason to reconsider my judgment that he’s guilty of exuberant extrapolation (much like Kurzweil). I do not dismiss the research, but I do dismiss the interpretation of its implications. And do its proponents address the possibility that death itself is an evolutionary imperative?

But OK. Let’s postulate that death can simply be engineered away. That human brains can be modeled in the Cloud and data can be copied back and forth from wetware to silicon. Then what do we become? A race of gods? Or just a pile of nodes, acting out virtual fantasies until the heat death of the universe pulls the plug? That’s not post- or trans-humanism, it’s null-humanism.

I’d rather have a future akin to Star Trek, or Foundation, or even Snow Crash – one full of space travel, star empires, super hackers, and nanotech. Not a future where we all devolve into quantum ghosts – or, worse, are no better than the humans trapped in the Matrix, living out simulated lives for eternity.