Discussion about this post

Rocco Van Schalkwyk:

Yip - I fell for the first one. Such a relevant lesson here - especially in the times we live in!!

Rocco Van Schalkwyk:

This is a very good question. The Xzistor cognitive architecture gives rise to agents that start off their lives utilizing stereotypical thinking at every opportunity: if an agent collided with a green door and the result was a painful experience, it will re-evoke those same 'fear' emotions for any new green object. It will thus treat any green object as a potential source of Deprivation until it has learnt which specific green items generate other (better) emotions, such as when eating a green apple. The agent therefore has to learn from experience that its stereotypical view of green objects is too 'biased' and should be updated. As the agent learns it will make these updates, but it remains in this 100% stereotypical mode when encountering new objects.

Interestingly, the tutor can 'teach' the Xzistor agent to refine the manner in which it judges new objects. For instance, where the agent is fearful of a teddy bear because it was once bitten by a dog, the tutor can make the agent play with the teddy bear and update its emotional tagging from negative (Deprivation) to positive (Satiation). The tutor can even later introduce punitive measures to discourage the agent from stereotypical thinking in situations where it could harm humans. The above is of course no different from how humans start their lives (and thinking) and slowly learn that stereotypical thinking is not always helpful and can in many cases be harmful.

So, the Xzistor agent is born with zero morals as these will have to be learnt from social interactions.
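To make that tagging-and-update loop concrete, here is a minimal Python sketch of my own, not the actual Xzistor implementation: the class name EmotionalTaggingSketch, the numeric values, and the two-level (specific object vs. broad category) lookup are assumptions made purely for illustration, while Deprivation and Satiation are the terms used above.

```python
# Hypothetical sketch: an agent tags objects with emotional values, generalizes
# those tags to similar new objects ("stereotypical thinking"), and refines the
# tag for a specific object once it has its own experience with it.

DEPRIVATION = -1.0   # negative emotional value (pain, fear)
SATIATION   = +1.0   # positive emotional value (eating, comfort)


class EmotionalTaggingSketch:
    def __init__(self):
        self.specific_tags = {}   # exact object  -> learnt emotional value
        self.category_tags = {}   # broad feature -> first emotional value seen

    def experience(self, obj, category, emotion):
        """Record an experience: tag the object, and seed the category stereotype."""
        self.specific_tags[obj] = emotion
        # setdefault: the stereotype is only created once and is not erased
        # by later, more specific experiences.
        self.category_tags.setdefault(category, emotion)

    def evaluate(self, obj, category):
        """Fall back on the category stereotype when the object itself is unknown."""
        if obj in self.specific_tags:
            return self.specific_tags[obj]
        return self.category_tags.get(category, 0.0)  # 0.0 = no expectation yet


agent = EmotionalTaggingSketch()
agent.experience("green door", "green", DEPRIVATION)   # painful collision
print(agent.evaluate("green apple", "green"))           # -1.0: feared by stereotype
agent.experience("green apple", "green", SATIATION)     # tutor makes it eat the apple
print(agent.evaluate("green apple", "green"))           # +1.0: specific tag updated
print(agent.evaluate("green bottle", "green"))          # -1.0: stereotype persists for new objects
```

Note how the stereotype for "green" still fires on the unseen green bottle even after the apple has been re-tagged: under these assumptions the bias is only unlearnt object by object, which is the point about learning from experience made above.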

Other AI systems that do not feel emotions and do not have to learn like humans, e.g. LLMs, can be pre-programmed with 'rules' that try to filter out biased behavior, e.g. use the word 'chairperson' instead of 'chairman', 'plus size' instead of 'fat', etc. This could go a long way in reducing stereotypical outputs. The training material for these LLMs can also be sanitized ahead of time to avoid exposing the LLM to biased information that could lead to harmful textual constructs. But whereas Xzistor agents will in time acquire an emotional, contextual understanding of the potential harm that can be caused to humans and other AGIs, systems that do not learn through operant conditioning based on an emotional value system, e.g. LLMs, will only abide by a myriad of restrictive rules and never grow a subjective awareness or emotional conscience that drives unbiased behavior outside of simple rules.
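A minimal sketch of what such a rule-based filter could look like, under the assumption that it is just a fixed substitution table applied to generated text (the rule list and function name are invented for illustration):

```python
# Hypothetical rule-based output filter: whole-word, case-insensitive
# substitutions applied after text generation. No learning, no context.
import re

SUBSTITUTION_RULES = {
    "chairman": "chairperson",
    "fat": "plus size",
}


def filter_output(text: str) -> str:
    """Apply each substitution rule as a whole-word, case-insensitive replacement."""
    for biased, neutral in SUBSTITUTION_RULES.items():
        text = re.sub(rf"\b{re.escape(biased)}\b", neutral, text, flags=re.IGNORECASE)
    return text


print(filter_output("The chairman said the budget was too fat to approve."))
# -> "The chairperson said the budget was too plus size to approve."
```

The second substitution fires even though 'fat' is used there about a budget, not a person, which illustrates the limitation described above: a rule table has no contextual or emotional understanding, it only matches patterns.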

Just my take on things! :-)
