In this article for Spanda Journal, Charles Eisenstein explores the concept of collective intelligence and its difference from artificial intelligence. He takes the debate on the hard problem of consciousness to an entirely new level by asking an important question: why do we assume that we have no direct access to the subjective experience of another person? Eisenstein says that this statement can only be true if we are fundamentally separate from one another.
Eisenstein stresses that organizational behavior already accepts that collectives have a self, and that this collective self has character, fears, and motivations that are distinct from those of its individual members.
Implications for AI
If collective beings exist, we can begin to understand why groups differ, and why even the best individuals can be corrupted when immersed in a corrupt in-group.
While it is true that some AI developers genuinely want to use this technology for the greater good, the systems to which they are subjected, and the general mindset of the scientific community, are prone to hijacking the purity of that intention. Remember Google [see The Demise Of The Ethical Google].
On the other hand, this article highlights the innate power all individuals have to affect the field where this collective self resides, for after all, the character of this being is the unity of all the characters within the group. When one person in such a group transforms, he or she can transform the organization.