AI has enormous potential to transform our societies. And we have a real opportunity to use AI to advance the much-needed transformation toward more conscious and healthy societies.
If we take the development and adoption of AI into our own hands, rather than leaving it to the “AI arms race” among big tech corporations, we might actually be able to make good use of the power of the AI Tsunami, instead of being overrun by it.
Here I describe my vision for CosyAI, our initiative to create a cooperative approach to AI development and adoption. We don’t need to wait for anyone to
1. Build a framework for a basic income backed by revenues from a suite of safe AI-based apps and cooperative data-pooling.
2. Build up a global network of regional multi-stakeholder cooperatives for defining and implementing standards for AI that remain independent of corporate interests and political power games.
3. Create an AI Safe Space, where everyone who wishes has a real choice on how AI is impacting their lives – and finds protection from unwanted effects.
We’re currently building a coalition of founding partners and supporters for CosyAI. But here I’m outlining the vision, so let’s look at the three points in more detail:
1. A suite of safe AI-based apps that back a basic income
AI enables new tools and applications that can bring real value to people, businesses, and other kinds of organizations.
CosyAI will provide a framework for building and accessing a whole CosyAI Suite of tools and applications, while putting a special emphasis on safety, data sovereignty, and promoting human trust. It will also provide a cooperative framework for safe data pooling that can be used for product and service improvement (and potentially training better AI models) by all members.
The cooperative model is particularly valuable here, because it avoids incentives for misuse. We thereby offer an alternative to the public regulation approach, where classical types of corporations have misaligned incentives and can choose to simply accept paying fines for ignoring regulations that don’t suit their interests. They can also take advantage of the information asymmetries that come with complex technologies to avoid effective regulation.
By contrast, CosyAI allows stakeholders to join the cooperative, ensuring that no individual can leverage financial power to promote their specific interests. Both providing safe AI-based apps and offering trustworthy frameworks and technologies for data pooling allow for transparent, sustainable business models that can generate substantial revenue for the CosyAI coop.
By channeling part of the surplus into a basic income pool, we can make AI’s potential to really benefit all of us a reality – without waiting for governments or other actors that don’t necessarily have a track record of putting the interests of communities before their own private interests.
This is also part of a scaling strategy aligned with promoting responsible use of AI. The more people sign up for the basic income, the more people share a common interest in helping to promote the CosyAI Suite – whilst the cooperative framework helps to ensure that CosyAI lives up to our common safety standards.
The other part of the surplus can be used for a fair compensation system for software developers, product designers, promoters etc. who contribute apps to the CosyAI Suite, as well as for companies who join the CosyAI cooperative to provide the technology and management of the tools and data pooling infrastructure.
2. A global network of regional multi-stakeholder cooperatives
CosyAI will start with a cooperative at the European level, simply because there exists a ready-made legal form that allows for global membership. In the CosyAI coalition, we have many members with extensive experience in designing, creating, and actualizing cooperative structures.
In the next step, we can support the creation of further regional and local cooperatives. One of the advantages of the cooperative model is that it doesn't inherently incentivize the centralization of power in a single, ever-expanding entity. Rather, CosyAI can embrace a sincerely localized strategy of keeping ownership in the hands of the people and communities who are affected on the ground.
Within the Platform Coop movement, we have already developed models for cooperative franchises that enable the pooling of data and tech development, whilst sharing ownership and decentralizing local adoption.
Given that AI will affect an increasing number of aspects of our lives, it is important to allow for sincerely localized adoption where communities can decide for themselves and in their way if and how they want to embrace it.
This framework will also allow local communities to develop their own, culturally specific standards and demands for AI – and feed this back into the common process of developing responsible AI that really serves all people.
Above all, this framework enables us to leverage decades of practical cooperative experience, during which cooperatives have fostered networks built on genuine human trust and connections – experience we can combine with a wide diversity of other successful efforts of sincere community building.
3. An AI Safe Space
This leads to the biggest part of the vision. AI offers enormous potential, but also poses unpredictable risks. A growing number of leading AI experts are even worried that it might pose an existential threat to humanity.
Obviously, such abstract, complex threats are hard to grasp. But that doesn't imply we must passively wait for our fate, or for a miraculous shift that suddenly renders our political institutions capable of instituting comprehensive, incorruptible, and effectively enforced global regulations.
Instead, the first two parts of CosyAI can be the base for a global AI Safe Space for anyone who wants to join. Together, the creation of a revenue-backed basic income system and a global network of regional and local cooperatives can provide people with a space where AI can be adopted responsibly and safely, and be used for sincerely advancing the common good.
We can use this framework for building and training our own, responsible AI models based on transparent and fairly composed data. We can also harness our collective power to mobilize public support and the economic strength of participating organizations to advocate for truly effective regulations.
We can also use this framework for actually having sincere discussions about if “we” really want to push for the “singularity”. And this time, the “we” can actually include any human on Earth who wants to join, through the global network of local cooperatives.
If we collectively decide against developing superintelligent AI due to its unpredictable nature, we have a framework to advocate for this decision.
And even if it would take too long to effectively stop “big tech” in their arms race, we will have the organizational and technical framework for building a superintelligent “protection AI” to shield anyone who joins the common safe space from potentially harmful AI actors.
Of course, these thoughts leave a lot of open questions. But I’m convinced that a cooperative framework with a focus on fostering human trust and connections can be a valuable contribution to developing responsible AI.
Anyone is invited to join the coalition of CosyAI to move the discussion forward and to help develop a concrete framework that sincerely serves us all.
The most important thing: Start acting now.
I believe the worst we could do is just stand by watching. Tristan Harris put it in clear words once again at the Nobel Prize Summit: we need to update our institutions to bind our technologies so that they actually serve us. But with every day we wait, we allow misaligned technologies to strengthen their grip on our minds and make it harder to reverse the race to the bottom.
So let’s not freeze because it feels scary – or stick our heads in the sand of business as usual because we don’t feel immediate pressure.
Many are working on responsible AI.
In the tech world itself, it has been common sense for a while that this topic needs special priority. Rarely have there been so many concerted efforts dedicated to the responsible development and implementation of a novel technology. The website Public Interest AI lists more than 100 relevant organizations. And this list is by no means exhaustive.
Specialized think tanks and NGOs like the Center for Humane Technology or the Future of Life Institute continue to raise awareness about AI risks and advocate for responsible regulation, a mission shared by numerous AI researchers globally.
Established players in digital transformation like the Mozilla Foundation have turned AI into a priority, starting Mozilla.ai earlier this year.
It's encouraging to see governments increasingly giving the topic attention. The currently evolving EU AI Act stands as the most ambitious policy project to date, even though it's already criticized for its lack of teeth and flexibility.
Even the big tech companies themselves continually reiterate their commitment to responsible AI development and deployment.
But that is not enough.
To keep it short: given the high pace of AI development and its enormous potential in terms of money and power, it would simply be naive to believe that mere awareness campaigns, self-regulation by stakeholders with vested interests, or policies from lawmakers subject to familiar lobbying pressures and information asymmetries could result in a balanced and responsible AI future.
One example is the path of OpenAI itself, the company behind ChatGPT. It started off as a promoter of responsible AI, but has since been criticized for sacrificing its original mission.
Let’s be sincere
That's why I believe we need to construct a new approach to AI that circumvents the corrupting influences of money and power. And we need to do it in a way that promotes human sincerity and trust, rather than our ability to tell ourselves and others stories about our good intentions.
Fortunately, we can base such an approach on many years of work on responsible AI by hundreds of committed researchers and organizations. And we can combine this with the years of experience in the Platform Coop movement, which has piloted new approaches to ownership of powerful technologies.
With CosyAI, we started a real, on the ground initiative to collaborate on an alternative path, and our coalition of like-minded organizations is growing every week.
Join us now to contribute to making it happen, or simply sign up to stay updated.