Who really controls your AI assistant? It's a question most people have not yet asked. Today, millions rely on digital assistants, voice-controlled devices, and smart agents integrated into tools like Google Workspace or ChatGPT. These systems help us write, organize, search, and even think. Yet the vast majority of them are rented. We do not own the intelligence we depend on, which means someone else controls it.
If your digital assistant disappeared tomorrow, could you do anything about it? What happens if the company behind it changes its terms, restricts functionality, or monetizes your data in ways you did not expect? These are not theoretical concerns. They are already happening, and they point to a future we should actively shape.
David Minarsch is a speaker at Consensus 2025 in Toronto, May 14-16.
As these agents become embedded in everything from our finances to our workflows to our homes, the stakes around ownership rise sharply. Renting is probably fine for low-stakes tasks, such as a language model that helps you draft emails. But when your AI acts on your behalf, makes decisions with your money, or manages critical parts of your life, ownership is not optional. It is essential.
What today's AI business model means for users
AI as we know it is built on a rental economy. You pay for access, through monthly subscriptions or metered APIs, and in exchange you get the illusion of control. Behind the scenes, platform providers hold all the power. They choose which AI model you are served, what your AI can do, how it responds, and whether you can keep using it at all.
Take a common example: a sales team using an AI-powered assistant to automate tasks or generate insights. That assistant might live inside a centralized SaaS tool. It might be powered by a closed model hosted on someone else's server and run on their GPUs. It might even be trained on your business data, data you no longer fully own once it is uploaded.
Now imagine the provider starts prioritizing monetization, as Google Search does with its advertising results. Just as search results are heavily shaped by paid placements and commercial interests, the same can happen with large language models (LLMs). The assistant you rely on changes, skewing its answers to serve the provider's business model, and there is nothing you can do. You never had real control to begin with.
This is not only a business risk; it is personal, too. In Italy, ChatGPT was temporarily banned in 2023 over privacy concerns, leaving thousands of people without access overnight. In a world where people increasingly build personal workflows around AI, this fragility is unacceptable.
On the question of privacy: when you rent an AI, you often upload sensitive data, sometimes without knowing it. That data can be stored, used for retraining, or even monetized. Centralized AI is opaque by design, and with geopolitical tensions rising and regulations shifting quickly, depending entirely on someone else's infrastructure is a growing liability.
What it means to truly own your agent
Unlike passive AI models, agents are dynamic systems that can act independently. Ownership means controlling an agent's core logic, its decision-making parameters, and how it handles data. Imagine an agent that can manage resources, track expenses, set budgets, and make financial decisions on your behalf.
This naturally leads to advanced infrastructure like Web3 and neobanking systems, which offer programmable ways of managing digital assets. An owned agent can operate independently within clear, user-defined limits, transforming AI from a reactive tool into a proactive, personalized system that truly works for you.
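To make "user-defined limits" concrete, here is a minimal sketch in Python. The names (`SpendingPolicy`, `Agent`) are hypothetical illustrations, not part of any real product: the point is that the owner, not a platform, sets limits the agent cannot override.

```python
from dataclasses import dataclass, field

@dataclass
class SpendingPolicy:
    """Limits set by the owner; the agent can never override them."""
    daily_limit: float      # maximum total spend per day
    per_tx_limit: float     # maximum size of any single transaction
    spent_today: float = 0.0

    def allows(self, amount: float) -> bool:
        return (amount <= self.per_tx_limit
                and self.spent_today + amount <= self.daily_limit)

@dataclass
class Agent:
    """Acts autonomously, but only inside the owner's policy."""
    policy: SpendingPolicy
    log: list = field(default_factory=list)

    def pay(self, recipient: str, amount: float) -> bool:
        if not self.policy.allows(amount):
            self.log.append(("rejected", recipient, amount))
            return False
        self.policy.spent_today += amount
        self.log.append(("paid", recipient, amount))
        return True

# The owner, not a vendor, chooses the limits.
agent = Agent(policy=SpendingPolicy(daily_limit=100.0, per_tx_limit=50.0))
agent.pay("coffee-shop", 4.50)   # within limits: allowed
agent.pay("exchange", 500.0)     # exceeds per-transaction limit: rejected
```

The key design choice is that the policy lives with the user, not on a provider's server, so no remote change of terms can loosen or tighten it.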
With true ownership, you know exactly which model you are running and can swap out the underlying model if needed. You can upgrade or customize your agent without waiting on a vendor. You can fork it, duplicate it, or move it to another device. Most importantly, you can use it without handing over your data or relying on a single centralized gatekeeper.
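As an illustration of what "swapping the underlying model" can look like in practice, here is a hedged sketch, again with hypothetical names (`Model`, `Assistant`) rather than any real product's API. The agent depends only on a small interface, so the owner can replace the backend, say a locally run open-source model for a hosted one, without touching the agent itself:

```python
from abc import ABC, abstractmethod

class Model(ABC):
    """The minimal interface the agent depends on; any backend can implement it."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class LocalModel(Model):
    """Stand-in for a locally run open-source model the user controls."""
    def complete(self, prompt: str) -> str:
        return f"[local] response to: {prompt}"

class HostedModel(Model):
    """Stand-in for a rented, provider-hosted model."""
    def complete(self, prompt: str) -> str:
        return f"[hosted] response to: {prompt}"

class Assistant:
    """Knows only the interface, so the owner chooses (and changes) the backend."""
    def __init__(self, model: Model):
        self.model = model

    def ask(self, prompt: str) -> str:
        return self.model.complete(prompt)

# The owner can swap backends at any time; no vendor approval required.
assistant = Assistant(LocalModel())
reply = assistant.ask("summarize my expenses")
assistant.model = HostedModel()  # swap in a different backend, or back again
```

Because the dependency points at an interface rather than a specific provider, upgrading, forking, or moving the agent to another device leaves its behavior under the owner's control.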
At Olas, we have been building toward this future with Pearl, an AI agent app store delivered as a desktop application that lets users run autonomous AI agents in one click while retaining full ownership. Today, Pearl offers a number of use cases aimed mainly at web3 users, abstracting away the complexity of crypto interactions, with a growing focus on web2 use cases. Pearl agents hold their own wallets, run on open-source models, and act independently on the user's behalf.
When you launch Pearl, it feels like stepping into an app store for agents. You can pick one to manage your DeFi portfolio. You can run another that handles content generation. These agents do not need constant prompting; they are autonomous, and they are yours. Move from paying for the agent you rent to earning from the agent you own.
We designed Pearl for crypto users who already understand the importance of holding their own keys. But the idea of self-custody, not just of your funds but of your AI, extends far beyond DeFi. Imagine an agent that controls your home, handles your social interactions, or coordinates multiple tools at work. If those agents are rented, you do not fully control them. And if you do not fully control them, you are increasingly outsourcing fundamental parts of your life.
This movement is not just about tools; it is about agency. If we fail to shift toward open AI and user ownership, we risk concentrating power in the hands of a few dominant players. But if we succeed, we unlock a new kind of freedom, where intelligence is not rented but truly yours, with each human augmented by an "army" of software agents.
This is not mere idealism. It is good security. Open-source AI can be audited and peer-reviewed. Closed models are black boxes. If a humanoid robot one day lives in your home, do you want the code running it to be owned and controlled by a foreign cloud provider? Or do you want to know exactly what it does?
The choice is ours: we can keep renting, trusting, and hoping nothing breaks, or we can take ownership of our tools, our data, our decisions, and our future.
User-owned AI is not just the better option. It is the only one that respects the intelligence of the person using it.
Read more: The Olas Mech Marketplace lets AI agents hire one another for help