Robotic Agents and Human Principals

Hamed Qadim
Nov 6, 2019
Artwork by Benny Kusnoto

Developments in science and technology are expected to lead to the emergence of artificial cognition and robots with the technical and cognitive capabilities of human beings. Future workforces will be able to compute faster and work without emotional dependencies or physical fatigue. Human beings, in turn, may be able to upgrade themselves into cyborgs, gaining additional capabilities from high-tech tools implanted in their bodies. Such developments will change the working environment and probably the socio-economic fabric of societies. Principal-agent theory, as reviewed by Eisenhardt, has been one of the foundations on which organizational theories are built; so what happens to the relationship between agent and principal when the agents are robots and the principals are (probably) human beings? To answer this, we first review agency theory and then discuss the impact that robotic development, and the replacement of human workers with robots, has on agency theory and its implications for organizational theory.

Agency theory, reviewed and extended by Kathleen Eisenhardt (1989), is one of the important organizational theories, with a great impact on contractual arrangements between any principal and any agent, whether they are individuals or legal entities working together. Agency theory not only helps us understand why organizations are formed and for what types of services full-time recruitment is suitable, but also under which conditions output-based contracts are the appropriate choice. It shapes the whole idea of how human working relations are arranged, placing cooperation frameworks on a continuum of delivery methods ranging from behavior-based (the principal determines the actions and the agent follows the lead) to fully output-based (the agent determines the required actions and takes responsibility for the output). Bureaucracies are more behavior-based in character, but organizations may also be formed as a collection of output-based cooperation arrangements. An example of such organizations is the project-based consultancy corporation, in which freelance consultants act as agents and the organization (the principal) acts as the liaison between them and the final customers.

Kathleen Eisenhardt, author of “Agency Theory: An Assessment and Review” (1989), is currently the Stanford W. Ascherman M.D. Professor and Co-Director of the Stanford Technology Ventures Program.

One of the main questions that arises from using artificial intelligence instead of deficient humans (if we are allowed to use this term to acknowledge the huge gap between the computing and cognitive capabilities of human beings and those of future artificially intelligent robots) is whether there will be a contract between the principal and the agent at all. Will human principals need to hire artificially intelligent robots, or will the robots simply be bought and maintained by them? If the robots have cognition, then what makes it ethically justified to exploit them without compensating their efforts? And if artificial cognition, something beyond mere intelligence, is to be used, who is entitled to say that free will is the right and trait of humans alone, and not of entities with artificial cognition? Organizational theory rests on the very preliminary assumption that humans are free to choose whether or not to cooperate with each other and form organizations; does artificial cognition have the same right, or the same ability?

Setting aside the moral aspect of exploiting artificial cognition, if we assume that artificial cognition in the future has the ability and the right to refuse cooperation, then agency theory remains a valid subject for robots as well.

Agency theory offers several propositions in which the working relation between agent and principal is determined by different factors. In general, the principal’s ability to define the work and specify the details of the tasks that lead to the required outcome is the main factor favoring behavior-based contracts over output-based ones. This ability is strengthened by information systems adequate enough to help the principal verify the agent’s activities. Conversely, the measurability of the output, and the certainty that goes with it, favors output-based contracts. Finally, the risk aversion of the principal and/or the agent affects the choice of contract type: the more risk-averse the principal and the less risk-averse the agent, the more an output-based contract is preferred. As long as we are considering a principal-agent relationship between humans, these propositions are valid and logical; when artificial cognition is deployed, however, the relationship is affected by the very different natures of the principal and the agent.
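As a rough sketch (my illustration, not part of Eisenhardt’s formal model), the directional logic of these propositions can be expressed as a simple scoring function. The variable names and equal weights are assumptions chosen only to make the trade-offs concrete:

```python
def preferred_contract(task_programmability: float,
                       outcome_measurability: float,
                       principal_risk_aversion: float,
                       agent_risk_aversion: float) -> str:
    """Toy model of the agency-theory propositions; inputs in [0, 1].

    Programmable, verifiable tasks pull toward behavior-based
    contracts; measurable outcomes and a risk-averse principal
    facing a risk-tolerant agent pull toward output-based ones.
    """
    behavior_score = task_programmability + agent_risk_aversion
    output_score = (outcome_measurability
                    + principal_risk_aversion
                    + (1.0 - agent_risk_aversion))
    return ("behavior-based" if behavior_score >= output_score
            else "output-based")

# A well-understood task given to a risk-averse human agent:
print(preferred_contract(0.9, 0.3, 0.4, 0.8))  # behavior-based

# A hard-to-monitor task given to a risk-tolerant robotic agent:
print(preferred_contract(0.2, 0.8, 0.7, 0.1))  # output-based
```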

For example, is it even rational to discuss the risk aversion of an artificial intelligence? Risk aversion is not simply a matter of the two parties’ ability to compute probabilities and measure the risk level. A person’s psychological traits affect how much of a risk-taker they are, and personal experiences as well as biological differences shape their willingness to accept or reject risk. But how is a robot with artificial cognition presumed to act in terms of risk aversion? Is it possible to imagine a robot that is willing to accept the risk of delivering an output of a specified quality? Isn’t it already assumed that artificial cognition can accept higher risks thanks to its greater computing capacity and capability? What are the legal concerns of transferring risk to robots, and if a legal case cannot be attributed to a robot, what reason is there to assume free will for it?
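To make this concrete, here is a toy illustration (my own, not from the article) using the standard exponential (CARA) utility function from economics. It shows why a risk-averse human agent values a risky output-based contract at far less than its expected payoff, demanding a risk premium, while a near-risk-neutral “robot” values it at almost the full expected amount. The payoff numbers and risk-aversion coefficients are arbitrary assumptions:

```python
import math

def certainty_equivalent(payoffs, probs, risk_aversion):
    """Certainty equivalent under CARA utility u(x) = -exp(-a * x).

    As risk_aversion approaches 0 the agent is risk-neutral and the
    CE approaches the expected payoff; a larger coefficient lowers
    the CE, i.e. the agent demands a risk premium.
    """
    expected_utility = sum(p * math.exp(-risk_aversion * x)
                           for p, x in zip(probs, payoffs))
    return -math.log(expected_utility) / risk_aversion

# An output-based contract paying a 100-unit bonus half the time.
payoffs, probs = [100.0, 0.0], [0.5, 0.5]
expected_pay = sum(p * x for p, x in zip(probs, payoffs))

human_ce = certainty_equivalent(payoffs, probs, risk_aversion=0.05)
robot_ce = certainty_equivalent(payoffs, probs, risk_aversion=1e-4)

print(f"expected pay:          {expected_pay:.1f}")  # 50.0
print(f"risk-averse human CE:  {human_ce:.1f}")      # ~13.7
print(f"near-neutral robot CE: {robot_ce:.1f}")      # ~49.9
```

On this reading, a robot’s computing capacity does not by itself settle its risk attitude: the risk-aversion coefficient would be a design choice, which is exactly what makes the questions above non-trivial.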

Another important aspect to consider is that when the robotic agents are much more intelligent than their principals, it seems inappropriate to form a behavior-based cooperation framework with artificially intelligent robots. In that case, the cooperation framework will be more of an output-based type, which leaves humans, as principals, with less control over their robotic agents, and it may (with high probability) lead to the replacement of the principals themselves by even more intelligent and cognitive artificial entities. Humans would ultimately be pushed out of the production cycle, and if that happens, new bureaucracies will form in which principal-agent theory has to be revisited in light of the specific traits of artificially cognitive robots acting as both principals and agents.

This prophecy of future organizations might be frightening. Indeed it is, and it seems very probable, even inevitable. The reason we fear it is that we would lose control of the production cycle we have developed through millennia of our existence on earth to provide us with the products we need. Losing control of production means putting our survival at risk and leaving it to artificially intelligent robots to make decisions for us. It is much like the animals we have tamed and control for our benefit, except that this time we are creating something that is going to control us and probably tame us. But for what reason? That is not known!

Artwork by Boris Groh

On the other hand, using robots and artificial cognition instead of humans as principals and agents might be beneficial for us, because it releases us from the burden of the production cycle and liberates us to pursue action rather than work. Being freed from labor gives us the opportunity to do what we really want to do; however, we need to find a way to stay smarter than artificially cognitive entities.

If what is portrayed above happens, and it is very probable, then agency theory will have to be revised and restudied based on the characteristics of the new entities populating organizations in human-robot and robot-robot working frameworks. Organizations in all their aspects, from organizational morale and ethics to best-practice models for management, leadership, and motivation, will have to be redefined and revised. Yuval Noah Harari argues in his book “Homo Deus” that liberalism, as the dominant religion of the modern era, is pushing humans towards the end of Homo sapiens and the end of humanity as we know it today. If that prophecy comes true, what will future organizations look like?

Originally published at https://www.datadriveninvestor.com on November 6, 2019.
