Does Apple Know Right from Wrong?
JASON IULIANO
Apple is known for a lot of good things. The iMac. The iPod. The iPhone. The iPad. Unfortunately, Apple is also known for its darker side. When the company is not improving our lives by releasing revolutionary products, it’s busy engaging in activities that are sometimes less than praiseworthy. From the Foxconn suicides to child labor to environmental pollution, Apple has been involved in some morally questionable situations (see the articles by Sam Gustin, Juliette Garside, and David Barboza).
Whenever some Apple wrongdoing comes to light, people always debate whether the company is morally flawed. In light of the Foxconn suicides, Wired magazine even ran a cover story titled “1 Million Workers. 90 Million iPhones. 17 Suicides. Who’s to Blame?” For many people, the answer is Apple itself. They believe that the corporation is at fault for the suicides. The same question could be asked with respect to Apple’s child labor transgressions, its environmental pollution, and even the tax evasion charges that forced Tim Cook to testify before Congress. Who’s to blame?
However, for philosophers there’s an even more fundamental issue: is Apple the kind of entity that can be praised or blamed for its actions? Does it make any sense to blame Apple for anything it does? In other words, is Apple a moral agent?
I hate wasps. If you’ve ever been stung by one, you surely hate them, too. When a wasp manages to find its way into my house and sting me, it probably makes sense for me to blame myself for not sealing off all the openings carefully enough. It may make sense for me to blame the exterminator for not finding and eliminating all the wasp nests in my yard. It might even make sense for me to blame my neighbor for borrowing my wasp and hornet killer and not returning it. There’s one individual here, however, that it doesn’t make any sense to blame. That individual is the wasp.
Why is this? If anything, the wasp is the direct cause of my injury. If it hadn’t stung me, I would have been able to continue on and enjoy my day. The answer lies in the wasp’s lack of moral agency. The wasp is not an agent, in the sense of someone who purposefully acts. The wasp has no capacity to make a moral decision and therefore can’t be held accountable for its actions.
So, who or what does qualify as a moral agent? Most philosophers have identified two conditions for moral agency. First, the agent must be autonomous. Second, the agent must have the capacity to distinguish between right and wrong.
The wasp passes the first hurdle (autonomy), but it clearly fails the second. I have not yet met a wasp that has pondered the morality of stinging me—if you find one, please let me know. You and I, on the other hand, meet both conditions. We control our actions, and we’re able to make value judgments about such actions. Combining those two, it follows that we have the ability (even if we frequently fail) to let our value judgments influence our actions. Therefore, we are moral agents.
Despite the apparent clarity of this definition, there are a whole slew of intermediate cases. Take animals, for instance. It seems like dogs are pretty smart. They appear to debate the issues confronting them (admittedly in a simplistic manner) before taking action. Could they possibly be moral agents? To go one step further, how about chimps or apes? Animal moral agency is a hot debate in philosophy at the moment. I won’t pursue it here, but it does illustrate the point that there can be different opinions about who or what is a moral agent.
Instead, let’s look at another cutting-edge and contentious philosophical debate. Can corporations be moral agents? In particular, when Apple does something good like donate to charity or something bad like contract with factories that employ child labor, does it make sense to praise or blame Apple itself? What do you think? Does Apple approach the world like a pesky wasp—buzzing about based on pure instinct? Or does Apple approach the world like you and me—capable of knowing right from wrong and using that knowledge to guide its actions?
Apple’s Desires
Intentionality is at the heart of the debate over corporate moral agency. Intentionality is mental directedness—the ability of minds to represent properties. Believing, desiring, fearing, loving, and hoping are all intentional states. You can pick out an intentional state by the fact that it exhibits a mental relation with some object or state of affairs.
For example, anyone who has a desire must have a desire that is directed towards some thing or some occurrence. Someone could desire a new iPad Air. I could desire popcorn to eat while I stream a show from my Apple TV. Or you could desire a new app for your iPhone. However, none of us can form a desire in isolation. Our desires must be directed towards some object or state of the world. This feature is what makes desire an intentional state.
On the other hand, flying, sailing, and talking are all non-intentional relations. They do not speak to the mental properties of the entity that is performing the action. A plane can fly. A boat can sail. SIRI can talk. As you can see, whereas intentional states give insight into the inner workings of the mind of an individual, non-intentional states do not.
Some people take the view that business corporations like Apple are true intentional agents. Corporations, just like you and me, have the capacity to form and unform desires, beliefs, wants, and other mental states, and are able to do so in a rational manner.
To many people, it seems obvious that Apple is an intentional agent. Apple fears; it loves; it worries; it even gets angry. It has wants, hopes, and desires that it seeks to fulfill. So we can point the finger at Apple and blame it for wrongdoing. Newspapers and magazine articles routinely make this assumption. Journalists just can’t resist attributing mental states directly to the corporation.
For instance, a recent piece in the Washington Post pronounced, “Apple loves Clean Designs.” While discussing the feud between Apple and Samsung, a PBS article stated, “Apple is upset over the Google Android phone operating system used by Samsung and other manufacturers.” And a headline in Forbes boldly proclaimed, “Apple’s New iPads Show Company Believes It’s Alone in the Tablet Market.”
Notably, these excerpts don’t say, “Tim Cook loves clean designs,” “Apple’s shareholders are upset,” or “Apple’s board of directors believes it’s alone in the tablet market.” Instead, it is Apple—the corporation itself—that possesses these mental states.
We’ll call this way of looking at things the theory of Corporate Moral Agency. Believers in Corporate Moral Agency seem to take claims that Apple can think or feel, love or hate, quite literally.
In response to this claim, you’re probably thinking, “Not so fast! When people say things like, ‘Apple wants to release a new iPhone next year,’ they don’t mean it in a literal sense. Instead, we should take their statements metaphorically.”
Indeed, this is a common argument against the existence of corporate intentionality. As William G. Weaver has written, scholars who argue that corporations truly possess mental states have simply created “a metaphysics out of an accident of metaphor.”
But it’s not that straightforward. As we’ll see, some serious thinkers do argue that no metaphor is involved. According to these writers, a corporation like Apple does quite literally think, feel, and make decisions. If they’re right, Apple does possess moral agency, and can reasonably be praised or blamed for its actions.
Is Apple More than the Sum of Its Parts?
Opponents of Corporate Moral Agency don’t believe that groups have mental states (see the articles by Manuel Velasquez, David Ronnegard, and R.S. Downie). They hold that the people who do think that way have been duped. To attribute mental states to a corporation is always an indirect way of attributing mental states to its members. Therefore, to say that Apple wants to produce innovative products is simply shorthand for saying that all or most of Apple’s employees, or perhaps Apple’s CEO or board of directors, want to produce innovative products. Many opponents of Corporate Moral Agency would agree with Anthony Quinton that an organization like Apple is nothing more than the sum of its parts.
Defenders of Corporate Moral Agency strongly disagree with this claim. In their view, groups can be divided into three distinct categories: 1. pure aggregates, 2. unorganized groups, and 3. incorporated groups—the last of which does indeed exhibit intentionality.
At the most basic level is the pure aggregate. This group is composed of individuals who share some common characteristic but are not co-ordinating their actions to achieve a common goal. “Shoppers at an Apple store,” “people with blonde hair,” or “middle-class Americans” are examples of pure aggregates. If someone were to say, “The shoppers at the Apple store want to donate to Apple’s favorite charity, Product Red,” the speaker could mean only one thing: all or most of the shoppers want to donate to Product Red. Although the individuals possess intentionality (namely, the desire to donate to Product Red), there is no group intentionality to speak of. Accordingly, the group cannot be a moral agent. Therefore, if someone were to follow up the first statement and say, “The shoppers at the Apple store are morally praiseworthy,” that person would just mean that all or a majority of the shoppers deserve moral commendation. So far, the two sides in this debate agree.
A more complex collective is the unorganized group. Although the members of this group have co-ordinated their actions, the group itself lacks decision procedures. Examples of unorganized groups include two people who are walking together and a group of beachgoers who have banded together to save a drowning child. Members of these groups have joined together and co-ordinated their actions to achieve a common goal (walking together and saving a drowning child, respectively). However, these groups still lack internal decision-making procedures. There is no leadership structure or locus of decision-making authority that would control these groups in other circumstances. Although these groups have a central goal, their mental states are still reducible to the mental states of their individual members. Accordingly, this group also lacks intentionality. Again, both sides agree.
Finally, the third and most complex type of collective is the incorporated group. This is (according to one side in the debate) the only group that exhibits intentionality. Corporations are perfect examples of incorporated groups. They have standing decision-making procedures that allow the group, as a whole, to update its beliefs and revise its goals. Here’s where the divide occurs. Whereas opponents of Corporate Moral Agency again see no evidence of group intentionality, proponents of Corporate Moral Agency believe that corporate decision-making procedures are constructed in such a way as to produce mental states that can properly be attributed to the corporation itself. According to this point of view, Apple really does have a mind of its own—a mind that’s quite distinct from the minds of its employees and stockholders.
Does Apple Make Its Own Decisions?
Peter French has mounted one of the most comprehensive defenses of Corporate Moral Agency. In a number of books and articles, he argues that corporations possess Corporate Internal Decision structures that allow them to qualify as moral agents. French identifies two main parts to this Corporate Internal Decision structure: 1. an organizational hierarchy that sets forth the corporate power structure and 2. rules that indicate when a decision is validly made and can therefore be attributed to the corporation itself.
Let’s suppose Apple needs to decide whether or not to pay a one-time dividend to its shareholders. In Apple, as in other corporations, the Board of Directors makes this decision. On the table before the Board members lies a daunting stack of papers that has been drafted by subordinates for the purpose of informing this decision. Some of the papers have been prepared by the Chief Financial Officer. Others have been prepared by Apple’s general counsel. Yet others are recommendations from the Senior Vice President of Marketing on better ways to use the money to grow Apple’s profits. All of these reports have been developed within Apple’s Corporate Internal Decision structure. According to French, employees’ personal reasons for wanting the corporation to act in a certain manner (either pay the dividend or withhold it) will be diluted by virtue of the fact that they have been filtered through Apple’s Corporate Internal Decision structure.
For example, the General Counsel may personally want the dividend to be paid because he owns many shares of Apple and wants to receive a large payout now. However, Apple’s Corporate Internal Decision structure requires him to prepare the report from an impersonal vantage point. The Corporate Internal Decision Structure—as governed by corporate law—forbids personal considerations from coming into play and works to prevent this from happening.
After reviewing and discussing the reports, the Board of Directors votes to pay out a dividend. They, too, have evaluated the information from an impersonal perspective. By voting, French maintains, the Board is ratifying a corporate decision based on corporate reasons, not aggregating a variety of personal decisions based on personal reasons.
In fact, when the Corporate Internal Decision structure is followed, the corporation has reasons for paying out the dividend that are distinct from any personal reasons individual Board members may have had. This would be true even if all of the Board members personally preferred that the dividend be withheld yet still voted to pay out the dividend. When these discrepancies arise, it is actually evidence of a well-functioning Corporate Internal Decision structure. French argues that the corporation has its own reasons for acting as it does, and therefore, its intentionality cannot simply be a mere aggregation of the preferences of its members.
The Apple iMind
Peter French is not the only one to advance a theory of corporate moral agency. Christian List and Philip Pettit have developed one of the most innovative accounts of Corporate Moral Agency. Their argument is based on the doctrine of functionalism. Functionalism is the view that mental states are defined, not by their internal characteristics, but rather, by the manner in which they function in a given system. The central idea of functionalism is that thinking is equivalent to computation; our minds are essentially computing machines. According to Paul Churchland, functionalism is currently “the most widely held theory of mind among philosophers, cognitive scientists, and artificial intelligence researchers.” Therefore, its application to Corporate Moral Agency should be given careful consideration.
As a basic illustration of how functionalism works, consider the concept of desire. A functionalist would identify desire according to the causal role it plays within a system. For instance, the mental state of desire would occur when certain inputs are introduced into a system and the system reacts by producing a certain desire-related output. More specifically, desire could be identified as the mental state that results when a system experiences a stimulus that causes it to work towards achieving a goal.
Functionalism has frequently been used to defend a theory known as Strong Artificial Intelligence. According to this view, if your Mac were to run the appropriate software, it would be capable of experiencing mental states and would have a mind of its own. Can you even imagine how long the lines would be on launch day if Apple were to develop this software and release it as iMind?
Although the release of iMind is likely far off, the day of corporate minds is already here, at least according to List and Pettit. These two philosophers build off this functionalist framework to show that a corporation functions in the appropriate manner in order to be considered a moral agent. In particular, they argue that decision-making procedures allow corporations to form and unform intentional attitudes and to act on those attitudes in a rational manner. Like Peter French, List and Pettit identify a distinction between personal mental states and corporate mental states. In particular, they argue that, because of the distributed decision making that occurs in corporations, corporate mental states are almost certain to diverge from individual mental states.
According to this view, it is possible to think of Apple as a brain and to think of the members of the corporation as individual neurons. Employees have their own personal reasons for taking actions and their own mental states; however, the process by which they interact with each other subordinates these personal intentions and allows a collective corporate intention to emerge.
So far, we’ve seen that Apple is an autonomous agent. It has desires, beliefs, and other intentional states, distinct from the intentional states of its members. This satisfies the first condition for moral agency, but what about the second? Does Apple have the ability to evaluate whether a given action is moral?
Proponents of Corporate Moral Agency emphasize that in order for an agent to be held morally accountable, the agent need only possess the capacity to make value judgments. The agent doesn’t actually need to consider the morality of a given decision—the mere potential to do so is sufficient.
There is a strong—and straightforward—argument that corporations meet this requirement. The line of thought is as follows: the individual humans who collectively make up the corporation have the ability to make value judgments in their individual capacities. Nothing prevents individuals from making value judgments in a collective manner. Therefore, the corporation, as an entity, has the ability to evaluate the morality of its actions.
As in the case of intentionality, the corporation can reach moral conclusions that differ from the moral conclusions of any of its members. Again, this is possible because of decision-making structures within the corporation. List and Pettit develop a judgment aggregation paradox that shows how this disconnect can occur. Their example is not meant to replicate precisely the decision-making process within a corporation. Instead, it is meant to provide a basic example that can be generalized to corporate decision making more broadly.
Suppose Apple needs to choose a new processor supplier for its upcoming Apple Watch. Apple has settled on one factory but will only proceed with the deal if working with the factory is, on balance, a morally good decision. The corporation tasks three executives with determining the morality of the action. Amongst themselves, the executives decide that the decision would only be moral if the factory is 1. environmentally friendly and 2. economically beneficial to the local community. In other words, the executives agree that the decision to source Apple’s processors from that factory should be affirmed only if those two conditions are met.
Executive A believes that the factory is both environmentally friendly and economically beneficial, so he votes that the deal is moral. Executive B believes that the factory is environmentally friendly but also that it would be economically harmful to the local community; accordingly, he votes that taking the deal would be immoral. Finally, Executive C feels that the factory is not environmentally friendly but that it is economically beneficial to the local community. Due to his concern over the negative effects on the environment, he believes that accepting the deal would be immoral. The votes of the executives are reproduced in the table below.

               Environmentally    Economically    Moral to
               friendly?          beneficial?     take the deal?
Executive A    Yes                Yes             Yes
Executive B    Yes                No              No
Executive C    No                 Yes             No
Majority       Yes                Yes             No
From this voting pattern, we see that each premise has majority support, but the conclusion does not. More specifically, the majority belief is as follows: 1. the factory is environmentally friendly; 2. the factory is economically beneficial to the local community; and 3. it would not be moral to finalize the deal with that factory. Since the executives have already stipulated that the act would be moral if both criteria are met, this set of beliefs is inconsistent.
In circumstances like this, there are two options: the corporation can adopt either a premise-based approach or a conclusion-based approach. If the corporation goes with the conclusion-based approach, it will lack any justification for reaching that conclusion (after all, a majority of its decision makers believed that the factory was environmentally friendly and a majority believed that it would economically benefit the community). If, however, the corporation adopts a premise-based approach, it will experience no such problem.
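The divergence between the two approaches can be sketched in a few lines of Python. This is a minimal illustration, not List and Pettit’s own formalism; the votes are the hypothetical three-executive example described above.

```python
# Each executive's vote, as a tuple:
# (environmentally friendly?, economically beneficial?, deal is moral?)
votes = {
    "A": (True, True, True),
    "B": (True, False, False),
    "C": (False, True, False),
}

def majority(values):
    """True if more than half of the votes are True."""
    return sum(values) > len(values) / 2

# Premise-based approach: aggregate each premise by majority vote, then
# derive the conclusion from the agreed rule (moral iff both premises hold).
env = majority([v[0] for v in votes.values()])
econ = majority([v[1] for v in votes.values()])
premise_based = env and econ

# Conclusion-based approach: aggregate the executives' own conclusions directly.
conclusion_based = majority([v[2] for v in votes.values()])

print(env, econ)         # True True  -> both premises enjoy majority support
print(premise_based)     # True      -> so, by the agreed rule, the deal is moral
print(conclusion_based)  # False     -> yet a majority judged the deal immoral
```

The inconsistency between the last two outputs is the paradox: the corporate conclusion depends on which aggregation procedure the corporation adopts, not merely on what its members believe.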
Observing this dilemma, List and Pettit argue that corporations have no choice but to adopt the premise-based approach. Doing otherwise would preclude corporations from providing reasons for their actions. List and Pettit emphasize that, without the ability to provide reasons for their actions, corporations would be irrational and unpredictable agents. Since corporations are not irrational and unpredictable agents, they must, in practice, utilize the premise-based approach.
This insight shows that corporations such as Apple are agents, and Apple’s intentional states and moral judgments are not reducible to the intentional states and moral judgments of its members. Apple has its own reasons for acting and evaluates the morality of its actions from its own perspective. Because of this, Apple is a full-fledged moral agent.
The Upshot
It’s one thing to say that Apple is a moral agent. However, it’s quite another thing to say why that matters. Proponents of Corporate Moral Agency (like P.A. Werhane and R.E. Freeman) argue that there is a responsibility deficit that frequently arises when groups take actions. For instance, in a tight group of cars traveling dangerously above the speed limit, each individual driver may rightly choose not to slow down for fear of causing an accident. In this scenario, no individual driver is blameworthy for speeding. After all, any driver who unilaterally slowed down would make the situation worse. Despite the absence of individual culpability, it nonetheless makes sense to say that all of the drivers were responsible for creating the dangerous situation.
When corporations take morally bad actions, individuals within a corporation can frequently disclaim responsibility for any number of reasons. Perhaps every individual was reasonably unaware that the action would cause harm; maybe each one believed that he could not mitigate the harm or that failure to go along with the group would only increase harm. By holding corporations like Apple morally accountable for their actions, we greatly reduce the responsibility deficit. Even when none of the individuals within the corporation is morally blameworthy, the corporate entity as a whole can still be the subject of moral rebuke and legal sanction.
Do you think that a company like Apple is an entity capable of acting morally or immorally, independent of the beliefs or wishes of the individual humans who work at Apple? Regardless of which side of the Corporate Moral Agency debate you ultimately come down on, this chapter may have given you some ideas about what it means to be a moral agent. And maybe the next time you come across someone asking, “Is Apple to Blame?” you’ll have a different perspective on what the question really means.