The T-shirts Accelerating Robot (TAR) team was another Senior Design Project (SDP) team that we observed in the second year of the study. The TAR team was designing a robot that could launch T-shirts to spectators in a stadium during sporting events. This team had an ethics advising team as a collaborative partner in ethics discussions. The ethics advising team consisted of students who were taking a philosophy of science and technology course; they also received instruction on how to provide ethics advising to engineering design teams. The TAR team and the ethics advising team discussed various ethics issues directly or indirectly related to the TAR team’s project.
In the selected segment, they discussed concerns about an emergency shutdown for the robot. At the ethics advising team’s suggestion, the TAR team agreed to design a physical shutdown device so that the operator or anyone nearby could stop the robot in an emergency. The team also mentioned that their initial solution had been to install emergency shutdown software. Here is the discussion segment.
To analyze the discussion, we (researchers) identified several types of meaningful keywords and highlighted them in different colors. Because the conversation focused on safety issues, we first marked keywords representing safety in green. Then we marked user-related keywords in purple and keywords about engineers’ actions in blue. Below is the color-coded conversation with an index.
This discussion segment contained many non-verbal expressions, such as laughter, gestures, and sounds, so we marked them in bold. See below.
Next, we studied the relationships among the keywords, the interpretive meanings in the discussion, team members’ gestures and other non-verbal expressions such as laughter, the team’s particular habits or ways of talking, and any other noticeable clues. First, we noticed that the safety concern, the users’ perspective, and the engineers’ perspective were discussed continuously throughout the segment. All of the color-coded keywords appeared from the beginning to the end of the segment, indicating that the discussion developed continuously. Second, we found an important point about how the TAR team dealt with ethics issues in this discussion. Read the following conversation carefully.
Advisor: Ummm and besides the operator, I know there are only two operators, but will there be anyone actually physically able to like stop it if something does go wrong with it?
TAR Team: What we might do also, is you know, if we talked about the emergency stop earlier, and I think you know, good practice is,… what you usually do is, you have, a… software…but then also you have a physical switch all over the robot which you can run up and you know, pull the lever and it’ll shut off, so we’ll make sure to have one of those on there too.
As seen in this conversation segment, although the TAR team had thought of a software-based solution, they accepted the ethics advising team’s suggestion and revised their design by adding a physical back-up. This indicated that the TAR team had a cultural model different from those of the other SDP teams regarding the relationship between safety concerns and users. Unlike the other SDP teams, the TAR team included a role for the user in the emergency stop process, indicating that the team thought a safe design could include not only the design product itself but also an active role for the users. The TAR team’s cultural model is shown below.
The Saber Sound Effects (SSE) team was one of the Senior Design Project (SDP) teams we observed in the second year of the study. The SSE team was designing sound effects for an electric toy saber. The toy saber for which they were designing sound effects was not a simple toy but an elaborately designed electronic toy resembling a light saber from the movies; their sound effects would have turned the toy saber into a fine collectible. The SSE team was concerned about safety issues because an electric toy could inflict harm if used incorrectly. In the selected episode, the SSE team discussed the possible danger of their design product when used by young children and concluded that charging a high price would reduce the risk because young children could not afford it.
Here is the discussion segment.
We (researchers) began analyzing the team’s conversation by identifying several types of meaningful keywords and highlighting them in different colors. Because the conversation focused on safety issues, we first marked keywords representing safety in green. Then we marked user-related keywords in purple. Because the team suggested charging a high price to prevent young children from buying the electric toy saber, we also marked keywords about price in orange. See below.
We then noticed that, in this conversation, most of the important actions were to be performed by engineers. For example, “give responsibility,” “charging a high price,” “make more,” and “limits” were all supposed to be engineers’ actions, so we marked keywords indicating engineers’ actions in blue. Below is the color-coded conversation with an index.
Apparently, the SSE team was concerned about possible accidents that might happen if young children misused the toy saber. They said that if the toy saber were expensive, young children would not be able to buy it, which would prevent possible harm. Again, we studied the relationships among the keywords, the interpretive meanings in the discussion, team members’ gestures and other non-verbal expressions such as laughter, the team’s particular habits or ways of talking, and any other noticeable clues. Based on these, we found a few characteristics of this discussion.
First, the SSE team’s conversation indicated that the team approached the safety issue from the perspective of the engineer. The following conversation showed that they (the engineers) would give responsibility and would charge a high price.
They seemed to think that if a safe product were provided to qualified users, the possible safety issue could be resolved. In this perspective, the users are recipients, while engineers and other manufacturing or marketing parties are providers who control safety issues. We represented the SSE team’s cultural model of the relationship between safety concerns and users as seen below.
Second, although the SSE team brought up an important safety issue, the solution they found was actually out of their hands. For example, setting a price is not usually the engineers’ job; it is usually decided by the market. Also, the responsibility for safety was shifted to parental guidance.
The Helmet Display (HD) team was another Senior Design Project (SDP) team that we observed. This team designed a “heads-up” information display system for motorcycle helmets. By displaying necessary information, such as the speedometer and fuel gauge, in the visor of the helmet, the system spares the driver from frequently looking down at the dashboard to check this information. The HD team expected that the system would support safe driving by reducing distractions.
During their discussions, the team considered safety issues, security issues, copyright issues, and possible environmental issues. The members of this team were particularly interested in legal issues such as patents, and the team relied largely on legal standards to solve potential ethical problems. For example, they mentioned that any danger related to users’ mistakes could be prevented if the exam for motorcycle drivers’ licenses were adequate to keep unqualified drivers off the road. They also mentioned that any environmental hazard related to their product could be prevented if they followed the applicable laws and regulations for environmentally safe materials.
The HD team held two discussions about the ethics issues involved in their project. An ethics advisor joined their second discussion. The ethics advisor was a student volunteer who was taking a philosophy of science and technology course. We (researchers) hoped that the ethics advisor could help the team explore various ethics issues. Unfortunately, the discussion went more like a Q&A session: the ethics advisor mostly asked questions, and the HD team answered them. In the selected episode, the ethics advisor asked, “What if a driver relies so much on this helmet technology that he cannot drive without this helmet?”
Here is the discussion segment.
To analyze this conversation, we (researchers) first identified several types of meaningful keywords and highlighted them in different colors. We marked ethically salient keywords in red and design product keywords in blue. We noticed that the word “ridiculous” was said three times, so we marked it in green. How the team addressed others was also important, so we marked the word “you” in purple and the word “someone” in orange. See below.
We also marked non-verbal expressions in bold. See below.
Again, we studied the relationships among the keywords, the interpretive meanings in the discussion, team members’ gestures and other non-verbal expressions such as laughter, the team’s particular habits or ways of talking, and any other noticeable clues. Based on these, we found a few characteristics of this discussion.
First, the ethics advisor said nothing about “not designing this helmet,” yet the team automatically took the question as a challenge to the very existence of their design and tried to argue that the possibility raised in the question was not a relevant reason to give up on their design. Overall, the team’s attitude and affect during the discussion appeared defensive and protective of their design.
“It’s ridiculous not designing this product because of possibility that someone forgets how to look at the dash , then you should say why….you know it’s ridiculous”
Second, the team dismissed the problem raised by the ethics advisor as minimal and unimportant, saying “it’s ridiculous” to consider it. In fact, the word “ridiculous” was mentioned three times in this selected episode. Overall, the Helmet team showed defensive, protective, and dismissive reactions toward the social implication question posed by the ethics advisor.
Third, the HD team seemed to think only from the designers’ perspective. Unlike the SRC team, who addressed the users as “you,” the HD team addressed the users in the third person, as “someone.” When the Helmet team used the word “you,” it referred to themselves or other engineers, as in “You know it’s ridiculous” and “You have to say okay.”
The team seemed to assume that questions not directly related to engineering, such as social implication questions, were unhelpful to their design, so they tried to defend their design from the (perceived) unfavorable opinions of non-engineers. There was no indication that the team might see the problem from the users’ perspective. Considering these results, the HD team seemed to think that issues not directly related to the technical requirements of engineering were irrelevant to their design and that they needed to protect their design from those irrelevant issues. We represented this relationship as a cultural model of the HD team’s engineering ethics understanding (see below).
The Smart Recipe Cart (SRC) team was one of the Senior Design Project (SDP) teams that we (researchers) observed. The SRC team was designing a tablet screen that would attach to shopping carts and suggest possible recipes based on the items in the cart. The team broadly discussed safety issues, such as the safe use of batteries in their product; copyright issues, such as sharing recipes with recipe providers; and security issues, such as theft of or damage to the attached tablet screen. The SRC team also discussed a few issues concerning the social implications of their product, such as the possibility of changing users’ lifestyles through dependence on the product, and the potential impact of encouraging users to purchase more food than planned by suggesting various recipes, with possible effects on both users and grocery stores.
In the selected episode, one of the team members posed a question about their responsibility for the difference between the food depicted in a suggested recipe and the actual food the user would produce. If the picture accompanying the suggested recipe were attractive enough to tempt users to buy it, but the result was disappointing, would the team be ethically responsible for this outcome?
Here is the SRC team’s discussion segment about this issue.
To analyze this conversation, we identified several types of meaningful keywords and highlighted them in different colors. We marked ethically salient keywords in red and design product keywords in blue. We also marked non-verbal expressions in bold. See below.
We then noticed that this team addressed users as “you,” rather than as “users” or “customers.” This was unique among the engineering student teams, most of which addressed users as either “users” or “customers.” Thus, we highlighted keywords addressing users in purple.
Finally, we added explanatory words to the conversation and completed an interpretive version of the discussion.
Next, we studied the relationships among the keywords, the interpretive meanings in the discussion, team members’ gestures and other non-verbal expressions such as laughter, the team’s particular habits or ways of talking, and any other noticeable clues. Based on these, we found a few characteristics of this discussion.
First, the SRC team seemed to take responsibility, at least in part, for the users’ possible disappointment. The picture of the food in a suggested recipe would most likely be professionally prepared, and users might not be able to produce such professional cuisine and could be disappointed. The team raised this issue by asking, “Is it ethical that really, really looking good food there and have all the recipes and then sell it as a crappy food?” One team member also gestured as if offering something (see the picture below). Although they said, “It is truly not our problem,” and shifted the responsibility to the recipe provider, saying, “Whoever made the recipe, their fault that they put the picture that’s not true to the recipe,” the team showed that they considered this issue a possible ethical problem related to their design product.
Second, as mentioned above, the SRC team referred to the users with variations of the second-person pronoun “you.” This language was a habit unique to this team, as all the other engineering student teams we observed addressed users as “users” or “customers,” or used third-person pronouns such as “him” or “they.” Phrases such as “how good a cook you are,” “After you make it, it doesn’t look the same,” and “If you keep trying, it will eventually look even better” indicated that they imagined themselves in the users’ position while discussing this matter. They discussed the responsibility issue as if they were giving advice to a friend, and even when they said, “It’s your fault,” it was said in a friendly manner, with laughing and teasing.
Third, the SRC team seemed to side with the users rather than with the recipe providers who might become their business partners. In their conversation, they maintained a friendly manner when talking about users, addressing them in the second person; however, they demanded responsibility from the recipe providers in an accusing manner, calling them “whoever made the recipe” and pointing out that it was the recipe providers who had produced the unreliable picture, saying, “the picture that’s not true to the recipe.” It is often expected that business partners, including designers, manufacturers, and marketers, take one side while users or customers take the other. In this case, however, the SRC team, who designed the product, seemed to favor the users over their potential business partners. We represented this relationship as a cultural model of the SRC team’s engineering ethics understanding.