Introduction
In recent years, a number of terms have emerged that associate technology with a conception of the good. Ethical, responsible, safe, trustworthy, sustainable, humane, human-centered, and accountable are just some of the adjectives used in contemporary discourse to describe what technology should be like. As socially-minded individuals and organizations grapple with their role in advancing socially desirable goals, an essential first step is to understand and draw upon common definitions of existing terms to enable greater impact, coalition building, and collaborative action, all toward strengthening a broader agenda.
This document provides brief definitions and histories of responsible tech, ethical tech (and its offshoot, ethical AI), public interest tech and tech for social good, humane tech, tech stewardship, human-centered design, safety by design, design justice, relational design, prosocial design, human rights, corporate social responsibility (CSR), environmental, social, and governance (ESG) goals, and diversity, equity, and inclusion (DEI).
The terms can be divided into three broad categories: technology-specific terms, design terminology, and generic terms that are relevant to technology debates. Within their respective categories, the terms are ranked by frequency of use: a simple Google search reveals that responsible tech is the most popular term, followed by tech for social good, ethical tech, public interest tech, humane tech, and tech stewardship. Within design terminology, human-centered design and user-centered design are the most frequently used by a comfortable margin, followed by privacy by design, safety by design, design justice, relational design, and prosocial design.
While some terms, like responsible tech and ethical tech, encompass many other definitions and share the common goals of strengthening ethical practices, safety, trust, transparency, and accountability, other terms emphasize product-level considerations or factors at the human, sector, and societal levels. For example, ethical tech and tech stewardship focus more on business practices, while humane tech focuses on how technological outcomes affect individuals and communities.
The choice of terminology and how these terms are communicated and implemented can be a reflection of political and ideological differences or ‘tech washing’. For example, terms like ethical AI have at times been criticized for emphasizing self-regulation by companies at the expense of binding legal regulation. Some of the terms are used by virtually everyone, albeit with different meanings, while others have group-specific uses. For example, humane tech was first introduced by scholars and activists in the 1970s, but its adoption by the Center for Humane Technology has resulted in a close alignment between the term and the organization. Likewise, tech stewardship and prosocial design are generic terms that are today associated with specific networks organized around those terms. Therefore, the definitions below describe both the origins and genealogy of these terms, and their more recent usage by specific organizations and institutions.
Another factor is practical use. Terms like responsible tech invoke multiple value systems, legal requirements, and business practices; therefore, they cannot be associated with one set of guidelines, standards, or practices. Most of the terms in this report are similarly vague, generic, and flexible. Other terms like human-centered design also started as generic ideas, but have been incorporated into specific international standards over time. The adoption of humane tech, tech stewardship, and prosocial design by practitioner networks, NGOs, and consultancies has likewise narrowed their scope in recent years. Finally, human rights, CSR, ESG, and DEI are not technology-specific terms at all; their incorporation into technology law and practice is newer and is contingent on relevant stakeholders’ definitions, mission, and practical considerations.
All in all, this report is an attempt to synthesize our understanding of some of these key terms, acknowledging that others may use the same terms to mean slightly or entirely different things. This menu of terms can help individuals and companies develop a clearer and more robust understanding of how to frame and address their own impact and where gaps might lie.
It is important to acknowledge the challenges of implementing the principles behind all of these definitions, at all levels, in technology companies and technology-enabled organizations. Yet the overlapping nature of these definitions also allows organizations to see the importance and interconnectedness of concepts that show up over and over again. Consequently, while these definitions are distinct in their scope, they should not be seen as mutually exclusive.
Technology-specific Terms
Responsible Tech
Responsible tech serves as an umbrella term covering the entirety of socially desirable technological pursuits and practices. The term itself and related terms like responsible research and innovation have been in use for decades. According to “The State of Responsible Technology,” a report published by MIT Technology Review Insights in 2023, responsible technology is “the active consideration of values, unintended consequences and negative impacts of technology, it includes a wide variety of voices in the adoption and deployment process and seeks to manage and mitigate potential risk and harm to all communities affected by that technology.” Other definitions highlight a critical relationship between stakeholders and technology through words such as protection, fairness, respect, unintended consequences, impacts, inclusion, diversity, transparency, and agency.
Responsible tech can be said to prioritize the responsibility of businesses, as well as other stakeholders, in producing technology in community-preserving ways. It links ethical considerations about tech development, use, and impact with the concrete responsibilities of businesses. It may also include questions about what kind of governance is needed internally and externally to ensure the health and safety of stakeholders, which include not only end-users but also the people inhabiting the broader ecosystems in which businesses operate. An additional challenge in bringing visions for responsible tech to life is the difficulty of operationalizing principles into practice. Finally, generic terms like responsible tech and ethical tech may or may not mention the role of political, socio-economic, and cultural power relations in shaping societal outcomes. Thus, calls for acknowledging and reversing these power relations have become a demand on the part of critics who resist the reduction of responsible tech to good business practice.
Public Interest Tech (PIT) and Tech for Good
Public interest tech refers to “the application of design, data, and delivery to advance the public interest and promote the public good in the digital age.” As a worldview whose guiding idea is to advance public benefits and the public good through technology, it has a long genealogy and has generated enormous momentum since the mid-2010s. Definitional variations notwithstanding, the broader field is also known as tech for social good, although one nuance should be noted: tech for social good emphasizes the use of technology to solve social problems, while public interest tech understands technology as a solution as well as a potential problem.
What sets PIT apart from many of the other terms in this report is that while others seek ways to align the private incentives of businesses and customers with a broader idea of public good, PIT requires the development, deployment, design, and use of technology to be inherently driven by public good concerns, such as the collective needs for justice, dignity, and autonomy. Its frame of reference is society as a whole, rather than the private actors operating within it. Today, there exist professional networks built around PIT principles.
Indeed, its own practitioners view PIT as an emerging field without clear definitions and practices. A systems-level critique is that PIT may favor piecemeal solutions at the expense of systemic reform. Accordingly, even when its practitioners seek socially beneficial transformations, they risk working within or else reproducing existing social and political problems.
Ethical Tech
Ethical tech brings together philosophical considerations about what is right with practical applications of tech development and use, including (but not limited to) how a product will be used, whether or not it could be addictive, data privacy and data use, engineering and engineered bias, and transparency of operations. For example, Santa Clara University’s Markkula Center sums up the definition of ethical tech as “the application of ethical thinking to the practical concerns of technology.” It is typically used synonymously with responsible tech, but some analyses point to differences between the terms.
In some cases, the word ethical refers to hard choices in the face of dilemmas. Professional ethicists typically associate ethical conduct with Kantian deontology, utilitarianism, and virtue ethics. However, technology companies often take a more philosophically shallow approach to ethics. Finally, it is worth mentioning that ethics may be understood as compliance with fundamental values above and beyond what the law requires.
Perhaps the most prolific subcategory of the term ethical tech is ethical Artificial Intelligence (AI) or AI ethics. The operationalization of AI ethics should take into account the multiplicity of definitions around it. A research paper devoted to a comparison of AI ethics guidelines notes that there is global agreement that transparency, justice and fairness, non-maleficence, responsibility, and privacy should be core AI ethics values, but also registers a tension in the implementation of these values due to differences in definitions and interpretations.
Ethical tech in general, and ethical AI in particular, have generated interest among businesses and lawmakers. Some research and practitioner communities have named themselves after the term, such as the Ethical Tech Collective. Some companies have adopted internal AI codes of ethics, which typically address high-level points related to policy, education, and technology. According to a report published by TechTarget, companies develop their AI code of ethics based on “principles” and “the motivation that drives appropriate behavior,” although the definition of “appropriate behavior” is extremely vague and broad. Calls for ethical AI have galvanized lawmakers in the European Union, Canada, and elsewhere to pass laws that address AI risks and harms specifically. In the United States, the White House released a “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People” in October 2022 to ensure safe, effective, non-discriminatory, privacy-affirming, explainable systems under human control, but these concepts are yet to be enshrined into law and face a difficult path forward.
One of the main criticisms of ethical tech is its tendency to rely on businesses to self-regulate. The term’s popularity owes much to public pressure for companies to exercise self-restraint when laws, especially in the United States, fail to regulate some of the excesses of contemporary technologies. Thus, the accusation that ethics guidelines and principles are toothless, or that they serve the function of “ethics-washing,” is often brought up.
Humane Tech
Humane technology was a relatively popular term in the mid-1970s, when researchers known for their humanistic education work began research projects on the intersection of human-centered concerns and technology. More recently, the term has been popularized by the Center for Humane Technology (CHT), whose work focuses on aligning “technology with humanity’s best interests.” In other words, this definition is based on human-centric design that prioritizes the health and wellness of human beings and society rather than the rights or profits of companies. The organization emphasizes the need to consider future generations, attention and mental health, the information environment, democratic functioning, and privacy and safety.
As of 2023, humane technology is in part associated with the CHT’s agenda, but the use of human, humane, and human-centric as a reference point for the interaction of technology, humans, and the ecosystems they are embedded in has become quite common. This framing has drawn praise for questioning the ethical commitments of tech companies but has also drawn criticism for stifling planetary and species-transcending conversations about technology. Beyond this general critique, much of the scrutiny focuses less on humane tech as a concept, and more on the CHT’s problem definitions and proposed solutions.
Tech Stewardship
Technology stewardship has a relatively long genealogy within the technology sector. Research suggests that some companies had technology stewardship practices built into their information technology (IT) departments in the 1990s. Nonetheless, the term’s popularization took place in the 2010s, first by the Engineering Change Laboratory, and later by the organization Tech Stewardship. The latter describes its line of work as building “[a] professional identity, orientation, and practice.” It advocates discussing, refining, and imagining new ways to shape technology to be purposeful, responsible, inclusive, and regenerative. Accordingly, tech stewardship takes into account the effect of technology on communities’ economic, environmental, and political well-being.
Design Terms
Human-Centered Design
The term human-centered design originates in industry and is among the most popular design frameworks, with thousands of courses, company profiles, toolkits, and news articles mentioning it. The International Organization for Standardization has developed an international standard for human-centered design for interactive systems. Broadly speaking, human-centered design aims to put the needs, expectations, contexts, and constraints of people - especially the end users of products - at the center of design. As such, its goal is to align business strategy with value systems. Potential users are considered participants at every stage of design. Although the term is not technology-specific, practitioners inspired by it have used human-centered computing and human-centered artificial intelligence as specific applications of human-centered design principles.
As the vague definition above suggests, human-centered design does not advocate a uniform approach to design beyond a minimal commitment to user experience and context. Its user-centric approach to human interactions, and its overall anthropocentric (human-centric) orientation to design as such, have been a focus of criticism. Design practitioners have responded by introducing a range of new concepts, theories, and solutions under this umbrella term. Dissatisfaction with the term has also led to the proliferation of new approaches to design, such as safety by design, design justice, relational design, and prosocial design.
Safety By Design (SxD)
Safety by design prioritizes safety as the fundamental value around which design choices should be structured. Similar to approaches like security by design and privacy by design, the goal is to be mindful of future risks and harms at every stage of product design. Due to their emphasis on the prevention of harm, these terms have become popular among lawmakers as well. While a web search for human-centered design reveals overwhelming business interest in defining and applying the term, safety by design yields results from online safety regulators, lawmakers from around the world, and policy-driven initiatives, in addition to businesses. The incorporation of privacy by design principles into the European Union’s General Data Protection Regulation (GDPR), the region’s data privacy and protection law since 2018, is a testament to policy interest in embedding values in design through legal regulation.
Design Justice
Design justice refers to a field of design practice, but the term itself is generally attributed to design scholar Sasha Costanza-Chock’s 2020 book Design Justice: Community-Led Practices to Build the World We Need. Costanza-Chock argues that the goal of design should be collective liberation, ecological survival, and the dismantling of structural inequalities. Arguing that human-centered design, user-centered design, and design for social good frameworks ignore existing power structures and inequalities, the book calls for reimagining the production and design of technologies to center the needs and perspectives of marginalized communities.
Relational Design
Human-centered design’s human-centric value system is precisely what some critics find problematic; they argue instead that design choices should reflect the primacy of relationships. Some relational design scholars and practitioners emphasize the role of community-building through design, whereas others go beyond human communities to take into consideration the interrelatedness of humans and non-human elements of nature. Differences notwithstanding, relational design approaches all seek to deconstruct the hierarchical idea of design whereby the needs and constraints of some individuals are prioritized over those of communities and lived environments.
Prosocial Design
Prosocial design refers to efforts to ensure cohesiveness, trust, inclusion, and cooperative behavior in group settings – especially corporate ones. The Prosocial Design Network, an organization that defines prosocial design as “evidence-based design practices that bring out the best in human nature online,” has repurposed it to improve online spaces. Likewise, game design has received attention from prosocial design research and practice.
Generic Terms Relevant to Technology
Human Rights
Human rights have broader applications than those associated with technology. Nonetheless, the ensemble of international treaties, domestic constitutions and laws, and civil society advocacy subsumed under the term human rights can be an invaluable resource for the technology sector. Many of the potential risks and concrete harms resulting from the implementation of today’s technologies can be framed as violations of privacy rights, civil-political rights, or consumer rights, or as violations of the principles of equality and non-discrimination. Human rights are comparatively well-known norms supported, at least in principle, by United Nations member states. While it is true that there are no international treaties specifically addressing the risks and harms of contemporary technologies, such as artificial intelligence, the broadly defined set of rights in the Universal Declaration of Human Rights (1948), treaty law, and national constitutions is well-positioned to serve as a normative framework to regulate technology. In fact, a number of bills aiming to regulate digital content, artificial intelligence, or tech market competition in the European Union, Canada, and beyond make explicit references to fundamental rights enshrined in national constitutions and regional treaties.
Even though legal implementation through state institutions and civil society pressure are key mechanisms for the operationalization and enforcement of human rights norms, businesses have increasingly resorted to conducting human rights impact assessments as part of their social responsibility efforts. Following the publication of the UN Guiding Principles on Business and Human Rights in 2011, dedicated conferences and academic journals have been exploring avenues for collaboration between the business and human rights communities. What is more, the risks posed by contemporary technologies have pushed conventional human rights organizations like Amnesty International and Human Rights Watch to become more active in shaping conversations around the future of technology. Finally, consultancies like BSR develop business toolkits in which human rights serve as part of the guiding framework.
Corporate Social Responsibility (CSR)
CSR refers to companies’ efforts to contribute to social good and to minimize harm to society, broadly defined. It encompasses a broad range of social initiatives guided by a company's values, including elements of philanthropy, ethics compliance, and assessment. As a practice of self-regulation, CSR is closely tied to business ethics. It can be said to have much in common with environmental, social, and governance (ESG) goals (see below), although CSR focuses on internal programs, policies, and practices, assessing a company's impact on stakeholders, while ESG goals evaluate a company's external impact on society through environmental sustainability, social responsibility, and corporate governance criteria. Today, many companies have self-designated CSR goals and teams, and CSR terminology is broadly acknowledged by international organizations, too.
Companies’ compliance with their CSR commitments is evaluated qualitatively. The benchmark against which evaluation takes place is often the performance of others in their industry. Although the list is not exhaustive, CSR is commonly understood to have four pillars: environmental social responsibility, ethical/human rights social responsibility, philanthropic corporate responsibility, and economic corporate responsibility. CSR efforts may be measured against the broader social values they claim to serve, as well as their strategic effectiveness. A CSR strategy can be considered effective if it leads to increased employee engagement, improved bottom-line financials, more support for local and global communities, increased investment opportunities, press opportunities and brand awareness, increased customer retention and loyalty, and a strong employer brand. Whether these other-regarding and self-regarding motivations align is, of course, a matter of much debate.
As technological innovation gains pace, the CSR framework, as a well-established field of self-regulation, has come under the spotlight. Some Big Tech companies, like Microsoft, have a clearly identified CSR approach, while others, like Amazon, use the language of ESG or sustainability to state similar commitments. Some of the terms discussed in this report, particularly ESG, may have overtaken CSR in popularity, but it is safe to argue that, thanks to its long-standing value commitments and regulatory practices, CSR will continue to guide efforts to develop and use technology safely and responsibly.
Environmental, Social & Governance (ESG)
Environmental, social, and governance (ESG) goals have been circulated by international organizations since the 2000s and became popular terminology in business ethics and practice in the mid-2010s. The increasing attention to the environmental cost of doing business has resulted in the adoption of sustainability goals in business self-regulation. The realization that addressing environmental concerns as stand-alone challenges would be futile in the face of other social and political problems has led to the incorporation of a more comprehensive framework for change under the term ESG. The European Federation of Financial Analysts Societies (EFFAS), in conjunction with the Society of Investment Professionals in Germany, has developed topical areas for ESG reporting. Based mostly on the 2021 EFFAS report, the subjects under ESG include environmental factors, such as energy efficiency, greenhouse gas emissions, water consumption, and waste; social factors, such as staff turnover, training and qualification, maturity of the workforce, absenteeism rate, diversity, and human rights; and governance factors, such as executive policy, executive oversight, litigation risks, corruption, and revenues from new products. Needless to say, different businesses or organizations come up with similar yet distinctive values, commitments, and procedures.
ESG goals tend to overlap with responsible, ethical, or humane tech principles. Environmental issues related to technology can range from the handling of e-waste to the energy and water usage of data storage centers, and to the real-world implications of climate disinformation. The social impact overlap between ESG and technology is especially clear, as numerous responsible tech organizations have been pushing for a safe, responsible, human-centered approach to technology development and accountability. The overlap can be observed in debates around bias, diversity, human rights, privacy (which applies to the workplace as much as anywhere else), the digital divide, and digital literacy. Finally, governance goals have a bearing on anti-corruption, antitrust, executive oversight, overall legal compliance, and the potential for litigation in the technology industry.
It goes without saying that ESG has attracted negative attention, too. Spokespersons for some large technology organizations argue that ESG mandates present a distraction from unfettered innovation. Companies that seek an alignment between the normative demands of ESG and the profit motive find themselves at a disadvantage against companies that ignore ESG. Another criticism takes note of the minimalist interventions mandated by ESG and CSR discourse, which may fall short of the broad transformations required for a healthy climate and just society. Finally, some argue that the language of sustainability assumes that the right kinds of technological change can strike a balance between ecological goals and capitalist growth, whereas in reality this optimistic vision may not be realized.
Diversity, Equity, and Inclusion (DEI)
The technology sector and tech investors have long been criticized for failing to hire and support women and people of color, for deploying end products that replicate and amplify bias and discrimination, and for deepening existing inequalities on the basis of class, race, gender, and ability status. For example, the World Economic Forum’s 2023 Global Gender Gap Report finds that 30% of AI sector employees were women in 2022 – a mere four percentage points higher than in 2016. The increasing popularity of DEI initiatives among policy, business, and academic circles has resonated with both current and potential employees as well as with concerned activists’ demands for improving the technology sector. Some industry leaders have invested in DEI to rectify the historic lack of diversity and inattention to equity. However, one recent survey reveals that DEI efforts have a long way to go in the sector, and a more recent survey finds that more than half of tech-worker respondents want their company to do better in terms of gender- and race-related DEI practices. While more companies appear to have guidelines offering targeted interventions for the tech industry, whether those guidelines are able to produce sustainable change remains to be seen. Additionally, ongoing efforts by US-based ‘anti-woke’ activists to discredit DEI initiatives, coupled with tech sector layoffs that have disproportionately affected DEI teams, raise serious concerns about how meaningful and sustainable any long-term commitments to addressing inequalities created and exacerbated by technology will be.
Conclusion
This report provides a list of some of the key terms and definitions relevant to the responsible technology sector. Newer terms reflect both emerging concerns around the development and use of contemporary technologies and dissatisfaction with the scope of existing terms. While technology is not the sole cause of the economic, social, political, legal, cultural, and military problems facing human societies, the pace of technological change and the risk that changes may not always align with socially desirable outcomes introduce questions that urgently need to be answered.
Additionally, as societies have undergone rapid transformations brought about by the Internet, personal computers, digital media, smartphones, and AI, they have faced a phenomenon known as the Collingridge Dilemma: regulating technology to mitigate negative impacts gets more difficult over time, but in the early stage when regulation is possible, we lack the foresight and urgency needed to assess and prevent these impacts. Every society around the world has experienced harmful outcomes stemming from the failure to govern technology - for example, in the form of disinformation increasing conflict and decreasing trust in government - at a time when we are also on the cusp of the unknown, as the consequences of generative AI are just beginning to manifest across our societies. Getting the right vocabulary to understand, assess, and influence technological change is crucial.
These debates reveal the scale and complexity of issues and raise urgent questions including:
What terms do companies choose and how do these choices determine the harms companies choose to identify, monitor, and address?
Do companies believe they have primary responsibility for preventing harm? Are they incentivized to proactively address issues arising from business models and platforms? If they aren’t, could they be, and if so, how?
What commitments are companies making now? Is it possible to assess the extent to which they operationalize these commitments? If so, how can this be achieved?
Which organizations or entities (e.g., advertisers, investors) in the value chain of companies whose practices lead to harm do not currently perceive that they have a role and responsibility in advancing responsible tech, but should? How can this sense of responsibility be cultivated and normalized?
Who owns the responsibility for advancing healthy technology ecosystems?
Is it enough for businesses to focus on consumer needs? Should human well-being be the overall goal? Or, is there a need to widen the lens to also include environmental considerations and collective governance?
Should ethical decision-making target case-by-case dilemmas or should it take into account larger and more structural change?
Can incremental reforms alleviate the growing range of concerns, or do societies need a revolution to rethink how we conceive, design, develop, and deploy technology?
Who defines socially desirable outcomes and who is responsible for enacting them?
If safety by design and privacy by design have become alternatives to human-centered design in policy circles, how can these more proactive approaches be expanded to all stakeholders?
Finally, what do the Collingridge Dilemma and the debates on terminology choices reveal about when to enact policies and what policies we need to govern technology?
These are some of the questions to consider as technology increasingly disrupts social, economic, political, and cultural norms at home and around the world.
Version 1.0. Contributors: Onur Bakiner, Renee Black, Erika Finlay, Mahtab Laghaei, and Heather Openshaw. September 2023