Responsible Tech Guide
ALL TECH IS HUMAN | 2023
The people, organizations, and ideas of the Responsible Tech ecosystem and actionable ways to get involved.
There’s a vibrant Responsible Tech community.
Welcome to the Responsible
Tech Guide!
You can find the latest version, along with additional resources like
our responsible tech org list, at responsibletechguide.com!
Table of Contents
01
Overview and Welcome
The Five Subject Matter Areas of Responsible Tech, About All
Tech Is Human, Ten Principles, Welcome Letter | Pages 5-10
02
Getting Involved
Common hurdles, ways to get involved, the diverse range of
backgrounds needed, ways to effect positive change, what a better
tech future looks like, and responsible tech activities | Pages 11-28
03
Profile Interviews
Learn from others about their roles in responsible tech, their
career trajectory, and issue areas crucial to building a better
tech future | Pages 29-59
04
Five Areas of Responsible Tech
Responsible AI, Trust & Safety, Tech & Democracy, Public
Interest Tech, and Youth, Tech, & Wellbeing, plus our list of
contributors | Pages 60-94
05
What We’re Learning
Highlights from recent All Tech Is Human panels, reports, podcast
series, and key takeaways | Pages 95-107
06
Staying in Touch
Learn about All Tech Is Human’s team and how to stay in touch! |
Pages 108-110
1
Responsible AI
Responsible AI is at the forefront of ethical technology
development. It emphasizes the need for transparency,
fairness, and accountability in AI systems. Our community of
ethical practitioners works to ensure applications of AI benefit
society without causing harm or reinforcing bias. In the
responsible technology space, it’s important to craft
guidelines, implement robust testing, and advocate for
policies that prioritize ethical considerations while respecting
human rights and equity.
2
Trust & Safety
Trust & Safety teams play a vital role in maintaining the
integrity of online platforms and digital spaces. Practitioners
are not only dedicated to combating harmful content,
disinformation, and cyber threats, they also work to foster an
environment of trust among users. The Trust & Safety field is
evolving and emerging with difficult tradeoffs and challenges
in promoting a secure and trustworthy digital ecosystem.
3
Tech & Democracy
4
Public Interest Tech
5
Youth, Tech, & Wellbeing
About All Tech Is Human
All Tech Is Human’s activities are entirely free — thereby creating a low
barrier for entry — through the support of the Patrick J. McGovern
Foundation, Schmidt Futures, and the Siegel Family Endowment.
The ten principles of All Tech Is Human
1. The future of technology is intertwined with the future of democracy and the human condition.
2. In order to align our tech future with the public interest, we need to involve the public.
6. People often struggle to “find the others” and discover the wide variety of people and orgs committed to co-creating a better tech future.
8. Top-down models have power but often lack a diversity of ideas; grassroots models have ideas but often lack power. We unite these models.
9. Tech innovation moves fast, while our consideration of its impact often moves slowly. We need to reduce the gulf between them.
10. There is a growing awareness of the root causes of our current dilemma, but limited action toward understanding values, trade-offs, and best paths forward.
We cannot align our tech future with the public interest unless we
actively involve the public. All Tech Is Human’s approach brings
together people of all backgrounds and skill levels to learn from
each other, build community, and co-create a better tech future.
Find out more at AllTechIsHuman.org
It’s time for a better approach to tackling wicked tech & society issues
As a society, we are facing a slew of complex tech and society challenges
that evolve each day. Whether it’s understanding the impact of generative
AI, reducing harms online, or considering emerging technologies’ effect on
our civil liberties, the problem space feels endless. But one question
remains: What can we do to ensure our tech future works for all of us?
All Tech Is Human has built a better approach for tackling wicked tech and
society issues. Learning from our interactions with tens of thousands of
individuals around the world through our activities (our Slack community,
mentorship program, summits, mixers, and working groups), we are
disrupting the current approach to tech problem-solving, which is
not working. The three main problems we are committed to resolving are:
Tech and society issues will never be resolved by relying on the wisdom
of a small sliver of society. Social media and emerging technology have
profound impacts on our lives, so it behooves us to have an approach
that incorporates these viewpoints.
Three common hurdles to getting more involved
01 Where to start?
02 Finding community
03 Getting support and mentorship to grow in responsible tech
03 Attend a mixer: Thousands have come together in-person around the world through All Tech Is Human.
04 Use our free career resources: We offer a robust job board, talent pool, support materials, and more.
05 Participate in our mentorship program: Be paired with a mentor based on topical interest and geographic location.
06 Attend a livestream: Our community is intentionally global; individuals are able to gather virtually.
No matter your skill
level, you are needed
We need a diverse range of backgrounds in the responsible tech ecosystem
Responsible Tech
Environmental Studies: How are technologies and computing altering our environment?
Computer Science + Engineering: How can I develop technologies responsibly?
Anthropology: How does technology and culture influence one another?
International Relations: What role can technology play in international affairs and politics?
Economics: In what ways can we balance responsible innovation and economic growth?
Law: How can we ensure the legal protections of individuals in digital spaces?
Digital Design: What is the impact of thoughtful design on technology?
Education: How will technology shape the way we learn?
Statistics: How can we demystify the statistical foundations of "AI" and "Big Data"?
Information Science: How is information stored and disseminated online?
Philosophy: How can we harness theories of philosophy and ethics to better shape technology?
Sociology: In what ways does technology impact our social organizations and relationships?
Art: How does technology shape the way we view art and media?
Health: What role can technology play in developing a more equitable healthcare system?
Social Work: How can we apprise individuals about their digital rights and protections?
Psychology: How can technology influence our minds and behavior?
Community Development: How can communities leverage technology for equity and access?
Policy: As technology becomes more ingrained in society, how can policy evolve to reflect the voice of citizens?
We can’t solve complex tech and society issues alone. Instead, we must
incorporate these disciplines and backgrounds to ensure a better
understanding of the evolving ways technology impacts us — and the options
for improving the current situation.
Three ways to effect positive change
01
Change happens from the inside.
02
Change happens from the outside.
03
Change happens from reimagining
potential tech futures.
The responsible tech movement needs to allow for speaking, but also
listening. A few voices should not drown out many.
What does a “better tech future” look like?
Better Tech Education
Multi-stakeholder Collaboration on Tech Issues
Diversifying Tech Pipeline
Human Flourishing Alongside Tech
Responses to the question, "What does your better tech future look like and
what can we do to achieve it?” surfaced eight distinct categories related to a
perceived root cause of today's problems or an avenue for improving systems
and structures.
Three ways to build a better tech future
01
Creating a more cohesive ecosystem
02
Moving at the speed of tech
03
Getting new voices into the responsible tech ecosystem
Complex tech and society issues require a
diverse range of backgrounds, disciplines,
and perspectives. Too often,
individuals with valuable insight and ideas
are not sitting at the proverbial table; this
needs to change.
There is a global network
If you are looking to get more involved in the growing responsible tech
movement, All Tech Is Human has multiple in-person and virtual
options. Our Slack community (illustrated above) now has over 6k
members across 77 countries. Individuals in our Slack are sharing
resources, learning from each other, and meeting in person in cities
around the world.

So, no matter your location, there is a community waiting for you. Find
all of our projects at alltechishuman.org.
Mentorship Program
All Tech Is Human’s Responsible Tech Mentorship Program has continued to grow
throughout the last two years. In our 2023 cohort, 117 mentors representing 19
countries are leading 300 mentees representing 40 countries. We have had
nearly 1,000 mentees complete the program since its first cohort in 2021!
Below are just some of our incredible mentors who are paying their knowledge
forward to build a more robust responsible tech ecosystem.
Mentorship Program
The Responsible Tech Mentorship Program is a free program run annually to help
build the Responsible Tech pipeline. The program accomplishes this by
facilitating connections and career development opportunities among talented
students, career changers, and practitioners.
Our mentorship program cohorts represent people from a range of fields who
work in Responsible Tech all over the world. In 2023, we had mentors and
mentees from the following fields: Ethical AI, Digital Governance, Tech &
Democracy, Public Interest Technology, Research, UX Research, Product,
Responsible Technology in Healthcare, Tech Journalism, Technology & Wellbeing,
Trust & Safety, Privacy, and Tech Policy.
The All Tech Is Human team reviews applications and creates mentorship pods
that consist of one mentor and three mentees. Mentors are fully vetted, and
many return to participate with us every year. Mentees are college students, new
grads, early and mid career practitioners, and seasoned professionals.
Mentors lead one 1-hour virtual meeting per month with an optional monthly
curriculum provided by All Tech Is Human. Meeting topics include insight into
what it looks like to be a responsible tech practitioner, career search advice,
navigating the field, and more. Depending on what the pod looks like, mentors
have the freedom to tailor the program however they wish. Some groups choose
to work on a group project over the course of the program together, which might
be an article, webinar, or podcast.
If you’re interested in applying to participate in the future, join the waitlist linked
on the mentorship program page of our website to be notified when applications
open for the next cohort. You can find additional info at AllTechIsHuman.org or
reach out to our team.
Responsible Technology Career
Learnings from the Job Board

Rebekah Tweed first started the Responsible Tech Job Board in September of 2020 in an effort to curate into a single resource the many disparate opportunities that constellate our shared center of gravity – tackling the thorny issues at the intersection of tech and society. This job board, now curated by Elisa Fox and expanded to regularly feature more than 500 opportunities, has quickly grown into a go-to resource for both applicants and hiring managers to understand the evolving field of Responsible Tech. We track roles across sectors, including:

Academia: Responsible Tech-oriented Faculty; University-based Research Institutes
Civil Society: Global NGOs; Non-profits; Think Tanks; Research Institutes; Philanthropic Foundations
Government: Federal; State; Local
Industry: Tech Industry; Other Industries (Finance, Textiles, Energy, Communication, Automotive, Pharmaceuticals, and more); Responsible Tech Startups; Global Consultancies

Despite the recent challenges in the hiring environment across the tech industry, there has been an uptick in available opportunities related to trust and safety and artificial intelligence, thanks to shifting priorities of many companies in the wake of the widespread availability of generative AI tools and the corresponding regulatory interest from policymakers across the globe. We expect to see early career opportunities grow as Responsible Tech departments within the industry continue to grow throughout the next year.

Our conversations with hiring managers provide insights into the skills, experiences, and educational backgrounds that are most highly sought after, and we incorporate these learnings into the advice we give to job seekers for how to best prepare themselves to become great candidates, secure these roles, and contribute to the field of Responsible Tech.

Responsible Tech Talent Pool & Matchmaking Service

All Tech Is Human offers a personalized Talent Matchmaking Service to connect hiring managers and recruiters with Responsible Tech talent within the All Tech Is Human community and our extensive network of talented individuals who are ideal for these hard-to-place roles, and we have a large Responsible Tech Talent Pool of job seekers who are interested in connecting with these employers!
Profile
Interviews
Hear from individuals in the All Tech
Is Human community on career
advice, what a better tech future looks
like, and more!
Alix
Fraser
Director, Council for Responsible
Social Media at Issue One
How did you carve out your career, and what advice would you give to others wanting a similar role?

I have carved my career with intention while enjoying the many unexpected twists, turns, and opportunities along the way. While I was in the Master’s Security Studies Program (SSP) at Georgetown, learning about national security risks, Section 230 of the Communications Decency Act, and how the extremist far-right operates online, I had an epiphany that I wanted to work in tech. More specifically, to work on the nexus of technology, policy, and national security to bolster online safety and write policies to govern new technologies. With that in mind, I pursued several roles after graduation, from busting malicious influence actors and preserving election integrity at LinkedIn to building a product safety pipeline for AI image generation at OpenAI.

The main piece of advice that I would give to anyone starting in this domain would be to focus on a values-driven career first: how will your work impact society for the better? Are you nourished by the culture and environment you are working in? Are you supported by the people around you? I do not like to perceive any opportunity as the end goal, but rather a piece of a broader journey towards career satisfaction and success. What you’re working on today should be a building block for what you do 5 years from now, but it does not have to fit into a perfect box. My desires for my career have changed over time, but a constant goal has been to work on issues at the intersection of technology and society that matter in a values-driven environment.

Within your area of practice, who still needs to be included in your field?

The area in which I practice is largely dominated by men, both in the national security domain and in the AI domain. There must be a conscious effort to bring more women and people of color to the forefront of technology. Diversity, inclusion, and belonging are not just helpful but essential in the AI safety domain, if we want AI systems to be less biased, less discriminatory, and more reflective of the global population. Similarly, representation is a national security concern, as it is how the best and the brightest minds can bring fresh perspectives into government and civil society.
How did you pave your career in the Responsible Tech field? What advice would you give to college & grad students looking to be involved in the Responsible Tech ecosystem?

I have a BS in Applied Psychology and an MS in Engineering Psychology/Human Factors Engineering. It is a technical degree situated in humanities with an emphasis on research ethics -- ensuring what we are doing provides benefit and avoids harm. I began my career in user experience research, examining people’s needs, context of use, and values. In 2016 I transitioned to research in AI and focused on the ethical risks AI can present. In 2018 I pitched and created a role to focus on ethics in AI full time.

My advice: There are loads of existing resources and research in this area. Block off at least two hours per week, every single week, to stay up-to-date on the latest research and resources.

What advice would you give to individuals looking to be involved in the Responsible Tech ecosystem?

Follow experts on social media, attend conferences and meetups, and build your network. This area is rapidly developing, and if you don’t invest in keeping up, you’ll be left behind. There is no shortcut, summary course, or CliffsNotes to learning about the vast world of ethics in technology as a foundation and ethics in AI specifically.

Where do you see the future of Responsible Tech headed?

I’m encouraged by the increased public focus on issues of racial injustice in technology in the wake of the BLM movement and COVID. There have been more discussions about how much surveillance we are comfortable with, whether it is for health or security purposes. Big tech companies are reconsidering who they sell facial recognition to and for what purposes. There are questions about the harms vs. anticipated benefits of predictive policing and whether it can ever be applied fairly. There is greater awareness of the risks of deepfakes and disinformation to our democracy and of who is responsible for keeping them in check.

What group would you like to see more active in the Responsible Tech movement and why?

The push for AI regulation by a concerned and angry society will only increase. AI regulation is already being implemented in California, Illinois, and Massachusetts, with more US states to follow. Just as the EU’s GDPR changed the way US-based companies handled the data and privacy of customers in the EU, we will see significant changes in how EU and US-based companies work following AI regulations in the EU. This is a good thing. Despite fears that regulation will harm innovation, I believe it will elevate the ethical playing field and stop the race to the bottom for profits by any means necessary.
…as well as how technological trends and advancements will impact harms against children in the future?

Stepping out of the day-to-day, to the broader perspective: success looks like every single kid who has been identified and recovered from their abusive situation, every single image or video documenting this abuse taken down and reported, and every single kid who has been reached and moved away from a dangerous situation in time as a result of the work we do in partnership with the broader child safety ecosystem.

What are some trends / growing possibilities in the future in your field?

On the other hand, we’re seeing generative AI technology get misused by bad actors to further scale harm against children. They use this technology to create AIG-CSAM (AI-generated child sexual abuse material). Victim identification is already a needle-in-the-haystack problem for law enforcement, where they have to sift through huge amounts of content to find that child in active harm’s way. Anything that adds to this haystack makes their job more difficult. Bad actors also use this technology to further re-victimization, using existing CSAM to generate more explicit images of those same children. Sextortion is another area of impact: bad actors accelerate their efforts by using GAI technology to support the content creation necessary to target a child.

…the proper authorities. There are opportunities at each stage of the lifecycle to prioritize child safety, and now is our moment to do so.
What is one major change that you would like to see in the next few years?

…coal face of products and services, making decisions every day about the position of buttons or what words to use - we need more practical guidance. Because once you get into the practicalities of what it takes to ship product, you need the problem and opportunity framed differently in order to help you do your job. 7 years on, I still see an absence of smart thinking and prototypes in the bridging between research and product. It’s full of wicked problems - like how much friction can you actually add to an interface - but very rewarding work when you uncover new insights and make progress.

What does a better tech future look like to you?

We all deserve products and services that are worth trusting. I want a future where technology embeds care. Where technology supports each of us as we live, work and play. I want more digital teams to engage in the innovation work that is Responsible Technology. To make the space for the messy, choppy, imagination work that is needed to move from where we are to a more responsible future. To be courageous and try new things. That means getting uncomfortable, challenging your preconceptions and existing ways of thinking. Because Responsible Technology work demands new ideas, conversations, and teamwork. Because processes don’t solve problems. Teams do. Thinkers, makers and doers. Multidisciplinary teams that lean into complexity, and move through that unknown territory to discover new knowledge. New ways of being. This is all so we can collectively see the world in a different way. Where technology enables meaningful outcomes that are better shared between people, organisations and society. And in doing that work, we make it possible for everyone to see that new world too. A trustworthy world that opens up new markets and new possibilities that we all benefit from. To be inspired by it.

How did you carve out your career, and what advice would you give to others wanting a similar role?

I started working in Responsible Technology very early on in my career without realising it. I was helping to start up a project and organisation called WikiHouse, which used open source and 3D manufacturing to democratise access to home building. Then through my masters at Central Saint Martins in London I started to look at themes of technology and societal change, and that led me to start my company Projects by IF. I have 2 pieces of advice for people wanting to get into the Responsible Technology field - one is to access and participate in communities and networks already embedded in this space. All Tech Is Human is a brilliant example of this. You’ll make amazing connections and find opportunities and experience through opportunities that become available. The second piece of advice is to publish your thinking; there is space for all of us, and we need more practitioners who are practically building responsible technology to share their learning and experiences. So get sharing! I look forward to reading about your work!
Tamara
Kneese
Senior Researcher and AIMLab Project
Director
How did you carve out your career, and what advice would you give to others wanting a similar role?

I started my career working with civil society organizations in Eastern Europe, the Middle East, and North Africa. In Greece, the West Bank, Morocco, and Turkey, I worked on issues including refugee integration and immigration, youth violence, community development, poverty alleviation, conflict resolution, and education. I observed firsthand how people used platforms to advance meaningful political discourse and social movements around restrictive government policies. Later, at Booz Allen Hamilton, I examined public sentiment, social movements, and disinformation using social media for the U.S. Federal Government. I witnessed how digital technologies proliferated in the hands of political and social organizers and violent extremists. This work educated me on global conversations.

At Twitter, I managed our Trust and Safety Council, a trusted partners program to support journalists and human rights defenders globally, and a research hub for the Public Policy team. My team’s work sat at the heart of global debates around online speech governance, content moderation, and trust and safety. Most recently, I have consulted with civil society organizations and companies. These have included the Carnegie Endowment for International Peace’s Partnership for Countering Influence Operations on government efforts to combat disinformation in Ukraine, the National Democratic Institute on online violence against women in politics and public life, and the Committee to Protect Journalists on a new chat-based safety initiative that delivers journalist safety information. Now, I am the Deputy Director of Strategy for the Massachusetts Executive Office of Technology Services and Security, where I advise executive agencies across the Massachusetts government around technology issues.

Within your area of practice, who still needs to be included in your field?

There are several groups that still need to be included in my field:

People from underrepresented groups: The tech policy profession remains dominated by one traditional group. People from all racial, ethnic, geographic, and religious backgrounds need to be represented in this field, so that we can develop policies that reflect everyone’s diverse needs. This includes people who are women, Muslim, African, and Southeast Asian.

People with lived experience: People whom technology directly impacts, such as those who have experienced online discrimination, harassment, and account hacking, need to be included in the tech policy conversation. Their voices are essential to ensuring that tech policy is designed to protect and empower everyone.

People with expertise in other fields: Technology policy is a complex field that requires expertise in a variety of areas, such as conflict resolution, philosophy, and sociology. People with expertise in these other fields need to be included in the tech policy conversation, so that we can develop policies that are informed by a wide range of perspectives.
Responsible AI
Responsible AI must safeguard user data and respect individual privacy. AI systems should be designed with safety and security in mind and should have mechanisms in place for human oversight and control to prevent undue reliance or inappropriate use.

Creating Responsible AI systems requires collaboration between engineers, ethicists, researchers, policymakers, and the public. Only a multidisciplinary approach ensures that AI is developed with a broad understanding of its implications.

Key Terms and Definitions (from ActiveFence, TSPA, and Digital Trust & Safety Partnership glossaries)

AI Bias: Bias in AI is the presence of unfair or discriminatory outcomes arising from the incorporation of biased data or flawed algorithms. It occurs when the AI’s predictions or decisions disproportionately favor or disadvantage certain groups, thereby replicating existing societal biases present in the training data. Addressing AI bias involves recognizing, understanding, and rectifying these disparities to ensure outcomes that are as equitable and unbiased as possible.

AGI vs ANI: Artificial General Intelligence (AGI) is the hypothetical concept of AI systems that possess general intelligence, similar to human intelligence. AGI systems would have the ability to understand, learn, and apply relevant knowledge across various domains and tasks. By contrast, Artificial Narrow Intelligence (ANI) refers to AI systems that are designed for specific tasks or narrow domains.

Generative AI: Refers to AI systems or models that can create or generate new content, such as images, music, or text, based on patterns learned from training data. See also: Foundation/Frontier Model, https://www.adalovelaceinstitute.org/resource/foundation-models-explainer/

LLMs (Large Language Models): Refers to advanced AI models that are trained on large amounts of text data and can generate human-like text responses. These models use deep learning techniques, such as transformer architectures, to understand and generate language.

Black Box AI vs Glass Box/White Box: A “black box” is a system that is so complex that its behavior cannot be explained in terms of its individual components. In AI and machine learning, the components of interest are the features, or inputs, and the parameters that the system learns from data. Although it is possible to grasp these components mathematically and understand them, the system as a whole is not accessible — hence “black box.” Glass box models, often referred to as “white box,” are the opposite of black box models. With these models, users can understand the decision-making process and trace the relationship between inputs and outputs.
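The glass-box idea from the glossary can be made concrete with a small sketch: a linear scorer is a classic glass-box model, because every prediction decomposes into per-feature contributions that a reviewer can audit. The feature names, weights, and numbers below are invented purely for illustration.

```python
def glass_box_score(features, weights, bias):
    """Score an input and return (score, contributions).

    'contributions' maps each feature name to its share of the final
    score, so the decision can be traced input by input, which is the
    kind of accounting a black-box model does not provide.
    """
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    return bias + sum(contributions.values()), contributions

# Hypothetical loan-style example; all names and numbers are made up.
weights = {"income": 0.5, "debt": -0.8}
applicant = {"income": 4.0, "debt": 2.0}

score, why = glass_box_score(applicant, weights, bias=0.1)
# score = 0.1 + (0.5 * 4.0) + (-0.8 * 2.0) = 0.5
# 'why' shows that income raised the score and debt lowered it:
# exactly the input-to-output traceability "glass box" refers to.
```

A deep neural network exposes no comparable per-input account, which is why auditors often prefer glass-box models, or pair black-box models with post-hoc explanation tools.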
Human-in-the-loop: A human operator is involved in every step of the machine learning process: human oversight, intervention, or decision-making is integrated into an automated or AI-driven process. This approach ensures that humans remain actively involved in critical tasks, allowing them to monitor, guide, and correct the system’s actions as needed.

Responsible AI: RAI involves developing and using artificial intelligence systems ethically, considering their potential impacts on society. It requires adhering to human values, legal frameworks, and ethical standards, while ensuring transparency, accountability, fairness, and privacy. The goal of responsible AI is to harness the benefits of AI while minimizing any adverse effects on individuals and society.

TESCREAL: The acronym stands for Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism. See also: Timnit Gebru.

Key Moments in Responsible AI

October 2022: The White House Blueprint for an AI Bill of Rights is released. This “Blueprint” identifies five core principles to guide and govern the development and implementation of AI systems, with particular emphasis on the unintended consequences of civil and human rights abuses. The “Blueprint” calls for safe and effective systems, algorithmic discrimination protections, and data privacy. – The White House

November 2022: OpenAI releases GPT-3.5.

February 2023: A reporter’s unsettling conversation with Bing Chat implies there is still work to do. – The New York Times

March 2023: OpenAI releases GPT-4, its largest LLM. GPT-4 is publicly available via the paid ChatGPT Plus and OpenAI’s API. GPT-4 is a multimodal model, accepting image- and text-based input.

March 2023: The Future of Life Institute published a letter calling for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” The Future of Life Institute focuses on mitigating long-term “existential” risks to humanity, such as superintelligent AI, which they argue could lead to extreme automation of jobs and even human obsolescence. The letter was signed by more than 20,000 people, including academic AI researchers as well as industry CEOs. The letter has been criticized for diverting attention from immediate societal risks such as algorithmic bias and the lack of a transparency requirement for training data. The pause did not occur. – Future of Life Institute

May 2023: Statement on AI Risk released. – Center for AI Safety
August 2023: AI experts inform Congress about the advantages and drawbacks of artificial intelligence, as well as provide insights on how to effectively regulate this swiftly advancing technology. – Bloomberg Law

August 2023: The Federal Election Commission begins a process to potentially regulate AI-generated deepfakes in political ads ahead of the 2024 election. – Federal Election Commission

Global Perspectives on AI

AI has become increasingly intertwined with our daily lives, influencing what we see, where we go, what we buy, and even how we vote. Governments and local legislatures are working to create actionable laws and regulatory practices as companies increase the availability of various generative AI models to the general public. The following is an international list of the current major happenings and trends surrounding the use of artificial intelligence.

Global: The G7 Hiroshima Leaders' Communiqué was published with a reference to a commitment to Responsible AI. In May, leaders from the G7 countries announced they will be setting up the Hiroshima AI Process this year, in collaboration with the OECD. The nations are set to discuss AI governance, IP rights, and transparency.

North America: The United Nations Security Council held the first-ever session on artificial intelligence. The Council emphasized the risks AI poses to international peace and discussed how to mitigate potential security implications. – The New York Times

The Canadian government sought input on a voluntary code of practice for generative AI, aiming to ensure that participating firms adopt safety measures, testing protocols, and disclosure practices. – VentureBeat

Europe: The European Parliament passed its version of the AI Act, triggering the final stage of the Union's regulatory process. The EU is expected to vote through and implement the law in early 2024. The Act sets out a comprehensive framework for regulating the development and use of AI in the EU. – EU AI Act

149 civil society organizations called on EU institutions to put people first in the AI Act. – AlgorithmWatch

The National Risk Register officially classified AI as a long-term security threat to the UK's safety and critical systems. – CSO

Asia: China unveiled new rules governing AI. Beijing's controls on internet content and U.S. curbs on semiconductor exports to the world's second-largest economy are thought to hamper progress. – Reuters

India's telecom regulator, Trai, recommends an independent statutory authority, the Artificial Intelligence and Data Authority of India (AIDAI), to regulate responsible AI use across sectors. – Mint

Africa: Microsoft, in collaboration with the Data Scientists Network (Data Science Nigeria) and the Federal Government of Nigeria (FGN), hosted a responsible AI workshop in Nigeria on the usage of the new Responsible AI Dashboards in decision making. – Technext

A report called Mankind and AI was released as part of Africa Tech Radio, an open research forum in which African researchers from across the continent come together to better understand the African landscape. – Africa Tech Radio

Nigeria calls on experts to support the launch of a National AI Strategy. – Digwatch

Launch of the Centre for Artificial Intelligence at the Malawi University of Science and Technology. – Malawi University of Science and Technology

Latin America and the Caribbean: The Caribbean Artificial Intelligence Initiative is launched. – AI 4 Caribbean

The first webinar organized by UNESCO's Ibero-American Business Council on Artificial Intelligence and Ethics was held. – UNESCO

Oceania: The Australian government aligns and updates its AI strategy. – Australian Institute of International Affairs

New Zealand updates expectations on the use of Generative AI. – RegulationAsia

The Way Forward

More codified enforcement of AI safety and protocols for businesses: Companies may not take into account the implications of their software's impact on consumers, including youth. Social media platforms and big tech have designed their software with large language models in ways that have increased privacy violations, racial bias, manipulation, and pressuring and deceptive marketing tactics to turn a profit using an individual's data. State and federal governments must act to uphold strict standards.

Increased collaboration globally: Big tech is dominated by Western culture, and we're seeing this hold true for the development of AI as well. Responsible AI needs to address biases, training, red-teaming, and other aspects with input from diverse groups, especially the Global South. An international body could set standards so that AI and machine learning develop a globally integrated understanding instead of a biased perspective.

Shift in focus from future harms to present-day harms: The AI doomsday talk is not only a distraction from present-day AI harms, like bias in loan rates, but it's also creating an AI arms race because (as the logic goes) if you don't rush to make or control the next advancement in AI, someone else will. We'd like to see AI slowed down so we can be more thoughtful about its uses and build tools that don't perpetuate bias, violate privacy rights, or erode democracy.
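The human-in-the-loop arrangement defined in the key terms above is often implemented as a confidence-threshold review queue: the system acts autonomously only when the model is highly confident, and routes everything else to a person. A minimal Python sketch of that routing; the 0.95 threshold, item names, and scores are illustrative assumptions, not from the guide:

```python
# Illustrative human-in-the-loop routing: the system acts on its own
# only for high-confidence predictions; all borderline cases are
# queued for a human reviewer to decide.
from dataclasses import dataclass, field


@dataclass
class ReviewQueue:
    """Items a human must inspect before any action is taken."""
    items: list = field(default_factory=list)

    def submit(self, item_id: str, score: float) -> None:
        self.items.append((item_id, score))


def route(item_id: str, score: float, queue: ReviewQueue,
          auto_threshold: float = 0.95) -> str:
    """Return the decision path for one model prediction."""
    if score >= auto_threshold:
        return "auto_action"       # model is confident: act automatically
    queue.submit(item_id, score)   # otherwise: a person decides
    return "human_review"


queue = ReviewQueue()
decisions = [route(i, s, queue)
             for i, s in [("a", 0.99), ("b", 0.60), ("c", 0.97)]]
# "a" and "c" are auto-actioned; "b" waits in the human review queue.
```

In practice the threshold is tuned so that reviewers see exactly the cases where model and human judgment are most likely to diverge.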
ALL TECH IS HUMAN | 2023
Responsible AI
Visit our Responsible Tech Job Board for the latest listings and freely
join our Responsible Tech Talent Pool to be matched up with potential
opportunities.
Are you an employer looking for Responsible Tech talent? Reach out to
rebekah@alltechishuman.org and learn more here.
Although Trust and Safety has existed as long as Internet services have been offered, the field has grown beyond niche communities, is leveraged at large companies, and has grown rapidly in the past few years. Organizations like the Trust and Safety Professional Association (founded in 2020), TS Collective, the Digital Trust & Safety Partnership, and the Integrity Institute were formed to support T&S professionals and improve the public's understanding of T&S. These organizations' existence serves as a milestone for the professionalization of the Trust and Safety field.

Key Terms and Definitions
(From TSPA, Digital Trust & Safety Partnership, and ActiveFence)

Hashing: Organizations hash data to create databases of hashes related to malicious content. Platforms can then compare image hashes from their content to hashes of known malicious content, without exposing human moderators to potentially harmful content.

Hate Speech: Any speech or content that incites, discriminates, justifies hatred, or promotes violence against an individual or group.

Impersonation: Apps or websites that are created to resemble existing apps or services in order to gain access to personal data, passwords, or other sensitive data. Impersonation of individuals is the creation of fake accounts, using the target's identifying information and/or images, in order to cause harm to that individual.
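The hash-comparison workflow described above can be sketched in a few lines of Python. Production systems use perceptual hashes (such as PhotoDNA or PDQ) that survive resizing and re-encoding; this sketch uses an exact SHA-256 digest purely to show the lookup flow, and the byte strings are invented placeholders:

```python
import hashlib


def sha256_hex(data: bytes) -> str:
    """Exact-match digest; real systems use perceptual hashes
    (e.g. PhotoDNA, PDQ) that tolerate small edits to the image."""
    return hashlib.sha256(data).hexdigest()


# Database of hashes of known malicious content, typically shared
# across the industry so moderators never view the material itself.
# The byte strings here are invented placeholders.
known_bad_hashes = {sha256_hex(b"known-harmful-image-bytes")}


def is_known_malicious(uploaded: bytes) -> bool:
    """Compare an upload's hash against the known-bad database."""
    return sha256_hex(uploaded) in known_bad_hashes
```

A real deployment would pull the known-hash database from an industry source and log matches for human confirmation rather than acting fully automatically.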
Recidivism: The evasion of suspensions or bans, such as the creation of new accounts after a previous account ban.

Reverse Engineering (Red Team): A process to replicate a system, process, device, or software, often used by cybersecurity teams.

Risk Assessment: An analysis of the types, potential severity, and likelihood of harms of a product, service, or feature.

Sextortion: The act of seeking financial gains, favors, or private content by threatening to share sexually intimate information about a target.

Terms of Service/Terms of Use: Legal agreements between users and service providers under which the user can utilize services.

Transparency Reports: Issued by a service to disclose metrics and insights about its approach to salient risks and relevant enforcement practices, including how it enforced its policies and how it handled requests to remove or restrict user content. They often detail government requests for user records.

True Positive/True Negative: A true positive is content correctly flagged as violative; a true negative is content correctly identified as non-violative.

Virality: When content gains high, rapid, and wide reach amongst the users of a service or multiple services. This may occur within seconds to hours on some platforms, like social media.

Key Moments in Trust & Safety

Dismantling Trust and Safety at Twitter: The X (formerly Twitter) Trust and Safety Council was formed in 2016. It consisted of volunteers from several advisory groups that addressed issues like online safety, harassment, human and digital rights, suicide prevention, mental health, child sexual exploitation, and dehumanization. After Elon Musk's acquisition of Twitter in October 2022, many employees responsible for addressing prohibited content and misinformation were laid off. Three key members of the Trust and Safety Council, Eirliani Abdul Rahman, Anne Collier, and Lesley Podesta, resigned in December 2022. They were disappointed in new leadership's disregard for T&S, including Twitter's move to rely heavily on automated content moderation, which "can only go so far in protecting users from ever-evolving abuse and hate speech before detectable patterns have developed." As Musk advocated for free speech, many have noted a rise in misinformation, disinformation, harassment, and hate speech on the platform. – Net Family News

Reddit Moderator Protests: Reddit's API has been open for developers since 2008. In April 2023, Reddit announced it would charge for its new API terms. This move was intended to monetize Reddit's data and prevent the platform's content from being used to train large language models (LLMs) for free. Christian Selig, the developer of a popular Reddit client for iOS called Apollo, announced he would shut it down due to
the $20 million cost to keep the app running under the new API terms. Other third-party developers of Reddit clients shut down in June. Thousands of subreddits went dark in protest. Reddit threatened and removed some moderators for restricting access to subreddits in protest, on the grounds that the protests violated the Code of Conduct. The removed moderators were eventually reinstated. – The Verge

Volokh v. James: New York's Online Hate Speech Law was slated to take effect in December 2022. The law required social media networks to develop and publish a policy describing how they will address visitor complaints of hate speech, create a "clear and easily accessible mechanism" for visitors to complain about perceived hate speech on the site, and inform complainants of how the matter is being handled. The law originated, in part, as a response to the 2022 mass shooting in Buffalo, NY. Prior to the shooting, the shooter wrote a manifesto describing himself as an ethno-nationalist and supporter of white supremacy motivated to commit acts of political violence. Eugene Volokh, along with Rumble (a video platform intended to be a YouTube alternative), filed a complaint in federal court seeking to stop New York's Online Hate Speech Law. The plaintiffs argued the law infringes upon the First Amendment of the US Constitution, which prevents the government from making laws that abridge the freedom of speech. The plaintiffs also argued against upholding the state's definition of what constitutes hate speech as law, declining to weigh in on the debate over the definition of hate speech; instead, they argued users should be able to communicate freely. The court ultimately blocked the law in February 2023, and similar debates on content moderation and free speech persist across the U.S.

The UK's Online Safety Bill: The upcoming bill has several requirements to make tech companies more responsible for content on their platforms, intended to keep online users safe. Requirements include: preventing the spread of illegal content by requiring organizations to remove it as soon as they see it; age-verification processes to access certain websites (e.g. pornography); securing adults from 'legal but harmful' content (e.g. abuse, harassment, self-harm and eating disorders) by removing such content from their platforms; and forcing the biggest platforms to take action against paid-for scam adverts published or hosted on their services. This legislation has received backlash from those who fear freedom of expression and user privacy will be threatened. The content scanning and surveillance required by the bill pose threats to end-to-end encrypted (E2EE) communication services such as WhatsApp and Signal. E2EE is intended to prevent data from being read or modified by anyone other than the sender and recipient, so companies that provide E2EE are unable to hand over the texts of their customers' messages to the authorities. Security and privacy researchers argue that "nobody but us" cryptographic backdoors have historically failed and created vulnerabilities for attackers to
exploit. WhatsApp and other tech platforms have indicated they may leave the UK if forced to weaken encryption for the bill. They also argue that AI models scanning people's messages for CSAM will likely produce false positives, subjecting innocent users to having their private messages widely viewed and to false accusations of viewing CSAM. – TechRadar

The Way Forward

Change should come in three categories: more transparency and inclusion in key partnership and collaboration efforts in the industry, building technology-neutral and future-proof policy and regulation, and ensuring intergovernmental coordination on regulations applied to global firms that operate in multiple jurisdictions.

Key partnership and collaboration efforts in the industry (and inclusion of youth): The safety and wellbeing of minors and younger audiences is a top priority for platforms as well as regulators, and it's important to give youth a voice while companies build products and policies around the platforms they interact with. Some examples include the recently announced TikTok Youth Council, Meta's Safety Advisory Board, and the Co-design program, Youth Advisors, and Youth and Families Advisory Committee of YouTube, to name a few. We'd like to see this become common practice in the industry, and to see companies offer knowledge sharing and best practices around the inclusion of youth.

Technology-neutral and future-proof policy and regulation: With the rise of ChatGPT and the general buzz around generative AI, there has been little to no consensus on a) whether existing laws and regulations cover this new area and b) how to regulate or create guardrails around usage where coverage is unclear. Generative AI is not the first emerging tech and won't be the last, hence we need more technology-neutral and future-proof guardrails to evaluate and prevent potential harm.

Standardized regulations with minimum-to-no deviation: With the increased focus on efforts to regulate social platforms - specifically within the EU (e.g. Digital Services Act, Online Safety Bill) - trust and safety practices will become more standardized and formalized across the industry. That said, we don't always see coordination between governments in how they approach platforms and their risk assessment and prevention efforts. Given the delicate balance between innovation and ensuring online safety, it'd be crucial to have good coordination between different governments in their approach to regulating platforms.
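The true positive/true negative distinction defined in the key terms earlier is the basis of common moderation-quality metrics: precision (what share of flagged content was actually violative) and recall (what share of violative content was caught). A small Python sketch, with the records invented purely for illustration:

```python
def moderation_metrics(records):
    """records: (flagged, actually_violative) boolean pairs.
    Returns the precision and recall of the flagging system."""
    tp = sum(1 for flagged, bad in records if flagged and bad)      # true positives
    fp = sum(1 for flagged, bad in records if flagged and not bad)  # false positives
    fn = sum(1 for flagged, bad in records if not flagged and bad)  # missed violations
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall


# Invented example: 3 items flagged (2 correctly), 1 violative item missed.
records = [(True, True), (True, True), (True, False),
           (False, True), (False, False)]
precision, recall = moderation_metrics(records)
# precision = 2/3, recall = 2/3
```

Transparency reports typically publish the underlying counts, letting outsiders compute exactly these kinds of figures.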
Trust & Safety
greater control over their data.

Digital footprint – The trail of data left behind by a person's online activities, including social media interactions, website visits, and other online actions.

Digital rights management – Technologies or strategies used by content creators or distributors to control the access, usage, and distribution of digital content.

Right to be Forgotten (RTBF) – A legal concept that allows individuals to request the removal of certain online information about them from search engine results and other online platforms.

Surveillance capitalism – A concept in political economics that denotes the widespread collection and commodification of personal data by corporations.

Echo chamber – An environment where a person only encounters information or opinions that reflect and reinforce their own.

Civic hacking – Collaborative and often grassroots efforts to use technology to address civic issues and create apps, tools, and platforms that benefit communities.

Election integrity – Ensuring the security, accuracy, and fairness of digital voting systems to maintain the trust and legitimacy of electoral processes.

Predictive policing – The use of algorithms, predictive analytics, and other techniques in law enforcement to identify potential criminal suspects and activities.

Techplomacy – Coined by the Danish government to define the connection between governments and tech companies.
utilize cloud services while adhering to the appropriate data privacy and security measures. The policy also encourages universal access to broadband, eliminates regulatory barriers to foster competition in the data and cloud sector, promotes ICT research and development, and creates alignment with the Fourth Industrial Revolution (4IR), OECD Framework, and EU standards.

Uganda – The Income Tax Bill amendment inserts a new section that levies a 5% fee on the revenue of foreign providers of digital communications services operating in Uganda. It taxes foreign providers of digital services in Uganda, such as Meta, Twitter, Amazon, or any other foreign-owned company offering digital services, to increase tax collections for the country's burgeoning digital economy.

Asia
China – Management Measures for Generative Artificial Intelligence Services: The Cyberspace Administration of China introduced draft measures listing rules that generative AI services have to follow, including the type of content these products are allowed to generate, within the framework set up by China's national trifecta of data laws: the Cyber Security Law, Data Security Law, and Personal Information Protection Law.

India – The Digital Personal Data Protection Act requires companies to get user consent before collecting personal data. It allows the government to limit the transfer of data outside India and penalizes companies for violating the rules.

Singapore – Enabling Service Hubs: A civic tech initiative to strengthen support for persons with disabilities and their caregivers within the community. Offers residents courses on daily living and digital skills.

Europe
EU – The Digital Services Act (DSA) was established to better protect consumers and their rights online, establish transparency and a clear accountability framework for online platforms, and foster innovation, growth, and competitiveness within the single market. By 17 February 2024, online platforms and search engines will be required to publish their average number of monthly users in the EU.

EU – The Digital Markets Act (DMA) establishes specific criteria for qualifying a large online platform as a "gatekeeper". The DMA starts on 2 May 2023. By 3 July 2023, gatekeepers need to notify their "core platform services" to the Commission.

EU – The Artificial Intelligence Act is a proposed EU regulation targeted at regulating AI systems in the EU; it aims to maintain trust in AI systems and to create an ecosystem of excellence for AI.

South America
Brazil – The Fake News Law (Bill 2630) requires internet companies, search engines, and social messaging services to find and report illegal
Tech & Democracy
material. Recently, Brazil's government and judiciary objected to big tech firms campaigning against the bill, alleging undue interference in the debate in Congress.

Chile – "Chile takes first steps towards AI regulation": The Chilean parliament is engaging in discussions for a proposed bill that would address legal and ethical considerations in AI development and usage, aiming to strike a balance between protecting citizens' rights and promoting the accessibility and advancement of these technologies.

Costa Rica – "Lawmakers use ChatGPT to draft AI regulation bill": Costa Rican legislators asked ChatGPT to draft legislation aimed at governing AI systems within the country. The generated bill advocates for the establishment of a dedicated institution responsible for overseeing AI regulation, guided by principles such as accountability, explainability, bias prevention, and safeguarding human rights.

North America
Canada – Digital Services Tax Act: The Digital Services Tax Act would impose a 3% tax on revenue for large tech companies and online marketplaces - companies like Walmart, Amazon, and Meta.

US – The National AI Commission Act (H.R.4223) was introduced to establish an artificial intelligence commission and for other purposes.

US – The AI Disclosure Act of 2023 (H.R.3831) would require that any content produced by AI contain the phrase: "DISCLAIMER: this output has been generated by artificial intelligence."

US – The REAL Political Advertisements Act (S.1596) provides further transparency and accountability for the use of AI-generated content in political advertisements by requiring a disclaimer that AI was used.

US – The Digital Platform Commission Act of 2023 (S.1671) establishes a commission to regulate digital platforms.

Tech & Democracy Information Hubs

Tech Policy
Data & Society Research Library
MIT Internet Policy Research Initiative Research – Produces policy research in a variety of technical fields, including cybersecurity, AI policy, privacy, advanced network architectures, decentralized web, and app development.
AI Ethicist – A global repository of reference and research material for research on AI ethics, responsible governance, and the social impacts of AI.

Digital Platforms and Political Participation
Center for an Informed Public at the University of Washington Resources – Research, workshops, and talks on misinformation and disinformation.
UNC Center for Information, Technology, and Public Life Research – Information on the Political and Civic Applications Division (PCAD), which develops software to support research into information environments; critical disinformation studies; and resources tracking how platform policies, state laws, and ethics shape campaign communications.
Harvard Shorenstein Center on Media, Politics, & Public Policy
HKS Misinformation Review
Media Manipulation Casebook

Civic Tech
Center for Civic Design Tools – Tools and resources designed through experiences working with election offices across the US. They are free to use and adapt. Election officials should check state law to see if they're able to use them.
Code for America Brigade
Code For All's international working groups
Open Data Handbook – Guides, case studies and resources for government & civil society on the "what, why & how" of open data.

Consumer Rights
Consumers International, Digital Rights – A global resource for policy-makers, regulators, the tech industry and consumers.
Deceptive Design Hall of Shame – Hundreds of examples of deceptive patterns used by companies around the world.
World Bank Digital Regulation Platform, Consumer Affairs

Insights from the Tech & Democracy Report

"I was opened up to my work in tech policy upon joining Pollicy after the completion of my fellowship program and then becoming a data and digital rights researcher there. I have since co-led Pollicy's AI work. My advice to individuals looking in this field of work — especially young people across the African continent — would be to interest themselves in fellowships and other such career-shaping programs by organizations in the space, both on the continent and elsewhere." – Bobina Zulfa, Digital Rights Researcher at Pollicy

"Data protection issues are now squarely societal and human rights issues. There is a societal impact on every sector that relies on data, affecting the future of healthcare, transportation, and marketing – the list goes on. Many of these impacts will extend to the future of free speech and, ultimately, our democracy." – Jules Polonetsky, CEO, Future of Privacy Forum

"I built my career by simply doing three things, which I like to call the 3C framework: Consume, Create and Collaborate. When I was just starting out, my first line of action was to consume as much content as I could about tech policy. As you consume more content, you begin to identify gaps and ignite a burning desire to fill those gaps with your own content. After consuming and creating, you will naturally begin receiving collaboration requests — which helps to broaden your reach, letting more people know about you and what you do." – Faith Obafemi, Data Protection and Privacy Writer, Captain Compliance

Check out responsibletechguide.com for more on Tech & Democracy.
Key Terms and Definitions

Accessibility: Accessibility is about ensuring that digital technology is usable by people with disabilities. Checklists, standards, and laws are important tools to help achieve accessibility — yet sometimes they get the focus instead of the fundamental goal of accessibility: meeting the needs of disabled people in the real world. Accessibility is an important aspect of diversity, equity, and inclusion (DEI).

Anti-racist Technology: Structural racism is a system in which public policies, institutional practices, cultural representations, and other norms work in mutually reinforcing ways to perpetuate racial group inequity. Anti-racist Technology is designed to combat structural racism, to mitigate the harms (current and inherited) it causes and the access, opportunities, and rights it denies, and to actively seek to generate racial equity as part of its design.

Assistive Technology: Assistive technology is technology used by individuals with disabilities in order to perform or improve functions that might otherwise be difficult or impossible. It can include mobility devices such as walkers and wheelchairs, as well as hardware, software, and peripherals that assist people with disabilities in accessing computers or other information technologies.

Civic Tech: Civic Tech is technology that enables greater participation in government or otherwise assists the government in delivering citizen services and strengthening ties with the public.

Consentful Technology: Consentful Technologies are digital applications and spaces that are built with consent (defined above) at their core, and that support the self-determination of people who use and are affected by these technologies.

Cybersecurity (also Public Interest Cybersecurity, Cyber Civil Defense): Ensures confidentiality, integrity, and availability of information, and reduces the risk of cyberattacks. When applied to public interest organizations such as hospitals, city governments, and non-profits, which serve the public and typically lack the capacity to defend against cyber criminals or politically motivated attacks, we get Public Interest Cybersecurity or Cyber Civil Defense.

Deceptive Design Patterns: Deceptive Design Patterns are tricks used by websites and apps to get you to do things that you didn't mean to, or that you might not otherwise do, like buy things, sign up for services, or switch your settings.

GovTech: GovTech is the technology used to deliver public sector services, as well as the processes involved in modernizing them (aka digital transformation), with an emphasis on citizen-centric, universally accessible public services and a whole-of-government approach to digital government transformation.
Inclusive Design: Inclusive design describes methodologies to create products that understand and enable people of all backgrounds and abilities. Inclusive design may address accessibility, age, culture, economic situation, education, gender, geographic location, language, and race. The focus is on fulfilling as many user needs as possible, not just as many users as possible.

Public Interest Technology: Public Interest Technology (PIT) is a broad and emergent field that is synonymous with Responsible Tech. Many people have created definitions for what PIT is, and most definitions agree on this idea: PIT is technology created for the public good, rather than for individual or commercial gain. Several definitions of PIT also emphasize that PIT should aim for equity, to ensure that PIT is inclusive and accessible to all.

Some PIT stakeholders in the U.S. are optimistic about the future of civic tech. This is due to a number of factors: recently laid-off private sector tech workers showing great enthusiasm for public sector tech jobs; governments making improvements in building up their technical capacity; and governments becoming more human-centered in their approach to technology. (See Why 2023 could be a year for civic-tech optimism and To Build A Better Internet, Put Laid Off Tech Workers Back to Work in the Public Interest.)

Some of the most pressing ethical issues in technology today are: misuse of personal information, misinformation and deepfakes, lack of oversight and acceptance of responsibility, use of AI, and autonomous technology. (See 5 Ethical Issues in Technology to Watch for in 2023.) PIT is not immune to these ethical issues. For example,
when governments rely on technology created and maintained by external consultancies, it becomes more difficult to ensure that citizens' personal data is kept private and secure (representing the ethical issue of lack of oversight and acceptance of responsibility).

Consumer Reports is creating an app called Permission Slip, which will provide people more control over how for-profit entities use their consumer data.

Organizations like TechCongress and Presidential Innovation Fellows are helping influence tech policy and government technology by placing technologists as fellows in the offices of federal policymakers and government agencies.

The Way Forward

Build up internal technical capacity within governments so that governments do not need to rely on external experts and piecemeal projects to improve their technology. One way to make this a reality is by ensuring that pay for these government roles is competitive with similar roles in the private sector. (See In Public Service, Technology Is Only as Good or Bad as We Are.)

Educate students from the broad range of fields that contribute to PIT to be prepared to think and work in PIT. An existing example is the Public Interest Technology University Network (PIT-UN).

When creating a new policy or piece of legislation, think all the way down to the end user and how that policy or legislation will play out. Ensure there is a real plan for funding and implementation, as well as feedback loops for collecting data and adjusting course according to that data. (See In Public Service, Technology Is Only as Good or Bad as We Are.)

Get the broader public informed and involved in discussions on tech policy, to ensure that decisions truly reflect the public interest.

Incorporate AI with caution. Continuously educate ourselves about what AI can and cannot do. When AI is used, monitor for errors.

Check out responsibletechguide.com for more on Public Interest Tech.
Visit our Responsible Tech Job Board for the latest listings and freely
join our Responsible Tech Talent Pool to be matched up with potential
opportunities.
Are you an employer looking for Responsible Tech talent? Reach out to
rebekah@alltechishuman.org and learn more here.
wellbeing and prepares them for success in the digital age.

Youth are empowered to advocate for their digital rights, and government officials across the world are more resolved to increase digital well-being for youth. Illinois passed the first US law aimed at protecting child influencers, which will “entitle influencers under the age of 16 to a percentage of earnings based on how often they appear on video blogs or online content,” discouraging youth exploitation (Teen Vogue).

The pursuit of digital well-being for youth has made progress, though with differing outcomes. In 2023, the US Congress reintroduced the Kids Online Safety Act (“KOSA”) as a measure to protect children online by increasing monitoring and limiting access to sensitive information, after revisions proposed by over 100 civil organizations. However, advocates for online safety continue to highlight potential dangers. The Electronic Frontier Foundation says KOSA puts the “tools of censorship in the hands of state attorneys general, and would greatly endanger the rights, and safety, of young people online” (EFF). Also see the Children's Online Privacy Protection Rule (“COPPA”), which protects children's privacy by giving parents tools to control what information is collected from their children online (FTC). The “European strategy” to foster a safe environment for youth is Better Internet for Kids (BIK+), which aims to “improve age-appropriate digital services and to ensure that every child is protected, empowered and respected online” and incorporates the European Parliament Resolution on children's rights.

There still needs to be a human-centered, people-led approach that considers the safety and rights of all users, including youth. In fact, youth are taking the lead in their own advocacy with organizations like Design It For Us, a US-based “coalition of young activists and organizations fighting for safer social media and online platforms for kids, teens, and young adults” (Design It For Us). In Africa, the African Union Youth Envoy uses Google Digital Skills Campaigns to encourage Africa's growing youth population who want to position themselves to benefit from Africa's digital revolution and establish a strong digital economy, “part of the larger African Union’s Digital Transformation campaign, which seeks to reach 100,000 young people with digital skills for the creation of jobs by 2024 through a country acceleration strategy across the African continent” (Unlocking Africa's Potential). There are a number of initiatives by youth-led and established institutions focused on youth digital well-being.

The Way Forward

EdTech: Technology is a fundamental part of modern education. Equipping young people with tech skills prepares them for the job market of the future, where digital literacy is essential. Familiarity with technology also enhances their problem-solving, critical thinking, and creativity skills. Technology offers tools and platforms for creative expression: young people can explore various forms of digital art, music production, video creation, and more. Encouraging their creativity in these digital mediums can foster innovation and self-expression.

Digital Literacy: By fostering tech literacy among youth, we empower them to use technology effectively, to be active participants in shaping their digital experiences, and ultimately to lead discussions about digital ethics, privacy, cybersecurity, and the social implications of emerging technologies. Youth may become creators of content, advocates for positive online communities, and contributors to the digital landscape in meaningful ways. Learning how to analyze information, evaluate sources, and make informed decisions in the digital age is crucial for youth to navigate a rapidly changing world. Understanding technology and its impact on society helps young people become responsible global citizens.

Advocacy: The decisions made in the tech industry today will have far-reaching consequences for the future. Youth advocacy ensures that the voices and concerns of future generations are heard and considered in policy-making and technology development. Through youth advocacy efforts, we harness the unique perspectives, digital fluency, and passion of young people to drive positive change in the tech industry, ensuring that technology serves the best interests of society as a whole.

Bridging Generational Gaps: Fostering meaningful intergenerational collaboration in responsible tech can help bridge the gap in tech understanding and adoption. Young people can facilitate communication between older generations and younger ones, fostering collaboration and knowledge sharing.

Civic Engagement: Youth advocacy on tech issues is needed to encourage young people to engage in civic activities and become informed and active participants in shaping government policies related to technology.

Diversity, Equity, Inclusion, and Belonging: Involving new voices in responsible tech promotes diversity and inclusivity in the tech industry, which has historically lacked representation from underrepresented groups. Advocacy efforts can help ensure that technology is developed with a broader range of perspectives and experiences in mind.

Sustainability and Innovation: Involving a wide range of perspectives in the development and deployment of technologies and policies can serve as a catalyst for innovation in addressing emerging challenges. With tech's significant impact on the environment, youth advocacy should raise awareness about sustainable practices and alternatives, and advocate for ethically minded solutions.
CONTRIBUTORS

Jeremiah Azurin; Tech & Democracy; Responsible AI
Jillian Drummond; Youth, Tech & Wellbeing; Responsible AI
Julie Lee; Responsible AI
Katleho Mokoena; Responsible AI; LinkedIn
Kendrea Beers; Responsible AI
Kim Fernandes; Responsible AI; LinkedIn
Kimberly Wright; Public Interest Technology, Responsible AI, UX Design; LinkedIn
Kwynn Gonzalez-Pons; Youth, Tech & Wellbeing; Responsible AI
Leah Farrar; Tech & Democracy, Responsible AI; Editor
Lindsey Washburn; Responsible AI; LinkedIn
Lisa D. Dance; UX Design; LinkedIn; Website
Liz Oh; Responsible AI
Lyn Muldrow; Youth, Tech & Wellbeing, Responsible AI
Maira Elahi; Responsible AI, Tech & Democracy, Youth, Tech & Wellbeing; LinkedIn; Website
Mari Cairns; Responsible AI; LinkedIn
Maria Filippelli; Tech & Democracy
Matt Rosenbaum; Tech & Democracy; LinkedIn
Mia Casesa; Responsible AI
Michelle Mol; Responsible AI
Mohsen Monji; Responsible AI
Natalia Kucirkova; Youth, Tech & Wellbeing; LinkedIn
Nicola Brown; Youth, Tech & Wellbeing
Nikki Love Kingman; Public Interest Technology, Tech & Democracy
Patricia Liebesny Broilo; Youth, Tech & Wellbeing; LinkedIn
Priscilla Wahome; Youth, Tech & Wellbeing
Raashee Gupta Erry; Responsible AI
Rebecca Scott Thein; Tech & Democracy; LinkedIn
Renata Mares; Tech & Democracy; Responsible AI, Public Interest Technology, Trust & Safety; LinkedIn
Sree Lathika; Responsible AI
Susmitha Tutta; Tech & Democracy; Responsible AI
Tracy Kadessa; Responsible AI
Urba Mandrekar; Tech & Democracy; Youth, Tech & Wellbeing; Responsible AI
Panel Highlight: How To Build A Career in Responsible Tech
All Tech Is Human recently held a panel discussion featuring Danielle
Sutton, Kristina Francis, Ginny Fahs, and Flynn Coleman, moderated by
Executive Director Rebekah Tweed. This was part of our Responsible
Tech Mixer and Speaker Series, which brings together 200 people
each month in NYC to build community. This panel was held at
Betaworks on July 26, 2023.
In the coming pages, you will find high-level overviews of each panelist
and key insights from the discussion. You can see all the videos from
our series here.
Danielle Sutton, Senior Consultant at Deloitte and Trustworthy AI Strategist
Panel Highlight: Technology Is Infrastructure
All Tech Is Human recently held a panel discussion featuring Dr. Saima
Akhtar, Matt Mitchell, Claire Liu-Yang, and Lyel Resner, moderated by
Program Associate Elisa Fox. This was part of our Responsible Tech
Mixer and Speaker Series, which brings together 200 people each
month in NYC to build community. This panel was held at Betaworks
on August 24, 2023.
In the coming pages you will find high-level overviews of each panelist
and key insights from the discussion.
Claire Liu-Yang, Chief of Staff at Silicon Harlem
01
Gain confidence and a better understanding of the ecosystem
There are thousands of individuals just like you looking to plug into this community. Treat it as a high-level commitment: learn about the ecosystem, read relevant books and articles, expand your network, and find mentors.
02
Play an active role in responsible tech
After gaining a better understanding of the ecosystem, attend a responsible tech gathering to meet others in the field. Participate in the many activities of the hundreds of organizations working toward a better tech future.
03
Stay involved with All Tech Is Human
All Tech Is Human’s Team
Stay involved with All Tech Is Human and the responsible tech community!
Get in touch
Stay in touch with All Tech Is Human by joining our newsletter and
Slack community, attending our livestreams, and meeting in person at
our summits and mixers!
Contact: hello@alltechishuman.org.