I’ll be honest, this next post is daunting. Technology is fundamentally reshaping society, remaking how we connect with each other and deeply affecting how we live in both positive and negative ways. In this chapter, I’ll focus on generative artificial intelligence as a remarkable new capability that is accelerating the pace of change and deepening the challenges outlined previously. AI can analyze enormous quantities of information quickly and is shifting our understanding of what is possible, raising fundamental questions about how we embrace what it unlocks while safeguarding ourselves against the risks it poses.
As a corollary, we’ll also look at the role of technology companies and their leaders, who have outsized influence during this time, and explore how we got here and why it is so difficult to put guardrails in place to ensure AI is in service to people and the planet. However, it’s worth calling out that the conversations that informed these observations are six to twelve months old, and daily developments continue to modify our understanding of both the potential of AI and the regulatory and economic environment in which it operates. I hope to dig in further on this topic in future posts, but this will give us a good jumping-off point.
Technology & Generative AI
Artificial intelligence and generative AI may be the most important technology of any lifetime.
- Marc Benioff, chair, CEO, and co-founder, Salesforce
By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.
- Eliezer Yudkowsky, American computer scientist and researcher
Generative artificial intelligence (AI) is evolving at a breakneck pace, creating near-term efficiencies and the potential for longer-term scientific and technological breakthroughs. From its earliest days, AI has changed the way people work, from organizing and streamlining contracts across a global enterprise to accelerating drug discovery to writing and shaping consulting proposals. One nonprofit described using AI to determine which clients to work with and where to expand its services, completing the work in a week at no cost to the organization rather than spending three to four months and $60,000 on a consulting project. Another colleague observed that AI is game-changing for young job seekers in Africa with strong core skills but weak written communication. A democracy working group of media and storytelling professionals created bespoke GPTs to develop, coordinate, and roll out strategies across their members to help ensure safe and fair elections. Any organization or person with proprietary data sets and clarity around values and goals is well positioned to use AI to drive efficiencies and groundbreaking insights.
These are remarkable benefits, and yet there are deep concerns about allowing AI to advance without adequate guardrails, given the significant economic, societal, political, and geopolitical implications. The knowledge and ability to build and implement new technology are already growing exponentially, with standalone innovations becoming features of other products within months. AI-driven efficiencies are leading to more layoffs and job uncertainty, especially among white-collar workers, with cuts starting in the technology industry in 2023 and more significant job dislocation underway.
While tech layoffs are viewed as less of a concern, since other industries need to bolster their tech capacity and will absorb the displaced talent, broader layoffs will not follow the same dynamic. Some see this as an opportunity to work more efficiently, with “those who know how to use AI replacing those who do not,” rather than AI replacing people. Others view the rapid push for AI-driven efficiency and growth as “penny wise and pound foolish,” since rising unemployment will further destabilize an economy and society already on edge. Both are probably true.
And since AI is trained on the Internet and uses human-designed algorithms, it can – and often already does – seamlessly replicate human biases and deepen existing structural inequities, even as it appears to be neutral. We have already seen the insidious effects of social media algorithms on societal health and wellbeing, and many voiced concerns that the effects of bias in AI could be even more damaging. AI bias can discriminate based on race, gender, biological sex, nationality, social class, or many other factors. Extensive testing and diverse teams can act as safeguards, but even with these measures in place, bias can still enter machine-learning processes, which AI systems then automate and perpetuate.[1]
Underlying these dynamics is the unsettling reality that only a handful of companies currently control AI – most notably Alphabet, Meta, Microsoft, and OpenAI – and the government has yet to hold them accountable for the repercussions of their innovations, especially under the new administration. For example, according to strategy and marketing executive Susannah Hill, data centers already create environmental challenges, but generative AI carries substantial climate implications in particular: it runs computers over long periods as models train on information, and it relies on specialized chips that generate ten times more heat than standard chips, requiring more power to run them and more resources to cool them.
In fact, greenhouse gas emissions from generative AI in US data centers would total about 1.5 million metric tons annually if they all ran on average US electrical power.[2] A recent EPRI study estimates that as AI becomes more entrenched in our digital economy, data centers could consume up to 9 percent of US electricity generation by 2030.[3] In addition, a 2023 study concluded that ChatGPT “drinks” a 16-ounce bottle of fresh water for every 20-50 questions asked.[4] With roughly 10-20 billion daily AI queries, that translates to 25-125 million gallons of water a day, and rising. Unfortunately, the technology’s cooling requirements mean only fresh water will work, not grey water or other recycled options.
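For readers who want to check that 25-125 million gallon range, the back-of-envelope arithmetic can be sketched in a few lines. The figures below are the assumptions stated above (a 16-ounce bottle per 20-50 questions, 10-20 billion daily queries), not measured data:

```python
# Back-of-envelope check of the daily water-use estimate cited above.
# Assumptions come from the text: 16 oz of fresh water per 20-50 queries,
# and 10-20 billion AI queries per day. 128 oz = 1 US gallon.
OZ_PER_BOTTLE = 16
OZ_PER_GALLON = 128

def daily_gallons(queries_per_day: float, queries_per_bottle: float) -> float:
    """Fresh water consumed per day, in gallons, under the stated assumptions."""
    bottles = queries_per_day / queries_per_bottle
    return bottles * OZ_PER_BOTTLE / OZ_PER_GALLON

# Low end: fewest queries (10 billion), most queries per bottle (50).
low = daily_gallons(10e9, 50)
# High end: most queries (20 billion), fewest queries per bottle (20).
high = daily_gallons(20e9, 20)

print(f"{low / 1e6:.0f}-{high / 1e6:.0f} million gallons per day")
# prints "25-125 million gallons per day"
```

Pairing the low query count with the high queries-per-bottle figure (and vice versa) gives the widest plausible range, which is how the 25-125 million figure falls out of the study’s numbers.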
And in this moment of techno-optimism, AI companies are not constraining their growth to ensure appropriate safeguards, but rather choosing an ambitious trajectory unslowed by concerns about copyright infringement or safety. The implicit argument is that any effort to provide such guardrails will slow progress and allow China to surpass the US, fundamentally shifting the balance of power and creating an existential crisis. Sam Altman made it clear at OpenAI’s November 2023 DevDay conference that the company would pay any copyright infringement costs incurred by its developer community and that safety would be a future consideration. Now, with OpenAI’s transition to a fully profit-driven model, all pretense of prioritizing societal good is gone. The implications of these decisions are and will continue to be felt across multiple sectors, from the creative industries to media, healthcare, education, finance, cybersecurity, and even transportation.
And as we have seen, geopolitically the rise of companies with world-changing technology controlled by private individuals is fundamentally remaking the balance of power. As political scientist and author Ian Bremmer outlined in his 2023 TED Talk, the US has historically leveraged its position as the security superpower to offset China’s economic dominance, creating a relative balance of power. As technology companies control more advanced infrastructure, Elon Musk became the one who decided whether Ukrainian President Volodymyr Zelenskyy could communicate with his troops or Trump could reach a broader audience on X. Combined with outsized political donations and influence on those in power, tech billionaires have become new global superpowers who are using this newfound power to advance their corporate ambitions, and it is unclear what they will do with all that data or how we will hold them accountable. And because their vast resources are nearly unconstrained, they can afford to skirt the law and fight attempts to hold them accountable in court, essentially becoming judgment-proof.
According to journalist and author Jill Lepore’s analysis in her historical account If Then,[5] the absence of tech regulation and ethics has its roots in the 1960s with the rise and fall of a now little-known company called Simulmatics. Launched with idealistic goals during the Cold War and using the cutting-edge large-scale computing power of the time, Lepore says, it aimed to segment society, mine data, and manipulate people toward goals such as supporting civically minded politicians and winning the Vietnam War. When that did not work, the company turned to commercial goals, marketing consumer products to new audiences. Ultimately, it failed at all these efforts.
When Simulmatics shut down, the perceived threat to privacy was considered moot, and the government dropped its technology regulation efforts, including axing a proposed agency to protect consumer data. However, the seeds had been planted for a new way of viewing society as segmented and divided by demographics (e.g., Wisconsin soccer moms) rather than focusing on our broader identities as Americans, and unfortunately that legacy has stuck. In her conclusion, Lepore highlights how the void left by Simulmatics was subsequently filled by tech entrepreneurs who were singularly focused on growing their footprint and its accompanying financial success, unconcerned with societal implications. And in the age of economic neoliberalism, this vacuum persisted, enabling tech companies to slow efforts to create appropriate legislative safeguards, much as tobacco companies did when faced with the health risks of smoking.
This consolidation of power and “growth at all costs” mentality in technology is a broad source of concern among many, especially in philanthropy and nonprofits. Many people also expressed concern about the technical orientation of corporate leadership in recent years and the impact on society of a dominant work culture that does not prioritize human connection. Some called this dynamic out for perpetuating a white-dominated culture that does not value a diversity of human experience and further deepens structural inequity. Others highlighted that over the last forty years, this style of leadership has fundamentally shifted workflows and the way knowledge is created and managed. Rather than bringing people together, it favors an engineering orientation, limiting human interactions and shifting organizational culture away from connection and community.
The European Union has taken a stronger unified stance on regulating artificial intelligence than the US, where efforts have been piecemeal at the state level or rolled back under the current administration. Enacted in August 2024, the EU’s AI Act aims to foster trustworthy AI and ensure respect for fundamental rights, safety, and ethical principles, while taking a risk-based approach to regulation. While the EU’s approach is fairly comprehensive, many call out the need to establish global rules and norms given the interconnected nature of business and the economy. Most agree that we have the knowledge to provide the needed safeguards but, at least so far, have lacked the political consensus and will to do so in a coordinated and lasting way.
Ultimately, we are individually and collectively deeply divided between techno-fear and techno-optimism, but everyone can agree that technology is changing everything, practically on a daily basis. To the extent that people are fearful about their futures due to growing financial insecurity and climate instability, AI is accelerating and exacerbating those challenges. Humans make decisions based on their experience of what works, and we are now in a time when the future is profoundly unpredictable, feeding insecurity and a willingness to seek safe harbor with politicians who promise to take care of us. Next week we’ll look at how our collective anxiety is feeding into political polarization and challenges to our democracy.
[1] Battling Bias in AI, Caroline Brobeil, Rutgers University
[2] Generative AI’s growing impact, Cloud Sustainability Watch, 2024
[3] EPRI Study: Data Centers Could Consume up to 9% of U.S. Electricity Generation by 2030, 5/29/24
[4] ChatGPT ‘drinks’ a bottle of fresh water for every 20 to 50 questions we ask, study warns, Euronews, 4/20/23
[5] If Then: How the Simulmatics Corporation Invented the Future, Jill Lepore, 9/15/20