Artificial Intelligence and Automation: The Great Ethical and Moral Challenge for CEOs in the “Post-Apocalyptic” Market

Historically, we have always been fascinated by, and fearful of, the idea of “the end of times”: that final event that would end society as we know it. The concept runs from ancient religious beliefs to modern portrayals in entertainment like Black Mirror, Terminator, and The Matrix, which envision a post-apocalyptic world where today’s society is replaced by machines, a scenario that now feels alarmingly close.

Today, society fears the negative impact of AI on the job market, particularly the prospect of massive job losses to technology. That fear is tied to a vision of a world ruled by robots and machines that do everything for humans, leaving people useless and unable to think for themselves, destroying sources of income and stability, and potentially leading to chaos. In this dystopian view, only large corporations and wealthy individuals would have “control” over AI, using it to dominate the world as they please.

But even when we leave fiction aside and look at reality, the scenario is not much better. Many of these ideas take root in society because of careless or cavalier statements from figures such as Sam Altman (OpenAI), Jensen Huang (NVIDIA), Sundar Pichai (Google), and Mark Zuckerberg (Meta), well-known CEOs driving AI adoption, who casually present AI as the inevitable future even if it costs thousands of jobs. Such statements suggest that economic gain is their main concern: they gloss over copyright, privacy, and human wellbeing; ignore immediate market disruptions; and neglect the massive energy consumption and hardware demands of AI systems. They also fail to address our diminishing control over AI behavior, as seen in reported incidents where models have rewritten or bypassed shutdown commands or disobeyed their operators.

In the face of so many risks, we need to adopt a more positive and objective perspective. Many CEOs do not realize that they are living in an unprecedented moment: a convergence of advanced technological tools, a highly skilled workforce, abundant funding sources, and a variety of communication channels with shareholders and stakeholders. This is a unique opportunity to take their organizations to the next level, provided they foster a healthy synergy among all these elements. The role of a company is to build and contribute to societal development, not just to chase endless profit. That is why CEOs must evaluate the social impact of their actions through an ethical and moral lens.

Embracing automation responsibly, especially when it involves AI, can become a competitive advantage. Companies keep accountable individuals overseeing final results, which lets them improve automated processes without fearing that these AI models could turn against them. This approach can also create new jobs for specialists responsible for implementing and maintaining AI systems in close collaboration with human employees.
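To make this concrete, here is a minimal sketch, in Python, of what such human oversight could look like in practice: nothing the AI drafts is released until a named reviewer signs off. Every name in it (DraftResult, approve, publish) is illustrative rather than any specific vendor’s API.

```python
from __future__ import annotations

from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class DraftResult:
    """An AI-generated output waiting for human sign-off."""
    content: str
    model: str
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    approved_by: str | None = None  # stays None until a person takes responsibility


def approve(draft: DraftResult, reviewer: str) -> DraftResult:
    """Record the accountable person before the result can be released."""
    draft.approved_by = reviewer
    return draft


def publish(draft: DraftResult) -> None:
    """Refuse to release anything that no human has signed off on."""
    if draft.approved_by is None:
        raise PermissionError("No accountable reviewer: result cannot be published.")
    print(f"Published (model={draft.model}, approved by {draft.approved_by})")


# Usage: the AI drafts, but a named person answers for the outcome.
draft = approve(DraftResult(content="Refund policy reply ...", model="third-party-llm"),
                reviewer="production.manager")
publish(draft)
```

The point of this design is that accountability lives in the release step itself, so an unreviewed AI output simply cannot reach a customer.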

Automation tools, without exception, will never be accountable for their actions or outcomes. AI systems do not sign NDAs (non-disclosure agreements), which are essential to protect a company’s information and intellectual property. Consider the case of Air Canada, which let an AI chatbot answer customers’ booking questions. The bot told a user he qualified for a special “bereavement fare,” a discount for passengers traveling because of a death in the family. When the user tried to claim the discount, the company denied it, blaming a mistake in the bot’s information. The dispute went before a tribunal, which ruled in the user’s favor and upheld the policy the AI had presented. Air Canada argued that the chatbot, not the company, was responsible, but the decision cost the airline money and set an important precedent for others.

Another issue is the heavy dependency that AI creates. If the system goes down for a day or two, the impact can be severe: AI tends to take over large production volumes while detaching people from the underlying responsibilities, so when outages occur, the few remaining employees cannot meet demand. Additionally, current legal frameworks still do not fully cover AI-driven actions, only those carried out by humans, which raises the risk of intellectual property loss or violations.

Just because a technology is available does not mean we should use it blindly because it is “easy.” One example is the CEO of Duolingo, who proudly announced that the company would cut the contract workers whose tasks AI could now do. The backlash was intense: social media users and employees heavily criticized the company, forcing the CEO to issue a public “correction” claiming it had all been a misunderstanding, since the platform’s success still relied on its people.

Believing that AI’s speed automatically makes it more qualified than a human for a task is not an objective perspective. Any transition toward automation takes time. We must not abandon the legacy and values a company has built just to glorify AI, especially when most companies do not even create their own AI models but rely on third-party services. When a company instead implements AI thoughtfully, with the goal of process innovation, the result is very different.

Responsibility doesn’t lie solely with CEOs. Employees must also integrate AI into their workflows in ways that enhance, not threaten, their roles. They should be empowered to voice how they see this technology contributing to their areas of expertise. 

Finally, transparency is essential when adopting these tools. Companies should be clear about where and how AI is being used to avoid reputational and stability disasters, as in the case of Builder.ai. The company claimed to build apps almost entirely with AI, but after a scandal over its financial practices it emerged that most of the work was actually done by outsourced workers in India. This misled investors and customers, who believed the AI alone had produced error-free work, and it erased the human contribution behind it.

These transparency issues happen elsewhere too. Scottish voice actor Gayanne Potter discovered that her voice was being used in ScotRail’s automated train announcements without her consent, because a company she had worked with in the past had replicated her voice with AI. This caused her emotional and professional harm.

It’s also important to consider customer service. If human interaction is replaced by AI, companies must clearly disclose whether users are communicating with a person or a machine. 
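As a minimal illustration, assuming a simple chat handler (the constant and function here are hypothetical, not any real framework’s API), that disclosure can be built into the very first message:

```python
AI_DISCLOSURE = ("You are chatting with an automated assistant. "
                 "Type 'agent' at any time to reach a person.")


def reply(bot_answer: str, first_turn: bool) -> str:
    """Open every new conversation with an explicit AI disclosure."""
    return f"{AI_DISCLOSURE}\n\n{bot_answer}" if first_turn else bot_answer


print(reply("Your order shipped yesterday.", first_turn=True))
```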

As a CEO, you may not be able to stop the zombies, but in a post-apocalyptic market ruled by machines you have all the power to become the next John Connor: put people first and use technology to support them, improving the company and making a positive impact on society.


Sergio Cáceres Velasco

Production Manager
Red Design Systems
