Recent advances in robotics, artificial intelligence, machine learning, and sensors now enable machines to automate activities that once seemed safe from disruption—including tasks that rely on higher-level thinking, learning, tacit judgment, emotion sensing, and even disease detection. Despite this progress, the ethical issues raised by business automation and artificial intelligence—and who will be affected and how—remain poorly understood. In this article, we clarify and assess the cultural and ethical implications of business automation for stakeholders ranging from laborers to nations. We define business automation and introduce a novel framework that integrates stakeholder theory and social contracts theory. This integrated framework identifies the ethical implications of business automation, highlights best practices, offers recommendations, and uncovers areas for future research. Our discussion invites firms, policymakers, and researchers to weigh the ethical implications of business automation and artificial intelligence as they approach these burgeoning and potentially disruptive business practices.