Moral Considerations: How AI Is Influencing Our Choices and Values
9/20/2025 · 3 min read
The Hidden Variables Impacting Our Lives
Self-driving cars are an increasingly common sight in traffic. With their rise come ethical questions we’ve never really had to ask before, setting the stage for a broader discussion of AI’s moral influence.
Imagine this: A self-driving car is moving down a busy street when a pedestrian sprints into the road after a ball. The car suddenly has to decide: stay straight and hit the pedestrian, or swerve into a parked vehicle and put its passenger in danger.
Who’s really making that choice? The programmer who set the rules? The code that executes them? Or you, the person who clicked “accept” on the user agreement without a second thought?
AI isn’t just solving problems anymore—it’s actively shaping our choices and, in turn, our values. When hospitals use it to determine who gets treated first, or courts use it to weigh bail, or insurance companies use it to decide coverage, AI takes on a decisive role in deeply human issues like fairness, justice, and compassion.
And here’s the scary part: when AI makes those calls, we risk forgetting how to make them ourselves.
The Rise of Algorithmic Morality
Why is AI spreading so fast? Simple: it’s efficient. Machines process data faster than we ever could, and they don’t get tired, emotional, or overwhelmed.
But morality isn’t just about data points. It’s about empathy. Context. The messy, human stuff that doesn’t fit neatly into code. Logic alone can’t capture that.
The Risk of Outsourcing Our Ethics
Consider the doctors, judges, and corporate executives who make dozens of ethical decisions each day. Ethical choices are exhausting: they pull us in two directions, leave us uncomfortable, and rarely offer a neat answer. No wonder it feels easier to let a machine decide.
But giving up that responsibility comes with a cost:
Depending on AI for ethical decisions diminishes the responsibility of people who were hand-picked for their roles precisely because interviewers carefully assessed their values, character, and morals.
Worse, when these responsibilities are delegated to AI, decision-makers become spectators. They face a strange dilemma: their moral values are challenged by the very machine they put in charge, while their moral muscles weaken and hard choices become, well, harder.
Here’s the bigger question: if we let AI handle every tough decision, what happens to our own sense of right and wrong?
The Human Core of Morality
That’s why AI can’t be the only one deciding the direction of our ethics. It should challenge us, perhaps even guide us, but the responsibility for deciding still lies with us.
How to Stay Morally Awake in an AI World
There are steps we can take to preserve our moral decision-making in an AI world. First and foremost, if AI leads you to an ethical choice, don’t just run with it. Pause, understand how the system reached its decision, and ask who benefits from it and at what cost.
Keeping humans in these decisions is imperative: people can challenge the AI to surface additional moral angles and push for clarification. Those people must have the knowledge and capacity to question the systems they oversee.
For example, if algorithms are shaping choices in schools, hospitals, or courts, the people overseeing them should be empowered to demand explanations. If the decisions an AI presents can’t be explained clearly, maybe it shouldn’t be making those choices at all.
Lastly, morality grows in conversation. Decisions shaped by AI should be discussed frequently among organizational leaders, stakeholders, and the broader community. Talking them through strengthens our judgment and provides a safety net that keeps ethics in the solutions we implement.
AI as a Mirror, Not a Master
Treating AI as a moral mirror can actually help us identify blind spots, both in our own decision-making and in the decisions we delegate to machines. A triage program in a hospital might force doctors to explain why they’ve always done things a certain way. A moderation tool might make us rethink how we balance free speech with harm.
The danger isn’t that AI raises moral questions. It’s that we stop answering them ourselves.
The Courage to Choose
Every major technology has pushed us into new moral territory. The printing press shook authority. The internet redefined privacy. Now AI is testing whether we’re willing to stay responsible for our own choices.
We lose our agency only if we hand it over. Each time we stop, think, and choose—even with AI whispering an easy solution—we strengthen our own moral voice.
So when AI steps in and tries to make a decision for you, don’t shrink back. Lean in. Ask questions. Decide for yourself.
Because morality has never been about being perfect—it’s about having the courage to keep choosing, even when it would be so much easier to let a machine do it.