Seeing Through AI: How Artificial Intelligence May Be Blurring Human Agency
9/24/2025 · 4 min read
Is the Lens Blurred?
Picture putting on glasses that not only help you see better but also help you understand what’s around you. With features like real-time translation, object recognition, and hands-free recording, wearable AI glasses have moved from science fiction to reality.
These glasses are advertised as tools that make daily life easier. Still, there’s an important question to consider: do they give us more control, or are they slowly taking it away?
The Allure of Augmented Perception
Wearable AI glasses are all about making life easier. If you're traveling, they can translate signs as you see them. In a new city, they can quickly point out landmarks. For students, professionals, and everyday users alike, they can support memory and problem-solving.
This is part of a larger shift: we are increasingly delegating cognitive labor to machines, a process known as "cognitive offloading" (Gerlich, 2025). Done right, it frees up mental bandwidth. Done excessively, it risks dulling the very faculties we are trying to enhance (Pitts et al., 2025).
The Hidden Trap: Overreliance
Research shows that when people rely too heavily on these new forms of technology, their decision-making and critical thinking can diminish over time. A systematic review highlights this, finding that overreliance on AI dialogue systems impairs analytical reasoning and creativity (Zhai et al., 2024).
Newer studies reveal that automation bias, the tendency to accept machine recommendations uncritically, can lead people to follow incorrect advice even when their own judgment is better (Abdelwanis et al., 2024; Romeo & Conti, 2025).
When technology provides us with information right in front of our eyes, it can be challenging to distinguish what we’re actually seeing from what the technology is suggesting.
The Privacy Blind Spot
Beyond cognition, wearable technology such as smart glasses and body cameras poses profound privacy dilemmas. These devices often rely on continuous recording, facial recognition, or cloud-based data processing.
As a result, bystanders can be recorded without their knowledge, and even wearers can forget that their device is capturing the environment, allowing it to transcribe private conversations and track their location.
Unlike phones, which require deliberate use, these glasses work in the background, collecting information without anyone noticing. That makes the privacy concerns even more serious, and scholars warn that the normalization of constant recording risks eroding social trust and amplifying surveillance culture (Janssen et al., 2019).
Agency at the Verge of Automation
Because this technology is so new, and because of the pace at which it is being delivered to everyday citizens, the concern is not just accuracy; it is autonomy. As Pitts et al. (2025) note, reliance patterns that develop over time often reflect a "cognitive cost-benefit calculation" that results in the slow erosion of decision-making effort.
Faced with time pressure or mental fatigue, and in a culture that keeps pushing the pace to showcase what these innovations can do, people are beginning to default to trusting AI outputs rather than verifying them.
The concern is that wearers become complacent and begin unconsciously accepting what the algorithms tell them throughout the day. Combined with the privacy issues, this could lead to a world where our decisions, and even our focus, are shaped by technology we barely notice we are using.
A Path Forward to Responsible Design
Researchers advocate "human-centered design" and "trust calibration" to mitigate these risks. Strategies include transparency cues that signal when AI is active or when the information on display is uncertain, privacy-first defaults such as visible recording indicators, and forcing functions that encourage verification rather than blind acceptance (Li et al., 2024; Vaccaro et al., 2024).
Adaptive interfaces that respond to user expertise can further reduce dependency (Ma et al., 2024). By making these glasses collaborators instead of authorities, and by embedding privacy safeguards, developers can help preserve human judgment and public trust.
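To make "trust calibration" concrete, here is a minimal sketch of what a transparency cue and a forcing function might look like in software. Everything in it is a hypothetical illustration under assumed names (`OverlayItem`, `render_overlay`, the two thresholds), not the API of any real pair of glasses.

```python
# Hypothetical trust-calibration logic for an AI-glasses overlay.
# All names and thresholds are illustrative assumptions.

from dataclasses import dataclass

CONFIDENCE_FLOOR = 0.70   # below this, label the output as uncertain
VERIFY_THRESHOLD = 0.90   # below this, ask the wearer to confirm first

@dataclass
class OverlayItem:
    text: str          # e.g., a translated sign or a recognized landmark
    confidence: float  # the model's self-reported confidence, 0.0 to 1.0

def render_overlay(item: OverlayItem) -> str:
    """Return the string shown in the wearer's field of view.

    Transparency cue: uncertain output is visibly flagged instead of being
    presented as fact. Forcing function: mid-confidence output requires a
    confirmation step rather than silent acceptance.
    """
    if item.confidence < CONFIDENCE_FLOOR:
        return f"[UNCERTAIN] {item.text} - check the source yourself"
    if item.confidence < VERIFY_THRESHOLD:
        return f"{item.text} (tap to confirm before acting)"
    return item.text

# Example: a confident translation passes through; a shaky one gets flagged.
print(render_overlay(OverlayItem("Exit - 200m ahead", confidence=0.95)))
print(render_overlay(OverlayItem("Museum closes at 5pm", confidence=0.55)))
```

The two thresholds reflect the calibration idea: low-confidence output is flagged outright, while mid-confidence output stays visible but costs the wearer a small confirmation step, which is the kind of friction forcing functions are meant to add.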
Wearable AI glasses are a bit of a paradox. They help us see and remember more, but they also make it easy to let technology do the work for us. At the same time, they can put other people’s privacy at risk.
The real question isn't just whether these devices can see for us, but whether we still get to decide what, and how, we see. As we use technology to enhance our vision, we must make sure we keep our independence and safeguard our privacy.
References
Abdelwanis, M., Alarafati, H. K., Tammam, M. M. S., & Simsekler, M. C. E. (2024). Exploring the risks of automation bias in healthcare artificial intelligence applications: A Bowtie analysis. Journal of Safety Science and Resilience, 5, 460–469. https://doi.org/10.1016/j.jnlssr.2024.06.001
Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), 6. https://doi.org/10.3390/soc15010006
Janssen, C. P., Donker, S. F., Brumby, D. P., & Kun, A. L. (2019). History and future of human-automation interaction. International Journal of Human-Computer Studies, 131, 99–107. https://doi.org/10.1016/j.ijhcs.2019.05.006
Li, J., Yang, Y., Zhang, R., & Lee, Y.-C. (2024). Overconfident and unconfident AI hinder human-AI collaboration. arXiv. https://arxiv.org/abs/2402.07632
Ma, S., Wang, X., Lei, Y., Shi, C., Yin, M., & Ma, X. (2024). “Are you really sure?” Understanding the effects of human self-confidence calibration in AI-assisted decision making. Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI ’24), 1–20. https://doi.org/10.1145/3613904.3642671
Pitts, G., Rani, N., Mildort, W., & Cook, E.-M. (2025). Students’ reliance on AI in higher education: Identifying contributing factors. arXiv. https://arxiv.org/abs/2506.13845
Romeo, G., & Conti, D. (2025). Exploring automation bias in human–AI collaboration: A review and implications for explainable AI. AI & Society. https://doi.org/10.1007/s00146-025-02422-7
Vaccaro, M., Almaatouq, A., & Malone, T. (2024). When combinations of humans and AI are useful: A systematic review and meta-analysis. Nature Human Behaviour, 8, 2293–2303. https://doi.org/10.1038/s41562-024-02024-1
Zhai, C., Wibowo, S., & Li, L. D. (2024). The effects of over-reliance on AI dialogue systems on students’ cognitive abilities: A systematic review. Smart Learning Environments, 11(28). https://doi.org/10.1186/s40561-024-00316-7