The Personalized Media Bubble: How Artificial Intelligence Shapes Audience Vision
Keywords:
AI Role, Personalized Media Bubble, Audience Vision, Filter Bubble Theory
Abstract
Artificial Intelligence (AI) has fundamentally changed how audiences perceive social media and interact within the global digital media realm. AI-driven algorithmic systems now govern nearly every aspect of the daily media experience, from social media feeds to news recommendations. Drawing on Filter Bubble Theory, this study investigates how AI-based personalization affects audience engagement, emotional response, and perceptions of credibility and objectivity. The research examines how AI systems that understand and generate content simultaneously shape users’ preferences and restrict their access to diverse perspectives, thereby creating ideological silos and fueling political conflict. It also explores the consequences of AI-generated content, including deepfakes, for audience trust, authenticity, media literacy, behavior, and mental well-being. Using a mixed-methods design that combines surveys, interviews, and platform data, the research seeks to map the relationship between AI personalization and political segregation in Pakistan’s digital public sphere. The findings are expected to offer insights into the ethical, psychological, and social implications of algorithmic control, together with concrete recommendations for transparency and accountability in AI-generated content and for digital rights frameworks that support a more open and responsible digital environment.
References
Abiri, G. (2025). Generative AI as digital media (arXiv preprint). arXiv. https://arxiv.org/abs/2503.06523
Ahmmad, M., Shahzad, K., Iqbal, A., & Latif, M. (2025). Trap of social media algorithms: A systematic review of research on filter bubbles, echo chambers, and their impact on youth. Societies, 15(11), Article 301. https://doi.org/10.3390/soc15110301
Bajwa, U. M., & Iftikhar, I. (2025). Impact of social media algorithms on polarization despite perceived diversity: Evidence from Pakistan (Preprint). Lahore Garrison University.
Cambridge University Press. (n.d.). Social media and politics in Southeast Asia. https://www.cambridge.org/core/elements/social-media-and-politics-in-southeast-asia/C9162DC3D2D71484FB0F640A6E61A2DA
Khalil, H. (2024). Algorithmic bias and political polarization: Analyzing the role of news aggregators and social media in Pakistan. Pakistan Languages and Humanities Review, 8(2), 755–768. https://doi.org/10.47205/plhr.2024(8-II)66
EngageMedia. (2025). Report calls for realising responsible AI through data protection in South and Southeast Asia. https://engagemedia.org/2025/report-responsible-ai-data-protection-south-southeast-asia/
Harvard Business School. (n.d.). The health risks of generative AI-based wellness apps. https://www.hbs.edu/ris/Publication%20Files/the%20health%20risks%20of%20generative%20AI_f5a60667-706a-4514-baf2-b033cdacf857.pdf
Media Support. (2024). Lack of enabling AI and digital rights policies hurting media freedom in Pakistan. https://www.mediasupport.org/blogpost/lack-of-enabling-ai-policies-hurting-press-freedom-in-pakistan/
National Science Foundation. (n.d.). The social impact of deepfakes. https://par.nsf.gov/servlets/purl/10233906
PwC. (n.d.). Understanding algorithmic bias and how to build trust in AI. https://www.pwc.com/us/en/tech-effect/ai-analytics/algorithmic-bias-and-trust-in-ai.html
Raza, A., & Aslam, M. W. (2024). Algorithmic curation in Facebook: An investigation into the role of AI in forming political polarization and misinformation in Pakistan. Annals of Human and Social Sciences, 5(2), 219–232. https://doi.org/10.35484/ahss.2024(5-II-S)22
Saeed, M. U., Bilal, Z., & Raza, M. R. (2020). Political speeches and media agenda: Electoral rigging movement—2013 as a building factor of media agenda in Pakistan. Indian Journal of Science and Technology.
Salenger, Sack, Kimmel & Bavaro, LLP. (n.d.). AI chatbot self-harm lawsuits: Legal help for minors. https://sskblaw.com/class-action-and-mass-tort/ai-chatbot-self-harm-lawsuits
Salenger, Sack, Kimmel & Bavaro, LLP. (n.d.). Nationwide class action & mass tort attorneys. https://sskblaw.com/class-action-mass-torts
Stanford Medicine. (2025). Why AI companions and young people can make for a dangerous mix. https://med.stanford.edu/news/insights/2025/08/ai-chatbots-kids-teens-artificial-intelligence.html
UNCTAD. (2024). From divides to dialogue: Here’s how developing countries can catch the AI boom. https://unctad.org/news/divides-dialogue-heres-how-developing-countries-can-catch-ai-boom
UNESCO. (n.d.). Deepfakes and the crisis of knowing. https://www.unesco.org/en/articles/deepfakes-and-crisis-knowing
von der Weth, C., Abdul, A., Fan, S., & Kankanhalli, M. (2020). Helping users tackle algorithmic threats on social media: A multimedia research agenda (arXiv preprint). arXiv. https://arxiv.org/abs/2009.07632
License
Copyright (c) 2025 Umar Keen, Dr. Wajid Zulqarnain, Dr. Muhammad Riaz Raza

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
LEGALOPEDIA EDUCATINIA (PVT) LTD