In recent times, terms like algorithm, digital proxy war, and AI have dominated Nepal's media, Gauri Bahadur Karki's report, and even the opening session of Parliament. Casual tea-shop conversations and academic discussions are no exception. Viewed optimistically, this ongoing discourse has helped many people become acquainted with these technical concepts, their real-world impact, and their attendant risks.
Social media platforms have become a central source of news and information for the general public. User engagement has steadily shifted from text to graphic illustrations to multimedia content such as reels.
To go deeper: literacy, at its most basic, is the ability to read and write, and it lays the foundation for digital literacy. But digital literacy means something far broader: the capacity, acquired through experience or education, to use digital tools to verify and critically question dubious information encountered while scrolling a social media feed or swiping through reels.
In Nepal, millions of digitally inexperienced people with easy internet access are not failing the internet—the internet's algorithms are failing them. For an uninformed, digitally illiterate user, a new algorithmic design principle for social media platforms could change that.
During the GenZ protests and Nepal's 2026 elections, manipulated content circulated widely on Facebook and TikTok: doctored videos of political leaders, mob lynching footage, and digitally altered screenshots accumulating thousands of reactions and shares before fact-checkers could respond. The AI-generated deepfakes were fabrications, but the damage was real. These instances of misinformation, disinformation, and fake news point to a systemic crisis. Nepal has more than 16.6m internet users (approximately 56 percent of its population), yet assessments by the International Telecommunication Union (ITU) suggest that only 30 to 35 percent possess digital literacy. That leaves an estimated 10 to 12m people navigating a digital environment they are ill-equipped to evaluate rationally, one that is curated and orchestrated by algorithms that exploit that very vulnerability.
The engagement-based algorithm is the problem
Social media platforms do not offer an accurate representation of the world—they present a curated, commercially optimized version of it. Despite their proprietary differences, the dominant platforms share one central design principle: engagement. Facebook, TikTok, and Instagram are governed by recommendation engines trained to maximize time spent, clicks, shares, and reactions. Research published in Science by Vosoughi, Roy, and Aral in 2018 demonstrated conclusively that false information spreads faster across social media than factual information, because false content tends to be more emotionally provocative. Provocation and drama are precisely the ingredients that maximize engagement. And in doing so, they affect human psychology at a hormonal level, stimulating not just dopamine (associated with pleasure and reward) but also cortisol (the stress hormone) and adrenaline (which heightens anxiety, fear, and a sense of threat).
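To make that design concrete, here is a minimal sketch of an engagement-first ranker. The field names, weights, and scoring formula are my own illustrative assumptions, not any real platform's code; the point is structural. Nothing in the objective rewards accuracy, and every interaction feeds back into what gets shown next.

```python
# Illustrative only: the weights, field names, and scoring formula here are
# assumptions for the sake of argument, not any real platform's ranking code.
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    topic: str
    predicted_watch_time: float  # seconds the model expects the user to spend
    predicted_reactions: float   # expected likes, shares, and comments
    emotional_intensity: float   # 0.0 to 1.0; provocative content scores higher
    accuracy: float              # 0.0 to 1.0; never consulted below


def engagement_score(post: Post, profile: dict[str, float]) -> float:
    # Truth does not appear in this objective; only the signals that keep
    # the user watching, reacting, and returning do.
    affinity = profile.get(post.topic, 0.0)
    return (0.5 * post.predicted_watch_time
            + 0.3 * post.predicted_reactions
            + 20.0 * post.emotional_intensity
            + 5.0 * affinity)


def record_interaction(profile: dict[str, float], post: Post) -> None:
    # Every click or share boosts the topic just engaged with, so the next
    # feed skews further toward it: the loop that builds an echo chamber.
    profile[post.topic] = profile.get(post.topic, 0.0) + 1.0


def build_feed(candidates: list[Post], profile: dict[str, float],
               size: int = 10) -> list[Post]:
    # Rank purely by predicted engagement; the most provocative items win.
    return sorted(candidates,
                  key=lambda p: engagement_score(p, profile),
                  reverse=True)[:size]
```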
The consequence is that for an uninformed user in Nepal—someone who accesses the internet primarily through a mobile phone and has received no formal digital education—the information environment is shaped entirely by a mathematical equation indifferent to truth, balance, or safety, and designed only to maximize profit through engagement-driven advertising. Over time, this produces what is known as an "echo chamber": a reinforcing loop in which users see only content that confirms their existing beliefs. Author and tech activist Eli Pariser describes the same phenomenon as a “filter bubble.”
“Platforms do not show users the real world. They show a mathematically curated version of it, and Nepal's naive users are often vulnerable, unable to tell the difference.”
A possible solution: the discrete balance algorithm
The dominant engagement-based algorithm is sequential and aggregative: each interaction narrows the feed further toward what the platform predicts the user already wants, based on prior behavior. The alternative proposed here is fundamentally different in its logic. I call it a discrete balance algorithm: a non-sequential, counterpoint-driven content recommendation system built around a single guiding principle. If a piece of content presenting one perspective on an issue is shown to a user, a counter-perspective must be recommended alongside it or immediately after.
The logic is deliberately simple. Rather than allowing prior behavior to compound into an ever-tighter ideological tunnel, the algorithm treats each content delivery as a discrete, independent event. A user who watches a TikTok video promoting a particular political party would immediately be served a video of comparable length and production quality from a rival party. A user who reads a Facebook post claiming that a government policy has failed would be shown a post offering evidence that it has succeeded. The algorithm does not adjudicate what is true—it simply insists that no single perspective arrives without its counterpart. The decision about what to believe is returned to the user, where it belongs.
Consider this example: Pluto was the ninth planet in our solar system until the International Astronomical Union (IAU) delisted it in 2006. Under a discrete balance algorithm, an engaging video presenting Pluto as a planet would be followed by equally engaging content explaining its reclassification as a dwarf planet.
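To make the mechanics concrete, here is a minimal sketch of such a counterpoint-pairing feed, applied to the Pluto case. The item structure, stance labels, and pairing rule are my own illustrative assumptions; in practice, detecting that two pieces of content take opposing perspectives on the same issue is itself a hard classification problem that this sketch simply presumes solved.

```python
# A sketch of the discrete balance rule, not a production recommender: the
# Item structure, stance labels, and pairing logic are illustrative
# assumptions, and real perspective detection is presumed, not implemented.
from dataclasses import dataclass


@dataclass
class Item:
    item_id: str
    topic: str         # e.g. "pluto-status"
    perspective: str   # a coarse stance label for that topic
    engagement: float  # predicted engagement, used only to pick a counterpart


def find_counterpoint(item: Item, pool: list[Item]) -> Item | None:
    # Same topic, different stance; choose the most engaging counterpart so
    # the counter-perspective is of comparable quality, not a token straw man.
    rivals = [c for c in pool
              if c.topic == item.topic and c.perspective != item.perspective]
    return max(rivals, key=lambda c: c.engagement, default=None)


def balanced_feed(ranked: list[Item], pool: list[Item]) -> list[Item]:
    # Each delivery is a discrete, independent event: every served item is
    # immediately followed by a counter-perspective whenever one exists.
    feed: list[Item] = []
    for item in ranked:
        feed.append(item)
        counter = find_counterpoint(item, pool)
        if counter is not None:
            feed.append(counter)
    return feed


# The Pluto example: a video presenting Pluto as the ninth planet is paired
# with one explaining its 2006 reclassification as a dwarf planet.
pool = [
    Item("v1", "pluto-status", "ninth-planet", engagement=0.9),
    Item("v2", "pluto-status", "dwarf-planet", engagement=0.8),
]
print([i.item_id for i in balanced_feed([pool[0]], pool)])  # ['v1', 'v2']
```

Note what the sketch never does: it does not score which stance is correct. It only guarantees the pairing, which is exactly where the design returns the judgment to the user.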
A balance-first design corrects biases that were never deliberately programmed but emerged as consequences of engagement optimization. It would help users reach more accurate conclusions, and it would compensate for the absence of critical digital skills through architectural design rather than relying on formal media literacy training alone.
For a digitally inexperienced population—one where most users have not received media literacy education and cannot independently fact-check what they consume—a balance-first algorithm significantly mitigates the risks of algorithmic manipulation. This is especially pertinent in Nepal's context: the National Population and Housing Census 2021 records a literacy rate of just 76.3 percent, and only an estimated 20 percent of the population possesses digital literacy. Meanwhile, the Nepal Police Cyber Bureau's annual reports show a year-on-year rise in cybercrime: online fraud, OTP scams, fake job recruitment, cryptocurrency manipulation, and fraudulent earning apps. The victims are invariably those naive to the digital environment. These crimes are not incidental: they are driven by algorithmic manipulation, and the most widely used social media platforms are their staging ground.
Social media regulation that does not infringe on freedom of expression, combined with digital literacy education, remains the essential long-term solution for Nepal. Both are necessary in a country where millions of inexperienced users are being misled and defrauded by algorithms designed not to inform them, but to hold them captive through engagement—with consequences for their personal lives and for society at large.
A discrete balance algorithm is not a complete remedy for misinformation, disinformation, or online fraud. What it does is dismantle the curated environment of engagement-first design and ensure that a naive user will encounter the other side—a dissenting opinion, a different angle, an alternative perspective. In a country where algorithmic literacy remains a distant aspiration, that guarantee is no small thing.