Knowledge and responsibility in the age of AI

With the rise of generative AI in research and education, a question keeps coming to mind: How is the way we understand knowledge changing as AI becomes a bigger part of our daily learning and work? This is not just a question for academics or tech experts; it affects everyone who relies on knowledge to make decisions, express ideas or contribute to their communities. We are at a point where the very act of knowing is changing—not just how we know, but who we consider to be the "knower." When a machine writes an article, summarizes a book, or helps design a curriculum, what role does the human thinker still play?

On the one hand, this technology opens up new possibilities. A student in a remote village in Nepal can now access summaries of global literature, translate complex theories into Nepali or get help writing a research paper, all at the click of a button. Generative AI can be a powerful tool for breaking down barriers of language, access and time. On the other hand, there is the risk that we may stop thinking for ourselves, relying too heavily on a tool that reflects patterns, not true understanding. In a world where so much is automated, what happens to reflection, to critical thought, and to the slow and sometimes uncomfortable process of finding our own insights?

As I struggled with these questions, I found some guidance in Eastern philosophy. While ancient texts didn’t predict AI or digital tools, they did take the question of knowledge very seriously. In the Eastern tradition, knowledge (jñāna) is not just about gathering facts. It’s something that transforms us, something that reveals the self, the world, and the relationship between the two. Importantly, it is always tied to ethics. One does not seek knowledge simply to win arguments or impress others; knowledge is pursued to live rightly, act responsibly and move closer to truth and liberation.

This is especially relevant now as generative AI begins to influence how we write, research and think. The Upanishads tell us that the student should not just ask, “What is this?” but also, “Who am I?” It’s a question of identity, intention and inner clarity. When I use AI to write a paragraph or generate ideas, I try to stay aware of what part of me is involved. Am I using the tool to clarify my thoughts or to avoid doing the hard work of thinking? Am I driven by curiosity or by convenience? These may seem like philosophical questions, but they have very practical implications. Imagine a college student in Kathmandu working on their assignments. With AI, they can generate drafts in minutes, find sources and even correct their grammar. But if they stop reading, stop questioning and simply copy what the machine offers, they may submit a polished assignment—but miss the point of education entirely.

The machine can assist, but it cannot reflect. It cannot care. It cannot ask, “Is this meaningful to my society, my values, or my life?”

Eastern philosophy offers a helpful metaphor here: the yantra, or instrument. Tools are nothing new. Humans have always used tools to extend their abilities, whether it’s the plough in agriculture, the loom in weaving or the telescope in astronomy. What matters is not just the tool, but how we use it, and for what purpose.

The Bhagavad Gita reminds us that right action must be performed without attachment to the outcome, guided by clarity and duty, or dharma. In today’s world, AI is a new yantra, but it requires the same discipline. We must ask: Is it helping me fulfill my role as a student, researcher or citizen? Or is it just making things easier at the cost of meaning? This doesn’t mean we should fear technology. Far from it. Used wisely, generative AI can become a partner in learning, a bridge across educational gaps and a tool to preserve and even regenerate local knowledge.

Imagine AI trained to document indigenous languages in Nepal or to translate oral histories into written texts. Imagine teachers using AI to create personalized learning experiences for students with different backgrounds and needs. These are exciting possibilities, but they can become reality only if we pursue them with care, ethics and awareness.

In Eastern philosophy, ethics is not separate from knowledge. Truth (satya) is not just about factual correctness; it is about aligning what we know, say and do. When we conduct research with the help of AI, it still matters that we acknowledge our sources, credit others and question the biases embedded in the tools we use.

It still matters that we ask: Does this help society? Does it deepen understanding? Or are we simply using a machine to do our work for us? This brings us back to rethinking how we understand and interpret knowledge. Perhaps the real shift is not just technological, from books to machines and from human writers to AI, but ethical.

It is a turn toward remembering that knowledge is not neutral. It shapes lives, it holds power and it demands responsibility. In this light, AI is neither a savior nor a threat. It becomes a mirror, reflecting our habits, assumptions and goals. And it asks us: What kind of knowers do we want to be?

In a country like Nepal, where tradition and modernity often walk side by side, we have a unique opportunity. We can engage with new technologies not blindly, but with the wisdom of our philosophical traditions.

We can teach students not just how to use AI, but how to think with it: critically, ethically and reflectively. We can build an academic culture that values not just output, but insight. In the end, Eastern philosophy doesn’t reject tools. It simply reminds us that we must be worthy users of them.