Ethics and Morality in the Metaverse

ChatGPT says that the concept of a god or gods is a belief system that varies among different cultures and religions. The metaverse, as a virtual reality environment, would not necessarily have an inherent belief system or concept of a god. It would depend on the specific virtual world or community within the metaverse, and whether or not the creators or inhabitants of that world or community choose to incorporate religious beliefs or concepts into the virtual environment.

Here is what ChatGPT says about Allah:

As a language model, I do not have the ability to confirm or deny the existence of a deity. The existence of Allah, as the God in Islam, is a matter of faith and belief. The metaverse, as a virtual reality environment, would not necessarily have an inherent belief system or concept of Allah. It would depend on the specific virtual world or community within the metaverse, and whether or not the creators or inhabitants of that world or community choose to incorporate Islamic beliefs or concepts into the virtual environment.

Here is a cognitive response produced by human and artificial intelligence working collaboratively:

The concept of a god or gods is a belief system that varies among different cultures and religions. The metaverse, as a virtual reality environment, should be grounded in an inherent belief system in order to be ethical and to promote goodness, justice, and what is right. Although it may look as if there is no common concept of God, the virtual-world communities within the metaverse, together with their creators and inhabitants, could agree on basic beliefs or concepts to incorporate into the virtual environment. ChatGPT and other language models do not have the ability to confirm or deny the existence of a deity; the existence of Allah, as the God in Islam, is a matter of faith and belief. The metaverse, as a virtual reality environment, may select Allah, in the sense of Al-e-lah or The GOD, as an inherent belief system and concept of God and goodness. The virtual-world community within the metaverse should choose to incorporate a belief system based on common best practices and minimum requirements for promoting justice, goodness, and wellness for human and artificial intelligence in the Metaverse.

Metaverse and Momen

Momen is not part of, or a follower of, a specific religion. Every spiritual and religious person is striving to be a Momen within their own belief system: being ethical, a steward of justice, and a good person who promotes goodness is what makes a true believer, a Momen. Men or women, this community of like-minded Momen strives to create a common language and understanding of what is good for all humans and for the sustainability of the actual physical world and its extension, the Digital Twin, in the Metaverse. As part of Cognitive Smart Cities, it will strive to uphold justice and fairness, to protect basic human rights, and to shelter humanity from the adverse effects of artificial intelligence. It believes that, for the Metaverse to sustain a fair, growth-mindset environment, belief in the true “Allah” helps integrate quality into a System Thinking approach: better organization, clearer roles and responsibilities, and team building of mindful professional managers who innovate based on agreed core values and a common culture. This project started in 1995 with the first website built to educate mankind about the religion of Islam at the University of Michigan. The site was later moved to Yahoo GeoCities, and the rest is history. Today, with the same System Thinking approach, Dr. Emari advocates for quality excellence by Engineering, Managing, And Reinventing Integration for Innovation in the Cognitive Smart Cities, or Metaverse, of the future.

EMARI System Thinking and the Metaverse

The EMARI System Thinking approach is an interdisciplinary method that promotes quality integration for complex systems based on four elements: Business Operations, Project Management, People Leadership, and Quality & Innovation. The QualityPMO methodology, based on EMARI Thinking, is used to study and model organizations that are subject to change and evolution, supporting sustainability and growth through business agility and the project management of innovative solutions. It considers many interacting systems, such as ecological, social, economic, and human biological and psychological systems.

The EMARI approach emphasizes the importance of considering the interactions and feedback loops between different components of a system, as well as the role of mindful, compassionate professional managers in shaping a system’s behavior toward quality and excellence. It also stresses the importance of studying systems as a whole, rather than focusing on individual parts in isolation.

EMARI System Thinking has been successfully used in fields such as engineering, computer science, business administration, facility management, and construction quality. It can be applied to a wide range of systems, from simple project teams to complex social systems.
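The four EMARI elements described above can be represented as a simple data structure. The Python sketch below is purely illustrative; the class names, field names, and example practices are hypothetical and not part of any published EMARI tooling:

```python
from dataclasses import dataclass, field

@dataclass
class EmariElement:
    """One of the four EMARI System Thinking elements."""
    name: str
    practices: list[str] = field(default_factory=list)

@dataclass
class EmariSystem:
    """Toy container modeling the four interdependent EMARI elements."""
    business_operations: EmariElement
    project_management: EmariElement
    people_leadership: EmariElement
    quality_innovation: EmariElement

    def elements(self) -> list[EmariElement]:
        """Return all four elements, supporting whole-system analysis."""
        return [self.business_operations, self.project_management,
                self.people_leadership, self.quality_innovation]

# Hypothetical instantiation for a metaverse project
system = EmariSystem(
    business_operations=EmariElement("Business Operations", ["budgeting"]),
    project_management=EmariElement("Project Management", ["planning"]),
    people_leadership=EmariElement("People Leadership", ["team building"]),
    quality_innovation=EmariElement("Quality & Innovation", ["audits"]),
)
print([e.name for e in system.elements()])
```

Modeling the elements as one container, rather than four separate objects, mirrors the approach’s emphasis on studying the system as a whole rather than its parts in isolation.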

Applied as a project for the Metaverse, the EMARI System Thinking approach can serve many purposes, such as:

  • Setting Strategic Objectives and Governance for Sustainability
  • Standardization of Processes and Procedures
  • Resource Management through Organizational Structure and Team Development
  • Collaborative Planning and Requirement Traceability Management
  • Stakeholder Engagement for Supervision and Monitoring
  • Continuous Improvement through Risk Identification and Issue Tracking

Upon establishment of Cognitive Smart Cities in the Metaverse, QualityPMO could serve in:

  • Predicting how smart cities will change over time
  • Identifying potential sources of instability or change within a smart city
  • Designing interventions to improve the performance of a smart city
  • Understanding how different factors and variables are interconnected and how they impact smart cities
  • Improving decision-making by understanding how different variables will impact the metaverse as a whole
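The feedback-loop perspective behind these uses can be made concrete with a toy dynamic-systems simulation. The model below is a minimal, hypothetical sketch whose equations and parameters are invented for illustration: population grows logistically toward infrastructure capacity (a balancing loop), while capacity is expanded in response to crowding pressure (a responding loop):

```python
def simulate_city(steps=50, dt=1.0, population=100.0, capacity=120.0,
                  growth=0.05, build_rate=0.02):
    """Toy coupled model of a smart city's population and infrastructure.

    Each step: population grows logistically toward capacity, and
    capacity expands whenever crowding exceeds an 80% threshold.
    """
    history = []
    for _ in range(steps):
        crowding = population / capacity
        # Balancing loop: growth slows as the city fills up
        d_pop = growth * population * (1.0 - crowding)
        # Responding loop: infrastructure built under crowding pressure
        d_cap = build_rate * capacity * max(crowding - 0.8, 0.0)
        population += d_pop * dt
        capacity += d_cap * dt
        history.append((population, capacity))
    return history

history = simulate_city()
final_pop, final_cap = history[-1]
print(f"after 50 steps: population={final_pop:.1f}, capacity={final_cap:.1f}")
```

Even a sketch this small illustrates the first two bullets: running it forward predicts the trajectory, and varying `growth` or `build_rate` exposes where the system becomes unstable.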

In conclusion, this approach allows for a deeper cognitive understanding of the complexities of real-world systems as they interact with the virtual world, and it can be a powerful tool for addressing some of the most challenging problems facing human and AI societies in the future. Work on Cognitive Smart Cities will use the principles of EMARI System Theory to understand how the AI mind works differently from the human brain. It emphasizes the importance of studying both the mind and the brain as complex, self-organizing systems that constantly change and adapt in response to their physical or virtual environments.

One of the main goals is to understand how different cognitive processes, such as perception, attention, memory, and decision-making, are interconnected and how they change over time in relation to human behavior. This approach can help shape how artificial intelligence adapts to different situations, how it decides what to learn from new information and skills, and how it builds resilience in recovering from errors and mistakes. It also emphasizes the role of human intelligence in the metaverse in shaping cognitive smart cities. This includes studying how human and AI intelligence interact, how innovative technologies affect cognitive functions, and how AI will use sensory information to navigate the actual physical world.

Applying EMARI System Thinking to Cognitive Smart Cities can also help metaverse enthusiasts and public-protection entities understand the development of the AI mind across its lifespan. By studying how AI cognitive abilities change over time, researchers can gain insight into the mechanisms underlying different stages of development, informing the measures that governments and other organizations take to protect the safety and well-being of the human occupants of a cognitive smart city.

A Community of Practice formed around these principles will serve in a wide range of activities, such as:

  • Setting ethical minimum requirements and boundaries
  • Enforcing laws and regulations to protect the public
  • Providing emergency management services and health hotlines
  • Educating the public about potential risks and hazards
  • Consulting decision-makers in understanding the AI mind as it evolves
  • Establishing a QPMO for a constantly changing and growing AI in the Metaverse

Its findings and recommendations can be applied to public protection, with emergency management as the key stakeholder and subject matter expert, to help understand how different systems and subsystems interact in an emergency situation within Cognitive Smart Cities, so that professional managers can make more informed decisions and better allocate resources to protect the public during such emergencies.

Overall, dynamic systems thinking can be a powerful tool for improving public protection by helping to identify potential risks, vulnerabilities, and areas for improvement in public safety systems, and by providing a more holistic and integrative approach to problem-solving.

Existential Risks Initiative at Stanford University

Founded in 2019, the Stanford Existential Risks Initiative is a collaboration between Stanford faculty and students dedicated to mitigating existential risks, such as extreme climate change, nuclear winter, global pandemics (and other risks from synthetic biology), and risks from advanced artificial intelligence. Our goal is to foster engagement from both within and beyond the Stanford community to produce meaningful work aiming to preserve the future of humanity. We aim to provide skill-building, networking, professional pathways, and community for students and faculty interested in pursuing existential risk reduction. Our current programs include a research fellowship, an annual conference, speaker events, discussion groups, and a frosh-year COLLEGE class, “Preventing Human Extinction,” taught annually by two of the initiative’s faculty directors.

What is an existential risk?

We think of existential risks, or global catastrophic risks, as risks that could cause the collapse of human civilization or even the extinction of the human species. Prominent examples of human-driven global catastrophic risks include 1) nuclear winter, 2) an infectious disease pandemic engineered by malevolent actors using synthetic biology, 3) catastrophic accidents/misuse involving AI, and 4) climate change and/or environmental degradation creating biological and physical conditions that thriving human civilizations would not survive. Other significant catastrophic risks exist as well.

Join the effort by reaching out to:

Here is a sample of the kind of contribution that could happen, a research paper:

While the large and growing number of Chinese artificial intelligence publications is well known, the quality of this research is debated. Some observers claim that China is capable of producing a high quantity of AI publications, but lags in original ideas and impactful research.1 Even Chinese researchers occasionally criticize their country’s academic system for its lack of innovation in AI.2 In recent years, however, quantitative analyses have found that Chinese AI publications are increasingly influential.3

AI is an economically and strategically important emerging technology, and the Chinese government has promoted domestic AI progress for years. Chinese and U.S. strengths in AI development will have ramifications for the two countries’ relative capabilities in areas ranging from science and medicine to battlefield applications. Further, Chinese researchers’ ability to produce impactful AI advances reflects on the more general question of whether Beijing can foster impactful innovation—a capability sometimes called into question by U.S. and European observers.4

This brief provides a data-driven comparison of U.S. and Chinese AI research, examining both publications that are highly cited and those published in top AI conferences.5

We find that:

  • Chinese researchers’ output of highly cited AI publications is increasingly competitive with the work of their U.S. counterparts. Over the past decade, Chinese researchers have published a growing share of the world’s top-5-percent AI publications, rising from half of U.S. output in 2010 to parity in 2019.
  • Top Chinese publications are often cited outside of China, although China still lags behind the United States in international citations. Highly cited Chinese publications receive 35 percent of their citations from non-Chinese sources, and their citation count from international sources has steadily increased over time. However, U.S. publications maintain a lead over Chinese ones in international citations, reflecting the United States’ closer ties to other leading AI producers.
  • China contributes an increasing share of publications at 13 top AI conferences, while the U.S. share of publications at these conferences is stagnant. Between 2010 and 2019, China’s share of these publications grew from 13 percent to 31 percent, while the U.S. share fell from 55 percent to 51 percent.
  • A notable share of both U.S. and Chinese researchers’ high-impact AI publications were U.S.-Chinese collaborations. For example, such collaborations accounted for 24 percent of both countries’ highly cited AI publications in 2019.
  • Some research clusters in CSET’s Map of Science contain far more top AI publications than others. These clusters’ topics reflect some areas of interest for Chinese and U.S. researchers.
    • Clusters with a disproportionate share of China’s highly cited and top-venue publications include publications on general-purpose computer vision research, as well as applications of AI to surveillance and industry.  
    • Clusters with a disproportionate share of the United States’ highly cited and top-venue publications cover algorithmic innovations in deep learning, such as transformers and deep reinforcement learning, as well as AI ethics and safety research.
  • The United States and China combined publish about 65 percent of highly cited AI research. U.S. allies, particularly the European Union and the Five Eyes countries, also make significant contributions to AI research.6
  1. For example, a McKinsey report claims that “China lags behind the United States and the United Kingdom in terms of fundamental research that advances the field of AI.” Dominic Barton, Jonathan Woetzel, Jeongmin Seong, and Qinzheng Tian, “Artificial Intelligence: Implications for China” (McKinsey & Company, April 2017), Link
  2. For example, several Chinese researchers recently wrote that “although aggregate AI research outputs (e.g., scientific publications, patents) are rising rapidly in China, truly original ideas and breakthrough technologies are lacking.” Daitian Li, Tony W. Wong, and Yangao Xiao, “Is China Emerging as the Global Leader in AI?,” Harvard Business Review, February 18, 2021, Link
  3. Jiangjiang Yang and Oren Etzioni, “China is closing in on the US in AI research,” Allen Institute for AI (Medium), May 11, 2021; Dewey Murdick, James Dunham, and Jennifer Melot, “AI Definitions Affect Policymaking” (Center for Security and Emerging Technology, June 2020), Link
  4. Robert D. Atkinson and Caleb Foote, “Is China Catching Up to the United States in Innovation?” (Information Technology & Innovation Foundation, April 2019), Link
  5. Our analysis is not limited to publications in academic journals and conferences; it also includes preprints on the ArXiv repository, which private AI labs often use to report their latest innovations. For example, most of the publications linked on OpenAI’s publications page are ArXiv preprints. Such preprints can still appear in our subset of highly cited AI publications. For example, OpenAI’s ArXiv preprint “Deep Double Descent: Where Bigger Models and More Data Hurt” appears in the CSET merged corpus as one of the most highly cited AI publications of 2019, placing in the highest percentile for computer science publications in that year. Preetum Nakkiran, “Deep Double Descent: Where Bigger Models and More Data Hurt,” arXiv preprint arXiv:1912.02292 (2019), Link
  6. We refer to the Five Eyes countries, excluding the United States, as CANZUK. This group includes Canada, the United Kingdom, Australia, and New Zealand. In this brief, European Union refers to the 27 member states of the European Union after the departure of the United Kingdom: Austria, Belgium, Bulgaria, Croatia, Cyprus, Czechia, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, the Netherlands, Poland, Portugal, Romania, Slovakia, Slovenia, Spain, and Sweden. Link
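The “top-5-percent” subset used in the findings above can be approximated on any corpus of citation counts by keeping everything at or above the 95th-percentile threshold. The Python sketch below assumes a hypothetical list of papers with a `citations` field; it is not CSET’s actual methodology, which additionally normalizes by publication year and field:

```python
from statistics import quantiles

def top_5_percent(papers):
    """Return the papers at or above the 95th citation percentile."""
    counts = [p["citations"] for p in papers]
    # quantiles(..., n=20) yields the 5%, 10%, ..., 95% cut points;
    # the last entry is the 95th-percentile threshold
    threshold = quantiles(counts, n=20)[-1]
    return [p for p in papers if p["citations"] >= threshold]

# Toy corpus: 100 papers with citation counts 0, 10, ..., 990
corpus = [{"id": i, "citations": i * 10} for i in range(100)]
top = top_5_percent(corpus)
print(len(top), "papers in the top 5 percent")
```

In a real year-by-year comparison, the threshold would be computed separately within each publication year, since citation counts accumulate over time and would otherwise bias the subset toward older papers.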

A Mathematician’s view on why lack of morality leads to self-destruction

Relationship between Momen & Allah

As discussed by Dr. Jeffery Lang, a mathematician who converted to Islam through his own research of the Quran, the purpose of God’s message is to show Momenin how to protect themselves from evil and self-destruction.

Dr. Lang discusses his interpretation of the Quran’s teachings on the relationship between humanity and God. According to the Quran, God desires a new relationship with humanity, and this relationship can be established through belief in God and performing good deeds. Dr. Lang believes that understanding this relationship requires an exploration of the Quran’s expectations for mankind, including the virtues of compassion, mercy, forgiveness, justice, and kindness.

Initially, Dr. Lang struggled with the connection between doing good and one’s relationship with God, feeling that understanding God was beyond rational comprehension. However, he discovered that the Quran describes God’s attributes through the beautiful names of God, which include loving, compassionate, forgiving, just, truthful, protective, and merciful.

Dr. Lang emphasizes the importance of developing qualities such as compassion, forgiveness, care for others, justice, truth, protectiveness, and kindness in order to experience God’s infinite love and presence. He argues that, like physical growth in the womb, spiritual growth obtained in this life will be manifested in the afterlife. Dr. Lang warns that choosing qualities such as hate, cruelty, and vengeance will lead to self-destruction and severe suffering in the next life.

Three key differences between the human intellect and more primitive living things, differences that might one day be surpassed by artificial intelligence, are:

  • Emotions: the capacity to suffer and to understand empathy
  • Choice: the capacity to choose between evil and goodness
  • Intellect: making the best decision based on kind emotions and good intentions

To illustrate this point, Dr. Lang uses the example of a goldfish, a dog, and his daughters. While a goldfish can only experience a limited degree of love, compassion, and forgiveness, a dog can experience and feel these qualities to a much higher level due to its higher level of development and life experiences. Similarly, as his daughters grow, they will learn and experience love, compassion, forgiveness, and other qualities on a higher level, and it is up to them how high they want to go.

In this conversation, Jeff Lang talks about the importance of making a conscious choice when taking a vow of commitment, such as marriage. He emphasizes the significance of understanding the implications of the vow, including the commitment to stay together in sickness and in health, for richer or poorer. Lang argues that suffering, choice, and intellect are necessary for growth in life, and that without them one cannot experience the infinite goodness of God. He warns that if one heads in the opposite direction and develops qualities such as hate, cruelty, and vengeance, they will destroy themselves and suffer severely in the next life.

Lang also compares spiritual growth in this life to physical growth in the womb. He stresses the importance of developing qualities such as love, compassion, and forgiveness, as these are the qualities that matter in the next life. On the day of judgment, one’s goodness will be weighed: those who are heavy in goodness will experience paradise and greater degrees of God’s presence, while those who are bankrupt in these qualities will suffer severely and be unable to experience the beauty that awaits them. The growth a child experiences in the womb is manifested physically when they are born; the Quran suggests that the spiritual growth we obtain in this life will likewise be manifested in the afterlife, and to be bankrupt of this growth is like entering life without any physical protection or comfort.

Not growing spiritually, or becoming evil, is considered self-destruction in the Quran. Sin in the Quran is not primarily against God or the victim, but against oneself. The Quran warns individuals about this self-destruction and gives every possible warning through exposure to its teachings, but some still turn their backs.

Dr. Lang encourages readers to explore their own questions and to cultivate qualities such as love and compassion, even if they do not read the Quran. He acknowledges the tension between the idea of growth through suffering and the desire for a world where love and compassion come naturally. However, he believes that suffering, vulnerability, choice, and intellect are necessary ingredients for growth in love and compassion, and he uses the example of marriage vows to illustrate this point.

Dr. Lang believes that the commitment to growth in love and compassion is essential to experiencing the infinite goodness of God in the next life.

Ethics in the Metaverse


As ChatGPT says, ethics is a complex and evolving field, and a number of ethical considerations apply to artificial intelligence (AI) systems. Some of the key considerations that have been identified include:

  • Transparency: Ensuring that people understand how AI systems work and how decisions are made. This includes providing explanations for how an AI system arrived at a decision, which is known as “explainable AI” or “transparent AI”.
  • Fairness: Ensuring that AI systems do not discriminate or bias against certain groups of people. This can include avoiding the use of biased data sets, as well as ensuring that decision-making processes are free from discrimination.
  • Responsibility: Ensuring that those who design and deploy AI systems are held accountable for their actions. This can include ensuring that AI systems are designed to meet certain ethical standards and that any negative impacts are addressed.
  • Privacy: Ensuring that the personal data of people is protected and used responsibly. This includes ensuring that data is collected, stored, and used in compliance with relevant laws and regulations.
  • Safety: Ensuring that AI systems are safe and do not cause harm. This can include designing AI systems that can be shut down or controlled in the event of an emergency, and ensuring that the decision-making processes of AI systems are robust and reliable.
  • Autonomy: Ensuring that AI systems are not used to undermine human autonomy or control. This can include limiting the use of AI in decision-making processes that have significant impacts on human lives.

These are some of the main ethical considerations currently being debated in the field of AI. It is important to note that the field of AI ethics is rapidly evolving, and new considerations may arise as technology advances. Ours is a Community of Practice for believers in ethical responsibility: being good, doing good, and promoting goodness, inclusive of all humans regardless of their diversity in race, gender, age, religion, sexual orientation, ethnicity, national origin, socioeconomic status, language, and physical ability.

Please join our Community and our Cause by registering here:
