10 Essential Questions to Reimagine Higher Education in the Era of Artificial Intelligence – Al-Fanar Media

(The opinions expressed in this article are those of the author and do not necessarily reflect those of Al-Fanar Media).
If you ask ChatGPT about the essential skills an effective leader should cultivate in the era of artificial intelligence, you will get a list of compelling and relevant attributes. AI is emerging as a transformative tool in higher education, unfolding a range of present and future possibilities, both exciting and diverse.
However, there are still many questions that educators and educational institutions must address in 2024 to navigate and capitalize on this constantly evolving technological landscape.
AI-powered chatbots like ChatGPT generate text or images based on users’ questions, or prompts. It is often said that the secret of an effective prompt lies in asking relevant and well-thought-out questions. This premise gained special relevance for me as I prepared for a panel discussion last month at the QS Reimagine Education Awards & Conference 2023, organised by QS Quacquarelli Symonds in Abu Dhabi, where we discussed different potential applications of AI in education and the numerous uncertainties that persist.
Here, I explore ten essential questions to Reimagine Higher Education in the Era of Artificial Intelligence.
1. Can AI be multilingual but monocultural? Since more than half of the content on the internet is in English, followed by Russian (less than 10 percent) and Spanish (about 5 percent), AI models trained on data harvested from the internet risk perpetuating the cultural biases present in the source data. Some authors are concerned that ChatGPT predominantly reflects, or is biased towards, a single culture. How could we integrate more diverse data sets into the development of AI models so that a wider range of cultural perspectives is ensured?
2. Are we dedicating more resources to teaching machines to learn than to teaching humans how to learn? The evolution of large language models (LLMs), the problem-solving algorithms at the heart of AI chatbots, has focused the debate on how to feed and train these “machines”, while neglecting the fundamental task of teaching humans to learn effectively and autonomously. Héctor Ruiz Martín, director of the International Science Teaching Foundation, argues in his latest books that teachers must understand the cognitive and emotional mechanisms that govern learning in order to teach students how to study, learn, and memorize.
3. Returning to the initial question, what competencies should a good leader possess to make proper use of AI in education? According to Diego Alcázar, chief executive of IE University, new AI tools will allow leaders to elevate different skills to an unprecedented level. From my analysis, a leader in an institution or company in the AI era should develop at least four key competencies.
4. How vast are the scope and potential of AI in education? Harvard Business School proposes a wide variety of situations where AI could be very useful in student learning. AI can function as a personalized tutor, adapting to the individual levels and needs of each student. It also shows promise for offering personalized feedback to students. But we face other dilemmas as well. Would it be appropriate for an LLM to assign final grades in academic evaluations? How can we identify when it is unfair or incorrect? Can it assess creativity? In which cases would students consider the automated grading of their work legitimate?
5. How do we address plagiarism and academic integrity in the era of AI? Although there are applications with plagiarism-detection systems, they still come with a significant percentage of errors, including false positives and false negatives. This raises the question of whether it would be more effective to focus on teaching ethical values for the responsible use of AI. How do we develop the awareness, and the habits, needed to identify when its use does or does not uphold academic integrity?
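To see why even a seemingly small error rate matters, consider a minimal sketch applying Bayes’ rule to a hypothetical AI-writing detector. All the rates below are illustrative assumptions, not measurements of any real tool:

```python
# Hypothetical illustration: when a detector flags a submission as
# AI-written, how often is that flag correct? All rates are assumptions
# chosen for the example, not properties of any actual detector.

def prob_ai_given_flag(prevalence, sensitivity, false_positive_rate):
    """Bayes' rule: P(AI-written | flagged by the detector)."""
    true_flags = prevalence * sensitivity                  # AI-written and flagged
    false_flags = (1 - prevalence) * false_positive_rate   # honest work wrongly flagged
    return true_flags / (true_flags + false_flags)

# Suppose 10% of submissions are AI-written, and the detector has
# 95% sensitivity with a 2% false-positive rate:
p = prob_ai_given_flag(prevalence=0.10, sensitivity=0.95, false_positive_rate=0.02)
print(f"P(AI-written | flagged) = {p:.2f}")  # about 0.84
```

Under these assumed numbers, roughly one in six flagged submissions would be an honest student wrongly accused, which is why error rates alone cannot settle questions of academic integrity.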
6. Can the use of AI chatbots create technological overdependence or even distrust in students’ abilities? Excessive use of AI-powered tools could reduce students’ development of research skills, critical thinking, or problem-solving skills. The ease and speed with which chatbots provide information could also reduce a student’s motivation or perceived need to learn and retain information. Students might even feel insecure about their creativity and originality once accustomed to relying on AI-generated responses. How do we guide students in the responsible use of AI tools?
7. How do biases in AI affect large language models’ results and ability to determine patterns? If LLMs train on data sets that contain biases, such as gender or race-based stereotypes, or prevailing cultural and geographical perspectives, these biases will be reflected in the models’ results. What are the implications of training an AI model with biased data? How can we ensure that minority perspectives, which contribute diversity and, at times, less recognized or hidden truths, are not overlooked?
8. What is the impact of “algorithmic hallucinations”, especially when content generated by large language models becomes increasingly common on the internet? If this content is poorly developed, it can generate misinformation in searches, distorting reality and creating confusion. How can we address and mitigate these distortions generated by AI to maintain the integrity of information online?
9. What are the ethical boundaries in the use of AI, regarding privacy or its political impact? When users enter personal data into systems like LLMs, that data can be stored and used in future training runs, often without attribution. Questions arise about how to ensure privacy, what protections exist against the use of AI to disseminate inaccurate or discriminatory content, how these tools are trained, and what security measures are implemented to protect users from misinformation or harmful interactions. Additionally, the question arises of how AI handles copyright and content distribution.
10. Now that we integrate sustainability into education at all levels, does artificial intelligence reduce or increase our environmental footprint? Combined with technologies like the Internet of Things, artificial intelligence offers potential to optimize resource consumption and promote sustainability, for example, in a university. However, processing the large volumes of data that AI demands increases energy consumption. Studies project that by 2040, energy use associated with the information and communications technology (ICT) sector will account for 14 percent of global greenhouse gas emissions. Most of these emissions will come from ICT infrastructure, particularly data centers and communication networks. We are faced with a crucial dilemma: How do we strike a balance between the benefits of AI for sustainability and its environmental impact due to energy consumption?
The intersection of AI with education offers a horizon of infinite possibilities, but also poses dilemmas that we must continue to address in 2024 with care and reflection to ensure an inclusive, ethical, and effective educational future.
Borja Santos Porras is associate vice dean for learning innovation at IE School of Politics, Economics and Global Affairs, IE University.