As the fields of learning analytics and learning design mature, the convergence and synergies between the two are becoming an important area for research. This paper summarizes the main outcomes of a systematic review of empirical evidence on learning analytics for learning design. Moreover, this paper presents an overview of what learning analytics have been used to inform learning design decisions, how, and in what contexts. The search was performed in seven academic databases, resulting in 43 papers included in the main analysis. The results of the review depict the design patterns and learning phenomena that have emerged from the synergy between learning analytics and learning design in current learning technologies. Finally, this review stresses that future research should consider developing a framework for capturing and systematizing learning design data, grounded in learning analytics and learning theory, and for documenting which learning design choices made by educators influence subsequent learning activities and performance over time.
Massive open online courses (MOOCs), a unique form of online education enabled by web-based learning technologies, allow learners from anywhere in the world, with any level of educational background, to enjoy an online education experience provided by many top universities around the world. Traditionally, MOOC learning content is delivered as text-based or video-based materials. Although introducing an immersive learning experience to MOOCs may sound exciting and potentially significant, there are a number of challenges given this unique setting. In this paper, we present the design and evaluation methodologies for delivering an immersive learning experience to MOOC learners via multiple media. Specifically, we have applied these techniques in the production of a MOOC entitled Virtual Hong Kong: New World, Old Traditions, led by the AIMtech Centre, City University of Hong Kong, which is, to the best of our knowledge, the first MOOC that delivers immersive learning content enabling distance learners to appreciate and experience how the traditional culture and folklore of Hong Kong impact the lives of its inhabitants in the 21st century. The methodologies applied here can be further generalized into a fundamental framework for delivering immersive learning in future MOOCs.
Question classification is a key component of many applications, such as question answering (QA, e.g., Yahoo! Answers), information retrieval (IR, e.g., the Google search engine), and e-learning systems (e.g., Bloom's taxonomy classifiers). This paper carries out a systematic review of the literature on automatic question classifiers and the technology directly involved. Automatic classifiers are responsible for labeling a given evaluation item using a type of categorization as a selection criterion. The analysis of the 80 selected primary studies revealed that SVM is the most widely used machine learning algorithm, while BOW and TF-IDF are the main techniques for feature extraction and selection, respectively. According to the analysis, the taxonomies proposed by Li and Roth and by Bloom were the most used classification criteria, and accuracy, precision, recall, and F1-score were the most frequently used metrics. As future work, the objective is to perform a meta-analysis of the studies that make their data available.
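The TF-IDF weighting named in the abstract can be illustrated with a minimal sketch. The function below is a hypothetical pure-Python implementation (not taken from any of the reviewed studies): term frequency scaled by inverse document frequency, computed over a toy corpus of tokenized questions.

```python
import math
from collections import Counter

def tf_idf(corpus):
    """Compute TF-IDF weights for a list of tokenized questions.

    Minimal illustrative implementation: tf is the relative term
    frequency within a question, idf is log(N / document frequency).
    """
    n = len(corpus)
    df = Counter()
    for doc in corpus:
        df.update(set(doc))  # count each term once per document
    weights = []
    for doc in corpus:
        tf = Counter(doc)
        weights.append({t: (tf[t] / len(doc)) * math.log(n / df[t])
                        for t in tf})
    return weights

# Toy corpus of three questions (hypothetical example data)
questions = [
    "who wrote hamlet".split(),
    "where is paris".split(),
    "who painted guernica".split(),
]
w = tf_idf(questions)
```

A term shared across questions ("who") receives a lower weight than one unique to a single question ("where"), which is the property that makes TF-IDF useful as a feature representation for a downstream classifier such as an SVM.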
This paper presents an innovative method to tackle the automatic evaluation of programming assignments with an approach based on well-founded assessment theories (Classical Test Theory (CTT) and Item Response Theory (IRT)) instead of the heuristic assessment used in other systems. CTT and/or IRT are used to grade the results of different items of evidence obtained from students' results. The methodology consists of treating program proofs as items, calibrating them, and obtaining the score using CTT and/or IRT procedures. These procedures measure overall validity and reliability as well as diagnose the quality of each proof (item). The evidence is obtained through program proofs. The SIETTE system collects and processes all data to calculate the student's knowledge level. This innovative method for evaluating programming tasks makes it possible to deploy the full arsenal of techniques developed in this research field over the last few decades. To the best of our knowledge, this is a new and original contribution to the area of programming assessment.
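The IRT-based scoring mentioned above can be sketched with the simplest IRT variant, the one-parameter logistic (Rasch) model. The difficulty values and ability level below are hypothetical, chosen only to illustrate how calibrated items (program proofs) combine into an expected score; the paper's actual calibration procedure is not reproduced here.

```python
import math

def rasch_p(theta, b):
    """Probability that a student with ability theta passes a
    program proof (item) of difficulty b under the 1PL (Rasch) model:
    P = 1 / (1 + exp(-(theta - b)))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Hypothetical calibrated difficulties for three program proofs:
# an easy one, a medium one, and a hard one.
difficulties = [-1.0, 0.0, 1.5]

theta = 0.5  # hypothetical student ability on the same logit scale
# Expected number of proofs passed = sum of per-item pass probabilities
expected_score = sum(rasch_p(theta, b) for b in difficulties)
```

When ability equals item difficulty the pass probability is exactly 0.5, and higher-ability students have a higher probability on every item, which is what allows calibrated items to be compared and aggregated consistently.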
In this paper, we address the problem of enhancing young people's awareness of the mechanisms involving privacy in online social networks by presenting an innovative approach based on gamification. In particular, we propose a web application that allows kids and teenagers to experience the typical dynamics of information spread through a realistic interactive simulation. Under the supervision of the teacher, the students are inserted into a small artificial social graph, and through the different stages of the game, they can post sentences with different levels of sensitivity, and "like" or share messages published by friends. At the end of each game session, the application calculates multiple behavioral scores that can be used by the teacher to raise the curiosity of the students and stimulate discussion. Moreover, a complete interactive report is generated to analyze every individual action of the completed game sessions. Our educational tool has been employed in an extensive experimental study involving more than 450 kids and 22 teachers in seven Italian primary schools. The results show that our approach is stimulating and supports teachers in helping kids discover and recognize potential privacy risks in social network activities.
One of the characteristics of massive open online courses (MOOCs) is that the overall number of social interactions tends to be higher than in traditional courses, hindering the analysis of social learning. Learners typically ask or answer questions using the forum. This makes messages a rich source of information, which can be used to infer learners' behavior and outcomes. It is not feasible for teachers to process all forum messages, so automated tools and analyses are required. Although there are some tools for analyzing learners' interactions, there is a need for methodologies and integrated tools that help interpret the learning process based on social interactions in the forum. This paper presents the 3S (Social, Sentiments, Skills) learning analytics methodology for analyzing forum interactions in MOOCs. This methodology considers a temporal analysis combining the social, sentiment, and skill dimensions that can be extracted from forum data. We also introduce LATƎS, a learning analytics tool for edX/Open edX covering the three dimensions (3S), which includes visualizations to guide the proposed methodology. We apply the 3S methodology and the tool to a MOOC on Java programming. Results showed, among other findings, the action–reaction effect in which learners increase their participation after instructor events. Moreover, positive sentiments decreased over time and before deadlines of open-ended assignments, and certain skills (e.g., arrays and loops) caused more difficulties. These results underline the importance of using a learning analytics methodology to detect problems in MOOCs.
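The temporal sentiment dimension of the 3S analysis can be sketched as a simple aggregation: given forum messages already labeled by some sentiment classifier, compute the share of positive messages per day so that a decline over time (as observed before assignment deadlines) becomes visible. The message format and labels below are hypothetical illustrations, not the paper's actual data schema.

```python
from collections import defaultdict

def positive_ratio_by_day(messages):
    """Aggregate labeled forum messages into a per-day positive-sentiment ratio.

    Each message is a (day, label) pair, where `label` is the hypothetical
    output of any sentiment classifier ('pos' or 'neg').
    """
    counts = defaultdict(lambda: [0, 0])  # day -> [positive count, total count]
    for day, label in messages:
        counts[day][1] += 1
        if label == "pos":
            counts[day][0] += 1
    return {day: pos / total
            for day, (pos, total) in sorted(counts.items())}

# Toy example: sentiment turning more negative from day 1 to day 2
msgs = [(1, "pos"), (1, "pos"), (1, "neg"),
        (2, "pos"), (2, "neg"), (2, "neg")]
ratios = positive_ratio_by_day(msgs)
```

Plotting such ratios against course milestones (instructor events, deadlines) is one way to surface the action–reaction and pre-deadline patterns the methodology looks for.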
Presents a listing of the editorial board, board of governors, current staff, committee members, and/or society editors for this issue of the publication.
Presents the table of contents for this issue of the publication.