
Selected Online Reading on Artificial Intelligence in Education, Culture and Audiovisual Sector

Find a list of selected books, electronic books and articles, online databases, newswires and training sessions to enhance your knowledge from home.

Artificial intelligence in education

  • Alexa, What classes do I have today? The use of Artificial Intelligence via Smart Speakers in Education; Camelia Şerban, Ioana-Alexandra Todericiu; Procedia Computer Science; Vol. 176; October 2020; pp. 2849-2857.
    From the abstract: As the world changes, so does the future of our students. In this respect, the evolution of technology brings specific environments for educational purposes. Building smart learning environments supported by e-learning platforms is an important area of research in the education domain today. The evolution of these smart learning environments is justified by events such as Covid-19 that force students to learn remotely. The paper proposes a software application component using the Alexa smart speaker that integrates different services (Amazon Web Services, Microsoft Services) into a proper virtual environment platform for both students and teachers. It addresses the main concerns of the current educational system and provides a smart solution through the use of Artificial Intelligence based tools. The proposed approach not only unifies data and knowledge-sharing mechanisms in a remote mode, but also brings a good learning experience, increasing the effectiveness and efficiency of the learning process.

  • Artificial intelligence in education: The three paradigms; Fan Ouyang, Pengcheng Jiao; Computers and Education: Artificial Intelligence; Vol. 2; 2021; Article 100020.
    Abstract from authors: With the development of computing and information processing techniques, artificial intelligence (AI) has been extensively applied in education. Artificial intelligence in education (AIEd) opens new opportunities, potentials, and challenges in educational practices. In its short history, AIEd has been undergoing several paradigmatic shifts, which are characterized in this position paper as three paradigms: AI-directed, learner-as-recipient; AI-supported, learner-as-collaborator; and AI-empowered, learner-as-leader. In the three paradigms, AI techniques are used to address educational and learning issues in varied ways. AI is used to represent knowledge models and direct cognitive learning while learners are recipients of AI service in Paradigm One; AI is used to support learning while learners work as collaborators with AI in Paradigm Two; AI is used to empower learning while learners take agency to learn in Paradigm Three. Overall, the trend of AIEd has been to empower learner agency and personalization, enable learners to reflect on learning and inform AI systems to adapt accordingly, and lead to an iterative development of learner-centered, data-driven, personalized learning.
     
  • Engaging With Biology by Asking Questions: Investigating Students’ Interaction and Learning With an Artificial Intelligence-Enriched Textbook; Marta M. Koć-Januchta, Konrad J. Schönborn, Lena A. E. Tibell et al.; Journal of Educational Computing Research; Vol. 58, Issue 6; May 2020; pp. 1190-1224.
    Abstract by the authors: Applying artificial intelligence (AI) to support science learning is a prominent aspect of the digital education revolution. This study investigates students’ interaction and learning with an AI book, which enables the inputting of questions and receiving of suggested questions to understand biology, in comparison with a traditional E-book. Students (n = 16) in a tertiary biology course engaged with the topics of energy in cells and cell signaling. The AI book group (n = 6) interacted with the AI book first followed by the E-book, while the E-book group (n = 10) did so in reverse. Students responded to pre-/posttests and to cognitive load, motivation, and usability questionnaires; and three students were interviewed. All interactions with the books were automatically logged. Results revealed a learning gain and a similar pattern of feature use across both books. Nevertheless, asking questions with the AI book was associated with higher retention and correlated positively with viewing visual representations more often. Students with a higher intrinsic motivation to know and to experience stimulation perceived book usability more favorably. Interviews revealed that posing and receiving suggested questions was helpful, while ideas for future development included more personalized feedback. Future research will explore how learning can benefit from the AI-enriched book.
     
  • Why Not Robot Teachers: Artificial Intelligence for Addressing Teacher Shortage; Bosede I. Edwards, Adrian Cheok; Applied Artificial Intelligence; Vol. 32, Issue 4; May 2018; pp. 345-360.
    Abstract by the authors: Global teacher shortage is a serious concern with grave implications for the future of education. This calls for novel ways of addressing teacher roles. The economic benefits of tireless labor inspire the need for teachers who are unlimited by natural human demands, highlighting consideration of the affordances of robotics and Artificial Intelligence in Education (AIED) as currently obtainable in other areas of human life. This, however, demands designing robotic personalities that can take on independent teacher roles, despite strong opinions that robots will not be able to fully replace humans in the classroom of the future. In this article, we argue for a future classroom with independent robot teachers, highlighting the minimum capabilities required of such personalities in terms of personality, instructional delivery, social interaction, and affect. We describe our project on the design of a robot teacher based on these capabilities. Possible directions for future system development and studies are highlighted.
     
  • Critical Imaginaries and Reflections on Artificial Intelligence and Robots in Postdigital K-12 Education; Stefan Hrastinski, Anders D. Olofsson, Charlotte Arkenback et al.; Postdigital Science and Education; Vol. 1; May 2019; pp. 427–445.
    Abstract by the authors: It is commonly suggested that emerging technologies will revolutionize education. In this paper, two such emerging technologies, artificial intelligence (AI) and educational robots (ER), are in focus. The aim of the paper is to explore how teachers, researchers and pedagogical developers critically imagine and reflect upon how AI and robots could be used in education. The empirical data were collected from discussion groups that were part of a symposium. For both AI and ERs, participants outlined the need for more knowledge about these technologies, how they could preferably be used, and how their emergence might affect the role of the teacher and the relationship between teachers and students. Many participants saw more potential to use AI for individualization as compared with ERs. However, there were also more concerns, such as ethical issues and economic interests, when discussing AI. While the researchers/developers to a greater extent imagined ideal future technology-rich educational practices, the practitioners were more focused on imaginaries grounded in current practice.

Artificial intelligence in culture

  • The Development and Design of Artificial Intelligence in Cultural and Creative Products; Xue Li, Baifeng Lin; Mathematical Problems in Engineering; April 2021; pp. 1-10.
    Abstract by the authors: The rapid development of the global cultural and creative industry has provided a new stage for the development and innovation of Chinese traditional culture. Cultural creativity has broken the rigid design and production mode of traditional products from the perspective of the market and has become the key to improving the economic benefits and competitiveness of traditional products. From the perspective of cultural and creative product design and product development, artificial intelligence technology has been fully utilized at the present stage. The purpose of this article is to compare the traditional design patterns used in product design and to understand the new design patterns assisted by artificial intelligence, so as to achieve process simplification and design innovation more quickly. In this article, traditional graphic patterns and the local cultural connotations of the experimental area are taken as the main research points. Through a large number of field investigations and first-hand photo materials, the regional cultural characteristics and local traditional graphic language are analyzed in detail and then summarized. Finally, the examples of development research are summarized and reflected upon. It is hoped that, through further excavation of the region's traditional patterns and exploration of regional culture regeneration, the development of the local economy, culture, and tourism can be driven, and a cultural brand that can reach beyond Anhui province to the whole country can be built. At the present stage of product research and development, the key is to introduce artificial intelligence as necessary support and to integrate artificial intelligence measures into each link of new product research and development.
     
  • Artificial intelligence, culture and education; Sergey B. Kulikov, Anastasiya V. Shirokova; AI & Society; Vol. 36; July 2020; pp. 305-318.
    Abstract by the authors: Sequential transformative design of research (Hanson et al. in J Couns Psychol 52(2):224–235, 2015; Groleau et al. in J Mental Health 16(6):731–741, 2007; Robson and McCartan in Real world research: a resource for users of social research methods in applied settings, Wiley, Chichester, 2016) allows testing a group of theoretical assumptions about the connections of artificial intelligence with culture and education. In the course of research, semiotics ensures the description of self-organizing systems of cultural signs and symbols in terms of artificial intelligence as a special set of algorithms. This approach helps to consider the arguments proposed by Searle (Behav Brain Sci 3(3):417–457, 1980) against ‘strong’ artificial intelligence. Searle believes that artificial or machine intelligence cannot fully emulate the processes of the human mind. Machine intelligence shows its own inevitable weakness: it is a non-autonomous tool for computation and data operations. In fact, this tool cannot provide insight into real cognitive conditions. Following Lotman and Uspensky (On the semiotics mechanism of culture, Alexandra, Tallinn, 1993), the authors expand the meaning of artificial intelligence. The authors identify a cultural type of ‘strong’ artificial intelligence, or ‘self-increase of Logos’ in the terms of Lotman and Uspensky. The interpretation of human intelligence as an imitation of machine intelligence makes such an immersion of artificial intelligence in culture possible. The authors reveal a case of self-organizing autonomous generation, encoding, decoding, reception, storage, and transmission of social information in the field of physical training. The empirical studies make clear that the organization of collective activities without external control ensures the development of positive emotions and social orientations. Interest in autonomous behavior provides the formation of educational and cognitive motives. As a special set of algorithms, these motives are the most promising and favorable for personal development.
     
  • The next wave of digital technological change and the cultural industries; Christian Peukert; Journal of Cultural Economics; Vol. 53; November 2018; pp. 189-210.
    Abstract by the authors: In this proposal of a research agenda for cultural economics, I discuss the supply-side economics of the next wave of digital technological change. I begin by arguing that digitization and internet-enabled platforms, together with automated licensing of user-generated content, have substantially lowered the costs of individual-level cultural participation. I discuss how the dependence on advertising revenues may affect this dynamic and highlight some implications for the economics of copyright. Next, I discuss circumstances under which market data, which have become much less expensive to collect at more fine-grained levels, can trigger differentiation of cultural products. Finally, I speculate about the economic implications of artificial intelligence that complements, or perhaps substitutes for, human creativity with regard to cultural participation, copyright and the industrial organization of culture.
     
  • Daily briefing: Artificial intelligence is cracking long-standing puzzles in art history; Flora Graham; Nature; June 2019.
    Briefing: Machine learning is helping experts to figure out who painted what, a modified PET scanner can produce 3D images of the whole body in seconds, and the briefing also covers the world’s most powerful superconducting magnet.
     
  • Will Avatars Kill The Radio Stars?; Tatiana Cirisano; Billboard; June 2021; Vol. 133, Issue 9; pp. 19-20.
    Abstract: Two years ago, Anthony Martini’s teenage daughter showed him an Instagram profile for a green-haired rapper with tattooed arms, hypebeast style and glitched-out trap tracks getting hundreds of thousands of plays on SoundCloud. Martini — a former artist manager who helped develop artists like Lil Dicky and MadeinTYO, and in March became CEO of Royalty Exchange — signed the rapper, who now goes by FN Meka, to his record label, Factory New. Since then, FN Meka has released three official singles and acquired 9.7 million TikTok followers. It’s a typical story about online artist development, but with a twist: FN Meka isn’t real, and Factory New isn’t for human artists.

Artificial intelligence in the audiovisual sector

  • Netflix Killed the Cable TV Star: Cable TV Is Definitionally Disadvantaged for Use of Artificial Intelligence; Casey Patchunka; Federal Communications Law Journal; Vol. 71, Issue 2; May 2019; pp. 275-298.
    From the introduction: While your TV is unlikely to know the reasons why you turn your TV on at a certain time, the cable TV industry has considered time-shifting as an option for TV consumers based on the massive amount of data each household produces daily, which can be processed into a form of intelligent information. Artificial Intelligence ("AI") is likely to become increasingly present in the entertainment and technology industries. It is worthwhile for the cable TV industry to begin investing in and expanding the use of AI in the face of decreased advertising revenue and increased costs passed on to consumers, especially due to competitors such as Netflix or Hulu. The use of AI requires the use of personally identifiable information ("PII"), which is regulated more strictly for cable TV as compared to its streaming-based competitors, which are regulated under the Video Privacy Protection Act ("VPPA"). This disparity poses a threat to the quality and cost of cable TV service, and thus, ultimately, the survival of cable TV in the future of the entertainment industry.
     
  • Intelligence Is beyond Learning: A Context-Aware Artificial Intelligent System for Video Understanding; Ahmed Ghozia, Gamal Attiya, Emad Adly and Nawal El-Fishawy; Computational Intelligence & Neuroscience; December 2020; pp. 1–15.
    Abstract by the authors: Understanding video files is a challenging task. While current video understanding techniques rely on deep learning, the obtained results suffer from a lack of real trustful meaning. Deep learning recognizes patterns from big data, leading to deep feature abstraction, not deep understanding. Deep learning tries to understand multimedia production by analyzing its content. We cannot understand the semantics of a multimedia file by analyzing its content only. Events occurring in a scene earn their meanings from the context containing them. A screaming kid could be scared of a threat or surprised by a lovely gift or just playing in the backyard. Artificial intelligence is a heterogeneous process that goes beyond learning. In this article, we discuss the heterogeneity of AI as a process that includes innate knowledge, approximations, and context awareness. We present a context-aware video understanding technique that makes the machine intelligent enough to understand the message behind the video stream. The main purpose is to understand the video stream by extracting real meaningful concepts, emotions, temporal data, and spatial data from the video context. The diffusion of heterogeneous data patterns from the video context leads to accurate decision-making about the video message and outperforms systems that rely on deep learning. Objective and subjective comparisons prove the accuracy of the concepts extracted by the proposed context-aware technique in comparison with current deep learning video understanding techniques. Both systems are compared in terms of retrieval time, computing time, data size consumption, and complexity analysis. Comparisons show significantly more efficient resource usage by the proposed context-aware system, which makes it a suitable solution for real-time scenarios. Moreover, we discuss the pros and cons of deep learning architectures.
     
  • Hey Siri, tell me a story: Digital storytelling and AI authorship; Sarah Thorne; Convergence: The International Journal of Research into New Media Technologies; April 2020; Vol. 26, Issue 4; pp. 808-823.
    From the abstract: Surveying narrative applications of artificial intelligence in film, games and interactive fiction, this article imagines the future of artificial intelligence (AI) authorship and explores trends that seek to replace human authors with algorithmically generated narrative. While experimental works that draw on text generation and natural language processing have a rich history, this article focuses on commercial applications of AI narrative and looks to future applications of this technology. Video games have incorporated AI and procedural generation for many years, but more recently, new applications of this technology have emerged in other media. Director Oscar Sharp and artist Ross Goodwin, for example, generated significant media buzz with two short films they produced that were written by their AI screenwriter. It’s No Game (2017), in particular, offers an apt commentary on the possibility of replacing striking screenwriters with AI authors. Increasingly, AI agents and virtual assistants like Siri, Cortana, Alexa and Google Assistant are incorporated into our daily lives. As concerns about their eavesdropping circulate in news media, it is clear that these companions are learning a lot about us, which raises concerns about how our data might be employed in the future. This article explores current applications of AI for storytelling and future directions of this technology to offer insight into issues that have arisen and will continue to arise as AI storytelling advances.

  • A Review of Audio-Visual Fusion with Machine Learning; Xiaoyu Song, Hong Chen, Qing Wang et al.; Journal of Physics: Conference Series; Vol. 1237, Issue 2; 2019.
    Abstract: For the study of single-modal recognition, research on speech signals, ECG signals, facial expressions, body postures and other physiological signals has made some progress. However, the diversity of human brain information sources and the uncertainty of single-modal recognition mean that the accuracy of single-modal recognition is not high. Therefore, building a multimodal recognition framework that combines multiple modalities has become an effective means of improving performance. With the rise of multi-modal machine learning, multi-modal information fusion has become a research hotspot, and audio-visual fusion is the most widely used direction. The audio-visual fusion method has been successfully applied to various problems, such as emotion recognition, multimedia event detection, and biometric and speech recognition applications. This paper first briefly introduces multimodal machine learning, then summarizes the development and current state of audio-visual fusion technology in major areas, and finally puts forward prospects for the future.
     

  • AI in Video Analysis, Production and Streaming Delivery; Journal of Physics: Conference Series; Vol. 1712; 2020.
    Abstract: Video technologies evolve steadily with the evolution of machine learning and artificial intelligence, which use cloud platforms and video transcoding for better video production, delivery and live streaming. AI has a profound effect on the media and film industry, from content delivery to the viewer's experience. AI provides richer and more realistic experiences by personalizing the user experience in the video production and analysis process. AI changes the face of manual tasks and facilitates deep content indexing. Quality assessment becomes easier when AI scrutinizes the content. Personal and interactive video provides new, delightful viewing experiences. AI generates a new level of interaction at the scene level by dichotomizing videos and builds more practical access methods within the content.

Further sources

If you are unable to access the article you need, please contact us and we will get it for you as soon as possible.

  • Library Catalogue
  • Journals on all devices
  • Books, articles, EPRS publications & more
  • Newspapers on all devices