The new EP Library Catalogue allows you to search the EP collection for:
- Journals, books and articles in paper or electronic format
- EPRS and Policy Department publications
Alexa, What classes do I have today? The use of Artificial Intelligence via Smart Speakers in Education; Camelia Şerban, Ioana-Alexandra Todericiu; Procedia Computer Science; Vol. 176; October 2020; pp. 2849-2857.
From the abstract: As the world changes, so does the future of our students. In this respect, technology is evolving to provide environments designed for educational purposes. Building smart learning environments supported by e-learning platforms is an important area of research in education today. The development of these smart learning environments has been accelerated by events such as Covid-19 that force students to learn remotely. The paper proposes a software application component using the Alexa smart speaker that integrates different services (Amazon Web Services, Microsoft Services) into a suitable virtual environment platform for both students and teachers. It addresses the main concerns of the current educational system and provides a smart solution through the use of Artificial Intelligence based tools. The proposed approach not only unifies data and knowledge-sharing mechanisms in a remote mode, but also delivers a good learning experience, increasing the effectiveness and efficiency of the learning process.
Hey Siri, tell me a story: Digital storytelling and AI authorship; Sarah Thorne; Convergence: The International Journal of Research into New Media Technologies; April 2020; Vol. 26, Issue 4; pp. 808-823.
From the abstract: Surveying narrative applications of artificial intelligence in film, games and interactive fiction, this article imagines the future of artificial intelligence (AI) authorship and explores trends that seek to replace human authors with algorithmically generated narrative. While experimental works that draw on text generation and natural language processing have a rich history, this article focuses on commercial applications of AI narrative and looks to future applications of this technology. Video games have incorporated AI and procedural generation for many years, but more recently, new applications of this technology have emerged in other media. Director Oscar Sharp and artist Ross Goodwin, for example, generated significant media buzz about two short films that they produced which were written by their AI screenwriter. It’s No Game (2017), in particular, offers an apt commentary on the possibility of replacing striking screenwriters with AI authors. Increasingly, AI agents and virtual assistants like Siri, Cortana, Alexa and Google Assistant are incorporated into our daily lives. As concerns about their eavesdropping circulate in news media, it is clear that these companions are learning a lot about us, which raises concerns about how our data might be employed in the future. This article explores current applications of AI for storytelling and future directions of this technology to offer insight into issues that have and will continue to arise as AI storytelling advances.
A Review of Audio-Visual Fusion with Machine Learning; Xiaoyu Song, Hong Chen, Qing Wang et al.; Journal of Physics: Conference Series; Vol. 1237, Issue 2; 2019.
Abstract: Research on single-modal recognition, for example of speech signals, ECG signals, facial expressions, body postures and other physiological signals, has made some progress. However, the diversity of human brain information sources and the uncertainty of single-modal recognition mean that its accuracy is not high. Building a multimodal recognition framework that combines several modalities has therefore become an effective means of improving performance. With the rise of multi-modal machine learning, multi-modal information fusion has become a research hotspot, and audio-visual fusion is its most widely used direction. Audio-visual fusion methods have been successfully applied to various problems, such as emotion recognition, multimedia event detection, and biometric and speech recognition applications. This paper first briefly introduces multimodal machine learning, then summarizes the development and current state of audio-visual fusion technology in some major areas, and finally puts forward prospects for the future.
AI in Video Analysis, Production and Streaming Delivery; A. Jayanthiladevi, Arun Gnana Raj, R Narmadha et al.; Journal of Physics: Conference Series; Vol. 1712; 2020.
Abstract: Video technologies evolve steadily with advances in machine learning and artificial intelligence, which use cloud platforms and video transcoding for better video production, delivery and live streaming. AI has a profound effect on the media and film industry, from content delivery to the viewer's experience. AI enables richer and more realistic experiences by personalizing the user experience in video production and analysis. AI reduces manual tasks and facilitates deep content indexing. Quality assessment becomes easier when AI scrutinizes the content. Personal and interactive video provides new, delightful viewing experiences. AI generates new levels of interaction at scene level by dichotomizing the videos and builds more practical access methods within the content.
If you are unable to access the article you need, please contact us and we will get it for you as soon as possible.