The term AI, or artificial intelligence, has been in use since the 1950s, but only in the past 5 to 10 years have algorithms and computers become fast enough to enable the versatile generation of visual material. This is generative AI, which produces new content such as text, images, audio and even moving images, or video. In this context, an algorithm can be understood as a computer program or a command given to such a program.
Current AI models that produce visual material have been taught using a vast number of images. The LAION-5B dataset, for example, consists of almost 6 billion image-text pairs. Such a dataset, or a model taught with it, is at the core of many image-generating AI programs. Another AI that interprets images has reviewed the collected images, identifying and classifying the objects in them, such as cats, clouds, flowers and landscapes, but also describing features like the mood of an image or the style in which it was made. Current AI software can be used to create new images by entering keywords relating to the motif and the style, for example.
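The pairing of images and descriptions mentioned above can be illustrated with a toy sketch. In models such as CLIP, both images and texts are turned into lists of numbers (embeddings), and a matching image-text pair gets a high similarity score. The three-dimensional vectors below are invented purely for illustration; a real model learns embeddings with hundreds of dimensions from billions of image-text pairs.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two vectors: 1.0 means the same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up 3-dimensional "embeddings"; a real model's are learned from data.
image_embedding = [0.9, 0.1, 0.2]  # stands in for an image of a cat
text_embeddings = {
    "a photo of a cat":     [0.8, 0.2, 0.1],
    "a photo of a cloud":   [0.1, 0.9, 0.3],
    "a watercolour flower": [0.2, 0.3, 0.9],
}

# The caption with the highest similarity is the model's "interpretation".
best = max(text_embeddings,
           key=lambda t: cosine_similarity(image_embedding, text_embeddings[t]))
print(best)  # → a photo of a cat
```

The same scoring idea works in both directions: it lets a model label an existing image, and it lets an image generator check how well a generated image matches the text prompt.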
The use of such image collections raises questions about who and what is depicted in the images and where the images come from. The images have been collected from the internet, especially from various image services such as Flickr, ArtStation, DeviantArt, Instagram and Pinterest. The images uploaded to these services have been used as teaching material for new algorithms. It is possible that the datasets focus on Western imagery, culture, landscapes, people and city life. Imagery from other cultures, such as those of Africa or South America, may not be as available in these services. This affects what kind of visual material these AI algorithms generate.
Another problem that has come up is copyright. After learning a style from an artist's work published on the internet, an AI can produce images that look like that artist's. This can cause problems, especially for artists and illustrators who are still alive: if AI can generate images very similar to their work, it might endanger their livelihood.
Another potential problem is the generation of deepfake material. Based on previously published photos of a person, it is possible to generate new images that seem to depict, for example, a famous actor or politician. There is a lot of visual material of people who often appear in the media, and AI algorithms have probably already learned what these people look like. An AI can also be taught what a person sounds like.
We can see that the use of AI algorithms is moving in a more ethical direction. In some image sharing services (e.g. ArtStation), people can already forbid the use of their images for teaching an AI. People can also have their images removed from certain AI models. There is also an ongoing discussion about making the image of humanity in AI models more equal, so that different sexes, ages, ethnic backgrounds and other characteristics would be better represented.
The next development, of which early versions already exist, is the generation of video footage for the creative industry. Although it is already possible to create impressive images, photographs or even paintings, generating video is still demanding. Image-generating AI also has a lot of positive potential: a designer can, for example, generate multiple images in different styles and quickly create various versions as a basis for further work. It can also be fun to turn a picture of yourself into a comic, a watercolour painting or origami.
Currently, you can test making images with various internet services and applications. Some of the services are free of charge, at least to start with, and you do not necessarily have to use your own images. Paid versions usually produce better visual material and generate images faster.
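For anyone who wants to experiment programmatically rather than through a web service, the sketch below shows the basic idea behind the keyword prompts mentioned earlier: a motif and style keywords are combined into one text prompt and passed to a text-to-image model. The `build_prompt` helper is a hypothetical illustration, and the commented-out part assumes the Hugging Face `diffusers` library with publicly available Stable Diffusion weights; running it for real downloads several gigabytes of model data.

```python
def build_prompt(motif, style_keywords):
    """Combine a motif and style keywords into a single text prompt,
    in the comma-separated form common in image-generation services."""
    return ", ".join([motif] + list(style_keywords))

prompt = build_prompt("an old woman crying", ["portrait", "watercolour", "soft light"])
print(prompt)  # → an old woman crying, portrait, watercolour, soft light

# With the Hugging Face `diffusers` library, the prompt could then be fed
# to a Stable Diffusion pipeline, roughly like this (not run here, because
# it downloads several gigabytes of model weights):
#
#   from diffusers import StableDiffusionPipeline
#   pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
#   image = pipe(prompt).images[0]
#   image.save("generated.png")
```

Web services work on the same principle: the text prompt is the main input, which is why the exact choice and order of keywords has such a large effect on the result.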
It is worth bearing in mind that AI is already part of everyday life. For example, it helps us find the best route in a map application when we travel from point A to point B, and it suggests films or TV series on Netflix based on our previous choices. AI also helps voice recognition applications, such as Alexa, Cortana and Siri, understand what we are saying. One example of how AI-based image editing is becoming commonplace is the Photoshop image processing software, which already includes AI-based tools, known as neural filters, for editing images. The use of AI in applications is likely to keep increasing. AI poses many challenges but also opens up possibilities.
All in all, generative AI is an interesting and versatile technology that offers countless possibilities for generating images, audio and video. It is still important to understand that its use involves ethical and practical challenges, such as data protection, copyright and cultural diversity. We must learn to understand the potential and limitations of AI in order to use it responsibly and sustainably in the future.
Timeline for different AI models that create images (and video): https://www.fabianmosele.com/ai-timeline
Google Imagen Video: https://imagen.research.google/video/
CLIP: an AI that interprets images and creates a connection between descriptive text and the image: https://www.pinecone.io/learn/zero-shot-object-detection-clip/
Text: Tomi Knuutila
Image: Siru Tirronen
The media landscape of children and young people keeps changing, with new phenomena emerging one after another. It is important to provide pupils with tools for understanding and processing these phenomena. This learning package is part of the Pathways to New Media Phenomena – Information and Exercise Materials series. The series includes information and exercises for the teacher and the pupils. You can explore new phenomena in a meaningful way with the help of the method How to discuss new media literacy phenomena through pedagogical means.
CC BY 4.0
Material for the teacher
Creative AI can be used to generate many kinds of media content, such as text, images, audio and video. The content generated by an AI offers various possibilities but also involves many ethical questions and challenges. In the video, Tomi Knuutila considers the use of creative AI from various viewpoints. The video has English subtitles.
Watch the video and think about the following questions:
What should pupils know about creative AI and its use, and why?
In what ways could creative AI be covered in teaching?
The background of the video is made with an AI called Stable Diffusion.
Teachers do not need to be experts and skilled at everything. A good command of one's own speciality and pedagogy, together with curiosity and enthusiasm for learning new things, provides a great starting point for tackling new media phenomena. The approach to processing new media literacy phenomena encourages you to use your own expertise and competence when working with various phenomena.
Examine the model and consider the following questions:
- Based on your experience, what challenges does discussing new media literacy phenomena entail?
- What supports the discussion of new phenomena in your own work?
- How would you utilise the model to discuss the phenomenon at hand?
Media literacy is a transversal competence, whose promotion is required by the core curriculum of basic education (2014).
The objectives based on the core curriculum have been expressed separately for each school grade in the national descriptions of media literacy (the New Literacies development programme, 2021). For basic education, the descriptions cover good and advanced levels of competence. The descriptions clarify the meaning of media literacy and the related objectives expressed in the core curriculum texts. They are divided into three main areas: media interpretation and evaluation, media production, and acting in media environments. You can learn more about the descriptions here.
Consider the following questions:
- How is the phenomenon under discussion structured in the media literacy competence descriptions?
- What kind of media literacy skills do the pupils learn in connection with discussing the topic?
You can refer to the materials of KAVI and the New Literacies development programme for support in the promotion of media literacy.
Media Literacy School (mediataitokoulu.fi): The Media Literacy School website brings together learning resources and materials for the media education of different age groups, also in English.
The page Media Literacy School – New Literacies brings together a range of materials created within the development programme to support media education in basic education. The materials produced in the programme can be found on the open learning materials website at AOE.fi.
Material for the pupil
AI is used in many ways in everyday life (e.g. Google search or finding a route in map services). It can be used to generate new text, audio, images and even videos based on previously published material. AI models that produce visual material are taught using a vast number of images. The images have been collected from the internet, especially from image services such as Instagram and Pinterest, to which users have uploaded them. An AI can generate new images with the help of another AI that has been taught to identify what is depicted in an image (e.g. cats, clouds or flowers), as well as the styles, techniques and even moods of the images. An AI can also produce images based on an entered text description.
Images generated by an AI pose many challenges and ethical questions. The images in image banks focus on Western people and culture, and based on them, the AI generates very one-sided material. Images and audio edited with an AI also involve risks of abuse, such as making deepfake material (Explore the phenomenon: Deepfake). One problem that has arisen relates to copyright: an AI can adopt a style of visual expression from work published on the internet and generate similar images, which can endanger the livelihood of artists and illustrators.
AI also has positive uses in the creative industry: designers, illustrators and artists can use the images generated by an AI as examples and starting points for their own design work. It is also possible to generate 3D models and video footage with an AI, even though it is currently quite challenging.
Image generation can be tested using various internet services and applications. You do not have to upload your own images to them, and some of them are free of charge, at least at first. The current image generation algorithms may one day be included in image processing software, for example.
In the future, AI should be developed and used responsibly and sustainably. The use of AI algorithms is already moving in a more ethical direction. In some image sharing services, people can forbid the use of their images for teaching an AI. People can also have their images removed from certain AI models. There is also an ongoing discussion on how the image of humanity generated by AI models could be made more equal.
Text: Tomi Knuutila
1. Examine images.
a) The following images are generated with the AI software Stable Diffusion. They have been generated using the text “An old woman crying, portrait, photo, winner, stunning composition, beautiful light”. What kind of culture do the images represent? (Images: Tomi Knuutila)
b) Perform a Google image search with the phrase “old woman crying”. How do the images differ from the ones generated with the AI, and why? You can also use the following image collage from such a Google search.
c) Compare two images. One of the images is a photograph from the internet and the other is generated with the AI software Stable Diffusion and enlarged with the AI algorithm ESRGAN. Is it possible to say based on the image whether it has been generated with an AI or whether it is taken by a person? Based on what?
2. Create images. Come up with a text description that you would like to enter into an image-generating AI. Discuss what kinds of images might be generated based on the text. With your teacher, try entering the suggested texts into the AI at https://creator.nightcafe.studio/ or https://dream.ai/create. The texts have to be translated into English. Did the images meet your expectations? What would happen if you entered the texts again?
3. Come up with your own AI: Divide the students into small groups and ask them to come up with their own AI. What kinds of tasks could their AI perform? How would it work? This assignment might motivate the students to think about what kinds of problems an AI could solve.
Split into small groups and come up with your own AI. What kinds of tasks could your AI perform? How would it work? This assignment might motivate you to think about what kinds of problems an AI could solve.
One of the assignments was created by ChatGPT. Can you recognise which assignment is by an AI?
Text: Tomi Knuutila
1. What subjects have been included in the image generated by the AI? What style does the image imitate, and how can you tell? What verbal descriptions were entered into the AI?
2. Put yourself in the role of a modern artist or illustrator. Think about the benefits an AI could bring to your work, or the harm it could do.
3. An AI that generates visual material is taught with a vast number of images published on the internet, based on which it then generates new images. What do you think about the fact that images uploaded online can end up as training material for an AI? What if an AI used images you had uploaded?
Image: Siru Tirronen