We would have liked, and we still hope, to be able to carry out this project in French as well. There are two reasons why we could not launch it simultaneously in both of Canada's official languages. First, current annotated databases do not contain enough French words to generate images with AI. Even in English the results are imperfect, and include gender bias as well as cultural, political and other biases. With the Runway ML platform, you can enter French words, but the output will often be an entirely black image. Consequently, and this is the second reason, running this type of project in French would require resources we do not currently have. Our organization operates with very little funding and occasional volunteer support. Thank you for your understanding.

Feel free to leave us your email address in the section provided so that we can keep you informed of upcoming developments. Who knows, perhaps we will find financial support to carry out algorithmic art projects on AI ethics in French.

FAQ

  1. What is the goal of “The Pear, You and AI”?
  We believe art is a process. “The Pear, You and AI” is the first step in a larger project dedicated to “Algorithmic Art to Counter Gender Bias in AI”. This first phase involves creating a new and unique database designed to express alternative perspectives on and around women, as seen by individuals self-identifying as women. We understand that using the word “women” risks oversimplifying the complex question of identity by suggesting a binary option. On the contrary, we invite you to challenge normative notions by adding your own associations of words. Woman, or better yet, womxn, is what you want it to be. In the second step of this algorithmic art project, the words you’ve added to the PearAI.art app will be injected into different AI-driven text-to-image generation systems to examine, visually understand and challenge gender bias in the machine learning process. The images generated using biased datasets will be compared to those created using this newly co-curated dataset, through a series of artworks generated via machine-learning tools and integrated into a virtual installation. Here, we are painting with words and data; a minimal sketch of this idea follows below.
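
  To make the second step concrete, here is a minimal, hedged sketch in Python of how contributed words could be turned into a prompt for an off-the-shelf text-to-image model via the Hugging Face `diffusers` library. The model checkpoint and the `contributed_words` list are illustrative assumptions, not the project's actual pipeline.

```python
# A minimal, hypothetical sketch (not the project's actual pipeline):
# contributed words become a prompt for an off-the-shelf text-to-image model,
# whose output can later be compared with output from a model conditioned on
# the co-curated dataset.
import torch
from diffusers import StableDiffusionPipeline

# Model id and word list are illustrative assumptions.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

contributed_words = ["womxn", "strength", "imperfection"]  # hypothetical entries
prompt = ", ".join(contributed_words)

image = pipe(prompt).images[0]   # baseline (potentially biased) generation
image.save("baseline.png")       # to be compared against co-curated output
```

  Comparing the baseline output with output informed by the co-curated dataset is what would make the bias visible.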

  2. How is the data going to be used?
  The words you give us will be used creatively by artists in various ways. Valentine Goddard will first do a test run on the same machine-learning platform that originally delivered the image of “a pear” and create digital stamps. A paper will be published exploring best curatorial practices in AI ethics, focusing on this initial stage of the project. Once we reach a critical number of words, our AI research partners will be able to test their impact on the semantics of AI-generated images by generating images from texts produced by women, comparing the different architectures and suggesting mechanisms to intervene in AI-driven image generation. This creative process (both artistic and scientific), along with the resulting series of digital prints, will be shared on the Art+IA Platform.

  3. Why do you need nouns?
  This project is developed in English for technical reasons. However, nouns are a near-universal linguistic feature, recurring in almost all contemporary languages. By focusing on nouns and associations between nouns, this annotation process is adapted to the current limits of machine learning. But just how do machines understand language? Text-to-image systems take natural language text, and their algorithms are trained to interpret it into a visual format (e.g. pictures). Among the many steps involved in this AI process is text or image annotation, where a short description (annotation) is given to an image. Machines would encounter issues if trained on the subjective nature of opinions and order qualifiers in the English language. Among adjective types, “opinion” stands out as problematic for tying text to images. Consider the statements “A beautiful welded joint” and “A beautiful red bird”. Two people may have completely different opinions on the term “beautiful”, reflecting diversity of taste. This diversity would arguably make it extremely difficult for a machine to learn anything about adjectives. Nouns can be associated with an image more easily. For the purpose of this art project, we would like to start by creating new visual semantics around three selected words, and explore new viable solutions using Natural Language Processing and algorithmic art; the sketch below illustrates the adjective problem.
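
  As a hedged illustration of why opinion adjectives are hard to ground visually, one could probe an open text-image model such as CLIP (via the Hugging Face `transformers` library) and measure how much, or how little, the word “beautiful” shifts a phrase’s embedding compared with how cleanly the underlying noun phrases separate. The checkpoint and phrases below are assumptions for illustration only.

```python
# A minimal probe (not part of the PearAI.art pipeline): compare how adding
# the opinion adjective "beautiful" shifts CLIP text embeddings, versus how
# distinct the underlying noun phrases are from each other.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

phrases = ["a welded joint", "a beautiful welded joint",
           "a red bird", "a beautiful red bird"]
inputs = processor(text=phrases, return_tensors="pt", padding=True)
with torch.no_grad():
    emb = model.get_text_features(**inputs)
emb = emb / emb.norm(dim=-1, keepdim=True)  # unit-normalise for cosine similarity

print("joint vs beautiful joint:", (emb[0] @ emb[1]).item())
print("bird  vs beautiful bird: ", (emb[2] @ emb[3]).item())
print("joint vs bird:           ", (emb[0] @ emb[2]).item())
```

  If the adjective barely moves each phrase’s embedding while the two noun phrases remain clearly distinct, that supports starting the annotation process from nouns.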

  4. Why are you giving cues with the blurbs under the words? Doesn’t that just bias the data you’re collecting?
  Our work starts from acknowledging existing biases in datasets. While there are many reasons for such biases, we focus on two of them: historical human biases and incomplete or unrepresentative data. To ignore these biases would be a form of denial. Instead, we decided to expose the problem of flawed data and insert cues in our questions, embracing and yet complexifying an existing terminology for women, femininity, womanhood and other questions around gender. These cues serve as provocations, inputs, prompts that foster a deeper reflection on which nouns we choose, and how, and why. PearAI.art is not just an AI project; it is an art project with a very human touch. We call it critical, and constructive, inter-arts curation, and it is part of a larger initiative aimed at digital literacy and inclusion.

  5. What’s an inter-arts curator?
  For the Canada Council for the Arts, inter-arts work involves the exploration or integration of multiple traditional and/or contemporary arts disciplines, which are merged or mixed in such a way that no single artistic discipline dominates the final outcome. Transdisciplinary methods cross the arts with other non-arts disciplines to explore a theme or issue, such as the social, legal, economic, political and ethical implications of AI, including human rights. Some experts refer to this role as a cultural agent of social change.

  6. By asking only humans who identify as women or womxn, or gender-fluid individuals, to annotate, aren’t we excluding men, or people who identify as men?
  In a way, yes, we are. Most databases currently deployed in machine learning are the result of a white-male-dominated tech culture. As argued by Catherine D’Ignazio and Lauren F. Klein in their work Data Feminism: "Today, data science is a form of power. It has been used to expose injustice, improve health outcomes, and topple governments. But it has also been used to discriminate, police, and surveil. This potential for good, on the one hand, and harm, on the other, makes it essential to ask: Data science by whom? Data science for whom? Data science with whose interests in mind? The narratives around big data and data science are overwhelmingly white, male, and techno-heroic." Most databases still contain sexually loaded, derogatory, discriminatory annotations, and are often used to wrongfully describe to AI what terms like “woman” mean. In turn, machine-learning algorithms respond by internalising and reiterating biases against women and other minorities. Fed with distorted representations of femininity, AI suffers from a severe gender crisis that impacts all of us, a consequence of algorithms developed by a small percentage of human actors. We ask all self-identifying women to help us change this.

  7. How can algorithmic art be used to counter gender bias in AI and help shape our digital futures (economy and democracy)?
  Ethical guidelines on the development and governance of artificial intelligence (AI) require accountability, fairness and transparency. With the PearAI.art project, AI Impact Alliance invites citizens and leaders to take a closer look at the latter, and to explore how the arts can be a tool that makes AI more transparent. Our premise is that transparency goes beyond the ability to explain algorithmic results. Automated decision-making systems can have dramatic consequences for individuals’ or communities’ well-being and safety, through surveillance mechanisms, biased algorithms and so on. Explainability is without a doubt indispensable, but it is not the only critical element of transparency in AI. AI will impact everyone, and therefore everyone needs to be part of ongoing and upcoming choices, especially society’s most vulnerable and underrepresented citizens. Coordinated and strategic efforts must be made to facilitate the understanding of AI’s ethical, social, legal, cultural, economic and political implications in order to reach inclusive and sustainable digital futures.

  8. What are the most effective means of informing and engaging citizens in digital governance, supporting an independent narrative on AI, improving data and AI literacy, and building trust while ensuring a democratic deployment of AI?
  We argue that the arts offer effective, creative and innovative channels that can help shape inclusive and diverse perspectives on AI. To that end, we introduced the Art+IA Platform. Artists can help increase the ability of citizens to make informed choices, and research has shown that in order to implement legitimate AI policies, we need 1) a large number of citizens, 2) a diversity of perspectives, and 3) an understanding of the implications of the science or technology involved. Recent policy recommendations from the United Nations underscore the important role of civil society and the arts in sustainable and ethical digital governance. Immersive art forms not only offer a safe space to explore critical perspectives on the ethical and social dimensions of new technologies; they have also proven to provide participants with a “capacity for intervention”, while increasing civic engagement on important social issues, such as the Sustainable Development Goals.

  9. Will this project lead to a data feminism/algorithmic feminism manifesto?
  We are designing residencies, workshops and conferences with deeper dives into these topics, expected to develop throughout fall 2021-2022. Here are some thoughts from our Director of Research and Innovation, Giulia Taurino, PhD: “In this project, we want to redefine conventional and culturally constructed concepts, like femininity or womanhood, in order to reshape cultural and algorithmic images traditionally associated with three terms: women, beauty, imperfection. If in this annotation process you find concepts that might seem normative, we invite you to use the fill-out space to add meanings and associations that are missing. The aim is to acknowledge the social - and now also technological - issues of wording and understanding concepts around a broader notion of womxn, rather than just erasing them or ignoring them. The terminology we adopted is meant to be used as a cue, and not as a suggestion or a forced path. Cue as retrieval cue, as a prompt that helps us activate a process for remembering, rewriting, reimagining cultural memories, traumas, kinships, disconnections around what women, beauty, imperfection mean.”

  10. Are there going to be calls for projects / residencies?
  We are most certainly considering it. Through workshops, residencies and calls for art projects, AI Impact Alliance will enable participating artists, curators and AI/data experts to deepen their research and creation focus, as well as their network of collaborators.

  11. When will the launch be held?
  We are currently fundraising for the next phases of this project. Please sign up to our mailing list to be invited to the launch or follow the evolution of this Algorithmic Art to Counter Gender Bias project. We’ll be thrilled to let you know all the good news as we move forward.

  12. Where can I find more information about AI Impact Alliance and this project?
  For more information, or if you have more questions, contact us at v.goddard@allianceimpact.org.

  13. How can I stay in touch and be notified of updates, news and events?
  Sign up to our mailing list. Thanks for asking!