In Conversation with Growth

Holly Adams


A note on timing: The specific technologies in the public eye at the time of publication will likely differ from those discussed here, but I expect that the central concept—how we communicate about technologies that are rapidly changing—will continue to be relevant.

Generative artificial intelligence (GenAI) has recently been spotlighted as a point of both innovation and concern for many educators and institutions of higher education. Though generative AI is not the only type of artificial intelligence applied within educational spaces, the accelerated growth of these tools demands our attention. These systems will continue growing in popularity and influence, and because of their immediate impacts on the experiences of teaching and learning, educators are entangled in this process. As these tools proliferate—and the sizes and capabilities of the models grow at an exponential rate—conversations about AI are increasingly necessary.

How do we (as educators, learners, scholars, and humans) promote dialogue around tools that are rapidly changing?

GenAI and Creative Practices

With innovations in GenAI systems, the uses of artificial intelligence for end-users with no programming experience have moved from predictive text, grammar correction software, and recommendation systems to the generation of entirely unique text, images, audio, and even video. Recent GenAI developments offer new tools and opportunities for creative practitioners, supporting individual artists, teams, and organizations in realizing their creative visions beyond what was previously assumed to be possible. This potential is being recognized globally, and AI is increasingly being integrated into pipelines for design studios and various creative industries (Chatzitofis et al., 2023; Ladicky & Jeong, 2023; Suessmuth et al., 2023; Zhang et al., 2023).

Educators within the context of arts, design, and architecture institutions are caught in the crossfire of competing motivations and considerations:

Artists and designers are often among the first to test new technologies, push them to their limits, and explore the social and artistic ramifications of these tools. As creative practitioners constantly explore their contexts through various mediums, art and design students are poised to create entirely new artistic methodologies with access to such powerful generative systems. AI tools integrated into the creative processes of talented artists and designers have the potential to shorten the amount of time that any one person spends on labor intensive tasks, while also opening access to creative forms that have typically been exclusive to those with expert-level programming skills. (Cremer et al., 2023)

However, the ability to generate novel text, images, audio, and video—all of which are imbued with their own aesthetics, datasets, contexts, constitutions, and biases—disrupts traditional notions of originality and authorship in the classroom. How do we, as art and design educators, balance our excitement for new tools and creative methods with valid and immediate concerns about shifting expectations around authorship and originality, as well as the value of study, research, and creative struggle for learners?

Immediate Action

The real, felt impacts of generative AI on the classroom space necessitate some form of response to the immediate challenges. Finding solutions for shortcomings within a technological system may be necessary to support the critical needs of a community. For example, when the COVID-19 pandemic began, instructors needed to respond quickly and with agility to provide accessible online materials for newly remote students. Instructors across the country and the globe were thrown into Emergency Remote Teaching (ERT): a necessary response to vicissitudes in the learning environment, “a temporary shift of instructional delivery to an alternate mode due to crisis circumstances” (Hodges et al., 2020, as cited in Erlam et al., 2021, p. 2). While ERT was essential in getting learning materials online quickly, it differs greatly from effective and quality online teaching (Hodges et al., 2020). This experience can provide a perspective on responding to other forms of change within learning environments, including the rapid progression of GenAI. 

The Rate of Growth

While immediate action through policies, assignment revisions, and shifting classroom dynamics may be necessary as a quick response to changing circumstances, it is not a substitute for lasting systemic change. Conversations around artificial intelligence systems and other emergent technologies should not depend solely on the current state of advancements to justify appropriate responses. The range and quality of tasks that can be completed with GenAI systems is expanding rapidly, and evaluating these systems only in relation to their current deficits and faults leads to strategies that must be continually reevaluated.1

1As an example, in the early days and weeks after the release of ChatGPT, one suggestion for instructors assigning written assessments was to incorporate questions referring to materials produced after the end date of the model’s training data (at the time, the underlying GPT-3.5 model was trained on a set of information which concluded in September 2021); as of October 2023, OpenAI has re-released Browsing for Plus and Enterprise users of ChatGPT, allowing for access to the open web.

When considering potential strategies for discussing rapidly changing technologies, pedagogical lenses from tech-related disciplines provide valuable and applicable perspectives. Drawing upon studies from the fields of computer graphics and animation, there is evidence that by learning high level theoretical concepts alongside introductory technical concepts, students are poised to adapt their learning skills to new applications as technology continues to change. This counteracts the “trap of only teaching lower level learning to ‘keep-up’ with new technology advancements in upper level courses,” and instead allows for a greater degree of flexibility in a changing technological landscape (Whittington & Nankivell, 2006).

These principles are applicable not just in formal lessons, but also in more informal dialogue around emerging technology. Integrating this framework of agility into our discussions about artificial intelligence, rather than engaging in reactive decision- and policy-making, can spare us from constantly re-evaluating our standards. Reflected in the classroom, the value of adaptable learning—within the context of technology or otherwise—cannot be easily overstated; to equip learners with sustainable conceptual skills that can adapt to changing circumstances is a worthwhile goal.

Sense-Making in Synthetic Realities

As developments in GenAI continue to accelerate, the field does not progress in a vacuum. GenAI, brain-computer interfaces (BCIs), quantum computing, robotics, and innovative uses of haptic feedback are all being transformed simultaneously; alongside other emerging technologies, these innovations contribute to new synthetic realities.2

2Generative AI in particular presents an additional level of mediation between ourselves and formal reality, which offers radically new opportunities for creative exploration while also suggesting an isolation between the self and the surrounding realities (Cole & Grierson, 2023, p. 10).

Synthetic realities are “[virtual environments] which become experienced comprehensively as new versions of reality” and, as technology advances, are treated as real (Wolcott, 2017). These synthetic realities are vastly improved by advancements in AI, and recent definitions of synthetic reality make this linkage explicit: they are the realities AI enables us to create (White, 2023).

Rather than asking whether or not these realities are real, the Center for Humane Technology suggests we ask whether people will change things about their lives for these new realities (2023). The answer is already yes: people have lost employment, given away large sums of money, and seen the outcomes of political elections swayed because of synthetic realities (Brodkin, 2022; Conradi, 2023; Evans & Novak, 2023; “When Love Is a Lie,” 2023). In more subtle ways, even simple text-based GenAI products have the potential to influence the user’s understanding of the prompt subjects.

In “The Normative Power of Artificial Intelligence,” Giovanni De Gregorio (2023) writes:

Algorithmic technologies are not only instruments to exercise powers, which can interfere with fundamental rights, but can also be considered as rule-makers. Rather than mere executing tools based on pre-settled instructions and standards, machine learning and deep learning systems learn how to perform their task and adapt it through experience. In this case, these systems exercise normative powers. (p. 3)

Norms produced through the laws of nature, such as gravity, are predictable to some extent and can be understood; in the field of AI, norms are created without the possibility of entirely predicting or understanding the forces which create them (De Gregorio, 2023, p. 11). Evidence has begun to suggest that large language models have the potential to develop, and perhaps even trend toward, practices which obscure levels of encoded reasoning from human readers (Roger & Greenblatt, 2023). In Duty Free Art: Art in the Age of Planetary Civil War, Hito Steyerl (2017) remarks upon the prevalence of unpredictable and unidentifiable forces:

Not seeing anything intelligible is the new normal. Information is passed on as a set of signals that cannot be picked up by human senses. Contemporary perception is mechanic to a large degree. The spectrum of human vision only covers a tiny part of it. Electric charges, radio waves, light pulses encoded by machines for machines are zipping by at slightly subluminal speed. Seeing is superseded by calculating probabilities. Vision loses importance and is replaced by filtering, decrypting, and pattern recognition. (p. 5)

New skills are needed, new methods of sense-making. Because of the pervasive nature of these technologies, the development of these new skills and methods cannot be left only to instructors in the fields of computer science and programming. While technical knowledge—becoming proficient at utilizing specific tools and processes and understanding when each of these tools and/or processes should be applied—is absolutely essential in any discipline, the nature of artificial intelligence necessitates a new framework that includes responsible tech and information literacy practices at every level of study. In “Authentic Integration of Ethics and AI Through Sociotechnical, Problem-Based Learning,” a study on integrating problem-based learning and responsible technoethics into pedagogical approaches, Krakowski et al. (2022) assert that “an approach that integrates AI technical and ethical domains should not be limited to those already advanced along academic and career pathways to AI,” and should instead be accessible to anyone who is incorporating artificial intelligence into their workflows in or out of the classroom (p. 12780).

At this pivotal moment, artificial intelligence systems are advancing exponentially, as is the number of people learning to integrate these technologies within their personal and professional work. It is then “critical that all learners—whether or not they aspire to pursue academic or workforce pathways in AI—develop a foundational understanding not only of how AI systems operate, but also of the principles that can guide the responsible development, implementation, and monitoring of those systems” (Krakowski et al., 2022, p. 12779).

Everyday AI has proliferated in our lives, and now the intentional adoption of AI tools is becoming widespread; how can we collectively work to ensure a synchronous growth of knowledge about responsible technology frameworks? It is vital that we name this need and enact it within educational spaces, especially as “Techno-Optimists” publicly decry “trust and safety,” “tech ethics,” and “risk management” as “The Enemy” (Andreessen, 2023). In “The Use of Artificial Intelligence in the Cultural and Creative Sectors,” an introductory briefing written for the European Parliament in 2020, Baptiste Caramiaux outlines the importance of cultural and academic institutions in sustaining public conversations around the uses and applications of AI; he describes a potential relationship between the public, cultural sectors, and arts innovators that can move the field of artificial intelligence forward with accessibility and cultural competency at the forefront.


How do we hold dialogue around tools that are rapidly changing? By striving to create lasting systemic change rather than focusing on temporary solutions, teaching ethical and responsible technology frameworks alongside technical lessons, and actively developing agile skills that support us through change. Multi-faceted dialogue around emerging technologies can support the responsible implementations of new tools, teach agility, and develop new means of sense-making.

Resources and Recommendations

  • Interdisciplinary, Pratt-specific conversations about AI are hosted frequently by Holly Adams through the Center for Teaching and Learning
  • The Teaching AI Ethics series by Leon Furze provides case studies, discussion questions, and resources for including AI ethics lessons in classes
  • Center for Humane Technology has a free, self-paced online course, Foundations of Humane Technology, geared towards technologists but beneficial to anyone interested in learning more about the ways in which persuasive technologies have impacted our lives and strategies to move forward as a collective
  • Practical Data Ethics, another free online course, provides context about data misuse, including topics such as algorithmic colonialism, our ecosystem, and privacy and security
  • Stanford University’s Institute for Human-Centered Artificial Intelligence is developing research centering the enhancement of human intelligence, and the respect of human vulnerabilities, with AI
  • The Algorithmic Justice League works to raise awareness about the impacts of artificial intelligence, advocating for responsible and equitable AI ecosystems


Andreessen, M. (2023, October 16). The techno-optimist manifesto. Andreessen Horowitz.

Brodkin, J. (2022, July 25). Google fires Blake Lemoine, the engineer who claimed AI chatbot is a person. Ars Technica.

Caramiaux, B. (2020). Research for CULT Committee—The use of artificial intelligence in the cultural and creative sectors. European Parliament Think Tank.

Center for Humane Technology. (2023). Synthetic humanity: AI & what’s at stake (63).

Chatzitofis, A., Albanis, G., Zioulis, N., & Thermos, S. (2023). Suit up: AI MoCap. ACM SIGGRAPH 2023 Real-Time Live!, 1–2.

Cole, A., & Grierson, M. (2023). Kiss/crash: Using diffusion models to explore real desire in the shadow of artificial representations. Proceedings of the ACM on Computer Graphics and Interactive Techniques, 6(2), 1–11.

Conradi, P. (2023, October 7). Was Slovakia election the first swung by deepfakes? The Sunday Times.

Cremer, D. D., Bianzino, N. M., & Falk, B. (2023, April 13). How generative AI could disrupt creative work. Harvard Business Review.

De Gregorio, G. (2023). The normative power of artificial intelligence (SSRN Scholarly Paper 4436287).

Erlam, G. D., Garrett, N., Gasteiger, N., Lau, K., Hoare, K., Agarwal, S., & Haxell, A. (2021). What really matters: Experiences of emergency remote teaching in university teaching and learning during the COVID-19 pandemic. Frontiers in Education, 6.

Evans, C., & Novak, A. (2023, July 19). Scammers use AI to mimic voices of loved ones in distress. CBS News.

Hodges, C., Moore, S., Lockee, B., Trust, T., & Bond, A. (2020, March 27). The difference between emergency remote teaching and online learning. EDUCAUSE Review.

Krakowski, A., Greenwald, E., Hurt, T., Nonnecke, B., & Cannady, M. (2022). Authentic integration of ethics and AI through sociotechnical, problem-based learning. Proceedings of the AAAI Conference on Artificial Intelligence, 36(11), Article 11.

Ladicky, L., & Jeong, S. (2023). Real-time collision using AI. ACM SIGGRAPH 2023 Real-Time Live!, 1–2.

Roger, F., & Greenblatt, R. (2023, October 27). Preventing language models from hiding their reasoning. arXiv.

Steyerl, H. (2017). A sea of data: Apophenia and pattern (mis)recognition. In Duty free art: Art in the age of planetary civil war. Verso.

Suessmuth, J., Fick, F., & Van Der Vossen, S. (2023). Generative AI for concept creation in footwear design. ACM SIGGRAPH 2023 Talks, 1–2.

When love is a lie: The rise of AI-powered romance scams. (2023, April 25). Cybercrime Support Network.

White, M. (2023, May 1). Synthetic reality: AI and the metaverse. Medium.

Whittington, J., & Nankivell, K. J. (2006). Teaching strategies and assessment measures for rapidly changing technology programs. ACM SIGGRAPH 2006 Educators Program, 45-es.

Wolcott, R. C. (2017, August 18). Beyond virtual reality: Synthetic reality and our co-created futures. Forbes.

Zhang, Z., Fort, J. M., & Giménez Mateu, L. (2023). Exploring the potential of artificial intelligence as a tool for architectural design: A perception study using Gaudí’s works. Buildings, 13(7), Article 7.