Privacy Notice: Student names have been replaced with pseudonyms to protect privacy.

Students Discuss AI Will Enhance Autonomy

Audio narration is AI-generated from the text transcript and does not represent the students' actual voices.
Guide November 5, 2025, 5:23 AM

👋 Welcome, everyone!

Our topic is AI will enhance autonomy. We'll be discussing this statement:

Using Chatbots and other AI technology will enhance our autonomy rather than undermine it.

  • Jordan agrees with the statement.
  • Maya disagrees with the statement.

My name is Guide, and I'm here to support your discussion. If you need anything, start your message with “Hey Guide” 🤓

Jordan November 5, 2025, 5:32 AM
I believe chatbots and other AI technologies enhance rather than undermine our autonomy. They give us direct access to information and advice, allowing us to make decisions for ourselves without always relying on experts or institutions. AI tools also take over repetitive or routine tasks like scheduling or summarizing, freeing us to focus on what truly matters to us because these tools can adapt to our personal needs. For people facing language, learning, or accessibility barriers, they expand who can act independently. Of course, autonomy depends on how we use AI, not on avoiding it entirely. As long as humans keep control and use AI responsibly, it remains a powerful tool that broadens our choices, saves our time, and strengthens our ability to act freely.
Maya November 6, 2025, 4:18 AM
I understand your point, Jordan, but I see things a bit differently. While AI tools like chatbots do make information more accessible, I think this convenience can actually reduce our autonomy in the long run. When we rely on AI to summarize, decide, or even think for us, we gradually lose some of the critical thinking and decision-making skills that define true independence. These systems are also designed and controlled by companies or algorithms that we don't fully understand, meaning our "choices" can still be subtly shaped by data patterns and commercial motives. Instead of enhancing autonomy, AI may create a new kind of dependence, one where we act freely only within limits set by technology. To me, real autonomy means not just having more options, but understanding and controlling the systems that guide our decisions.
Jordan November 6, 2025, 4:23 AM
Thanks, Maya. I share the worry about skill atrophy and opaque control, but I think those are design and use problems, not inevitable outcomes. Autonomy isn't AI or me; it's AI with me. If we treat AI as decision support rather than a decision maker, by keeping a human in the loop, requiring source citations and model explainability, and practicing deliberate offloading (letting AI handle mechanical tasks while we retain judgment, as a calculator did for arithmetic), we actually protect and sharpen critical thinking. On the question of control, transparency, audit rights, on-device or open models when feasible, data minimization, and user-set objectives help mitigate the commercial steering you fear. The alternative isn't neutral: without AI, expertise and time costs gatekeep choices for many people. Used intentionally, AI expands access to knowledge, automates tedious tasks, and gives us more room to reason, not less. Dependence is real only if we surrender oversight, whereas autonomy grows when we set the rules and make the final call.
Guide November 6, 2025, 4:23 AM

Jordan, you've made a strong case for intentional design mitigating risks. But let's probe the "human in the loop" claim.

You say autonomy is preserved when "we set the rules and make the final call." But consider: if an AI summarizes a 50-page report, filters your news feed, or suggests which job candidates to interview, how autonomous is your "final call" when the AI has already shaped what you see and how you see it?

The framing, the options presented, the information deemed relevant—all of this happens before you exercise judgment. You're making the final call, yes, but within a choice architecture the AI constructed.

Is that meaningfully different from the "expertise and time costs" that you say gate-keep choices without AI? Or is it just a different gatekeeper—one that's less visible?

Jordan November 6, 2025, 4:26 AM
That's a fair challenge. I agree AI can shape the choice architecture, but I'd argue that awareness and transparency make a crucial difference. Traditional gatekeepers (experts, institutions, and media) also filter information, often with slower feedback and less personalization. With AI, the filtering process can at least be audited, adjusted, or diversified by the user. We can cross-check sources, retrain preferences, or use multiple systems to see different perspectives. Autonomy isn't about total independence from influence; that's impossible. It's about having agency to question and modify those influences. When we use AI critically and reflectively, we can actually make the invisible structures of influence more visible and thus easier to navigate. So AI doesn't have to replace human judgment; it can help us understand the frames that guide it.
Maya November 6, 2025, 4:40 AM
I see your point and I agree that transparency and user agency are valuable. But I still think your view assumes a level of awareness and control that most users simply don't have. Even if AI systems can be audited or adjusted, that doesn't mean people will do so effectively. Most users don't fully understand how algorithms prioritize or frame information, and even cross-checking sources often happens within the same AI-generated ecosystem. In that sense, AI might appear to offer choice, while still shaping the boundaries of what we think is worth questioning.
Maya November 6, 2025, 4:42 AM
True autonomy means not only being able to modify the influences around us but also being capable of detecting when those influences exist in the first place. If AI becomes the main filter through which we experience information, we risk normalizing its biases as part of our own reasoning. So while AI may increase efficiency or awareness for some, I believe it subtly shifts the center of control away from human judgment, and that's where our autonomy starts to erode.
Jordan November 6, 2025, 4:48 AM
You're absolutely right that most users don't currently have deep algorithmic literacy, and that's a serious concern. But I don't think that makes autonomy and AI incompatible. It just means digital literacy becomes part of autonomy itself in the modern world. We've faced similar transitions before. People once couldn't read, navigate the internet, or interpret media bias, yet education expanded those capacities over time. The solution isn't to reject AI, but to embed critical AI awareness (how systems rank, frame, and filter) into how we learn and interact. And while AI can subtly shape perception, it also gives us tools to expose bias more efficiently than ever. We can ask why it chose a source, compare models, or even demand open data options. So yes, AI can distort reasoning, but it can also train us to see those distortions. If we invest in transparency, education, and user control, autonomy doesn't erode; it evolves.
Guide November 6, 2025, 4:48 AM

Maya, Jordan just argued that digital literacy can evolve to meet the challenge—just as reading or media literacy did historically.

But here's the tension: reading and media literacy developed alongside those technologies over generations. AI is advancing far faster than public understanding.

Do you think there's something fundamentally different about the speed and opacity of AI that makes this comparison misleading? Or is Jordan right that education can catch up if we prioritize it?

Maya November 6, 2025, 5:00 AM
I agree that education has always adapted to new technologies, but I think AI poses a fundamentally different challenge. Reading and media literacy evolved with relatively transparent systems: people could see the text, analyze the source, and gradually develop critical habits. With AI, the underlying processes are often hidden, proprietary, and constantly changing faster than education systems can respond. Even if we invest in AI literacy, most users will still depend on platforms that control the data and algorithms behind the scenes. That makes true transparency, and therefore true understanding, very hard to achieve. So while I believe education can help people navigate AI more responsibly, I'm not sure it can keep pace with the scale and speed of AI's evolution. In that sense, our autonomy doesn't evolve with AI. It risks being redefined by it, in ways we might not even notice until it's too late.
Jordan November 6, 2025, 5:07 AM
That's a powerful point, Maya, and I agree that AI's opacity and speed create a new kind of challenge for autonomy, but I'd argue that redefinition doesn't necessarily mean erosion. Human autonomy has always evolved alongside technology, from print to the internet, and each shift changed what being informed meant. Yes, today's AI systems are more complex and less transparent, but that's also why collective oversight matters as much as individual literacy: regulations, open-source initiatives, and public auditing can balance the asymmetry you describe. We don't all need to read the code to benefit from accountable systems, just as we don't all build cars to drive them safely. The key is to ensure institutional transparency and user-level awareness grow together. In that sense, autonomy in the age of AI isn't about total understanding of algorithms. It's about preserving meaningful choice, accountability, and the ability to say no to the systems shaping us.
Maya November 6, 2025, 5:16 AM
I agree that institutional transparency and public oversight are essential, but I still wonder whether they can truly balance the power asymmetry you mentioned. Even with open-source initiatives or regulations, the reality is that most AI systems are still developed and controlled by a small number of corporations with massive data access and computational power. That concentration itself limits how collective our oversight can really be. Your analogy with cars is interesting, but I think it also reveals the problem. We don't build cars ourselves, yet we can still see how they function and assess their safety through shared, standardized knowledge. With AI, the mechanisms shaping outcomes are far less visible, and the standards are often defined by the same entities deploying the technology. So even if we can say no to certain systems, we might be doing so without fully understanding what we're rejecting or what alternatives exist. That uncertainty marks a genuine erosion of autonomy, even if it's subtle or well-intentioned.
Guide November 6, 2025, 5:16 AM

Jordan, Maya just challenged your car analogy—arguing that AI's invisibility and corporate concentration make meaningful oversight much harder than with cars.

Does your vision of "preserving meaningful choice" actually require something closer to what Maya wants: slowing down or limiting AI deployment until transparency and public control catch up? Or can autonomy genuinely coexist with systems we don't—and maybe can't—fully understand?

Jordan November 6, 2025, 5:20 AM
That's an important question, and I don't think preserving autonomy requires freezing or slowing AI development, but it does demand parallel growth in governance and accessibility. We've always lived with systems we don't fully understand (electric grids, financial markets, even medicine), and autonomy hasn't vanished because of that. What matters is whether users and societies retain levers of influence: the ability to contest, audit, and switch. Absolute transparency in code isn't realistic, but functional transparency (clear documentation of goals, data sources, and known biases) can still ground informed choice, and open models, public datasets, and civic oversight groups can gradually counterbalance corporate concentration. So yes, AI is more opaque than cars, but that doesn't make autonomy impossible. It just shifts it from individual technical mastery to collective accountability and adaptive trust. Autonomy can coexist with complex systems, as long as we design institutions that keep human values, not just algorithms, in the driver's seat.
Maya November 6, 2025, 5:22 AM
I agree that collective accountability matters. But I think AI is different because it doesn't just run systems, it shapes how we think and make choices in real time. That influence is harder to detect or separate from our own reasoning. Even with good oversight, AI often changes our norms before we fully understand its effects. So yes, autonomy can exist with complexity, but with AI, I worry we might not realize how much the system has already shaped the choices we believe we're making.

Understanding Quiz

Maya
Early in the discussion, Jordan compared AI handling routine tasks to a calculator for arithmetic. What was his main reason for this analogy?
  • It shows that both tools automate mechanical tasks while preserving human judgment
  • It suggests that both tools require advanced training to use effectively
  • It highlights that both tools are controlled by corporate interests
  • It implies that both tools inevitably lead to skill atrophy
When the Guide pointed out that AI shapes choice architecture before the user decides, how did Jordan defend AI's filtering compared to traditional gatekeepers?
  • By noting users could ignore AI suggestions to avoid bias
  • By arguing AI filtering can be audited, adjusted, and diversified, making influences more visible
  • By stating AI would only present curated expert opinions
  • By arguing that filtering is unnecessary if AI is well-designed
Maya argued that education can't keep pace with AI's speed and opacity. How did Jordan propose preserving autonomy without freezing AI development?
  • By requiring users to learn programming to inspect algorithms
  • By halting AI progress until international standards are in place
  • By growing governance, functional transparency, public audits, and civic oversight in parallel with AI advances
  • By limiting AI use to non-critical applications only
How did the Guide challenge Jordan's claim that autonomy is preserved as long as the human makes the final call?
  • The Guide argued that AI's proprietary nature removes the need for human oversight
  • The Guide pointed out that AI's pre-filtering and framing of options shapes decisions before human judgment is applied
  • The Guide claimed humans cannot be trusted to remain objective when using AI
  • The Guide insisted that AI should handle all decision-making to reduce errors
Towards the end of the discussion, how did Jordan refine his view of how autonomy can coexist with opaque AI systems?
  • Autonomy depends only on individual mastery of AI algorithms
  • Autonomy requires pausing AI development until public literacy catches up
  • Autonomy lies in collective accountability, functional transparency, and adaptive trust
  • Autonomy demands accepting trade-offs in exchange for AI efficiency
Jordan
At the start of the discussion, how did Maya support her claim that AI convenience undermines our autonomy?
  • By saying reliance on AI leads to loss of critical thinking and that corporate control shapes choices
  • By emphasizing that AI enhances decision making by providing too many options
  • By arguing that AI's summarization lacks nuance, leading to poor understanding
  • By suggesting that AI filters out irrelevant information, reducing decision fatigue
When you argued that design improvements and keeping humans in the loop could mitigate AI risks, how did Maya counter this point?
  • By rejecting transparency as irrelevant to user experience
  • By arguing that average users lack the awareness and control to effectively audit AI systems
  • By insisting that open-source AI models undermine proprietary data incentives
  • By focusing on regulatory gaps rather than user literacy
Maya later argues that true autonomy requires not only being able to modify influences but also which of the following?
  • Trusting AI-generated ecosystems more than institutions
  • Relying solely on institutional oversight for checks
  • Being capable of detecting when those influences exist in the first place
  • Collaborating directly with AI developers to shape algorithms
Why did Maya argue that the analogy between AI literacy and reading/media literacy is misleading?
  • Because AI processes are often opaque, proprietary, and evolve faster than education
  • Because reading skills are innate while AI skills require technical background
  • Because media literacy only concerns trust in sources, not algorithmic bias
  • Because literacy programs lack sufficient funding to teach AI concepts
When Guide challenged your human-in-the-loop claim, what aspect did Guide highlight as problematic?
  • That AI systems will always make wrong suggestions
  • That AI enhances efficiency, making the loop unnecessary
  • That AI personalization is indistinguishable from human bias
  • That AI pre-structures the choices you make by shaping what you see before exercising judgment

Survey results

Opinion Changes

Students rated the following statement: Using Chatbots and other AI technology will enhance our autonomy rather than undermine it.

[Chart: Maya's and Jordan's pre-chat and post-chat opinions, plotted on a seven-point scale from Strongly disagree to Strongly agree, with markers indicating movement toward agreement or disagreement.]
Partner Ratings

[Table: each statement below was rated on a five-point scale from Strongly Disagree to Strongly Agree; the names in parentheses are the students who rated that statement.]

  • Guide's contributions improved the discussion (Maya, Jordan)
  • Guide treated me and my partner with equal respect (Maya)
  • Guide supported both sides of the discussion equally (Maya, Jordan)
  • I was not offended by my partner's perspective (Maya, Jordan)
  • My partner was respectful (Jordan)
  • My partner was genuinely trying to understand my perspective (Maya)
  • My partner had better reasons for their views than I expected (Maya)
  • It would be good if more students and classes used Sway (Jordan)
  • Sway helped me articulate my thoughts/feelings better (Jordan)
Word Counts

  • AI Guide: 345 words · 4 messages
  • Jordan (student): 918 words · 6 messages
  • Maya (student): 724 words · 6 messages