• March 4, 2026 10:45-11:30 Sessions

    • Level: Novice

      Presenters:

      This session presents an approach to classroom AI use that focuses on two instructional goals: scaffolding ethical, transparent AI use in the writing process and making learning through discussion visible in hybrid and asynchronous team-based courses. Rather than treating AI as a generalized productivity tool, the model embeds AI at specific points in the learning process to support learning, conceptual application, teamwork, and the development of critical thinking.

      The presenters will share a series of assignments that require students in a writing-intensive course to document, evaluate, and reflect on their AI-assisted decisions during drafting and revision. Building on this structure, we will describe an AI-facilitated discussion platform, Breakout Learning, deployed to observe and support team interactions that are otherwise difficult to assess. The platform prompts discussion aligned to course objectives, analyzes participation and conceptual engagement using Bloom’s Taxonomy, and provides instructors with configurable, non-punitive analytics.

      Drawing on a Spring 2026 multi-course deployment across undergraduate and MBA courses, we will illustrate the tool’s methods and student response. We will also outline an assessment design using pre- and post-intervention student self-reports, discussion analytics, and qualitative feedback to examine concept comprehension, confidence in application, and critical thinking development. Implications for faculty workload, assessment practices, and ethical AI integration will be discussed.

    • Level: Intermediate

      Presenter: Collin Vaughn

      Artificial intelligence is embedded in the workflows graduates encounter immediately upon entering the workforce. In large organizations, AI is already expected to be used to generate content, create visual systems, accelerate production, and support decision making at scale. However, these tools only produce high-quality outcomes when guided by strong human judgment, process, and taste.

      This session shares firsthand experience from IBM Consulting, where AI workflows are used across a global organization to drive productivity, consistency, and engagement. Drawing from real, award-winning work, the presentation shows how early experimentation with AI produced inconsistent and low-quality results, and how reintroducing foundational design practices transformed those tools into reliable, scalable workflows.

      Rather than positioning AI as a replacement for expertise, the session frames it as an amplifier of skills developed through programs like Texas State's Communication Design program, such as composition, concepting, critique, and iteration. While not prescriptive about teaching approaches, the session offers clear visibility into how AI is currently used in industry and what graduates are expected to navigate on day one.

    • Level: Novice

      Presenter: Mr. Eric Algoe

      Artificial intelligence has become an effective resource for university operations. In this session, Mr. Eric Algoe, Executive Vice President for Operations and Chief Financial Officer for Texas State University, demonstrates how tools such as ChatGPT and NotebookLM can support analysis, clarify complex topics, and prepare information for timely action. He will share examples from business operations and outline ways non-technical teams can apply AI in daily work.

    • Level: Novice

      Presenter: Dr. Vangelis Metsis

      With recent surveys showing 97% of students using AI to maximize efficiency, there is a risk that learners are bypassing the "productive struggle" required for deep understanding. Using the framework of Daniel Kahneman’s "System 1" (fast, automated thinking) versus "System 2" (slow, deliberate thinking), this session introduces the "Grind Smart" strategy: a pedagogical approach where AI handles the routine work, but assessments aggressively target the deep, deliberate thinking that AI cannot replace.

      Presented by a Computer Science faculty member, this session shares practical strategies for shifting the grading focus from generation (doing the work) to evaluation (judging the work). We will explore two specific assessment models adaptable across disciplines: Failure Case Analysis, where students use AI to build a solution but are graded on their ability to find and explain where the AI failed; and Competition-Based Learning, where gamified leaderboards force students to optimize and refine AI outputs rather than accepting the default answer. Participants will leave with a roadmap for designing assignments that treat AI as a "cognitive partner" while ensuring students remain the critical thinkers in the loop.

    • Level: Intermediate

      Presenter: Doreen Mayrell

      This session challenges the growing trend of “AI-proofing” assignments and reframes artificial intelligence as a catalyst for better teaching and learning. Participants explore a practical blueprint for AI-powered flipped learning in which an outcomes-aligned AI tutor provides first exposure to content, while class time is reclaimed for coached problem solving, discussion, and authentic application. Examples span disciplines, from digital twins that simulate lab experiments, to historical decision rooms that support source analysis, to writing copilots that enhance feedback and revision.

      Rather than theory, this session focuses on implementation. Attendees gain a modular framework for safely integrating AI tutors, a responsible-use playbook with bias checks and human-in-the-loop safeguards, and a rapid-pilot plan to track cost per successful learning outcome. The overall message is pragmatic transformation: when educators design AI interactions that are safe, measurable, and discipline-authentic, classrooms shift from compliance and policing to creativity and practice. Participants leave with ready-to-use tools and strategies that align innovation with real instructional impact.

    • Level: Novice

      Presenter: Maria Wasley-Valdez

      With over 260,000 living alumni, Texas State’s University Advancement team has a big job in keeping alumni connected to their alma mater. This presentation explores how University Advancement is utilizing artificial intelligence to enhance alumni engagement by supporting more timely, relevant, customized, and data-informed outreach via channels relevant to our alumni. As Texas State seeks new ways to strengthen relationships with alumni amid growing expectations for personalization, AI tools such as a Virtual Engagement Officer offer opportunities to scale strategic communication efforts and ultimately bolster connection and community.

      Drawing on applications within University Advancement and the Alumni Association, this session examines how AI can assist with engaging larger audiences through one-to-one communications without replacing the human relationships at the core of alumni engagement. We will highlight use cases such as connecting alumni to campus events and news, college-specific content, and Alumni Association membership offers.

    • Level: Intermediate

      Presenters:

      • Mica Rutschke
      • Ripsime Bledsoe

      As Generative AI reshapes higher education, educators must move beyond policing tools to cultivating "AI-Resilience" (Corbin et al., 2025). This session presents a dual-layered case study from a capstone research course where students operationalized an "AI as Assistant" model. We examine how this intervention shifted student perceptions of self-efficacy and critical evaluation in academic writing.

      Evaluating the complex process of student-AI interaction, however, generates massive amounts of qualitative data. To address this, we present a practical methodological innovation: a human-in-the-loop, AI-assisted qualitative research workflow. We demonstrate how we utilized AI to perform stage-by-stage thematic analysis on student logs and pre/post surveys. By generating Baseline, Process, and Outcome codebooks, we successfully mapped the cognitive journey of students. Attendees will leave with a framework for integrating AI into writing assignments and a specific, prompt-engineered workflow for analyzing qualitative student data efficiently.

    • Level: Novice

      Presenters: 

      • Jay Nguyen - AWS Account Manager
      • Sam Telles - AWS Senior Solutions Architect
      • Sohail Noor Muhammad - AWS Solutions Architect

      Join us for an engaging session showcasing how Texas State University is revolutionizing student support with "Ask TXST"—an AI-driven Assistant chatbot, powered by AWS, that provides instant, accurate answers about academic programs, admissions processes, financial aid, university policies, and campus life—available 24/7.

      In this presentation, we'll explore Texas State University's journey with AWS, from identifying challenges to implementing a cutting-edge AI Assistant in a matter of months. We'll also deep-dive into Texas State University's implementation of "Ask TXST" using Amazon Q Business, featuring a live demonstration of the AI Assistant that now serves over 40,000 students. Additionally, we'll highlight the growing adoption of AWS services across multiple Texas State University departments and discuss exciting opportunities for future collaboration. Whether you're exploring AI solutions for your department or looking to optimize existing implementations, this session will provide valuable insights and practical takeaways.

  • March 4, 2026 1:30-2:15 Sessions

    • Level: Novice

      Presenter: Dr. Ju Long

      This session showcases a transformative project within an online MBA "AI and Business Applications" course, where 60 non-technical students moved beyond basic prompt engineering to successfully design and deploy functional AI chatbots and agents. Students utilized accessible platforms, including Microsoft AI Foundry, Google Antigravity, and Claude, to build custom chatbots and agents capable of solving specific business inefficiencies.

      The presentation will outline the pedagogical structure used to demystify AI architecture for non-technical learners. It highlights a unique "triangle of support"—combining peer collaboration, instructor guidance, and AI-assisted troubleshooting—that allowed students to overcome technical hurdles. Attendees will view a sample of these student-created agents and leave with a roadmap for implementing similar "low-code/no-code" experiential learning projects that scale across disciplines.

    • Level: Intermediate

      Presenters:

      Large Language Models (LLMs) are increasingly used in programming courses. Still, many classroom applications rely on ad hoc prompting or generic Chain of Thought reasoning that is not well aligned with established software engineering practices. This session introduces Module of Thought (MoT), a structured prompting approach that guides LLMs to follow a defined software development process grounded in architecture, design principles, and design patterns.

      We present classroom experiences using MoT in a software development course, including a short in-class demonstration where students applied the approach to incrementally develop a calculator application. Instead of generating code in a single step, the LLM was guided to reason through and produce artifacts module by module, reflecting architectural decisions, interface contracts, and design patterns taught in class.

      The session demonstrates how MoT supports transparency, traceability, and closer alignment between AI-generated output and course learning objectives. Evidence drawn from student submissions, generated design artifacts, and instructional observations shows improved design consistency, more productive student interaction with AI tools, and clearer opportunities for instructor feedback. Participants will leave with a practical framework for integrating structured LLM prompting into project-based courses across disciplines. 

    • Level: Intermediate

      Presenter: Dr. Timothy Ponce

      Career readiness has become a central concern for many public universities, and the growing presence of AI in hiring has intensified questions about how students should prepare for the job market. Although career centers provide important institutional support, students often turn first to faculty in their home departments because they have established relationships with them. Faculty, however, are usually trained in research rather than industry practices, and many feel uncertain about how to discuss employment expectations or résumé development with confidence.

      This session introduces a practical method that uses web scraping and AI-supported text analysis to help faculty guide students in examining job descriptions as data. The approach gives faculty a structured way to help students identify common skills, tools, and language in postings related to their fields, allowing students to connect what they learn in their classes to skills listed in job posts. When students later visit a career center, they do so better prepared and with a clearer sense of how their academic work connects to hiring expectations. The method offers faculty a manageable entry point into these conversations while supporting students’ developing AI literacy.

    • Level: Novice

      Presenter: Dr. Anica Lee

      Faculty conversations about AI often center on concern, uncertainty, or enforcement, leaving many instructors unsure how to engage productively with these tools. This session introduces PlayLab, a low-stakes, pedagogically grounded approach that positions AI as a space for exploration, reflection, and learning rather than a shortcut or replacement for student work.

      PlayLab reframes AI use as a learning laboratory, where students and instructors test ideas, examine limitations, and reflect on outcomes. Drawing from classroom examples, this session will demonstrate how PlayLab activities can support student engagement, critical thinking, and metacognition while maintaining academic rigor. Rather than focusing on technical expertise, the emphasis is on instructional design and intentional framing.

      Participants will see how PlayLab can be implemented in a variety of teaching contexts—including STEM, writing-intensive, and general education courses—and how it can help normalize transparent, ethical AI use. Attendees will leave with adaptable strategies and activity ideas they can immediately apply in their own courses.

    • Level: Novice

      Presenters: CADS, UL RDS, VPIT, CTLS

      Join us for a hands-on, highly accessible workshop where we will explore AI concepts and vocabulary, discuss responsible use and ethics, look at applied research tasks, and work on AI collaboration briefs.

      Workshop Segments Include:
       
      • AI Concepts & Vocabulary (CADS)
      • Responsible Use & Ethics (UL RDS)
      • Applied Research Tasks (live demos + guided activity) (VPIT)
      • AI Collaboration Brief  

       
    • Level: Novice

      Presenters:

      This session demonstrates how to create a functional AI Assistant using OpenAI's CustomGPT Builder. I'll share the complete experience of building a Materials Engineering Tutor for ENGR2300, from initial concept to a fully operational GPT serving students with four distinct learning modes: Practice, Exam, Tutoring, and Explain.

      The presentation covers fundamental Generative Pre-trained Transformer (GPT) concepts and critical implementation details, including optimization strategies (e.g., addressing the conversation starter visibility problem) and gamification elements (e.g., streak tracking with emojis), without any direct textbook integration. Attendees will learn prompt engineering techniques for accurate technical explanations, CSV structuring for question banks, and methods to create engaging educational interactions.

      This proof-of-concept demonstrates that faculty can create discipline-specific AI Assistants without programming skills, providing 24/7 student support at dramatically reduced costs compared to traditional tutoring. The session emphasizes practical, transferable strategies applicable across all STEM disciplines and technical subjects, making AI-powered education accessible to educators willing to curate high-quality content.

    • Level: Novice

      Presenter: Oracle

      Join us for an overview presentation on Oracle Academy, Oracle's philanthropic global education program, dedicated to advancing technology education in schools, colleges, and universities at no cost. Discover how Oracle Academy’s extensive resources can elevate your institution’s academic offerings and empower both educators and students to thrive in today’s technology-driven landscape.

    • Level: Novice

      Presenter: Oracle

  • March 4, 2026 2:30-3:15 Sessions

    • Level: Intermediate

      Presenter: Doreen Mayrell

      Traditional education models were designed for efficiency rather than personalization. While the one-teacher-to-many approach expanded access, it often left students disengaged or underprepared. Advances in generative artificial intelligence now make it possible to personalize learning at scale without increasing teacher workload.

      This session introduces an AI-powered flipped classroom model that shifts personalization to pre-class learning and transforms in-class time into a high-impact space for application, collaboration, and feedback. Students engage with adaptive, AI-generated micro-lessons tailored to their prior knowledge, misconceptions, pace, and interests. As a result, class time moves away from lecture and toward problem solving, discussion, and deeper understanding.

      Drawing on classroom implementations and educator professional development experiences, this presentation explores how AI can function as a personal learning coach while preserving the central role of the teacher. Key themes include alignment with learning science, implications for equity and access, and scalable design principles. The central message is that AI enables a shift from standardized instruction to learning environments that meet students where they are, at scale.

    • Level: Novice

      Presenter: Dr. Apan Qasem

      Classroom AI Assistants are rapidly becoming powerful partners in teaching and learning. From simple chatbots that answer common course questions to more advanced virtual teaching assistants that guide students through assignments, AI can reduce instructor workload while improving student support and engagement. 

      This hands-on tutorial is designed specifically for TXST faculty and will walk participants through building an AI Assistant for their own Canvas course: no AI expertise required. Using software scaffolding developed in-house at TXST, faculty will learn how to stand up a functional AI Assistant by configuring a few key settings rather than writing code. 

      Participants will see how course materials can be securely pulled from Canvas, transformed into an AI-ready knowledge base, and deployed as a conversational assistant that students can interact with anytime. The session will also highlight practical use cases, deployment options, and resources for faculty who want to explore AI-enhanced teaching further.

    • Level: Novice

      Presenters:

      Generative AI is rapidly reshaping higher education, yet instructors continue to grapple with how to integrate these tools in ways that support learning without undermining critical thinking. This session presents a case study from a graduate-level Geography seminar on Global Climate Change that implemented a structured, three-stage approach to incorporating AI tools (such as ChatGPT, Gemini, and Perplexity) into coursework.

      In Stage 1, students completed tightly scaffolded assignments comparing their own answers with AI-generated responses using a standardized template. Stage 2 transitioned to a semi-structured format, where students used AI as a learning aid to answer topical questions while documenting verification and evaluation practices. Stage 3 granted students full autonomy to use AI in an independent final project, paired with documentation of AI use.

      Drawing on student homework artifacts, reflections, and pre/post surveys, this session examines how structured AI engagement influenced students’ learning, confidence, and critical awareness of AI’s strengths and limitations. Findings highlight how staged design helps students move from skepticism or overreliance toward more responsible, reflective use of AI. The session concludes with practical strategies for designing AI-integrated coursework that promotes disciplinary learning, ethical awareness, and transferable critical thinking skills.

    • Level: Intermediate

      Presenter: Dr. Moira DiMauro-Jackson

      This session shares a simple, repeatable way to turn short OER videos into guided online lessons—augmented with light, well-scoped AI support. In my fully online Italian sequence (A1/A2), I pair brief “Professor Dave” grammar clips with free practice from Italian Conversations and add an AI “micro-tutor” prompt in Canvas. Students are coached to use AI only for hints, rephrasings, and quick checks—not for doing the task—via clear “allowed / not allowed” examples.

      Each lesson follows four teacher-friendly steps that scale to any subject:
      1. Before watching: a 1-minute warm-up question tied to the week’s goal.
      2. During: a pause-and-check note plus a 1-question gate in the LMS.
      3. After: short practice using an open resource (listen/read, model, then create).
      4. Wrap-up: a one-sentence reflection and an optional AI-hint box (“Ask for a clue, not a solution”).

      Evidence from course analytics and student comments shows more on-time starts, clearer questions, and fewer “I’m lost” emails, while grades remain aligned with authentic performance. Accessibility is built in: transcripts/captions, printable versions, screen-reader-friendly pages, and flexible windows.

      Attendees leave with a copy-and-paste Canvas template, rubric snippets, an AI usage statement, and starter prompts that any instructor can adapt—swapping in their own OER video or reading.

    • Level: Novice

      Presenters: CADS, UL RDS, VPIT, CTLS

      This is a repeat session.

      Join us for a hands-on, highly accessible workshop where we will explore AI concepts and vocabulary, discuss responsible use and ethics, look at applied research tasks, and work on AI collaboration briefs.

      Workshop Segments Include:
      • AI Concepts & Vocabulary (CADS)
      • Responsible Use & Ethics (UL RDS)
      • Applied Research Tasks (live demos + guided activity) (VPIT)
      • AI Collaboration Brief

    • Level: Novice

      Presenters:

      This session presents two custom AI tools built with Microsoft Copilot and ChatGPT to streamline Texas State University's curriculum review process. These tools support the identification and revision of prescriptive, advocacy-oriented, or affective-domain language in course proposals, ensuring that learning outcomes focus on analytical skills rather than predetermined conclusions. The Faculty Pre-Submission Assessment Tool provides real-time AI-powered analysis of course titles, descriptions, learning outcomes, and justifications, offering specific examples of problematic phrasing and concrete revision suggestions. The Reviewer Assessment Tool delivers a systematic AI-driven evaluation, generating consistent feedback across different reviewers and providing actionable recommendations. Both tools are actively deployed in Texas State's curriculum review workflow.

      The tools protect faculty academic freedom by analyzing only how learning outcomes are framed, focusing on analytical skills versus predetermined conclusions, while advancing equity by ensuring all students can participate regardless of background or beliefs. Attendees will learn the technical process of developing these tools, receive customizable prompt templates and agent configurations, and explore discipline-specific applications from Texas State's implementation across social sciences, humanities, STEM, and professional programs. This session demonstrates practical AI implementation using widely available platforms, requiring no specialized software or technical expertise beyond basic familiarity with Copilot or ChatGPT.

    • Level: Novice

      Presenters:

      This session provides faculty with a practical, hands-on learning model for using generative AI to help students practice and reflect on professional communication in high-stakes settings. Participants will learn how to implement a structured, low-stakes role-play activity in which students use generative AI voice mode to design, conduct, and evaluate simulated interviews aligned with real-world professional goals.

      Students engage with generative AI to (1) create structured questions for an authentic professional scenario, (2) alternate between role-playing the interviewer and the candidate using AI voice interaction, and (3) reflect on their experience through a short video or written reflection. Through guided prompts and iterative feedback, students practice articulating ideas clearly, responding professionally, and evaluating their own communication strategies.

      Student feedback indicates that generative AI provides a low-stress, judgment-free environment for skill development, allowing learners to rehearse, receive feedback, and refine responses without the pressure of a live audience. Many students report increased confidence and greater awareness of professional communication expectations as they prepare for real-world interviews and presentations.

      This learning model emphasizes reflection, self-assessment, and confidence building and can be adapted across disciplines such as communication, business, marketing, speech, and career readiness courses.

    • Level: Novice

      Presenter: Apple

      Artificial Intelligence and Machine Learning are transforming research productivity, and universities are increasingly looking to deploy AI-enabled computers to stay ahead. Apple takes a unique approach — deeply integrating hardware, software, and services to deliver an unparalleled user experience.

      Built with Apple Silicon, every Mac features the Neural Engine for accelerating AI and ML workflows. From running large language models locally to connecting securely with cloud-based systems, Mac offers researchers a powerful, private, and efficient AI platform.

      What you’ll learn:

      • Explore Apple’s innovative hardware capabilities optimized for today’s AI and ML workflows
      • Discover Apple Intelligence features that boost productivity for faculty, staff, and students
      • Understand the role of Private Cloud Compute in accessing larger server-based models while protecting sensitive data
      • Experience the performance of public Large Language Models (LLMs) running efficiently on Apple Silicon Macs

  • March 4, 2026 3:30-4:15 Sessions

    • Level: Intermediate

      Presenter: Dr. Omar Lopez

      The definition of "workforce ready" changed the moment generative AI entered the office; this session identifies exactly which 27 skills matter most now and how their application has fundamentally shifted. While much of the current AI discourse remains speculative, this session presents evidence-based findings from an analysis of 115 non-STEM and 88 STEM undergraduate-level occupations (Job Zone 4).

      Utilizing SAS 9.4 and O*NET 28.1 data, we identified the core Knowledge, Skills, and Abilities (KSAs) essential for the modern graduate. We then cross-referenced these with "AI-Augmented" benchmark anchors—behavioral examples that replace legacy tasks with new human-AI synergies. For instance, the competency of Deductive Reasoning shifts from manual rule-application to the strategic validation of AI-generated logical chains.

      This session moves past abstract "AI literacy" to provide educators with a concrete, behavioral roadmap for curriculum redesign. Participants will explore how to transition students from routine "doing" to high-level "directing," ensuring that the "Education Shield" remains a powerful tool for social mobility and career resilience in an increasingly automated labor market.

    • Level: Novice

      Presenter: April Miles

      As generative AI tools become increasingly embedded in higher education, many faculty are unsure how to move beyond reactive, ad hoc policies or blanket prohibitions toward intentional, learning-centered use. This session draws on a mixed-methods dissertation study conducted at Texas State University that examined the impact of a brief, structured instructional intervention focused on Critical Prompt Literacy—teaching students how to think with AI rather than around it.

      The study involved a 30–40-minute classroom lesson delivered across multiple sections of US 1100, reaching approximately 380 undergraduate students. The lesson combined practical instruction on crafting purposeful prompts (context, role, task, and constraints) with guided ethical reflection on transparency, authorship, and responsible AI use. Pre- and post-intervention survey data and follow-up interviews with students and faculty revealed measurable shifts in students’ confidence, ethical awareness, and understanding of appropriate AI use in academic work.

      Participants in this session will learn how even short, intentional instruction can improve student engagement, reduce confusion around academic integrity, and support more equitable access to AI tools. The session will emphasize classroom-tested strategies that faculty can immediately adapt to their own courses, regardless of discipline or prior experience with AI.

    • Level: Novice

      Presenter: David Angelow

      As generative AI becomes embedded in academic and professional environments, instructors face a critical challenge: how to design learning experiences where students learn with AI rather than outsource thinking to it. This session shares classroom-tested approaches from two undergraduate courses—ISAN 3380 (business analytics) and ANLY 2300 (a cross-functional analytics course for all majors).

      Students engaged in structured AI-supported activities to deepen conceptual understanding, connect course materials, and examine how AI is reshaping real-world analytics work. Activities included student-generated prompts using AI tools such as Perplexity to supplement course content, and model-agnostic analyses of how AI affects data cleaning, data modeling, and data visualization in professional practice.

      The session emphasizes learning outcomes, assessment design, and observed impacts on student understanding and engagement. Participants will gain practical strategies for aligning AI use with course objectives, evaluating student learning in AI-supported assignments, and preparing students for AI-influenced workplaces.

    • Level: Intermediate

      Presenter: Dr. Ivilina Popova

      Generative AI is now unavoidable in student work, but learning gains depend on how assignments are designed. This session presents evidence from a semester-long undergraduate project that deliberately integrates AI while preserving analytical rigor and student ownership.

      Students completed a hedge-fund-style project requiring portfolio construction, institutional-data backtesting, risk-adjusted performance evaluation, and a professional investor pitch. AI use was permitted only for explanation, verification, and communication support, and every student submitted a required AI-use reflection. Across submissions, students consistently used AI to clarify financial concepts, verify Excel formulas, and improve presentation quality—while all quantitative analysis and investment decisions remained student-generated.

      Student reflections indicate increased confidence and understanding, particularly when navigating complex tools or time constraints. As one student noted, “AI helped me check my work and understand calculations I didn’t fully grasp at first, but I still had to understand every step to make the numbers make sense.”

      The key insight is that well-designed constraints turn AI into a learning accelerator rather than a shortcut. 

    • Level: Novice

      Presenters:

      Artificial intelligence tools are increasingly shaping how research is conducted across disciplines, yet many faculty remain unsure where to begin or how to adopt these tools responsibly. This panel features early-career faculty sharing concrete, real-world examples of how they have integrated AI into their research workflows. Panelists will discuss the tools they use, the tasks where AI has been most effective, ethical and authorship considerations, and lessons learned from experimentation.

      The session aims to demystify AI for research by focusing on practical use cases rather than theory. By highlighting both successes and limitations, the panel will offer an honest and accessible perspective on how AI can support research productivity while maintaining rigor and integrity.

    • Level: Intermediate

      Presenter: Dr. Matthew Flynn

      Generative AI tools are now deeply embedded in students’ academic and professional lives, yet most students still interact with AI as a simple tool rather than as a collaborative partner. As NVIDIA CEO Jensen Huang has noted, AI itself will not replace workers, but those who use AI effectively will replace those who do not.

      This session reframes AI as a teammate that can meaningfully enhance thinking, creativity, and productivity when used well, but can substantially undermine learning when used poorly. The talk introduces an interdisciplinary framework for effective prompt design grounded in clarity of objectives, contextual reasoning, and iterative refinement. A live demonstration contrasting ineffective and effective prompts highlights common student pitfalls and illustrates the cognitive skills that should be explicitly taught in AI-enabled classrooms.

      Attendees will leave with a practical mental model for guiding students toward responsible, transparent, and intellectually engaged AI use. This approach emphasizes productivity gains while reinforcing, rather than replacing, critical thinking.

    • Level: Novice

      Presenters:

      This session presents research results from a three-semester mixed-methods study that examined how undergraduate psychology students used and perceived generative AI (genAI) in an upper-level research methods course, followed by a practical demonstration of how instructors can conduct similar investigations in their own classrooms. Drawing on 227 written reflections from 68 students from Summer 2024 to Spring 2025, the study found that students primarily used genAI for academic support: clarifying readings, summarizing research articles, structuring writing, and navigating APA style. Students’ attitudes became increasingly positive over time, though concerns about accuracy, ethical boundaries, and overreliance persisted. Ethical reasoning and instructor guidance emerged as central influences shaping responsible genAI use.

      The second half of the session pulls back the research curtain, showing how to extract, clean, and transform Canvas reflections and assignments into usable datasets that reveal patterns in students’ genAI engagement. We demonstrate a “take and transform” approach, distinct from traditional lecture or “make and take” formats, offering practical strategies for designing reflection prompts, integrating pedagogical AI activities, and using classroom-derived data for research or classroom application. We address realistic timelines and IRB considerations. Participants will leave with adaptable tools they can apply to study and support students’ evolving genAI practices.

    • Level: Novice

      Presenter: Shashank Santosh

      In an age of constant distraction, attention is a precious and limited resource. Designing for learning is also designing for attention - an opportunity to nurture, protect, and sustain it. This workshop will explore strategies for engaging students across the entire learning journey: from piquing curiosity by framing a course around a big question, to modeling engagement and creating a classroom climate that invites participation. We'll examine practical methods for “warming the course” and sustaining focus through interactive content. Drawing on insights from theatre and learning science - with examples of how tools like Top Hat can bring these strategies to life - participants will walk away with practices to foster connection and design learning experiences that resonate in an attention-scarce world.

  • March 5, 2026 10:45-11:30 Sessions

    • Level: Intermediate

      Presenter: Dr. Jennifer King

      This session explores practical tools designed to enhance student engagement, support comprehension, and reduce instructional preparation time. Participants will learn how to leverage NotebookLM video creation to generate instant transcripts and explore options for adding closed captioning to instructional videos. The session will also demonstrate how to use Canva AI to create professional-quality visuals, including custom page images for your student-ready Canvas pages. Whether the goal is to increase accessibility or streamline lesson preparation, this session provides a clear blueprint for implementing a modern, AI-enhanced teaching workflow.

    • Level: Intermediate

      Presenter: Jennifer Miller-Ray

      AI-enabled multimodal assessment systems are transforming higher education by capturing authentic learning behaviors that traditional methods overlook. Building on recent developments in AI assessment (Yesilyurt, 2023; Edwards, Wexner, & Nichols, 2021; Akintola, Akintayo-Kadri, et al., 2025), this session explores how intelligent systems create more accurate, accessible, and personalized feedback processes essential for student success. The integration of AI-driven tools, including advanced reading diagnostics, natural language processing for oral responses, and platforms that synthesize voice, text, and gesture data, reveals comprehensive insights into the learning process while supporting the relationship-rich education framework advocated by Felten and Lambert (2020).

      Grounded in Universal Design for Learning principles (CAST, 2024), this session demonstrates how multimodal AI assessment reduces barriers while maintaining rigorous academic standards. Participants will examine practical applications across disciplines, addressing critical considerations including ethical frameworks, privacy concerns, and implications for instructional design as assessment evolves into an intelligent, embedded component of teaching and learning.

      Attendees will receive curated AI literacy assessment resources demonstrating strategies for leveraging these tools to strengthen feedback loops, increase engagement, and develop discipline-specific competencies. The presentation provides actionable frameworks for integrating AI-enabled multimodal assessment while maintaining pedagogical integrity and student agency in higher education contexts. Discover how AI and assessment tools reshaped the Connecting with Students for Success Program, sponsored by the Provost's office, to improve feedback with preservice educators in the Learning with Technology undergraduate course at Sul Ross State University.

    • Level: Novice

      Presenter: Dr. Miha Vindis

      This session introduces a practical, human-centric workflow for integrating Artificial Intelligence into writing and communication-based curricula. Moving beyond basic text generation, this approach empowers students to use AI to deepen their own empathy and critical thinking. The presenter will demonstrate a three-stage student workflow piloted in the CPM and MPA programs but applicable across multiple disciplines: 1) Using AI to simulate diverse audience personas for independent analysis; 2) Leveraging AI to refine tone and style for specific contexts; and 3) Employing AI role-play to prepare for rigorous oral defense. By positioning AI as a "critical collaborator" rather than a writer, this method forces students to engage more deeply with the human elements of argument. The session provides attendees with a replicable framework for teaching these skills, improving student performance in research, persuasion, and defense.

    • Level: Novice

      Presenter: Jeff Davis

      Artificial intelligence is often framed as an optional tool—something educators can choose to allow or prohibit. Yet most faculty already use AI daily without realizing it: through algorithmic filtering, adaptive interfaces, real-time transcription, predictive text, auto-enhancement, and embedded systems across devices. This session reframes AI not as a discrete technology but as an ambient cognitive infrastructure that shapes how students (and faculty) think, work, and learn.

      Using real classroom experiences from Communication Design—paired with practical implementation strategies—this session demonstrates how to move from AI avoidance or fear toward AI fluency, ethical awareness, and creative agency. Participants will see concrete, evidence-based examples of assignments, critique methods, accessibility gains, and engagement strategies that work for both AI novices and skeptics.

      Rather than debating whether students “should” use AI, we examine how educators can teach effectively in an environment where AI already influences cognition, attention, creativity, and assessment. The session provides a pragmatic, values-driven roadmap for integrating AI that strengthens learning outcomes, supports equity, and maintains student authorship.
       

    • Level: Intermediate

      Presenter: Dr. Ted Lehr

      Recent Texas State computer science graduates discuss their use of AI on the job and contribute to a conversation on what outcomes should be targeted by educators. 

    • Level: Novice

      Presenters:

      This session presents Texas State University's innovative approach to streamlining general education assessment auditing through AI assistance. Facing mandated assessment requirements from THECB and SACSCOC, faculty auditors spend considerable time reviewing assessment reports with varying consistency in quality. In collaboration with Texas State IT, the Office of Program Accreditation and Assessment developed a custom ChatGPT-based Contextual AI Assistant trained on institutional rubrics and exemplar reports. This tool conducts initial reviews of assessment reports, identifying gaps and providing recommendations while maintaining faculty expertise as the final authority. Participants will learn how AI can augment, not replace, human judgment in academic assessment processes, reducing workload while improving consistency and quality of feedback.

    • Level: Intermediate

      Presenter: Sunja Fraser

      As artificial intelligence becomes more present in classrooms, educators face a critical question: how do we use AI without losing the humanity at the heart of teaching and learning? This session shares real classroom experiences from a secondary English classroom where AI was intentionally used to center student stories, amplify student voice, and support teacher decision-making rather than replace it.

      Participants will explore practical strategies for using AI as a thinking partner to help students reflect, revise, and articulate their experiences, particularly for students who often struggle to be seen or heard in traditional academic spaces. Through concrete examples, including prompts, scaffolds, and student artifacts, this session demonstrates how AI can support equity, engagement, and deeper learning when grounded in intentional instructional design.

      Rather than focusing on tools alone, this session emphasizes instructional moves that humanize learning and can scale across disciplines. Attendees will leave with adaptable strategies, classroom-tested prompts, and a clearer framework for using AI to enhance student-centered instruction while maintaining ethical and equitable practices.

    • Level: Intermediate

      Presenter: Jennifer Stob

      Last year at this symposium, I shared my experiences with student AI plagiarism and outlined strategies to keep AI helpful rather than hurtful in humanities assignments. I return with hard lessons this year: my carefully crafted and lenient AI policy failed, as did some of my strategies for inculcating an appreciation of writing. I process what happened here, centering my students' own observations about their AI use. Advanced, engaged student thinkers do not always view writing as worthwhile, even when given great freedom to customize assignments. Student aversion to writing likely goes deeper than I had thought.

      I will review the AI policy I developed, explain its breakdown, share my students' reflections on their AI use, and present a new framework for an entire course unit on AI use and the worth of writing. My session will offer new strategies for helping students understand writing as embodied thinking essential to their personal, professional, and civic lives. This renewed attempt at pedagogy, which reframes both the writing-AI relationship and the relationship between interior and exterior thought, is applicable to classrooms across university disciplines. The goal is to ensure students choose writing deliberately and understand what they gain by that choice.

  • March 5, 2026 1:30-2:15 Sessions

    • Level: Intermediate

      Presenter: Kamarie Carter

      As AI quickly integrates into our classes and students’ lives, there is a valid concern about its impact on authentic learning and academic integrity. How can we educate students about AI in their respective disciplines while steering them away from an overreliance on it? Join this interactive session to explore practical strategies for fostering authentic learning in an AI-forward class. Participants will have the opportunity to reflect on current assignments and brainstorm ways to integrate AI that enhances, not replaces, meaningful learning experiences. 

    • Level: Novice

      Presenter: Dr. Shelly Forsythe

      This session examines the implementation of weekly, small-scale AI-embedded learning tasks in a STEM for Early Childhood and Elementary Education course for preservice teachers. Rather than relying on a single high-stakes AI assignment, the course design integrated consistent, low-risk AI activities paired with structured reflection. Drawing on student work and reflective data from elementary, special education, and bilingual preservice teachers, the session highlights how repeated exposure influenced participants’ understanding of AI, their perceptions of its role in teaching and learning, and their self-efficacy in using AI professionally. Findings from two months of initial implementation will be presented. The session concludes with implications for faculty across disciplines interested in embedding AI in ways that emphasize pedagogy, reflection, and transfer rather than tool mastery alone. Participants will leave with concrete examples and design principles they can adapt to their own courses.

       

    • Level: Intermediate

      Presenter: Dr. Omar Lopez

      As Artificial Intelligence reshapes the United States labor market, educators face a critical challenge: preparing students for an economy defined by both displacement and augmentation. This session introduces the "Education Shield" framework, based on a comprehensive study of 846 occupations using SAS 9.4 and O*NET data. I present the Net Labor Capacity (NLC) model, which quantifies how AI serves as a productivity multiplier rather than just a subtractive force. The findings reveal that while AI displacement is pervasive, higher educational attainment—specifically Job Zones 4 and 5—provides a significant "Augmentation Dividend," protecting labor value while amplifying productive capacity. This session moves past the binary "job loss" debate to provide evidence-based insights into the specific human competencies, such as social intelligence and complex problem-solving, that students must master to thrive. Participants will explore how to integrate these findings into P-16 curricula to ensure all students can transition from vulnerable workers to augmented "Super-Workers".

    • Level: Novice

      Presenter: Dr. Oluseyi Sowole

      Artificial intelligence is becoming increasingly visible in higher education, yet its adoption remains focused on individual tools rather than on the design of learning environments. As a result, institutions risk adding technological complexity without meaningfully improving student learning, engagement, or instructional effectiveness.

      This session reframes AI not as a collection of technologies to be adopted, but as a design element within a broader learning system. Drawing on teaching practice, learning science, and systems thinking, the session explores how AI can be thoughtfully integrated into course design, formative assessment, feedback processes, and student support in ways that reduce friction, increase clarity, and strengthen student learning. The emphasis is on supporting student thinking and sensemaking, rather than on automating or replacing core learning activities.

      Participants will examine practical, minimal-risk use cases of AI in teaching, discuss common pitfalls and ethical considerations, and apply a simple framework for deciding when and where AI meaningfully supports learning. The session is designed to be accessible to educators without technical backgrounds and focuses on pedagogical judgment rather than technical implementation. The overall goal is to help educators move from reactive adoption toward intentional design of AI-enabled learning environments that are sound, responsible, and focused on student engagement and success.
       

    • Level: Novice

      Presenter: Dr. Deirdre Williams

      Integrating artificial intelligence into classrooms presents preservice teachers with profound opportunities and ethical challenges for cultivating students’ critical thinking, especially regarding distinguishing fact from fiction and determining ethical use of AI tools. As Carter (2026) notes, one of AI’s central aims in education is to prepare a new generation to navigate an increasingly complex digital world, including through inquiry-based learning that encourages students to question, explore, and construct their own knowledge. Grounded in epistemological frameworks that ask “How do we know what we know?”, this session draws on a 2025 scoping review of AI learning tools in K–12 education, which highlights evidence-based strategies for supporting critical thinking and problem solving, as well as research on ethical and critical AI literacy in elementary settings (Meniado, 2024). Khan’s (2024) call to rethink how we “teach everything to everyone” underscores the urgency of helping students evaluate information credibility in an era of AI-generated content. A complementary systematic review (Wiese et al., 2025) further emphasizes ethics and reflective practice as essential learning outcomes. Participants will leave with practical, equity-centered tools that promote inquiry, verification, and reflective judgment in grades K–5 and are transferable to upper grades and higher education.

       

    • Level: Intermediate

      Presenter: Tim Mousel

      This session will demonstrate how educators and instructional technologists can use generative AI to radically streamline teaching workflows, focusing on integration with LMS systems. Participants will learn hands-on techniques for analyzing entire courses against best practices, increasing student success rates by redesigning assignment aesthetics, and uploading a previous semester's syllabus so AI can analyze course dates, assignments, and due-date patterns, then automatically generate new due dates, announcements, and Intelligent Agents for a new semester. The session also introduces an innovative method using Google NotebookLM to turn course materials or faculty guides into dynamic FAQs that can be embedded directly into D2L using accessible HTML accordion design.

      Through live demonstrations, participants will see how AI transforms repetitive tasks like date rollovers, announcement scheduling, and FAQ creation into automated, error-free workflows, freeing time for deeper teaching and design innovation. By session end, attendees will walk away with reusable prompts, tested templates, and an ethical framework for applying these techniques responsibly across disciplines.

    • Level: Intermediate

      Presenter: Casey Ford

      As generative AI becomes embedded in higher education, its potential to advance accessibility is both promising and complex. This session explores how AI tools can operationalize Universal Design for Learning (UDL) principles to create learning environments that work for all students. We will examine practical applications such as AI-driven captioning, adaptive content generation, and real-time language support, alongside strategies for mitigating algorithmic bias and ensuring equitable access. Participants will leave with tools for evaluating AI technologies through an accessibility lens and integrating them into course design without compromising academic integrity or student agency.

    • Level: Novice

      Presenter: You

      Join your colleagues attending the AI in Teaching and Learning Symposium in the Grand Ballroom to exchange ideas, chat about use cases, and get to know each other more. Discussion topics will be placed on tables to help facilitate conversation, and feel free to add your own topics to the mix as well. Learn from your peers and make meaningful connections with like-minded attendees.

  • March 5, 2026 2:30-3:15 Sessions

    • Level: Intermediate

      Presenters:

      This four-part professional development series, sponsored by the STEM-For-All Partnership, equips educators to integrate AI and educational technology into their instruction. Across the series, participants review or create AI use policies, design a custom AI bot, and enhance existing lessons with TEKS-aligned technology goals. Educators collaborate, receive peer feedback, and measure the impact of their tech-enhanced lessons. Attendees are required to participate in all four professional development sessions held on Saturdays (in September and November 2025, and January and March 2026) and to upload their completed lessons and AI tools into the Canvas site for review and sharing. The professional development series is a collaborative initiative of the Department of Organization, Workforce, and Leadership Studies (OWLS) and the College of Education’s LBJ Institute for STEM Education & Research. The STEM-For-All Partnership is an LBJ Institute grant-funded initiative through a Congressional appropriation under the sponsorship of U.S. District 31 Representative John Carter. This year-long pilot effort has the potential to be developed into a Texas State Global course offering and/or an undergraduate or graduate course.

    • Level: Novice

      Presenters:

      This qualitative study explores the lived experiences of undergraduate students enrolled in STEM-adjacent disciplines (referred to as STEM X) at a Hispanic-Serving Institution (HSI). The project examines how undergraduate STEM X students’ entry into modular bridging curricula in data and computation, AI, and high-engagement strategies contributes to their retention and development of belonging and STEM identity (Estrada et al., 2011; Hernandez, Woodcock, Estrada, & Schultz, 2013). Focusing on participants in the STEM-CLEAR program, the study seeks to illuminate individual and collective journeys through three longitudinal interviews. Special emphasis will be placed on the implementation of project activities within the systems and institutional context of the HSI, investigating how these practices support the persistence and success of students in STEM X (Nuñez et al., 2013).

    • Level: Intermediate

      Presenters:

      • Luis Hernandez-Ledezma
      • Dr. Uvaldina Janecek

       

      This session presents a human-centered framework for AI-enhanced teacher development in which AI tutors function as Vygotskian “More Knowledgeable Others,” supporting both pedagogical reasoning and target-language proficiency through scenario-based learning. Building on a chapter recently published with IGI Global, the session demonstrates how AI tutors can be designed to evaluate educators’ understanding of theory and their ability to apply that theory in realistic instructional scenarios using a specified target language.

      The system supports multiple target languages, including English, Spanish, Portuguese, and Vietnamese, allowing participants to engage with scenarios while practicing academic and professional language aligned to instructional goals. Since the publication of the chapter, the system has evolved to include highly contextual prompting, adaptive scenario progression, and multi-model verification processes that strengthen the accuracy, safety, and pedagogical alignment of AI-generated feedback. These components operate alongside a retrieval-augmented generation (RAG) architecture grounded in expert-reviewed content.

      Participants will explore how AI tutors can simultaneously support conceptual understanding, instructional decision-making, and language use within scenario-based activities. The session emphasizes design principles, ethical considerations, and the role of human expertise in creating reliable AI-supported learning environments for teacher preparation and professional development.

    • Level: Intermediate

      Presenter: Dr. David Levy

      This session follows on from last year’s presentation about the law and ethics of artificial intelligence in legal education and focuses on the practical use of generative artificial intelligence in the classroom. It challenges traditional assumptions about classroom learning and provides concrete examples of how the use of generative AI can enhance education rather than hasten its diminution. Given that the law is not static but is in an ongoing state of development through the legislative process and court decisions, memorization of detailed facts is counterproductive at best and may be better characterized as an act of malpractice.

      As professors, what is our role? If the expectation is that we need only provide facts to students and expect memorization of those facts, we're mere programmers. If the essential truths of a given subject remain constant while detail and nuance are subject to development, we should be concerned with promoting the ability to think and reason about the essential concepts. If we acknowledge that detail is subject to change, by making students aware of that potential variability and showing them how to verify whether information is current, we prepare them for a lifetime of use and application of such information.
       

    • Level: Novice

      Presenters:

      • CADS
      • University Libraries
      • VPIT
      • CTLS

      Join CADS, University Libraries, VPIT, CTLS, and campus partners for an informal drop-in session where faculty and graduate researchers can ask questions, explore AI tools, meet experts, and learn about research support resources. Topic stations include: AI tools for research, responsible use & ethics, data/HPC readiness, teaching with AI, and grant support. Attendees may bring questions, ideas, or works-in-progress.  

    • Level: Novice

      Presenters:

      • Mark Whitworth

      Explore AI use cases in higher education and learn how AI efforts often fail due to low utilization resulting from decisions that place security at odds with return on AI investment. Uncover why multitenancy and cloud environments are often critical to maximizing value, and how ARMOR, the first chip-to-cloud AI security framework developed with industry and university leaders, improves the flexibility, economics, and security of your AI efforts.

    • Level: Novice

      Presenters:

      This presentation examines the use of AI-powered simulations adopted by the Center for Professional Sales as a scalable instructional strategy aligned with experiential and competency-based learning. The session demonstrates how the Second Nature AI tool, implemented across professional sales courses at Texas State University, provides students with unlimited access to realistic sales conversations with virtual buyers and sellers, executive presentation feedback, sales competition preparation, and support for building effective elevator pitches. Second Nature AI addresses traditional barriers such as limited class time, peer availability, and instructor capacity.

      Unlike many artificial intelligence tools that primarily focus on content generation or knowledge acquisition, this application emphasizes the development of human-centered skills, including public speaking, emotional intelligence, active listening, and the ability to think on one’s feet in dynamic interactions. Participants will see how Second Nature AI simulates authentic sales scenarios, adapts dynamically to student responses, and delivers immediate, personalized feedback on key competencies, including communication, discovery, objection handling, and closing techniques.

      The presentation highlights the value of continuous, low-risk practice, enabling students to build confidence, refine their sales approach, and learn through repetition without fear of failure. The session concludes with practical examples of classroom implementation and evidence of impact on student skill development, engagement, and career readiness.

  • March 5, 2026 3:30-4:15 Session