The latest IJEI article is out: “Algorithmically-driven writing and academic integrity: exploring educators' practices, perceptions, and policies in AI era” by Leah Gustilo, Ethel Ong, and Minnie Rose Lapinid #AcademicIntegrity #ArtificialIntelligence https://lnkd.in/gFhphGM2
Sarah Elaine Eaton, PhD’s Post
More Relevant Posts
-
I help struggling students regain their confidence through innovative learning design solutions, excellence in learning, leadership development, career readiness, and professional development.
There is an urgent need to educate students on the proper use of AI. One of the most concerning consequences of generative AI tools like GPT is the number of students who have faced academic probation or suspension for submitting AI-generated content. Unfortunately, this trend continues even though AI is here to stay. Academic institutions and educators need to accept this new reality and teach students how to use AI tools effectively and ethically, including proper citation and academic integrity best practices. I recommend this article, which offers valuable insights and resources for responsible AI use in academic settings, from Michelle Kassorla, Ph.D. and Eugenia Novokshanova, Ph.D.: https://lnkd.in/gssM36by
Addressing Writing Instructors' Worries in the Age of AI
papers.ssrn.com
-
"With or without AI-based tools, students will always need faculty support to help develop the writing, critical thinking, rhetorical, and evaluative skills required to become effective communicators. Institutions will benefit from developing a clear and sophisticated definition of academic integrity to honor the student-faculty dynamic in the age of AI." https://lnkd.in/enb_nZua
Rethinking Academic Integrity Policies in the AI Era
grammarly.com
-
My 2 cents again on this subject: In response to Turnitin's new AI Paraphrasing Detection Feature, I urge us to focus on #education rather than detection. As educators, we must teach students how to use AI tools ethically and responsibly. #GenerativeAI is new to all of us, and expecting students to know how to use these tools transparently without guidance is unfair. Rather than using detection tools like Turnitin's feature punitively (ideally, not punitively at all), we should use them to initiate constructive conversations and learning opportunities. #AITools can greatly enhance education when used correctly. Turnitin does acknowledge that its detection tools are designed to initiate constructive conversations and should not be the sole basis for punitive action. However, I've heard of many faculty who do not use these indicators as a point of discussion but rather to catch and punish. By teaching students the ethical use of AI, we can prevent misuse and foster a more honest and effective learning environment. Let's prioritize education over punishment and guide our students in navigating this new technological landscape. https://lnkd.in/gH8_GuKe
Turnitin Helps Educators and Publishers Advance Critical Thinking with New AI Paraphrasing Detection Feature (ANZ)
turnitin.com
-
Lecturer @ Henley Business School | AI Skills for Higher Results | VR and Web3 Specialist | Founder NewBusinessEdu
Gen AI is changing the way we work and learn. Educators will need to assess learners by their ability to use these technologies to support critical thinking and demonstrate human capability. Writing essays no longer measures these skills. https://lnkd.in/eeKqMJ56
More than half of UK undergraduates say they use AI to help with essays
theguardian.com
-
Can ChatGPT give feedback to students on their writing? See this new article Patti Taylor and I wrote exploring the possibilities and limits of using LLMs like ChatGPT for essay feedback. Check it out in the Journal of Applied Learning and Teaching. It's open access! #artificialintelligence #teachingwriting #chatgpt
We continue to publish articles for the December issue (vol. 7, issue 2) of the Journal of Applied Learning & Teaching. JALT is a Scopus-indexed (Q1 in Scopus), open-access academic journal that charges neither its readers nor its authors.

We are pleased to present a research article by Prof Patricia Taylor and Prof Mark Marino: On feedback from bots: Intelligence tests and teaching writing

"One of the much-debated uses for AI, especially among writing instructors, is the potential for AI to take over the commenting and grading functions of teaching. In this paper, we describe the creation and use of AI for writing feedback in two separate but interconnected approaches: the use of the “Perfect Tutor” exercise in the classroom to teach students to conceptualize the components and priorities we bring to the writing process, and how students might struggle to make use of the same AI for feedback in a less actively guided context, or when the emphasis is not on the metacognition surrounding writing. During our examination of making bots and evaluating their feedback, we explore the limits of current AI. While emphasizing the importance of understanding the limitations, we also identify productive uses of these AI feedback bots in the college writing classroom to develop student critical thinking and writing."

All articles, past and present, are available here: https://lnkd.in/gsVZen4

#JALT, #JournalofAppliedLearningandTeaching, #HigherEducation, #HigherEducationResearch, #KaplantheChoice, #AIEd, #GenAI, Shannon Tan, Pauline Seah, Mohamed Fadhil Mohamed Ismail (Ph.D.), Dr Fiona Xiaofei Tang, Vanessa Stafford, Veronica Mitchell, Dr. Kyriaki Koukouraki SFHEA, Lydia Lymperis, PhD, Ailson J. De Moraes, Begüm Burak (PhD) ✍️🐝, Samson Tan (Dr), Dr Margarita Kefalaki, 柯菊, Stefan Popenici, Ph.D., Faiza Qureshi, Sayan Dey, Ph.D., Dr Tarin Ong, Anna Mills, Dr Mike Perkins, Dr Jasper Roe SFHEA, Leon Furze, Ahmed Tlili, Michael Adarkwah
-
I allow my doctoral students to collaborate with AI on written assignments. When I started teaching my students how to collaborate with AI, I immediately recognized the need to develop new rubrics that prioritize depth of thought, originality, and complexity of ideas over surface-level writing quality. Here's why:

· 𝐀𝐈 𝐞𝐱𝐜𝐞𝐥𝐬 𝐚𝐭 𝐬𝐮𝐫𝐟𝐚𝐜𝐞-𝐥𝐞𝐯𝐞𝐥 𝐢𝐦𝐩𝐫𝐨𝐯𝐞𝐦𝐞𝐧𝐭𝐬: AI tools are great at enhancing grammar, structure, and overall readability. This means that weaker ideas can be masked by well-polished writing.
· 𝐃𝐞𝐩𝐭𝐡 𝐨𝐟 𝐚𝐧𝐚𝐥𝐲𝐬𝐢𝐬 𝐢𝐬 𝐮𝐧𝐢𝐪𝐮𝐞𝐥𝐲 𝐡𝐮𝐦𝐚𝐧: The synthesis of complex ideas, critical evaluation of concepts, and application of theory to novel situations remain distinctly human skills.

𝐀𝐧𝐝 𝐠𝐮𝐞𝐬𝐬 𝐰𝐡𝐚𝐭?
🚫 You don't need an AI detector to distinguish between human and AI-generated text.
✅ Instead, focus on learning how to assess the student's quality of analysis and original thought.

Learn more at my upcoming webinar "𝐇𝐨𝐰 𝐭𝐨 𝐀𝐬𝐬𝐞𝐬𝐬 𝐖𝐫𝐢𝐭𝐢𝐧𝐠 𝐖𝐢𝐭𝐡𝐨𝐮𝐭 𝐀𝐈 𝐃𝐞𝐭𝐞𝐜𝐭𝐨𝐫𝐬." 🔗 https://lnkd.in/eNHzVdMj

#AIInEducation #AcademicWriting #DoctoralEducation #AICollaboration #AssessmentInnovation #TeachingwithAI #AILiteracy #AcademicChatter
How to Assess Writing Without AI Detectors
https://moxielearn.ai
-
Completely agree! Many writing teachers and coaches don't provide deep feedback beyond grammar and mechanics, perhaps because making the writing "readable" has always been the priority. Now that AI tools can clean up the writing, human expertise is needed to provide meaningful, nuanced feedback for idea development, organization, and writing craft.
How to Assess Writing Without AI Detectors
https://moxielearn.ai
-
Generative AI educator and career services innovator tapping into AI-enhanced thinking for professional growth
I agree with the general sentiment of this message, but I disagree that depth of analysis is uniquely human. It might be in its infancy, but generative AI is decent at identifying themes, rating writing against a rubric, and identifying gaps in knowledge. Analysis, originality, and complex synthesis are only a stone's throw away. Perhaps further, but at its current pace, I think it will be able to accomplish nuanced analysis relatively soon.
How to Assess Writing Without AI Detectors
https://moxielearn.ai
-
What do we gain and lose when students use AI to write? This insightful article delves into the evolving landscape of education, examining the impact of AI technologies on student writing skills, creativity, and critical thinking. #SupportingSuccess #EmpoweringTeachers #GreatTeaching #FrameworkforTeaching #TheDanielsonGroup
What Do We Gain and Lose When Students Use AI to Write? - EdSurge News
edsurge.com
Award-winning Professor at University of Central Oklahoma; AI in higher ed cautious optimist; public speaker on AI in higher ed; Tech Writer with a love of editing
Adding this to my reading queue.