141 Matching Annotations
  1. Oct 2023
    1. What is easier? Come up with good slogans out of nowhere, or come up with good slogans after getting a list of striking details?

      Of course this is the basis of keeping a zettelkasten for writing as well. When you can pull up prior ideas as a bank of information to work from, you're never starting from scratch, which is difficult not only for ChatGPT, but for people in general.

      Cross-reference research on naming "white" things versus naming with a more specific prompt, like "white" things in your refrigerator.

  2. Sep 2023
    1. What do you do then? You can take the book to someone else who, you think, can read better than you, and have him explain the parts that trouble you. ("He" may be a living person or another book--a commentary or textbook.)

      This may be an interesting use case for artificial intelligence tools like ChatGPT which can provide the reader of complex material with simplified synopses to allow better penetration of the material (potentially by removing jargon, argot, etc.)

  3. Aug 2023
  4. Jun 2023
    1. tech inevitability standpoint

      I can see correlations between this idea and the "proximate future" idea presented in this article: https://techpolicy.press/artificial-intelligence-and-the-ever-receding-horizon-of-the-future/

  5. May 2023
    1. strengths and weaknesses

      Yes, getting them questioning the output is step one. We have to rediscover critical thinking in the Era of Text Generation.

  6. Mar 2023
    1. Analysis of specifics from images, audio, or videos. Students would need to describe these kinds of media in detail in order to generate automated outputs about them.

      This is no longer true with ChatGPT 4. According to OpenAI, "GPT-4 can accept images as inputs and generate captions, classifications, and analyses." https://openai.com/product/gpt-4

    1. adapting teaching to this new reality

      I don't remember how I put this but this phrase seems so broad--we wouldn't all agree on adapting teaching, but we might all agree that we need to make explicit policies about AI.

    2. help students learn the “basic building blocks” of effective academic writing.

      I wonder what makes Onyper think students are learning these 'basic building blocks'--ChatGPT can produce them, but what is going on in the student's mind when they see what it produces? Reading a sample essay doesn't teach us to write...

    3. he writes in his course policy that the use of such models is encouraged, “as it may make it possible for you to submit assignments with higher quality, in less time.”

      Doesn't this imply that the purpose of the assignment is to produce a high quality product rather than the purpose being the student's learning?

  7. Feb 2023
    1. critique the products of AI writing tools

      Maybe start with Kevin Roose's conversation with "Sydney"--the alter-ego of the new AI powered Bing search/chat platform.

    2. an intimidating blinking cursor on a blank page

    3. We should be familiarizing ourselves with, and nurturing, our student’s writing styles and lines of inquiry.

      I've seen some pushback on this idea in conversations on Twitter and elsewhere. I've heard some instructors say they don't necessarily have bandwidth for this kind of intimate pedagogy.

      I'm sympathetic with that challenge--MANY teachers are overworked and overwhelmed--but I still don't think backing off of humanizing education is the right approach. I'd rather focus systematically on freeing up teachers to use this approach.

    4. In this moment of generative AI, Hypothesis continues to rely on what we’ve always done: support process-oriented pedagogies that make learning more accessible.

      Check out our follow-up post for practical ideas on how to use social annotation to build more scaffolded process into your courses.

    5. It’s hard to avoid concerns about plagiarism with the rise of ChatGPT.

      I really struggled with whether to mention plagiarism at all in this post. I didn't want to add to the hype about "cheating students" and the surveillance side of edtech that has profited off it. But I wouldn't be honest if I didn't admit that "plagiarism" is something mentioned by many of the frontline teachers I work with on the daily.

    6. What are the differences and affordances in moving from cadavre exquis to Eno/Schmidt's Oblique Strategies to ChatGPT?

    7. ChatGPT could be used as a writing prompt for writers to leverage for their work in much the same way that [[Benjamin Franklin]] rewrote existing works or the major plot point in the movie [[Finding Forrester]] in which Jamal used William's work as a springboard for his own.

      Link to: https://hypothes.is/a/HPQLinKXEemyqafW9xlIFQ.

    1. What we do know is that ChatGPT’s underlying tech is GPT-3 and OpenAI plans to drop an upgraded version, GPT-4 in 2023. Asking students to train the thing that might take away opportunities from them down the road seems particularly cannibalistic but I also don’t know how you fight something you don’t understand.

      Or, since many of our students in higher ed will be entering the knowledge work sector, it's a fair question to ask: what do you want that sector to be like? Do you want those jobs to be more like the minder of intelligent machines? Or do you want it to be a place where the human in the loop is still a craftsman with agency?

    2. I am skeptical of the tech inevitability standpoint that ChatGPT is here

      inevitability is such an appropriate word here, because it captures a sort of techno-maximalist "any-benefit" mindset that sometimes pervades the ed-tech scene (and the position of many instructional designers and technologists)

    1. A calculator performs calculations; ChatGPT guesses. The difference is important.

      Thank you! So beautifully and simply put. ChatGPT is also used mostly for tasks where there is no one clear right answer.

    1. synthetic writing

      Interesting phrase.

    2. In PRS, I encouraged teachers to shift their focus from asking questions to teaching students how to ask high-quality questions themselves.

      Perhaps in annotations?

    1. https://www.cyberneticforests.com/ai-images

      Critical Topics: AI Images is an undergraduate class delivered for Bradley University in Spring 2023. It is meant to provide an overview of the context of AI art making tools and connects media studies, new media art, and data ethics with current events and debates in AI and generative art. Students will learn to think critically about these tools by using them: understand what they are by making work that reflects the context and histories of the tools.

    1. However, the article does not take a clear stance on the matter and does not offer a conclusion on whether the capitalization of the word "black" is a good or bad thing.

      This implies that it should take a stand but doesn't say why. Note that the New York Times article is not an editorial.

      This summary also misses the stand implied by the choice to end on a quote from scholar Crystal Fleming explaining why she capitalizes "Black":

      "'Frankly,” she said, “because I want to. That’s also something that I think we need to see more of in every field — black people doing what we want to do.'"

    2. long overdue

      Why? What value do these people see in the capitalization? This summary ignores the idea of power, which is central in the article. Eligon writes that capitalizing "Black" "confers a sense of power and respect to black people, who have often been relegated to the lowest rungs of society through racist systems, black scholars say."

    3. Some scholars believe that the capitalization of the word "Black" lumps people of African descent into a monolithic group and erases their diversity, while others believe that writing "Black" but not "White" is giving white people a pass on recognizing their racial privilege.

      Here is common academic phrasing to contrast ideas: "some scholars argue...while others believe that..."

      However, it's not a great choice to use this contrast phrasing for the particular ideas described here. The previous sentence used "while" to contrast one view in favor of "Black" with a view against it. But the two parts of this particular sentence are not opposing views. The first idea is against capitalization, whereas the second idea doesn't take a stance on "Black" vs. "black" but rather is weighing in on what we should do about terminology for whiteness if we do capitalize "Black."

      This is an example of how ChatGPT uses academic phrasing in ways that read as natural and even sound intelligent, but on closer examination remind us that ChatGPT has no understanding of the ideas.

    4. John Eligon, a New York Times columnist, writes about the debate surrounding the capitalization of the word "black" to refer to African-Americans. The move to capitalize the word has gained widespread acceptance in recent weeks and has sparked a debate over identity, race, and power. Many news organizations have changed their style to capitalize the word, but not all have embraced it, including The New York Times and The Washington Post, who are still grappling with the decision. The capitalization of black is part of a long-standing struggle to refer to people of African descent in the most respectful and accurate way.

      Here's a sample ChatGPT critical assessment of the NY Times article at https://www.nytimes.com/2020/06/26/us/black-african-american-style-debate.html

      For contrast, see this human-written sample essay from the textbook How Arguments Work: A Guide to Writing and Analyzing Texts in College: https://human.libretexts.org/Bookshelves/Composition/Advanced_Composition/Book%3A_How_Arguments_Work_-A_Guide_to_Writing_and_Analyzing_Texts_in_College(Mills)/04%3A_Assessing_the_Strength_of_an_Argument/4.11%3A_Sample_Assessment_Essays/4.11.02%3A_Sample_Assessment-_Typography_and_Identity

  8. platform.openai.com
    1. Educator considerations for ChatGPT<br /> https://platform.openai.com/docs/chatgpt-education

      via Hypothesis, in Liquid Margins 38: The rise of ChatGPT and how to work with and around it (02/09/2023 16:11:54)

    2. upskilling activities in areas like writing and coding (debugging code, revising writing, asking for explanations)

      I'm concerned people will see this and remember it without thinking of all the errors that are described later on in this document.

    3. ChatGPT use in Bibtex format as shown below:

      Glad they are addressing this, and I hope they will continue to offer such suggestions. I don't think ChatGPT should be classed as a journal. We really need a new way to acknowledge its use that doesn't imply that it was written with intention or that a person stands behind what it says.
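      For illustration only, here is a hypothetical sketch of the kind of BibTeX entry being discussed; the entry type, citation key, fields, and URL below are my own assumptions, not OpenAI's published template:

      ```bibtex
      @misc{chatgpt2023,
        author       = {{OpenAI}},
        title        = {ChatGPT},
        year         = {2023},
        howpublished = {\url{https://chat.openai.com/}},
        note         = {Hypothetical entry; the exact prompt used could be recorded in this note field}
      }
      ```

      Using a `@misc` entry with a descriptive `note` sidesteps classing ChatGPT as a journal or author in the usual sense, which is exactly the acknowledgment problem raised above.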

    4. will continue to broaden as we learn.

      Since there is a concern about the bias of the tool toward English and developed nations, it would be great if they could include global educators from the start.

    5. As part of this effort, we invite educators and others to share any feedback they have on our feedback form as well as any resources that they are developing or have found helpful (e.g. course guidelines, honor code and policy updates, interactive tools, AI literacy programs, etc).

      I wonder how this information will be shared back so that other educators can benefit from it. I maintain a resource list for educators at https://wac.colostate.edu/repository/collections/ai-text-generators-and-teaching-writing-starting-points-for-inquiry/

    6. one factor out of many when used as a part of an investigation determining a piece of content’s source and making a holistic assessment of academic dishonesty or plagiarism.

      It's still not clear to me how they can be used as evidence of academic dishonesty at all, even in combination with other factors, when they have so many false positives and false negatives. I can see them used to initiate a conversation with a student, and possibly a rewrite of a paper. This is tricky.

    7. Ultimately, we believe it will be necessary for students to learn how to navigate a world where tools like ChatGPT are commonplace. This includes potentially learning new kinds of skills, like how to effectively use a language model, as well as about the general limitations and failure modes that these models exhibit.

      I agree, though I think we should emphasize teaching about the limitations before teaching how to use the models. Critical AI literacy must become part of digital literacy.

    8. Some of this is STEM education, but much of it also draws on students’ understanding of ethics, media literacy, ability to verify information from different sources, and other skills from the arts, social sciences, and humanities.

      Glad they mention this since I am skeptical of claims that students need to learn prompt engineering. The rhetorical skills I use to prompt ChatGPT are mainly learned by writing and editing without it.

    9. While tools like ChatGPT can often generate answers that sound reasonable, they can not be relied upon to be accurate consistently or across every domain. Sometimes the model will offer an argument that doesn't make sense or is wrong. Other times it may fabricate source names, direct quotations, citations, and other details. Additionally, across some topics the model may distort the truth – for example, by asserting there is one answer when there isn't or by misrepresenting the relative strength of two opposing arguments.

      If we teach about ChatGPT, we might do well to showcase examples of these kinds of problems in output so that students develop an eye for them and an intuitive understanding that the model isn't thinking or reasoning or checking what it says.

    10. While the model may appear to give confident and reasonable sounding answers,

      This is a bigger problem when we use ChatGPT in education than in other arenas because students are coming in without expertise, seeking to learn from experts. They are especially susceptible to considering plausible ChatGPT outputs to be authoritative.

    11. Web browsing capabilities and improving factual accuracy are an open research area that you can learn more in our blog post on WebGPT.

      Try PerplexityAI for an example of this. Google's Bard should be another example when released.

    12. subtle ways.

      Glad they mention this in the first line. People will see the various safeguards and assume that ChatGPT is safe because work has been done on this, but there are so many ways these biases can still surface, and since they are baked into the training data, there's not much prospect of eliminating them.

    13. Verifying AI recommendations often requires a high degree of expertise,

      This is a central idea that I wish were foregrounded. If we are trying to use auto-generated text in a situation in which truth matters, we need to be quite knowledgeable and also invest time in evaluating what that text says. Sometimes that takes more time than writing something ourselves.

    14. students may need to develop more skepticism of information sources, given the potential for AI to assist in the spread of inaccurate content.

      It strikes me that OpenAI itself is warning of a coming flood of misinformation from language models. I'm glad they are doing so, and I hope they keep investing in improving their AI text classifier so we have some ways to distinguish human writing from machine-generated text.

    15. Educators should also disclose the use of ChatGPT in generating learning materials, and ask students to do so when they incorporate the use of ChatGPT in assignments or activities.

      Yes! We must begin to cultivate an ethic of transparency around synthetic text. We can acknowledge to students that we might sometimes be tempted to autogenerate a document and not acknowledge the role of ChatGPT (I have certainly felt this temptation).

    16. export their ChatGPT use and share it with educators. Currently students can do this with third-party browser extensions.

      This would be wonderful. Currently we can use the ShareGPT extension for this.

    17. they and their educators should understand the limitations of the tools outlined below.

      I appreciate these cautions, but I'm still concerned that by foregrounding the bulleted list of enticing possibilities, this document will mainly have the effect of encouraging experimentation with only lip service to the cautions.

    18. custom tutoring tools

      I'm concerned that any use of ChatGPT for tutoring would fall under the "overreliance" category as defined below. Students who need tutoring do not usually have the expertise or the time to critically assess or double check everything the tutor tells them. ChatGPT already comes off as more authoritative than it is. It will come across as even more authoritative if teachers are recommending it as a tutor.

    1. Why are people so quick to be impressed by the output of large language models (LLMs)?

      This take-down doesn't actually address this question; it uses it as a dismissal.

      It is a good question, though, and not one to be dismissed--its causes might be interrogated.

      I am impressed (while also skeptical of ChatGPT). Does that make me dumb?

    1. So will AI text generation tools revolutionize or kill college writing? Both! Neither! For sure! Probably! Eventually! Somewhat! It’s…complicated.

      Nice summary of the discourse on ChatGPT!

    2. e-Literate isn’t about what I know. It’s about what I’m learning.

      There's an interesting point to be made about process here. Can the same be said for coursework: that writing for a class isn't about what you know, it's about what you are learning?

    3. Particularly if used judiciously as part of the writing curriculum rather than the whole thing, it could be quite useful.

      Very sensible.

    4. students are heavily influenced by whether they believe their teacher cares about their learning.

      Making writing more of a process rather than a product, a process in which the teacher gives regular feedback to the student, would help build that relationship.

    5. Then I would have edited the output

      Interesting. Collaborating with the bot in composition. It gets you started, but you are still needed.

    1. ChatGPT doesn’t mark the end of high school English class, but it can mark the end of formulaic, mediocre writing performance as a goal for students and teachers. That end is long overdue, and if ChatGPT hastens that end, then that is good news.

      Provocative argument: ironically, it's the standardization of learning that is killed by AI writing platforms.

    2. Both started with a version of “Work A and Work B have many similarities and many differences,” an opening sentence that I would have rejected from a live student

      So what's the point--that ChatGPT isn't really all that sophisticated in its analysis? That it relies on cliched structures? Either way or both, I kind of buy it. It's not a creative writer. It's utilitarian.

      There's also an interesting point to be made here in terms of the prompts teachers provide students for essays. They too need to be sophisticated, rather than simply "compare and contrast these two books."

    3. If they put a great degree of thought into designing a prompt, would that not mean that they were doing something involving real learning?

      Yes!

    4. I suspect that test runs with ChatGPT depend in part on the richness of the prompt given,

      Writing good prompts could be something we teach students.

    5. And the algorithm cannot manage supporting its points with quotes from the works, a pretty fundamental part of writing about literature.

      ChatGPT not good at integration of quotes, a key piece of writing from evidence.

    1. He said it was “very naive” to think it would be possible to impose restrictions on internet platforms, particularly with Microsoft primed to integrate AI into its search engine, Bing.“Are you going to ban Google and Bing?”

      Fair point.

    1. At the same time, we need to continue building activities and assessments to make classroom work more specific and experiential.

      Yes! Not sure that means banning AI as a tool which this essay ends up arguing.

    2. Pedagogically speaking, focusing on the grunt work of trying out ideas—watching them develop, wither, and cede ground to better ones—is the most valuable time we can spend with our students. We surrender that time to Silicon Valley and the messy database that is the internet at the peril of our students.

      This turns into a very traditional argument of the don't use Wikipedia variety.

    3. digital utopians might claim that students and teachers will have more opportunities for critical thinking because generating ideas—the grunt work of writing—isn’t taking up any of our time. Along this line of thinking, ChatGPT is just another calculator, but for language instead of numerical calculation.

      I'm still compelled by this idea TBH...

    1. Analysis of recent events not in the training data for the system.

      Wouldn't analysis and commentary on recent events be readily available on the Internet?

    2. Note that ChatGPT can produce outputs that take the form of  “brainstorms,” outlines, and drafts. It can also provide commentary in the style of peer review or self-analysis. Nonetheless, students would need to coordinate multiple submissions of automated work in order to complete this type of assignment with a text generator.

      Interesting. It almost takes MORE work to use ChatGPT in the context of such a heavily scaffolded writing process.

    3. get a better sense of their thinking

      And if we're reading more of their writing through social annotation or other "steps" in the process, we also become familiar with their thinking.

    4. a process that empowers critical thinking

      Yes, I've never felt I was simply teaching writing when I taught composition. Writing was a visible end product of a lot of other work (reading, thinking, and non-summative pre-writing activities) that I was training students in.

    5. students who feel connected to their writing will be less interested in outsourcing their work to an automated process.

      Love this idea. Teaching students to own and enjoy their writing.

    6. skip the learning and thinking around which their writing assignments are designed.

      Or does it focus the learning? Just as I don't really care if my students know how to spell as long as they use spell check, what does writing with ChatGPT open up in terms of enabling students and instructors to focus on different aspects of writing.

    1. Augmenting teachers, not replacing them

      Amen!

    2. There’s a line somewhere between using ChatGPT in collaboration, and getting it to do all the work.

      Important point.

    3. ChatGPT is not an original thinker, but you are.

      This is important to remind students of too. And maybe a key area for teachers to focus on what students could contribute to a writing process that includes ChatGPT.

    4. using the model’s suggestions as a starting point

      Perhaps the same with students. Not using ChatGPT to write the essay, but perhaps in the brainstorming process.

    5. Right now, one of the most powerful things you can learn about ChatGPT is how to write quality prompts.

      Interesting. Writing instructors could start to train students in writing prompts for AI. The rubrics below are not dissimilar from what we traditionally ask student to do in their writing. So maybe ChatGPT isn't the death of the essay!

    6. Beyond the media hype about cheating,

      I think it's important to move past the plagiarism aspect of the debates around ChatGPT, but don't think it's just "hype." Teachers are concerned.

    1. "I would much rather have ChatGPT teach me about something than go read a textbook."

      What about accuracy? Textbooks go through a rigorous process of composition and editing to ensure accuracy. Most of what exists to be scraped on the internet does not. I realize this is an old Web 2.0 "problem."

      (Would textbooks even be available for scraping by ChatGPT? What does it have access to?)

    2. the company has also heard from them that the chat bot can be "an unbelievable personal tutor for each kid," Altman said.

      ChatGPT as a tutor. Perhaps with the same guardrails in place so that tutors don't do the work for the students.

    3. "We adapted to calculators and changed what we tested for in math class, I imagine.

      What are the implications here for the writing instructor? What "computational" equivalent to basic calculation would then be no longer central to teaching writing?

    1. This framing means that as educators we need to be clear not only about what we hope our students are learning but also about how and why.

      This seems to point to process over product and more formative assessment or scaffolding as part of instruction.

    2. The main goal of transparent teaching is simple: to promote students’ conscious understanding of how they learn.

      So metacognition?

    3. The educational issues surrounding ChatGPT are similar in kind to those we've seen with the growing power of the web

      Yeah, is this even a new thing? Is this the same debate we've always had?

    1. Note that students will not be able to cite ChatGPT using a link to their generated response; instead, ask students to repeat the exact language of their search query in the footnotes in lieu of a link

      Actually citation is possible with this extension.

    2. formulaic syntax

      Interesting. So creativity is not its strength. It's imitative.

    3. These tools, along with a range of other practices,

      Yes, the practices are key! I doubt the battle of algorithms can be won by either side.

    4. Get support. Consider starting a conversation with other teachers or the child’s family about AI-generated workand the importance of students writing authentically.

      For higher education students, and especially where it's used as a support tool, I think it would be very useful to highlight how they may get the extra support they need. Not everyone cheats because they are lazy or disorganised. Getting into higher education is an achievement, so throwing it away in this way could point to deeper issues around support, well-being, etc.

    5. Set Clear Classroom Expectations For AI-Generated Writing

      I think this is crucial, along with educating the public, parents, students, and other teachers about what it can and can't do. The more familiar people are with it, the more quickly scaremongering and negative attitudes towards its use might be addressed. Everyone is an expert and has their own views based on what they have read or been told, so we may as well do what we can to promote ethical and sensible/useful examples of its application to support teaching and learning.

    6. academic dishonesty

      Raises concerns around permissions to use third-party tools and submission of students' work, as well as what happens if it's wrong. What would that do to student trust?

    7. currently free

      As of 2 Feb 2023: launch in the US (only) of a ChatGPT premium subscription for $20 a month. Article here via CNN.

    8. Created by CommonLit and Quill.org

      Both non-profits, which makes me feel better :)

    1. create assessments that “take into consideration the processes and experiences of learning.”

      Annotation!

    2. Ask students to engage in metacognitive reflection that has them articulate what they have learned, how they have learned it, and why the knowledge is valuable.

      Students annotating their own writing?

    1. Is this moment more like the invention of the calculator, saving me from the tedium of long division, or more like the invention of the player piano, robbing us of what can be communicated only through human emotion?

      Great question!

    2. The question isn’t “How will we get around this?” but rather “Is this still worth doing?”

      Somewhat defeatist. Quit rather than evolve?

    3. The rudiments of writing will be considered a given, and every student will have direct access to the finer aspects of the enterprise.

      I wonder if there are analogs in math.

      The graphing calculator, for example, must have changed how math was taught, removing the need for that lower-order computation in math.

    4. Last night, I received an essay draft from a student. I passed it along to OpenAI’s bots. “Can you fix this essay up and make it better?” Turns out, it could. It kept the student’s words intact but employed them more gracefully; it removed the clutter so the ideas were able to shine through. It was like magic.

      This is probably scariest of all: ChatGPT as editor rather than author.

    5. nor does it successfully integrate quotations from the original texts

      Interesting. This is probably a skill AI could easily develop, rather than a hard limit of the technology.

      But, for now, maybe a good indicator of more sophisticated writing.

    6. What GPT can produce right now is better than the large majority of writing seen by your average teacher or professor.

      Wow, that's a provocative statement! What is meant by better here?

      On some level, I've always felt that a poorly-written, but original essay is better than a well-written, well-analyzed but plagiarized one.

    1. methods of assessment that take into consideration the processes and experiences of learning, rather than simply relying on a single artifact like an essay or exam. The evidence of learning comes in a lot of different packages

      How about Hypothesis social annotation throughout a course and throughout the process of essay composition.

    2. The fact that the AI writes in fully fluent, error-free English with clear structure virtually guarantees it a high score on an AP exam

      Yikes!

    3. ChatGPT may be a threat to some of the things students are asked to do in school contexts, but it is not a threat to anything truly important when it comes to student learning.

      Great line, powerful claim.

    4. an opportunity to re-examine our practices and make sure how and what we teach is in line with our purported pedagogical values.

      Love this.

    5. Rather than letting students explore the messy and fraught process of learning how to write, we have instead incentivized them to behave like algorithms, creating simulations that pass surface-level muster

      Annotation shows that messy process.

  9. Jan 2023
    1. In The New Laws of Robotics, legal scholar Frank Pasquale argues for guidance from professional organizations about whether and how to use data-driven statistical models in domains such as education or health care.

      Very interesting. Hypothesis, in its small way, can perhaps help some educators...

    2. we need collaborative processes to seek clarity.

      Indeed!

      And the reminder that writing (and knowledge production more generally) is always collaborative, has an audience, both potentially elided by relying on ChatGPT to generate prose/ideas.

    3. slow thinking,

      Love it! Social annotation certainly helps slow reading, IMO.

    4. Should I ask students to prompt a language model and then critique its output?

      Great assignment idea!

    5. preferences of data scraped from internet sites hardly renowned for their wisdom or objectivity.

      Something else we try to teach our students, right?

    6. “mathy math,” a model of language sequences built by “scraping” the internet and then, with massive computing, “training” the model to predict the sequence of words most likely to follow a user’s prompt

      A kind of plagiarism in and of itself?

    7. What a contrast to the masochistic persistence I had practiced for so many years and preached to my struggling students.

      So true. Writing is hard, isn't it? ChatGPT sometimes makes it look easy. What will students make of that!?

    1. Back in the early 2000s, I used to demonstrate to students how EasyBib often gets it wrong when it comes to MLA formatting.

      This is a great analogy. I remember feeling the same way about EasyBib when teaching comp.

    2. having students socially annotate the paper, practicing their editing and fact-checking skills.

      Yes! Would love to see an example of such an assignment.

    3. The text is being generated on behalf of the student and is being substituted for the student’s self-generated text. This use of AI is inherently dishonest.

      Could one still argue that it's a component piece of the text/writing that is generated? Just like spelling, grammar, and citation are?

      No doubt it's a lot MORE of the text that is generated and COULD be handed in completely as is in many cases. But could it nonetheless be seen as a kind of starting point for students to then focus on other work, other skills? Like the editing processes mentioned above.

    4. Teaching students to be good critical readers takes time and requires instructors develop activities, such as social annotation assignments, that draw students’ attention to the details of a well-written text.

      Yes! And they ARE writing when they read and annotate, so they can still practice and instructors can still evaluate that skill. It's just a very different writing assignment than a final paper.

    5. So, while effective editors may or may not be exceptional writers, they must be great critical readers.

      I have often wondered (when I was an English teacher), am I teaching writing or reading? Obviously the answer is both.

      The product of so many English courses is paper writing, but that's also meant to be an assessment of a student's reading, right?

      So maybe what's needed is a shift toward reading as a formative assessment?