Rather than banning AI, I'm showing students how to use it effectively as a personalized TA. I'm giving them this AGENTS.md file:
https://gist.github.com/1cg/a6c6f2276a1fe5ee172282580a44a7ac
And showing them how to use AI to summarize the slides into a quiz review sheet, generate example questions with answer walkthroughs, etc.
Of course I can't ensure they aren't just having AI do the projects, but I tell them that if they do that, they are cheating themselves: the projects are designed to draw them into the art of programming and give them decent, real-world coding experience that they will need, even if they end up working at a higher level in the future.
AI can be a very effective tool for education if used properly. I have used it to create a ton of extremely useful visualizations (e.g. how two's complement works) that I wouldn't have otherwise. But it is obviously extremely dangerous as well.
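As a flavor of what such a demo might cover, here's a minimal, hypothetical Python sketch (not the actual visualization mentioned above):

```python
# Minimal two's-complement demo: how an n-bit pattern maps to a signed value.
# (Hypothetical sketch for illustration, not the visualization mentioned above.)

def to_twos_complement(value: int, bits: int = 8) -> str:
    """Return the n-bit two's-complement bit pattern of a signed integer."""
    return format(value & ((1 << bits) - 1), f"0{bits}b")

def from_twos_complement(pattern: str) -> int:
    """Interpret a bit string as a signed two's-complement integer."""
    bits, raw = len(pattern), int(pattern, 2)
    # If the sign bit is set, subtract 2^bits to recover the negative value.
    return raw - (1 << bits) if raw & (1 << (bits - 1)) else raw

for v in (5, -5, 127, -128):
    p = to_twos_complement(v)
    print(f"{v:5d} -> {p} -> {from_twos_complement(p)}")
```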
"It is impossible to design a system so perfect that no one needs to be good."
Competing with students using LLM software, 'honest' students would seem strongly incentivized to use LLMs themselves. Even if you don't grade on a curve, honest students will get worse grades, which will look worse to graduate schools, grant and scholarship committees, etc., in addition to the strong emotional component everyone feels seeing an A or a C. You could give deserving 'honest' work an A, but then all LLM users will get A's with ease. It seems like you need two scales, and how do you know who to put on which scale?
And how do students collaborate on group projects? Again, it seems you have two different tracks of education, and they can't really work together. Edit: How do class discussions play out with these two tracks?
Also, manually doing things that machines do much better has value, but it also takes time away from learning more advanced skills that machines can't handle, and from learning how to use the machines as tools. I can see learning manual statistics calculations to understand them fundamentally, but at a certain point it's much better to learn R and use a stats package. Are the 'honest' students being shortchanged?
> Of course I can't ensure they aren't just having AI do the projects, but I tell them that if they do that, they are cheating themselves
This is the right thing to say, but even the ones who want to listen can get into bad habits in response to intense schedules. When push comes to shove and Multivariate Calculus exam prep needs to happen but you’re stuck debugging frustrating pointer issues for your Data Structures project late into the night… well, I certainly would’ve caved far too much for my own good. IMO the natural fix is to expand your trusting, “this is for you” approach to the broader undergrad experience, but I can’t imagine how frustrating it is to be trying to adapt while admin & senior professors refuse to reconsider the race for a “””prestigious””” place in a meta-rat race…
For now, I guess I’d just recommend you try to think of ways to relax things and separate project completion from diligence/time management — in terms of vibes if not a 100% mark. Some unsolicited advice from a rando who thinks you’re doing great already :)
This is why I'm moving to in-person written quizzes, to differentiate between the students who know the material and those who are just using AI to get through it.
I do seven quizzes during the semester so each one is on relatively recent material and they aren't weighted too heavily. I do some spaced-repetition questions of important topics and give students a study sheet of what to know for the quiz. I hated the high-pressure midterms/finals of my undergrad, so I'm trying to remove that for them.
The pressure was what got me to do the necessary work. Auditing classes never worked for me.
> I do some spaced-repetition questions of important topics and give students a study sheet of what to know for the quiz.
Isn't that what the lectures and homework are for?
I find, as a parent, that when I talk about it at the high school level I get very negative reactions from other parents. Specifically, I want high schoolers to be skilled in the use of AI, and in particular in critical thinking around the tools, while simultaneously having the skills to work without AI. I don't want the school to be blindly "anti-AI", as I'm aware it will be a part of the economy our kids are brought into.
There are some head-in-the-sand, very emotional attitudes about this stuff. (And obviously idiotically uncritical pro-AI stances too, but I doubt educators risk having those stances.)
That said, I agree with all your points too: some version of this argument will apply to most white-collar jobs now. I just think this is less clear to the general population, and it's much more of a touchy, emotional subject in certain circles. Although I suppose there may be a point to be made about being slightly more cautious about introducing AI at the high school level versus college.
No, it's not.
Nothing around AI past the next few months to a year is clear right now.
It's very, very possible that within the next year or two, the bottom falls out of the market for mainstream/commercial LLM services, and then all the Copilot and Claude Code and similar services are going to dry up and blow away. Naturally, that doesn't mean that no one will be using LLMs for coding, given the number of people who have reported their productivity increasing—but it means there won't be a guarantee that, for instance, VS Code will have a first-party integrated solution for it, and that's a must-have for many larger coding shops.
None of that is certain, of course! That's the whole point: we don't know what's coming.
The genie is out of the bottle, never going back
It's a fantasy to think it will "dry up" and go away
Some other guarantees we can make over the next few years, based on history: AI will get better, faster, and more efficient, like everything else in CS.
Plenty of tech becomes exploitative (or more exploitative).
I don't know if you noticed but 80% of LLM improvements are actually procedural now: it's the software around them improving, not the core LLMs.
Plus LLMs have huge potential for being exploitative. 10x what Google Search could do for ads.
I think that Microsoft will not be willing to operate Copilot for free in perpetuity.
I think that there has not yet been any meaningful large-scale study showing that it improves performance overall, and there have been some studies showing that it does the opposite, despite individuals' feeling that it helps them.
I think that a lot of the hype around AI is that it is going to get better, and if it becomes prohibitively expensive for it to do that (i.e., training), and there's no proof that it's helping, and keeping the subscriptions going is a constant money drain, and there's no more drumbeat of "everything must become AI immediately and forever", more and more institutions are going to start dropping it.
I think that if the only programmers who are using LLMs to aid their coding are hobbyists, independent contractors, or in small shops where they get to fully dictate their own setups, that's a small enough segment of the programming market that we can say it won't help students to learn that way, because they won't be allowed to code that way in a "real job".
Our university is slowly stumbling towards "AI Literacy" being a skill we teach, but, frankly, most faculty here don't have the expertise and students often understand the tools better than teachers.
I think there will be a painful adjustment period, I am trying to make it as painless as possible for my students (and sharing my approach and experience with my department) but I am just a lowly instructor.
People need to learn to do research with LLMs, code with LLMs, how to evaluate artifacts created by AI. They need to learn how agents work at a high level, the limitations on context, that they hallucinate and become sycophantic. How they need guardrails and strict feedback mechanisms if let loose. AI Safety connecting to external systems etc etc.
You're right that few high school educators would have any sense of all that.
I do know people who would get egregiously wrong answers from misusing a calculator and insisted it couldn't be wrong.
This is my exact experience as well and I find it frustrating.
If current technology is creating an issue for teachers - it's the teachers that need to pivot, not block current technology so they can continue what they are comfortable with.
Society typically cares about work getting done and not much about how it got done. For some reason, teachers are so deep in the weeds of the "how" that they seem to forget this: if the way to mend roads since 1926 has been to learn how to measure out, mix, and lay asphalt patches by hand, then in 2026, when there are robots that do it perfectly every time, they should be teaching humans to complement those robots or do something else entirely.
It's possible that, in the past, learning how to use an abacus was a critical lesson, but once calculators were invented, do we continue with two semesters of abacus? Do we allow calculators into the abacus course? Should the abacus course be scrapped? Would it be a net positive for society to replace the abacus course with something else?
"AI" is changing society fundamentally forever and education needs to change fundamentally with it. I am personally betting that humans in the future, outside extreme niches, are generalists and are augmented by specialist agents.
I had a discussion with a recruiter on Friday, and I said I guess the issue with AI vs human is, if you give a human developer who is new to your company tasks, the first few times you'll check their work carefully to make sure the quality is good. After a while you can trust they'll do a good job and be more relaxed. With AI, you can never be sure at any time. Of course a human can also misunderstand the task and hallucinate, but perhaps discussing the issue and the fix before they start coding can alleviate that. You can discuss with an AI as much as you want, but to me, not checking the output would be an insane move...
To return to the point: yeah, people will use AI anyway, so why not teach them about the risks. Also, LLMs feel like Concorde: they'll get you where you want to go very quickly, but at tremendous environmental cost (they're also very costly to the wallet, although the companies are now partially subsidizing your use in hopes of getting you addicted).
I once got "implement a BCD decoder" with about a 1"x4" space to do it.
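(For anyone wondering, the core really does fit in a tiny space. A minimal software sketch, assuming the question meant decoding packed BCD to decimal; a hardware answer would be the 4-to-10 decoder truth table instead.)

```python
# Packed BCD stores one decimal digit per 4-bit nibble; nibbles 10-15 are invalid.
# (Hypothetical sketch; the actual exam presumably expected a hand-drawn answer.)

def bcd_digit(nibble: int) -> int:
    """Decode one 4-bit BCD nibble (0b0000..0b1001) to its decimal digit."""
    if not 0 <= nibble <= 9:
        raise ValueError(f"invalid BCD nibble: {nibble:#06b}")
    return nibble

def bcd_decode(data: bytes) -> int:
    """Decode packed BCD (two digits per byte), e.g. 0x42 0x17 -> 4217."""
    value = 0
    for byte in data:
        value = value * 100 + bcd_digit(byte >> 4) * 10 + bcd_digit(byte & 0x0F)
    return value

print(bcd_decode(bytes([0x42, 0x17])))  # -> 4217
```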
I'm concerned about handwriting, which is a lost skill, and how hard that will be on the TAs who are grading the exams. I have stressed to students that they should write larger, slower and more carefully than normal. I have also given them examples of good answers: terse and to the point, using bulleted lists effectively, what good pseudo-code looks like, etc.
It is an experiment in progress: I have rediscovered the joys of printing & the logistics of moving large amounts of paper again. The printer decided halfway through one run to start folding papers slightly at the corner, which screwed up stapling.
I suppose this is why we are paid the big bucks.
Oh man, this reminds me of one test I had in uni, back in the days when all our tests were in class, pen & paper (what's old is new again?). We had this weird class that taught something like security programming in unix. Or something. Anyway, all I remember is the first two questions being about security/firewall stuff, and the third question was "what is a socket". So I really liked the first two questions, and over-answered for about a page each. Enough text to both run out of paper and out of time. So my answer to the 3rd question was "a file descriptor". I don't know if they laughed at my terseness or just figured since I overanswered on the previous questions I knew what that was, but whoever graded my paper gave me full points.
So how do you handle kids who can't write well? The same way we've been handling them all along — have them get an assessment and determine exactly where they need support and what kind of support will be most helpful to that particular kid. AI might or might not be a part of that, but it's a huge mistake to assume that it has to be a part of that. People who assume that AI can just be thrown at disability support betray how little they actually know about disability support.
It's embarrassing to see this question downvoted on here. It's a valid question, there's a valid answer, and accessibility helps everyone.
There's no such thing as "disabled people who can't write well"; there are individuals with specific problems and needs.
Maybe there's Jessica, who lost her right hand and is learning to write with the left, who gets extra time. Maybe there's Joe, who has some kind of nerve issue and uses a specialized pen that helps cancel out tremors. Maybe Sarah is blind and has an aide who writes for her, or is allowed to use a keyboard, or or or...
Reasonable accommodations absolutely should be made for children that need them.
But also just because you’re a bad parent and think the rules don’t apply to you doesn’t mean your crappy kid gets to cheat.
Parents are the absolute worst snowflakes.
This is the key part. I'm doing a part-time graduate degree at a major university right now, and it's fascinating to watch the week-to-week pressure AI is putting on the education establishment. When your job as a student is to read case studies and think about them, but Google Drive says "here's an automatic summary of the key points" before you even open the file, it takes a very determined student to ignore that and actually read the material. And if no one reads the original material, the class discussion is a complete waste of time, with everyone bringing up the same trite points, and the whole exercise becomes a facade.
Schools are struggling to figure out how to let students use AI tools to be more productive while still learning how to think. The students (especially undergrads) are incredibly good at doing as little work as possible. And until you get to the end-of-PhD level, there's basically nothing you encounter in your learning journey that ChatGPT can't perfectly summarize and analyze in 1 second, removing the requirement for you to do anything.
This isn't even about AI being "good" or "bad". We still teach children how to add numbers before we give them calculators because it's a useful skill. But now these AI thinking-calculators are injecting themselves into every text box and screen, making them impossible to avoid. If the answer pops up in the sidebar before you even ask the question, what kind of masochist is going to bother learning how to read and think?
In my first year of college my calculus teacher said something that stuck with me: "you learn calculus getting cramps in your wrists". Yeah, AI can help you remember things and accelerate learning, but if you don't put in the work to understand things, you'll always be behind people who know, at least at a bird's-eye view, what's happening.
Depends. You might end up going quite far without ever opening up the hood of a car, even when you drive it every day and depend on it for your livelihood.
If you're the kind that likes to argue for a good laugh, you might say "well, I don't need to know how my car works as long as the engineer who designed it does, or the mechanic who fixes it does" - and this is accurate, but it's also accurate that not everyone ended up being either the engineer or the mechanic. And it's untrue that, if it turned out to be extremely valuable for you to actually learn how the car worked, you couldn't put in the effort to do so and be very successful at it.
All this talk about "you should learn something deeply so you can bank on it when you will need it" seems to be a bit of a hoarding disorder.
Given the right materials, support and direction, most smart and motivated people can learn how to get competent at something that they had no clue about in the past.
When it comes to smart and motivated people, the best drop out of education because they find it unproductive and pedantic.
My argument is that when you have at least a basic knowledge of how things work (be it as a musician, a mechanical engineer or a scientist) you are in a much better place to know what you want/need.
That said, smart and motivated people thrive if they are given the conditions to thrive, and I believe that physical interfaces have way less friction than digital interfaces, turning a knob is way less work than clicking a bunch of menus to set up a slider.
If I were to summarize what I think about AI it would be something like "Let it help you. Do not let it think for you"
My issue is not with people using AI as a tool, but with people delegating to AI anything that would demand any kind of effort.
If the sole purpose of college is to rank students, and funnel them to high prestige jobs that have no use for what they actually learn in college then what the students are doing is rational.
If, however, the student is actually there to learn, he knows that using ChatGPT accomplishes nothing. In fact, all this proves is that most students in most colleges are not there to learn. Which raises the question: why are they even going to college? Maybe this institution is outdated. Surely there is a cheaper and more time-efficient way to rank students for companies.
This topic comes up all the time. Every method conceivable to rank job candidates gets eviscerated here as being counterproductive.
And yet, if you have five candidates for one job, you're going to have to rank them somehow.
I do not. This is your problem, companies. Now, I am aware that I have to give out grades, and so I go through the motions of doing this to the extent expected. But my goal is to instruct and teach all students to the best of my abilities, to try to get them all to be as educated/useful to society as possible. Sure, you can have my little assessment at the end if you like, but I work for the students, not for the companies.
I think this is mostly accurate. Schools have been able to say "We will test your memory on 3 specific Shakespeares, samples from Houghton Mifflin Harcourt, etc" - the students who were able to perform on these with some creative dance, violin, piano or cello thrown in had very good chances at a scholarship from an elite college.
This has been working extremely well except now you have AI agents that can do the same at a fraction of the cost.
There will be a lot of arguments, handwringing and excuse making as students go through the flywheel already in motion with the current approach.
However, my bet is it's going to be apparent that this approach no longer works for a large population. It never really did but there were inefficiencies in the market that kept this game going for a while. For one, college has become extremely expensive. Second, globalization has made it pretty hard for someone paying tuition in the U.S. to compete against someone getting a similar education in Asia when they get paid the same salary. Big companies have been able to enjoy this arbitrage for a long time.
> Maybe this institution is outdated. Surely there is a cheaper and more time-efficient way to rank students for companies
Now that everyone has access to labor cheaper than the cheapest English speaking country in the world, humanity will be forced to adapt, forcing us to rethink what has seemed to work in the past
I don't get it. How can printing avoid AI? And more importantly, is this AI resistance sustainable?
Does this literally work? It adds slightly more friction, but you can still ask the robot to summarize pretty much anything that would appear on the syllabus. What it likely does is set expectations.
This doesn't strike me as being anti-AI or "resistance" at all. But if you don't train your own brain to read and make thoughts, you won't have one.
Hell, in Italy we used to have a publisher called Bignami that made summaries of every school topic.
In any case, I don't know what to think about all of this.
School is for learning; if you skip the hard part, you're not gonna learn. Your loss.
At this point auto AI summaries are so prevalent that they are the passive default. By shifting it to require an active choice, you've made it more likely for students to choose to do the work.
It’s not much more effort. The level of friction is minimal. But we’re talking about the activation energy of students (in an undergrad English class, likely teenagers). It doesn’t take much to swing the percentage of students who do the reading.
"TYCO Print is a printing service where professors can upload course files for TYCO to print out for students as they order. Shorter packets can cost around $20, while longer packets can cost upwards of $150 when ordered with the cheapest binding option."
And later in the OA it states that the cost to a student is $0.12 per double-sided sheet of printing.
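Worth doing the arithmetic on that: at $0.12 per double-sided sheet, a $150 packet implies roughly $150 / $0.12 = 1,250 sheets, i.e. about 2,500 printed pages. So either the packets are enormous, or binding and markup account for most of the price.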
In all of my teaching career here in the UK, the provision of handouts has been a central cost. Latterly I'd send a pdf file with instructions and the resulting 200+ packs of 180 sides would be delivered on a trolley printed, stapled with covers. The cost was rounding error compared to the cost of providing an hour of teaching in a classroom (wage costs, support staff costs, building costs including amortisation &c).
How is this happening?
Public universities are always underfunded.
Universities can get more money by putting the cost on the students and then they cover it with gov grants and loans.
You see a policy, and your clever brains come up with a way to get around it, "proving" that the new methodology is not perfect and therefore not valuable.
So wrong. Come on people, think about it -- to an extent ALL WE DO is "friction." Any shift towards difficulty can be gamed, but nearly all of the time it still provides a valuable differentiator in terms of motivation, etc.
> They don't care about students or education, they care about wasting resources and making a lot of money in the process.
This obv isn’t a push by parents, because I can’t imagine the parents I know wanting their kids in front of a screen all day. At best they’re indifferent. My only guess is the teachers’ unions that don’t want teachers grading and creating lesson plans and all the other work they used to do.
And since this trend started, kids’ scores and performance have not gotten better, so what gives?
Can anyone comment on whether it’s as bad as this and what’s behind it?
The older one has a Chromebook and uses it for research and production of larger written projects and presentations—the kind of things you’d expect. The younger one doesn’t have any school-supplied device yet.
Both kids have math exercises, language worksheets, short writing exercises, etc., all done on paper. This is the majority of homework.
I’m fine with this system. I wish they’d spend a little more time teaching computer basics (I did a lot of touch-typing exercises in the ’90s; my older one doesn’t seem to have those kinds of lessons). But in general, there’s not too much homework, there’s a good emphasis on reading, and I appreciate that the older one is learning how to plan, research, and create projects using the tool he’ll use to do so in future schooling.
* People needed to be taught digital skills that were in growing demand in the workplace.
* The kids researching things online and word-processing their homework were doing well in class (because only upper-middle-class types could afford home PCs)
* Some trials of digital learning produced good results. Teaching by the world's greatest teachers, at exactly the pace every student needs, with continuous feedback and infinite patience.
* Blocking distractions? How hard can that be?
Writing with a word processor that just helps you type, format, and check spelling is great. Blocking distractions on a general-purpose computer (like a phone or a tablet) is hard enough that it amounts to handing out locked-down devices set up for the purpose, and banning personal devices.
Students have always looked for ways to minimize the workload, and often the response has been to increase the load. In some cases it has effectively become a way to teach you to get away with cheating (a lesson that even has some real-world utility).
Keeping students from wasting their tuition is an age-old, Sisyphean task for parents. School is wasted on the young. Unfortunately youth is also when your brain is most receptive to it.
First, it’s extremely cumbersome and error-prone to type on compared to swipe-typing on a soft keyboard. Even highlighting a few sentences can be problematic when spanning across a page boundary.
Second, navigation is also painful compared to a physical book. When reading non-fiction, it’s vital to be able to jump around quickly, backtrack, and cross-reference material. Amazon has done some good work on the UX for this, but nothing is as simple as flipping through a physical book.
Android e-readers are better insofar as open to third-party software, but still have the same hardware shortcomings.
My compromise has been to settle on medium-sized (~Kindle or iPad Mini size) tablets and treat them just as e-readers. (Similar to the “kale phone” concept, i.e. minimal software installed on it … no distractions.) They are much more responsive, hence fairly easy to navigate and type on.
That said, I always thought exams should be the moment of truth.
I had teachers who spoke broken English, but I'd do the homework and read the textbook in class. I learned many topics without the use of a teacher.
This made sense a couple of decades ago. Today, it's just bizarre to be spending $150 on a phonebook-sized packet of reading materials. So much paper and toner.
This is what iPads and Kindles are for.
To make it more palatable for an IT worker: "It's just bizarre to give a developer a room with a door, so much sheetrock and wood! Working with computers is what open-plan offices are for."
What could it mean for an "option" to be "required"?
This isn't my article, nor do I know this educator, but I like her approach and the actions taken:
https://www.npr.org/2026/01/28/nx-s1-5631779/ai-schools-teac...
1. Instead of putting up all sorts of barriers between students and ChatGPT, have students explicitly use ChatGPT to complete the homework
2. Then compare the diversity across the students' ChatGPT outputs
3. If the ChatGPT output is extremely similar, then the game is to critique that ChatGPT output, find out gaps in ChatGPT's work, insights it missed and what it could have done better
4. If the ChatGPT output is diverse, how do we figure out which is better? What caused the diversity? Are all the outputs accurate, or are there errors in some?
Similarly, when it comes to coding, instead of worrying that ChatGPT can zero-shot quicksort and memcpy perfectly, why not game it (see the sketch after this list):
1. Write some test cases that could make that specific implementation of `quicksort` or `memcpy` fail
2. Could we design the input data such that quicksort hits its worst case runtime?
3. Is there an algorithm that would sort faster than quicksort for that specific input?
4. Could there be architectures where the assumptions that make quicksort "quick" fail to hold true, and something simpler and worse on paper, like a "cache-aware sort", actually works faster in practice than quicksort?
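To make point 2 concrete, here's a minimal sketch, assuming a textbook first-element-pivot quicksort (not any particular library's implementation). Already-sorted input makes every partition maximally lopsided, so the runtime goes quadratic and the recursion depth goes linear:

```python
# Sketch for point 2: a first-element-pivot quicksort hits its O(n^2) worst case
# on already-sorted input. (Hypothetical textbook version, for illustration only.)
import random
import sys
import time

sys.setrecursionlimit(10_000)  # the worst case also drives recursion depth to O(n)

def naive_quicksort(a):
    """Quicksort with the first element as pivot; degenerate on sorted input."""
    if len(a) <= 1:
        return a
    pivot, rest = a[0], a[1:]
    return (naive_quicksort([x for x in rest if x < pivot]) + [pivot]
            + naive_quicksort([x for x in rest if x >= pivot]))

n = 2000
for label, data in [("random input", random.sample(range(n), n)),
                    ("sorted input (worst case)", list(range(n)))]:
    start = time.perf_counter()
    naive_quicksort(data)
    print(f"{label}: {time.perf_counter() - start:.4f}s")
```

And for point 3, note that the same already-sorted input is the best case for insertion sort, which finishes it in O(n).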
I have multiple paragraphs more of thought on this topic but will leave it at this for now to calibrate if my thoughts are in the minority
>Shorter packets can cost around $20, while longer packets can cost upwards of $150 when ordered with the cheapest binding option
Does a student need to print out multiple TYCO packets? If so, only the very rich could afford this. I think education should go back to printed books and submitting your work to the Prof. on paper.
But submitting printed pages back to the Prof. for homework would avoid the school saying "Submit only Word documents". That way a student can use the method they prefer, avoiding buying expensive software. One can then use just a simple free text editor if they want. Or even a typewriter :)
> TYCO Print is a printing service where professors can upload course files for TYCO to print out for students as they order. Shorter packets can cost around $20, while longer packets can cost upwards of $150 when ordered with the cheapest binding option.
Lol $150 for reading packets? Not even textbooks? Seriously the whole system can fuck off.