I have mentioned this in a few comments: for my CS classes I have gone from a historical 60-80% projects / 40-20% quizzes grade split to a 50/50 split, and have moved my quizzes from being online to being in-person, pen-on-paper, with one sheet of hand-written notes allowed.

Rather than banning AI, I'm showing students how to use it effectively as a personalized TA. I'm giving them this AGENTS.md file:

https://gist.github.com/1cg/a6c6f2276a1fe5ee172282580a44a7ac

And showing them how to use AI to summarize the slides into a quiz review sheet, generate example questions with answer walk-throughs, etc.

Of course I can't ensure they aren't just having AI do the projects, but I tell them that if they do that they are cheating themselves: the projects are designed to draw them into the art of programming and give them decent, real-world coding experience that they will need, even if they end up working at a higher level in the future.

AI can be a very effective tool for education if used properly. I have used it to create a ton of extremely useful visualizations (e.g. how twos complement works) that I wouldn't have otherwise. But it is obviously extremely dangerous as well.
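
As a concrete taste of the two's-complement visualizations mentioned above, here is a minimal sketch in Python (not the actual visualization; the 8-bit width and the demo values are just illustrative):

    # print the same bit pattern read as unsigned vs. two's-complement signed
    def twos_complement_demo(value, bits=8):
        pattern = value & ((1 << bits) - 1)  # keep only the low `bits` bits
        signed = pattern - (1 << bits) if pattern & (1 << (bits - 1)) else pattern
        print(f"{pattern:0{bits}b}  unsigned={pattern:4d}  signed={signed:5d}")

    for v in (5, -5, 127, -128):
        twos_complement_demo(v)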

"It is impossible to design a system so perfect that no one needs to be good."

mmooss · 35 minutes ago
I think that's a great approach. I've thought about how to handle this myself, and I wonder how you handle several issues that come to mind:

Competing with students who use LLMs, 'honest' students would seem strongly incentivized to use LLMs themselves. Even if you don't grade on a curve, honest students will get worse grades, which will look worse to graduate schools, grant and scholarship committees, etc., in addition to the strong emotional component everyone feels on seeing an A or a C. You could give deserving 'honest' work an A, but then all the LLM users will get A's with ease. It seems like you need two scales, and how do you know who to put on which scale?

And how do students collaborate on group projects? Again, it seems you have two different tracks of education, and they can't really work together. Edit: How do class discussions play out with these two tracks?

Also, manually doing things that machines do much better has value but also takes valuable time from learning more advanced skills that machines can't handle, and from learning how to use the machines as tools. I can see learning manual statistics calculations, to understand them fundamentally, but at a certain point it's much better to learn R and use a stats package. Are the 'honest' students being shortchanged?
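
To make that manual-vs-package tradeoff concrete, here is a tiny sketch in Python rather than R (the data is made up): compute a sample standard deviation by hand once to understand it, then lean on the library.

    import math, statistics

    data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

    # "by hand", the way you first learn it
    mean = sum(data) / len(data)
    by_hand = math.sqrt(sum((x - mean) ** 2 for x in data) / (len(data) - 1))

    # then the one-liner you actually use day to day
    assert math.isclose(by_hand, statistics.stdev(data))
    print(f"sample standard deviation: {by_hand:.4f}")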

bbor · 1 hour ago
You seem like a great professor (/“junior baby mini instructor who no one should respect”, knowing American academic titles…). Though as someone who's been on the other end of the podium a bit more recently, I will point out the maybe-obvious:

  Of course I can't ensure they aren't just having AI do the projects, but I tell them that if they do that they are cheating themselves
This is the right thing to say, but even the ones who want to listen can get into bad habits in response to intense schedules. When push comes to shove and Multivariate Calculus exam prep needs to happen but you’re stuck debugging frustrating pointer issues for your Data Structures project late into the night… well, I certainly would’ve caved far too much for my own good.

IMO the natural fix is to expand your trusting, “this is for you” approach to the broader undergrad experience, but I can’t imagine how frustrating it is to be trying to adapt while admin & senior professors refuse to reconsider the race for a “””prestigious””” place in a meta-rat race…

For now, I guess I’d just recommend you try to think of ways to relax things and separate project completion from diligence/time management — in terms of vibes if not a 100% mark. Some unsolicited advice from a rando who thinks you’re doing great already :)

The irony is that the pressure of on-time completion is probably the #1 source of project failure in the real world.

Yes, I expect that pressure will be there, and project grades will be near 100% going forward, whether the student did the work or not.

This is why I'm moving to in-person written quizzes: to differentiate between the students who know the material and those who are just using AI to get through it.

I do seven quizzes during the semester so each one is on relatively recent material and they aren't weighted too heavily. I do some spaced-repetition questions of important topics and give students a study sheet of what to know for the quiz. I hated the high-pressure midterms/finals of my undergrad, so I'm trying to remove that for them.

> I hated the high-pressure midterms/finals of my undergrad

The pressure was what got me to do the necessary work. Auditing classes never worked for me.

> I do some spaced-repetition questions of important topics and give students a study sheet of what to know for the quiz.

Isn't that what the lectures and homework are for?

20 minutes ago
Do you find advocating for AI literacy to be controversial amongst peers?

I find, as a parent, that when I talk about it at the high school level I get very negative reactions from other parents. Specifically, I want high schoolers to be skilled in the use of AI, and in particular to have critical thinking skills around the tools, while simultaneously having the skills to work without AI. I don't want the school to be blindly “anti-AI”, as I'm aware AI will be a part of the economy our kids are brought into.

There are some head-in-the-sand, very emotional attitudes about this stuff. (And obviously there are idiotically uncritical pro-AI stances too, but I doubt educators risk holding those.)

Not OP, but I would imagine (or hope) that this attitude is far less common amongst peer CS educators. It is so clear that AI tools will be (and are already) a big part of future jobs for CS majors now, both in industry and academia. The best-positioned students will be the ones who can operate these tools effectively but with a critical mindset, while also being able to do without AI as needed (which of course makes them better at directing AI when they do engage it).

That said, I agree with all your points too: some version of this argument will apply to most white-collar jobs now. I just think this is less clear to the general population, and it's much more of a touchy, emotional subject in certain circles. Although I suppose there may be a point to be made about being slightly more cautious about introducing AI at the high school level versus college.

> It is so clear that AI tools will be (and are already) a big part of future jobs for CS majors now, both in industry and academia.

No, it's not.

Nothing around AI past the next few months to a year is clear right now.

It's very, very possible that within the next year or two, the bottom falls out of the market for mainstream/commercial LLM services, and then all the Copilot and Claude Code and similar services are going to dry up and blow away. Naturally, that doesn't mean that no one will be using LLMs for coding, given the number of people who have reported their productivity increasing—but it means there won't be a guarantee that, for instance, VS Code will have a first-party integrated solution for it, and that's a must-have for many larger coding shops.

None of that is certain, of course! That's the whole point: we don't know what's coming.

It is clear that AI has already transformed how we do our jobs in CS.

The genie is out of the bottle; it's never going back.

It's a fantasy to think it will "dry up" and go away.

Some other guarantees we can make over the next few years, based on history: AI will get better, faster, and more efficient, like everything else in CS.

oblio · 24 minutes ago
Yeah, like Windows in 2026 is better than Windows in 2010, Gmail in 2026 is better than Gmail in 2010, the average website in 2026 is better than in 2015, Uber is better in 2026 than in 2015, etc.

Plenty of tech becomes exploitative (or more exploitative).

I don't know if you noticed but 80% of LLM improvements are actually procedural now: it's the software around them improving, not the core LLMs.

Plus LLMs have huge potential for being exploitative. 10x what Google Search could do for ads.

I agree with you that everything is changing and that we don’t know what’s coming, but I think you really have to stretch things to imagine that it’s a likely scenario that AI-assisted coding will “dry up and blow away.” You’ll need to elaborate on that, because I don’t think it’s likely even if the AI investment bubble pops. Remember that inference is not really that expensive. Or do you think that things shift on the demand side somehow?

I think that even if inference is "not really that expensive", it's not free.

I think that Microsoft will not be willing to operate Copilot for free in perpetuity.

I think that there has not yet been any meaningful large-scale study showing that it improves performance overall, and there have been some studies showing that it does the opposite, despite individuals' feeling that it helps them.

I think that a lot of the hype around AI is that it is going to get better, and if it becomes prohibitively expensive for it to do that (ie, training), and there's no proof that it's helping, and keeping the subscriptions going is a constant money drain, and there's no more drumbeat of "everything must become AI immediately and forever", more and more institutions are going to start dropping it.

I think that if the only programmers who are using LLMs to aid their coding are hobbyists, independent contractors, or in small shops where they get to fully dictate their own setups, that's a small enough segment of the programming market that we can say it won't help students to learn that way, because they won't be allowed to code that way in a "real job".

LtWorf · 21 minutes ago
If they start charging what it costs them, for example…

AI is extremely dangerous for students and needs to be used intentionally, so I don't blame people for just jumping to "ban it" when it comes to their kids.

Our university is slowly stumbling towards "AI Literacy" being a skill we teach, but, frankly, most faculty here don't have the expertise and students often understand the tools better than teachers.

I think there will be a painful adjustment period, I am trying to make it as painless as possible for my students (and sharing my approach and experience with my department) but I am just a lowly instructor.

Honestly, defining what to teach is hard.

People need to learn to do research with LLMs, to code with LLMs, and to evaluate artifacts created by AI. They need to learn how agents work at a high level, the limitations of context, and the fact that models hallucinate and become sycophantic; how agents need guardrails and strict feedback mechanisms if let loose; AI safety when connecting to external systems; etc., etc.
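
To give a whiteboard-level picture of what "agents with guardrails" means, here is a minimal sketch in Python with the model stubbed out (no real API; every name in it is made up):

    # stand-in for an LLM call; a real agent would make an API request here
    def fake_model(prompt):
        # the stub asks for a tool once, then pretends to finish
        return "TOOL:word_count" if "TOOL RESULT" not in prompt else "DONE: 7 words"

    TOOLS = {"word_count": lambda text: str(len(text.split()))}

    def agent(task, max_steps=5):
        prompt = task
        for _ in range(max_steps):            # guardrail: bounded loop
            reply = fake_model(prompt)
            if reply.startswith("DONE:"):
                return reply
            tool = reply.removeprefix("TOOL:")
            if tool not in TOOLS:             # guardrail: tool whitelist
                return "refused: unknown tool"
            prompt += "\nTOOL RESULT: " + TOOLS[tool](task)
        return "gave up: step limit reached"

    print(agent("Count the words in this sentence, please."))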

You're right that few high school educators would have any sense of all that.

I don't know anyone who learned arithmetic from a calculator.

I do know people who would get egregiously wrong answers from misusing a calculator and insisted it couldn't be wrong.

> I find, as a parent, that when I talk about it at the high school level I get very negative reactions from other parents. Specifically, I want high schoolers to be skilled in the use of AI, and in particular to have critical thinking skills around the tools, while simultaneously having the skills to work without AI. I don't want the school to be blindly “anti-AI”, as I'm aware AI will be a part of the economy our kids are brought into.

This is my exact experience as well and I find it frustrating.

If current technology is creating an issue for teachers, it's the teachers who need to pivot, not block the technology so they can continue with what they're comfortable with.

Society typically cares about work getting done, not much about how it got done. For some reason teachers are so deep in the weeds of the "how" that they seem to forget: if the way to mend roads since 1926 has been to measure out, mix, and lay asphalt patches by hand, then in 2026, when there are robots that do it perfectly every time, they should be teaching humans to complement those robots or to do something else entirely.

It's possible that in the past, learning how to use an abacus was a critical lesson, but once calculators were invented, do we continue with two semesters of abacus? Do we allow calculators into the abacus course? Should the abacus course be scrapped? Will it be a net positive for society to replace the abacus course with something else?

"AI" is changing society fundamentally forever and education needs to change fundamentally with it. I am personally betting that humans in the future, outside extreme niches, are generalists and are augmented by specialist agents.

I'm also for AI-awareness education. A big part of teaching kids about AI should be how unreliable it can be.

I had a discussion with a recruiter on Friday, and I said I guess the issue with AI vs. humans is this: if you give tasks to a human developer who is new to your company, the first few times you'll check their work carefully to make sure the quality is good. After a while you can trust they'll do a good job and be more relaxed. With AI, you can never be sure at any time. Of course a human can also misunderstand the task and "hallucinate", but perhaps discussing the issue and the fix before they start coding can alleviate that. You can discuss with an AI as much as you want, but to me, not checking the output would be an insane move...

To return to the point: yeah, people will use AI anyway, so why not teach them about the risks. Also, LLMs feel like the Concorde: they'll get you where you want to go very quickly, but at tremendous environmental cost (and at great cost to the wallet, although the companies are now partially subsidizing your use in the hope of getting you addicted).

Hopefully you've also modified the quizzes to be handwriting-compatible.

I once got "implement a BCD decoder" with about a 1"x4" space to do it.

We just had our first set of in-person quizzes, and I gave them one question per page, with lots of space for answers.

I'm concerned about handwriting, which is a lost skill, and how hard that will be on the TAs who are grading the exams. I have stressed to students that they should write larger, slower and more carefully than normal. I have also given them examples of good answers: terse and to the point, using bulleted lists effectively, what good pseudo-code looks like, etc.

It is an experiment in progress: I have rediscovered the joys of printing and the logistics of moving large amounts of paper again. The printer decided halfway through one run to start folding papers slightly at the corner, which screwed up the stapling.

I suppose this is why we are paid the big bucks.

> I have also given them examples of good answers: terse and to the point

Oh man, this reminds me of one test I had in uni, back in the days when all our tests were in class, pen & paper (what's old is new again?). We had this weird class that taught something like security programming in unix. Or something. Anyway, all I remember is the first two questions being about security/firewall stuff, and the third question was "what is a socket". So I really liked the first two questions, and over-answered for about a page each. Enough text to both run out of paper and out of time. So my answer to the 3rd question was "a file descriptor". I don't know if they laughed at my terseness or just figured since I overanswered on the previous questions I knew what that was, but whoever graded my paper gave me full points.

Was it a Perl exam?

How do you handle kids w/ a learning disability who can't effectively write well?
Reasonable accommodations have been made for students with disabilities for decades now. While there might be some cases where AI might be helpful for accommodating students, it is not, nor should it be, a universal application, because different disabilities (and different students) require different treatment and support. There's tons of research on disability accommodations and tons of specialists who work on this. Most universities have an entire office dedicated to supporting students with disabilities, and primary and secondary schools usually have at least one person who takes on that role.

So how do you handle kids who can't write well? The same way we've been handling them all along — have them get an assessment and determine exactly where they need support and what kind of support will be most helpful to that particular kid. AI might or might not be a part of that, but it's a huge mistake to assume that it has to be a part of that. People who assume that AI can just be thrown at disability support betray how little they actually know about disability support.

We have a testing center at Montana State for situations like this. I deliver my tests in the form of a PDF and the testing center administers it in a manner appropriate for the student.

> How do you handle kids w/ a learning disability who can't effectively write well?

It's embarrassing to see this question downvoted on here. It's a valid question, there's a valid answer, and accessibility helps everyone.

It's a question that's too vague to be usefully answered, especially on a forum like this.

There's no such thing as "disabled people who can't write well"; there are individuals with specific problems and needs.

Maybe there's Jessica, who lost her right hand and is learning to write with the left, who gets extra time. Maybe there's Joe, who has some form of nerve issue and uses a specialized pen that helps cancel out tremors. Maybe Sarah is blind and has an aide who writes for her, or is allowed to use a keyboard, or, or, or...

In the context of the immediate problems of AI in education, it's not a relevant thing to bring up. Finding ways for students with disabilities to succeed in higher education is something institutions have been handling for many decades now. The one I attended had well-defined policies for faculty, plus specialist full-time staff and facilities whose sole purpose was to provide appropriate accommodations to such students, and that was long, long ago. There will undoubtedly be some kind of role for AI in the future as well, but current students with disabilities are not being left high and dry without it.

Because it's another nonsensical "think of the children" argument for why nothing should ever change. Your comment really deserves nothing more than an eye roll emoji, but HN doesn't support them.

Reasonable accommodations absolutely should be made for children that need them.

But also just because you’re a bad parent and think the rules don’t apply to you doesn’t mean your crappy kid gets to cheat.

Parents are the absolute worst snowflakes.

> Your comment really deserves nothing more than an eye roll emoji, but HN doesn’t support them.

(◔_◔)

bmacho · 54 minutes ago
There is -.-" for exasperation/annoyance
> “Over the years I’ve found that when students read on paper they're more likely to read carefully, and less likely in a pinch to read on their phones or rely on chatbot summaries,” Shirkhani wrote to the News. “This improves the quality of class time by orders of magnitude.”

This is the key part. I'm doing a part-time graduate degree at a major university right now, and it's fascinating to watch the week-to-week pressure AI is putting on the education establishment. When your job as a student is to read case studies and think about them, but Google Drive says "here's an automatic summary of the key points" before you even open the file, it takes a very determined student to ignore that and actually read the material. And if no one reads the original material, the class discussion is a complete waste of time, with everyone bringing up the same trite points, and the whole exercise becomes a facade.

Schools are struggling to figure out how to let students use AI tools to be more productive while still learning how to think. The students (especially undergrads) are incredibly good at doing as little work as possible. And until you get to the end-of-PhD level, there's basically nothing you encounter in your learning journey that ChatGPT can't perfectly summarize and analyze in 1 second, removing the requirement for you to do anything.

This isn't even about AI being "good" or "bad". We still teach children how to add numbers before we give them calculators because it's a useful skill. But now these AI thinking-calculators are injecting themselves into every text box and screen, making them impossible to avoid. If the answer pops up in the sidebar before you even ask the question, what kind of masochist is going to bother learning how to read and think?

Last weekend I was arguing with a friend that physical guitar pedals are better for creativity and exploration of the musical space than modelers. Even though modelers have way more resources for a fraction of the cost, the physical aspect of knobs and cables and everything else leads to something way more interactive and prone to "happy mistakes" than any digital interface can offer.

In my first year of college my calculus teacher said something that stuck with me: "you learn calculus by getting cramps in your wrists." Yeah, AI can help you remember things and accelerate learning, but if you don't put in the work to understand things you'll always be behind people who have at least a bird's-eye view of what's happening.

> but if you don't put in the work to understand things you'll always be behind people who have at least a bird's-eye view of what's happening

Depends. You might end up going quite far without ever opening the hood of a car, even when you drive the car every day and depend on it for your livelihood.

If you're the kind who likes to argue for a good laugh, you might say "well, I don't need to know how my car works as long as the engineer who designed it does, or the mechanic who fixes it does" - and this is accurate, but it's also true that not everyone ended up being the engineer or the mechanic. And if it turned out to be extremely valuable to you to actually learn how the car worked, you could well put in the effort to do so and be very successful at it.

All this talk about "you should learn something deeply so you can bank on it when you will need it" seems to be a bit of a hoarding disorder.

Given the right materials, support and direction, most smart and motivated people can learn how to get competent at something that they had no clue about in the past.

When it comes to smart and motivated people, the best drop out of education because they find it unproductive and pedantic.

Yes, you can. I know just enough about cars not to be scammed, but not how the whole engine works. And I don't think you should learn everything you can learn; there's no time for that. That's why I made the bird's-eye-view comment.

My argument is that when you have at least a basic knowledge of how things work (be it as a musician, a mechanical engineer or a scientist) you are in a much better place to know what you want/need.

That said, smart and motivated people thrive if they are given the conditions to thrive, and I believe that physical interfaces have way less friction than digital interfaces: turning a knob is way less work than clicking through a bunch of menus to set up a slider.

If I were to summarize what I think about AI it would be something like "Let it help you. Do not let it think for you"

My issue is not with people using AI as a tool, but with people delegating to AI anything that would demand any kind of effort.

At some level, this is a problem of unmotivated students and college mostly being just for signaling as opposed to real education.

If the sole purpose of college is to rank students, and funnel them to high prestige jobs that have no use for what they actually learn in college then what the students are doing is rational.

If, however, the student is actually there to learn, he knows that using ChatGPT accomplishes nothing. In fact, all this proves is that most students in most colleges are not there to learn, which raises the question: why are they even going to college? Maybe this institution is outdated. Surely there is a cheaper and more time-efficient way of ranking students for companies.

For elite colleges, it is a pithy aphorism that the hardest part is getting in.

> Surely there is a cheaper and more time-efficient way of ranking students for companies

This topic comes up all the time. Every method conceivable to rank job candidates gets eviscerated here as being counterproductive.

And yet, if you have five candidates for one job, you're going to have to rank them somehow.

jrm4 · 13 minutes ago
As a college instructor, one issue I find fascinating is the idea that I'm supposed to care strongly about this.

I do not. This is your problem, companies. Now, I am aware that I have to give out grades, and so I go through the motions of doing this to the extent expected. But my goal is to instruct and teach all students to the best of my abilities, to try to get them all to be as educated and useful to society as possible. Sure, you can have my little assessment at the end if you like, but I work for the students, not for the companies.

rr808 · 1 hour ago
It starts at admissions, where learning is not a rewarded activity. You should be making an impact in the community, doing some performative task that isn't useful for anything except making you different from your classmates who naively read the books and do the classwork honestly.
> At some level, this is a problem of unmotivated students and college mostly being just for signaling as opposed to real education.

I think this is mostly accurate. Schools have been able to say "We will test your memory on 3 specific Shakespeares, samples from Houghton Mifflin Harcourt, etc" - the students who were able to perform on these with some creative dance, violin, piano or cello thrown in had very good chances at a scholarship from an elite college.

This has been working extremely well except now you have AI agents that can do the same at a fraction of the cost.

There will be a lot of arguments, handwringing and excuse making as students go through the flywheel already in motion with the current approach.

However, my bet is it's going to be apparent that this approach no longer works for a large population. It never really did but there were inefficiencies in the market that kept this game going for a while. For one, college has become extremely expensive. Second, globalization has made it pretty hard for someone paying tuition in the U.S. to compete against someone getting a similar education in Asia when they get paid the same salary. Big companies have been able to enjoy this arbitrage for a long time.

> Maybe this institution is outdated. Surely there is a cheaper and more time efficient way to ranking students for companies

Now that everyone has access to labor cheaper than the cheapest English speaking country in the world, humanity will be forced to adapt, forcing us to rethink what has seemed to work in the past

zkmon · 2 hours ago
>This academic year, some English professors have increased their preference for physical copies of readings, citing concerns related to artificial intelligence.

I don't get it. How can printing avoid AI? And, more importantly, is this AI resistance sustainable?

The students were reading AI summaries rather than the original text.

Does this literally work? It adds slightly more friction, but you can still ask the robot to summarize pretty much anything that would appear on the syllabus. What it likely does is set expectations.

This doesn't strike me as being anti-AI or "resistance" at all. But if you don't train your own brain to read and make thoughts, you won't have one.

I was reading summaries online 25 years ago as well.

Hell, in Italy we used to have a publisher called Bignami that made summaries of every school topic.

https://www.bignami.com/

In any case, I don't know what to think about all of this.

School is for learning; if you skip the hard part you're not gonna learn, you're lost.

Instead of learning the things that can be done by AI, learn how to use the AI, as that's the only edge you've got left.

This approach is just cheap theater. It doesn't actually stop AI; it just adds a step to the process. Any student can snap a photo, OCR the text, and feed it into an LLM in seconds. All this policy accomplishes is wasting paper and forcing students to engage in digital hoop-jumping.

It's not theater. It introduces friction into the process. And when there is friction in both choices (read the paper, or take a photo and upload the picture), you'll get more people reading the physical paper copy. If students want to jump through hoops, they will, but it will require an active choice.

At this point auto AI summaries are so prevalent that they are the passive default. By shifting to an active choice, you've made it more likely for students to choose to do the work.

blell · 49 minutes ago
Any AI app worth its salt allows you to upload a photo of something, and it processes it flawlessly in the same amount of time. This is absolutely worthless theater.

It's not the time that's the friction. It's the choice. The student has to actively take the picture and upload it. It's a choice. It takes more effort than reading the autogenerated summary that Google Drive or Copilot helpfully made for the digital PDF of the reading this replaced.

It’s not much more effort. The level of friction is minimal. But we’re talking about the activation energy of students (in an undergrad English class, likely teenagers). It doesn’t take much to swing the percentage of students who do the reading.

blell · 39 minutes ago
Are you really comparing the energy necessary to read something to the energy of taking a photo and having an AI read it for you? You are not comparing zero energy to some energy; you are comparing a whole lot of energy to some energy.

That friction is trivial. You are comparing the effort of snapping a photo against the effort of actually reading and analyzing a text. If anyone chooses to read the paper, it's because they actually want to read it, not because using AI was too much hassle.

jrm4 · 11 minutes ago
You fundamentally misunderstand the value of friction. The digital hoop-jumping, as you call it, is a very very useful signal for motivation.
Students tend to be fairly lazy, so this may simply mean another x% of the class reads the material rather than scanning in the 60 pages of reading for the assignment.

You can't easily copy and paste from a printout into AI. Sure, you can track down the reading yourself online and then copy and paste it in, but not during class, and not without some effort.

It's easy to take a picture of a printout and then ask AI about it. Not that hard even when it's many pages.

xigoi · 2 hours ago
LLM services have pretty much flawless OCR for printed text.

Quote from the original article:

"TYCO Print is a printing service where professors can upload course files for TYCO to print out for students as they order. Shorter packets can cost around $20, while longer packets can cost upwards of $150 when ordered with the cheapest binding option."

And later the article states that the cost to a student is $0.12 per double-sided sheet of printing.

In all of my teaching career here in the UK, the provision of handouts has been a centrally borne cost. Latterly I'd send a PDF file with instructions, and the resulting 200+ packs of 180 sides would be delivered on a trolley, printed and stapled with covers. The cost was a rounding error compared to the cost of providing an hour of teaching in a classroom (wage costs, support staff costs, building costs including amortisation, &c.).

How is this happening?

lokar · 1 hour ago
Two things:

Public universities are always underfunded.

And universities can get more money by putting the cost on the students, who then cover it with government grants and loans.

Except Yale is a private university, not public, while very few universities in the UK are private (https://en.wikipedia.org/wiki/Category:Private_universities_... lists 8 and https://studying-in-uk.org/private-universities-in-uk/ lists 11), so 2b3a51 almost certainly means public-college teaching experience in the UK.

jrm4 · 7 minutes ago
College instructor here. One thing I'm seeing here that's kind of funny is how badly so many of you are misunderstanding the value of "friction."

You see a policy, and your clever brains come up with a way to get around it, "proving" that the new methodology is not perfect and therefore not valuable.

So wrong. Come on, people, think about it -- to an extent, ALL WE DO is "friction." Any shift towards difficulty can be gamed, but nearly all of the time it also provides a valuable differentiator in terms of motivation, etc.

At 150 eurobucks apiece, printed freshman coursebooks were prohibitively expensive in uni. We just pirated everything as a consequence.

At my university in actual Europe, many copies of the required textbooks were available in the library. Printing was free.

That's the whole point. They don't care about students or education; they care about wasting resources and making a lot of money in the process.

Some do and some don't. The "outrage" button is appropriate for the first part (don't care about students; waste resources to increase profits), but destructive for the second (we do care about students; we use resources in the classroom). It is hard to discuss this important topic when things go to "yelling" immediately.

> They don't care about students or education, they care about wasting resources and making a lot of money in the process.


bko · 2 hours ago
Who is behind this over-digitization of primary school? My understanding is that in the US pretty much all homework and tests are done on computers or iPads.

This obviously isn't a push by parents, because I can't imagine the parents I know wanting their kids in front of a screen all day. At best they're indifferent. My only guess is the teachers' unions, which don't want teachers grading and creating lesson plans and doing all the other work they used to do.

And since this trend started, kids' scores and performance have not gotten better, so what gives?

Can anyone comment on whether it's as bad as this, and what's behind it?

My kids are in elementary school in the SF area (although pretty far in the ‘burbs) and this is not my experience.

The older one has a Chromebook and uses it for research and for producing larger written projects and presentations—the kind of things you'd expect. The younger one doesn't have any school-supplied device yet.

Both kids have math exercises, language worksheets, short writing exercises, etc., all done on paper. This is the majority of homework.

I'm fine with this system. I wish they'd spend a little more time teaching computer basics (I did a lot of touch-typing exercises in the 90's; my older one doesn't seem to have those kinds of lessons). But in general, there's not too much homework, there's good emphasis on reading, and I appreciate that the older one is learning how to plan, research, and create projects using the tool he'll use to do so in future schooling.

A few decades ago:

* People needed to be taught digital skills that were in growing demand in the workplace.

* The kids researching things online and word-processing their homework were doing well in class (because only upper-middle-class types could afford home PCs)

* Some trials of digital learning produced good results. Teaching by the world's greatest teachers, exactly the pace every student needs, with continuous feedback and infinite patience.

* Blocking distractions? How hard can that be?

Reading with AI summaries jumping into your eyes is like writing in a word processor that completes sentences and paragraphs for you.

Writing with a word processor that just helps you type, format, and check spelling is great. Blocking distractions on a general-purpose computer (like a phone or a tablet) is hard; it comes down to handing out locked-down devices set up for the purpose and banning personal devices.

In pretty much any school system, just complain that the printout is not compatible with your text-to-speech engine, and the instructor will be required to provide an electronic version, no questions asked.

Or you can fold your tuition dollars into cranes and burn them as performance art.

Students have never understood the value of school work. It's a hard thing to understand. None of the assignments are given because the teacher wants to know the answer; they already know. So it all closely resembles busy work. AI is perfectly designed to do busy work.

Students have always looked for ways to minimize the workload, and often the response has been to increase the load. In some cases it has effectively become a way to teach you to get away with cheating (a lesson that even has some real-world utility).

Keeping students from wasting their tuition is an age-old, Sisyphean task for parents. School is wasted on the young. Unfortunately youth is also when your brain is most receptive to it.

This is a bit off topic, but why are used books on AbeBooks, ThriftBooks, Amazon, etc. so expensive compared to book sales? I recall a time when a lot of these online stores were selling them for a few cents (granted, it was a long time ago, and it was still called zShops on Amazon).

rr808 · 1 hour ago
Do you mean a few cents plus $5 shipping? I think they still exist but often results are ranked by total cost now which is clearer.

Computers have not advanced education — the data shows the opposite. I think we should just go back to physical books (which can be used!), and pen and paper for notes and assignments.

I might be wrong, but I fear this strategy might unfairly punish e-readers, which IMO offer the best of both worlds.
I've tried many e-readers since the early Kindle, but I keep coming back to two fundamental problems with e-ink, both relevant to education.

First, typing is extremely cumbersome and error-prone compared to swipe-typing on a soft keyboard. Even highlighting a few sentences can be problematic when the selection spans a page boundary.

Second, navigation is also painful compared to a physical book. When reading non-fiction, it’s vital to be able to jump around quickly, backtrack, and cross-reference material. Amazon has done some good work on the UX for this, but nothing is as simple as flipping through a physical book.

Android e-readers are better insofar as they are open to third-party software, but they still have the same hardware shortcomings.

My compromise has been to settle on medium-sized (~Kindle or iPad Mini size) tablets and treat them purely as e-readers. (Similar to the “kale phone” concept, i.e. minimal software installed on them … no distractions.) They are much more responsive, hence fairly easy to navigate and type on.

I've brought my Kindle to even the strictest of technology-banned lectures (with punishments like dropping a letter grade after one violation, and failing you after two), and they have never given me a problem when I asked. They realize the issue isn't the silicon or lithium; it's the distractions it enables. I'm sure I could connect to some LLM on it; it's just that no one ever will.

It's obvious they don't care.

That said, I always thought exams should be the moment of truth.

I had teachers who spoke broken English, but I'd do the homework and read the textbook in class. I learned many topics without the use of a teacher.

> TYCO Print is a printing service where professors can upload course files for TYCO to print out for students as they order. Shorter packets can cost around $20, while longer packets can cost upwards of $150 when ordered with the cheapest binding option.

This made sense a couple of decades ago. Today, it's just bizarre to be spending $150 on a phonebook-sized packet of reading materials. So much paper and toner.

This is what iPads and Kindles are for.

No, the cost of the paper, toner, and binding is the cost of providing a provably distraction-free environment.

To make it more concrete for an IT worker: "It's just bizarre to give a developer a room with a door, so much sheetrock and wood! Working with computers is what open-plan offices are for."

> This semester, she is requiring all students to have printed options.

What could it mean for an "option" to be "required"?

If you are flipping through the reading to find a quote, printed readings are hard to beat, unless you can search for a word digitally. But speed-reading RSVP presentation beats any kind of print reading by a mile if you are aiming for comprehension. So it is hard to say where the technology is going. Nobody has put in the work to really make reading on an iPad as smooth and fluid as print in terms of rapid page flipping, but the potential is there. It is kind of laughable how the salesman will say "oh, it has a fast processor," and then you open up a PDF, scroll a few pages fast, and they start showing up blank instead of actually having text.
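
For anyone unfamiliar with RSVP (rapid serial visual presentation), here is a toy sketch of the idea in Python, assuming an ANSI-capable terminal; the 450 wpm default is arbitrary:

    import sys, time

    # flash one word at a time at a fixed rate, so the eye never has to move
    def rsvp(text, wpm=450):
        for word in text.split():
            sys.stdout.write("\r\033[K" + word.center(24))  # \033[K clears the line
            sys.stdout.flush()
            time.sleep(60.0 / wpm)
        print()

    rsvp("RSVP flashes one word at a time at a fixed pace.")
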
My thesis paper, about a Freshman Composition Writing course stressing fundamentals by way of using quill, pencil, pen, and finally a typewriter, was written 20 YEARS AGO, in 2006, in response to the Spell Check and Auto Predict of the time...

This isn't my article, nor do I know this educator, but I like her approach and the actions she has taken:

https://www.npr.org/2026/01/28/nx-s1-5631779/ai-schools-teac...

I have been thinking about this, and it seems like an asset that students want to do as little work as possible to get course credit. They also love playing games of various sorts. So instead of killing trees, printing out pages of materials, and having students pay substantial sums to the printing press so we can inject distance between students reading the material and ChatGPT, why not turn it around completely?

1. Instead of putting up all sorts of barriers between students and ChatGPT, have students explicitly use ChatGPT to complete the homework

2. Then compare the diversity in the ChatGPT outputs (one crude way to do this is sketched after this list)

3. If the ChatGPT output is extremely similar, then the game is to critique that ChatGPT output, find out gaps in ChatGPT's work, insights it missed and what it could have done better

4. If the ChatGPT output is diverse, how do we figure out which is better? What caused the diversity? Are all the outputs accurate, or are there errors in some?
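
One crude way to score the similarity in step 2, sketched in Python with only the standard library (the student names and answers are invented):

    from difflib import SequenceMatcher
    from itertools import combinations

    answers = {
        "student_a": "Quicksort partitions around a pivot and recurses on each side.",
        "student_b": "Quicksort picks a pivot, partitions, then recurses on both halves.",
        "student_c": "Merge sort splits the list in half and merges the sorted halves.",
    }

    # pairwise similarity: near-identical phrasing scores close to 1.0
    for (name_a, a), (name_b, b) in combinations(answers.items(), 2):
        ratio = SequenceMatcher(None, a.lower(), b.lower()).ratio()
        print(f"{name_a} vs {name_b}: {ratio:.2f}")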

Similarly, when it comes to coding, instead of worrying that ChatGPT can zero-shot quicksort and memcpy perfectly, why not game it:

1. Write some test cases that could make that specific implementation of `quicksort` or `memcpy` fail

2. Could we design the input data such that quicksort hits its worst-case runtime? (See the sketch after this list.)

3. Is there an algorithm that would sort faster than quicksort for that specific input?

4. Could there be architectures where the assumptions that make quicksort "quick" fail to hold true, and something simpler and worse on paper, like a "cache-aware sort," actually works faster in practice?
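
For the worst-case question, here is a minimal sketch in Python, assuming a deliberately naive quicksort that always picks the first element as the pivot (already-sorted input then drives it quadratic):

    import random, sys, time
    sys.setrecursionlimit(20000)

    def quicksort(a):
        # naive on purpose: first element as pivot
        if len(a) <= 1:
            return a
        pivot, rest = a[0], a[1:]
        return (quicksort([x for x in rest if x < pivot])
                + [pivot]
                + quicksort([x for x in rest if x >= pivot]))

    n = 5000
    for label, data in [("random input", random.sample(range(n), n)),
                        ("sorted input (worst case)", list(range(n)))]:
        t0 = time.perf_counter()
        quicksort(data)
        print(f"{label}: {time.perf_counter() - t0:.3f}s")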

I have multiple paragraphs more of thought on this topic but will leave it at this for now to calibrate if my thoughts are in the minority

While I fully agree with this, this quote bothers me:

>Shorter packets can cost around $20, while longer packets can cost upwards of $150 when ordered with the cheapest binding option

Does a student need to print out multiple TYCO packets? If so, only the very rich could afford this. I think education should go back to printed books and to submitting your work to the prof on paper.

And submitting printed pages back to the prof for homework avoids the school saying "Submit only Word documents". That way a student can use the method they prefer and avoid buying expensive software. One can use just a simple free text editor if they want. Or even a typewriter :)

If textbooks weren't so expensive I'd be cheering them on more.

> TYCO Print is a printing service where professors can upload course files for TYCO to print out for students as they order. Shorter packets can cost around $20, while longer packets can cost upwards of $150 when ordered with the cheapest binding option.

Lol $150 for reading packets? Not even textbooks? Seriously the whole system can fuck off.