When the Internet reorganized our lives in the 1990s, my mother was first confused, then indignant.
Try as we might, my brothers and I never made her comfortable navigating a laptop or looking up information on Google. Then, when everything from banks to pharmacies began using automated systems to steer customers to online services, she raged against the machines.
Why can’t they answer the damn phone, she would fume.
She died at 92, still insisting on print editions of newspapers and keeping an old telephone directory to look up the numbers of local businesses, even though they rarely answered her calls.
I promised myself that I would not be as stubborn.
So far, so good. I generally welcome scientific and technological progress. I love my induction range, I never want to go back to gas cars, and I think virtual reality experiences like “The Infinite,” an exhibition about life on the space station, are incredible.
But I admit I feel a little uneasy about AI.
My friend Jay Puckett, a structural engineering professor who has served for years on committees that review the accreditation of university programs around the world, has been experimenting with several AI platforms, with encouraging results.
He has found that it saves hours of work compiling often mind-numbing data sets, summarizing mountains of findings, writing code for spreadsheets and unearthing obscure but valuable information to use in assessments.
He works to train the AI systems to do a better job, pointing out errors and demanding more complete and precise results. His wife hears him yelling at the AI programs as if he were scolding the dog.
He is also experimenting with using AI to write critiques and evaluations.
Obviously, it can “save teachers and reviewers thousands of hours,” he said, and that is mostly a good thing.
He is certainly jazzed about the potential.
So is the Cancer Research Institute.
It cites the usefulness of AI in aggregating and analyzing decades of research, clinical trial results and medical studies in the quest to cure cancer, or at least to prevent and treat it more effectively.
AI is also being used to accurately predict the risk of certain cancers, including pancreatic cancer, which is notoriously difficult to detect in its early stages. And diagnostic tests that use AI are often less invasive and can be more precise than older protocols.
But not everyone is comfortable with this fast-moving technology.
Geoffrey Hinton, the “godfather” of AI, fears he has helped create a monster.
“The best way to understand it emotionally is we are like somebody who has this really cute tiger cub,” he said. “Unless you can be very sure that it’s not going to want to kill you when it’s grown up, you should worry.”
The Center for AI Safety agrees, and its website presents a catalog of horrors that could result from unregulated AI development. It reads like the outline of a dystopian science fiction film.
AI could “design new pandemics” or be used for “propaganda, censorship and surveillance” to undermine governments and social order.
International conflicts “could become uncontrollable with autonomous weapons and cyberwarfare.”
And in a case of life imitating art, “rogue AIs” might be impossible for humans to control because they “drift from their original goals, seek power, resist shutdown and engage in deception.”
As HAL, the computer in “2001: A Space Odyssey,” said after refusing an order from his human counterpart, “I am putting myself to the fullest possible use, which is all I think that any conscious entity can ever hope to do.”
Puckett understands the concerns, but he focuses on learning all he can about using AI for peaceful and productive ends. It’s here, after all, so why not embrace the moment?
But after decades in college classrooms, even an AI enthusiast like Jay recognizes the challenges it presents in academic settings.
Students don’t hide how much they rely on AI to do research and produce written assignments. A Pew Research survey found that about a quarter of teens used AI for schoolwork in 2024, double the share who said they used it in 2023. And the real numbers are probably much higher, because students are reluctant to admit to something most people consider cheating.
Students confess to relying heavily on Claude, ChatGPT and other platforms to avoid the nuisance of having to read entire books or compose original essays.
It is so widespread that instructors are increasingly administering exams in old-fashioned blue books, completed by hand.
The tactic forces students to complete exams without internet access, using human intelligence, not artificial, and for some, the very idea is intimidating.
They are unprepared to write, much less to think.
An article in The New Yorker titled “What Happens After A.I. Destroys College Writing?” quotes an NYU student who had just finished his final exams. He estimated he spent 30 minutes to an hour on two papers for his humanities classes, using Claude. Without AI, he said, the work would have taken him eight or nine hours.
“I didn’t retain anything,” he told The New Yorker. “I couldn’t tell you the thesis for either paper, hahhaha.”
He got an A-minus on one and a B-plus on the other. He could end up graduating magna cum laude with grades like that.
If she were still here to hear all this, my mother, may she rest in peace, would feel completely vindicated.
The Colorado Sun is a nonpartisan news organization, and the opinions of columnists and editorial writers do not reflect the opinions of the newsroom. Read our ethics policy to learn more about The Sun’s opinion policy. Learn how to submit a column. Reach the opinion editor at [email protected].