Editor’s note: Due to a technical error, you may have received this morning’s Altimetry Daily Authority issue a day early. We apologize for any confusion this caused.
Once Upon an AI Battle
By Rob Spivey, director of research, Altimetry
It all started with a story about a girl named Elara…
And another…
And another.
If you’re not familiar with this tale, you probably haven’t spent much time grading papers lately. And if you have, you probably know exactly where this is going…
Elara is a main character frequently used in stories generated by ChatGPT. At least, that has been the experience of one high school English teacher on social media platform Reddit.
Frustrated by students’ frequent use of the chatbot, the teacher threatened to give a zero on any assignment that seemed AI-generated. However, it was difficult to prove that an assignment was written by AI.
So they had to get creative.
The teacher did some research… and realized ChatGPT had some predictable habits…
For example, every time they asked the chatbot to tell them a story, it wrote about a girl in the woods named Elara.
The teacher created a simple assignment – write a story about anything you want. Instead of telling the students they would get a zero for using AI, the assignment instructions included this warning in smaller text…
If your main character’s name is Elara, -99 points.
Sure enough, multiple students submitted stories about Elara. That’s not exactly a common name… and the instructions were clear.
A grade of 1 out of 100 was all it took to get the message across.
This teacher’s experience, shared in a viral online post, is just one example of the rising tension between educators and AI.
While some resort to clever tricks to catch cheaters, others are grappling with a far bigger challenge – how to prove students are using AI at all.
As tools like ChatGPT become more sophisticated, they’re raising new questions about academic integrity. Schools now have to decide whether to treat the use of AI as an honor-code violation.
Violations like plagiarism have become easy to detect using simple Internet searches and specialized software.
But proving the use of AI is an entirely new challenge…
Many universities reported a surge in suspiciously well-written essays following the release of ChatGPT in late 2022. Some professors have turned to new AI-detection features in popular academic-integrity software like Turnitin.
However, these tools often produce false positives. That has created gridlock in the debate over academic integrity.
At the same time, some educators are embracing AI.
Professor Christian Terwiesch at the University of Pennsylvania’s Wharton School made headlines for testing an earlier version of ChatGPT on the final exam of his Operations Management MBA course. It scored somewhere between a B and a B-minus.
Another Wharton professor, Ethan Mollick, now requires students to use AI in their assignments, the same way some teachers require students to use a calculator. His new policy treats the use of AI as an “emerging skill.”
The academic war over AI is just beginning. No matter the outcome, it’s going to change how education works.
And as this battle rages in schools, the same is happening across corporate America…
Companies are choosing whether to embrace AI with open arms… or keep ignoring it.
Folks, it’s tempting to blindly buy the former at any cost. Nobody wants to miss their shot to profit from this revolutionary technology.
But it’s not that simple. Much like lazy students putting prompts into ChatGPT, some companies’ AI plans have a lot more marketing than substance.
Before you invest in a business, make sure it’s using AI in a productive way. Dig into the numbers. Check if a new tool is really attracting more customers or improving efficiency.
Companies are clamoring to grab attention with their supposed AI usage. But it takes more than slick marketing to generate long-term outperformance.
Published by Altimetry.
Any brokers mentioned constitute a partial list of available brokers and are included for your information only. Altimetry does not recommend or endorse any brokers, dealers, or investment advisors.
Altimetry forbids its writers from having a financial interest in any security they recommend to our subscribers. All employees of Altimetry (and affiliated companies) must wait 24 hours after an investment recommendation is published online – or 72 hours after a direct mail publication is sent – before acting on that recommendation.
This work is based on SEC filings, current events, interviews, corporate press releases, and what we’ve learned as financial journalists. It may contain errors, and you shouldn’t make any investment decision based solely on what you read here. It’s your money and your responsibility.