From diagnosing rare illnesses and scanning the planet’s surface to finding security flaws and accidentally exposing millions of résumés, AI is proving it can do almost anything—sometimes too well, and sometimes not well enough.
AI Agents Are Getting Better at Writing Code—and Hacking It as Well
AI just scored big in cybersecurity, but that may be both a victory and a warning. Using a new benchmark called CyberGym to deploy models like Claude and EnIGMA against open-source code, UC Berkeley researchers found that AI tools identified 15 previously unknown software flaws with minimal effort.
These findings validate AI's potential in automating the hunt for zero-days—rare, high-stakes bugs—but also raise fears that attackers could soon wield these tools too.
Experts agree that the AI-human combo works best, as models still struggle with complex vulnerabilities and typically achieve a modest 2% detection rate. The next cyber arms race? It may be AI vs. AI.
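For the curious, here's what "pointing a model at open-source code" might look like at its simplest. This is a minimal sketch, not CyberGym's actual harness (which, among other things, works from real crash reports); the `ask_model` helper is a hypothetical stand-in for whatever LLM API you'd call, stubbed here so the script runs as-is.

```python
import pathlib

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call (e.g., to Claude).
    A real harness would call a provider SDK here."""
    return "NO_ISSUES"  # stubbed so the sketch runs without credentials

def scan_repo(repo_root: str, max_bytes: int = 8_000) -> list[dict]:
    """Ask the model to review each C file for memory-safety bugs."""
    findings = []
    for path in pathlib.Path(repo_root).rglob("*.c"):
        snippet = path.read_text(errors="ignore")[:max_bytes]
        answer = ask_model(
            "You are a security auditor. Point out any memory-safety "
            f"vulnerabilities in this C code, or reply NO_ISSUES:\n\n{snippet}"
        )
        if "NO_ISSUES" not in answer:
            findings.append({"file": str(path), "report": answer})
    return findings

if __name__ == "__main__":
    print(scan_repo("."))
```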
McDonald’s AI Hiring Bot Exposed Millions of Applicants’ Data to Hackers Who Tried the Password ‘123456’
McDonald’s AI hiring bot, Olivia, just got grilled after researchers found it exposed millions of job applicants’ personal info, thanks to a password literally set as “123456.”
The chatbot, made by Paradox.ai, helps screen applicants but was wide open to hackers due to weak security and basic web vulnerabilities. The breach may have affected up to 64 million chat records containing names, emails, and phone numbers.
Paradox.ai admitted the flaw and says it was fixed quickly, with no evidence of misuse beyond the researchers. McDonald’s says it’s “disappointed” and is holding its tech partner accountable.
AI slows down some experienced software developers, study finds
A new study says AI tools like Cursor may be slowing down seasoned software devs instead of speeding them up.
Developers expected a 24% productivity bump, but instead, they took 19% longer when using AI on tasks they already knew well. Why? Reviewing and fixing AI’s suggestions added extra time.
While the tools still made work more enjoyable and less tiring, the gains were clearer for junior devs or new projects. TL;DR: AI helps, but not always how or where you think.
LGND wants to make ChatGPT for the Earth
Startup LGND wants to make Earth’s data more useful by using AI to answer complex geospatial questions, like how fire breaks have changed across California.
Their tool converts satellite imagery into “geographic embeddings”—compressed summaries that make querying spatial data faster and cheaper.
Instead of spending hundreds of thousands on a single-use dataset, LGND’s platform makes spatial analysis scalable for everything from climate planning to vacation rentals. They just raised $9 million to scale the tech.
Their goal? Make the roughly 100 terabytes of satellite data collected every day as easy to query as a ChatGPT conversation.
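To make the embedding idea concrete, here's a minimal sketch of how such a search could work. None of this is LGND's actual code: the random vectors stand in for what a trained vision encoder would produce for each satellite tile, and the query becomes a nearest-neighbor lookup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake embeddings: in a real system a vision encoder would map each
# satellite tile to a vector; here we simulate 10,000 tiles of dimension 256.
tile_ids = [f"tile_{i}" for i in range(10_000)]
embeddings = rng.normal(size=(10_000, 256)).astype(np.float32)
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

def most_similar(query_vec: np.ndarray, k: int = 5) -> list[str]:
    """Cosine-similarity search: the core of 'query the planet' systems."""
    q = query_vec / np.linalg.norm(query_vec)
    scores = embeddings @ q              # one dot product per tile
    top = np.argsort(scores)[-k:][::-1]  # indices of the k best matches
    return [tile_ids[i] for i in top]

# e.g., find tiles that resemble a known fire-break tile
print(most_similar(embeddings[42]))
```

The payoff is that the expensive step (encoding imagery) happens once, while each new question is just cheap vector math, which is why a single embedding layer can serve climate planners and vacation-rental sites alike.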
A New Kind of AI Model Lets Data Owners Take Control
In a move to give data owners real control, researchers at the Allen Institute for AI have unveiled FlexOlmo—a model that lets contributors train on their own data, merge their results into a larger system, and later pull out their piece without breaking the model.
It's a stark contrast to today's AI giants, who often treat your data like a one-way donation.
Built using a modular “mixture of experts” framework, FlexOlmo enables asynchronous training and revocable data use, a potential game-changer for legal and ethical AI development.
The model also proved its technical chops, outperforming comparable setups across benchmarks. It’s a recipe for AI that doesn’t require sacrificing ownership or transparency.
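To see why a mixture-of-experts design makes revocation cheap, consider this toy sketch. It illustrates the general idea with assumed names like `RevocableMixture`, not FlexOlmo's actual architecture: each contributor fits an expert on their own data, inference averages the surviving experts, and opting out is just deleting one expert, with no retraining of the rest.

```python
import numpy as np

class RevocableMixture:
    """Toy mixture of experts: each expert is fit separately on one
    contributor's data and can be dropped later without retraining."""

    def __init__(self):
        self.experts: dict[str, np.ndarray] = {}

    def add_expert(self, owner: str, X: np.ndarray, y: np.ndarray):
        # Each contributor fits their own expert (plain least squares here).
        W, *_ = np.linalg.lstsq(X, y, rcond=None)
        self.experts[owner] = W

    def remove_expert(self, owner: str):
        # Revocation: delete the expert; the others are untouched.
        del self.experts[owner]

    def predict(self, X: np.ndarray) -> np.ndarray:
        # Uniform gating: average the remaining experts' outputs.
        return np.mean([X @ W for W in self.experts.values()], axis=0)

rng = np.random.default_rng(1)
model = RevocableMixture()
for owner in ("news_corp", "forum_data"):
    X = rng.normal(size=(100, 4))
    model.add_expert(owner, X, X @ rng.normal(size=4))
model.remove_expert("forum_data")  # one contributor opts out
print(model.predict(rng.normal(size=(2, 4))))
```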
Dr. ChatGPT Will See You Now
When a Reddit user cured a five-year jaw issue in one ChatGPT exchange, it marked a shift: AI is now unofficially joining the diagnostic team. Patients and parents, frustrated with endless doctor visits, are turning to LLMs like ChatGPT—and often finding startlingly accurate medical insights.
Yet while AI outperforms many humans in simulations, its real-world use stumbles when users omit key symptoms or overtrust its polished prose. Doctors are learning to integrate AI into their practice, not as rivals but as diagnostic allies.
Meanwhile, institutions like Harvard and companies like OpenAI and Microsoft are developing tools and curricula to prepare for the AI-augmented future of healthcare.
As AI becomes more integrated into our daily systems and decisions, one thing is clear: its evolution isn’t just a tech story anymore—it’s becoming the pulse of our future, for better or for glitch.