AI's Current Limitations in Freelance Work
AI agents currently fail to complete the vast majority of tasks on platforms like Upwork, even to a basic standard. Researchers from Scale AI and the Center for AI Safety tested six AI models on 240 Upwork projects spanning writing, design, and data analysis, and found the models overwhelmingly unsuccessful.
The best-performing AI model, Manus, managed to complete only 2.5% of the tasks, earning $1,810 out of a potential $143,991. Other models, such as Claude Sonnet and Grok 4, finished just 2.1% of the tasks. While AI excels at simple, defined tasks like generating a logo, it struggles with multi-step workflows, taking initiative, or applying judgment.

This suggests that mass unemployment due to AI is unlikely in the immediate future. This finding aligns with previous research from MIT in August, which indicated that 95% of organizations had seen no return on their substantial investments in AI.
Human Advantage in Understanding the World
AI models are proficient at pattern matching and word prediction, but they currently lack the ability to build robust internal models of the world. Research on the WorldTest benchmark, conducted by MIT and Basis Research, highlights this deficiency.
For instance, humans possess an innate understanding of their environment, such as knowing the location of items in a kitchen, estimating cooking times, and planning a sequence of actions to prepare a meal. However, testing revealed that three frontier reasoning AI models struggled significantly with these types of tasks.

Researchers developed 129 tasks across 43 interactive worlds, including "spot the difference" and physics puzzles. These tasks required AI to predict hidden aspects of the environment, plan actions, and identify rule changes. Human participants were tested on the same problems.
The researchers reported that humans achieve near-optimal scores while existing models frequently fail.
Humans outperform AI on these tasks thanks to their intuitive grasp of environments and their ability to revise beliefs in light of new evidence, run experiments, explore strategically, and restart when necessary. Increasing computational power alone did not consistently improve AI performance, helping in only 25 of the 43 environments.
AI's Inaccuracies in News Reporting
Recent research from the BBC and the European Broadcasting Union indicates that AI assistants like ChatGPT, Copilot, Gemini, and Perplexity are unreliable for news reporting. These models failed to meet key criteria such as accuracy, proper sourcing, distinguishing opinion from fact, and providing adequate context.

The study found that 45% of AI-generated news answers contained at least one significant issue. Specifically, 31% were sourced incorrectly, and 20% contained inaccuracies, including fabricated details and outdated information. Gemini performed the worst, with significant problems in 76% of its responses.
Impact of AI Cover Letters on Hiring
Traditionally, cover letters serve as a signal of an applicant's effort and diligence. A well-crafted cover letter demonstrating knowledge of the company could differentiate motivated candidates from those submitting low-effort applications.
However, new research from Freelancer.com suggests that AI-generated cover letters have compromised this hiring signal. Employers are now hiring fewer people, and often the wrong candidates. Compared to the pre-AI era, highly skilled workers are being hired 19% less often, while individuals with the lowest skill levels are being hired 14% more often.

XPeng Unveils Human-Like Robot
Chinese electric vehicle manufacturer XPeng has introduced the XPeng IRON, a female humanoid robot that bears a striking resemblance to a person. The robot features human-like spinal movement and skin made from soft 3D lattice structures that mimic the human body.
Scheduled for production early next year, the robot currently requires too much computational power for home use. It is expected to be deployed in commercial settings first, such as assisting customers at XPeng dealerships.
"The first catwalk from @XPengMotors XPENG IRON female robot appears to cross the uncanny valley in its walking! Mass production is slated to begin end of 2026, with initial deployments in commercial locations. China seems to be in the lead in robotics advances & manufacturing." — Derya Unutmaz, MD (@DeryaTR_), November 5, 2025
Controversy Over AI's Role in Ransomware Attacks
A recent paper by MIT Sloan researchers and Safe Security claims that 80% of ransomware attacks are AI-driven. The study, "Rethinking the Cybersecurity Arms Race," analyzed 2,800 ransomware attacks and concluded that adversarial AI is automating attack sequences, including malware creation, phishing campaigns, and social engineering via deepfake phone calls.
However, this statistic has been met with skepticism from other cybersecurity experts. Researcher Kevin Beaumont, who tracks ransomware activity, stated that generative AI is not a significant component of current ransomware operations and called the paper "almost complete nonsense" and "jaw droppingly bad." The researchers' paper has been criticized for including defunct ransomware like Emotet and Conti as AI-powered and misclassifying IBM's DeepLocker as malware.
"The paper is almost complete nonsense. It's jaw droppingly bad. It's so bad it's difficult to know where to start."
Researcher Marcus Hutchins also commented on the paper's absurdity.
David Sacks Expresses Concerns About Orwellian AI
David Sacks, a prominent figure in the crypto and AI space, has voiced concerns about the potential for AI to exacerbate censorship and create dystopian outcomes. Speaking on the a16z Podcast, Sacks suggested that the censorship already observed on social media and search engines could become far more pervasive and insidious with AI models.

"I almost feel like the term woke AI is insufficient to explain what's going on because it somehow trivializes it," Sacks stated. He elaborated, "What we're really talking about is Orwellian AI. We're talking about AI that lies to you, that distorts an answer, that rewrites history in real time to serve a current political agenda of the people who are in power."
"To me, this is the biggest risk of AI… It's not The Terminator, it's 1984."
Coca-Cola's Improved AI Christmas Ad
Following criticism of its AI-generated Christmas commercial last year, Coca-Cola has released a new ad showcasing advancements in AI video generation. The company aimed to demonstrate the significant improvements in AI craftsmanship over the past year.
Pratik Thakar, global vice president and head of generative AI at Coca-Cola, noted that the craftsmanship in the new ad is "ten times better" than the previous year's, though he later qualified it as "maybe 10% better." The 60-second commercial was compiled from thousands of AI-generated clips, with a production time of one month, significantly shorter than the year-long production of their live-action commercials. Despite these improvements, a survey by Attest indicates that approximately 46% of consumers in the US, UK, and Australia dislike AI-generated imagery in advertisements.
Google's Project Suncatcher and AI Developments
Google is exploring an innovative approach to address the electricity demands for AI expansion by proposing data centers in space. The company has unveiled Project Suncatcher, a concept for satellite fleets equipped with solar arrays to harness near-constant sunlight.
This initiative, described as a "moonshot," aims to scale machine learning in space. Two prototype satellites, scheduled for launch in early 2027, will carry the same custom AI chips that power Google's ground-based data centers.
Google CEO Sundar Pichai reported that the Gemini app has surpassed 650 million monthly active users, a significant increase from 350 million in March and 90 million last October. However, Google was compelled to withdraw its Gemma AI model from AI Studio after it generated defamatory content about Senator Marsha Blackburn. The senator stated that the AI's claims were not harmless hallucinations but defamation.

