Applied AI
Feb 3, 2026


By Jyoti Mann

Businesses such as Microsoft are increasingly nudging their employees to use AI to get more done. Others are forcing the issue.

One of the biggest firms to take the more aggressive path is Meta Platforms, which at the end of December had 78,865 employees.

At a companywide meeting last month, Meta leaders briefed employees on a new performance review and bonus system closely tied to their AI usage, according to a recording of the meeting. 

For instance, the company’s AI-powered performance tracker, Checkpoint, looks at how many lines of code software engineers generate using AI tools, along with more than 200 other data points, including the number of errors or bugs associated with an engineer’s code and the amount of code an engineer wrote without AI.

Checkpoint is supposed to reduce the time it takes to prepare performance reviews, according to the meeting recording. It uses data from tools employees already use, such as Google Workspace, to provide a summary of an employee’s work.

“To be clear, this is not an activity tracker—it’s an impact-evidence starter,” said one HR leader during the meeting. Managers then evaluate employees’ impact based on these Checkpoint summaries and assign a rating.

To help employees accomplish such goals, Meta has expanded their access to AI models made by other firms, such as Google’s Gemini 3 Pro and OpenAI’s GPT-5, alongside Meta’s own Llama models. Employees use such models for coding and other tasks.

Meta introduced Checkpoint last month alongside a revamped bonus program, according to an internal memo. Staffers who receive the highest of its four performance tiers will earn a 200% individual bonus multiplier, and the company is also rolling out a new “Meta Award,” a 300% bonus multiplier reserved for top performers.

“We’re evolving our performance program to simplify it and placing greater emphasis on rewarding outstanding performance,” a Meta spokesperson said. “While our employees have always been held to a high-performance, impact-based culture, this new direction allows for more frequent feedback and recognition in a more efficient way.”

Meta didn't just spring the news on employees. It told them in November that workers’ performance would increasingly be tied to their AI adoption, and those who demonstrate “AI-driven impact” would be eligible for higher rewards. 

 

Meta also plans on hard-wiring AI into its day-to-day operations. On Meta’s fourth-quarter earnings call last week, CEO Mark Zuckerberg told investors that “2026 is going to be the year that AI starts to dramatically change the way that we work.” 

He echoed that message in an internal memo, saying Meta is investing in AI tools to help individuals “get more done” while flattening teams and elevating individual contributors. Projects that once required large teams, he said, can now be handled by a single, highly skilled employee. Presumably, such skills would include knowing how to leverage AI.

Meta has already begun reshaping parts of its workforce around that thinking. In October, the company cut roles in its risk division, telling employees that many routine decisions could now be handled more efficiently by automated systems. 

We are willing to bet many more companies will either start programs similar to Meta’s or raise the performance bar for many employees on the expectation that they will use company-provided AI tools, especially coding assistants, to be more productive.

Microsoft Memo: OpenClaw is “Not a Production-Ready Consumer Product”

OpenClaw, the open-source AI agent product formerly known as Clawdbot and Moltbot, has delighted early adopters—including Microsoft CEO Satya Nadella—by accessing various applications on their computers to write code, edit files and perform research and other tasks for them. 

It also raises a host of potential security concerns, as my colleague Rocket covered on Monday.

Earlier this week, Microsoft staff working on AI safety sent employees a security review of OpenClaw that outlines the risks associated with using the product, according to a copy of the memo reviewed by The Information.

The memo warns that OpenClaw is “not a solved version of computer use”—a term AI researchers use to describe AI agents that access computer programs. The tool “doesn’t suddenly make browser-driving agents reliable,” the memo said. 

Still, it notes that the product “took off because it made agency immediate and personal,” giving people a glimpse of AI agents that can use a computer much the way humans do. But it warns staff that an AI agent with the ability to take actions across the web could lead to unforeseen security risks.—Aaron Holmes

