In 2024, more than 150,000 jobs were cut across 549 tech companies. 2025 continued the trend, with February alone accounting for over 16,000 cuts.
The standard advice is familiar by now. Update LinkedIn. Network harder. Practice LeetCode. Work on personal branding. This advice is incomplete because it focuses on competing for the same jobs everyone else is chasing.
The job market has changed. AI capabilities reached a threshold where automation became cheaper than hiring. Not across every role, but across enough high-value functions that workforce planning fundamentally shifted. The companies laying off workers weren't failing. They were recalibrating to distinguish what humans still needed to do from what models could handle.
This shift creates a specific problem for technical professionals: the skills that provided job security five years ago might be exactly what's getting automated now. The employment landscape isn't returning to 2021 hiring patterns. The question isn't when things normalize. It's which skills remain valuable as AI handles an expanding share of technical work.
What's really driving the wave of tech layoffs
A Resume.org survey found that 58% of companies plan layoffs in 2026, with 26% calling them "very likely" and 32% "somewhat likely." 37% of them expect to replace roles with AI by year's end; 28% already have.
But this doesn’t paint the whole picture.
The same survey found that 92% of companies plan to hire in 2026. And here's a detail that rarely makes the headlines: 59% of companies admitted they emphasize AI when explaining layoffs because it "plays better with stakeholders" than citing financial constraints. Some of what looks like an AI story is actually a budget story wearing different clothes.
PwC's 2025 Global AI Jobs Barometer analyzed nearly a billion job postings across six continents. Engineers with AI skills command salary premiums up to 56%, which is more than double the 25% premium recorded the year before. Additionally, jobs requiring AI skills grew 7.5% even as total job postings fell 11.3%.
Companies aren't cutting roles that became redundant due to market conditions. They're systematically eliminating roles that AI tools have made structurally obsolete.
The automation threshold
A specific inflection point appears repeatedly: the moment when automating a workflow becomes cheaper than maintaining a human team to do it. Not theoretically cheaper in some future state, but cheaper right now, this quarter.
The calculation is brutally straightforward. Fine-tuning a model costs a fraction of one employee's annual salary and can eliminate work that previously required an entire team. The ROI timeline is measured in months, not years.
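As a rough, back-of-the-envelope illustration (the figures below are assumptions for the sketch, not numbers from any cited study), the break-even math looks something like this:

```python
# Hypothetical back-of-the-envelope numbers, not figures from the article.
team_size = 4
cost_per_person = 120_000                # annual fully loaded cost (assumption)
annual_team_cost = team_size * cost_per_person

fine_tuning_cost = 60_000                # one-time project cost (assumption)
annual_inference_cost = 30_000           # hosting and API usage (assumption)
annual_oversight_cost = cost_per_person  # one person retained to review outputs

annual_savings = annual_team_cost - (annual_inference_cost + annual_oversight_cost)
payback_months = 12 * fine_tuning_cost / annual_savings

print(f"Team cost per year:  ${annual_team_cost:,}")
print(f"Savings per year:    ${annual_savings:,}")
print(f"Fine-tune payback:   {payback_months:.1f} months")
```

Even with conservative assumptions, the one-time fine-tuning cost pays back in a few months, which is exactly the calculus driving these decisions.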
For instance, a legal services team built a specialized model to process discovery documents. Paralegals who previously spent most of their time on initial document review now only review edge cases and exceptions. The team shrank. Then shrank again. The work didn't disappear, but the humans doing it did.
This pattern now extends to domains that seemed safely insulated not long ago: technical writing, code review, QA testing, tier-one support, data analysis, and content creation. The common thread isn't that these jobs are simple. It's that they're economically automatable.
Why "productivity gains" actually mean workforce reduction
When executives announce AI investments, they frame them as "productivity enhancements" or "tools to augment human workers." The operational reality: a team of twelve becomes a team of four using AI tools. Those four people are genuinely more productive. They handle edge cases, manage AI outputs, and focus on complex judgment calls. But eight positions still disappeared.
One media company automated social media content generation and scheduling. Their social team went from six people to two. Those two people oversee more accounts, handle more volume, and respond faster to trends than the six-person team could. They're also 67% cheaper in total compensation costs.
When companies announce productivity gains, translate that to workforce reduction. When they discuss augmentation, ask how many people are being augmented versus how many are being replaced.
Which roles are getting hit hardest
Certain roles disappeared at far higher rates than others. Companies made calculated decisions about which functions they could automate, offshore, or eliminate.
Entry-level engineers
Junior engineering roles took the deepest cuts. The reasoning: AI coding assistants made it feasible for senior engineers to handle the implementation work that used to justify hiring pyramids.
The math seemed to work on paper. Months later, downstream effects emerged. Technical debt accumulated at unexpected rates. Senior engineers focused on velocity weren't doing the careful documentation and test coverage that juniors had handled. Bug rates climbed.
Middle management
Middle management took the second-hardest hit due to flattened hierarchies, reduced overhead, and the shift to AI for coordination.
This reasoning misunderstands what middle managers actually do. Leadership sees managers in meetings all day and imagines that with better tools, teams can self-organize.
What managers actually do is continuous context translation: converting strategic ambiguity into concrete technical direction, spotting conflicts between team roadmaps before they become disasters, and managing the emotional complexity of teams dealing with uncertainty.
When those roles disappeared, engineers had to spend their own time on coordination work that had been invisible when someone else handled it. Sprint velocity dropped as a result.
Support and operations
The most dramatic cuts hit roles furthest from revenue generation: customer support, technical operations, documentation, and internal tools.
One common pattern: companies cut documentation teams, reasoning that LLMs could generate docs from code. The AI-generated documentation was technically accurate but contextually useless. It could describe what a function did. It couldn't explain why you'd use it, how it fit into common workflows, or what the gotchas were.
Customer support tickets increased. Sales cycles lengthened. The remaining specialists spent most of their time cleaning up after the automation that was supposed to replace their colleagues.
What skills actually matter now
The market has rewritten its rules. Companies aren't hiring for growth anymore. They're hiring for AI transformation while discovering it's harder than expected.
The capability gap companies discovered
Traditional career advice assumes companies evaluate capacity to ship features. Teams implementing AI systems face a different bottleneck. It's not coding speed. It's judgment: can you tell when the model produces nonsense? Can you craft examples that capture edge cases? Can you evaluate outputs when there's no deterministic right answer?
Companies have discovered their existing engineering teams can't reliably evaluate AI output quality. One organization built a document-processing system with strong engineers and solid architecture. They couldn't answer basic questions about where the system was failing and why. They had metrics but no operational understanding of failure modes.
They needed people who could identify patterns in failure cases and translate that back into system improvements. This wasn't machine learning engineering, and it wasn't traditional QA. It required sufficient technical depth to understand system architecture, sufficient domain knowledge to recognize meaningful errors, and sufficient product sense to prioritize what mattered.
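A minimal sketch of what that failure-pattern work can look like in practice, with made-up tags and records (the organization's actual system isn't described in that detail):

```python
from collections import Counter

# Made-up review records; in practice these come from an eval run or review queue.
failure_cases = [
    {"doc_id": "A17", "tags": ["wrong_date_format", "missed_table"]},
    {"doc_id": "B02", "tags": ["hallucinated_clause"]},
    {"doc_id": "B09", "tags": ["missed_table"]},
    {"doc_id": "C44", "tags": ["wrong_date_format"]},
]

# Aggregate per-case labels to see which failure modes dominate.
tag_counts = Counter(tag for case in failure_cases for tag in case["tags"])

for tag, count in tag_counts.most_common():
    share = count / len(failure_cases)  # cases can carry multiple tags
    print(f"{tag:20s} {count:2d} cases ({share:.0%} of reviewed failures)")
```

The tooling is trivial; the hard part is the labeling itself, which requires someone who understands the domain well enough to name the failure modes.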
The skills that differentiate now
Engineers successfully navigating this market share a specific capability: they work effectively in ambiguous evaluation contexts. They're comfortable making judgment calls about output quality when there's no single right answer. They design test cases that probe system boundaries. They communicate about model behavior to non-technical stakeholders without oversimplifying or hiding behind jargon.
This differs from traditional software engineering, where correctness is often deterministic. When code compiles and tests pass, you're done. With AI systems, outputs can be technically valid but wrong for context, or right for most cases but catastrophically wrong for edge cases that matter.
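To make the contrast concrete, here is a small sketch, with illustrative function names and rubric criteria, of a deterministic test next to a rubric-style check for a non-deterministic output:

```python
def add(a: int, b: int) -> int:
    return a + b

def test_add() -> None:
    # Deterministic: one right answer, pass/fail is unambiguous.
    assert add(2, 3) == 5

def evaluate_summary(summary: str, required_facts: list[str]) -> dict:
    """Rubric-style check: no single correct output, so score properties the
    output must have for this context rather than comparing to a gold string."""
    text = summary.lower()
    return {
        "covers_required_facts": all(fact.lower() in text for fact in required_facts),
        "within_length_budget": len(summary.split()) <= 60,
        "no_boilerplate": "as an ai" not in text,
    }

checks = evaluate_summary(
    "Q3 revenue rose 12% to $4.1M, driven by the enterprise tier.",
    required_facts=["12%", "$4.1M", "enterprise"],
)
print(checks)  # a human reviewer still decides whether any failed check matters here
```

The rubric narrows the judgment call; it doesn't eliminate it, which is why the human in the loop stays valuable.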
Engineers who retain deep expertise in specific domains and combine that with technical literacy to work with AI systems become invaluable. They spot subtle failures that matter. They're not necessarily building models, but they can tell whether outputs are actually usable.
The honest assessment
Traditional software engineering roles face sustained pressure. Not elimination, but steady compression. Companies are serious about using AI to reduce headcount needs, and while they're overestimating how much reduction is possible, they're not wrong that some reduction is achievable.
The roles growing are ones companies didn't know they needed: evaluation specialists, output quality experts, people who bridge technical and domain expertise, and people who design systematic testing for non-deterministic systems.
The path of AI training work
Tech companies are cutting experienced engineers while struggling to get AI systems to perform reliably. The bottleneck isn't compute or algorithms. It's human expertise needed to bridge the gap between what models can theoretically do and what they reliably do in practice.
The AI training space is often misunderstood as low-skilled work. The current challenge has shifted from bulk data collection to precision: teaching models to handle edge cases, domain-specific reasoning, and nuanced judgment calls that separate demos from production systems.
One team spent three weeks debugging why their code generation model consistently failed on async patterns. The issue wasn't the volume of training data. They had millions of code examples. The problem: none of their training examples explicitly demonstrated error handling in concurrent operations. They needed someone who understood both the technical domain and how to construct examples that would generalize.
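For illustration only (the team's actual data isn't public), a training example of the missing kind would explicitly demonstrate error handling across concurrent operations, along the lines of:

```python
import asyncio

async def fetch(url: str) -> str:
    # Placeholder for real I/O; fails deliberately for one URL to exercise error handling.
    if "bad" in url:
        raise ConnectionError(f"failed to reach {url}")
    await asyncio.sleep(0.1)
    return f"payload from {url}"

async def fetch_all(urls: list[str]) -> dict[str, str]:
    # return_exceptions=True keeps one failure from cancelling the whole batch;
    # the caller then handles each error explicitly instead of letting it propagate.
    results = await asyncio.gather(*(fetch(u) for u in urls), return_exceptions=True)
    succeeded, failed = {}, {}
    for url, result in zip(urls, results):
        if isinstance(result, Exception):
            failed[url] = repr(result)
        else:
            succeeded[url] = result
    if failed:
        print(f"{len(failed)} of {len(urls)} requests failed: {failed}")
    return succeeded

if __name__ == "__main__":
    print(asyncio.run(fetch_all(["https://ok.example", "https://bad.example"])))
```

Constructing examples like this, and knowing which boundary conditions they need to teach, is the part that requires engineering experience rather than data volume.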
This work involves both technical QA and curriculum design. One active project involves evaluating model outputs for scientific reasoning tasks. Evaluators need to read computational biology papers, understand the methodology, identify where the model's reasoning diverges from sound practice, and articulate why. That requires the same analytical capability as senior engineering roles.
Complex technical evaluation projects typically pay rates at the higher end for specialized domains like systems programming, compiler optimization, or scientific computing. The economics reflect what's being purchased: judgment about whether a model's technical reasoning is sound. That judgment took years to develop.
The capability gap creates opportunity. Laid-off engineers have exactly what AI labs need: the instinct to ask "what breaks this?" When trying to make a model reliable, someone must systematically identify failure modes and craft examples that teach boundary conditions. Years of engineering experience develop precisely that adversarial thinking.
Contribute to AI development at DataAnnotation
The technical roles surviving this transition require judgment that models can't replicate yet. Evaluating model outputs, identifying edge cases, and understanding context that breaks automated systems: this work still requires human expertise.
If your background includes technical expertise, domain knowledge, or the critical thinking to evaluate complex trade-offs, AI training at DataAnnotation positions you at the frontier of AGI development.
Over 100,000 remote workers have contributed to this infrastructure.
If you want in, getting from interested to earning takes five straightforward steps:
- Visit the DataAnnotation application page and click "Apply"
- Fill out the brief form with your background and availability
- Complete the Starter Assessment, which tests your critical thinking skills
- Check your inbox for the approval decision (typically within a few days)
- Log in to your dashboard, choose your first project, and start earning
No signup fees. We stay selective to maintain quality standards. You can only take the Starter Assessment once, so read the instructions carefully and review your work before submitting.
Apply to DataAnnotation if you understand why quality beats volume in advancing frontier AI — and you have the expertise to contribute.
Frequently asked questions
Which tech roles are safest from layoffs?
Roles requiring judgment that models can't replicate remain most defensible: evaluating whether AI outputs are correct for specific contexts, identifying edge cases in automated systems, and bridging technical capability with domain expertise. Pure implementation roles face the most pressure.
How long do tech layoffs typically last?
This wave differs from previous cycles because it's driven by structural automation rather than economic conditions. Previous downturns saw rehiring when markets recovered. Current cuts represent permanent workforce restructuring around AI capabilities. Some roles will return; many won't.
What industries are hiring laid-off tech workers?
Healthcare, finance, and government sectors continue hiring for technical roles, though often at lower compensation than peak tech salaries. The AI training and evaluation space has grown significantly, with frontier labs paying $40-100+ per hour for specialized technical evaluation work.
How can laid-off engineers stay competitive during job search?
The engineers finding opportunities share specific capabilities: working effectively in ambiguous evaluation contexts, making judgment calls about output quality without deterministic answers, and designing test cases that probe system boundaries. Building demonstrable experience with AI system evaluation provides concrete evidence of these capabilities.
Is remote work still available in tech after layoffs?
Remote work remains available but has become more competitive. Companies that maintained remote policies during layoffs often reduced total headcount while keeping the distributed structure. AI training and evaluation work remains predominantly remote, with most frontier labs operating distributed annotation teams across time zones.