The engineering interview process is broken, and AI cheating is exposing it faster

5 points by ssistilli a day ago

I've been thinking a lot about how technical interviews have hardly evolved in the last decade, and how AI is accelerating their collapse.

We're still using Leetcode-style problems that barely reflect the actual work engineers do. Candidates are expected to grind 50+ algorithm questions just to get a shot, even if the job has nothing to do with algorithms. Now AI tools are being used to complete take-home assignments, get through live coding rounds, and even write resumes that bypass ATS filters.

And the thing is, it's not even cheating in the traditional sense—it's just people using the tools available to them. The problem is deeper: we’re assessing the wrong things in the first place. Should we care if someone uses AI to solve a take-home if they’ll use AI on the job anyway? Should we really be judging an engineer’s ability based on whether they remember how to reverse a linked list under pressure?
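
For context, that exercise boils down to a few lines of pointer shuffling. A rough sketch in Python, with the class and function names chosen purely for illustration:

    # Illustrative only: the standard iterative reversal of a singly linked list.
    class Node:
        def __init__(self, value, next=None):
            self.value = value
            self.next = next

    def reverse(head):
        prev = None
        while head is not None:
            nxt = head.next    # remember the rest of the list
            head.next = prev   # point the current node backwards
            prev, head = head, nxt
        return prev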

We’re stuck in a system that’s easy to game and hard to justify. Has anyone seen companies doing this better? How are you adjusting your hiring process in the age of AI?

irf1 7 hours ago

Open source contribution history and interviewing with paid projects. See https://algora.io

para_parolu a day ago

In-person interview, solving a real problem. Ideally one I ran into recently.

  • curtisblaine 15 hours ago

    Agreed. Nothing beats in-person from the point of view of safety, and nothing beats real problems for understanding whether the candidate can work with the current team. Many think interviewing is about detecting hidden signals that the candidate is a misunderstood genius, giving you an advantage over competitors, but the reality is much less dramatic: you just need someone with good communication skills who is used to working on the day-to-day issues at hand.

curtisblaine 15 hours ago

I normally ask candidates to build a small, simple component similar to those we use at work. There are a couple of points in the exercise that I use to gauge whether the candidate is aware of certain deep implications of the language and/or framework they're using. If something isn't on their CV, I don't test for it, but if it is, I test for it extensively. I ask questions all through the process. I allow them to look up resources during the interview, but I don't allow AI. This works well and is really hard to game, especially if the candidate shares their screen or interviews in the office.

I mostly tailor the interview process to minimize false positives, which are much riskier and harder to catch, and I don't care much about false negatives: to be honest, letting a good candidate slip through is not a big deal.