Ask HN: How has your company adapted to hiring with LLMs?

2 points by Python3267 a day ago

Now that LLMs are starting to get pretty good, how has your company adapted to the new environment? It's no longer enough to see whether someone is good at programming; we need to screen for whether someone is good at engineering. In my experience, software engineering is starting to mature like other forms of engineering. Mechanical engineers don't mill out their own parts (well, they should at least a couple of times, to understand the constraints of machining). SWEs need to check whether the code is "good" (the way Mech Es test their parts) and then design the systems around it. As far as I can see, there are two ways forward.

1. Only do on-sites and eat the travel expenses

2. Test for systems design and culture fit

On-sites level the playing field: interviewees don't have to compete with the [best person hiding their LLM use](https://www.reddit.com/r/technology/comments/1j436it/a_student_used_ai_to_beat_amazons_brutal/).

What are people's thoughts?

austin-cheney a day ago

1. Interview candidates with cameras on.

2. Do not ask basic software literacy questions. First of all, this was completely stupid even before LLMs. Secondly, it's easy to cheat. If you absolutely have to do this, then do it in terms of measures. Most people in software are entirely incapable of measuring anything, and LLMs cannot fix that personality deficiency.

3. Make every question one where the expected answer is not factoid nonsense but a decision they must make. Evaluate their answer on the grounds of risk, coverage, delivery, and performance. For example, if you are interviewing an AI/ML candidate, ask how they overcome bias in their algorithms and how they weigh the consequences of different design outcomes. If they are in QA, ask how they would take ownership of quality analysis for work already in production, or how they would coach developers when communicating steps to reproduce a defect.

4. As an interviewer you should know, by now, how to listen to people. That is so much more than just parsing their words. If their words say one thing but their body language says something different, they are full of shit. It's okay that they aren't experts in everything; their honesty and humility are far more important. They can get every question wrong, but if they are honest and can make solid decisions, they are at least in the top half of consideration.

5. Finally, after evaluating their decision-making ability and risk analysis, ask them for a story about a time they encountered such a problem and had to learn from failure.

codingdave a day ago

> It's no longer enough to see whether someone is good at programming; we need to screen for whether someone is good at engineering.

That has been true for many years. That is why we don't just ask FizzBuzz and hire whoever can do it. Your ideas about the additional depth that is needed are 100% correct... but they aren't new to the LLM era. They express the same depth we've been interviewing for all along.
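
For anyone who hasn't seen it, FizzBuzz is about this much code; a rough Python sketch of the usual answer:

```python
# FizzBuzz: print the numbers 1..100, replacing multiples of 3 with "Fizz",
# multiples of 5 with "Buzz", and multiples of both with "FizzBuzz".
for i in range(1, 101):
    if i % 15 == 0:
        print("FizzBuzz")
    elif i % 3 == 0:
        print("Fizz")
    elif i % 5 == 0:
        print("Buzz")
    else:
        print(i)
```

If that's the entire screen, it tells you almost nothing about whether someone can engineer a system, which is why it was never the whole interview.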

  • Python3267 a day ago

    I guess what I'm getting at is that the FAANG interviews I've done, and the ones my friends who work there describe, operate with that mindset. You do need to ask questions to work through the problem in those interviews, but they rely heavily on the interviewee's code.

fazlerocks a day ago

we've shifted to focusing way more on problem-solving ability during interviews rather than just coding skills

still do technical screens but now we give people access to AI tools during the process - because that's how they'll actually work. want to see how they break down problems, ask the right questions, and iterate on solutions

honestly the candidates who can effectively use AI to solve complex problems are often better hires than people who can code from scratch but struggle with ambiguous requirements

the key is testing for engineering thinking, not just programming syntax

mateo_wendler 21 hours ago

I think that if generative AI really is about to write flawless code for us, we must stop "testing for coding skills" entirely and instead evaluate candidates on algorithmic complexity reasoning and optimization, scalable system design, security threat modeling, cultural alignment, teamwork aptitude, and leadership potential.

A post-Neuralink world will be harder to assess, though.

paulcole a day ago

Are you in the US and remote?

If so, don't even worry about it.

You'll never outsmart people who want to cut corners and beat the system. In fact, hire the smartest lazy people you can find. Let them use LLMs at work and fire the ones who can't cut it.

  • Python3267 a day ago

    Agreed, but the problem is that a lot of companies don't ask questions that screen for people who can build longer-term systems that are extensible.