How AI is reshaping hiring, while fraud is still rampant


An executive in my professional network recently updated his LinkedIn profile with this message:

“Friends, if you received an email from me regarding an opportunity at my organization, it did not come from me. My work email address has been spoofed.”

Deepfaking involves using AI to generate convincing fake communications – emails, voice recordings, and even videos – that imitate a real individual, as in the case of my friend’s LinkedIn warning. These communications appear so authentic that they can bypass human skepticism, fooling even the most tech-savvy among us.

You may have already encountered this problem in your organization: deepfakes are used in increasingly sophisticated phishing attacks to trick employees into opening documents or sharing sensitive information.

Deepfaking and identity fraud are at the top of organizations’ concerns when it comes to using AI in the recruiting process, according to a recent survey from the Institute for Corporate Productivity (i4cp).

The survey of talent acquisition leaders found that many organizations are grappling with the implications and risks of using AI in recruiting by their own recruiters and candidates.

How AI keeps talent acquisition managers up at night

More than half (54%) of respondents said they encountered candidates in video interviews whom they suspected of using AI tools to help them answer questions or solve technical challenges; 24% said such encounters were rare.

But there isn’t much movement in terms of adjusting policies and practices in response to this situation: only 17% say their organization has decided to increase the use of in-person interviews in response to concerns about AI-related fraud.

The need for ongoing AI management – including auditing to detect bias, ensuring compliance with evolving global regulations, and staying abreast of new advancements – can be overwhelming, particularly for organizations that rushed into AI adoption without first establishing a solid foundation of governance.

“The proliferation of AI without real governance creates diminishing returns: it’s just about optimizing specific workflows, not optimizing the overall HR function,” observed one survey respondent.

Another concern is the risk of losing the human element in the hiring process. It is essential to ask not only what AI can do in talent acquisition, but also what it should do. How human and machine intelligence intersect – and how organizations can adopt AI responsibly, strategically, and with measurable impact – is an important conversation for leaders to have.

Despite the hype, AI adoption remains quite tactical

Most (61%) talent acquisition leaders said they currently leverage AI in a very tactical, assisted manner. By far the most common use of AI today is creating job descriptions.

And despite the daily deluge of articles, social media posts, and vendor advertorials about the accelerated adoption of AI in recruiting, some organizations are hesitant at the moment to move beyond tactics.

Talent acquisition leaders in industries such as financial services, defense and aerospace, energy and infrastructure, and healthcare are less likely to report that their organizations are widely adopting AI in recruiting, citing security and privacy concerns, the sensitivity of the data processed, regulatory restrictions, or exposure to national security threats.

The need for clarity on AI in recruitment

A plurality (41%) of respondents said their organization currently does not have an official position on candidates using AI tools in the recruiting process (e.g., resume optimization). Meanwhile, 29% said they encourage ethical use of AI tools by candidates but are concerned about potential misuse. Twenty-six percent described their organization’s position as positive: they invite applicants to use these AI tools and provide guidelines for their use on their websites. Anthropic is an example: the organization posts very clear messages on its career site about how and when candidates should use AI. Its rules of engagement for integrating AI into the application process generally follow these parameters:

  • Do the work yourself; use AI only to refine what you created. This includes the application, cover letter, and resume. AI should be a final review step, not the author of what is submitted.
  • Using AI for research, preparation, and practice before interviews is fine. This includes AI-based platforms such as InterviewPal, Interviewing.io, and Google’s Interview Warmup, which offer video, voice, or text practice interview sessions with real-time feedback.
  • Although AI-assisted preparation is acceptable, candidates must handle live interviews on their own.

Anthropic’s decision is an acknowledgment of the inevitable: some candidates will use AI no matter what. Perhaps now is the time for employers to stop trying to prohibit AI use in any form (including requiring candidates to certify that they did not use AI assistance in their job application) and instead define where and how its use makes sense for their organizations.

This starts with clarifying risks, what is acceptable and what is not, and identifying areas where it makes sense to relax policy.

Strategic considerations for AI in talent acquisition

  • Develop a formal view and policy on the use of AI by both talent acquisition and candidates.
  • Publish AI usage policies and candidate guidelines on the company career portal. Although this practice is relatively rare at present, we will see more and more organizations issuing clear guidelines for applicants, explaining what can be used and how.
  • Update candidate disclaimers and consent forms to prohibit the use of synthetic identities or AI-altered videos, if applicable to your policy.
  • Cross-validate credentials: confirm candidates’ work history through direct contact with former employers, LinkedIn consistency checks, and verifiable references.
  • Consider requiring live video interviews with real-time interaction.
  • Ask unexpected or personalized questions during interviews (for example, “Can you tell me about something you read in the last 24 hours that caught your attention?”) to test comprehension and spontaneity in real time.
  • Explore platforms that verify user presence and perform live ID checks.
  • Train recruiters to recognize AI-generated anomalies.
  • Create an internal deepfake playbook with examples and response steps.

About survey respondents:

  • 79% were middle to senior managers.
  • 82% represented larger organizations (those employing more than 1,000 people).
  • More than half (52%) represented public companies; 37% represented private companies; 11% came from non-profit or government organizations.
  • In total, 70% of these organizations are global (high level of global integration) or multinational (national/regional operations act independently).
