The Power of AI-Enabled Recruitment Advertising in Health Care - NAHCR Session

Welcome to an exciting exploration of AI in healthcare recruitment! At the recent NAHCR (National Association for Health Care Recruitment) Conference, Austin Anderson, CTO at Recruitics, delivered a compelling presentation on the transformative power of AI.

Whether you're new to AI or looking to deepen your understanding, this presentation offers valuable perspectives and thought-provoking ideas.

Listen to the full presentation (press play below) to learn more about the future of AI and its impact on healthcare recruitment.

 

Key Takeaways:

      • Understanding AI and Its Historical Context: 
        AI encompasses a broad set of technologies designed to simulate human intelligence by recognizing patterns in large data sets. Austin underscores that AI is not a new concept but has been a human curiosity for centuries.
      • Applications and Impact of AI in Healthcare and Recruitment:
        In recruitment, AI-driven tools like Recruitics’ conversational analytics are improving sourcing automation, screening, skill assessment, and interview transcription. Anderson highlights how AI helps recruiters by automating language-based tasks, thus making the recruitment process more efficient and effective.
      • Ethics, Bias, and the Future of AI:
        Austin discusses a future where AI complements human abilities rather than replacing them, advocating for a collaborative approach where humans and AI work together to achieve high-impact, creative tasks. This vision aims to mitigate fears of AI while maximizing its benefits for society.


Presentation Transcription

Austin Anderson:  
Welcome! I hope you're enjoying your time here. Obviously, a lot of you are interested in AI and excited to talk about it. So, again, my name is Austin Anderson. I'm currently the Chief Technology Officer at Recruitics. We help some of the biggest brands in the world attract talent at scale using extremely sophisticated technology, which is what we're going to talk a little bit about tonight.

Without further ado, the idea behind this particular talk is to start at a high level for those who are really new to AI, set the stage for AI and AI technologies, funnel that into healthcare and recruiting, and talk about some of the risks and challenges with AI as well. A little bit about myself: I've done a lot of things in technology. Two jobs ago, I was leading teams at LinkedIn in the San Francisco Bay Area. I became really interested in recruitment and in the video games industry, so I ended up founding that industry's first AI-driven talent marketplace, called Rupie. I ran that for about five years, eventually sold it, and then joined Recruitics. I won't go too far into that, but it sets the stage in terms of my background.

The other big thing I'll say is that AI encompasses a set of technologies; it's not one thing. So today we're going to unpack that to help you understand how AI works. AI has become an integral part of my work and my thinking, so I'd like to believe you're listening to the right guy on this, and I hope we can keep it interactive.

What year do you think the term Artificial Intelligence was coined?

1938, 1962, 1970? You were pretty close: 1955. That was a year after color television became widely available, and it was a wild time. John McCarthy coined the term in the proposal for a workshop he organized with fellow scientists. But humans have been thinking about artificial intelligence for a long time, and it goes way beyond 1955. I don't want to bore you, but go back to the beginning of civilization, to ancient Egypt: humans have been imagining ways to imbue machines with human thinking and reasoning for thousands of years. So this is not a new concept. This is a human curiosity.

John McCarthy is often called the grandfather of AI, along with Alan Turing, whom you may have heard of. One of Turing's most famous questions was, "Can machines think?" That comes from the "Computing Machinery and Intelligence" paper he published in 1950, and his answer, under the right conditions, was essentially "probably," framed more as an invitation than a verdict. Around 1950 he also predicted that within half a century we would have machines able to simulate human thinking to the point where the average person couldn't tell whether they were talking to a human or an AI. We're a couple of years late on that, but I think we're at the point where, for the most part, we can all agree that some of those AI conversations are pretty convincing.

So what is AI? What is this mysterious thing?
When we think about AI today, most of us are thinking about large language models and generative AI, right? But AI has a broader definition. The way I like to think about it is that AI is a set of technologies and techniques that take a large amount of data and try to find patterns and outcomes within that data. And that data, by the way, doesn't have to be text. It can be all sorts of data: multi-modal, images and text, audio, abstract data. Ultimately, the idea is that you're trying to infer the best possible output from that data.

Is there a cat in this image? What's the next best word in a sentence (that's LLMs)? Ultimately, we're trying to make computers simulate human intelligence and perform tasks that humans typically do, tasks that require intelligence. So that's how I think about it: you've got data, patterns, and outputs.

A show of hands, and no asking ChatGPT: who here is a human? I know, we're going strong here. What's fascinating about being a human is that the human brain is really good at recognizing patterns, especially faces. That's one of the main things our brains have evolved to do really well, particularly in social situations.

For example, does anybody know this image? It's a NASA photo from the 1970s known as "The Face on Mars." When people saw it, they said, "Oh my gosh, there's a human face on Mars," right? Well, it turns out, although I don't have a photo to show it, that in different lighting it doesn't look like a face at all. Humans exhibit something called pareidolia. It's a type of cognitive bias: your brain is hardwired to see a face in that little mountainous hill on Mars, because our brains evolved to find human faces in visual information. The other image is from a technology called Google DeepDream, an algorithmic technique built on convolutional neural networks. DeepDream does pretty much the same thing, with an asterisk: it's trained to overcompensate for pattern recognition in images, and it's trained on dogs, dog faces, that sort of thing.

If you pass it an image, it latches onto that image and exaggerates certain patterns, so dog faces show up all over the place. We do the same thing, and we've built AI technologies in a similar spirit. Not all AI technologies work the way our brains work, but a lot of them do, and that's part of what we're talking about today. So we've got machine learning, and machine learning is basically a set of techniques and technologies that can be trained to perform specific functions that let you do useful things; let's keep it really high level. What's interesting is that it typically relies on data somebody has labeled.

Compared to deep learning, which we'll get to in a second, less data is needed and the approach is a little more fixed: purpose-built algorithms for certain things like classification, where the question is, does this block of text have, you know, a curse word in it? That's the kind of task you'd use classical machine learning for.
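To make that concrete, here is a minimal, hypothetical sketch of that kind of supervised text classifier, built with scikit-learn on a handful of made-up labeled examples. It is purely illustrative, not anything Recruitics ships.

```python
# Illustrative sketch only: a tiny supervised text classifier of the kind
# described above. The training examples and labels are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hand-labeled examples: 1 = contains language we want to flag, 0 = clean.
texts = [
    "great experience with the hiring team",
    "this process was a damn mess",
    "thanks for the quick follow up",
    "what the hell is taking so long",
]
labels = [0, 1, 0, 1]

# Bag-of-words features plus a Naive Bayes classifier: classic, small-data ML.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

# Should be flagged, since "hell" and "long" only appear in flagged examples.
print(model.predict(["hell of a long wait"]))
```

The point of the sketch is simply that classical machine learning maps labeled examples to a fixed decision, rather than generating open-ended language.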

Then there's NLP, natural language processing, which deals with the relationships and patterns that exist in human language. Large language models are a subset of NLP. Human language is a really tough problem: in some languages, words at the beginning of a sentence relate to words in the middle or at the end, with modifications along the way. And then deep learning has to do with neural networks. Neural networks are machine learning technologies modeled loosely after the brain, hence the "neurons" in neural networks. They're fascinating because they're extremely complex, and if you give them a lot of data they can do amazing things. Large language models like ChatGPT actually use neural networks under the hood.

Okay, great. That sets the stage for what machine learning is at a high level and the categories that exist within it. There are others, like robotics, for example, that are interesting, but I'm going to skip those because they're less applicable here.

Now let's bring machine learning into healthcare. What are some machine learning tools being developed in healthcare that are really interesting? Medical scribe analytics are part of that. There's predictive analytics, for example, using a deep learning model to estimate whether somebody's cancer remission is going to continue after they leave the clinic, and for how long. Medical image analysis has seen really fascinating developments too: you can pass in an X-ray or an MRI and detect whether there are anomalies, not just relying on humans to see them but having AI technology augment them, so there are fewer false positives, for example. And then things like personalized implants and personalized treatment plans: really cool stuff on the frontier of healthcare, where data about you and your past medical records can be turned into a personalized treatment plan, and it can be done in an instant.

And then deep learning in healthcare: things like cancer prediction and drug discovery, which is really cool, and amazing things happening in protein discovery. I read the other day that they found a new protein that glows with bioluminescence. What's interesting is that an AI model, a large protein model, a kind of large language model, discovered it. And what was really crazy is that they estimated it would take 500 million years of evolution for nature to come up with this protein, and the AI model created it in less than five minutes. Crazy, right?

Genomics. Genes are complicated, and guess what? A computer can figure out genetic anomalies and flag genetic diseases ahead of time. We're using AI more and more to understand our genetic profiles; if you're on Ancestry.com or 23andMe, they're all using AI, right?

Transcription is really helping accelerate clinicians' ability to document things, along with patient interaction and outreach ("Thanks for coming!") and sentiment analysis ("How was your visit?"). From NLP, let's transition into generative AI. Generative AI is a deep learning technique that pulls in basically the whole internet and trains a predictive model to understand what the next word should be in a series of words in a given language. It just figures out what the next best word is; that's what generative AI does. Some people call these models "stochastic parrots," because really all they're doing is remixing the past examples you gave them, and you gave them an enormous amount, right? So that's what generative AI is actually doing: it's trying to infer the best next word in a series of words. And it can do it really, really well.
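As an aside, here is a toy illustration of that "predict the next word" objective, using simple bigram counts over a made-up sentence. Real LLMs use neural networks trained on vastly more text, but the core idea of inferring the most likely continuation is the same.

```python
# Toy next-word predictor (not an actual LLM): count which word tends to
# follow each word in a tiny corpus, then pick the most likely continuation.
from collections import Counter, defaultdict

corpus = (
    "the nurse reviewed the chart and the nurse updated the chart "
    "before the nurse left the unit"
).split()

# Build bigram counts: for each word, how often each next word follows it.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))    # "nurse" follows "the" most often here
print(predict_next("nurse"))  # "reviewed" (first seen among equally common options)
```

Scale that idea up from bigram counts to a neural network trained on internet-scale text and you have the essence of a large language model.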

It turns out that if you do that well, you can create AI systems that emulate human intelligence in some ways, like Alan Turing said, and can seem like simulated human thought. What's really cool about LLMs is that they're being used in recruiting. I've been in this space for a long time: at LinkedIn I got exposed to candidate profiles, recruiting technologies, search, all of that. Then I transitioned to more active hiring with my company Rupie, and now at Recruitics with our conversational analytics AI. So I've seen the full scope of how generative AI helps recruiters do their job. In my opinion, one of the big reasons it works so well in recruiting is that recruiting is inherently a language exchange.

It's a human-to-human exchange, and one of the most human exchanges there is, full of subtle cues. You're sending a lot of messages out to individuals, and there's a lot of human language involved, right? That's why I think generative AI has such a big impact on hiring. You've got things like sourcing automation: writing and sending a whole lot of InMails on LinkedIn, or emails, out to candidates to try to get them into a particular job; screening; skills assessment; all of that. I think it's really powerful, because now you can screen automatically: you have an agent represent you, and you can train it to engage with candidates. That's extremely powerful, especially if it's convincing and trustworthy. Skills assessment works because the model genuinely understands what skills mean in terms of their associations within a bigger web of meaning.

Because of that, it understands skills and how skills relate. You can also assess interviews much more effectively, with transcription and feedback from agents on your Zoom calls, right? Raise your hand if you've had an agent transcribing one of your calls in real time. Yeah. Okay.

Extremely useful: interview transcription, feedback collection, then onboarding and training material, as well as general productivity. Here's an interesting stat: on average, employees that are using Gen AI gain about two hours a day in efficiency. That's a lot of time per day, and we're just getting started. I always find this fascinating, because I use it for things like "generate a list for me" or "help me ideate." It's really an augmentation of myself; it has me doing a lot less of the redundant, everyday minutiae. I very much believe this number; we've seen it with our own employees. Now, healthcare has a particular set of recruitment challenges.

For example: flexible scheduling for nurses in hospital settings; overwork and burnout, a huge problem for healthcare workers; a massive worker shortage across the board, among nurses as well as doctors; and then hospital differentiation. What makes my hospital great? Why should I work at that hospital?

All of these issues can be addressed, to some degree, by applying AI. With Gen AI, for example, there's predictive scheduling: demand can be forecasted, and an AI model can automatically figure out when we need nurses for a particular shift and handle the assignments. Workload monitoring and burnout monitoring are possible because you can collect sentiment in real time, and you don't necessarily need a human to do that. You can run private engagement surveys with employees, but you can also use deep learning to measure how much somebody is working and say, okay, based on these criteria, it looks like we've crossed a burnout threshold. The same goes for talent retention, turnover, and risk analysis.
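For illustration, here is a hedged sketch of the kind of burnout-threshold rule described above, combining hours worked with a sentiment score. The field names and thresholds are invented for the example; a production system would learn them from real scheduling and survey data rather than hard-code them.

```python
# Hypothetical burnout-risk check: flag employees who are both overworked
# and trending negative in sentiment. All values below are illustrative.
from dataclasses import dataclass

@dataclass
class WeeklySignal:
    employee_id: str
    hours_worked: float     # from scheduling / timekeeping data
    sentiment_score: float  # e.g. -1.0 (negative) to 1.0 (positive), from NLP on survey text

def burnout_risk(signal: WeeklySignal,
                 max_hours: float = 48.0,
                 min_sentiment: float = -0.2) -> bool:
    """Flag an employee as at risk if overworked and trending negative."""
    return signal.hours_worked > max_hours and signal.sentiment_score < min_sentiment

week = [
    WeeklySignal("rn-104", hours_worked=54.0, sentiment_score=-0.6),
    WeeklySignal("rn-221", hours_worked=38.5, sentiment_score=0.4),
]
at_risk = [s.employee_id for s in week if burnout_risk(s)]
print(at_risk)  # ['rn-104']
```

In practice the thresholds would come from a model trained on historical turnover data, not a fixed rule, but the shape of the check is the same.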

What are the risks? Who is at risk? Instead of a human having to run the analyses, you can have a model do that. Marketing is another big one, especially for hospital differentiation: you want the AI to articulate what makes this hospital or this clinic the best place to work, use that to curate marketing language and marketing strategy, and send it out to capture in-demand talent. All of this is possible, but it's not perfect by any means. There are big problems on the frontier of AI. For example, data scarcity. Remember, I said that LLMs are trained on the whole internet. Well, guess what?

There's not a lot of internet left once you've used the whole internet. That's becoming a new problem, because many of these labs are saying, "Hey, we're running out of data." We've captured the whole history of humanity as a snapshot derived from this thing called the internet, and we're running out; there are synthetic data techniques, but it's becoming a problem. Bias is the big one. If you're hiring for any position at all, you know that bias exists, and bias is exhibited by humans as well as AI; we'll talk about this in a second. AI carries whatever bias the data carries, so whatever biases the people producing that data hold, you can safely predict the LLM will hold them too. And sometimes it becomes a liability. What if you trust AI so much that it makes a bad decision for you, right?

Like, what if in the future we ask an LLM, "Should I launch nuclear missiles at Russia, yes or no?" and it messes up? Seems like a big liability, right? I don't think that particular scenario will play out, at least not that particular decision, but it's worth thinking about carefully. Then there's legislation. This is all happening now. A lot of new legislation is coming out, especially around AI in hiring practices, that you should all research and pay attention to, because it's going to completely shift how we apply AI.

But what's clear is that AI is changing hiring in a big way. 81% of HR managers are using AI in some way, and 76% are basically saying, "If my company doesn't adopt AI, we're going to fall behind."

And I like to look at where the money's flowing. With my background, including exposure to venture capital, where the money flows is, to some degree, a proxy for what smart people are predicting about the future. Tons of money is being invested in healthcare-related AI; it's a large share of overall healthcare investment. So there's a huge bet on AI, and in many places the bet is essentially that if we don't bet on AI, we're screwed.

And this is where we transition into a very important topic: AI ethics. AI ethics encapsulates a number of categories of thinking about how to steer AI in the right direction and hopefully prevent the "oh no" scenarios. Seriously, it's important. There's definitely a level of dystopian future we could see if we don't get this right, if we don't align AI. It won't be Terminator, but it could create some serious sociological and socioeconomic problems. So we've got to get it right.

Here's the breakdown of the problems, starting with immediate risks: biased decisions, for example. If you trust AI to make a hiring decision and it makes a bad one, cutting out perfectly suitable candidates for some arbitrary reason, and you don't know how to critique the results because it's a black box, then you basically succumb to AI bias.

Surprise: AI can make bad decisions. Hallucinations: it can hallucinate bad answers. You ask what three plus three is, and it might say seven. Well, now you're feeding that value into something downstream, maybe the thing that launches nuclear missiles at Russia, right? Then there's data poisoning, which is one people don't really think about as much.

But remember, data is the source of truth that drives the AI. If somebody is putting bad data into your data set and you don't really look at it, say you've got a billion records you're using to train this thing and someone injects records that provide false information, that will influence the output of the large language model. And then the most important part: AI doesn't start with human values. This one gets a little weird, because when you're using language models you think, "Oh, that's like a human," but the truth is, it's not. In fact, in some ways AI is more like an alien than a human in the way it thinks. It doesn't think in the temporally linear way we do, and it doesn't care about us, right?

So what's really important is to program AI to care about us, because if we don't, it won't, and we need it to, because it's going to keep making decisions for us, right? AI doesn't start with human values. That's the heart of AI ethics: privacy, governance, fairness. Is there accountability for how an organization is using AI? Do we have transparency into why the AI is producing these results? Is it explainable, meaning do we know why the AI is doing what it's doing, or do we have to go investigate? And reproducibility: can we get that same answer over and over again?

Ultimately, there's a really powerful quote from Hilke Schellmann, a prominent journalist: "One biased human hiring manager can harm a lot of people in a year, and that's not great. But an algorithm that is maybe used in all incoming applications at a large company… that could harm hundreds of thousands of applicants." That's really not great, right? So that's the risk of ignorance when we're leveraging AI; we should be conscientious when we do it.

Transitioning now to AI-powered recruitment advertising: how do you use AI to make advertising decisions? Setting the ethics conversation aside for a moment, there are a lot of useful things AI can do, and it will only make more decisions over time. If we're applying AI ethics, then ostensibly this thing is doing good, right?

There are a lot of questions involved in advertising, right? For example: where is my audience, and which channels are they on? How much is it going to cost to hire dentists in Atlanta, for example, in the future? Or which particular job board would be best to advertise on?

What's really cool is that we have a technology at Recruitics that answers these questions, called Brion. Brion is basically a conversational analyst. It gives you an AI superpower by equipping everybody to be an analyst at scale: you can ask it anything, it can make you look very smart, and it can save you a lot of time so you can focus on the things that matter more. That's what we built. Brion is a great example of this, as are the other technologies I was talking about earlier.

So, for example: which vendor offers the best conversion rate for the CPA? Brion answers that with a visualization you can grab and share. Powerful technology that we at Recruitics are really excited about.
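To give a feel for the kind of calculation behind an answer like that, here is an illustrative aggregation in pandas that ranks made-up vendors by conversion rate. It is not Brion's implementation, just a sketch of the underlying arithmetic.

```python
# Illustrative only: the sort of aggregation behind a question like
# "which vendor offers the best conversion rate?" Vendor names and numbers
# are invented for the example.
import pandas as pd

events = pd.DataFrame({
    "vendor":       ["JobBoardA", "JobBoardA", "JobBoardB", "JobBoardB", "JobBoardC"],
    "clicks":       [1200, 900, 2000, 1800, 600],
    "applications": [96, 80, 110, 95, 66],
})

# Sum clicks and applications per vendor, then compute conversion rate.
summary = events.groupby("vendor")[["clicks", "applications"]].sum()
summary["conversion_rate"] = summary["applications"] / summary["clicks"]
print(summary.sort_values("conversion_rate", ascending=False))
```

A conversational analytics layer essentially translates a plain-language question into a query like this and then renders the result as a chart.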

It's always inspiring to hear real-world examples of people who are using AI products and doing it responsibly. I was talking with someone earlier about how her team actually verifies that there's no bias. I think that's really the core of AI ethics: as a consumer of this technology, creating your own internal checks and balances. Moving forward, it's not just the technology that has to be good; it's also the users of the technology who come together and reduce bias through that kind of due diligence. So thank you; it's an honor that you're using our technology.

And a brief, shameless plug here: at Recruitics we're making hiring talent at scale easy. We've got an incredible team and incredible technologies enabling this. We're working with some of the biggest healthcare clients, as well as some other really big clients in other industries, like DoorDash, for example, and they're all leveraging our technology to accelerate their hiring and talent attraction strategies and build big pipelines.

After this, please find one of us if you're interested in continuing the conversation; we're happy to do that, and we can talk about AI and how it's going to destroy the world. In closing, the big thing with AI is that we're at an inflection point. The truth is, and I'll say this strongly: yes, there's plenty of AI hype, no doubt, and there are going to be problems for sure. But based on the stats I showed you, do you really think AI is just going to go away? It's not.

It's just going to get better and better. The hopeful vision I have is that we augment ourselves with AI. There's a really great co-authored paper, the "Jagged Frontier" paper, published maybe six months ago, a pretty large study, that talked about how the frontier of what AI can and can't do is jagged, not flat. What they meant is that there's a curve: there are things AI is really good at, better than humans, and things humans are really good at that AI is not, and it kind of goes back and forth like this:

And so there are things that humans are just really good at: creativity, human relationships, what I'm doing right now. If an AI were giving this talk, it would be pretty weird, right? So it's not so clear-cut. But what they also said is that in the future there are going to be two kinds of working relationships with AI. One is the cyborgs, think Terminator-style agents where the work is essentially handed over to AI, and the second is centaurs, and I love this: the centaur, a union of horse and human. That's what we all have to become. We augment ourselves with AI technologies, and we realize that AI is not simply here to replace us. It's not so scary, even if it is an existential topic.

More than that, it's an advancement and an enhancement to our abilities. I love the idea of a future where we're all able to do more of the things that are high-impact and creative. That seems like a great future, actually. I think AI is going to handle a lot of the rote work while we focus on the high-impact things that really feed us, feed our souls. That's all I've got for you. Thank you for listening, and if there are any questions, we can take them real fast before we have to go. Thanks.



Recruitics' innovative solutions and expert guidance can accelerate your ability to attract and hire top talent, ensuring you stay ahead in this dynamic field. Don't miss out on the opportunity to elevate your recruitment processes and drive success with AI.

Contact us today to start your AI journey with Recruitics. Together, we can shape the future of healthcare recruitment.
