Compliance Perspectives
SCCE
100 episodes
1 week ago
An SCCE Podcast
Education, Business, Non-Profit
Episodes (20/100)
Compliance Perspectives
Michael Savicki on Due Diligence During Mergers & Acquisitions [Podcast]
1 week ago
15 minutes 15 seconds

Compliance Perspectives
Katie Roemer on Neurodiversity as a Compliance Asset [Podcast]
2 weeks ago
10 minutes 13 seconds

Compliance Perspectives
Bailey Mack on the History of Privacy Legislation [Podcast]
3 weeks ago
8 minutes 5 seconds

Compliance Perspectives
Jay Greenberg on Executive Presence [Podcast]
1 month ago
10 minutes 10 seconds

Compliance Perspectives
Gabor Sulyok and Luciane Mallmann on a People-Centered Ethics and Compliance Framework [Podcast]
1 month ago
15 minutes 3 seconds

Compliance Perspectives
Alex Tyrrell on Shadow AI [Podcast]
By Adam Turteltaub
The rise of generative AI has brought transformative potential to healthcare—from streamlining administrative tasks to supporting clinical decision-making. But alongside these benefits comes a growing concern: Shadow AI. Alex Tyrrell, Chief Technology Officer, Health at Wolters Kluwer, explains in this podcast that this term refers to the use of unauthorized, unmonitored AI tools within organizations. In healthcare, where data privacy and patient safety are paramount, Shadow AI presents a unique and urgent challenge both now and in the future.

Healthcare professionals often turn to generative AI tools with good intentions—hoping to reduce documentation burdens, improve workflows, or gain insights from complex data. However, many of these tools are unproven large language models (LLMs) that operate as black boxes. They’re prone to hallucinations, lack transparency in decision-making, and may inadvertently expose Protected Health Information (PHI) to the open internet.

This isn’t just a theoretical risk. The use of public AI tools on personal devices or in clinical settings can lead to serious consequences, including:

* Privacy violations
* Legal and regulatory non-compliance
* Patient harm due to inaccurate or misleading outputs

Despite these risks, many healthcare organizations lack visibility into how and when these tools are being used. According to recent data, only 18% of organizations have a formal policy governing the use of generative AI in the workplace, and just 20% require formal training for employees using these tools.

It’s important to recognize that most employees aren’t using Shadow AI to be reckless—they’re trying to solve real problems. The lack of clear guidance, approved tools, and education creates a vacuum that Shadow AI fills. Without a structured approach, organizations end up playing a game of whack-a-mole, reacting to issues rather than proactively managing them.

So, what can healthcare organizations do to address Shadow AI without stifling innovation?

* Audit and Monitor Usage

Start with what you can control. For organization-issued devices, conduct periodic audits to identify unauthorized AI usage. While personal devices are harder to monitor, you can still gather feedback from employees about where they see value in generative AI. This helps surface use cases that can be addressed through approved tools and structured programs.

* Procure Trusted AI Tools

Use procurement processes to source AI tools from vetted vendors. Look for solutions with:

* Transparent decision-making processes
* Clear documentation of training data sources
* No use of patient data or other confidential information for model training

Avoid tools that lack explainability or accountability—especially those that cannot guarantee data privacy.

* Establish Structured Governance

Governance isn’t just about rules—it’s about clarity and oversight. Develop a well-articulated framework that includes:

* Defined roles and responsibilities for AI oversight
* Risk assessment protocols
* Integration with existing compliance and IT governance structures

Make sure AI governance is not siloed. Those managing AI tools should be at the table during strategic planning and implementation.

* Educate and Engage

Education is the cornerstone of responsible AI use.
1 month ago
10 minutes 20 seconds

Compliance Perspectives
Wendy Evans and Georgina Heasman on Interviewing the Subject of an Investigation [Podcasts]
By Adam Turteltaub

There are few parts of an investigation that are more stressful than the interview with the investigation’s subject. Done right, it can close all the loops. Done wrong, everything can unravel.

To learn how to handle things best, we turn in the second of our two podcasts on investigations to Wendy Evans, Senior Corporate Ethics Investigator at Lockheed Martin, and Georgina Heasman, Senior Manager, Global Investigations at Booking Holdings. The two of them are the co-authors of our new book Fundamentals of Investigations: A Practical Guide and lead our Fundamentals of Compliance Investigations Workshop.

In this podcast they offer a host of great insights including:

* While it’s generally best to interview the subject last, there are times, such as in cases of alleged harassment or data theft, where you likely will need to sit down for a preliminary interview sooner
* Be sure to get a read on the subject and be respectful of the stress that they are under, including giving them psychological space before asking tough questions
* Clarify your role in the process as a collector of facts and that you have not already decided that they are guilty
* Invite them to share their perspective both in the interview and, if other things come to mind, afterwards
* Remind them of the confidentiality of the process and the need to focus on the allegation, not who made it

Listen in to learn more, and be sure to investigate their book Fundamentals of Investigations:  A Practical Guide  and the Fundamentals of Compliance Investigations Workshop.
1 month ago
15 minutes 26 seconds

Compliance Perspectives
Georgina Heasman and Wendy Evans on Best Practices for Investigations [Podcasts]
By Adam Turteltaub

Few people know more about conducting a compliance investigation than Georgina Heasman, Senior Manager, Global Investigations at Booking Holdings, and Wendy Evans, Senior Corporate Ethics Investigator at Lockheed Martin. The two of them are the co-authors of our new book Fundamentals of Investigations: A Practical Guide and lead our Fundamentals of Compliance Investigations Workshop.

Not wanting to miss out on their expertise, we scheduled two podcasts with them.

In this, the first of the two, they share a broad overview of best practices for conducting investigations.  Those include ensuring that even compliance team members not responsible for investigations have at least a fundamental understanding of them.

As for the investigation itself, they explain, for it to go well it has to begin with the first report. There has to be a clear line of communication and a culture that encourages employees to come forward.

Once you receive that initial contact, it’s important to remember that it tells the story only from one side. You need to ask questions to clarify what was seen and heard and start thinking about what other information you will also need to gather.  To keep the information flowing, they recommend telling the reporter and everyone else you interview to reach out to you again if additional information comes to mind.

While testimonial evidence is invaluable, don’t stop there.  As you gather the who, what, when and where, be sure to look for the documentary evidence that you need, which requires having strong relationships with departments that have it, such as HR and security.

And, throughout the process, stay focused to avoid going down rabbit holes or getting inundated with more information than you need.

Listen in to learn more, and be sure to check out Fundamentals of Investigations:  A Practical Guide  and the Fundamentals of Compliance Investigations Workshop.
1 month ago
12 minutes 2 seconds

Compliance Perspectives
Veronica Xu on Compliance During a Government Raid [Podcast]
By Adam Turteltaub

Uh oh.  The Feds are in the front lobby with a search warrant.  Things are bad, and you don’t want anyone on site to make it worse.

The secret is preparation, shares Veronica Xu, SCCE & HCCA Board Member and Chief Compliance Officer, HIPAA Privacy Officer, ADA Administrator at Saber Healthcare Group.  That begins with establishing a cross-functional team that likely includes compliance, the general counsel, CEO, CTO and, depending on your industry, the chief medical officer and others.

Each should play a part in shaping the plan and be ready to play their part if a raid occurs.

In addition, onsite staff, right down to the receptionist, need to understand their responsibilities, including whom to call for help. Not only will that avoid very costly mistakes, it will help reduce errors, fear and stress at what will likely be an extremely difficult time.

What an individual gets trained on will vary by role.  Yet, there is one commonality to the training.  Everyone needs to know the importance of staying calm, being polite and respectful.

Be sure to also outline the do’s and don’ts.

There’s one other thing she strongly advises: remember to communicate with your workforce.  Be as transparent as possible and avoid conflicting messages.  That will keep the lines of communication open and help avoid the speculation that can make the disruption even worse.

Listen in to learn more, and then take a fresh look at your current plans for responding to a government raid.
2 months ago
14 minutes 9 seconds

Compliance Perspectives
Debbie Sabatini Hennelly on Chatbots, Trust and Reporting [Podcast]
By Adam Turteltaub

Employees may trust an AI chatbot more than they trust you, and that’s not necessarily a bad thing, if it leads to more reporting.

In this podcast, Debbie Sabatini Hennelly, Founder & President of Resiliti, shares that a recent survey conducted by Case IQ reveals that nearly 70% of respondents expressed no concerns about AI being involved in the helpline process. This openness is driven by several key factors: increased anonymity, ease of use, and a perception that AI offers a fairer, more impartial experience than speaking directly with a human.

These findings underscore a broader theme that continues to emerge in conversations about helplines: trust. Employees are more likely to report concerns or misconduct when they trust the system—when they believe their information will be handled confidentially, their identity protected, and their report taken seriously.

Not surprisingly, they also want to understand how their information is being used and how their anonymity is being safeguarded. This is especially important when helplines are outsourced to third-party vendors. Communicating clearly that the helpline is external—and therefore more secure and impartial—can go a long way in building trust.

But transparency doesn’t stop there. Employees also want to know what happens after they make a report. What’s the process? What can they expect next? Setting clear expectations and following through with updates helps reinforce that the organization is responsive and serious about addressing concerns.

It’s not enough to share this information only once a year during compliance training, she warns. Employees are constantly bombarded with messages and unless helpline communication is consistent and visible, it risks being forgotten or ignored.

Still, even with those reminders, barriers remain, especially fear of retaliation.

Organizations must address this head-on. First, there must be a clear, well-communicated prohibition against retaliation. But more importantly, leaders need to understand that retaliation isn’t always overt. It can be subtle—being passed over for key assignments, being excluded from team activities, or receiving the cold shoulder from colleagues.

Creating a culture where employees feel safe to speak up starts with leadership. Managers and executives must model the right behaviors, reinforce anti-retaliation policies, and foster an environment where concerns are welcomed, not punished.

One of the most critical—and often overlooked—elements of a successful helpline program is training leaders on how to respond when a report is made. Too often, well-meaning managers try to “get to the bottom of it” themselves. But when they start asking who reported what or conducting their own informal investigations, they can unintentionally obstruct the formal process and make employees feel unsafe.

A favorite tactic of hers for addressing this is to ask persistent leaders: “Do you want to be a witness and be deposed?” It’s a powerful reminder that involvement in an investigation has consequences—and that the best way to support the process is to let it unfold professionally and confidentially.

Listen in to learn more and, hopefully, get employees to trust and speak up more.
2 months ago
15 minutes 31 seconds

Compliance Perspectives
Evie Wentink on Tone in the Middle [Podcast]
By Adam Turteltaub

If all you’re worrying about is tone at the top, you’re missing a key portion of the choir. With most people reporting to middle managers, those managers play an integral role in ensuring a culture of compliance and ethics truly permeates the organization.

Evie Wentink, Senior Compliance Consultant at Ethical Edge Experts observes that while many organizations invest in crafting comprehensive codes of conduct and articulate expectations for ethical leadership, they often fall short in equipping managers with the tools, training, and support necessary to fulfill those expectations. This gap can undermine the effectiveness of compliance efforts and leave companies vulnerable to ethical lapses.

At the heart of the issue is a lack of intentional communication. Middle managers are frequently expected to embody and promote ethical leadership, yet they are rarely given a clear understanding of what that entails. To bridge this gap, organizations must develop structured plans that define ethical leadership in practical terms. These plans should include specific deliverables, resources, and expectations tailored to the manager’s role. By doing so, companies can ensure that managers are not only aware of their responsibilities but also empowered to carry them out effectively.

Authentic, ongoing conversations led by these managers are a cornerstone of a successful compliance culture. These discussions should not be limited to formal training sessions or annual reviews. Instead, they must be woven into the fabric of everyday operations. Managers should be encouraged—and required—to initiate “ethics or integrity minutes” at the start of team meetings. These brief segments provide a consistent opportunity to address ethical topics, reinforce values, and normalize open dialogue about compliance issues.

To support these conversations, organizations should provide managers with practical tools. These might include:

* Ethics spotlight cards that highlight key compliance themes.
* News articles that can be used to spark discussion around real-world ethical dilemmas.
* Access to updated policies and codes of conduct, with notifications when changes occur.

Tracking and analyzing these conversations is equally important. Compliance teams should maintain records of who is engaging in discussions, what topics are being covered, and which issues are generating the most questions. This data can be invaluable in identifying risk areas, refining training programs, and tailoring future communications. Often, the most common questions arise immediately after a training session, indicating that such moments are prime opportunities for deeper engagement.

Moreover, it’s essential to recognize the broader impact of middle management on organizational integrity. Prosecutors and regulators increasingly view middle managers as pivotal figures in corporate misconduct cases. Their actions—or inactions—can significantly influence whether a company succeeds or fails in maintaining ethical standards. Consequently, fostering a culture of accountability and proactive communication at this level is not just beneficial—it’s critical.

Ultimately, the goal is to create an environment where ethical conversations are natural, frequent, and valued. When managers consistently lead by example and facilitate open dialogue, employees become more comfortable raising concerns and asking questions. This cultural shift enhances transparency, reduces risk, and strengthens the overall integrity of the organization.

In summary, bridging the compliance gap at the middle management level requires a multifaceted approach: clear expectations, practical tools, authentic conversations, and ongoing tracking. By investing in these areas,
2 months ago
10 minutes 23 seconds

Compliance Perspectives
Alessia Falsarone on AI Explainability [Podcast]
By Adam Turteltaub

Why did the AI do that?

It’s a simple and common question, but the answer is often opaque, with people referring to black boxes, algorithms and other words that only those in the know tend to understand.

Alessia Falsarone, a non-executive director of Innovate UK, says that’s a problem. In cases where AI has run amok, the fallout is often worse because the company is unable to explain why the AI made the decision it made and what data it was relying on.

AI, she argues, needs to be explainable to regulators and the public.  That way all sides can understand what the AI is doing (or has done) and why.

To create more explainable AI, she recommends the creation of a dashboard showing the factors that influence the decisions made.  In addition, teams need to track changes made to the model over time.

By doing so, when the regulator or public asks why something happened, the organization can respond quickly and clearly.

In addition, by embracing a more transparent process, and involving compliance early, organizations can head off potential AI issues early in the process.

Listen in to hear her explain the virtues of explainability.
2 months ago
13 minutes 53 seconds

Compliance Perspectives
Josh Drew on What’s New with the False Claims Act [Podcast]
By Adam Turteltaub

Despite being a Civil War era statute, the False Claims Act (FCA) always has something new going on.  To find out what’s hot these days, we spoke with Joshua Drew (LinkedIn), a former federal prosecutor and chief compliance officer and currently a Member at Miller & Chevalier.

Lately, he explains, there has been a steady stream of activity.

* May: The Civil Rights Fraud Initiative was announced by the administration and proposes to use the FCA against any federal funding recipient that it believes is operating DEI initiatives that violate antidiscrimination laws.
* July: A new working group was created between the DOJ and HHS to focus on healthcare and life sciences. It encouraged whistleblowers to file actions in areas such as Medicare Advantage; drug, device and biologics pricing; and barriers to patient access, amongst others.
* A trade task force was created to encourage whistleblowing against tariff violators.

All of this occurs against a backdrop of activity by the Administration to identify and fight waste, fraud and abuse.

Listen in to learn more about where the Administration is focusing and what compliance teams can learn from recent actions.
2 months ago
11 minutes 55 seconds

Compliance Perspectives
Zahra Timsah on Agentic AI [Podcast]
By Adam Turteltaub

The possibilities of AI don’t stop with generative AI such as ChatGPT. Agentic AI may have more potential for compliance teams, Zahra Timsah, co-founder and CEO of i-GENTIC AI, tells us.

Unlike generative AI, which is well known for its ability to create content, agentic AI can be used as an internal enforcement agent. Trained properly, she tells us, it can look for a potential violation and stop it. For example, it can spot personal health information that is about to be transferred and redact the sensitive data automatically.
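
To make the redact-before-transfer idea concrete, here is a deliberately simplified sketch. It stands in for the detection step with two regular expressions; an actual agentic AI system would rely on trained detectors and policy rules rather than hard-coded patterns, and the names (PHI_PATTERNS, redact_phi) and sample text below are hypothetical illustrations, not anything from i-GENTIC AI.

```python
import re

# Toy illustration of "spot PHI and redact it before transfer."
# Real agentic AI would use trained models and policy checks, not two regexes;
# these patterns are simplistic placeholders.
PHI_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
}

def redact_phi(text: str) -> str:
    """Replace anything matching a PHI pattern with a labeled placeholder."""
    for label, pattern in PHI_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

outbound = "Patient MRN: 00123456 (SSN 123-45-6789) is cleared for transfer."
print(redact_phi(outbound))
# Patient [REDACTED MRN] (SSN [REDACTED SSN]) is cleared for transfer.
```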

This ability to step in and take action will, she believes, free compliance teams from many routine tasks and allow them to shift their focus to matters that are more complex and fall within the grey area. It will also help teams speed up the rate at which new laws and regulations turn into effective internal policies.

In addition, agentic AI will be able to produce measurable value by demonstrating what it can do to manage risk, improve trust and increase efficiency.

Listen in to learn more about agentic AI’s ability to improve your compliance program.
2 months ago
9 minutes 34 seconds

Compliance Perspectives
Lewis Eisen on Writing Policies More Effectively [Podcast]
By Adam Turteltaub

Lewis Eisen (LinkedIn) is the author of the book RULES: Powerful Policy Wording to Maximize Engagement, and he wants to change the way people think about and write  policies.

Too often, he observes, policies contain parent-child language, with a scolding tone that turns people off and keeps them from wanting to read the policy, or even follow it. They also contain a great deal of complexity, laying out all the many processes and procedures.

Instead, he recommends that companies adopt policy statements that are simpler and tie to values that people can identify with. All the other stuff – complex procedures, examples, background and so forth – belongs elsewhere, he argues, for employees to see after they have had the opportunity to read the policy and buy into it.

It’s an intriguing approach.  Listen in to learn more about how to reimagine your policy-making process.
2 months ago
13 minutes 22 seconds

Compliance Perspectives
Andrew McBride on AI Use Cases for Compliance Programs [Podcast]
By Adam Turteltaub

Andrew McBride, Founder & Chief Executive Officer at Integrity Bridge, recently wrote an article entitled Generative Artificial Intelligence Use Cases for Ethics & Compliance Programs.  Intrigued by the topic, I sat down with him for this podcast.

He shared that many compliance teams are charged with using AI but may not have the desire or know-how to create and implement a use case.

He shares that AI is very good at performing a specific role and a specific activity. Consequently, compliance teams should consider not just the use of AI as a whole but the specific needs that they have for it. He gives five specific use cases:

* Interpreter. AI can translate documents and training in seconds.  It can also help you distill long documents into pithy, usable summaries both for you and management.
* Drafter.  It can draft from scratch or improve what you have already put together, even creating interactive scenarios that can be useful in training.
* Researcher. You do have to be mindful of hallucinations, but if you set up the AI to only use your own data or a trusted set of sources, it is more reliable. Do, though, always check its work.
* Data Analyst. As compliance teams are called to amass and analyze more data, AI can help you do it, identifying, for example, relationships between training and calls to the helpline.
* Monitor, Investigator, Auditor. AI can review both structured and unstructured data, helping you identify red flags.

Listen in to learn more, and then, start building your own use case for generative AI.
2 months ago
12 minutes 7 seconds

Compliance Perspectives
Kristy Grant-Hart on Due Diligence Questionnaires [Podcast]
By Adam Turteltaub

Why?

Why are you asking that?

Do you really need to know it?

Is it going to tell you something you need to know?

Is it a question that anyone could even answer?

All of these are questions to ask yourselves and colleagues when they propose adding an item to your due diligence questionnaire.

As Kristy Grant-Hart (LinkedIn), author, speaker and Head of Advisory at Spark Compliance, which is now owned by Diligent, explains, too often due diligence questionnaires are filled with questions that are unnecessary at best and counterproductive at worst. They are born out of a desire to cover all the bases, not necessarily to get you just the information you need.

Instead of throwing in everything including the kitchen sink, it’s far better to take, as elsewhere, a risk-based approach.  Work directly with those who own the risk review.  And, if the response doesn’t matter, don’t ask the question.

Listen in to learn more about how to create a due diligence questionnaire that gets the answers you need, and not the ones you don’t.
2 months ago
11 minutes 16 seconds

Compliance Perspectives
Vera Cherepanova on Governance and Compliance [Podcast]
By Adam Turteltaub

With ever more attention paid to the role of boards in overseeing compliance, the question naturally comes up:  Do boards even understand what makes for an effective compliance program?  To help answer that question we spoke with Vera Cherepanova (LinkedIn), Executive Director of the non-profit Boards of the Future.

She shares the unfortunate news that many boards are not where they should be.  They are not fully seeing culture as a risk factor and driver of misconduct.  Nor do many understand their own duty to manage it.

That’s dangerous in these times, especially now that governments are paying closer attention to culture.

Forces, though, are starting to change the equation and force boards to understand the role they and compliance play together in ensuring both integrity within the company and business success. Supply chain issues and ESG, for example, have brought compliance into closer contact with the governing authority. So, too, has regionalization. As countries take divergent paths on more and more issues, the compliance team will be essential in helping the board understand the risks that they face.

More, though, will need to be done.  Boards need to start addressing issues such as values conflicts like they do other risks.  And, more people with compliance experience should be added to boards.

Listen in to learn more about what boards are and are not doing.
3 months ago
15 minutes 15 seconds

Compliance Perspectives
Ed White on Value-Based Care [Podcast]
By Adam Turteltaub

With a rising focus on value-based care, and a new program seeking to make the approach mandatory, we spoke with Ed White (LinkedIn), Partner at Nelson Mullins.

Previous efforts to move toward value-based models, such as Accountable Care Organizations (ACOs), faced significant barriers due to regulatory frameworks like the Stark Law and Anti-Kickback Statute. These laws were designed to prevent financial incentives from influencing medical decisions, but they also limited the ability of hospitals and physicians to collaborate in ways necessary for effective value-based care implementation.

Recognizing these constraints, CMS and the Office of Inspector General (OIG) collaborated in 2020 to issue new regulations aimed at facilitating the transition to value-based care.

The next step in the transition is the new Transforming Episode Accountability Model (TEAM) program, which will become mandatory in 2026. This program includes 740 hospitals across the country and targets five specific surgical procedures. Participating hospitals must coordinate care with a range of providers—including specialists, primary care physicians, labs, durable medical equipment (DME) providers, hospice agencies, and others.

The TEAM program is designed to last for five years, during which time hospitals are responsible for ensuring that patients are connected to appropriate post-discharge care, including follow-up with primary care providers. The goal is to reduce complications, avoid emergency room readmissions, and promote better health outcomes—all while keeping costs below a CMS-established target price.

To drive efficiency, the TEAM program introduces three financial risk “tracks”:

* Upside-only track – Hospitals can earn shared savings if costs come in below the target price.
* Moderate risk (upside/downside) track – Hospitals can either earn savings or incur penalties depending on performance.
* Full-risk track – This track will offer both greater risks and rewards.

According to industry consultants, two-thirds of participating hospitals are expected to lose money in the early phases of the TEAM program.
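
To make the three tracks concrete, here is a minimal numeric sketch of how a single episode might reconcile against a CMS target price. Every number, the 10% cap on the moderate track, and the function name reconcile_episode are hypothetical illustrations; the actual TEAM methodology includes quality adjustments and CMS-defined risk limits that are not modeled here.

```python
# Hypothetical illustration of the TEAM risk tracks, not CMS's actual
# reconciliation methodology (quality adjustments and official risk caps
# are omitted; the 10% cap below is an invented placeholder).

def reconcile_episode(actual_cost: float, target_price: float, track: str) -> float:
    """Return the hospital's gain (+) or loss (-) for a single episode."""
    delta = target_price - actual_cost  # positive means costs came in under target
    if track == "upside_only":
        return max(delta, 0.0)          # shared savings only, no penalty
    if track == "moderate":
        cap = 0.10 * target_price       # placeholder cap on gains and losses
        return max(min(delta, cap), -cap)
    if track == "full_risk":
        return delta                    # full upside and full downside
    raise ValueError(f"unknown track: {track}")

# Example: a $25,000 target price
print(reconcile_episode(23_500, 25_000, "upside_only"))  # 1500.0 in shared savings
print(reconcile_episode(27_000, 25_000, "moderate"))     # -2000.0 penalty
```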

Hospitals must rethink their compliance, care coordination, and partnership strategies in the wake of these changes. Listen in to learn more about what this all means for your compliance program both today and in the future.
3 months ago
15 minutes 58 seconds

Compliance Perspectives
Kortney Nordrum on Life After Compliance [Podcast]
By Adam Turteltaub

Imagine that it’s time to move on from compliance to another role, either by choice or by being voluntold. Does what you learned in compliance help?

Absolutely, according to Kortney Nordrum, Vice President and Senior Corporate Counsel at Deluxe.  Amongst other benefits, it taught her how to break down large issues into more manageable pieces, better identify and manage risks and help deals close.

That isn’t to say the transition has come without challenges.  She has had to learn to trust others to run compliance and also to be less risk averse.

Listen in to learn more about how your compliance skills can help if your career ever takes you to another profession.
3 months ago
14 minutes 11 seconds
