How do I make my conda environment available on executor nodes when running pyspark jobs using Kedro?

Hi All,

In order to get the best help, it is suggested to answer the following questions:

What is the goal you are trying to achieve?
I am looking to write Spark DataFrames to HDFS using PySpark. All of my libraries are installed in a conda environment. When I don't run my jobs on YARN, everything runs fine.
But when I run them on YARN, I get the error below:

Caused by: java.io.IOException: Cannot run program "./environment/bin/python": error=2, No such file or directory

I am currently using Kedro version 0.17.3.
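For reference, a relative path like ./environment/bin/python in the error usually comes from a conda-pack based setup, where the packed environment is shipped to the cluster as a YARN archive and unpacked next to each container. A rough sketch of such a configuration, using Kedro's conventional conf/base/spark.yml file, is below (the archive location and name are placeholders, not taken from my project):

```yaml
# conf/base/spark.yml -- Spark properties picked up by the Kedro project
spark.master: yarn
spark.submit.deployMode: cluster
# Ship the environment packed with `conda pack -o environment.tar.gz`;
# YARN unpacks it in each container under the alias after the '#':
spark.yarn.dist.archives: hdfs:///path/to/environment.tar.gz#environment
# Point the application master and the executors at the shipped interpreter:
spark.yarn.appMasterEnv.PYSPARK_PYTHON: ./environment/bin/python
spark.executorEnv.PYSPARK_PYTHON: ./environment/bin/python
```

Note that the relative ./environment/bin/python path only exists inside YARN containers; a driver running in client mode on the edge node would not find it, which may be related to the "No such file or directory" error above.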