Running a Kedro pipeline from a Flask API

Hi
I am trying to build a Flask API that receives a dataset and runs a Kedro pipeline, using Kedro 0.17.0. The given dataset is supposed to be the input to the pipeline. Currently I am doing it like this:

import pandas as pd

from kedro.framework.session import KedroSession
from kedro.io import MemoryDataSet
from kedro.runner import SequentialRunner

data = pd.read_parquet("my_data.parquet")
with KedroSession.create("src.my_project") as session:
    context = session.load_context()
    # work on a copy so the session's own catalog is left untouched
    io = context.catalog.shallow_copy()
    io.add("pipeline_input", MemoryDataSet())
    io.save("pipeline_input", data)
    results = SequentialRunner().run(context.pipelines["monitoring_pipeline"], io)

Basically I am copying the data catalog, adding the dataset to the copy, and passing the copy to the SequentialRunner, which works fine.
What I'd like to know is whether there is a way to do this with context.run(), without copying the catalog and running the pipeline manually with a SequentialRunner?
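
For reference, if I understand the API correctly, the add/save pair can be collapsed into a single call with DataCatalog.add_feed_dict, which wraps plain Python objects like DataFrames in a MemoryDataSet automatically. It is still the same copy-the-catalog approach, just a bit shorter:

import pandas as pd

from kedro.framework.session import KedroSession
from kedro.runner import SequentialRunner

data = pd.read_parquet("my_data.parquet")
with KedroSession.create("src.my_project") as session:
    context = session.load_context()
    io = context.catalog.shallow_copy()
    # add_feed_dict wraps the raw DataFrame in a MemoryDataSet for us
    io.add_feed_dict({"pipeline_input": data}, replace=True)
    results = SequentialRunner().run(context.pipelines["monitoring_pipeline"], io)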

Cheers!
