The ups and downs of deploying AI in the wild: A PySpark story
01/10, 17:00–17:35 (Europe/Madrid), Katherine Johnson (Teoría 7)
Language: English

Have you ever faced a professional challenge where you just didn’t know where to start?

What happens to an AI model once it’s been trained and tested? In this talk, I’m going to share how I became a data engineer for two weeks, on a mission to deploy our Python AI models in a Cloudera data lake using PySpark.


In this personal story of a real-life challenge, I will discuss the AI solution our team developed in Python to classify bank transactions. We will explore the challenges we faced while deploying the solution with PySpark and integrating it into the client's platform. Finally, we will look at what the future holds for our AI solution as the project moves through its next steps towards production.
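
As a rough illustration of the kind of deployment pattern the abstract refers to, the sketch below wraps a pre-trained Python classifier in a PySpark pandas UDF so it can score transactions inside the data lake. This is an assumption for illustration only, not the speaker's actual code: the model artifact, column names, and paths are all hypothetical.

    # Illustrative sketch only: apply an existing Python (scikit-learn) classifier
    # to transaction data in Spark using a pandas UDF. All names and paths are
    # hypothetical stand-ins for whatever the real project uses.
    import joblib
    import pandas as pd
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import pandas_udf
    from pyspark.sql.types import StringType

    spark = SparkSession.builder.appName("transaction-classifier").getOrCreate()

    # Hypothetical artifact from the training phase (e.g. a TF-IDF + classifier pipeline).
    model = joblib.load("models/transaction_classifier.joblib")
    broadcast_model = spark.sparkContext.broadcast(model)  # ship the model to executors once

    @pandas_udf(StringType())
    def classify(description: pd.Series) -> pd.Series:
        # Each batch arrives as a pandas Series, so the trained Python model
        # can be reused unchanged inside the Spark job.
        return pd.Series(broadcast_model.value.predict(description))

    # Hypothetical input/output locations in the data lake.
    transactions = spark.read.parquet("/data/lake/transactions")
    classified = transactions.withColumn("category", classify("description"))
    classified.write.mode("overwrite").parquet("/data/lake/transactions_classified")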


Proposal level

Advanced

Topic

Data Science, Machine Learning and AI

Data scientist at GFT Group on the Artificial Intelligence Strategic Initiative team.

Graduated in Mathematics and Physics with a Master's in Big Data, and now dedicated to developing machine learning models. Specialized in NLP, with experience on projects such as automated document processing and text classification.

Currently excited to grow in the areas of MLOps and Data Engineering, and passionate about AI ethics and responsibility.