Based in Berlin. I architect robust data pipelines, optimize cloud-native data warehouses, and build high-volume, real-time streaming solutions.
With over seven years of experience in data engineering, I specialize in designing, building, and maintaining high-performance data systems. Living and working in Berlin, I thrive on tackling complex architectural challenges.
Whether it's optimizing ETL pipelines (ODI), managing massive-scale streaming data (Kafka, Spark), or implementing cloud-native data warehouses (BigQuery), I focus on creating reliable infrastructure that empowers business intelligence.
When I'm not architecting pipelines, you can find me...
Authoring articles on the cosmos, from distant stars to complex physics.
Following the latest in auto tech and performance.
Crafting intricate worlds and epic campaigns as a dedicated Dungeon Master.
The tools and technologies I use to build data solutions.
Python, SQL, Java, C#, Shell Scripting
GCP (BigQuery, Storage, PubSub), Docker, Git, Bitbucket
BigQuery, PostgreSQL, Oracle, HBase, MongoDB, SQL Server
Apache Spark, Kafka, Airflow, Hadoop, Hive, Impala, ODI, Phoenix
Validated expertise in cloud and data platforms.
A selection of data challenges I've solved.
Role: Senior Big Data Engineer (Consultant)
Jul 2022 - Present
Architected and optimized high-volume, real-time streaming pipelines for Vodafone's core analytics platform. Led the migration of a complex streaming project to a new Hadoop cluster, and built and configured test environments to ensure quality.
Role: Big Data Engineer (Consultant)
May 2021 - Jul 2022
Developed and optimized ETL pipelines for predictive analytics in financial risk assessment, improving processing efficiency.
Role: Software Developer
Nov 2019 - May 2021
Developed backend components and Oracle CC&B-based Java web service pipelines for customer invoicing and payment processing.
I'm always open to discussing complex data engineering challenges, cloud architecture, or opportunities in Berlin (and beyond).