Senior Data Engineer (m/f/d)

Job Description

We are seeking an experienced Senior Data Engineer to contribute to designing and building scalable SaaS products within our AI Lab. In this role, you'll combine deep technical expertise with strategic vision to build AI-powered products that will help transform our clients' business models and enable their growth.

Simon-Kucher is at the forefront of innovation in driving commercial excellence, revamping business models, and developing solutions and methodologies that unlock better growth for our clients. Within the AI Lab, we are developing cutting-edge, large-scale AI products that deliver sustained top-line impact for our clients.

Are you interested in working in a team of AI evangelists with a can-do attitude? Want to experience the dynamics of agile processes in open-minded teams? How about getting creative in a startup atmosphere with a steep development curve and flat hierarchies? And most importantly, do you want to make a difference? Then you've come to the right place.

Simon-Kucher is a global consultancy with more than 2,000 employees in 30+ countries.
Our sole focus is on unlocking better growth that drives measurable revenue and profit for our clients. We achieve this by optimizing every lever of their commercial strategy – product, price, innovation, marketing, and sales – based on deep insights into what customers want and value. With 40 years of experience in monetization topics of all kinds, we are regarded as the world's leading pricing and growth specialist.


Your responsibilities

  • Develop and maintain data architecture: create and manage robust data architectures that support high-volume, high-throughput SaaS applications, focusing on reliability and scalability.
  • Design and implement batch/stream pipelines (CDC, API, files) with schema evolution, idempotency, and data quality gates.
  • Integrate internal and external data sources, structured and unstructured (e.g., pricing databases, market benchmarks, CRM).
  • Model core entities and features; choose storage layouts and partitioning; build reusable data products.
  • Implement entity resolution and fuzzy matching; evaluate and tune matching quality.
  • Implement ETL/ELT processes: develop processes for extracting, transforming, and loading data from multiple sources into data warehouses or lakes for analytical use.
  • Ensure data quality and security: implement data validation, cleansing routines, and security measures, including encryption and access controls, to ensure data accuracy, privacy, and compliance with regulations.
  • Own orchestration, lineage, and observability; define SLAs and error budgets.
  • Partner with the Product Owner to translate customer needs into scalable data and ML solutions.
  • Partner with ML/MLOps engineers on feature pipelines.
  • Work with the Cloud Platform Engineer to deploy and manage services securely.

Technical Expertise

  • 6+ years in data engineering at scale; strong Python/SQL; Spark or Flink; Parquet/columnar formats.
  • Experience with big data processing frameworks like Apache Spark and messaging systems like Kafka.
  • Orchestration (Airflow/Argo/Step Functions); IaC (Terraform/CDK) basics.
  • Data modeling for analytics/ML; partitioning, z‑ordering/clustering; performance tuning.
  • Experience with probabilistic linkage/fuzzy matching (e.g., blocking, string similarity, Fellegi–Sunter‑style models).
  • Proficiency with cloud-native services on AWS for data processing and storage; knowledge of Azure is a plus.
  • Databases: hands-on experience with both SQL and NoSQL databases.
  • Security & governance (PII handling, encryption, IAM, row/column‑level policies).
  • Open table formats (Iceberg/Delta/Hudi); dbt; Kafka/MSK/Kinesis; Great Expectations/Deequ.

Problem-Solving, Collaboration & Domain Expertise

  • Analytical thinking: Ability to analyze complex datasets and design efficient data solutions.
  • Collaboration: Strong ability to collaborate effectively with cross-functional teams.
  • Communication: Excellent communication skills to explain technical concepts to both technical and non-technical stakeholders.
  • Strong understanding of AI/ML concepts for effective collaboration with data scientists.
  • Key success metrics for the role: pipeline reliability (≥99% on-time SLAs), data quality (rule pass rate), cost per processed TB, and match precision/recall.

What we offer

  • Advance your career with exciting professional opportunities in our thriving company with a startup feel
  • Add to your experience with our projects that focus on growth, have a positive impact, and truly matter
  • Voice your unique ideas in a corporate culture defined by our entrepreneurial spirit, openness, and integrity
  • Feel at home working with our helpful, enthusiastic colleagues who have great team spirit
  • Broaden your perspective with our extensive training curriculum and learning programs (e.g. LinkedIn Learning)
  • Speak your mind in our holistic feedback and development processes (e.g. 360-degree feedback)
  • Satisfy your need for adventure with our opportunities to live and work abroad in one of our many international offices
  • Enjoy our benefits, such as hybrid working, daycare allowance, corporate discounts, and wellbeing support (e.g. Headspace)
  • Unwind in our break areas where you can help yourself to the healthy snacks and beverages provided
  • See another side of your coworkers at our frequent employee events and highly anticipated World Meeting and Holiday Party

We believe in building a culture that embraces diversity, equity, and inclusion, creating an environment in which our people feel valued, are able to be themselves and feel their contribution matters. If we get that right, remarkable things will happen; people will grow faster, innovate, feel valued, and create better outcomes for everyone – our people, our clients and, of course, our business.
