Dunamu is seeking a Data Engineer to join its data hub organization, which manages large-scale data and designs flexible, scalable systems. The role focuses on maintaining stable data storage and processing environments while supporting business performance through fast data analysis and processing.
As a Data Engineer, you'll have the opportunity to design and operate large-scale distributed processing systems, establish and manage company-wide standards for central data management, gain experience with high-level compliance and governance requirements, and work with both real-time and batch data pipelines.
The ideal candidate will have at least 3 years of data engineering experience, with expertise in distributed processing frameworks such as Apache Spark and Apache Flink. Experience with cloud environments, Kubernetes, and data pipeline construction is essential. Knowledge of data lakehouse technologies such as Apache Iceberg and Apache Hudi is a plus.
The position offers the chance to work with cutting-edge technology in a dynamic fintech environment, with opportunities to improve service quality through knowledge sharing with colleagues. The role is based in Seoul, South Korea, and centers on building robust data infrastructure that supports a wide range of services and applications.
The company provides a structured onboarding process with a 3-month probation period and offers visa sponsorship for international candidates. They particularly value candidates who can demonstrate experience in handling large-scale data pipelines and contributing to data quality improvements.