Software Engineer - Data Infrastructure - Kafka

Canonical is a pioneering tech firm that is at the forefront of the global move to open source, publishing Ubuntu, one of the most important open source projects and the platform for AI, IoT and the cloud.
Seville, Spain
Data
Mid-Level Software Engineer
Remote
1,000 - 5,000 Employees
3+ years of experience
AI · Enterprise SaaS · Cloud
Description For Software Engineer - Data Infrastructure - Kafka

Canonical is building a comprehensive automation suite to provide multi-cloud and on-premise data solutions for the enterprise. The data platform team is responsible for developing managed solutions for a full range of data stores and technologies, from big data to NoSQL, cache-layer capabilities, analytics, and structured SQL engines.

As a Software Engineer in the Data Infrastructure team, you will:

  • Collaborate with a distributed team to create new features using high-quality, idiomatic Python code
  • Debug issues and interact with upstream communities publicly
  • Work on fault-tolerant mission-critical distributed systems
  • Focus on the creation and automation of infrastructure features for data platforms
  • Ensure fault-tolerant replication, TLS, installation, backups, and more for Big Data technologies like Kafka and Spark
  • Provide domain-specific expertise on data systems to other teams within Canonical
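To give a flavour of the automation work described above, here is a minimal sketch in Python of rendering a Kafka broker configuration fragment that enables TLS and fault-tolerant replication. The property keys are standard Kafka broker settings, but the function name, paths, and ports are hypothetical illustrations, not Canonical's actual tooling:

```python
# Minimal sketch: render a Kafka server.properties fragment enabling
# TLS listeners and fault-tolerant replication. The helper name, file
# paths, and port are hypothetical, not Canonical's real automation.

def render_kafka_config(broker_id: int, replication_factor: int = 3,
                        keystore_path: str = "/etc/kafka/ssl/keystore.jks") -> str:
    """Return a server.properties fragment as a string."""
    settings = {
        "broker.id": broker_id,
        # Replicate every partition across brokers so a single node
        # failure does not lose data.
        "default.replication.factor": replication_factor,
        "min.insync.replicas": max(1, replication_factor - 1),
        # TLS-encrypted listener for client traffic.
        "listeners": "SSL://0.0.0.0:9093",
        "ssl.keystore.location": keystore_path,
        "ssl.client.auth": "required",
    }
    return "\n".join(f"{key}={value}" for key, value in settings.items())

print(render_kafka_config(broker_id=1))
```

In practice this kind of templating would be driven by a charm or operator rather than a standalone script, but it illustrates the category of infrastructure feature the team builds and maintains.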

Requirements:

  • Proven hands-on experience in software development using Python
  • Proven hands-on experience in distributed systems, such as Kafka and Spark
  • Bachelor's degree, or equivalent, in Computer Science or another STEM field
  • Willingness to travel up to 4 times a year for internal events

Additional skills that would be beneficial:

  • Experience operating and managing other data platform technologies (SQL, NoSQL)
  • Linux systems administration and infrastructure operations experience
  • Experience with public cloud or private cloud solutions like OpenStack
  • Experience with operating Kubernetes clusters

Canonical offers a competitive base pay, fully remote working environment, personal learning and development budget, annual compensation review, recognition rewards, annual holiday leave, parental leave, and more. The company values diversity and fosters a workplace free from discrimination.

Join Canonical to be part of a pioneering tech firm that's changing the world on a daily basis through open source technology.

Last updated 7 months ago

Benefits For Software Engineer - Data Infrastructure - Kafka

  • Equity
  • Fully remote working environment
  • Personal learning and development budget of USD 2,000 per annum
  • Annual compensation review
  • Recognition rewards
  • Annual holiday leave
  • Parental leave
  • Employee Assistance Programme
  • Opportunity to travel to new locations to meet colleagues twice a year
  • Priority Pass for travel and travel upgrades for long-haul company events
