Software Engineer - Data Infrastructure - Kafka

Canonical is a pioneering tech firm at the forefront of the global move to open source, publisher of Ubuntu, one of the most important open source projects and the platform for AI, IoT and the cloud.
Seville, Spain
Data
Mid-Level Software Engineer
Remote
1,000 - 5,000 Employees
3+ years of experience
AI · Enterprise SaaS · Cloud
Description For Software Engineer - Data Infrastructure - Kafka

Canonical is building a comprehensive automation suite to provide multi-cloud and on-premise data solutions for the enterprise. The data platform team is responsible for developing managed solutions for a full range of data stores and technologies, from big data to NoSQL, cache-layer capabilities, analytics, and structured SQL engines.

As a Software Engineer in the Data Infrastructure team, you will:

  • Collaborate with a distributed team to create new features using high-quality, idiomatic Python code
  • Debug issues and interact with upstream communities publicly
  • Work on fault-tolerant mission-critical distributed systems
  • Focus on the creation and automation of infrastructure features for data platforms
  • Ensure fault-tolerant replication, TLS, installation, backups, and more for Big Data technologies like Kafka and Spark
  • Provide domain-specific expertise on data systems to other teams within Canonical

Requirements:

  • Proven hands-on experience in software development using Python
  • Proven hands-on experience in distributed systems, such as Kafka and Spark
  • Bachelor's or equivalent in Computer Science, STEM, or a similar degree
  • Willingness to travel up to 4 times a year for internal events

Additional skills that would be beneficial:

  • Experience operating and managing other data platform technologies (SQL, NoSQL)
  • Linux systems administration and infrastructure operations experience
  • Experience with public cloud or private cloud solutions like OpenStack
  • Experience with operating Kubernetes clusters

Canonical offers a competitive base pay, fully remote working environment, personal learning and development budget, annual compensation review, recognition rewards, annual holiday leave, parental leave, and more. The company values diversity and fosters a workplace free from discrimination.

Join Canonical to be part of a pioneering tech firm that's changing the world on a daily basis through open source technology.

Last updated 8 months ago

Benefits For Software Engineer - Data Infrastructure - Kafka

  • Equity
  • Fully remote working environment
  • Personal learning and development budget of USD 2,000 per annum
  • Annual compensation review
  • Recognition rewards
  • Annual holiday leave
  • Parental Leave
  • Employee Assistance Programme
  • Opportunity to travel to new locations to meet colleagues twice a year
  • Priority Pass for travel and travel upgrades for long haul company events
