Data Engineer (AWS, Databricks)

NISC develops technology solutions for utility and broadband companies, serving over 950 Members with 16 million end customers.
Locations: Cedar Rapids, IA, USA; Lake St. Louis, MO, USA; Mandan, ND 58554, USA
Category: Data
Level: Mid-Level Software Engineer
Work Arrangement: Hybrid
Company Size: 501 - 1,000 Employees
Industry: Enterprise SaaS

Description For Data Engineer (AWS, Databricks)

National Information Solutions Cooperative (NISC) is seeking an experienced Data Engineer to join their growing team of data analytics experts. With over 50 years of experience, NISC develops technology solutions for utility and broadband companies, serving over 950 Members with 16 million end customers.

The Data Engineer will be responsible for:

  • Curating and optimizing data and data pipeline architecture
  • Optimizing data flow and collection for various application teams
  • Supporting application experts, software developers, database architects, and data analysts on a Data Roadmap strategy
  • Ensuring the data delivery architecture remains consistent and optimal across ongoing projects
  • Supporting the data needs of multiple teams, systems, and products

The ideal candidate will thrive in a team environment, be committed to accomplishing common goals, and be excited about optimizing or re-designing the company's data architecture.

Key Requirements:

  • Experience with AWS technologies, Databricks, and Delta Lake (see the sketch after this list)
  • Proficiency in big data tools such as Hadoop, Spark, and Kafka
  • Experience with relational SQL databases such as Oracle and Postgres, and with NoSQL databases such as Cassandra and DynamoDB
  • Familiarity with data pipeline and workflow management tools like Hevo Data and Airflow
  • Experience with stream-processing systems like Apache Spark and Kafka Streams
  • Proficiency in object-oriented languages such as Java and Scala
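
The Databricks and Delta Lake item above is easiest to picture with a small example. The following is a minimal sketch of a curated-layer batch write, assuming a Databricks-style Spark environment where the Delta Lake libraries are already available; the storage paths and column names (reading_ts, reading_kwh, reading_date) are illustrative placeholders, not NISC's actual schema.

# Minimal PySpark + Delta Lake sketch (placeholder paths and columns).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("meter-readings-ingest").getOrCreate()

# Read raw CSV files landed in object storage (path is a placeholder).
raw = spark.read.option("header", "true").csv("s3://example-bucket/raw/meter_readings/")

# Light curation: type the reading value, derive a partition date, drop bad rows.
curated = (
    raw.withColumn("reading_kwh", F.col("reading_kwh").cast("double"))
       .withColumn("reading_date", F.to_date("reading_ts"))
       .filter(F.col("reading_kwh").isNotNull())
)

# Append the curated data to a Delta table, partitioned by reading date.
(
    curated.write.format("delta")
    .mode("append")
    .partitionBy("reading_date")
    .save("s3://example-bucket/curated/meter_readings/")
)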

Work Schedule:

  • Hybrid from one of NISC's office locations: Cedar Rapids, IA; Lake Saint Louis, MO; or Mandan, ND
  • Minimum of 3 days per week in the office, with Tuesday and Wednesday required

NISC offers a comprehensive benefits package, including medical, dental, and vision insurance, 401(k) with company match, paid time off, and various professional development opportunities.

Join NISC to be part of a company committed to innovation, teamwork, and empowering individuals to make a difference in the utility and broadband industry.

Responsibilities For Data Engineer (AWS, Databricks)

  • Curate and optimize data and data pipeline architecture
  • Optimize data flow and collection for various application teams
  • Support application experts, software developers, database architects, and data analysts on a Data Roadmap strategy
  • Ensure the data delivery architecture remains consistent and optimal across ongoing projects
  • Support the data needs of multiple teams, systems, and products

Requirements For Data Engineer (AWS, Databricks)

Skills: Java, Scala, Python
  • Experience with AWS technologies, Databricks, and Delta Lake
  • Proficiency in big data tools such as Hadoop, Spark, and Kafka
  • Experience with relational SQL databases such as Oracle and Postgres, and with NoSQL databases such as Cassandra and DynamoDB
  • Familiarity with data pipeline and workflow management tools like Hevo Data and Airflow (see the sketch after this list)
  • Experience with stream-processing systems like Apache Spark and Kafka Streams
  • Proficiency in object-oriented languages such as Java and Scala
  • Bachelor's degree in Computer Science, Statistics, Informatics, Information Systems or similar discipline (preferred)
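
To make the workflow-management item concrete, here is a minimal sketch of a two-task daily pipeline definition, assuming Apache Airflow 2.4 or later; the DAG id, task ids, and the extract/load callables are hypothetical placeholders rather than anything from NISC's actual stack.

# Minimal Apache Airflow DAG sketch (placeholder names throughout).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: pull the previous day's records from a source system.
    print("extracting data for", context["ds"])


def load(**context):
    # Placeholder: write curated records to the warehouse or Delta tables.
    print("loading data for", context["ds"])


with DAG(
    dag_id="daily_meter_readings",   # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # load runs only after extract succeeds
    extract_task >> load_task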

Benefits For Data Engineer (AWS, Databricks)

  • Medical Insurance
  • Dental Insurance
  • Vision Insurance
  • 401(k)
  • Paid Time Off
  • Professional Development
