Kafka Topics Senior Engineer - Remote (Vietnam, Indonesia, Malaysia)

Remote
Full Time
Experienced

We are seeking a Kafka Topics Developer responsible for designing, developing, and managing data streaming pipelines using Apache Kafka, with a primary focus on the creation, configuration, and optimization of Kafka topics to ensure efficient data flow within a distributed system. Responsibilities include defining topic structures, managing partitions, handling data retention policies, and monitoring topic performance to enable reliable real-time data processing across applications.

Responsibilities:
 

Topic Design:
  • Define topic structures based on data types, usage patterns, and scalability requirements.
  • Determine appropriate partition strategies for optimal data distribution and consumer behavior.
  • Set data retention policies and cleanup strategies to manage topic size.
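To give a concrete sense of what these topic-design decisions translate to, the sketch below assembles the kind of settings one would typically pass to `kafka-topics.sh --create` or an AdminClient `createTopics` call. The topic name, partition count, and retention figures are illustrative, not prescriptive:

```python
# Hypothetical sketch: assemble the settings you would hand to
# `kafka-topics.sh --create --config ...` (or AdminClient#createTopics).
# Topic name and sizing numbers below are illustrative only.
def topic_spec(name, partitions, replication_factor, retention_days, compacted=False):
    return {
        "name": name,
        "partitions": partitions,
        "replication.factor": replication_factor,
        "config": {
            # retention.ms bounds how long records are kept before cleanup
            "retention.ms": str(retention_days * 24 * 60 * 60 * 1000),
            # "compact" keeps only the latest record per key; "delete" ages data out
            "cleanup.policy": "compact" if compacted else "delete",
        },
    }

spec = topic_spec("orders.events", partitions=12, replication_factor=3, retention_days=7)
```

A compacted topic (`compacted=True`) would suit changelog-style data keyed by entity ID, while time-based deletion suits event streams.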
Kafka Cluster Management:
  • Create, manage, and monitor Kafka topics across a distributed cluster.
  • Monitor topic performance metrics like message throughput, latency, and consumer offsets.
  • Troubleshoot and resolve issues related to topic creation, data delivery, and consumer access.
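One of the key metrics in this monitoring work is consumer lag. As a minimal sketch of the arithmetic behind tools like `kafka-consumer-groups.sh --describe`: lag is the partition's log-end offset minus the group's committed offset. The offsets below are made up for illustration:

```python
# Minimal sketch of consumer-lag arithmetic: per partition, lag is the
# log-end offset minus the consumer group's committed offset. A partition
# with no committed offset yet shows its full log-end offset as lag.
def consumer_lag(end_offsets, committed_offsets):
    return {
        partition: end_offsets[partition] - committed_offsets.get(partition, 0)
        for partition in end_offsets
    }

end = {0: 1500, 1: 980, 2: 2040}       # illustrative log-end offsets
committed = {0: 1400, 1: 980}          # partition 2 has no commit yet
lag = consumer_lag(end, committed)     # {0: 100, 1: 0, 2: 2040}
```

In production these offsets would come from the AdminClient / consumer APIs rather than literals, and a steadily growing lag on one partition is a common signal of a hot key or a stuck consumer.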
Data Pipeline Integration:
  • Develop Kafka producers and consumers to integrate diverse applications with the streaming platform.
  • Implement data transformation and enrichment logic using Kafka Streams or other stream processing frameworks.
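A Kafka Streams topology typically chains filter, map, and enrich steps (e.g. `.filter(...).mapValues(...)`). As a broker-free stand-in, the pure-Python generator below sketches the same filter-then-enrich shape; record fields and the lookup table are invented for illustration:

```python
# Pure-Python stand-in for a filter -> enrich stream stage, the shape a
# Kafka Streams topology expresses as .filter(...).mapValues(...).
# Record fields and the user lookup table are illustrative.
def enrich_stream(records, user_table):
    for record in records:
        if record.get("amount", 0) <= 0:   # drop malformed / empty events
            continue
        enriched = dict(record)            # copy; never mutate the input record
        enriched["user_name"] = user_table.get(record["user_id"], "unknown")
        yield enriched

users = {42: "alice"}
events = [{"user_id": 42, "amount": 10}, {"user_id": 7, "amount": 0}]
out = list(enrich_stream(events, users))   # zero-amount event is filtered out
```

In a real deployment the enrichment table would itself usually be a compacted topic materialized as a state store, rather than an in-memory dict.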
Performance Optimization:
  • Analyze and optimize topic configurations to ensure high throughput and low latency.
  • Monitor and adjust partition counts based on data volume and consumer patterns.
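A common sizing heuristic for the partition-count decision (a rule of thumb, not an official formula) is to take the larger of the counts needed to reach a target throughput on the producer side and on the consumer side. The throughput figures below are illustrative:

```python
import math

# Rule-of-thumb partition sizing (not an official Kafka formula):
# enough partitions to reach the target throughput on whichever side
# (producer or consumer) is slower per partition.
def suggested_partitions(target_mb_s, producer_mb_s_per_partition, consumer_mb_s_per_partition):
    return max(
        math.ceil(target_mb_s / producer_mb_s_per_partition),
        math.ceil(target_mb_s / consumer_mb_s_per_partition),
    )

# Illustrative numbers: 100 MB/s target, 10 MB/s per producer partition,
# 8 MB/s per consumer partition -> the consumer side dominates.
n = suggested_partitions(100, producer_mb_s_per_partition=10, consumer_mb_s_per_partition=8)
```

Note that partitions can be added to an existing topic but never removed, and adding them changes key-to-partition mapping for keyed data, so it pays to overestimate modestly up front.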
Collaboration:
  • Work closely with development teams to understand data requirements and design robust Kafka topic architectures.
  • Collaborate with system administrators to manage Kafka cluster infrastructure and security.
Qualifications:
  • Thorough understanding of Kafka architecture, producers, consumers, partitions, brokers, and topic management.
  • Expertise in Java or other languages supported by Kafka client libraries.
  • Familiarity with Kafka Streams, Apache Flink, or other stream processing tools.
  • Ability to design scalable and fault-tolerant Kafka data pipelines.
  • Experience with monitoring Kafka cluster health and debugging data flow issues.
Send your CV to [email protected]