Sr. Big Data Java Engineer
Chicago Financial Search - Chicago, IL
Job Description
Sr. Big Data Java Engineer — Chicago — Hybrid (3 Days Onsite, 2 Days Offsite)

About The Role:
As our Senior Big Data Java Engineer, you will be responsible for developing and enhancing our data lake streaming platform on Azure.

What You'll Do:
- Design, develop, and implement Kafka Streams-based Java applications on Azure.
- Build and optimize data pipelines for large-scale Big Data processing using Spark (Java) in Azure.
- Write clean, efficient, and maintainable Java code.
- Architect and deploy scalable, resilient systems to handle high data volumes.
- Conduct code reviews, provide design recommendations, and mentor team members to enhance development processes.
- Work with distributed systems managing massive datasets.
- Troubleshoot and resolve performance issues effectively.
- Collaborate with the Product Owner to break down customer requirements into detailed development tasks.
- Deliver high-quality, production-ready code that meets acceptance criteria and the definition of done.
- Develop and maintain deployment scripts, unit tests, and version-controlled source code.
- Monitor the delivery pipeline to ensure software quality and consistency.
- Oversee testing, deployment, and production activities to maintain system stability, adhering to best practices.
- Engage in pair programming to produce well-structured, supportable code.
- Write comprehensive unit tests using JUnit and Mockito, and behavior-driven development (BDD) tests with Cucumber.
- Participate in backlog refinement and planning sessions to estimate and prioritize upcoming work.

What We're Looking For:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related technical field.
- 7+ years as a Senior Java Developer, with at least 3 years of hands-on experience in Spark, Kafka, and cloud technologies.
- Strong proficiency in big data streaming, particularly Kafka, along with Java and Spark.
- Solid understanding of distributed systems and system design for large-scale Big Data processing (both batch and real-time).
- Experience optimizing and troubleshooting Spark jobs and performance issues.
- Proven ability to handle large-scale data processing, including both batch and real-time workloads.
- Hands-on experience with cloud platforms (AWS or Azure required).
- Proficiency in Spring Boot or similar Java backend frameworks, plus Kafka, Elasticsearch, Kibana, and Kubernetes.
- Strong background in designing RESTful APIs and integrating third-party APIs.
- Familiarity with version control systems, preferably Git.
- Experience working in Agile environments, ideally Scrum.
- Strong understanding of automated testing approaches, including test-driven development (TDD), unit testing, integration testing, and behavior-driven development (BDD).
- Exposure to continuous integration/continuous delivery (CI/CD) tools and best practices.
- Solid understanding of service-oriented architectures and message brokers.
- Strong analytical and problem-solving skills, with the ability to break down complex challenges into manageable solutions.
- Results-driven mindset with a focus on delivering high-quality outcomes efficiently.
- Team-oriented and client-focused, with a collaborative mindset.
- Open to diverse perspectives and adaptable to different viewpoints.
- Self-aware and mindful of work styles, fostering an inclusive team environment.

OOJ-1418G

Skills: Big Data, Java, Kafka, Cloud Services, Spark, Cloud Applications
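For candidates gauging fit, a minimal sketch of the kind of Kafka Streams-based Java application the role centers on might look like the following. This is an illustrative example only, not the employer's actual code: the class name, topic names (`events-raw`, `events-clean`), application id, and broker address are all assumptions made for the sketch.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

import java.util.Properties;

public class EventStreamApp {

    // Builds the processing topology: read raw records, drop null/blank
    // payloads, normalize values, and write to an output topic.
    // Topic names are hypothetical, chosen for this sketch.
    static StreamsBuilder buildTopology() {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> raw = builder.stream("events-raw");
        raw.filter((key, value) -> value != null && !value.isBlank())
           .mapValues(String::trim)
           .to("events-clean");
        return builder;
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "events-normalizer");
        // Placeholder broker address; in an Azure deployment this would be
        // the managed Kafka (e.g., Event Hubs Kafka endpoint) address.
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        KafkaStreams streams = new KafkaStreams(buildTopology().build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```

Separating topology construction (`buildTopology`) from the runtime wiring in `main` is a common pattern because it lets the topology be unit-tested with `TopologyTestDriver` (JUnit/Mockito-style tests, as the posting requires) without a live broker.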
Created: 2026-03-10