IT - Technology Lead | Big Data - Data Processing | ...
SysMind Tech - Tampa, FL
Job Description
POC: Sam Chavez

ATTENTION ALL SUPPLIERS! READ BEFORE SUBMITTING
• An UPDATED CONTACT NUMBER and EMAIL ID is a MANDATORY REQUEST from our client for all submissions.
• Limited to 1 submission per supplier. Please submit your best.
• We prioritize endorsing candidates with complete and accurate information.
• Do not submit duplicate profiles; duplicates will be rejected/disqualified immediately.
• Make sure the candidate's interview schedule is up to date, and ask the candidate to keep their lines open.
• Please submit profiles within the maximum proposed rate.
• Please TAG the profiles correctly if the candidate has WORKED FOR the client as a SUBCON or FTE.

MANDATORY: Include in the resume the candidate's complete and updated contact information (phone number, email address, and Skype ID), as well as a set of 5 interview timeslots over the 72-hour period after submitting the profile when the hiring managers could potentially reach them. PROFILES WITHOUT THE REQUIRED DETAILS AND TIME SLOTS WILL BE REJECTED.

Job Title: Technology Lead | Big Data - Data Processing | Spark -- Big Data Engineer
Work Location & Reporting Address: Tampa, FL 33602 (Onsite-Hybrid. Will consider candidates willing to relocate to the client's location.)
Contract Duration: 12
Max Vendor Rate: market rate per hour max (own W2 candidates only)
Target Start Date: 01 Mar 2026
Does this position require visa-independent candidates only? Yes

Must Have Skills:
• Spark
• Scala
• Python
• AWS

Nice to Have Skills:
• Spark Streaming
• Hadoop
• Hive
• SQL
• Sqoop
• Impala

Detailed Job Description:
• At least 8 years of experience with, and strong knowledge of, the Scala programming language.
• Able to write clean, maintainable, and efficient Scala code following best practices.
• Good knowledge of fundamental data structures and their usage.
• At least 8 years of experience designing and developing large-scale, distributed data processing pipelines using Apache Spark and related technologies.
• Expertise in Spark Core, Spark SQL, and Spark Streaming.
• Experience with Hadoop, HDFS, Hive, and other Big Data technologies.
• Familiarity with data warehousing and ETL concepts and techniques.
• Expertise in database concepts and SQL/NoSQL operations.
• UNIX shell scripting is an added advantage for scheduling and running application jobs.
• At least 8 years of experience in project development life cycle activities and maintenance/support projects.
• Work in an Agile environment, participating in daily scrum standups, sprint planning, reviews, and retrospectives.
• Understand project requirements and translate them into technical solutions that meet the project quality standards.
• Ability to work on a team in a diverse, multi-stakeholder environment and collaborate with upstream/downstream functional teams to identify, troubleshoot, and resolve data issues.
• Strong problem-solving and analytical skills.
• Excellent verbal and written communication skills.
• Experience with, and desire to work in, a global delivery environment.
• Stay up to date with new technologies and industry trends in development.

Minimum Years of Experience:
• 8+ years

Certifications Needed:
• Yes. Karat Assessment

Top 3 responsibilities you would expect the Subcon to shoulder and execute:
• Coding
• Requirement gathering
• Team Lead

Interview Process (Is face to face required?)
• Yes

Any additional information you would like to share about the project specs/nature of work:
Created: 2026-03-10