Principal Data Engineer (Remote)
Advance Auto Parts - Austin, TX
Job Description

WHO WE ARE
Come join our diverse Advance AI Team and start reimagining the future of the automotive aftermarket. We are a highly motivated, tech-focused organization, excited to be in the midst of dynamic innovation and transformational change. Driven by Advance's top-down commitment to empowering our team members, we are focused on delighting our Customers with Care and Speed through the delivery of world-class technology solutions and products. We value and cultivate our culture by seeking to always be collaborative, intellectually curious, fun, open, and diverse. As a Data Engineer within the Advance AI team, you will be a key member of a growing and passionate group focused on collaborating across business and technology resources to drive forward key programs and projects building enterprise data and analytics capabilities across Advance Auto Parts.

This position is remote, and candidates will be considered from any US city.

THE OPPORTUNITY
As the Principal Data Engineer, you will be hands-on with AWS, SQL, and Python daily, working with a team of Software, Data, and DevOps Engineers. This position has access to massive amounts of customer data and is responsible for the end-to-end management of data from various sources. You will work with structured and unstructured data to solve complex problems in the aftermarket auto care industry.

ESSENTIAL DUTIES AND RESPONSIBILITIES include the following; other duties may be assigned:

ENGINEER
- Build processes supporting data transformation, data structures, metadata management, dependency tracking, and workload management.
- Implement cloud services such as AWS EMR, EC2, Snowflake, Elasticsearch, and Jupyter notebooks.
- Develop stream-processing systems: Storm, Spark Streaming, Kafka, etc.
- Deploy data pipeline and workflow management tools: Azkaban, Luigi, Airflow, Jenkins (a brief sketch follows this list).
- Scale and deploy AI products for customer-facing applications impacting millions of end users, using OpenShift on hybrid cloud.
- Develop the architecture for deploying AI products that can scale and meet enterprise security and SLA standards.
- Develop, construct, test, and maintain optimal data architectures.
- Identify, design, and implement internal process improvements such as automating manual processes, optimizing data delivery, and redesigning infrastructure for greater scalability.
- Perform hardware provisioning, forecast hardware usage, and manage to a budget.
- Apply security standards such as symmetric and asymmetric encryption, virtual private clouds, IP management, LDAP authentication, and other methods.
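To give candidates a concrete flavor of the orchestration work above, here is a minimal sketch of a daily Airflow DAG (assuming Airflow 2.4+). The DAG name, task names, and schedule are illustrative placeholders, not Advance Auto Parts systems:

```python
# Hypothetical daily pipeline: extract source records, then load them to a
# warehouse. Both tasks are stubbed; real logic would live in the callables.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders(**context):
    # "ds" is the logical date Airflow passes to each task run.
    print(f"extracting orders for {context['ds']}")


def load_to_warehouse(**context):
    print(f"loading orders for {context['ds']}")


with DAG(
    dag_id="daily_orders_pipeline",  # placeholder name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)
    # Declare the dependency: load runs only after extract succeeds.
    extract >> load
```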
LEADERSHIP
- Share outcomes through written communication, including the ability to communicate effectively with both business and technical teams.
- Succinctly communicate timelines, updates, and changes to existing and new projects and deliverables in a timely fashion.
- Work with senior leadership within the organization to develop a long-term technical strategy.
- Map business goals to appropriate technology investments.
- Mentor and manage other Data Engineers, ensuring data engineering best practices are followed.
- Challenge the technical status quo and be comfortable receiving feedback on your ideas and work.
- Be a hands-on practitioner and lead by example.

QUALIFICATIONS
- Be a self-starter who is comfortable with ambiguity and enjoys working in a fast-paced, dynamic environment.
- Advanced working SQL knowledge, experience with relational databases and query authoring, and working familiarity with a variety of databases.
- Strong analytic skills related to working with unstructured datasets.
- Experience building and optimizing 'big data' pipelines, architectures, and data sets.
- Experience with cloud services such as AWS EMR, EC2, and Jupyter notebooks.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
- Solid understanding of message queuing, stream processing, and highly scalable 'big data' stores.
- Strong project management and organizational skills; knowledge of or experience with Agile methodologies is a plus.
- Experience supporting and working with cross-functional teams in a dynamic environment.

Experience using the following software/tools:
- Strong experience with AWS cloud services: EC2, EMR, Snowflake, Elasticsearch, OpenShift.
- Experience with stream-processing systems: Storm, Spark Streaming, Kafka, etc. (see the sketch at the end of this posting).
- Experience with object-oriented/object-functional scripting languages: Python, Java, C++, Scala, etc.
- Experience with big data tools: Hadoop, Spark, Kafka, Kubernetes, etc.
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.

EDUCATION AND EXPERIENCE REQUIREMENTS
- Master's in Computer Science or a related field required.
- 8-10+ years of direct experience developing enterprise-grade applications.
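As a hedged illustration of the stream-processing experience listed above, the following is a minimal PySpark Structured Streaming sketch that parses JSON events from a Kafka topic. The broker address, topic name, and event schema are assumptions for the example, and running it requires the spark-sql-kafka connector package:

```python
# Hypothetical streaming job: read Kafka events, parse JSON, print to console.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("orders_stream_sketch").getOrCreate()

# Assumed event schema for the example.
schema = StructType([
    StructField("order_id", StringType()),
    StructField("sku", StringType()),
    StructField("event_time", TimestampType()),
])

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "orders")                     # placeholder topic
    .load()
    # Kafka delivers raw bytes; decode the value and parse the JSON payload.
    .select(from_json(col("value").cast("string"), schema).alias("evt"))
    .select("evt.*")
)

# Console sink for demonstration only; a production job would write to
# durable storage (e.g., S3 or Snowflake) with checkpointing enabled.
query = events.writeStream.format("console").outputMode("append").start()
query.awaitTermination()
```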
Created: 2025-09-06