Snowflake Architect
ClifyX - Dallas, TX
Job Description

Must-Have Technical/Functional Skills
• Strong hands-on experience with Snowflake architecture and performance tuning
• Expertise in DBT (models, testing, macros, documentation, environments)
• Solid experience with ETL/ELT frameworks and data integration patterns
• Proficiency in Python for data engineering and automation
• Experience with Snowpark implementation
• Strong knowledge of cloud data services (AWS, Azure, or GCP)
• Advanced SQL and data modeling skills

Roles & Responsibilities

We are seeking an experienced Snowflake Data Architect to design, build, and optimize scalable cloud-based data platforms. The ideal candidate will have deep expertise in Snowflake, DBT, Snowpark, ETL/ELT pipelines, Python, and cloud data services (AWS, Azure, or GCP). This role will lead architecture decisions, ensure best practices, and enable analytics and data science teams with high-quality, reliable data solutions.

Key Responsibilities:

Architecture & Design
• Design and implement end-to-end Snowflake-based data architectures for analytics, reporting, and advanced data use cases
• Define data modeling strategies (dimensional, data vault, and analytical models) optimized for Snowflake
• Establish standards for data ingestion, transformation, storage, and consumption

Snowflake Platform Management
• Architect and manage Snowflake features including warehouses, databases, schemas, cloning, Time Travel, Secure Data Sharing, Data Clean Rooms, and resource monitoring
• Optimize performance and cost using warehouse sizing, clustering, caching, and query optimization
• Implement security best practices including RBAC, masking policies, row access policies, and data governance

Data Transformation & ETL/ELT
• Lead ELT pipeline development using DBT (models, macros, tests, documentation, and deployments)
• Design and implement ETL/ELT pipelines using cloud-native Snowpark and third-party tools
• Implement real-time streaming and batch data processing
• Ensure data quality, lineage, and observability across pipelines

Cloud & Big Data Integration
• Architect solutions leveraging cloud data services (AWS, Azure, or GCP) such as object storage, messaging, and orchestration services
• Integrate Apache Spark (Databricks or equivalent) for large-scale data processing and advanced transformations
• Support hybrid and multi-cloud data architectures

Development & Automation
• Develop data processing and automation solutions using Python
• Build reusable frameworks for ingestion, transformation, validation, and monitoring
• Implement CI/CD pipelines for data workloads and DBT/Snowpark deployments
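To illustrate the kind of reusable Python framework the Development & Automation responsibilities describe, here is a minimal, library-free sketch of a batch validation step that could gate data before it is loaded into Snowflake. All names here (`Check`, `run_checks`, the sample fields) are hypothetical and for illustration only, not from any specific tool:

```python
from dataclasses import dataclass
from typing import Callable, Iterable

Row = dict  # one record from an ingestion batch

@dataclass
class Check:
    """A named data-quality rule; predicate returns True when the row passes."""
    name: str
    predicate: Callable[[Row], bool]

def run_checks(rows: Iterable[Row], checks: list[Check]) -> dict[str, int]:
    """Return a failure count per check, suitable for monitoring or alerting."""
    failures = {c.name: 0 for c in checks}
    for row in rows:
        for check in checks:
            if not check.predicate(row):
                failures[check.name] += 1
    return failures

# Example: validate a small batch before loading.
batch = [
    {"id": 1, "amount": 120.0},
    {"id": 2, "amount": -5.0},    # fails the non-negative check
    {"id": None, "amount": 10.0}, # fails the not-null check
]
checks = [
    Check("id_not_null", lambda r: r["id"] is not None),
    Check("amount_non_negative", lambda r: r["amount"] >= 0),
]
print(run_checks(batch, checks))  # {'id_not_null': 1, 'amount_non_negative': 1}
```

In practice this shape extends naturally: the failure counts feed a monitoring dashboard, and a nonzero count can fail the CI/CD stage that would otherwise promote the batch.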
Created: 2026-03-10