Senior Data Engineer
CoPoint AI
City: Dallas, TX
Contract type: Full time

Location: Hybrid, US (Dallas, TX-based preferred)
About CoPointAI
AI isn’t coming — it’s here. And we help enterprises make it real. At CoPointAI, we work inside the enterprise — not just around it — to turn AI potential into practical wins. From hands-on C-suite workshops (our AI Foundations series) to AI-native MVPs built in weeks, our team partners deeply with yours to find, build, and scale what matters.
We’re looking for a Senior Data Engineer who excels at designing and building scalable, high-performance data systems. In this role, you’ll work internally and with clients to design, implement, and optimize data pipelines, dimensional models, and analytics frameworks that form the foundation of our clients' AI initiatives.
What You’ll Do
- Design and implement robust, scalable data pipelines using modern ELT/ETL patterns.
- Develop and maintain dimensional data models (star, snowflake, or Data Vault schemas), ensuring consistency, integrity, and performance across analytical datasets — including understanding and application of dimension types (Type 1, Type 2, etc.).
- Partner with clients to design high-performance datasets that serve dashboards, reports, and ML applications.
- Optimize data storage, partitioning, and retrieval strategies across cloud data platforms (e.g., Azure Synapse, Snowflake, Databricks).
- Build, automate, and orchestrate data pipelines using Airflow, Azure Data Factory, or dbt, including developing custom transformations and workflow scripts in Python or PySpark.
- Collaborate in defining and enforcing best practices for data quality, lineage, and governance.
- Support continuous improvement of data architecture through monitoring, observability, and performance tuning.
What You'll Bring
- 5+ years of experience in data engineering or related roles.
- Strong command of SQL, Python, and data transformation frameworks (dbt, Spark, etc.).
- Experience with Azure data services (Synapse, Data Lake, Data Factory, Databricks) or equivalent cloud ecosystems.
- Proven understanding of data modeling fundamentals, including:
  - Dimensional modeling and schema design
  - Fact and dimension tables
  - Slowly changing dimensions (Types 1–3)
  - Data normalization and denormalization strategies
- Solid grasp of database fundamentals (query optimization, indexing, partitioning).
- Knowledge of data governance, metadata management, and data quality frameworks.
- Familiarity with CI/CD for data pipelines and modern version control workflows (Git).
- Strong analytical mindset, attention to detail, and a passion for building reliable, well-structured data systems.
Nice to Have
- Microsoft Certified: Azure Data Engineer Associate (DP-203) or Microsoft Certified: Fabric Data Engineer Associate (DP-700).
- Experience integrating analytical models with AI/ML platforms.
- Background in performance tuning and data warehouse optimization.
- Prior experience in a fast-paced startup or SaaS environment.
What We Offer
- Opportunity to shape data architecture at a cutting-edge AI company.
- Collaborative, remote-first culture built on innovation and learning.
- Competitive compensation, equity, and benefits.
How to apply
To apply for this job, you need to sign in on our website. If you don't have an account yet, please register.