Senior Data Modeler - ITAJS


IT - DC - Washington, DC
Posted On: August 12, 2025
Last Day to Apply: August 26, 2025
Pay: $70.00 to $90.00 per hour

Job Title: Senior Data Modeler
Location: Washington, DC (Onsite/Hybrid options may apply)
Type: Contract

Job Description:

We are seeking a highly skilled Senior Data Modeler with a strong background in the financial industry and deep expertise in Databricks on AWS. The ideal candidate will play a key role in defining and optimizing data models that support both new and existing business domains. This role demands a strategic thinker who can collaborate across teams and deliver robust, scalable, and high-performance data solutions.

Key Responsibilities:

  • Design and implement conceptual, logical, and physical data models in Databricks to support enterprise-level business domains

  • Collaborate with product owners, system architects, data engineers, and vendors to develop data models that are optimized for performance, compute, and storage

  • Define and enforce best practices for Bronze/Silver/Gold lakehouse architecture layers

  • Generate comprehensive documentation including data models, dictionaries, definitions, and metadata artifacts

  • Drive governance standards for data integrity, security, and compliance

Required Skills and Qualifications:

  • 10+ years of experience in AI, Data Science, or Software Engineering, with deep expertise in enterprise data ecosystems

  • Bachelor’s degree in Computer Science, Information Systems, or a related field (or equivalent professional experience)

  • Proven experience in data modeling: designing optimized storage, retrieval, and analytics models in Databricks on AWS

  • Hands-on expertise in Databricks SQL, Runtime, clusters, notebooks, and integrations

  • Proficiency in building ELT pipelines using Databricks tools and Apache Spark

  • Strong background in data integration from various sources such as relational databases, APIs, and flat files

  • Skilled in performance optimization: partitioning, caching, clustering, and Spark tuning techniques

  • Solid understanding of data governance, including security and compliance frameworks

  • Proficient in advanced SQL and Spark (Scala or Python) for data transformation and analysis

  • Familiarity with cloud architecture, particularly AWS services

  • Basic experience with data visualization tools (e.g., Tableau) for reporting and analytics

  • Knowledge of government cloud compliance standards such as FedRAMP and FISMA is a strong plus