Need submission details for Data Architect (Microsoft Fabric & Azure Databricks), Atlanta, GA, Hybrid

  • Atlanta, Georgia, United States
  • Full-time
  • Salary: Not Available
  • Posted on:
  • Expires on:

JOB TITLE:

Need submission details for Data Architect (Microsoft Fabric & Azure Databricks), Atlanta, GA, Hybrid

JOB Type:

Contractual

JOB SKILLS:

Not Provided

JOB Location:

Atlanta, Georgia, United States

JOB DESCRIPTION

Dear Partner,

Good morning,

Greetings from Nukasani Group Inc! We have the below urgent, long-term contract project immediately available for **Data Architect (Microsoft Fabric & Azure Databricks), Atlanta, GA, Hybrid** and need submissions. Please review the role below and, if you are available, send me your updated Word resume along with the candidate submission details below as soon as possible. If you are not available, any referrals would be greatly appreciated. Interviews are in progress, so an urgent response is appreciated. I look forward to your response and to working with you.

**Candidate Submission Format - needed from you**
Full Legal Name
Personal Cell No (not a Google Voice number)
Email ID
Skype ID
Interview Availability
Availability to start, if selected
Current Location
Open to Relocate
Work Authorization
Total Relevant Experience
Education / Year of Graduation
University Name, Location
Last 4 digits of SSN
Country of Birth
Contractor Type
DOB: mm/dd
Home Zip Code

Assigned Job Details

**Job Title : Data Architect (Microsoft Fabric & Azure Databricks)**
**Location: Atlanta, GA, Hybrid**

**Rate : Best competitive rate**

Supervises the coordination of design and security for computer databases that store, track, and maintain large volumes of critical business information.

Job Description: Data Architect - Microsoft Fabric & Azure Databricks

The Department of Early Care & Learning (DECAL) is seeking an experienced Data Architect to design and implement enterprise data solutions using Microsoft Fabric and Azure Databricks for integration with state-level systems. This role will focus on creating scalable data architecture that enables seamless data flow between IES Gateway and our analytics platform. The ideal candidate will have deep expertise in modern data architecture, with specific experience in Microsoft's data platform and Delta Lake architecture.

**Work Location & Attendance Requirements:**

• Must be physically located in Georgia

• On-site: Tuesday to Thursday, per manager's discretion

• Mandatory in-person meetings:

o All Hands

o Enterprise Applications

o On-site meetings

o DECAL All Staff

• Work arrangements subject to management's decision

While the intent may be a long-term tenure, this position is subject to annual budget restrictions. The initial contract runs through the end of the current fiscal year and is anticipated to be renewed on July 1.

**Key Responsibilities:**

++Data Architecture:++

· Design end-to-end data architecture leveraging Microsoft Fabric's capabilities.

· Design data flows within the Microsoft Fabric environment.

· Implement OneLake storage strategies.

· Configure Synapse Analytics workspaces.

· Establish Power BI integration patterns.

++Integration Design:++

· Architect data integration patterns between IES Gateway and the analytics platform using Azure Databricks and Microsoft Fabric.

· Design Delta Lake architecture for IES Gateway data.

· Implement medallion architecture (Bronze/Silver/Gold layers).

· Create real-time data ingestion patterns.

· Establish data quality frameworks.
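The medallion (Bronze/Silver/Gold) pattern named above is normally built with Delta Lake tables in Azure Databricks. Purely to illustrate the layering idea, here is a minimal pure-Python sketch; the record fields and cleaning rules are hypothetical, not part of the actual DECAL/IES Gateway schema:

```python
# Sketch of the Bronze/Silver/Gold (medallion) layering idea.
# Real pipelines use Delta Lake tables in Databricks; the fields and
# cleaning rules below are invented for illustration only.

# Bronze: raw ingested records, kept as-is (including bad rows).
bronze = [
    {"grant_id": "G-001", "amount": "1500.00", "county": "Fulton"},
    {"grant_id": "G-002", "amount": "not-a-number", "county": "Fulton"},
    {"grant_id": "G-003", "amount": "900.50", "county": "DeKalb"},
]

def to_silver(rows):
    """Silver: validated, typed records; drop rows that fail parsing."""
    clean = []
    for row in rows:
        try:
            clean.append({**row, "amount": float(row["amount"])})
        except ValueError:
            continue  # a real pipeline would quarantine/log the bad row
    return clean

def to_gold(rows):
    """Gold: business-level aggregate (total amount per county)."""
    totals = {}
    for row in rows:
        totals[row["county"]] = totals.get(row["county"], 0.0) + row["amount"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'Fulton': 1500.0, 'DeKalb': 900.5}
```

Each layer refines the previous one: Bronze preserves raw fidelity, Silver enforces types and quality, Gold serves analytics.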

++Lakehouse Architecture:++

· Implement modern data lakehouse architecture using Delta Lake, ensuring data reliability and performance.

++Data Governance:++

· Establish data governance frameworks incorporating Microsoft Purview for data quality, lineage, and compliance.

· Implement row-level security.

· Configure Microsoft Purview policies.

· Establish data masking for sensitive information.

· Design audit logging mechanisms.
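Masking of sensitive fields (such as SSNs, which the submission form above collects only the last 4 digits of) is typically enforced by the platform itself, e.g. dynamic data masking and Microsoft Purview policies rather than application code. As a hypothetical illustration of the masking rule only:

```python
def mask_ssn(ssn: str) -> str:
    """Mask all but the last 4 digits of an SSN for display or logging.
    Illustrative only: in production, masking would be enforced by the
    data platform (dynamic data masking / Purview policies)."""
    digits = [c for c in ssn if c.isdigit()]
    if len(digits) != 9:
        raise ValueError("expected a 9-digit SSN")
    return "***-**-" + "".join(digits[-4:])

print(mask_ssn("123-45-6789"))  # ***-**-6789
```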

++Pipeline Development:++

· Design scalable data pipelines using Azure Databricks for ETL/ELT processes and real-time data integration.

++Performance Optimization++:

· Implement performance tuning strategies for large-scale data processing and analytics workloads.

· Optimize Spark configurations.

· Implement partitioning strategies.

· Design caching mechanisms.

· Establish monitoring frameworks.

++Security Framework:++

Design and implement security patterns aligned with federal and state requirements for sensitive data handling.

**Required Qualifications:**

Education:

Bachelor’s degree in Computer Science or a related field.

Experience:

• 6+ years of experience in data architecture and engineering.

• 2+ years hands-on experience with Azure Databricks and Spark.

• Recent experience with Microsoft Fabric platform.

**Technical Skills:**

Microsoft Fabric Expertise:

• Data Integration: Combining and cleansing data from various sources.

• Data Pipeline Management: Creating, orchestrating, and troubleshooting data pipelines.

• Analytics Reporting: Building and delivering detailed reports and dashboards to derive meaningful insights from large datasets.

• Data Visualization Techniques: Representing data graphically in impactful and informative ways.

• Optimization and Security: Optimizing queries, improving performance, and securing data.

Azure Databricks Experience:

• Apache Spark Proficiency: Utilizing Spark for large-scale data processing and analytics.

• Data Engineering: Building and managing data pipelines, including ETL (Extract, Transform, Load) processes.

• Delta Lake: Implementing Delta Lake for data versioning, ACID transactions, and schema enforcement.

• Data Analysis and Visualization: Using Databricks notebooks for exploratory data analysis (EDA) and creating visualizations.

• Cluster Management: Configuring and managing Databricks clusters for optimized performance (e.g., autoscaling and automatic termination).

• Integration with Azure Services: Integrating Databricks with other Azure services like Azure Data Lake, Azure SQL Database, and Azure Synapse Analytics.

• Machine Learning: Developing and deploying machine learning models using Databricks MLflow and other tools.

• Data Governance: Implementing data governance practices using Unity Catalog and Microsoft Purview.

Programming & Query Languages:

SQL: Proficiency in SQL for querying and managing databases, including skills in SELECT statements, JOINs, subqueries, and window functions.

Python: Using Python for data manipulation, analysis, and scripting, including libraries like Pandas, NumPy, and PySpark.
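As a small illustration of the window-function skill named above, the following runs a `ROW_NUMBER()` query against an in-memory SQLite database (SQLite 3.25+ supports window functions, and the syntax carries over to T-SQL and Spark SQL). The table and data are hypothetical:

```python
import sqlite3

# Rank hypothetical grant payments within each county by amount.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE payments (county TEXT, amount REAL);
    INSERT INTO payments VALUES
        ('Fulton', 1500.0), ('Fulton', 700.0), ('DeKalb', 900.5);
""")
rows = conn.execute("""
    SELECT county, amount,
           ROW_NUMBER() OVER (PARTITION BY county ORDER BY amount DESC) AS rnk
    FROM payments
    ORDER BY county, rnk
""").fetchall()
for county, amount, rnk in rows:
    print(county, amount, rnk)
```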

Data Modeling:

• Dimensional modeling

• Real-time data modeling patterns

Soft Skills:

• Strong analytical and problem-solving abilities

• Excellent communication skills for technical and non-technical audiences

• Experience working with government stakeholders

Preferred Experience:

• Azure DevOps

• Infrastructure as Code (Terraform)

• CI/CD for data pipelines

• Data mesh architecture

Certifications (preferred):

• Microsoft Azure Data Engineer Associate

• Databricks Data Engineer Professional

• Microsoft Fabric certifications (as they become available)

Project-Specific Requirements:

• Experience designing data architectures for grant management systems

• Knowledge of federal/state compliance requirements for data handling

• Understanding of financial data processing requirements

• Experience with real-time integration patterns

This position requires strong expertise in modern data architecture with specific focus on Microsoft's data platform. The successful candidate will play a crucial role in designing and implementing scalable data solutions that enable efficient data processing and analytics for state-level grant management and reporting systems.

| Skill | Required / Desired | Amount of Experience |
| --- | --- | --- |
| 6+ years of experience in data architecture and engineering | Required | 6 Years |
| 2+ years hands-on experience with Azure Databricks and Spark | Required | 2 Years |
| Recent experience with Microsoft Fabric platform | Required | 2 Years |
| Azure Databricks experience | Required | 2 Years |
| Proficiency in SQL for querying and managing databases, including SELECT statements, JOINs, subqueries, and window functions | Required | 3 Years |
| Using Python for data manipulation, analysis, and scripting, including libraries like Pandas, NumPy, and PySpark | Required | 3 Years |

**Thanks & regards,**

**Bhavani |Technical recruitment| Nukasani Group |**

**1001 E Chicago Ave, Unit B 111, Naperville IL 60540.**

**Email: bhavani@nukasanigroupusa.com**

**People, Process, Technology Integrator.**

**An E-Verified Company**

Position Details

Posted:
Employment: CTC
Industry: -
Salary: Not Disclosed
Reference Number: OOJ - 12032
City: Atlanta
Job Origin: OWN