Responsibilities:
Data Pipeline Development: Build, maintain, and optimize robust data pipelines to ensure efficient data flow and accurate transformation across various systems. Implement and manage transformation rules to support data integrity and business requirements.
Technical Implementation: Develop and deploy data solutions using Python and Java, leveraging Databricks and Databricks Notebooks to handle large-scale data processing and analytics tasks. Ensure code quality and scalability in all development activities.
Project Management: Deliver projects within established timelines, managing multiple assignments simultaneously. Prioritize tasks effectively to meet deadlines without compromising quality.
Collaboration and Communication: Work closely with data scientists, analysts, and other engineering teams to support data-driven initiatives. Communicate project progress, challenges, and solutions clearly with clients and team members to ensure alignment and transparency.
Problem-Solving and Innovation: Analyze project requirements to develop innovative solutions independently. Address technical challenges proactively, ensuring seamless project execution with minimal oversight.
Cloud Platform Utilization: Design, implement, and manage data infrastructure on cloud platforms such as AWS or Azure. Leverage cloud services to enhance data processing capabilities and support scalable solutions.
Requirements: