Python Developer

BLDG SVC 32 B-J, New York, NY

About The Position

Under the supervision of the Team Lead, Data Integration, the Python Developer is responsible for building secure, scalable data pipelines and integrating data from multiple sources. The role involves close collaboration with business analysts, other developers, data analysts, and data scientists. The ideal candidate will have hands-on experience working with APIs in Python, managing Python environments, and implementing data security practices such as secure configuration and encryption. The work centers on Databricks and modern data platforms, following solid SDLC, documentation, and DevOps practices.

Requirements

  • 2+ years of experience in Python-based Data Engineering
  • Experience working with RESTful APIs in Python
  • Experience configuring and managing Python environments (venv, conda, pip)
  • Hands-on experience with Azure Key Vault for secrets management
  • Knowledge of data encryption and data security fundamentals
  • Strong SQL skills, including SQL Server (T-SQL) and PostgreSQL
  • Experience building ETL/ELT pipelines using Databricks (PySpark)
  • Understanding of SDLC, version control (Git), and CI/CD processes
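The REST-ingestion requirement above can be sketched as follows. This is a minimal, illustrative example: the payload shape, field names, and flatten logic are assumptions, and a hard-coded JSON document stands in for a live response (in practice the payload would come from something like `requests.get(url).json()`) so the sketch stays self-contained.

```python
import json

# Illustrative payload standing in for a REST API response body;
# the record shape and field names are assumptions, not a real API.
RAW = json.loads("""
{
  "orders": [
    {"id": 1, "customer": {"name": "Acme"}, "total": "19.99"},
    {"id": 2, "customer": {"name": "Globex"}, "total": "5.00"}
  ]
}
""")

def flatten_orders(payload):
    """Flatten nested API records into flat rows ready for a load step."""
    rows = []
    for order in payload["orders"]:
        rows.append({
            "order_id": order["id"],
            "customer_name": order["customer"]["name"],
            "total": float(order["total"]),  # normalize string amounts
        })
    return rows

rows = flatten_orders(RAW)
```

In a real pipeline the flatten step would typically also handle missing keys and pagination, but the pattern of normalizing nested API records into tabular rows is the core of API-based ingestion.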

Nice To Haves

  • Experience with SSIS
  • Experience with Azure Data Factory
  • Familiarity with Dremio
  • Exposure to Azure cloud services and AWS
  • Knowledge of data modeling techniques
  • Experience working in Agile/Scrum environments

Responsibilities

  • Design, develop, and maintain robust Python applications and scalable data pipelines
  • Write clean, scalable, and efficient code following best practices
  • Develop and consume REST APIs in Python for data ingestion and integration
  • Configure and manage Python environments (virtual environments, dependency management)
  • Optimize applications for maximum speed and scalability
  • Implement data encryption and security best practices for configuration as well as data in transit and at rest
  • Build and optimize data workflows using Databricks (PySpark)
  • Write, optimize, and maintain SQL queries for SQL Server (T-SQL) and PostgreSQL
  • Perform ETL/ELT data processing and transformations
  • Support data integration using SSIS and Azure Data Factory
  • Develop well-documented Jupyter/Databricks notebooks and maintain clear technical and process documentation for data pipelines and workflows
  • Follow SDLC best practices throughout development and deployment
  • Use Git / Azure DevOps Git for source code control and CI/CD collaboration
  • Participate in code reviews and troubleshoot data quality or performance issues
  • Perform tasks as required by management/supervisory staff
  • Provide after-hours and weekend support as needed
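Several of the responsibilities above (ETL cleansing, parameterized SQL, and keeping secrets out of source code) can be combined into one small sketch. This is a hedged illustration, not the team's actual pipeline: `sqlite3` stands in for SQL Server/PostgreSQL so the example runs anywhere, the table and column names are assumptions, and an environment variable stands in for an Azure Key Vault lookup.

```python
import os
import sqlite3

# sqlite3 stands in for SQL Server/PostgreSQL; schema is illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staging (id INTEGER, amount TEXT)")
conn.executemany("INSERT INTO staging VALUES (?, ?)",   # parameterized insert
                 [(1, "10.50"), (2, "abc"), (3, "4.25")])

# Transform: cast amounts, route bad records to a rejects list
# (a common ETL cleansing step; this validation rule is an assumption).
clean, rejects = [], []
for row_id, amount in conn.execute("SELECT id, amount FROM staging"):
    try:
        clean.append((row_id, float(amount)))
    except ValueError:
        rejects.append(row_id)

conn.execute("CREATE TABLE fact_amounts (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO fact_amounts VALUES (?, ?)", clean)
conn.commit()

# Secrets should come from a vault or the environment, never source code;
# this env-var name is hypothetical, standing in for a Key Vault call.
db_password = os.environ.get("PIPELINE_DB_PASSWORD", "<unset>")

total = conn.execute("SELECT SUM(amount) FROM fact_amounts").fetchone()[0]
```

The same cleanse-then-load shape carries over to PySpark on Databricks, where the per-row loop becomes a DataFrame transformation and the secret lookup becomes a scoped secrets call.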