Data Engineer - Newark, NJ

SOC Support Services, Newark, NJ

About The Position

Data Engineer needed for a contract opportunity with SOC's client in Newark, NJ. The ideal candidate is local to the area and able to go into the office periodically as needed; if not local, they must be willing to travel as needed. Contract length: 6 months, with possible extension. The Data Engineer will build and maintain the infrastructure for the organization's data by designing, constructing, and optimizing data pipelines and architecture.

Requirements

  • Bachelor's degree in a related field with 8+ years of experience as a data engineer. Additional years of experience may be considered in lieu of a degree.
  • Proficiency in programming languages like Python or Java, strong SQL skills, and knowledge of big data tools like Apache Hadoop, Spark, or Kafka.
  • Experience with cloud platforms (AWS, Azure, GCP) and data warehousing solutions (Snowflake, Redshift, BigQuery)
  • Self-driven, with a demonstrated ability to work independently with minimal guidance
  • Demonstrated multitasking ability, problem-solving skills, and a consistent record of on-time delivery and customer service
  • Excellent organizational and communication skills

Nice To Haves

  • Utilities experience
  • AWS Certified Data Engineer - Associate
  • AWS Certified Developer - Associate

Responsibilities

  • Ingest data from SAP, Salesforce, Google Analytics, OKTA, and other sources into AWS S3 Raw layer
  • Curate and transform data into standardized datasets in the Curated layer
  • Define and implement data mapping and transformation logic/code and load data into Redshift tables and views for optimal performance
  • Deploy and promote data pipeline code from lower environments (Dev/Test) to Production following PSEG governance and change control processes
  • Develop and maintain ELT/ETL pipelines using AWS Glue, Step Functions, Lambda, DMS, and AppFlow
  • Automate transformations and model refresh using Python, PySpark and SQL
  • Implement end-to-end source-to-target data integration, mapping, and transformation across data source → Raw → Curated → Redshift
  • Use Redshift SQL for transformations, optimization, and model refresh
  • Integrate with on-prem and SaaS data sources, such as SAP (via Simplement), Salesforce, OKTA, MuleSoft and JAMS
  • Implement CI/CD deployment using GitHub
  • Use CloudWatch, CloudTrail, and Secrets Manager for monitoring and security
  • Manage metadata and lineage via AWS Glue Data Catalog


What This Job Offers

Industry

Administrative and Support Services

Education Level

Bachelor's degree

Number of Employees

1,001-5,000 employees
