Responsible for developing and managing end-to-end Data Warehouse and Data Integration solutions, taking into consideration the business cases, objectives, and requirements.
• Responsible for data integration, data architecture, information delivery, infrastructure, and testing.
• Ensure proper performance tuning for all project components.
• 3+ years of ETL development experience (on-premises-to-cloud migration experience highly preferred)
• 3+ years of experience developing, deploying, and debugging in the AWS cloud environment (S3, EMR, AWS Glue, Databricks, Amazon Redshift, data lakes, Parquet, etc.) and applying AWS architecture best practices. AWS certification and Teradata experience are a plus.
• 2+ years of experience with big data platforms in the Hadoop ecosystem, such as Spark, Hive, Presto, Sqoop, and Flume.
• Ability to output results in several formats (JSON, data feeds, reports, etc.).
• Ability to perform data manipulation and to load and extract data from several sources into another schema.
• Proficiency in SQL, an understanding of various databases (Aurora, DynamoDB, Oracle, Postgres, etc.), and the ability to design schemas to meet requirements.
Graduate of Computer Science or Information Technology
At least 3-5 years of related experience