Job Information
Dell Consultant, Data Engineer in Georgia
Responsibilities
Analyzes business needs and creates software and hardware solution blueprints.
Works closely with Data Product Managers and Solution Architects to define use cases and measurable business metrics.
Works with engineering teams to assess solution feasibility, builds stories, and architects solutions for projects. Drives use cases through the complete lifecycle.
Prepares flow charts, system diagrams, and design documentation to assist in problem analysis.
Designs, codes, tests, and debugs software according to Dell's standards, policies, and procedures.
Mentors junior team members on technical and functional skills. Serves as the technical lead for the team and a strong team player. Functional knowledge of business processes is required.
Skills
Possesses and applies a broad knowledge of application programming processes and procedures to the completion of complex assignments.
Competent to analyze diverse and complex problems.
Leads large budget projects.
Advanced ability to effectively troubleshoot program errors.
Builds high-reliability, high-quality, high-volume data pipelines
Sets up batch, micro-batch, and streaming pipelines
Handles data ingestion, transformation, and processing, both batch and near real time
Builds automated tests and tie-outs, and self-healing data jobs
Builds products that require little to no support after rollout
Ability to communicate complex insights in a precise and actionable manner
Mindset to think differently; alignment with industry standards; awareness of emerging technologies and industry trends
Demonstrated experience in creating and presenting technical white papers
Requirements
10-12 years of relevant IT experience in data-warehousing technologies, with excellent communication and analytical skills
Understanding of Big Data technologies
Should possess experience with the following skill set:
Comfortable with ETL concepts
Experience working with Teradata, Oracle, SQL Server, or Greenplum as data-warehouse databases
Very strong in SQL (DDL, DML, procedural)
Knowledge of Unix and shell scripting
Hands-on experience with technologies such as Spark, Hadoop HDFS, Hive, and HBase
Hands-on experience with change-capture and ingestion tools such as StreamSets and Informatica
Knowledge of Kafka and Oracle GoldenGate
Excellent knowledge of scheduling tools such as Control-M
Strong experience with source-code repositories such as Git and SVN, and with Jenkins
Working knowledge of near-real-time (NRT) processing and the associated tech stack (Spark, MemSQL)
Ability to discuss and defend architectural decisions with a team of strong technical architects
Understands the development and test process end to end
Understands Data Architecture, data profiling, data quality
Adheres to standards, audits, and tie-outs
Understands DevOps, CI/CD, DataOps
Excellent analytical and problem-solving skills are a must
Strong communication and presentation skills
Must have experience across diverse industries, tools, and data-warehousing technologies
Good to have:
Comfortable with the pub/sub model and able to handle transaction logs
API, Microservices
Scala, Python, Java, R, MADlib
Knowledge of data security: data classification, encryption capabilities, and related technologies
Working knowledge of data science
Data-quality mindset and willingness to explore the capabilities of tools available in the market
Preferences
Bachelor of Engineering or Master of Computer Applications
Experience working in an Agile (Scrum) methodology