Do you have a start-up mentality to your work?
Do you excel at communicating with internal and external stakeholders?
About Our Client
With high cultural standards and a collaborative, team-based structure, this business understands what tech superstars need to be successful.
Gaining constant exposure to the latest tech and working in one of the most advanced AWS shops around, you will get the opportunity to progress your career while also building your tech stack into a market-leading, cutting-edge force.
Day-to-day, this role will see you:
- Lead change with customers
- Focus on delivery and outcomes
- Deliver sprint tasks within the sprint
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Continually strive toward optimal extraction, transformation, loading and management of data from a wide variety of data sources using SQL and AWS 'big data' technologies
- Authentically tell the customer when something is an anti-pattern or will fail
- Design data patterns for re-use and re-implementation (a factory-based approach)
- Workshop and agree on technical approaches with team members and customers
- Work in an enterprise-type environment and demonstrate adherence to the expected controls and processes
- Understand traditional and emerging data pipeline delivery, as well as CI/CD processes
The Successful Applicant
- From a development or operations background (or an existing DataOps team)
- Advanced SQL knowledge and experience with relational databases, including query authoring, as well as working familiarity with a variety of database platforms.
- Experience building and optimizing 'big data' data pipelines, architectures and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets.
- Experience building processes supporting data transformation, data structures, metadata, dependency and workload management.
- A successful history of manipulating, processing and extracting value from large disconnected datasets.
- Working knowledge of message queuing, stream processing, and highly scalable 'big data' data stores
- Experience with the AWS ecosystem (particularly Kinesis, Glue, Redshift, EMR, RDS, Aurora, DMS, etc.)
- Some experience with industry-leading ETL tools such as Informatica, Talend, or Snowflake
- Familiarity with data modelling concepts
- Advanced experience in one or more of the following programming languages: SQL, Python, Java, Scala, etc.
What's on Offer
- Attractive salary package
- Unparalleled tech stack
- Career progression runway to grow your leadership potential as the company scales!