The Engineer II role plans, designs, develops, and tests high-quality, innovative, fully performing software systems and applications for software enhancements and new products. Key responsibilities include:
- Contribute to the full software development life cycle
- Write maintainable, extensible, tested code, while complying with coding standards
- Produce specifications and determine operational feasibility
- Continuously integrate and deliver software components into a fully functional software system
- Facilitate end-to-end user testing with customers
- Troubleshoot, debug and upgrade existing systems
CAREER LEVEL SUMMARY
- Proficiency: Has fully mastered the immediate function/domain and has developed competent skills in complementary functions or domains. Able to train junior members in mastered domain knowledge.
- Direction: Is largely autonomous, working on a day-to-day basis without supervision or support. Occasionally checks in with manager for questions or direction. Provides support or direction to more junior members.
- Business Focus: Understands TCNA’s business model, as well as the specific roadmap of the assigned product or function. Understands the interconnectedness of business systems, products, and/or technologies. Understands the needs of the customer, and approaches work with a desire to exceed customer expectations. Shows basic understanding of technology costs and validates the impact of choices with their manager when unsure.
- Growth Mindset: Exhibits a strong growth mindset, approaches feedback and constructive criticism eagerly, and actively implements plans for change. Tolerant of organizational turbulence. Positively contributes to team and organizational culture. Raises concerns in a constructive manner.
- 5+ years of total IT experience
- 3+ years of experience as a Data Engineer with large, complex data sources
- Experience designing, building, and operationalizing large-scale enterprise data solutions and applications using AWS data and analytics services in combination with other services/platforms: Spark, EMR, Redshift, Kinesis, Kinesis Firehose, Athena, Lambda, and AWS Glue
- Experience with data pipeline and workflow management tools such as Airflow and Step Functions
- Ability to implement both batch and streaming data pipelines in AWS; change data capture (CDC) experience
- Advanced working SQL knowledge and experience with relational and NoSQL databases
- Experience designing and building production data pipelines from ingestion to consumption within a big data architecture, using Java or Scala
- Experience with infrastructure-as-code tools such as Terraform (preferably) or CloudFormation
- Kubernetes experience for developing, deploying, and orchestrating microservices is a plus
- Knowledge of software engineering best practices across the development lifecycle, including agile methodologies, coding standards, code reviews, source management, build processes, testing, and operations
- Understanding of concepts regarding security, privacy, performance, etc.