Data Engineer
Compensation is DOE (target is ~$110-130k)
Carlsbad, CA 92011 – Relocation Assistance is available
What we're looking for:
• Strong relational database experience and SQL query skills (Postgres experience is a plus)
• Experience with AWS cloud services: EC2, EMR, RDS, Redshift (experience with other cloud platforms such as GCP or Azure is ok too)
• Experience with stream-processing systems: Storm, Spark Streaming, etc.
• Experience with object-oriented/functional scripting languages: Python, Ruby, Java, C++, etc.
• Experience with big data tools like Hadoop, Spark, Kafka, Cassandra, etc.
• A friendly, humble attitude, a good sense of priorities, and a hunger to learn
• Exceptional problem solving, analysis, decomposition, and communication skills
What you will do:
• Create and maintain optimal data pipeline architecture.
• Assemble large, complex data sets that meet functional / non-functional business requirements.
• Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
• Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
• Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
• Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
• Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
• Work with data and analytics experts to strive for greater functionality in our data systems.
What you will bring:
• Advanced working knowledge of SQL and experience with relational databases, including query authoring and familiarity with a variety of database systems.
• Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
• Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
• Strong analytic skills related to working with unstructured datasets.
• Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management.
• A successful history of manipulating, processing and extracting value from large disconnected datasets.
• Knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
• Project management and organizational skills.
• Experience supporting and working with cross-functional teams in a dynamic environment.
We are a critical-logistics startup: we're what happens when next-day shipping isn't fast enough. From airplane parts to movie reels and even human organs, our software is revolutionizing the industry by giving shippers visibility into their packages at every stage of transportation. We calculate optimal routes, track airplanes in flight, and automatically dispatch drivers using our native mobile app. We have the obligatory pantry full of snacks, fun company activities, unlimited PTO, a gym subsidy, and a growing list of benefits that keep employees happy.
• 100 employees, founded in 2014
• Raised $28M in venture funding; recently closed Series B
• Stock Options after 3 months
• Unlimited PTO
• 401k with Match
• Good work life balance
• Some work from home flexibility
• Helping to create a product that can save people’s lives
Click here to send us your resume