Overview
We are an early-stage startup developing ‘bolt-on’ autonomy for the agricultural sector - hardware and software that turn existing tractors, sprayers, and harvesters into autonomous machines. We fuse advanced vision / AI navigation with rugged, field-tested hardware to deliver cost-effective and intuitive autonomy that farmers can trust.
We raised our initial funding three years ago and built our core technology with a lean team of a few engineers. We’ve proven the core technology in the field, validated market demand, and are now entering commercial trials in NZ and California - so we’re growing our engineering team over the coming months.
We’re hiring roles across software / ML, mechanical, and electronics, with multiple roles open that can be shaped around the strengths of the right candidates.
We are a small team with a lot to do, so you’ll need to be highly self-driven, able to figure things out without a detailed brief, and know when to push hard and when to be pragmatic.
Key Responsibilities
- Develop and optimise vision-based ML models (object detection, semantic segmentation, depth estimation), maintaining the datasets, training pipelines, and evaluation tools that support them.
- Explore and integrate state-of-the-art vision and ML techniques (e.g., vision transformers, self-supervised learning, temporal models) to improve robustness and generalisation under challenging orchard conditions.
- Build and maintain ML training and evaluation pipelines for perception and mapping tasks.
- Create aerial image-based classification pipelines for orchard mapping and GIS layers.
- Implement temporal scene reconstruction, fusing multi-frame data into consistent 3D world models.
- Design and optimise SLAM and visual odometry algorithms for orchard navigation.
- Classify and segment 3D point clouds derived from stereo cameras or aerial data for terrain, row, and obstacle understanding.
- Implement self-monitoring mechanisms within perception modules to ensure safety and reliability.
- Develop radar processing pipelines (e.g. for cross-traffic detection), combining outputs with vision-based perception.
Who You Are
Ideally, you’re a standout engineer - someone with strong ML expertise, fast learning ability, and the drive to solve challenging perception problems. We’re open to a range of experience levels, but ML experience is essential.
- Proven experience building and deploying ML models for computer vision tasks (object detection, segmentation, depth estimation).
- Strong understanding of deep learning architectures (CNNs, transformers) and frameworks.
- Solid grasp of geometry, 3D vision, or SLAM methods and how they integrate with ML-based approaches.
- Product-minded engineer who can take models from lab experiments to robust real-world performance.
- Self-driven problem-solver who thrives under uncertainty and iterates quickly based on field feedback.
- Excited by the challenge of building complex systems in a lean startup environment.
Preferred Skills / Experience
- Software: Python, C++, ROS.
- ML deployment: experience with ONNX or similar frameworks for model export and optimisation.
- Robotics: experience with stereo vision, sensor fusion, or 3D point cloud processing.
- Bonus: FMEA, safety-critical system design, or radar signal processing.
What We Offer
- Freedom to innovate - minimal bureaucracy, maximum impact.
- Own systems end-to-end - from design to deployment on autonomous machinery.
- Opportunities to spend time on-site with growers in California and the Bay of Plenty, shaping the technology hands-on.
Practical things
- We run a hybrid working model with mandatory days in the office.
- Our preference is a full-time employee, but we're open to contracting.
- You'll need to be available to travel for up to 6 weeks at a time.
Job details
Seniority level: Entry level
Employment type: Full-time
Job function: Engineering and Information Technology
Industries: Automation Machinery Manufacturing