
EvokeEdge Pty Ltd (business name Fortifyedge)
Human-Centered Agentic AI for Wearables
WHAT’S IN IT FOR YOU?
Hands-on Development with Agentic AI and Firebase GenKit:
The student will gain firsthand experience working with Google’s latest agentic AI tools (e.g., Gemini 2.5 Pro, Agent Builder, Firebase GenKit) to develop context-aware AI agents for wearable-based decision support and mixed-reality training environments.
Applied Research in Human-Centered AI & TinyML:
The project offers the opportunity to explore and apply multimodal sensor fusion (IMU, HRV, pupillometry, acoustic cues) using on-device AI models built with TinyML. These models support real-time cognitive state analysis for frontline operators in critical scenarios.
You will also work directly with real customers and end users on a recently awarded Pilot Project involving tactical operators.
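To make the multimodal fusion idea concrete, here is a minimal sketch of late fusion of IMU, HRV, and pupillometry features into a single stress score. The baselines, weights, and thresholds are illustrative placeholders, not validated physiology, and the function names are hypothetical rather than part of any Fortifyedge API:

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive RR-interval differences, a standard
    time-domain HRV feature (lower values are associated with higher stress)."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def motion_energy(accel_xyz):
    """Mean accelerometer magnitude as a crude IMU activity feature."""
    return sum(math.sqrt(x * x + y * y + z * z) for x, y, z in accel_xyz) / len(accel_xyz)

def stress_score(rr_intervals_ms, accel_xyz, pupil_diameter_mm):
    """Toy late-fusion score in [0, 1]: each modality is normalized against an
    illustrative resting baseline, then averaged with equal weights."""
    clamp = lambda v: max(0.0, min(1.0, v))
    hrv = clamp(1.0 - rmssd(rr_intervals_ms) / 50.0)        # low HRV -> high stress
    imu = clamp(motion_energy(accel_xyz) / 20.0)            # high movement -> exertion
    pupil = clamp((pupil_diameter_mm - 3.0) / 4.0)          # dilation -> arousal
    return (hrv + imu + pupil) / 3.0
```

A real on-device model would replace the hand-set baselines with a trained TinyML classifier, but the fusion structure is the same.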
Industry-Grade Wearable & Edge AI Integration:
The intern will contribute to the Fortifyedge.ai platform, integrating the Samsung Health SDK and Wear OS features into Firebase-based microservices. This work includes building secure, scalable apps for biometric authentication, indoor tracking, and AR/VR-based operator training simulations.
Mentorship & Career-Ready Impact in High-Value Sectors:
Supervised by Fortifyedge.ai’s CTO and research leads, the student will work on impactful deployments in defense, public safety, and critical infrastructure, gaining experience in enterprise integration, privacy-first engineering, and tactical MLOps.
Collaboration with a State-of-the-Art Interdisciplinary Team:
The intern will collaborate with a world-class team spanning AI pipelines, edge computing, biometrics, LLMs, transformers, and deep learning. Research partners include Western Sydney University’s MARCS Institute and other domain experts supporting real-world validation with defense and frontline customers.
RESEARCH TO BE CONDUCTED
The proposed research explores the development of agentic AI applications that fuse wearable sensor data with Firebase-based AI services for real-time cognitive monitoring and situational awareness. The intern will investigate on-device TinyML models for fusing multimodal signals (IMU, HRV, pupillometry, audio) from Samsung wearables to infer operator stress and attention states. These insights will be integrated into a mixed-reality training environment using adaptive feedback loops powered by Gemini 2.5 and Firebase GenKit. The research will also evaluate the efficacy of AI-generated recommendations, secure peer-to-peer telemetry syncing, and personalized dashboards for frontline operator support in high-stakes indoor environments.
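The adaptive feedback loop described above can be sketched as a simple state-to-adjustment policy. In the envisioned system this decision would be delegated to a Gemini/GenKit-powered agent; here plain thresholds (chosen arbitrarily for illustration) stand in for that call:

```python
from dataclasses import dataclass

@dataclass
class OperatorState:
    stress: float      # 0..1, inferred by the on-device fusion model
    attention: float   # 0..1

def adapt_training(state: OperatorState) -> dict:
    """Illustrative policy mapping inferred operator state to a training
    adjustment; thresholds are placeholders, not validated values."""
    if state.stress > 0.7:
        return {"pace": "slow", "prompt": "guided breathing cue"}
    if state.attention < 0.4:
        return {"pace": "hold", "prompt": "re-engagement scenario"}
    return {"pace": "normal", "prompt": "continue scenario"}
```

Evaluating whether such agent-driven adjustments actually improve training outcomes is part of the research question itself.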
SKILLS WISH LIST
If you meet some or all of the criteria below (or similar), we want to hear from you. We strongly encourage women, Indigenous, and disadvantaged candidates to apply:
Experience with Python or JavaScript/TypeScript for backend and AI integration
Familiarity with Google Firebase (Firestore, Cloud Functions, Firebase GenKit)
Knowledge of agentic AI concepts and frameworks (e.g., Google Gemini, Agent Builder)
Understanding of machine learning, TinyML, or on-device AI model deployment
Experience working with wearable data (e.g., IMU, HRV, audio, biometric signals)
Interest in signal processing, sensor fusion, or physiological state estimation
Familiarity with Samsung Wear OS and Health SDKs is a strong advantage
Exposure to transformer models, LLMs, and generative AI (e.g., Gemini 2.5 Pro)
Experience or coursework in AR/VR or mixed-reality environments
Ability to build or enhance web dashboards (React, Flutter, or similar)
Basic understanding of data privacy, encryption, and secure data transmission
Strong problem-solving skills and an interest in applied, real-world impact
Ability to work independently and collaboratively with a remote, interdisciplinary team
Familiarity with GitHub, MLOps workflows, and agile development practices
We expect you will be learning some of these tools for the first time; we simply want someone who is comfortable building on their base skills to explore new areas.
RESEARCH OUTCOMES
The proposed project aims to produce a working prototype and research validation of agentic AI applications that operate on edge devices and Firebase-integrated cloud infrastructure. The primary outcomes include:
A functional agentic AI system that utilizes Google Gemini and Firebase GenKit to deliver adaptive, context-aware support to frontline operators based on real-time multimodal data collected from Samsung wearables.
Validated on-device TinyML models capable of inferring cognitive load, stress, and attention states by fusing IMU, HRV, acoustic, and pupillometry signals, with personalized profiling and adaptive feedback mechanisms.
Secure and scalable Firebase-based backend services including peer-to-peer encrypted telemetry syncing, RESTful APIs, and dashboard visualizations for training analysis, operator support, and performance insights.
Integration with a mixed-reality training loop, demonstrating how real-time physiological and behavioral insights can adapt training content and interaction style to operator status and workload.
Technical documentation and research artefacts, such as annotated datasets, edge AI model pipelines, and UX/UI insights for real-world deployability in safety-critical environments (e.g., defense and public safety).
Contribution to academic and applied knowledge through potential publications or white papers on agentic AI design patterns, edge-based AI in training environments, and privacy-first biometric intelligence.
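As one concrete element of the secure telemetry syncing named in the outcomes above, the sketch below shows HMAC-SHA256 integrity tagging of a telemetry payload using only the Python standard library. It is a minimal illustration, assuming a pre-shared key; a deployed system would layer this under TLS and proper key management rather than rely on it alone:

```python
import hashlib
import hmac
import json

def sign_telemetry(payload: dict, shared_key: bytes) -> dict:
    """Attach an HMAC-SHA256 tag so a peer can verify integrity and origin.
    Canonical JSON (sorted keys) keeps the tag stable across serializations."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(shared_key, body, hashlib.sha256).hexdigest()
    return {"body": payload, "hmac": tag}

def verify_telemetry(envelope: dict, shared_key: bytes) -> bool:
    """Recompute the tag and compare in constant time to resist timing attacks."""
    body = json.dumps(envelope["body"], sort_keys=True).encode()
    expected = hmac.new(shared_key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["hmac"])
```

The same pattern extends to peer-to-peer syncing: each device verifies tags before merging a peer's telemetry into its local store.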
These outcomes will support Fortifyedge.ai’s mission to deliver scalable, secure, and intelligent human-AI teaming tools to frontline operators, while giving the intern practical experience at the intersection of research, development, and impact.