Team CYNET.ai - SubT Challenge Qualifier
Team CYNET.ai was selected by DARPA (the Defense Advanced Research Projects Agency) as a qualifier in its ground-breaking Subterranean Challenge (SubT).
SubT is a multi-million-dollar competition that tasks teams of robots with autonomous exploration deep beneath the surface of the Earth.
After a few months of developing software for the qualification process, Team CYNET.ai was one of only eight teams worldwide chosen to continue through the virtual track of the competition. Other competitors include NASA, Caltech, MIT, Carnegie Mellon, Georgia Tech and several other robotics companies, most of them funded by DARPA. Team CYNET is self-funded and doesn't benefit from the availability of university labs.
The first round (Tunnel Circuit) didn't go very well, but it was a good learning experience for us. In the second round (Urban Circuit) we scored more points, and now we're really looking forward to improving our systems for the Cave Circuit.
Want to be a part of the team’s success? The SubT Challenge is a multi-year, complex endeavour. Team CYNET is looking to form partnerships and welcomes financial and equipment assistance of any kind in order to compete with the major institutional teams. Sponsors will get exposure in our media, demos, videos and tech articles.
The DARPA Subterranean Challenge aims to develop innovative technologies that would augment operations underground. The SubT Challenge will explore new approaches to rapidly map, navigate, search, and exploit complex underground environments, including human-made tunnel systems, urban underground, and natural cave networks.
Check out more details about SubT.
Programming the next generation of autonomous underground robots
Team CYNET.ai is using advanced technologies in the fields of robotics, computer vision and deep learning to deploy a swarm of robots through the current tunnel environment. These robots move autonomously: they build a map of the unknown environment, explore it, find artifacts, calculate their location, and communicate their findings back to the Base Station.
Our robotics software project has three main components:
Mapping and localization
The robots don't know the environment. As they explore it, they must both create a dynamic map and calculate their position by sensor fusion.
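One common way to build such a dynamic map is a log-odds occupancy grid, where each cell accumulates evidence of being free or occupied as sensor readings arrive. The sketch below is illustrative only (the class name, grid size, and log-odds increments are our own assumptions, not the team's actual implementation):

```python
import math

class OccupancyGrid:
    """Minimal log-odds occupancy grid: every cell starts unknown (p = 0.5)."""
    L_OCC, L_FREE = 0.85, -0.4  # log-odds increments per observation (illustrative values)

    def __init__(self, width, height):
        self.logodds = [[0.0] * width for _ in range(height)]

    def update(self, x, y, occupied):
        # Add evidence: positive log-odds push toward "occupied", negative toward "free".
        self.logodds[y][x] += self.L_OCC if occupied else self.L_FREE

    def probability(self, x, y):
        # Convert accumulated log-odds back to an occupancy probability.
        return 1.0 - 1.0 / (1.0 + math.exp(self.logodds[y][x]))

grid = OccupancyGrid(10, 10)
grid.update(3, 4, occupied=True)   # a range beam ended here: evidence of a wall
grid.update(2, 4, occupied=False)  # the beam passed through here: evidence of free space
```

Because updates are additive in log-odds space, repeated observations of the same cell reinforce each other, which makes the map robust to occasional noisy readings.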
Robot motion control
Motion control involves both optimized path planning through the tunnel structure (as battery life is limited) and a PID controller to steer through the narrow passages.
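A textbook PID controller of the kind used for this sort of steering task can be sketched in a few lines. The gains and the lateral-offset error signal below are hypothetical, chosen only to illustrate the idea:

```python
class PID:
    """Simple PID controller: output = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        # No derivative term on the very first update (no previous error yet).
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: steer back toward the tunnel centerline.
# error = lateral offset from the centerline in meters (assumed sign convention).
pid = PID(kp=1.2, ki=0.05, kd=0.3)
steering = pid.update(error=0.5, dt=0.1)
```

In practice the gains are tuned per robot, and the integral term is usually clamped to avoid wind-up in long corrections.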
Object detection
Images from the onboard camera pass through a convolutional neural network to identify and localize known objects.
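Detectors like YOLOv3 emit many overlapping candidate boxes per object, so a standard post-processing step is non-maximum suppression (NMS): keep the highest-scoring box and discard any box that overlaps it too much. A minimal pure-Python sketch (the boxes, scores, and threshold are made-up illustration data):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, threshold=0.5):
    """Keep boxes greedily by score; drop any box overlapping a kept one by > threshold."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= threshold for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # box 1 overlaps box 0 too much, so only 0 and 2 survive
```

OpenCV ships this step built in (`cv2.dnn.NMSBoxes`), so in a real pipeline the hand-rolled version above would normally be replaced by the library call.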
These are just some of the technologies we're using for this project.
- ROS framework
- Gazebo Ignition simulation
- C++ and Python nodes
- Simultaneous Localization and Mapping (SLAM)
- OpenCV image processing
- Deep learning with PyTorch
- YOLOv3 object detector
Meet Our Team