The use of imaging sensors on ROVs, such as stereo cameras, sonars, and laser scanners, allows operators to capture data to build detailed structural models in both 2D and 3D, even in turbid conditions. Using cloud-based software, operators and engineers can share, view, and compare their underwater missions in great detail. This data can then be logged, tracked, and monitored over time. The resulting portfolio of information provides asset management teams with invaluable insight for optimizing their asset management practices.
Figure 1: A Deep Trekker ROV Demonstrating Imaging Sonar Through Turbid Conditions
In addition to advanced scanning capabilities, advances in battery-powered commercial-grade ROVs allow for remote deployment in rural or difficult-to-access areas. This is especially helpful in situations like offshore energy platforms, where inspection equipment is commonly flown in by helicopter.
Working in conjunction with the visual data collected via camera or imaging sensors, ROVs equipped with Ultra-Short Baseline (USBL) and Doppler Velocity Log (DVL) sensors can accurately pinpoint and display their location on a map. This adds an additional layer of data to surveying, since operators can not only generate detailed models but also understand the relative position and size of the asset being modeled.
Figure 2: A Deep Trekker ROV Tracking its Position and Mission Path
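To illustrate how these two sensors complement each other, here is a minimal sketch of blending DVL dead reckoning with periodic USBL fixes. The update rates, blending gain, and sample values are assumptions for illustration only, not Deep Trekker's navigation implementation.

```python
import numpy as np

# Minimal 2D dead-reckoning + USBL correction sketch (illustrative only).
# The DVL gives body-frame velocity at a high rate; the USBL gives an
# absolute position fix at a much lower rate. A simple complementary
# blend keeps the estimate from drifting. Rates and gain are assumed.

DT = 0.1          # DVL update period in seconds (assumed)
USBL_GAIN = 0.3   # how strongly a USBL fix pulls the estimate (assumed)

def dvl_step(position, velocity_body, heading_rad):
    """Integrate body-frame DVL velocity into the world frame."""
    c, s = np.cos(heading_rad), np.sin(heading_rad)
    rotation = np.array([[c, -s], [s, c]])
    return position + rotation @ velocity_body * DT

def usbl_correct(position, usbl_fix):
    """Nudge the dead-reckoned position toward an absolute USBL fix."""
    return position + USBL_GAIN * (np.asarray(usbl_fix) - position)

if __name__ == "__main__":
    pos = np.zeros(2)                      # start at the origin
    for step in range(100):                # 10 seconds of DVL updates
        pos = dvl_step(pos, np.array([0.5, 0.0]), heading_rad=0.0)
        if step % 20 == 19:                # occasional USBL fix (assumed rate)
            pos = usbl_correct(pos, usbl_fix=[0.05 * (step + 1), 0.1])
    print("estimated position (m):", pos.round(2))
```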
These advanced positioning systems are also being developed into methods of performing autonomous surveys without user input. This would streamline any repeated survey of a fixed structure, since the route would be pre-programmed and repeatable, with active yaw stabilization to combat unpredictable current patterns. Additionally, beginner pilots could use these functions to lessen the learning curve, since the amount of human interaction required would be reduced.
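As a rough sketch of what a pre-programmed route with yaw stabilization involves, the snippet below follows a list of waypoints and applies a simple proportional heading correction. The waypoints, gain, and command limit are hypothetical values, not a Deep Trekker control loop.

```python
import math

# Illustrative sketch of a pre-programmed survey route with simple
# proportional yaw stabilization. Waypoints, gain, and thrust limits
# are hypothetical values chosen for the example.

WAYPOINTS = [(0.0, 0.0), (10.0, 0.0), (10.0, 5.0), (0.0, 5.0)]
YAW_KP = 1.5       # proportional gain on heading error (assumed)
MAX_YAW_CMD = 1.0  # normalized thruster command limit (assumed)

def heading_to(current, target):
    """Bearing from the current position to the target waypoint."""
    return math.atan2(target[1] - current[1], target[0] - current[0])

def yaw_command(desired_heading, measured_heading):
    """Proportional correction that also rejects current-induced yaw."""
    error = (desired_heading - measured_heading + math.pi) % (2 * math.pi) - math.pi
    return max(-MAX_YAW_CMD, min(MAX_YAW_CMD, YAW_KP * error))

# Example: vehicle at the origin, pushed 20 degrees off course by current.
desired = heading_to((0.0, 0.0), WAYPOINTS[1])
print("yaw command:", round(yaw_command(desired, math.radians(20)), 3))
```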
Relying on case studies within the sustainable energy industry, this paper aims to provide real-world examples of how emerging ROV technologies can greatly improve underwater asset modeling. While these examples showcase what is currently possible, future applications will also be covered to open up discussion of the possibilities within these remote technologies.
Figure 3: Infographic Comparing 2D vs. 3D Modeling
Using a multibeam high-frequency (3.0 MHz) imaging sonar mounted to an ROV, operators can receive highly accurate imagery at ranges under 20 meters. With incorporated grid lines to provide distance measurements, stills taken during sonar readings can act as a 2D model and be used to analyze objects such as ship hulls, hydro dams, pump stations, piers, moorings, and pipelines for abnormalities or damage.
Figure 4: Exported Oculus Sonar Feed from a Deep Trekker ROV Mounted Unit
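For readers curious how such a still becomes a measurable 2D image, the sketch below projects per-beam range and bearing returns into a top-down raster and overlays range rings for distance readout. The beam fan, resolution, and synthetic returns are assumptions for illustration and do not reflect the Oculus data format.

```python
import numpy as np

# Sketch: rasterize multibeam returns (bearing, range, intensity) into a
# top-down 2D image with range rings for measurement. The 120-degree fan,
# 20 m maximum range, and resolution are assumed values.

MAX_RANGE_M = 20.0
PIXELS_PER_M = 25
SIZE = int(2 * MAX_RANGE_M * PIXELS_PER_M)

def to_image(bearings_rad, ranges_m, intensities):
    """Rasterize polar sonar returns into a Cartesian intensity image."""
    image = np.zeros((SIZE, SIZE), dtype=np.float32)
    x = ranges_m * np.cos(bearings_rad)
    y = ranges_m * np.sin(bearings_rad)
    cols = ((x + MAX_RANGE_M) * PIXELS_PER_M).astype(int)
    rows = ((y + MAX_RANGE_M) * PIXELS_PER_M).astype(int)
    valid = (ranges_m < MAX_RANGE_M) & (rows >= 0) & (rows < SIZE) & (cols >= 0) & (cols < SIZE)
    image[rows[valid], cols[valid]] = intensities[valid]
    return image

def add_range_rings(image, spacing_m=5.0):
    """Overlay faint rings every `spacing_m` metres for distance readout."""
    yy, xx = np.mgrid[0:SIZE, 0:SIZE]
    r = np.hypot(xx - SIZE / 2, yy - SIZE / 2) / PIXELS_PER_M
    image[np.isclose(r % spacing_m, 0, atol=0.05)] = 1.0
    return image

# Example with synthetic returns: a flat wall 12 m ahead of the sonar.
bearings = np.linspace(-np.pi / 3, np.pi / 3, 256)
ranges = np.full_like(bearings, 12.0)
img = add_range_rings(to_image(bearings, ranges, np.ones_like(bearings)))
print("image shape:", img.shape)
```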
Sonar Model Example: Fulcrum and IQUA Robotics
In late 2022, IQUA and Fulcrum Robotics collaborated on a project where construction was taking place over top of a heritage water tunnel. They were asked to conduct an optical and sonar survey of the tunnel to provide an accurate map of it in relation to the proposed new structure. This was challenging, as the site was extremely dangerous to access via divers: it is located 10 meters vertically below the surface, runs for 60 meters, and has a minimum width of 0.88 meters. The only way to access this tunnel for inspection was via a remote solution.
The Equipment Used
The solution was to use Deep Trekker’s REVOLUTION ROV equipped with a Blueprint Subsea Oculus M3000d sonar. This unit was small enough to comfortably fit within the confines of the tunnel and offered enough stability to accurately position the imaging sonar. Over a three-day period, operators were able to safely and effectively record sonar feeds and a visual inspection of the entire 60-meter tunnel using the main camera module.
Figure 5: The Completed 2D Tunnel Model from the DTG3 Capture Data
Using IQUA’s new ‘SoundTiles’ software, the team was able to render the collected visual data into a raw 2D data set. Using this reformatted data, they could then export it into spatial software to position the sonar and photogrammetry data within an initial structural scan performed by a UAV. This processing was conducted over a three-day period following the visual data collection.
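The SoundTiles workflow itself is not documented here, but the positioning step boils down to expressing the tunnel data in the same coordinate frame as the UAV scan. The sketch below only illustrates that general georeferencing idea by applying a rigid transform to sonar-derived points; the rotation angle and offsets are placeholder values standing in for quantities that would come from surveyed control points.

```python
import numpy as np

# Illustrative georeferencing step: apply a rigid transform (rotation +
# translation) so sonar-derived 2D points line up with the coordinate
# frame of a reference scan. The angle and offsets are placeholders.

def make_transform(theta_rad, tx, ty):
    """2D rigid transform as a 3x3 homogeneous matrix."""
    c, s = np.cos(theta_rad), np.sin(theta_rad)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0,  0, 1.0]])

def apply_transform(transform, points_xy):
    """Map Nx2 points into the reference frame."""
    homogeneous = np.column_stack([points_xy, np.ones(len(points_xy))])
    return (homogeneous @ transform.T)[:, :2]

sonar_points = np.array([[0.0, 0.0], [60.0, 0.0], [60.0, 0.88]])  # tunnel outline (m)
T = make_transform(np.radians(12.0), tx=512_300.0, ty=4_789_150.0)  # placeholder offsets
print(apply_transform(T, sonar_points).round(2))
```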
Laser scanners operate using the same detection principles as sonar; however, instead of acoustic pulses, they calculate distance by sending and receiving beams of visible light. The smaller wavelengths of light can provide more detailed models in clear conditions, but light is negatively affected by turbid water. Any opaque matter in the water will disrupt the passage of light, producing unreliable data. The result is a higher precision ceiling, but one that depends on water quality: in clear water, laser scanners can be more accurate than sonar, yet they can be rendered unusable in imperfect conditions.
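As a back-of-envelope illustration of the send-and-receive principle described above, the snippet below converts a round-trip travel time into a range, accounting for the fact that light travels more slowly in water than in air. The refractive index of 1.34 is a typical value for seawater at visible wavelengths; the timing value is chosen only to show the arithmetic.

```python
# Back-of-envelope range calculation for an optical pulse in water,
# following the send/receive principle described above.

C_VACUUM_M_S = 299_792_458            # speed of light in vacuum
N_WATER = 1.34                        # approximate refractive index of seawater
C_WATER_M_S = C_VACUUM_M_S / N_WATER  # ~2.24e8 m/s in water

def range_from_round_trip(round_trip_seconds):
    """Distance to a target from the measured round-trip travel time."""
    return C_WATER_M_S * round_trip_seconds / 2

# A target 5 m away returns the pulse in roughly 45 nanoseconds:
print(f"{range_from_round_trip(44.7e-9):.2f} m")
```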
Laser Example: Afonso Group
In the fall of 2022, Afonso Group, an underwater service company based in Newfoundland, Canada, was contracted for a unique project. Working with a confidential customer in the petroleum industry, their team faced the task of conducting a visual inspection of a 300 m (1,000 ft) long floating production, storage, and offloading (FPSO) vessel and generating a 2D model of the rudder and propeller to measure its clearance.
The Equipment Used
Because the survey device needed to be repeatedly lowered more than 25 m (80 ft) off the side of the FPSO, the team required a vehicle capable of performing in rough offshore conditions yet portable enough to be deployed from substantial height without major support equipment. They landed on using REVOLUTION and PIVOT ROVs together to ensure no downtime on the battery-powered units: while one was charging, the other was in the water surveying the hull. For topside evaluation, a DT640 MAG was attached to the hull via magnetic wheels and driven along the exterior.
Figure 6: A DT640MAG surveying a ship hull
Figure 7: A sample image of a propeller taken on a Deep Trekker ROV
To conduct the laser scan itself, the team integrated their Deep Trekker ROVs with laser scanners from Voyis, a manufacturer based in Waterloo, Canada. Over four weeks, the team was able to complete an in-depth survey of the entire 1,000 ft FPSO and generate a 2D model of the rudder and propeller to conduct clearance measurements to within fractions of a millimeter. While Afonso Group was unable to provide classified images of the results, they stated that the operation “went exactly how they needed it to”, and they are now contracted for upcoming projects of similar scope on other FPSO vessels.
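Afonso Group's deliverables are confidential, so the snippet below is only a generic illustration of how a clearance figure can be pulled from two scanned surfaces once they exist as point clouds: a nearest-neighbour search finds the smallest separation. The synthetic point sets and the 150 mm spacing are invented for the example.

```python
import numpy as np
from scipy.spatial import cKDTree

# Generic clearance check between two scanned surfaces, e.g. a propeller
# blade tip and a rudder face, represented as point clouds in metres.
# The point sets are synthetic; real data would come from the scanner's
# exported cloud.

def minimum_clearance(cloud_a, cloud_b):
    """Smallest point-to-point distance between two Nx3 point clouds."""
    distances, _ = cKDTree(cloud_b).query(cloud_a, k=1)
    return distances.min()

rng = np.random.default_rng(0)
propeller_tip = rng.normal([0.0, 0.0, 0.0], 0.002, size=(500, 3))  # synthetic patch
rudder_face = rng.normal([0.150, 0.0, 0.0], 0.002, size=(500, 3))  # centred ~150 mm away

print(f"clearance: {minimum_clearance(propeller_tip, rudder_face) * 1000:.1f} mm")
```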
Photogrammetry Example: Stantec
Stantec Markham’s archaeological department was recently tasked with evaluating a dam structure at Nassau Mills in Peterborough, Ontario. This site housed an active dam approaching the end of its functional lifespan, as well as the historical remains of a previous dam from the 1800s. The goal of the survey was to examine the historical remains while also identifying the construction methods used in past dams to inform new building plans.
The Equipment Used
Deep Trekker’s REVOLUTION and PIVOT ROVs were used in conjunction to provide enough battery life to last through the whole workday. Both of these vehicles are equipped with six thrusters, as well as rotating camera heads and tool platforms. Four of the six thrusters are vectored, while the remaining two are vertical, allowing for lateral and vertical movements without adjusting the ROV’s pitch. The only other piece of equipment used was a GoPro camera, which complemented the vehicle cameras for additional imaging. The GoPro was simply grasped in the ROV’s grabber claw when in use.
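To show why a four-vectored-plus-two-vertical layout permits lateral and vertical motion without pitching, here is a simplified thruster allocation sketch. The exact thruster angles and lever arms of the REVOLUTION and PIVOT are not published here, so the geometry below is an assumed, idealized layout.

```python
import numpy as np

# Illustrative thruster allocation for four vectored (horizontal, 45-degree)
# thrusters plus two vertical thrusters, using an assumed geometry.
# Columns = thrusters, rows = [surge, sway, heave, yaw] contributions.
c = np.cos(np.radians(45))
ALLOCATION = np.array([
    #  T1     T2     T3     T4    T5   T6   (T1-T4 vectored, T5-T6 vertical)
    [  c,     c,     c,     c,   0.0, 0.0],   # surge (forward/back)
    [  c,    -c,    -c,     c,   0.0, 0.0],   # sway (lateral)
    [ 0.0,   0.0,   0.0,   0.0,  1.0, 1.0],   # heave (vertical)
    [ 1.0,  -1.0,   1.0,  -1.0,  0.0, 0.0],   # yaw (simplified moment arms)
])

def thruster_commands(surge, sway, heave, yaw):
    """Least-squares mapping from desired body forces to thruster outputs."""
    wrench = np.array([surge, sway, heave, yaw])
    return np.linalg.pinv(ALLOCATION) @ wrench

# Pure lateral translation: only the vectored thrusters fire, pitch untouched.
print(thruster_commands(surge=0.0, sway=1.0, heave=0.0, yaw=0.0).round(2))
```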
Phase 1: Imaging
The first phase of Stantec’s photogrammetric modeling was the gathering of high-resolution images. The first challenge this project faced was weather: the survey needed to be conducted during the Canadian winter, with average low temperatures of -11 degrees Celsius for the area. Sending divers in at these temperatures would require extensive gear and result in shorter dive times to limit exposure risk. Secondly, working near a dam structure produces high current and risks of differential pressure. Project Archaeologist Mike Maloney stated, “if we had divers down, we're very close to a dam, there'd be a lot of safety concerns getting the work done”.
Figure 8: A PIVOT ROV Surveying a Submerged Structure
To address these issues, Mike, alongside Darren Kipping, another archaeologist at Stantec, decided an ROV was the best approach for imaging. By equipping an ROV with a GoPro camera set to take JPEG images every two seconds, the duo was able to pilot the ROV in a grid pattern for extensive photo coverage from varying angles, capturing near-continuous imagery by field-swapping between the two vehicles. The pair noted that they were able to conduct the underwater survey much faster than usual: “normally if we had divers down, we as the archaeologists would instruct them what to be looking for, what to record and then they would come back up and kind of inform us what they've seen”. In a single afternoon, the team was able to capture all the imagery needed to move on to the second stage: modeling.
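The two-second capture interval comes from the survey described above; the rest of the values in the quick check below (ROV speed, stand-off distance, field of view) are assumptions used only to show how forward overlap between consecutive frames can be estimated before committing to a grid pattern.

```python
import math

# Quick overlap check for time-lapse photogrammetry. Only the capture
# interval is taken from the survey description; speed, stand-off, and
# field of view are assumed values for illustration.

CAPTURE_INTERVAL_S = 2.0   # from the survey description
ROV_SPEED_M_S = 0.25       # assumed slow survey speed
STANDOFF_M = 2.0           # assumed distance from the structure
FOV_DEG = 90.0             # assumed horizontal field of view

def forward_overlap():
    """Fraction of each frame shared with the next one along the track."""
    footprint = 2 * STANDOFF_M * math.tan(math.radians(FOV_DEG / 2))
    advance = ROV_SPEED_M_S * CAPTURE_INTERVAL_S
    return max(0.0, 1 - advance / footprint)

# Photogrammetry packages generally want high (roughly 70-80%) forward overlap.
print(f"frame-to-frame overlap: {forward_overlap():.0%}")
```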
Phase 2: Modeling
Figure 9: The Completed 3D Model from Stantec’s Photogrammetric Survey
Once the necessary imagery was captured, Mike and Darren were able to upload their findings, alongside some previous side-scan sonar footage, to generate photogrammetric models. The software used was Agisoft Metashape, a stand-alone product that performs photogrammetric processing of digital images and generates 3D spatial data for use in GIS applications, cultural heritage documentation, and visual effects production, as well as for indirect measurements of objects at various scales.
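Stantec's processing scripts have not been published, so the sketch below only shows the general shape of a Metashape batch run driven from its Python API: load the time-lapse JPEGs, align them, build depth maps and a mesh, then export. File paths are placeholders, and processing arguments are left at their defaults because parameter names differ between Metashape versions.

```python
import glob
import Metashape  # Agisoft Metashape Professional's Python module

# Minimal photogrammetry pipeline sketch; paths are placeholders.
doc = Metashape.Document()
doc.save("dam_survey.psx")          # placeholder project path
chunk = doc.addChunk()

chunk.addPhotos(glob.glob("survey_photos/*.jpg"))  # placeholder image folder
chunk.matchPhotos()        # detect and match features across images
chunk.alignCameras()       # recover camera poses and a sparse cloud
chunk.buildDepthMaps()     # dense per-image depth estimation
chunk.buildModel()         # triangulated 3D mesh of the structure
chunk.exportModel("dam_model.obj")  # placeholder output path

doc.save()
```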
Figure 10: Calculating Site Measurements of the Submerged Dam Structure
The figure above demonstrates the precise measurement capabilities built into the modeling software. The captured images were rendered into fully interactive models with millimeter-level accuracy. Once the models were finalized, Stantec was able to compare the photogrammetric renderings against historical documents to track the impact of environmental factors over time, offering insights that will benefit the conceptual designs for the new builds set to take place.
Figure 11: Infographic Displaying Differences Between Sonar, Laser, and Photogrammetry
Figure 12: Main Project Photo for Project Sentry
Project Sentry is working with a combination of visual sensors, artificial intelligence programming, and SBL positioning to provide a fully remote and autonomous fish cage inspection setup. The new resident inspection system will have a remotely operated vehicle stationed in a garage, ready to deploy to perform inspections and maintenance of nets to minimize the risk of collapse and fish escapes and to ensure fish health over time on fish farms. Working in collaboration with partner Visual Defence, Deep Trekker’s inspection system will utilize artificial intelligence and machine learning to reduce the burden of identifying defects on the human operators of the system. While these particular projects are aimed at aquaculture, the technologies for visual guidance, pattern recognition, and autonomy are applicable to many other underwater uses.
Project AROWIND combines the use of visual sensors, acoustic positioning, and integration with a surface vessel to build 3D models of offshore wind pilings and other submerged structures. While this concept is aimed at wind installations, the technological advancements in 3D modeling and in creating systems of systems will have far broader applications. Families of systems are where multiple robotic platforms come together to accomplish a task. In this case, it is an unmanned surface vessel deploying an ROV, but there could also be combinations of unmanned ground vehicles, aerial drones, and autonomous underwater vehicles that achieve other surveillance and inspection objectives. Building families of systems starts with making the technology compatible from a software or firmware perspective; a simple shared message format of the kind involved is sketched below. Mechanical integrations depend on the task at hand and will vary in scope, but generally speaking they become easier to accomplish once the capabilities of the individual components are established.
Figure 13: Main Project Photo for Project AROWIND
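As a hypothetical illustration of the software-compatibility point above, the snippet below defines a shared, versioned status message that a surface vessel, an ROV, and other vehicles could all exchange without knowing each other's internals. The field names and values are invented for the example and do not describe any existing Deep Trekker interface.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical shared telemetry format for a family of systems.
@dataclass
class VehicleStatus:
    vehicle_id: str
    vehicle_type: str          # e.g. "usv", "rov", "uav"
    latitude: float
    longitude: float
    depth_m: float             # 0.0 for surface or aerial vehicles
    battery_pct: float
    task: str                  # e.g. "deploy_rov", "inspect_piling"
    schema_version: int = 1

def encode(status: VehicleStatus) -> str:
    """Serialize to JSON so any vehicle or shore station can parse it."""
    return json.dumps(asdict(status))

usv = VehicleStatus("usv-01", "usv", 44.651, -63.582, 0.0, 82.0, "deploy_rov")
rov = VehicleStatus("rov-07", "rov", 44.651, -63.582, 14.3, 64.0, "inspect_piling")
print(encode(usv))
print(encode(rov))
```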
Autonomy, 3D visualization, and machine vision are all intertwined in their development on unmanned vehicles. Autonomy underwater can stem from a variety of sensors: acoustic positioning technologies such as USBLs, SBLs, and DVLs; IMUs; proximity sensors; GPS logs from surfacing; and visual sensors. Visual sensors such as stereo cameras can contribute to autonomous navigation, which requires an extremely accurate understanding of the vehicle’s position, the same information needed to build a 3D model. Knowing where to stitch the next piece of the model together stems from positioning data, and can also be informed by machine learning engines that match pixels or similar images together.
There is no one right solution to every underwater challenge. Deep Trekker will continue to push to be at the forefront by using the variety of technologies available, integrating visual sensors, positioning systems, and software capabilities to make inspections as simple as possible. While we can’t publish every project we’re working on, we’d be happy to talk about your specific needs; reach out if you have any specific integrations you are looking for.