This paper presents the design of the interacting-BoomCopter (I-BoomCopter) unmanned aerial vehicle (UAV) for mounting a remote sensor package on a vertical surface. Critical to the design is a novel, custom, lightweight passive end effector. The end effector has a forward-facing sonar sensor and an in-line force sensor to enable autonomous sensor mounting tasks. The I-BoomCopter's front boom is equipped with a horizontally mounted propeller, which can provide forward and reverse thrust with zero roll and pitch angles. The design and modeling of the updated I-BoomCopter platform are presented along with prototype flight test results. A teleoperated wireless camera sensor mounting task examines the updated platform's suitability for mounting remote sensor packages. Additionally, an autonomous control strategy for remote sensor mounting with the I-BoomCopter is proposed, and autonomous test flights demonstrate the efficacy of the approach.
Introduction
The infrastructure that forms our transportation networks, communication systems, power grid, and utility networks requires constant attention. Naval vessels, space vehicles, nuclear reactors, dams, oil tanks, and space structures require similar inspection to protect human safety and structural integrity against leaks, spills, and even disasters. Currently, human visual inspection and direct sensing remain the most trusted and pervasive methods for assessing the condition of our infrastructure over its lifetime. However, this process is not as simple as it may seem. Inspectors are under severe time constraints, leaving many structures neglected, and large environmental variations such as weather, dirt, and lighting conditions impede the process. Furthermore, many regions of structures (e.g., under a bridge, or high up on a tower) are difficult to access, requiring costly equipment or, in certain cases, expensive closures. The limitations of human inspection revolve around consistency, accessibility, safety, and efficiency. Although routine structural inspections are conducted periodically, extreme events also trigger the need for unscheduled inspections and data collection. Natural hazards such as tornadoes, hurricanes, floods, and earthquakes necessitate the immediate inspection of a large number of critical structures in a short period of time. Meticulous and timely maintenance is a significant factor driving the overall prioritization of spending for repair and replacement. Novel robotic platforms hold great potential to save considerable time and resources, while also enabling consistency in these tasks. In fact, one can envision a fleet of unmanned aerial vehicles (UAVs) designed specifically for infrastructure inspection and maintenance tasks. Inspection tasks require the design to handle sensing payloads such as cameras and/or LIDAR sensors. Maintenance tasks require the UAV to physically interact with the environment, e.g., to place remote sensors, clean joints, apply sealants, and mark or tag areas of interest.
Unmanned aerial vehicles have been used quite successfully in the past for applications such as inspection, surveillance, mapping, and precision farming. A new trend is now emerging to use these UAVs to physically interact with the environment. These types of interactions include: object manipulation [1–7], transportation [8–10], assembly [11], and contact inspection tasks [12–14]. For environmental interactions, it is desirable to use a small UAV with vertical take-off and landing (VTOL) and hovering capabilities. However, VTOL platforms are generally underactuated, i.e., equipped with fewer actuators than degrees-of-freedom. For example, quadrotor UAVs have traditionally been used for environmental interactions and aerial manipulation tasks. These multirotor vehicles are very agile and provide a simple, fixed four-motor/rotor configuration, but this yields an underactuated design. There is an inherent coupling between the UAV's translational and rotational dynamics, so the vehicle is unable to independently control the forces and torques in all dimensions. Therefore, quadrotors and other underactuated platforms are nonholonomic vehicles, incapable of commanding arbitrary velocities in six degree-of-freedom space. This limits the achievable position and attitude trajectories, as well as the platform's ability to interact with the environment, since complex maintenance and repair tasks require the ability to instantaneously resist or apply arbitrary forces and torques. Several novel multirotor platforms have been developed recently to overcome the restrictions of underactuated platforms. They include: quadrotors with tilting propellers [15–17]; quadrotors with a horizontally mounted propeller [18]; the Omnicopter platform, consisting of two central counter-rotating coaxial propellers to generate lift, surrounded by an airframe with three variable-angle, horizontally mounted ducted fans to control attitude and provide lateral forces [19,20]; hex-rotor platforms [21–23]; and finally, an eight-propeller configuration [24].
Along these same lines, we presented the design of the interacting-BoomCopter (I-BoomCopter) UAV in our previous work [25]. It is a modified version of the BoomCopter platform found in the hobby community2 but specifically designed for interacting with the environment. It has a base tricopter configuration providing it with VTOL and hover capabilities; however, a horizontally mounted, four-blade, reversible propeller has been mounted on the front boom. This propeller can be spun in either direction to provide a horizontal forward or reverse thrust. This setup allows the I-BoomCopter to accomplish complex interaction tasks using existing (simple) control schemes to stabilize the vehicle's attitude and position, while using only the front-mounted propeller for forward and reverse motion in a level configuration. A custom-designed manipulator resides at the end of the front boom with an integrated force sensor for use in pulling or pushing environmental interaction tasks. To demonstrate the interaction capabilities of the platform, we performed an autonomous electrical panel door opening and closing task using a passive end effector and simple vision-based control.
In this paper, we present a new I-BoomCopter UAV design specifically for mounting remote sensors on critical national infrastructure (Fig. 1). We first present a dynamic model for the UAV, followed by designs for a boom-propeller mechanism and an entirely new modular end effector with integrated sensors that satisfy the unique constraints of this new problem definition. The results of a teleoperated remote sensor mounting flight test are then presented and discussed. Finally, we propose image-based target tracking algorithms and a high-level control strategy to enable autonomous sensor mounting operations, and present preliminary results that demonstrate the effectiveness of the proposed approach.
I-BoomCopter Dynamic Modeling
The I-BoomCopter is a variant of the general tricopter UAV. It has three main rotors whose combined thrust supports its weight while flying. The right rotor rotates in a clockwise sense, and the left and tail rotors rotate in a counterclockwise sense. The resulting net clockwise torque is counteracted by tilting the tail rotor using a servo mechanism. Increasing the thrust of the tail rotor and decreasing those of the left and right rotors creates a positive pitch that drives the I-BoomCopter forward. The roll movement is obtained by varying the thrusts of the left and right rotors, which also provides sideways movement. Yaw is achieved by tilting the tail rotor. In addition, the I-BoomCopter has a rotor mounted in a vertical plane on the front of the vehicle (as seen in Fig. 1). This rotor can provide reversible thrust, enabling the vehicle to move both forward and backward without pitching. We will refer to this propeller as the "boom-prop" for the remainder of the paper.
The I-BoomCopter operates in two modes: the tricopter mode, in which the boom-prop stays inactive and the vehicle flies like a conventional tricopter, and the boom-prop mode, in which it uses the boom-prop for in-plane thrust. We found in Ref. [25] that, beyond a certain pitch angle, the tricopter mode can achieve higher velocity and acceleration than the boom-prop mode. However, the boom-prop gives the vehicle the additional capability of exerting physical forces on objects for environmental interactions.
The free-body diagram of the I-BoomCopter is shown in Fig. 2. x–y–z represents the right-handed earth coordinate frame (E-frame), and xb–yb–zb represents the right-handed body coordinate frame (B-frame) of the system. The xb-axis is aligned with the boom-prop axis, the yb-axis points toward the left rotor, and the zb-axis is directed upward. Roll (φ), pitch (θ), and yaw (ψ) are defined by right-handed rotations about the positive x-, y-, and z-axes, respectively.
where X = [x y z]T, V = [ẋ ẏ ż]T, and Θ = [φ θ ψ]T are the position, velocity, and roll-pitch-yaw angles in the E-frame, respectively.
Note: s, c, and t are the abbreviated forms of sine, cosine, and tangent.
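With these abbreviations, the attitude kinematics take the standard roll-pitch-yaw form shown below; this is a sketch for reference, with the body angular rate vector ωb = [p q r]T introduced here for illustration (it is an assumption, not a symbol defined above):

\[
\dot{X} = V, \qquad
\dot{\Theta} =
\begin{bmatrix}
1 & s_{\phi} t_{\theta} & c_{\phi} t_{\theta} \\
0 & c_{\phi} & -s_{\phi} \\
0 & s_{\phi}/c_{\theta} & c_{\phi}/c_{\theta}
\end{bmatrix}
\omega_b
\]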
I = diag(Ixx, Iyy, Izz) represents the moment of inertia matrix, where Ixx, Iyy, and Izz are the inertias about the xb-, yb-, and zb-axes, respectively.
The torque about the xb-, yb-, and zb-axes in the B-frame is given as Tb = [τxb τyb τzb]T,
where τl, τr, and τt are the counter torques of the left, right, and tail rotors, respectively; l1 and l2 are the perpendicular distances of the left and the right rotors from the xb- and yb-axes, respectively; and l3 is the perpendicular distance of the tail rotor from the yb-axis (see Fig. 2). The total vertical force is Fzb = tl + tr + tt, where tl, tr, and tt are the thrusts of the left, right, and tail rotors, respectively. Fxb is zero when the UAV is behaving like a tricopter (boom-prop not engaged) and Fxb = tb in the boom-prop mode, where tb is the boom-prop thrust. τxb will also have the boom-prop reaction torque τb added to (or subtracted from) it when the boom-prop is engaged and rotating in a clockwise (or counterclockwise) sense.
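For reference, torque components consistent with the rotor layout described above can be written in the following plausible form; the tail-tilt angle α is introduced here for illustration (it is not defined in the text), and the exact signs depend on the conventions of Fig. 2:

\[
\tau_{x_b} = (t_l - t_r)\, l_1, \qquad
\tau_{y_b} = t_t \cos\alpha \, l_3 - (t_l + t_r)\, l_2, \qquad
\tau_{z_b} = \tau_r - \tau_l - \tau_t \cos\alpha + t_t \sin\alpha \, l_3
\]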
End Effector and Boom-Prop Design
In order to mount a sensor on a vertical surface, the I-BoomCopter must be able to carry the sensor to a desired location, move it into its final position on the vertical surface, attach it to the surface, and then release the sensor. These tasks require the I-BoomCopter to have precise vehicle localization capabilities, a mounting location detection system, and the ability to carry and mount a sensor package. For outdoor applications such as mounting a vibration sensor on a bridge or building, a global positioning system (GPS) signal can be used to get the I-BoomCopter close to the location where the sensor will be mounted, but additional sensors are needed to precisely identify the final sensor mounting location and guide the I-BoomCopter into position. Similarly, for indoor applications, such as mounting a surveillance camera, various indoor positioning systems exist that may be used to get the vehicle close to a desired sensor mounting location; however, these systems are not sufficient to guide the vehicle's end effector to the sensor's final mounting position, and thus a separate mounting location detection system is needed.
The I-BoomCopter's onboard computer (BeagleBone Black) and forward-facing camera can be used to select and/or identify a target location for mounting a sensor, but these data can only be used to align the I-BoomCopter from side-to-side and up-and-down since the single-camera images lack depth information. Thus, a new end effector was designed for the I-BoomCopter, which is capable of providing depth information by sensing the forward distance to a vertical surface. The end effector was also fitted with a customized mechanism, which allows the I-BoomCopter to carry a sensor and then passively release it when it is pressed against a vertical surface, without the use of a gripper or any additional motors.
End-Effector Prototype Design.
The end effector shown in Fig. 3 was designed for the task of mounting a sensor to a vertical surface. The physical mounting of the sensor involves two primary operations: first, attaching the sensor to the surface, and second, releasing the sensor from the end effector once it has been mounted. One simple way to attach the sensor to a surface is with two-sided tape. This method requires a relatively clean and smooth surface, but will work on a wide variety of common surfaces such as painted drywall or metal beams. In our experiments, we determined that 3M RP45 very high bond two-sided foam tape is a robust adhesive for this application. In order to release the sensor after it has been pressed against a vertical surface, the end effector makes use of a push-to-release mechanism (PTRM). The clamp, slide rail, sled, and lock clip were all three-dimensional (3D) printed with ABS plastic. The assembled end effector weighs only and is completely passive, which conserves battery power for extended flight times. The sonar sensor is a low-cost HC-SR04 module that provides accurate distance measurements between 2 cm and 400 cm with a resolution of 3 mm.
Push-to-Release Mechanism.
The PTRM is based on a simple grab latch design of the kind generally used to hold cabinet doors shut. When it is pressed inward, it alternates between two states: open and locked. The PTRM consists of three main components: the knob, the spring-loaded plunger with its grabber arms, and the housing (see Fig. 4). The knob consists of a base that can be attached to a sensor package with screws or an adhesive, and a small cube extended a short distance from the base by a cylindrical cantilevered arm. The plunger slides in and out of a cavity in the housing, and has two grabber arms that rotate about a common pin to open and close around the cube portion of the knob. The grabber arms are driven to a normally open state by a torsional spring when the plunger is extended to the front of the housing cavity; however, the width of the cavity decreases along the length of the housing so that the grabber arms are forced shut as the plunger slides into the housing. If the knob is pressed against the plunger while it slides into the housing, the knob will be captured by the grabber arms and cannot be removed until the plunger slides back out of the housing. For our experiments, we purchased a PTRM and replaced the original grabber arms and knob with our own 3D-printed versions that use a cube-shaped knob instead of a spherical knob. The cube shape prevents the knob from rotating while enclosed in the grabber arms, which allows the orientation of the sensor package to be more strictly controlled.
The two states of the PTRM are governed by the slide pin (a hook-shaped, cantilevered pin, which is held in the center of the housing cavity by a circular, planar spring). The planar spring provides an upward restoring force proportional to the downward deflection of the slide pin, and provides left/right restoring forces proportional to any right/left deflection of the pin. Thus, the end of the slide pin is always driven back to the center point of the housing cavity. The end of the pin (which is curved upward) rests inside a three-dimensional groove that is embedded in the bottom of the plunger. This groove is designed so that the slide pin travels between two resting positions corresponding to the open and locked states.
The force required to open the PTRM is governed primarily by the spring attached to the plunger. We measured this force to be , which requires the boom-prop of the I-BoomCopter to be designed to produce a minimum of of thrust to release the sensor package.
Force Sensing.
In order to measure the forces applied by the end effector, a force-sensing resistor (FSR) was placed in series with the PTRM. This was accomplished by mounting the PTRM on a slider that glides with minimal friction along the slide rail (see Fig. 3) until it comes into contact with the FSR. Thus, all of the force transmitted from the I-BoomCopter's front arm to the sensor package is registered by the FSR.
Forward Distance Sensing.
A sonar distance sensor was attached to the end effector to measure the forward distance to a vertical surface. The sensor is raised sufficiently high above the PTRM to prevent false readings from the sensor package attached to the PTRM.
Boom-Prop Prototype Design.
Component details such as the propellers and bearings used in the boom-prop are provided in Ref. [25]. Since the PTRM places a lower bound of on the boom-prop thrust, we considered several different boom-prop designs to obtain the maximum thrust. Three aspects of the boom-prop were changed in the designs: number of propeller blades, gear ratio, and motor kV.
For each configuration, the boom-prop was rigidly mounted to an L-shaped lever arm that was set up to transmit the thrust from the boom-prop to a weighing scale to be recorded. The boom-prop was then connected to a four-cell lithium-polymer battery, and the throttle was incremented from 0% to 100% while the thrust, current, propeller RPM, and battery voltage were recorded. Figure 5 summarizes the effect of the number of propeller blades on the static thrust of the boom-prop. These results clearly indicate that a larger number of propeller blades increases the amount of thrust obtained per RPM, and tests with other configurations confirmed that the thrust per RPM remains constant for a given number of propeller blades.
Figure 6 shows a comparison of the thrust generated by the boom-prop versus the electrical power consumed (computed as the product of the measured battery voltage and current, P = VI). This plot indicates that the efficiency of the boom-prop is not significantly affected by increasing the number of propeller blades up to at least four (a result which was confirmed by tests with several other boom-prop configurations). Thus, we determined that a four-blade boom-prop configuration would provide the maximum thrust and efficiency.
In other tests, we found that decreasing the gear ratio from 2:1 down to 1.5:1 increased the maximum RPM achieved and thus the maximum thrust achieved, but decreasing from 1.5:1 down to 1.2:1 had no effect. Lower gear ratios also had no noticeable effect on the efficiency of the boom-prop (measured in gf/Watt). Using an NTM2836, 1000 kV motor, with four propeller blades and a 1.5:1 gear ratio, the maximum thrust achieved was , which gives only headroom above the minimum acceptable force. Thus, in an attempt to increase the boom-prop RPM (and thus the static thrust) further, we selected a motor with a slightly higher kV value and power rating. This motor (NTM2836, 1400 kV), combined with four propeller blades and a gear ratio of 2:1, gave the maximum overall thrust: . This gives a more comfortable headroom of above the minimum required force, which was deemed acceptable for our sensor mounting experiments.
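To illustrate how the recorded quantities relate to the reported metrics, the short sketch below processes a throttle-sweep log of the kind described above; the file name and column layout are assumptions, not the paper's actual data format.

```python
import numpy as np

# Hypothetical thrust-stand log: columns are throttle [%], thrust [gf],
# current [A], propeller RPM, and battery voltage [V].
log = np.loadtxt("boomprop_sweep.csv", delimiter=",", skiprows=1)
throttle, thrust_gf, current_a, rpm, voltage_v = log.T

power_w = voltage_v * current_a                      # electrical power, P = VI
thrust_per_rpm = thrust_gf / np.maximum(rpm, 1.0)    # gf per RPM (Fig. 5 metric)
efficiency_gf_per_w = thrust_gf / np.maximum(power_w, 1e-9)  # Fig. 6 metric

print(f"max static thrust: {thrust_gf.max():.0f} gf")
print(f"mean thrust per RPM: {thrust_per_rpm.mean():.4f} gf/RPM")
print(f"efficiency at full throttle: {efficiency_gf_per_w[-1]:.2f} gf/W")
```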
Sensor Mounting Experimental Results
As a step toward enabling the I-BoomCopter to mount various types of sensors on arbitrary vertical surfaces autonomously, we first considered the task of manually piloting the I-BoomCopter to mount a sensor package on a wall. For this experiment, we used a small wireless camera sensor package consisting of a 2.4 GHz wireless camera, a battery, and a 3D-printed rectangular enclosure. The enclosure has tabs to hold the camera in place and a removable back that allows access to change the battery as needed between experiments. The knob of the PTRM was attached to the front of the camera sensor package (just below the camera lens) with two-sided tape, and a strip of two-sided tape () was also applied to the back of the sensor package to be attached to the wall. Figure 7 shows the dimensions of the sensor package along with an exploded view of the package components and a 3D rendering of the package mounted to the front arm of the I-BoomCopter. The complete sensor package has a mass of , and, with the package attached, the I-BoomCopter has a mass of .

(a) Wireless camera sensor package dimensions, (b) exploded view of sensor package components, and (c) 3D rendering of sensor package attached to I-BoomCopter's front arm
The sonar module and force sensor were connected to an Arduino Pro Mini microcontroller, which was programmed to refresh the force data at and the sonar data at . The onboard BeagleBone Black computer was connected via UART serial to the Arduino Pro Mini and received a data stream of forward distance and pushing force at a rate of . For each flight test performed, the forward sonar distance, force sensor data, and Pixhawk flight management unit (FMU) sensor data were recorded and analyzed.
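A minimal sketch of the BeagleBone side of this UART link is shown below; the serial device, baud rate, and comma-separated message format are assumptions, as the Arduino firmware's actual framing is not specified here.

```python
import serial  # pyserial

# UART1 on the BeagleBone Black (device name assumed); the Arduino is assumed
# to stream lines of the form "<forward_distance_cm>,<force_raw>\n".
port = serial.Serial("/dev/ttyO1", baudrate=57600, timeout=1.0)

def read_end_effector():
    """Return (forward distance [cm], raw FSR reading) or None on a bad frame."""
    line = port.readline().decode("ascii", errors="ignore").strip()
    try:
        distance_cm, force_raw = (float(v) for v in line.split(","))
        return distance_cm, force_raw
    except ValueError:
        return None  # malformed or partial line; skip it
```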
Flight Performance.
The I-BoomCopter's position and attitude are controlled by a Pixhawk FMU. The base flight software running on the Pixhawk is the open-source ArduPilot flight stack. When flying outdoors, the Pixhawk uses a combination of GPS and barometer data to autonomously maintain the vehicle's altitude and horizontal position within the resolution of the GPS. However, when flying indoors, GPS readings are unavailable and the barometer-based altitude measurements can be unreliable due to sudden air pressure changes resulting from opening or closing a door, airflow from air-conditioning fans, etc. As such, the I-BoomCopter has a downward-facing Lidar-Lite v3 laser range finder, which enables autonomous altitude control indoors. The addition of the downward-facing laser range finder allowed the I-BoomCopter to enter Altitude Hold mode and maintain a steady altitude during teleoperated flights.
Figure 8 shows the I-BoomCopter's performance in altitude hold mode. During the 60 s window shown, the error in altitude was typically less than and reached up to only a few times.
Sensor Mounting Tests.
After the stability of the I-BoomCopter's altitude controller was confirmed through multiple test flights, we attached the wireless camera sensor package to the end effector and performed several sensor mounting tests. We used a table turned on end as a rigid vertical mounting surface, and marked the desired sensor mount location with a concentric square target at a height of (see Fig. 9). The I-BoomCopter was then flown manually to the desired altitude, switched into Altitude Hold mode, and then guided by the pilot to a position away from the target location on the wall. At this point, the boom-prop was engaged with 12% throttle by the pilot, which caused the I-BoomCopter to approach the wall at a slow speed. Once the sensor came into contact with the wall and was released from the end effector's PTRM, the pilot disengaged the boom-prop, flew the vehicle back to its launch point, and landed on the ground.
The sensor placement error for five successful sensor mounting trials is shown in Fig. 10. For these trials, the maximum, minimum, and average placement errors were , and , respectively. Figure 11 shows snapshots from three stages of the sensor mounting operation that resulted in the minimum placement error. For reference, images from the onboard webcam video feed corresponding to each stage are included to the right of each snapshot.

Successful I-BoomCopter sensor mounting operation. Right column: view from onboard webcam. A video of this flight test is available at the website.3
Figure 12 shows the sonar distance measurements during the sensor mounting operation. The slope of the decreasing sonar measurements represents the I-BoomCopter's forward velocity during the approach. This remained steady at about until impact, at which point the vehicle recoiled backward from the impact. It was then pitched backward (after the boom-prop was disengaged) to complete the landing process. This behavior is displayed in Fig. 12 with an initially shallow slope in the sonar data after the impact with the wall, followed by an increasing slope as the vehicle is pitched away from the surface.
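The approach velocity described above can be recovered from the logged data by fitting a line to the sonar samples recorded before impact; a minimal sketch, with illustrative variable names:

```python
import numpy as np

def approach_velocity(t_s, sonar_cm, impact_t_s):
    """Estimate forward velocity [cm/s] as the least-squares slope of the
    sonar distance vs. time over the pre-impact portion of the approach."""
    mask = t_s < impact_t_s
    slope, _intercept = np.polyfit(t_s[mask], sonar_cm[mask], 1)
    return -slope  # distance decreases on approach, so negate the slope
```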
The impulse generated during the collision can be seen in Fig. 13. The momentary peak in force occurred over a timespan of and was higher than the force required to toggle the PTRM. Thus, the sensor was released from the end effector immediately upon impact. This made it unnecessary to increase the boom-prop throttle to its maximum value, but other sensor delivery mechanisms, or a PTRM with a higher releasing force, may require the boom-prop to engage at a higher throttle after impact.
Toward Autonomous Control
The primary objective in this paper was to demonstrate the I-BoomCopter's capability of physical interaction with the environment to perform a remote sensor mounting task. As demonstrated in Sec. 4, with the assistance of autonomous altitude control, the sensor mounting task can be performed manually. However, the placement accuracy depends heavily on the skill of the pilot, and it is difficult to achieve repeatable performance. In addition, the pilot must constantly maintain a clear line of sight with the I-BoomCopter and the desired sensor mount location. Thus, at longer distances from the pilot, or in more obscure locations, the task becomes even more cumbersome. In light of these restrictions, an autonomous control strategy is proposed below.
Automating the sensor mounting task will require the development and integration of two key elements: a real-time target tracking system, and a high-level vehicle controller. Real-time target tracking will be achieved by using the forward-facing webcam (see Fig. 1) connected to the onboard computer (running image processing algorithms in the OpenCV framework), and high-level vehicle control will be achieved through communication between the onboard computer and the Pixhawk FMU.
Real-Time Target Tracking.
Ideally, the I-BoomCopter will be able to mount a sensor on any vertical surface of an existing structure. Hence, the image processing algorithm(s) employed for target tracking should be capable of detecting and tracking any visual feature as a reference point for the target mounting location.
As a preliminary step toward that goal, we have implemented a simple image processing algorithm that detects a known pattern based on the hue, saturation, and value (HSV) representation of its color. In this implementation, a circular pattern (see Fig. 14(a)) with predefined hue, saturation, and value ranges is used as a reference for the target sensor mounting location. All incoming video frames from the onboard webcam are filtered using these values, and we ensure that there are no other objects in our experimental environment with a similar color. As a result, only the desired color pattern is detected in the processed video frames, and the chance of false detection is minimal.

(a) Predefined target pattern and (b) tracking of a known pattern using onboard webcam. A bounding rectangle encloses the tracked pattern. The X, Y, and Z values shown are the distance from the center of the webcam frame (crosshairs) to the center of the tracked pattern (small circles).
Once the pattern is detected, a bounding rectangle is drawn around it along with a circle indicating its centroid (see Fig. 14(b)) and crosshairs indicating the center of the video frame. The horizontal and vertical offsets between the center of the target (circle) and the center of the video frame (crosshairs) provide an estimate of how far the I-BoomCopter needs to move to align itself with the target. Since the size of the pattern is predefined, the distance to the target from the I-BoomCopter can be estimated based on the size of the target in the current frame. Thus, we can calculate the position of the target relative to the vehicle (represented in the body coordinate frame), shown as x, y, and z distances in Fig. 14(b).
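A minimal OpenCV sketch of this detector is given below; the HSV bounds, target diameter, and focal length are placeholder values standing in for the calibrated ones used on the vehicle.

```python
import cv2
import numpy as np

HSV_LO = np.array([100, 120, 80])   # placeholder lower HSV bound
HSV_HI = np.array([120, 255, 255])  # placeholder upper HSV bound
TARGET_DIAMETER_M = 0.10            # known physical size of the circular pattern
FOCAL_LENGTH_PX = 600.0             # webcam focal length from calibration

def locate_target(frame_bgr):
    """Return (x_px, y_px, z_m): image-plane offsets from the frame center and
    estimated depth from apparent size, or None if the pattern is not found."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, HSV_LO, HSV_HI)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    frame_h, frame_w = frame_bgr.shape[:2]
    cx, cy = x + w / 2.0, y + h / 2.0
    z_m = FOCAL_LENGTH_PX * TARGET_DIAMETER_M / max(w, 1)  # pinhole model
    return cx - frame_w / 2.0, cy - frame_h / 2.0, z_m
```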
High-Level Vehicle Control.
Since the sensor mounting task requires the I-BoomCopter to perform several operations sequentially, we propose to use an extended finite state machine (EFSM) for the high-level control architecture. In each state of the EFSM, the onboard computer will calculate position, orientation, and boom-prop throttle commands to precisely control the motion of the I-BoomCopter along desired trajectories, and to press the sensor against the mounting surface. These commands will be communicated to the Pixhawk FMU using a combination of the Robot Operating System (ROS) and the MAVLink serial communication protocol. The inputs to the EFSM will include the target position as calculated by the tracking system described above, data from the force and distance sensors embedded in the end effector, and data from the Pixhawk FMU (such as the I-BoomCopter's current position). Based on the procedure used for mounting the sensor manually (see Sec. 4), the EFSM states will progress as shown in Fig. 15.
After takeoff, the pilot will toggle a switch to relinquish control of the vehicle, and the onboard computer will command the vehicle to move to a position near the target mount location. Then, the target tracking system will be used to position the end effector directly in front of the target mount location. From this position, the boom-prop will be engaged to approach the wall and press the sensor into place. Finally, the onboard computer will send the vehicle to a safe location away from the wall where the pilot can resume control and land the vehicle.
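A skeleton of the proposed EFSM is sketched below; the state names follow Fig. 15 only loosely, the alignment tolerance is a placeholder, and the setpoint/throttle callbacks are illustrative stand-ins for the ROS/MAVLink interface rather than the actual implementation.

```python
from enum import Enum, auto

class State(Enum):
    GOTO_TARGET_AREA = auto()   # fly near the target mount location
    ALIGN = auto()              # center the end effector on the target
    APPROACH = auto()           # engage the boom-prop and press the sensor
    RETREAT = auto()            # back away for pilot handoff
    DONE = auto()

ALIGN_TOL_PX = 10  # hypothetical image-offset tolerance for "aligned"

def step(state, target, released, send_setpoint, set_boom_throttle):
    """One EFSM update. `target` is (x_px, y_px, z_m) from the tracker or None;
    `released` is True once the force spike indicates the PTRM has toggled."""
    if state is State.GOTO_TARGET_AREA:
        return State.ALIGN if target is not None else state
    if state is State.ALIGN:
        if target is None:
            return State.GOTO_TARGET_AREA       # lost the target; reacquire
        send_setpoint(target)                   # null the image-plane offsets
        if abs(target[0]) < ALIGN_TOL_PX and abs(target[1]) < ALIGN_TOL_PX:
            return State.APPROACH
        return state
    if state is State.APPROACH:
        set_boom_throttle(0.12)                 # slow forward push, as in Sec. 4
        if released:
            set_boom_throttle(0.0)
            return State.RETREAT
        return state
    if state is State.RETREAT:
        send_setpoint((0.0, 0.0, 1.5))          # hypothetical 1.5 m standoff
        return State.DONE
    return state
```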
Preliminary Results.
We performed preliminary flight tests to validate the image processing algorithm and EFSM control architecture described above and to compare the positioning accuracy achieved during an autonomous sensor mounting operation to the accuracy achieved during manual operation. During these autonomous flight tests, the I-BoomCopter successfully aligned with a target position on a wall, and used the boom-prop to mount the sensor at the desired location on the wall. Figure 16 shows the path traveled by the I-BoomCopter during an autonomous sensor mounting flight test.

Trajectory of I-BoomCopter during an autonomous sensor mounting task. The two lower squares indicate autonomously calculated position setpoints. Left inset: force applied to sensor against wall. Right inset: final position of mounted sensor just inside square ( square indicates desired mounting region; diameter circle is tracked by the onboard webcam).
The two lower squares in Fig. 16 indicate vehicle positions calculated by the EFSM. The force applied to the sensor during impact with the wall and the final sensor position on the wall are included as insets in Fig. 16, on the left and right, respectively. The results presented in Fig. 16 demonstrate the feasibility of the autonomous control strategy proposed above (which uses the boom-prop to control the I-BoomCopter's forward motion during the sensor mounting operation). Furthermore, the minimum sensor placement error obtained during the autonomous flight tests was , which is 42% lower than the minimum achieved during the manual flight tests (see Fig. 10). These results are promising, and indicate that the proposed autonomous control strategy is capable of achieving greater sensor placement accuracy than manual control.
Conclusion
In this paper, we presented the design of a custom end effector and boom-prop mechanism that enables the I-BoomCopter to perform remote sensor mounting tasks. As a demonstration of this capability, we successfully mounted a custom wireless camera sensor on a wall during several manually operated flights with the aid of the I-BoomCopter's autonomous altitude control mode. We also showed results from preliminary flight tests that complete the remote sensor mounting task autonomously by making use of the I-BoomCopter's onboard computer and webcam to provide vision-based control feedback, determine the desired sensor mounting location, engage the boom-prop, and send vehicle position commands to the flight controller. Future work will characterize and improve the performance of the autonomous flight control system and consider the effects of external factors, such as wind and different mounting surface types, on system performance.
Acknowledgment
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
Funding Data
Division of Graduate Education (Grant No. DGE-1333468).