
Commit 6d56922

init perception
Signed-off-by: ismetatabay <[email protected]>
1 parent d086011 commit 6d56922

File tree

5 files changed: +134 -2 lines changed
Lines changed: 1 addition & 0 deletions
@@ -1,3 +1,4 @@
 nav:
 - index.md
 - Tuning localization: localization-tuning
+- Tuning perception: perception-tuning

docs/how-to-guides/integrating-autoware/tuning-parameters-and-performance/tuning-parameters/localization-tuning/index.md

Lines changed: 2 additions & 2 deletions
@@ -4,8 +4,8 @@

 In this section,
 our focus will be on refining localization accuracy within the YTU Campus environment through updates to localization parameters and methods.
-Our approach entails using NDT as the pose input source,
-and the Gyro Odometer as the twist input source.
+Our approach involves
+utilizing NDT as the pose input source and the Gyro Odometer as the twist input source.
 These adjustments play a pivotal role
 in achieving a heightened level of precision and reliability in our localization processes,
 ensuring optimal performance in the specific conditions of the YTU campus.
Lines changed: 131 additions & 0 deletions
@@ -0,0 +1,131 @@
# Tuning perception

## Introduction

In this section, we plan to enhance our perception accuracy within the YTU Campus environment by updating some parameters and methods.
We will enable camera-lidar fusion as our chosen perception method.
This approach holds the potential to significantly improve our ability to accurately perceive and understand the surroundings, enabling our vehicles to navigate more effectively and safely within the campus premises.
By fine-tuning these perception parameters, we aim to advance the capabilities of our systems and further optimize their performance in this specific environment.
## Perception parameter tuning

### Enabling camera-lidar fusion

To enable camera-lidar fusion, you first need to calibrate both your camera and lidar.
Following that, the `camera_info` and `rectified_image` topics of your cameras must be available in order to employ the `tensorrt_yolo` node.
Once these ROS 2 topics are prepared, we can proceed with enabling camera-lidar fusion as our chosen perception method:
!!! note "Enabling camera-lidar fusion in [`autoware.launch.xml`](https://github.com/autowarefoundation/autoware_launch/blob/2255356e0164430ed5bc7dd325e3b61e983567a3/autoware_launch/launch/autoware.launch.xml#L42)"

    ```diff
    - <arg name="perception_mode" default="lidar" description="select perception mode. camera_lidar_radar_fusion, camera_lidar_fusion, lidar_radar_fusion, lidar, radar"/>
    + <arg name="perception_mode" default="camera_lidar_fusion" description="select perception mode. camera_lidar_radar_fusion, camera_lidar_fusion, lidar_radar_fusion, lidar, radar"/>
    ```
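Before switching the mode, it can help to confirm that the camera topics the fusion pipeline relies on are actually being published. The following is a minimal `rclpy` sketch, not part of Autoware, that lists the available topics and checks for per-camera names; the `/sensing/camera/camera{i}/camera_info` and `/sensing/camera/camera{i}/image_rect_color` names and the camera count are assumptions, so adjust them to your sensor kit.

```python
# check_camera_topics.py -- illustrative sketch only; the topic names below are
# assumptions and should be adapted to your sensor kit's remappings.
import rclpy
from rclpy.node import Node


def main() -> None:
    rclpy.init()
    node = Node("camera_topic_check")

    # Per-camera topics assumed here; replace with your actual topic names.
    expected = []
    for i in range(2):  # number of cameras used for fusion (assumed)
        expected.append(f"/sensing/camera/camera{i}/camera_info")
        expected.append(f"/sensing/camera/camera{i}/image_rect_color")

    # Give graph discovery a moment to populate before querying.
    rclpy.spin_once(node, timeout_sec=1.0)
    available = {name for name, _types in node.get_topic_names_and_types()}

    for topic in expected:
        status = "OK" if topic in available else "MISSING"
        node.get_logger().info(f"{status}: {topic}")

    node.destroy_node()
    rclpy.shutdown()


if __name__ == "__main__":
    main()
```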
After that, we need to run the [TensorRT YOLO node](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/tensorrt_yolo) for our camera topics if it isn't already launched as part of your sensor model's launch files.
You can launch the tensorrt_yolo nodes by uncommenting the following lines in the [`camera_lidar_fusion_based_detection.launch.xml`](https://github.com/autowarefoundation/autoware.universe/blob/main/launch/tier4_perception_launch/launch/object_recognition/detection/camera_lidar_fusion_based_detection.launch.xml) file:

!!! note "Please adjust the following lines in the `camera_lidar_fusion_based_detection.launch.xml` file based on the number of your cameras (image_number)"

    ```xml
    <include file="$(find-pkg-share tensorrt_yolo)/launch/yolo.launch.xml">
      <arg name="image_raw0" value="$(var image_raw0)"/>
      <arg name="image_raw1" value="$(var image_raw1)"/>
      <arg name="image_raw2" value="$(var image_raw2)"/>
      ...
    ```
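Once the YOLO nodes are running, a quick sanity check is to count the publishers on their region-of-interest output topics. This is an illustrative sketch only; the `/perception/object_recognition/detection/rois{i}` topic names and the camera count are assumptions that may differ in your launch configuration.

```python
# check_yolo_outputs.py -- illustrative sketch; the ROI topic names are assumptions.
import rclpy
from rclpy.node import Node


def main() -> None:
    rclpy.init()
    node = Node("yolo_output_check")
    rclpy.spin_once(node, timeout_sec=1.0)  # allow graph discovery

    for i in range(2):  # one ROI topic per camera used for fusion (assumed naming)
        topic = f"/perception/object_recognition/detection/rois{i}"
        count = node.count_publishers(topic)
        node.get_logger().info(f"{topic}: {count} publisher(s)")

    node.destroy_node()
    rclpy.shutdown()


if __name__ == "__main__":
    main()
```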
### Tuning ground segmentation

!!! warning

    Under construction

### Tuning euclidean clustering

The `euclidean_cluster` package applies Euclidean clustering methods to group points into smaller clusters for classifying objects.
Please refer to the [`euclidean_cluster` package documentation](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/euclidean_cluster) for more information.
This package is used in the detection pipeline of the Autoware architecture.
It includes two different Euclidean clustering methods: `euclidean_cluster` and `voxel_grid_based_euclidean_cluster`.
In the default design of Autoware, `voxel_grid_based_euclidean_cluster` serves as the default clustering method.
In the YTU campus environment, there are many small objects like birds, dogs, cats, balls, cones, etc.
To detect, track, and predict these small objects, we aim to keep the clusters assigned to them as small as possible.

Firstly, we will change our object filter method from `lanelet_filter` to `position_filter` in [`tier4_perception_component.launch.xml`](https://github.com/autowarefoundation/autoware_launch/blob/main/autoware_launch/launch/components/tier4_perception_component.launch.xml) so that objects outside the lanelet boundaries can still be detected.

```diff
- <arg name="detected_objects_filter_method" default="lanelet_filter" description="options: lanelet_filter, position_filter"/>
+ <arg name="detected_objects_filter_method" default="position_filter" description="options: lanelet_filter, position_filter"/>
```
After changing the filter method for objects, the output of our perception pipeline looks like the image below:

<figure markdown>
  ![default-clustering](images/initial-clusters.png){ align=center }
  <figcaption>
    The default clustering parameters with the default object position filter parameters.
  </figcaption>
</figure>
Now we can detect unknown objects that are outside the lanelet map,
but we still need to widen the filter range or disable the filter for unknown objects in the [`object_position_filter.param.yaml`](https://github.com/autowarefoundation/autoware_launch/blob/main/autoware_launch/config/perception/object_recognition/detection/object_filter/object_position_filter.param.yaml) file.

```diff
  upper_bound_x: 100.0
- lower_bound_x: 0.0
+ lower_bound_x: -100.0
- upper_bound_y: 10.0
+ upper_bound_y: 100.0
- lower_bound_y: -10.0
+ lower_bound_y: -100.0
```
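For reference, the position filter keeps an object when its position lies inside the configured x/y bounds relative to the ego vehicle. The standalone sketch below is not the Autoware implementation; it only illustrates how the widened bounds above change which objects survive the filter, using a made-up object position.

```python
# position_filter_sketch.py -- illustrative only, not the Autoware implementation.
# Assumes the filter keeps an object when its (x, y) position, expressed in the
# ego (base_link) frame, lies inside the configured bounds.

def keep_object(x: float, y: float, bounds: dict) -> bool:
    """Return True if the object position is inside the configured bounds."""
    return (
        bounds["lower_bound_x"] <= x <= bounds["upper_bound_x"]
        and bounds["lower_bound_y"] <= y <= bounds["upper_bound_y"]
    )


old_bounds = {"lower_bound_x": 0.0, "upper_bound_x": 100.0,
              "lower_bound_y": -10.0, "upper_bound_y": 10.0}
new_bounds = {"lower_bound_x": -100.0, "upper_bound_x": 100.0,
              "lower_bound_y": -100.0, "upper_bound_y": 100.0}

# A hypothetical small object 20 m behind and 30 m to the left of the vehicle.
x, y = -20.0, 30.0
print(keep_object(x, y, old_bounds))  # False: dropped with the default bounds
print(keep_object(x, y, new_bounds))  # True: kept after widening the bounds
```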
Alternatively, you can simply disable the filter for UNKNOWN-labeled objects:

```diff
- UNKNOWN : true
+ UNKNOWN : false
```
After that, we can update our clustering parameters, since objects are now detected without being filtered against the lanelet2 map.
As mentioned earlier, we want to detect small objects.
Therefore, we will decrease the minimum cluster size to 1 in the [`voxel_grid_based_euclidean_cluster.param.yaml`](https://github.com/autowarefoundation/autoware_launch/blob/main/autoware_launch/config/perception/object_recognition/detection/clustering/voxel_grid_based_euclidean_cluster.param.yaml) file.

```diff
- min_cluster_size: 10
+ min_cluster_size: 1
```
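To see why `min_cluster_size` matters for small obstacles, the toy sketch below runs a naive Euclidean clustering (breadth-first grouping of points closer than a distance tolerance) over a handful of 2D points. It is a sketch of the general technique rather than the `voxel_grid_based_euclidean_cluster` implementation, and the 0.5 m tolerance and the point set are made up.

```python
# euclidean_clustering_toy.py -- a naive sketch of Euclidean clustering, not Autoware code.
from collections import deque
import math


def euclidean_cluster(points, tolerance, min_cluster_size):
    """Group points whose chained pairwise distance stays below `tolerance`,
    then drop clusters smaller than `min_cluster_size`."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue = deque([seed])
        cluster = [seed]
        while queue:
            i = queue.popleft()
            # Collect unvisited neighbors within the tolerance, then mark them.
            neighbors = [j for j in unvisited
                         if math.dist(points[i], points[j]) <= tolerance]
            for j in neighbors:
                unvisited.remove(j)
                queue.append(j)
                cluster.append(j)
        if len(cluster) >= min_cluster_size:
            clusters.append(cluster)
    return clusters


# Two dense "vehicle-like" blobs and one 3-point "small object" (e.g. a ball).
points = ([(0.0 + 0.1 * i, 0.0) for i in range(20)]      # large cluster
          + [(10.0 + 0.1 * i, 5.0) for i in range(15)]    # large cluster
          + [(4.0, -3.0), (4.2, -3.1), (4.1, -2.9)])      # small object

print(len(euclidean_cluster(points, tolerance=0.5, min_cluster_size=10)))  # 2: small object dropped
print(len(euclidean_cluster(points, tolerance=0.5, min_cluster_size=1)))   # 3: small object kept
```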
After making these changes, our perception output is shown in the following image:

<figure markdown>
  ![update-clustering](images/after-tuning-clustering.png){ align=center }
  <figcaption>
    The minimum cluster size is set to "1" with the unknown object filter disabled.
  </figcaption>
</figure>
If you want to use an object filter after fine-tuning the clusters for unknown objects, you can utilize either the lanelet filter or the position filter for unknown objects.
Please refer to the [`detected_object_validation` package documentation](https://autowarefoundation.github.io/autoware.universe/main/perception/detected_object_validation/) for further information.
