Thanks for the excellent work and your great contributions. When annotating multi-sensor (image + lidar) datasets, it is really tough to pinpoint and annotate short objects, road hazards, or street markings from the lidar view alone. Current tools have no features that make it easier to mark short objects or hazards that are not clearly visible in the lidar data. Here are two ways Xtreme1 could improve this:
2D to 3D back projection: This feature lets annotators select a polygon in the image and then back-project it into the 3D view at a certain height. This means they can mark things they see in the image but not in the lidar data.
RGB colored point cloud: If we add support for showing RGB colored point clouds, annotators can spot RGB patterns in the lidar view. This helps them mark objects in 3D based on their colors.
Including street markings and 3D annotations within the lidar view enables the training of bird's eye view (BEV) models capable of segmenting and detecting all elements in the scene based on multi-sensory inputs.
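As a starting point for the second feature, here is a minimal sketch of how lidar points could be colorized from a camera image via a pinhole projection. This is not Xtreme1's implementation; the function name, the intrinsics matrix `K`, and the lidar-to-camera extrinsics `T_cam_lidar` are illustrative assumptions.

```python
import numpy as np

def colorize_point_cloud(points, image, K, T_cam_lidar):
    """Sample RGB colors for lidar points from a camera image.

    points: (N, 3) lidar points; image: (H, W, 3) uint8 RGB;
    K: (3, 3) camera intrinsics; T_cam_lidar: (4, 4) homogeneous
    transform from the lidar frame to the camera frame.
    """
    # Transform points into the camera frame.
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    # Keep only points in front of the camera.
    in_front = cam[:, 2] > 0

    # Pinhole projection: u = fx * x / z + cx, v = fy * y / z + cy.
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]

    h, w = image.shape[:2]
    u = uv[:, 0].round().astype(int)
    v = uv[:, 1].round().astype(int)
    visible = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)

    # Points that fall outside the image keep a default black color.
    colors = np.zeros((len(points), 3), dtype=np.uint8)
    colors[visible] = image[v[visible], u[visible]]
    return colors, visible
```

In a multi-camera rig the same routine would run once per camera, with points outside every frustum left uncolored or shaded by intensity instead.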
2D to 3D back projection: This is a great idea. We have tried some algorithms but have not yet worked out a robust solution. 2D images lack a depth dimension compared to 3D, so the results become unreliable when projected back into 3D. If you are an expert on related algorithms, please provide some help on this feature.
RGB colored point cloud: Our SaaS version supports it, but a bug occurs when migrating it into Xtreme1. We will update our documentation and fix this bug soon. @jaggerwang, please link to this issue when the bug is fixed.
We appreciate any contribution to our 2D-to-3D back-projection feature. Code, references, and discussion are all welcome!
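One common way to resolve the missing depth dimension is a flat-ground assumption: cast a ray through each polygon vertex and intersect it with a horizontal plane at a user-chosen height, which matches the use case of street markings and low road hazards. Below is a hedged sketch under that assumption; the function name, `K`, and the camera-to-lidar extrinsics `T_lidar_cam` are illustrative, not part of Xtreme1.

```python
import numpy as np

def backproject_polygon_to_ground(poly_uv, K, T_lidar_cam, height=0.0):
    """Back-project 2D polygon vertices onto the plane z = height
    in the lidar frame (flat-ground assumption).

    poly_uv: (N, 2) pixel coordinates; K: (3, 3) camera intrinsics;
    T_lidar_cam: (4, 4) transform from the camera frame to the
    lidar frame.
    """
    K_inv = np.linalg.inv(K)
    R, t = T_lidar_cam[:3, :3], T_lidar_cam[:3, 3]

    pts3d = []
    for u, v in poly_uv:
        # Ray direction through the pixel, expressed in the lidar frame.
        d = R @ (K_inv @ np.array([u, v, 1.0]))
        if abs(d[2]) < 1e-9:
            continue  # ray parallel to the ground plane; skip vertex
        # Solve t_z + s * d_z = height for the scale s along the ray.
        s = (height - t[2]) / d[2]
        if s <= 0:
            continue  # intersection behind the camera
        pts3d.append(t + s * d)
    return np.array(pts3d)
```

For short objects rather than flat markings, the same plane intersection could be run at two heights to extrude the polygon into a 3D box. The approach breaks down on sloped roads, which is likely where the "messed up" results come from; fitting a local ground plane to nearby lidar points instead of assuming a global height would be one way to mitigate that.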