Tu Chenxi 「Quaternion and Pre-processing for Testing Stationarity」

Posted by masayuki tsuboi on 2016/02/07 20:58 [ Updated by 後藤太一 on 2016/06/07 21:31 ]
Just as I discussed with Prof. Grathe, I plan to convert the point cloud data into a 2D image [3] and test the stationarity of every small part of it.
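For reference, here is a minimal sketch of how such a conversion could look, assuming the point cloud is an (N, 3) NumPy array of x/y/z coordinates; the grid size (H_BINS, V_BINS) and the function name are illustrative choices of mine, not something fixed in this post.

import numpy as np

H_BINS, V_BINS = 360, 64  # assumed yaw / pitch resolution of the range image

def to_range_image(points):
    """Project (N, 3) xyz points into a 2D range image indexed by pitch and yaw."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)              # range of each point
    yaw = np.arctan2(y, x)                       # horizontal angle, in [-pi, pi]
    pitch = np.arcsin(z / np.maximum(r, 1e-9))   # vertical angle

    # Map the two angles to pixel indices.
    u = ((yaw + np.pi) / (2 * np.pi) * H_BINS).astype(int) % H_BINS
    v_norm = (pitch - pitch.min()) / max(pitch.max() - pitch.min(), 1e-9)
    v = np.clip((v_norm * (V_BINS - 1)).astype(int), 0, V_BINS - 1)

    image = np.zeros((V_BINS, H_BINS))
    image[v, u] = r   # one range value per pixel; points that fall in the same pixel overwrite each other
    return image

Each pixel of the resulting image holds a range value, and the stationarity test is then run on small patches of this image.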
But the result is really bad. In the example, we can at least recognize something like trees or cars by eye in this image, and that is what I expected. In the result of my program, however, nothing can be recognized.
In my view, the reasons why our result is so far from the example can be divided into three parts.
In the end, even though we now know that converting into a 2D image is not an ideal approach, we still know the pitch and yaw angles, so we can still divide the data into small groups by them. Since what we wanted at first was to divide the range image by its xy coordinates (pitch and yaw angles), we now do the division in 3D space instead and then test the stationarity.
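A minimal sketch of this grouping step, under the assumption that we keep the raw 3D points and only use their pitch and yaw angles as group keys; the per-group statistics at the end are just a placeholder for whatever stationarity test is actually applied, since the test itself is not described here.

import numpy as np
from collections import defaultdict

def group_by_angle(points, yaw_bins=90, pitch_bins=32):
    """Split (N, 3) xyz points into small groups keyed by (pitch bin, yaw bin)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)
    yaw = np.arctan2(y, x)
    pitch = np.arcsin(z / np.maximum(r, 1e-9))

    u = ((yaw + np.pi) / (2 * np.pi) * yaw_bins).astype(int) % yaw_bins
    p_norm = (pitch - pitch.min()) / max(pitch.max() - pitch.min(), 1e-9)
    v = np.clip((p_norm * (pitch_bins - 1)).astype(int), 0, pitch_bins - 1)

    groups = defaultdict(list)
    for idx, key in enumerate(zip(v.tolist(), u.tolist())):
        groups[key].append(points[idx])
    return {k: np.asarray(pts) for k, pts in groups.items()}

def patch_statistics(group):
    """Placeholder per-group statistics (mean and variance of range) to feed a stationarity test."""
    r = np.linalg.norm(group, axis=1)
    return r.mean(), r.var()

The only difference from the 2D version is that each group keeps its points in 3D, so the stationarity test can work on the original coordinates instead of on image pixels.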
