As that app displays different levels of processing of the incoming camera stream, it seemed like a good idea to start from the second tutorial that comes with OpenCV. I followed these steps to make a copy of that project and start editing it.
The following is the code running OpenCV ball detection, all in Java, no native code, and (this is not good, so I will work on it later) running on the main/UI thread. That means that if your phone is not powerful enough, it may just hang... Also, I eventually want to port a lot of the processing to native code to speed things up. There are some other issues with the detection too, but I will work on a cleaned-up version later.
This is my Ball3Activity.java
package com.cell0907.ball3;
import java.util.ArrayList;
import java.util.List;
import org.opencv.android.BaseLoaderCallback;
import org.opencv.android.CameraBridgeViewBase;
import org.opencv.android.CameraBridgeViewBase.CvCameraViewFrame;
import org.opencv.android.CameraBridgeViewBase.CvCameraViewListener2;
import org.opencv.android.LoaderCallbackInterface;
import org.opencv.android.OpenCVLoader;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;
import org.opencv.core.Point;
import android.app.Activity;
import android.os.Bundle;
import android.util.Log;
import android.view.Menu;
import android.view.MenuItem;
import android.view.WindowManager;
public class Ball3Activity extends Activity implements CvCameraViewListener2 {
    private static final String TAG = "OCVSample::Activity";

    private static final int VIEW_MODE_RGBA = 0;
    private static final int VIEW_MODE_GRAY = 1;
    private static final int VIEW_MODE_CANNY = 2;
    private static final int VIEW_MODE_FEATURES = 5;

    private int mViewMode;
    private Mat mRgba;
    private Mat mIntermediateMat;
    private Mat mGray;
    private Mat mHSV;
    private Mat mThresholded;
    private Mat mThresholded2;
    private Mat array255;
    private Mat distance;

    private MenuItem mItemPreviewRGBA;
    private MenuItem mItemPreviewGray;
    private MenuItem mItemPreviewCanny;
    private MenuItem mItemPreviewFeatures;

    private CameraBridgeViewBase mOpenCvCameraView;

    private BaseLoaderCallback mLoaderCallback = new BaseLoaderCallback(this) {
        @Override
        public void onManagerConnected(int status) {
            switch (status) {
                case LoaderCallbackInterface.SUCCESS:
                {
                    Log.i(TAG, "OpenCV loaded successfully");
                    // Load native library after(!) OpenCV initialization
                    //System.loadLibrary("mixed_sample");
                    mOpenCvCameraView.enableView();
                } break;
                default:
                {
                    super.onManagerConnected(status);
                } break;
            }
        }
    };
    public Ball3Activity() {
        Log.i(TAG, "Instantiated new " + this.getClass());
    }

    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        Log.i(TAG, "called onCreate");
        super.onCreate(savedInstanceState);
        getWindow().addFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON);
        setContentView(R.layout.tutorial2_surface_view);
        mOpenCvCameraView = (CameraBridgeViewBase) findViewById(R.id.tutorial2_activity_surface_view);
        mOpenCvCameraView.setCvCameraViewListener(this);
    }

    @Override
    public boolean onCreateOptionsMenu(Menu menu) {
        Log.i(TAG, "called onCreateOptionsMenu");
        mItemPreviewRGBA = menu.add("RGBA");
        mItemPreviewGray = menu.add("HSV");
        mItemPreviewCanny = menu.add("Thresholded");
        mItemPreviewFeatures = menu.add("Ball");
        return true;
    }

    @Override
    public void onPause() {
        super.onPause();
        if (mOpenCvCameraView != null)
            mOpenCvCameraView.disableView();
    }

    @Override
    public void onResume() {
        super.onResume();
        OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION_2_4_3, this, mLoaderCallback);
    }

    @Override
    public void onDestroy() {
        super.onDestroy();
        if (mOpenCvCameraView != null)
            mOpenCvCameraView.disableView();
    }
    public void onCameraViewStarted(int width, int height) {
        mRgba = new Mat(height, width, CvType.CV_8UC4);
        mHSV = new Mat(height, width, CvType.CV_8UC4);
        mIntermediateMat = new Mat(height, width, CvType.CV_8UC4);
        mGray = new Mat(height, width, CvType.CV_8UC1);
        array255 = new Mat(height, width, CvType.CV_8UC1);
        distance = new Mat(height, width, CvType.CV_8UC1);
        mThresholded = new Mat(height, width, CvType.CV_8UC1);
        mThresholded2 = new Mat(height, width, CvType.CV_8UC1);
    }

    public void onCameraViewStopped() {
        // Release everything allocated in onCameraViewStarted,
        // not just the first three Mats, to avoid leaking native memory.
        mRgba.release();
        mGray.release();
        mIntermediateMat.release();
        mHSV.release();
        array255.release();
        distance.release();
        mThresholded.release();
        mThresholded2.release();
    }
    public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
        final int viewMode = mViewMode;
        mRgba = inputFrame.rgba();
        if (viewMode == VIEW_MODE_RGBA) return mRgba;

        List<Mat> lhsv = new ArrayList<Mat>(3);
        Mat circles = new Mat(); // No need (and don't know how) to initialize it.
                                 // The function later will do it... (to a 1*N*CV_32FC3)
        array255.setTo(new Scalar(255));
        Scalar hsv_min = new Scalar(0, 50, 50, 0);
        Scalar hsv_max = new Scalar(6, 255, 255, 0);
        Scalar hsv_min2 = new Scalar(175, 50, 50, 0);
        Scalar hsv_max2 = new Scalar(179, 255, 255, 0);
        //double[] data = new double[3];

        // One way to select a range of colors by Hue
        Imgproc.cvtColor(mRgba, mHSV, Imgproc.COLOR_RGB2HSV, 4);
        if (viewMode == VIEW_MODE_GRAY) return mHSV;

        // Red wraps around the Hue axis, so threshold both ends and OR them.
        Core.inRange(mHSV, hsv_min, hsv_max, mThresholded);
        Core.inRange(mHSV, hsv_min2, hsv_max2, mThresholded2);
        Core.bitwise_or(mThresholded, mThresholded2, mThresholded);

        /*Core.line(mRgba, new Point(150,50), new Point(202,200), new Scalar(100,10,10), 3);
        Core.circle(mRgba, new Point(210,210), 10, new Scalar(100,10,10), 3);
        data = mRgba.get(210, 210);
        Core.putText(mRgba, String.format("("+String.valueOf(data[0])+","+String.valueOf(data[1])+","+String.valueOf(data[2])+")"), new Point(30, 30), 3, //FONT_HERSHEY_SCRIPT_SIMPLEX
            1.0, new Scalar(100,10,10,255), 3);*/

        // Notice that the thresholds don't really work as a "distance".
        // Ideally we would like to cut the image by hue and then pick just
        // the area where S combined with V is largest.
        // Strictly speaking, this would be something like sqrt((255-S)^2+(255-V)^2)>Range,
        // but if we want to be "faster" we can do just (255-S)+(255-V)>Range,
        // or equivalently 510-S-V>Range.
        // Anyhow, we do the following... Will see how fast it goes...
        Core.split(mHSV, lhsv); // We get 3 2D one-channel Mats
        Mat S = lhsv.get(1);
        Mat V = lhsv.get(2);
        Core.subtract(array255, S, S);
        Core.subtract(array255, V, V);
        S.convertTo(S, CvType.CV_32F);
        V.convertTo(V, CvType.CV_32F);
        Core.magnitude(S, V, distance);
        Core.inRange(distance, new Scalar(0.0), new Scalar(200.0), mThresholded2);
        Core.bitwise_and(mThresholded, mThresholded2, mThresholded);

        /*if (viewMode == VIEW_MODE_CANNY) {
            Imgproc.cvtColor(mThresholded, mRgba, Imgproc.COLOR_GRAY2RGB, 4);
            return mRgba;
        }*/

        // Apply the Hough Transform to find the circles
        Imgproc.GaussianBlur(mThresholded, mThresholded, new Size(9, 9), 0, 0);
        Imgproc.HoughCircles(mThresholded, circles, Imgproc.CV_HOUGH_GRADIENT, 2, mThresholded.height()/4, 500, 50, 0, 0);
        if (viewMode == VIEW_MODE_CANNY) {
            Imgproc.Canny(mThresholded, mThresholded, 500, 250); // This is not needed.
                                                                 // It is just for display.
            Imgproc.cvtColor(mThresholded, mRgba, Imgproc.COLOR_GRAY2RGB, 4);
            return mRgba;
        }

        //int cols = circles.cols();
        int rows = circles.rows();
        int elemSize = (int) circles.elemSize(); // Returns 12 (3 * 4 bytes in a float)
        float[] data2 = new float[rows * elemSize / 4];
        if (data2.length > 0) {
            circles.get(0, 0, data2); // Points to the first element and reads the whole thing into data2
            for (int i = 0; i < data2.length; i = i + 3) {
                Point center = new Point(data2[i], data2[i + 1]);
                Core.ellipse(mRgba, center, new Size((double) data2[i + 2], (double) data2[i + 2]), 0, 0, 360, new Scalar(255, 0, 255), 4, 8, 0);
            }
        }
        return mRgba;
    }
    @Override
    public boolean onOptionsItemSelected(MenuItem item) {
        Log.i(TAG, "called onOptionsItemSelected; selected item: " + item);
        if (item == mItemPreviewRGBA) {
            mViewMode = VIEW_MODE_RGBA;
        } else if (item == mItemPreviewGray) {
            mViewMode = VIEW_MODE_GRAY;
        } else if (item == mItemPreviewCanny) {
            mViewMode = VIEW_MODE_CANNY;
        } else if (item == mItemPreviewFeatures) {
            mViewMode = VIEW_MODE_FEATURES;
        }
        return true;
    }

    //public native void FindFeatures(long matAddrGr, long matAddrRgba);
}
And this is my tutorial2_surface_view.xml
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    xmlns:opencv="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="match_parent" >

    <org.opencv.android.JavaCameraView
        android:layout_width="fill_parent"
        android:layout_height="fill_parent"
        android:id="@+id/tutorial2_activity_surface_view"
        opencv:show_fps="true"
        opencv:camera_id="any" />

</LinearLayout>
Notice the use of opencv:show_fps="true" to see the fps on screen. To use that attribute, you also need to declare xmlns:opencv="http://schemas.android.com/apk/res-auto" at the top. On my HTC One, the rate goes from 10 fps when just capturing the images down to barely 2 fps when doing all the processing...
PS: Click here to see the index of this series of posts on OpenCV.
Hi,
I'm a graduate student in the mechanical engineering department. Thank you for providing this gorgeous code.
I'm using your Android OpenCV code for ball tracking, but I encounter some problems.
First, the tracked circle center data is too noisy, which means the circle trembles too much.
Second, it cannot track far objects.
How can I solve these problems?
Hi,
Thanks for the kind comment! Sorry I didn't reply before, but I am swamped by my paying job :(
I am afraid I can't really help you. This is a demo only; I never finished my original goal. What you ask for is the next level of work, which I never did. For instance, for the first one, there will be things like illumination that alter the result. I am sure there are techniques to deal with those, like averaging in time... and some of those techniques will bring in new problems... Etc.
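The "averaging in time" idea mentioned above could be sketched as an exponential moving average over the detected centre. This is only a sketch, not code from the post; PointSmoother is a hypothetical helper and the alpha value is an assumed tuning parameter:

```java
// Hypothetical helper: exponential moving average of the detected
// circle centre, to damp frame-to-frame jitter ("trembling").
// alpha near 1.0 trusts new detections; near 0.0 smooths harder.
class PointSmoother {
    private final double alpha;
    private double x, y;
    private boolean initialized = false;

    PointSmoother(double alpha) {
        this.alpha = alpha;
    }

    // Feed the raw centre from each frame; returns the smoothed centre {x, y}.
    double[] smooth(double rawX, double rawY) {
        if (!initialized) {
            x = rawX;
            y = rawY;
            initialized = true;
        } else {
            x = alpha * rawX + (1 - alpha) * x;
            y = alpha * rawY + (1 - alpha) * y;
        }
        return new double[] { x, y };
    }
}
```

In onCameraFrame you would pass data2[i], data2[i+1] through smooth() before drawing the ellipse; the drawn circle then lags slightly but stops trembling.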
About the second one, when the object is far away: if I remember right, there are some settings in HoughCircles that you can adjust to decide the minimum object size. I can't remember the details now, but they carry the risk of also triggering false positives...
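For reference, the last two arguments of the Imgproc.HoughCircles call in the post are minRadius and maxRadius, and the post leaves both at 0 (unbounded). As a sketch only: if you know the ball's physical size, you can estimate its on-screen radius from a pinhole-camera model and feed that in. RadiusEstimator is a made-up name, and the focal length in pixels is an assumed calibration value you would have to measure for your own camera:

```java
// Hypothetical sketch: estimate how many pixels across the ball should
// appear, to pick sensible minRadius/maxRadius for HoughCircles.
class RadiusEstimator {
    // Pinhole model: projected radius in pixels = f * R / Z, where
    // focalPx is the focal length in pixels, ballRadius (R) and
    // distance (Z) are in the same physical units (e.g. meters).
    static int projectedRadiusPx(double focalPx, double ballRadius, double distance) {
        return (int) Math.round(focalPx * ballRadius / distance);
    }
}
```

For example, with an assumed 700 px focal length, a 2 cm ball radius and a 2 m range, projectedRadiusPx(700, 0.02, 2.0) gives about 7 px, which (minus some margin) could go into the minRadius argument; tightening these bounds is exactly the false-positive trade-off mentioned above.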
Good luck with your project! And if you ever solve it and have time, please post it somewhere and send me a link, just in case I ever go back to this project :)
Ok,
I have my Java app that can track my ping pong ball (orange) using your code.
Now I would like to port the code to Android.
Eclipse is OK; I can run the OpenCV sample on my phone.
Do I have to use the tutorial-2-mixedprocessing sample or could I choose another one? (I already played with it and changed it.)
After copying the Eclipse project, do I have to copy the code into the class?
Thanks,
Danilo
Hi,
Thank you for providing this code.
I'm using your Android OpenCV code for ball tracking, but I don't understand how to separate this object into an ImageView.
How can I solve this problem?
Good job !
Thank you!! Appreciate it :)
Hi,
The code works, but the framerate is too slow. When I tried to move the object, the response was too slow. Where in the code can we tweak things so that the framerate and the recognition rate improve?
Many thanks.
Sorry, it's been a long time since I worked on this and I can't really help without significant effort on my side... Good luck, though!
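One concrete knob, offered as a hedged suggestion rather than a tested fix: every stage of the pipeline (cvtColor, split, HoughCircles) scales with pixel count, and CameraBridgeViewBase has a setMaxFrameSize(width, height) method, so asking the camera for a smaller frame before enableView() usually raises the framerate. The FrameSizer helper below is hypothetical; it just picks a scaled-down size that keeps the camera's aspect ratio:

```java
// Hypothetical helper: scale a camera frame size down so neither
// dimension exceeds the given maxima, preserving aspect ratio.
class FrameSizer {
    // Returns {width, height}; never scales up.
    static int[] clamp(int w, int h, int maxW, int maxH) {
        double scale = Math.min(1.0, Math.min((double) maxW / w, (double) maxH / h));
        return new int[] { (int) Math.round(w * scale), (int) Math.round(h * scale) };
    }
}
```

In onCreate you could then call, e.g., int[] s = FrameSizer.clamp(1280, 720, 640, 640); followed by mOpenCvCameraView.setMaxFrameSize(s[0], s[1]); before the view is enabled. A quarter of the pixels is roughly a quarter of the per-frame work, at the cost of detection range.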
"Service Intent must be explicit": this error shows up in logcat and the application gets force closed.
Showing error in:
Core.ellipse( mRgba, center, new Size((double)data2[i+2], (double)data2[i+2]), 0, 0, 360, new Scalar( 255, 0, 255 ), 4, 8, 0 );
Here, ellipse is flagged as invalid and Android Studio cannot resolve the error.
I am also facing the same problem. Does anyone have an idea how to resolve it?
In OpenCV SDK v3, you should use Imgproc.line instead of Core.line; Core.line does not exist any more. The same applies to the other drawing functions such as Core.circle, Core.ellipse and Core.putText, which moved to Imgproc.
Can you please share the whole project file via a cloud link? I have tried your code but there is no output screen or camera view.
I am also facing the same problem. If anyone has any idea, please help.