Image Classification using Inception: TensorFlow

TensorFlow lets you apply transfer learning much the way humans learn. For example, we learn the alphabet from a teacher; the teacher transfers their knowledge of the alphabet to us. Similarly, we can teach a computer to classify images using Google's image classification model known as Inception. The best part about Inception is that it can be applied to many learning problems: you can teach it to classify flower images, buildings, or any other data you want it to classify for you.

(Figure: Inception v3 architecture)

The diagram above shows the Inception model architecture. We retrain the second-to-last layer with our own data and replace the last (output) layer so that results are classified according to the classes we trained on. That is all it takes to put an industry-grade machine learning model to work for us. Follow these steps to achieve image classification using Inception:

  1. Clone tensorflow code:
     git clone https://github.com/tensorflow/tensorflow

    As shown in the figure below.

  2. Navigate to image_retraining example:
     cd tf_demo\tensorflow\tensorflow\examples\image_retraining
  3. Prepare training data:
    1. I used the Fatkun Batch Download Image Chrome plugin, so install this plugin in Chrome.
    2. Then search for images; I searched for Kawasaki Ninja H2R and Yamaha YZF R1.
    3. When the page loads, click the Fatkun plugin icon and select the current tab to open the image download page, as shown in the following figures.
    4. Store the downloaded images in: tf_demo\tensorflow\tensorflow\examples\image_retraining\<image_dir>\kawasaki ninja_h2r or \yamaha yzf r1
    Remember that the name of the folder under which you save your data is very important, as it will be used as the label for image classification.
  4. Run the following command on your terminal window:
     python retrain.py --image_dir <image_dir>

    Hit Enter and wait 20-30 minutes; you will see console logs like the following once training completes.

  5. Let's write a utility to consume our retrained model. Create a file named try-retrain.py.
  6. Add the following import statements to the file:
     # Imports
     import tensorflow as tf
     import numpy as np
     import argparse
  7. Add constants pointing to the trained graph and labels files:
     # Retrained graph
     MODEL_PATH = "/tmp/output_graph.pb"

    These are the paths to files created as part of retraining Inception. Change them if you saved your files in a different location.

     # such as "H2R, YZF R1, ..."
     LABEL_PATH = "/tmp/output_labels.txt"

    The labels file lists the classes of the newly retrained graph; these are the new classes being classified.

  8. Add a main block that accepts the image to be classified as a command-line parameter:
     # Get the path to the image you want to predict.
     if __name__ == '__main__':
         # Ensure the user passes the image_path
         parser = argparse.ArgumentParser(description="Process arguments")
         parser.add_argument('image_path', type=str, default='',
                             help='Path of image to classify.')
         args = parser.parse_args()
         # We can only handle JPEG images.
         if args.image_path.lower().endswith(('.jpg', '.jpeg')):
             # Predict the class of the image
             predict_image_class(args.image_path, LABEL_PATH)
         else:
             print('File must be a jpeg image.')
  9. Now let's create a method to clean stray delimiters out of the label data; add the following code for this task:
     def filter_delimiters(text):
         filtered = text[:-3]
         filtered = filtered.strip("b'")
         filtered = filtered.strip("'")
         return filtered
  10. Create a method that will process the image passed as the command-line parameter:
     def predict_image_class(imagePath, labelPath):
  11. Check if the image exists, using the following code:
     matches = None  # Default return value
     if not tf.gfile.Exists(imagePath):
         tf.logging.fatal('File does not exist %s', imagePath)
         return matches
  12. Load the image into memory for processing, using the following code:
     image_data = tf.gfile.FastGFile(imagePath, 'rb').read()
  13. Load the retrained Inception graph using the following code:
     with tf.gfile.FastGFile(MODEL_PATH, 'rb') as f:
         # init GraphDef object
         graph_def = tf.GraphDef()
         # Read in the graph from the file
         graph_def.ParseFromString(f.read())
         _ = tf.import_graph_def(graph_def, name='')
  14. Create a session for tensor execution using the following code:
     with tf.Session() as sess:
  15. Find the final result tensor by name in the retrained model; we will use this tensor to run our data through the graph:
     softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')
  16. Get the prediction for our image using the following code:
     predictions = sess.run(softmax_tensor,{'DecodeJpeg/contents:0': image_data})
  17. Format the predicted classes for display; the following code squeezes the tensor into a 1-D vector of probability values:
     predictions = np.squeeze(predictions)
  18. Get the top five predictions using the following piece of code:
     top_k = predictions.argsort()[-5:][::-1]
  19. Next, read the class labels from the file using the following code:
     f = open(labelPath, 'rb')
     lines = f.readlines()
     labels = [str(w).replace("\n", "") for w in lines]
     print("")
     print ("Image Classification Probabilities")
  20. Print the result on the console using the following code:
     for node_id in top_k:
         human_string = filter_delimiters(labels[node_id])
         score = predictions[node_id]
         print('{0:s} (score = {1:.5f})'.format(human_string, score))
     print("")
     answer = labels[top_k[0]]
     return answer
  21. Save the file and run it using the following command:
    python try-retrain.py <path_to_image>/<image_name.jpg>

    Sample executions in my case resulted in the following output:

    (Figures: sample classification runs)

    That's all for now. Go ahead and train Inception to solve a real-world problem, or just play around with it.
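The post-processing in steps 17-20 can be exercised without TensorFlow. Here is a small self-contained sketch (the probabilities and byte-string labels are made up, standing in for a real softmax output and output_labels.txt) showing how np.squeeze, argsort, and the delimiter filter fit together:

```python
import numpy as np

def filter_delimiters(text):
    # Strip the b'...' wrapper and trailing "\n" left by str(bytes).
    filtered = text[:-3]
    filtered = filtered.strip("b'")
    filtered = filtered.strip("'")
    return filtered

# Fake softmax output, shaped like sess.run would return: (1, num_classes).
predictions = np.array([[0.05, 0.90, 0.03, 0.01, 0.01]])
# Labels as they appear after reading the labels file in binary mode.
raw_lines = [b'kawasaki ninja h2r\n', b'yamaha yzf r1\n',
             b'other a\n', b'other b\n', b'other c\n']
labels = [str(w).replace("\n", "") for w in raw_lines]

predictions = np.squeeze(predictions)     # -> 1-D vector, shape (num_classes,)
top_k = predictions.argsort()[-5:][::-1]  # indices, highest score first

for node_id in top_k:
    human_string = filter_delimiters(labels[node_id])
    score = predictions[node_id]
    print('{0:s} (score = {1:.5f})'.format(human_string, score))
```

The highest-scoring index comes first in top_k, so the loop prints the most likely class at the top.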


Amazon Alexa on Raspberry Pi

Recently I got my hands on a Raspberry Pi. This little piece of hardware is very powerful, and I decided to try it out. The very first thing I did was install an operating system on my Pi. Here is the link I followed while installing Raspbian.

Hardware required:

Here is a quick list of stuff you need before getting started:

  1. Raspberry Pi (with Raspbian installed on a micro SD card)
  2. USB sound card, or a USB mic
  3. Speaker (For me HDMI and my TV served the purpose)
  4. HDMI cable (set up VNC on the Pi and you will not need this)
  5. A Mouse and Keyboard
  6. A power adapter for Pi

Connect the USB keyboard, mouse, sound card (or mic), HDMI, and power adapter. Connecting the power adapter boots the Raspberry Pi. Once the boot completes, we are good to go.

Registering the device:

An Amazon developer account is required for this step. Create an account by registering yourself at https://developer.amazon.com/login.html. Once registered, you need to add a device. Follow these steps to register your device:

  1. Log in and go to the ALEXA tab on your developer dashboard, as shown in the following figure.
  2. Click the Alexa Voice Service button, as shown in the figure below.
  3. Click the Create a new Device Type link. The following page will open.
  4. Provide an appropriate Device Type ID and Display Name, then click the Next button. The following screen will appear.
  5. Fill in the required information and then click the Next button. The following screen will appear.
  6. Note down the above details, as you will need them later. Click the Web Settings tab and then click the Edit button. Then click the Add Another link in the Allowed Origins pane and type https://localhost:3000. Repeat for the Allowed Return URLs pane and type https://localhost:3000/authResponse, as shown in the images below.
  7. Click the Next button and you will see the following screen.
  8. Provide the device details and then click the Next button. The following screen will appear.
  9. Click the Submit button, and your device will be registered and shown in the devices list.

Running services on the Pi:

  1. Open a terminal window on the Raspberry Pi and type in the following commands:
    cd Desktop
    git clone https://github.com/alexa/alexa-avs-sample-app.git

    The above commands download the sample app onto the Pi. The following figure shows the terminal executing the git clone command.

  2. Once the sample app is downloaded, we are ready to add the Amazon security credentials to it so that it can authenticate itself with Amazon's services. Use the following commands to edit the install script:
    cd ~/Desktop/alexa-avs-sample-app
    nano automated_install.sh


    Press Ctrl+X and then Y to save the changes.

  3. The above edit updates the installation script. Now we are ready to run it. Use the following command:
    automated_install.sh


  4. This installation will take time, so sit back and relax. It installs third-party utilities along with the Sensory and KITT.AI wake word engines; using these you can invoke Alexa Voice Service with the wake word 'Alexa'. Once installation is done we are ready to start the client and talk to Alexa. For this we need a minimum of two terminals, and if you want to use the wake word utilities you can run them from a third terminal window.
  5. In the first terminal window we will run the web-service authorization utility. Use the following commands to start authorization:
    cd ~/Desktop/alexa-avs-sample-app/samples
    cd companionService && npm start
  6. The following figure shows the startup of the authorization utility.
  7. The next step is to start the client app. Open another terminal window and type the following commands:
    cd ~/Desktop/alexa-avs-sample-app/samples
    cd javaclient && mvn exec:exec

  8. When the client is started for the first time it will ask for authentication. When the above command completes, a dialog box is displayed as shown in the image below.
  9. Click the Yes button. A browser window opens as shown in the image below. Click the ADVANCED link located to the left of the Back to safety button, then click the Proceed to localhost (unsafe) link.
  10. The login page is displayed as shown in the image below.
  11. Type in your Amazon developer account credentials and then click the Sign in using our secure server button. The developer authorization page is displayed as shown in the image below.
  12. Click the Okay button. You will be redirected to the http://localhost:3000/authResponse page with the message “device token ready“.
  13. Go back to the Java client; you will be presented with the following dialog box.
  14. Click the OK button and the Java client will be ready to use, as shown in the image below.
  15. Click the Tap to speak to Alexa button and say “What's the weather in New Delhi?” or “What is Google?”.

Note: You might get some errors when you click the Tap to speak to Alexa button; keep an eye on the terminal window. If an error appears, go to Start -> Preferences -> Audio Preferences, select your sound card, and enable the Microphone, Capture Microphone, and Audio Gain options in the configuration dialog. I did some trial and error to make it work.

You can also start wake word utilities using the following commands:

Sensory wake word utility:

cd ~/Desktop/alexa-avs-sample-app/samples
cd wakeWordAgent/src && ./wakeWordAgent -e sensory

KITT.AI’s wake word utility:

cd ~/Desktop/alexa-avs-sample-app/samples
cd wakeWordAgent/src && ./wakeWordAgent -e kitt_ai
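For reference, the credential edit in step 2 boils down to filling in three variables near the top of automated_install.sh with the values from your device's security profile. The variable names below are those used by the sample app at the time of writing, and the values are placeholders; check your own copy of the script:

```shell
# In ~/Desktop/alexa-avs-sample-app/automated_install.sh
# (placeholder values; substitute your own security profile details)
ProductID=my_device_type_id
ClientID=amzn1.application-oa2-client.xxxxxxxx
ClientSecret=xxxxxxxxxxxxxxxx
```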

You can add your own skills to Alexa and perform custom actions: for example, ask Alexa to switch on your lights, switch on the TV, or whatever else you think is possible.

MVP, Retrofit, and Room: Clean Code

The biggest challenge for a developer is to write clean, reusable, and maintainable code. It is a best practice to divide an Android app into separate modules so that each module performs a specific task: for example, an app module for Android-specific code, a network module for network-related code, and so on. From past experience I have learned (the hard way) that whenever you need to change an app's design, poorly written code eats up your development time, and you end up wasting time searching for a solution or fixing hard-to-understand code.

To avoid this, it is better to take some time during the initial phase of your project and decide how you want to design your app, keeping the prospect of future design changes in mind.

This demo will try to resolve these issues (to some extent). In this demo we will get weather information from the Wunderground API. Our first step is to create an Android project; I believe this will be easy for every Android developer. The second step is to get the current location. Add the following dependency to your app module's build.gradle file:

compile 'com.google.android.gms:play-services-location:11.0.0'

Add the following code to your MainActivity.java:

@Override
public void onStart() {
    super.onStart();

    if (!checkPermissions()) {
        requestPermissions();
    } else {
        getLastLocation();
    }
}

The above code checks whether the required permission has already been granted: if not, it requests the permission; otherwise it fetches the location.

private void requestPermissions() {
    boolean shouldProvideRationale =
            ActivityCompat.shouldShowRequestPermissionRationale(this,
                    Manifest.permission.ACCESS_COARSE_LOCATION);

    // Provide an additional rationale to the user. This would happen if the user denied the
    // request previously, but didn't check the "Don't ask again" checkbox.
    if (shouldProvideRationale) {
        Log.i(TAG, "Displaying permission rationale to provide additional context.");

        showSnackbar(R.string.permission_rationale, android.R.string.ok,
                new View.OnClickListener() {
                    @Override
                    public void onClick(View view) {
                        // Request permission
                        startLocationPermissionRequest();
                    }
                });

    } else {
        Log.i(TAG, "Requesting permission");
        // Request permission. It's possible this can be auto answered if device policy
        // sets the permission in a given state or the user denied the permission
        // previously and checked "Never ask again".
        startLocationPermissionRequest();
    }
}

The above code requests the location-access permission. Once permission is granted, the following method is used to get the location:

private void getLastLocation() {
    mFusedLocationClient.getLastLocation()
            .addOnCompleteListener(this, new OnCompleteListener<Location>() {
                @Override
                public void onComplete(@NonNull Task<Location> task) {
                    if (task.isSuccessful() && task.getResult() != null) {
                        mLastLocation = task.getResult();
                        double latitude = mLastLocation.getLatitude();
                        double longitude = mLastLocation.getLongitude();
                        String latlong = latitude+","+longitude;
                    } else {
                        Log.w(TAG, "getLastLocation:exception", task.getException());
                        showSnackbar(getString(R.string.no_location_detected));
                    }
                }
            });
}

But how do we know that the user has acted on the permission dialog? The following code checks whether the user accepted the permission request or blocked it:

@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions,
                                       @NonNull int[] grantResults) {
    Log.i(TAG, "onRequestPermissionResult");
    if (requestCode == REQUEST_PERMISSIONS_REQUEST_CODE) {
        if (grantResults.length <= 0) {
            // If user interaction was interrupted, the permission request is cancelled and you
            // receive empty arrays.
            Log.i(TAG, "User interaction was cancelled.");
        } else if (grantResults[0] == PackageManager.PERMISSION_GRANTED) {
            // Permission granted.
            getLastLocation();
        } else {
            // Permission denied.

            // Notify the user via a SnackBar that they have rejected a core permission for the
            // app, which makes the Activity useless. In a real app, core permissions would
            // typically be best requested during a welcome-screen flow.

            // Additionally, it is important to remember that a permission might have been
            // rejected without asking the user for permission (device policy or "Never ask
            // again" prompts). Therefore, a user interface affordance is typically implemented
            // when permissions are denied. Otherwise, your app could appear unresponsive to
            // touches or interactions which have required permissions.
            showSnackbar(R.string.permission_denied_explanation, R.string.settings,
                    new View.OnClickListener() {
                        @Override
                        public void onClick(View view) {
                            // Build intent that displays the App settings screen.
                            Intent intent = new Intent();
                            intent.setAction(
                                    Settings.ACTION_APPLICATION_DETAILS_SETTINGS);
                            Uri uri = Uri.fromParts("package",
                                    BuildConfig.APPLICATION_ID, null);
                            intent.setData(uri);
                            intent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
                            startActivity(intent);
                        }
                    });
        }
    }
}

Now we are ready to request weather information from the Wunderground API. Before that, you need to generate a Wunderground API key; once you have the key, we are good to make the weather information request. But wait: before we do that, we need to organize our app a bit, so let's add another module to our project and name it 'webservicesmanager'.

MVP

We have organised our app module using MVP. MVP keeps the app module well structured and independent of the network module (the webservicesmanager module). The Activity delegates its tasks to the presenter, and the presenter interacts with the models and services. In this example, once we have successfully obtained the location, we ask the MainPresenter.java class to get us the weather information. Add the following code to the getLastLocation method:

private void getLastLocation() {
    .
    .
    .                       
                        String latlong = latitude+","+longitude;
                        mPresenter.getWeatherInfo(latlong);
     .
     .
     .
}

The Contract.java interface defines the contract between MainActivity.java and MainPresenter.java. This lets the Activity and the presenter work independently of each other, so changes in one do not affect the other. When we make the getWeatherInfo() request, we do it through the defined contract, and the request is delegated to the presenter. When the presenter is ready with the response, it notifies the Activity through the same contract, in this case via the onSucess and onError methods.

In MainActivity, create an instance of presenter by using the following code:

private Contract.presenter mPresenter;
@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);

    mPresenter = new MainPresenter(this, this);
    .
    .
    .
}

Following is the code for Contract.java interface:

public interface Contract {

    interface view {
        void onSucess(String cityName);
        void onError(String errorMsg);
    }

    interface presenter {
        void getWeatherInfo(String latlong);
    }
}

The presenter in turn invokes the MainService.java class. This class parses the response and deals with the webservicesmanager module. Hence, if in the future you change the implementation of the web-services module, the only class you have to change is MainService.java; the rest of the application code remains intact. This minimizes changes and keeps our code clean.
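To make the delegation chain concrete, here is a minimal, self-contained sketch of the view -> presenter -> service round trip. The Contract shape mirrors the article's, but the service is a stub (FakeService and the hard-coded city are invented for illustration) so the snippet runs without Android or Retrofit:

```java
// Minimal MVP sketch: view -> presenter -> service -> presenter -> view.
public class MvpSketch {

    interface Contract {
        interface view {
            void onSucess(String cityName);
            void onError(String errorMsg);
        }
        interface presenter {
            void getWeatherInfo(String latlong);
        }
    }

    // Stub standing in for MainService; a real one would call the network module.
    static class FakeService {
        void getWeatherInfo(String latlong, MainPresenter presenter) {
            presenter.onSuccess("New Delhi"); // pretend the parse succeeded
        }
    }

    static class MainPresenter implements Contract.presenter {
        private final Contract.view view;
        private final FakeService service = new FakeService();

        MainPresenter(Contract.view view) { this.view = view; }

        @Override
        public void getWeatherInfo(String latlong) {
            service.getWeatherInfo(latlong, this); // delegate to the service
        }

        void onSuccess(String cityName) { view.onSucess(cityName); }
        void onError(String msg)        { view.onError(msg); }
    }

    public static void main(String[] args) {
        Contract.view view = new Contract.view() {
            @Override public void onSucess(String cityName) {
                System.out.println("City: " + cityName);
            }
            @Override public void onError(String errorMsg) {
                System.out.println("Error: " + errorMsg);
            }
        };
        new MainPresenter(view).getWeatherInfo("28.61,77.20");
    }
}
```

The Activity only ever sees the contract's view callbacks, which is exactly why the real implementation can swap its service or network layer without touching the UI.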

public class MainService implements NetworkResponseHandler {
    MainPresenter mainPresenter;
    NetworkRequestManager networkRequestManager;

    public MainService(MainPresenter mainPresenter) {
        this.mainPresenter = mainPresenter;
        this.networkRequestManager = new NetworkRequestManager(this);
    }

    public void getWeatherInfo(String latlong) {
        networkRequestManager.getWeatherInfo(latlong);
    }

    @Override
    public void onResponse(Call<String> call, Response<String> response) {
        // handle response here.  
    }

    private void mapWithDataModel(WeatherInfoModel weatherInfoModel) {

        if(weatherInfoModel != null){
            WeatherInfo weatherInfo = new WeatherInfo();
            String cityName = "";
            String weatherConditionIconUrl ="";
            String CurrentTemp = "";
            String humidity = "";
            String feelsLike = "";
            String dayHighTemp = "";
            String dayLowTemp = "";
            CurrentObservation currentObservation = weatherInfoModel.getCurrentObservation();
            if(currentObservation != null){
                weatherConditionIconUrl = currentObservation.getIconUrl();
                CurrentTemp = String.valueOf(currentObservation.getTempC());
                humidity = currentObservation.getRelativeHumidity();
                feelsLike = currentObservation.getFeelslikeC();
                cityName = currentObservation.getDisplayLocation().getCity();
            }
            Forecast forcast = weatherInfoModel.getForecast();
            if(forcast != null){
                Simpleforecast simepleForcast = forcast.getSimpleforecast();
                if(simepleForcast != null){
                    List<Forecastday_> forecastdayList = simepleForcast.getForecastday();
                    if(!forecastdayList.isEmpty()){
                        dayHighTemp = forecastdayList.get(0).getHigh().getCelsius();
                        dayLowTemp = forecastdayList.get(0).getLow().getCelsius();
                    }
                }
            }
            weatherInfo.setWeatherConditionIconUrl(weatherConditionIconUrl);
            weatherInfo.setCityName(cityName);
            weatherInfo.setCurrentTemp(CurrentTemp);
            weatherInfo.setDayHighTemp(dayHighTemp);
            weatherInfo.setDayLowTemp(dayLowTemp);
            weatherInfo.setHumidity(humidity);
            weatherInfo.setFeelsLike(feelsLike);
            mainPresenter.onSuccess(cityName);
        }
    }

    @Override
    public void onFailure(Call<String> call, Throwable t) {
        mainPresenter.onError(mainPresenter.getContext().getResources().getString(R.string.request_error));
    }

    //notify presenter using this callback interface
    public interface ServiceCallBack{
        void onSuccess(String cityName);
        void onError(String s);
    }
}

NetworkResponseHandler.java is defined in the webservicesmanager module. This interface delivers the responses of Retrofit requests to us, adding another level of abstraction: if in the future you want to switch the implementation from Retrofit to Volley, you can delegate Volley's success and failure callbacks to this same interface. This keeps the app module completely insulated from the underlying implementation of the webservicesmanager module.

Retrofit

Let's take a look at integrating Retrofit in the webservicesmanager module. The first step is to add the Retrofit dependencies to the module's build.gradle:

compile 'com.squareup.retrofit2:retrofit:2.0.2'
compile 'com.squareup.retrofit2:converter-scalars:2.1.0'

Create an interface for calling the Wunderground API; this interface will act as the network service invoked through Retrofit, as shown in the code below:

public interface WeatherApiInterface {
    @GET
    Call<String> getWeatherDetails(@Url String url);
}

@GET specifies the network request type. @Url specifies a URL that will be appended to the base service URL. If your request has query parameters, you can specify them using @Query.

Create the WeatherServiceManager class; this class provides the Retrofit instance for making network calls. Following is the code for this class:

public class WeatherServiceManager {
    private static Retrofit retrofit = null;
    private static WeatherServiceManager weatherServiceManager;

    private WeatherApiInterface weatherApiInterface = null;

    private WeatherServiceManager(){
        if (retrofit == null) {
            retrofit = new Retrofit.Builder()
                    .baseUrl(Constants.BASE_URL)
                    .addConverterFactory(ScalarsConverterFactory.create())
                    .build();
        }
        weatherApiInterface = retrofit.create(WeatherApiInterface.class);
    }

    public static WeatherServiceManager getInstance(){
        if(weatherServiceManager == null){
            weatherServiceManager = new WeatherServiceManager();
        }
        return weatherServiceManager;
    }

    public WeatherApiInterface getWeatherApiInterface(){
        return weatherApiInterface;
    }

}

The above class builds Retrofit and provides the network service interface (WeatherApiInterface) instance. We will use this instance to call the actual Wunderground API.

Create the NetworkRequestManager.java class for performing network requests. Following is the code:

public class NetworkRequestManager {
    private NetworkResponseHandler networkResponseHandler;

    public NetworkRequestManager(NetworkResponseHandler networkResponseHandler){
        this.networkResponseHandler = networkResponseHandler;
    }

    public void getWeatherInfo(String latlong){
        String url = Constants.WUNDERGROUND_API_PART +Constants.WUNDERGROUND_API_KEY+ Constants.WUNDERGROUND_QUESRY_PART + latlong + Constants.JSON_FILE_EXTENSION;
        WeatherApiInterface weatherApiInterface = WeatherServiceManager.getInstance().getWeatherApiInterface();
        if(weatherApiInterface != null){
            Call<String> call = weatherApiInterface.getWeatherDetails(url);
            call.enqueue(networkResponseHandler);
        }
    }
}

This class enqueues the weather information request with Retrofit, which handles it automatically. All the callbacks will be received by our MainService class, because MainService implements NetworkResponseHandler, which is specified as the Retrofit callback handler in the call.enqueue() method in the above code.
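The URL assembly inside getWeatherInfo() is plain string concatenation and can be sketched in isolation. The constant values below are placeholders for illustration, not the real Wunderground endpoint fragments; substitute your own key and path parts:

```java
// Sketch of the URL building done in NetworkRequestManager.getWeatherInfo().
// Constant values are placeholders, not the real API fragments.
public class WeatherUrlSketch {
    static final String WUNDERGROUND_API_PART = "api/";
    static final String WUNDERGROUND_API_KEY = "YOUR_KEY";
    static final String WUNDERGROUND_QUERY_PART = "/conditions/forecast/q/";
    static final String JSON_FILE_EXTENSION = ".json";

    // Mirrors the concatenation order used in the article's request manager.
    static String buildUrl(String latlong) {
        return WUNDERGROUND_API_PART + WUNDERGROUND_API_KEY
                + WUNDERGROUND_QUERY_PART + latlong + JSON_FILE_EXTENSION;
    }

    public static void main(String[] args) {
        System.out.println(buildUrl("28.61,77.20"));
    }
}
```

Keeping the fragments as named constants means a change of endpoint or key touches one place only.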

Room Persistence

The web-service response is handled by the MainService class. This is the place where we parse the service response; after successfully parsing the response we are good to store the information in the database. Parsing could be done in the presenter, but that would make the presenter cluttered and hard to understand, hence it is best to keep parsing code out of the presenter.

Prior to Room, database access was done through raw SQL queries, there was no mechanism for verifying those queries, and developers had to write lots of boilerplate code. Room takes care of all these concerns.

Room provides three main components:

  1. Entity: represents the data for a single table row. Room supports constructing entities using annotations.
  2. DAO: defines the methods that access the database. Annotations are used to bind SQL to each method declared in the DAO.
  3. Database: defines the list of entities and the database version, and exposes the DAOs.

The WeatherInfo class is the entity that we map to a table row. Following is the code for this class:

@Entity(tableName = "weather")
public class WeatherInfo {
    @PrimaryKey(autoGenerate = true)
    private int uid;
    @ColumnInfo(name = "city_name")
    private String cityName;
    @ColumnInfo(name = "icn")
    private String weatherConditionIconUrl;
    @ColumnInfo(name = "current_temp")
    private String CurrentTemp;
    @ColumnInfo(name = "day_high")
    private String dayHighTemp;
    @ColumnInfo(name = "day_low")
    private String dayLowTemp;
    @ColumnInfo(name = "humidity")
    private String humidity;
    @ColumnInfo(name = "feels_like")
    private String feelsLike;

    public int getUid() {
        return uid;
    }

    public void setUid(int uid) {
        this.uid = uid;
    }

    public String getCityName() {
        return cityName;
    }

    public void setCityName(String cityName) {
        this.cityName = cityName;
    }

    public String getWeatherConditionIconUrl() {
        return weatherConditionIconUrl;
    }

    public void setWeatherConditionIconUrl(String weatherConditionIconUrl) {
        this.weatherConditionIconUrl = weatherConditionIconUrl;
    }

    public String getCurrentTemp() {
        return CurrentTemp;
    }

    public void setCurrentTemp(String currentTemp) {
        CurrentTemp = currentTemp;
    }

    public String getDayHighTemp() {
        return dayHighTemp;
    }

    public void setDayHighTemp(String dayHighTemp) {
        this.dayHighTemp = dayHighTemp;
    }

    public String getDayLowTemp() {
        return dayLowTemp;
    }

    public void setDayLowTemp(String dayLowTemp) {
        this.dayLowTemp = dayLowTemp;
    }

    public String getHumidity() {
        return humidity;
    }

    public void setHumidity(String humidity) {
        this.humidity = humidity;
    }

    public String getFeelsLike() {
        return feelsLike;
    }

    public void setFeelsLike(String feelsLike) {
        this.feelsLike = feelsLike;
    }
}

@Entity defines the table name associated with this entity; in this case the table name is weather. @PrimaryKey marks the field as the primary key, and autoGenerate = true auto-increments the primary key value. @ColumnInfo creates a column with the given name.

Following code is used to define the DAO:

@Dao
public interface WeatherInfoDao {

    @Query("SELECT * FROM weather")
    List<WeatherInfo> getAll();

    @Query("SELECT COUNT(*) from weather")
    int countWeatherInfos();

    @Query("SELECT * FROM weather where city_name LIKE  :cityName")
    WeatherInfo findByName(String cityName);

    @Insert
    void insertAll(WeatherInfo... weatherinfos);

    @Insert
    void insert(WeatherInfo weatherinfo);

    @Delete
    void delete(WeatherInfo weatherinfo);
}

@Query maps a raw query to an API method. @Insert uses the underlying API to insert data into the table. Similarly, @Delete maps to the API for deleting records from the table.

The following code specifies the Database class. Here we list the entities and expose the DAOs used by our app:

@Database(entities = {WeatherInfo.class}, version = 1)
public abstract class AppDatabase extends RoomDatabase {

    private static AppDatabase INSTANCE;

    public abstract WeatherInfoDao weatherDao();

    public static AppDatabase getAppDatabase(Context context) {
        if (INSTANCE == null) {
            INSTANCE =
                    Room.databaseBuilder(context.getApplicationContext(), AppDatabase.class, "user-database")
                            // allow queries on the main thread.
                            // Don't do this on a real app! See PersistenceBasicSample for an example.
                            .allowMainThreadQueries()
                            .build();
        }
        return INSTANCE;
    }

    public static void destroyInstance() {
        INSTANCE = null;
    }

}

To get an instance of database use the following code:

AppDatabase appDatabase = AppDatabase.getAppDatabase(mainPresenter.getContext());
appDatabase.weatherDao().insert(weatherInfo);

In our demo application, we map the parsed JSON models into the WeatherInfo entity and then pass the entity instance to the insert API. AppDatabase.getAppDatabase(mainPresenter.getContext()) creates an instance of the AppDatabase class. Then we can use the weatherDao instance to perform CRUD operations.

Make sure to destroy the database instance when you are done. Use the following code snippet:

AppDatabase.destroyInstance();

We can further improve the code by making use of Dagger 2 and RxJava. Writing clean code is achieved by discussing the requirements and providing solutions appropriate to the problem. Each application has a different set of requirements, and hence there can be multiple solutions that achieve a good design.

Summary

In this demo we have used MVP to keep the application layers clean and independent of each other. The contract defined between the view and the presenter keeps both related to each other in a well-structured manner.

We have separated out network related tasks and app specific components by creating separate modules and using network module as a library in app module. By doing so we have kept the dependency flow one directional. Any changes to the network modules are abstracted out by using interfaces appropriately.

Complete code for this demo can be downloaded from here.

Flashing firmware on ESP8266 WiFi Shield

The internet is what gives IoT its power: to solve everyday problems smartly we need to connect our devices to the internet. For Arduino-based applications you can use Ethernet or WiFi shields. The most popular WiFi device available in the market is the ESP8266. There are different variants of this WiFi device; the one I will be using is the WiFi shield by Wang Tongze.

IMAG1196

Softwares required:

  1. AI-Thinker Firmware:  https://raw.githubusercontent.com/sleemanj/ESP8266_Simple/master/firmware/ai-thinker-v1.1.1-115200.bin
  2. ESP8266 Flasher tool: https://doc-00-9s-docs.googleusercontent.com/docs/securesc/a9b1254bs1us64pqmr38r78ccvertirr/st5stip3lcmlm7ijjk63pmti6kl2ctbk/1498903200000/05702476862157177151/02970016904424219159/0B3dUKfqzZnlwVGc1YnFyUjgxelE?e=download&nonce=9li1ovn7lekh0&user=02970016904424219159&hash=n3kvk8tkjal7jmbco10l3o7jspku5el0

Hardware requirements:

  1. Arduino Uno/Mega
  2. ESP8266 WiFi Shield/Module
  3. FTDI cable (Optional)
  4. Connector cables

Circuit diagram:

ESP8266_flash_circuit

Firmware upgrade:

  1. Connect the Arduino RESET pin with the Arduino GND pin as shown in the picture below:

IMAG1198

  2. Connect ESP8266 TX and RX with Arduino TX and RX as shown in the picture below:

  3. Connect ESP8266 +5V and GND with Arduino +5V and GND as shown in the picture below:

  4. Turn on switches 3 and 4 as shown on the ESP8266 shield. This will enable firmware upgrade mode on the shield.
  5. Connect Arduino with PC.
  6. Open esp8266_flasher.exe, which you downloaded from the link given in the software requirements section. Locate the firmware path and set your COM port. Click Download to start the firmware upgrade.

esp8266_firmware_flashing

 

Setting up baud rate:

By default this shield is configured to communicate at 115200 baud. But the Arduino serial communication used here supports 9600 baud only. In order to change the baud rate we need to hook up the ESP8266 with the Arduino.

  1. Turn on switches 1 and 2 as shown on the ESP8266 shield. This will enable TTLSW mode on the shield.
  2. Connect Arduino with computer and open Arduino IDE.
  3. Go to Tools > Serial Monitor
  4. Type the following command and click the Send button: AT+UART_DEF=9600,8,1,0,0
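The parameters of AT+UART_DEF are, in order: baud rate, data bits, stop bits, parity, and flow control. As an illustration (the helper class and method names here are mine, not part of the AT firmware), the command string can be assembled like this:

```java
public class EspUartCommand {
    // Builds an AT+UART_DEF command; parameter order per the ESP8266 AT
    // command set: baud, data bits, stop bits, parity, flow control.
    static String uartDef(int baud, int dataBits, int stopBits, int parity, int flowControl) {
        return "AT+UART_DEF=" + baud + "," + dataBits + "," + stopBits
                + "," + parity + "," + flowControl;
    }

    public static void main(String[] args) {
        // 9600 baud, 8 data bits, 1 stop bit, no parity, no flow control
        System.out.println(uartDef(9600, 8, 1, 0, 0));
    }
}
```

AT+UART_DEF persists the setting across resets; the related AT+UART_CUR applies it for the current session only.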

That’s it. You are ready to start programming with your ESP8266 WiFi shield.

Note: There are various ESP8266 modules available in the market and most of them operate on 3.3V. In that case, make sure you are using a voltage regulator, otherwise you might damage your board. Always double-check your connections and switch positions.
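For level-shifting a single signal line from a 5V Arduino down to a 3.3V module, a two-resistor divider is a common alternative; this is an illustration of the divider arithmetic, not part of the shield setup above:

```java
public class LevelShift {
    // Output of a two-resistor voltage divider: Vout = Vin * R2 / (R1 + R2).
    // Vin feeds R1, Vout is taken between R1 and R2, R2 goes to GND.
    static double dividerOut(double vin, double r1, double r2) {
        return vin * r2 / (r1 + r2);
    }

    public static void main(String[] args) {
        // A 5 V TX signal through a 1 kOhm / 2 kOhm divider lands near 3.3 V
        System.out.println(dividerOut(5.0, 1000, 2000)); // about 3.33
    }
}
```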

References:

  1. https://github.com/sleemanj/ESP8266_Simple/tree/master/firmware
  2. https://room-15.github.io/blog/2015/03/26/esp8266-at-command-reference/
  3. https://www.indiegogo.com/projects/cheapest-ardunio-esp8266-wifi-shield-more-gpio#/

 

Mockito for Android

The real purpose of unit testing is to test a piece of functionality in complete isolation from other components. At times it becomes very difficult to test functionality because the component is tightly coupled with other application components. In such scenarios Mockito comes to the rescue. Mockito is a Java-based framework used for effective unit testing of Java applications. Mockito is used to mock interfaces so that dummy functionality can be added to a mock interface and used in unit testing. This tutorial should help you learn how to create unit tests with Mockito as well as how to use its APIs in a simple and intuitive way. For example, if you are required to unit test a login screen, you can use Mockito to stub network calls and service responses.

  1. Add gradle dependency:
    dependencies {
        testCompile 'org.mockito:mockito-core:2.+'
    }
  2. Create a test class as shown in the example below:
    public class LoginControllerTest {

        @Rule
        public MockitoRule rule = MockitoJUnit.rule();

        @Mock
        LoginService loginService;

        @Mock
        LoginView view; // view mock assumed by the verification below

        private LoginController controller;

        @Before
        public void setUp() throws Exception {
            controller = new LoginController();
        }

        @Test
        public void testDoLogin() {
            // stub the service before exercising the controller
            when(loginService.doLogin("admin", "admin")).thenReturn("Success");
            controller.doLogin("admin", "admin");
            verify(view, times(1)).callLoginService(any(String.class));
        }
    }

  3. Stubbing in mockito
    Using when() you can stub methods. Use the when API when you want the mock to return a particular value when a particular method is called:
    when(mock.someMethod()).thenReturn(10);
    when(mock.doLogin("userName", "password")).thenReturn("Success");
  4. Verifying method execution
    Check that methods were called with the given arguments. The following examples show this:
    verify(view, times(1)).showProgress();
    verify(view, times(1)).showProgress(any(String.class));
    Here any() matches any object of the given type, excluding nulls.
  5. Spying
    Using Mockito's spy feature, we can mock only those methods of a real object that we want to, thus retaining the rest of the original behavior. Following is an example of spy():
    List list = new LinkedList();
    List spy = spy(list);
    // optionally, you can stub out some methods:
    when(spy.size()).thenReturn(100);
    // using the spy calls real methods
    spy.add("one");
    spy.add("two");
    // prints "one", the first element of the list
    System.out.println(spy.get(0));
    // size() was stubbed, so 100 is printed
    System.out.println(spy.size());
    // optionally, you can verify
    verify(spy).add("one");
    verify(spy).add("two");

 

Carduino: Control Arduino Car From Android Phone

Hey guys, I have updated the robotic car and was able to make it work with an Android device. Now you can make it dance with your phone. This is a very basic working model, and a lot can be done to turn this basic tutorial into a serious robotic rover.

For this project I created an Android app that communicates with the car via Bluetooth. I named the app Carduino. The modifications I made to the car we built in the previous article, Arduino obstacle avoiding car, are as follows:

  1. Remove the Ultrasonic sensor
  2. Remove the Servo motor
  3. Add Bluetooth module

Components:

  1. Arduino Mega/Uno
  2. HC-05 Bluetooth module
  3. Arduino motor driver shield
  4. Robotic car kit
  5. Connector wires

Instructions for assembling the robotic car, connecting the car motors with the motor driver shield, and the rest of the setup are given in the previous article.

Circuit Diagram:

Car_Blueetooth

  1. Connect the HC-05 Bluetooth module's VCC with +5V of the Arduino
  2. Connect the HC-05 Bluetooth module's GND with GND of the Arduino
  3. Connect the HC-05 Bluetooth module's RX with TX1 of the Arduino
  4. Connect the HC-05 Bluetooth module's TX with RX1 of the Arduino
  5. Connect the battery's +ve terminal with the +5 terminal of the motor driver shield
  6. Connect the battery's -ve terminal with the GND terminal of the motor driver shield
  7. Motor terminal connections are explained below:

M1 on outside = MOTOR1_A (+) north
M1 on inside = MOTOR1_B (-)
middle = GND
M2 on inside = MOTOR2_A (+)
M2 on outside = MOTOR2_B (-) south

Final assembled car will look like this:

IMAG1167.jpg

Android app:

The Carduino Android app is available on GitHub as Carduino. The app performs the following operations:

  1. Shows you a list of paired Bluetooth devices, as shown in the figure below:

Screenshot_2017-06-11-20-19-56

  2. Allows you to connect with a Bluetooth device and send commands:

Screenshot_2017-06-11-20-20-02

Screenshot_2017-06-11-20-22-06

The app is not very mature and you might face issues while using it. I will keep working on it to make it more efficient. Please feel free to help me improve the app.

Arduino Code:

The Arduino code is available on GitHub as Carduino_Arduino. The Arduino receives the commands via Bluetooth on the RX1 port. Whenever the user presses a button in the Carduino app, the app sends a command to the car. The following commands are sent from the device:

User Action  | Command | Car Action
Up Button    | F       | Move Forward
Down Button  | B       | Move Backward
Left Button  | L       | Turn Left
Right Button | R       | Turn Right
C Button     | C       | Connect socket
D Button     | D       | Disconnect socket
S Button     | S       | Stop car
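The command table above can be sketched on the app side as a simple character-to-action mapping (the class and method names here are illustrative, not from the Carduino source):

```java
public class CarduinoCommands {
    // Maps a Carduino button command character to the car action it triggers.
    static String actionFor(char command) {
        switch (command) {
            case 'F': return "Move Forward";
            case 'B': return "Move Backward";
            case 'L': return "Turn Left";
            case 'R': return "Turn Right";
            case 'C': return "Connect socket";
            case 'D': return "Disconnect socket";
            case 'S': return "Stop car";
            default:  return "Unknown";
        }
    }

    public static void main(String[] args) {
        System.out.println(actionFor('F')); // Move Forward
    }
}
```

On the Arduino side, the sketch reads a single byte from the serial port and switches on it the same way.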

Please feel free to get in touch if you have any questions/suggestions.

Arduino obstacle avoiding car

An Arduino-based self-driving car was just another thing on my mind. This project will give you a basic understanding of robotic rovers. You can mount whatever the car can carry on it and perform a wide range of tasks, such as mounting a camera for surveillance; with a few changes, such as adding a Bluetooth receiver, you can even control it with your phone. Its capabilities are limited only by your imagination.

This car uses an ultrasonic sensor to detect objects in its path and then turns towards the direction that has more open space.

Components used:

  1. Robotic car kit: chassis for mounting boards, motors, and other parts; two wheels, two motors, and screws
  2. Arduino board
  3. Motor driver shield for Arduino
  4. Servo motor 9G Tower Pro
  5. Ultrasonic sensor HC-SR04
  6. 4x AA batteries
  7. Battery holder
  8. On-off switch
  9. Programming cable
  10. Screwdriver

Component details:

Ultrasonic sensor HC-SR04: The HC-SR04 ultrasonic sensor is a very affordable proximity/distance sensor that is used mainly for object avoidance in various robotics projects. It essentially gives your Arduino eyes and spatial awareness and can prevent your robot from crashing or falling off a table. It has also been used in turret applications, water-level sensing, and even as a parking sensor.

Specification:

Working Voltage: DC 5V
Working Current: 15mA
Working Frequency: 40kHz
Max Range: 4m
Min Range: 2cm
Measuring Angle: 15 degrees
Trigger Input Signal: 10µs TTL pulse
Echo Output Signal: TTL-level pulse whose width is proportional to the range
Size: 46*20.4mm
Weight: 9g
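The echo pulse width maps directly to distance: sound covers roughly 58.2 µs per centimetre of round trip, which is the conversion the Arduino sketch later in this article uses. A minimal sketch of that conversion (in Java, for illustration):

```java
public class UltrasonicDistance {
    // Converts an HC-SR04 echo pulse width (microseconds) to distance in cm.
    // The pulse times the round trip, so divide by ~58.2 us per cm
    // (about 29.1 us per cm each way at the speed of sound).
    static double distanceCm(long echoMicros) {
        return echoMicros / 58.2;
    }

    public static void main(String[] args) {
        System.out.println(distanceCm(582)); // about 10 cm
    }
}
```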

Servomotor: A servomotor is a position-controlled rotary actuator. It mainly consists of a housing, circuit board, coreless motor, gears, and a position sensor. The receiver or MCU outputs a signal to the servomotor. The motor has a built-in reference circuit that gives out a reference signal with a cycle of 20ms and a width of 1.5ms. The motor compares the acquired DC bias voltage to the voltage of the potentiometer and outputs a voltage difference. The IC on the circuit board decides the rotation direction accordingly and drives the coreless motor. The gears then pass the force to the shaft. The sensor determines whether the commanded position has been reached according to the feedback signal. Servomotors are used in control systems that require setting and maintaining different angles. When the motor speed is constant, the gears cause the potentiometer to rotate. When the voltage difference reduces to zero, the motor stops. Normally, the rotation angle ranges between 0 and 180 degrees.

Servomotors come in many specifications, but all of them have three connection wires, distinguished by brown, red, and orange colours (different brands may use different colours). The brown wire is GND, the red one is positive power, and the orange one is the signal line.

Motor driver shield: The Arduino Motor Shield is based on the L298 (datasheet), which is a dual full-bridge driver designed to drive inductive loads such as relays, solenoids, DC and stepper motors. It lets you drive two DC motors with your Arduino board, controlling the speed and direction of each one independently.
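Each bridge channel is steered by two direction inputs; the sketch below captures the resulting truth table as a simplified illustration (the L298's enable/PWM input, which gates the outputs, is omitted):

```java
public class HBridge {
    // Direction truth table for one channel of a dual full-bridge driver
    // such as the L298: the two bridge inputs select the motor state.
    static String state(boolean inputA, boolean inputB) {
        if (inputA && !inputB) return "FORWARD";
        if (!inputA && inputB) return "BACKWARD";
        return "STOP"; // both inputs equal: no voltage across the motor
    }

    public static void main(String[] args) {
        System.out.println(state(true, false)); // FORWARD
    }
}
```

This is exactly the pattern the car sketch implements when it drives the direction bits through the shift register.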

IMAG1125

Assembling the car kit:

  1. Mounting the front wheel on the chassis:
    1. Connect a spacer to the wheel body as shown in the image below.
    2. Repeat the above step for the rest of the holes; the mounted spacers will look as shown in the image below:

IMAG1130

    3. Screw the spacers onto the chassis body as shown in the figure below.
    4. Repeat the above step for the rest of the spacers; the chassis-mounted wheel will look as shown in the figure below.
  2. Mounting the drive wheels on the motors:
    1. Insert the motor drive gear into the socket provided on the wheel rim; the connected motor and wheel will look as shown in the figure below.
    2. Repeat the above step for the second motor-wheel pair. Now you are ready to mount the motors on the chassis.

IMAG1135

  3. The aluminium socket connected to each motor has holes for mounting it on the chassis; screw in the motor as shown in the figure below.

IMAG1136

  4. Mount the battery holder using double-sided tape.

IMAG1127

Mounting the board:

Mount the motor driver shield over the Arduino as shown in the figure below.

Add some insulating material over the screws on the chassis so that they don't touch the board, or mount your board in such a way that it does not touch any of the screws.

IMAG1142

Once mounted, connect the motor wires to the shield as shown in the following figure:

IMAG1144

Mounting the servo:

I used double-sided tape and a piece of cardboard, placed on the chassis as shown in the figure below:

IMAG1145

Slide the servo motor into the groove on the cardboard as shown below:

Connect the servo to the servo slots on the motor driver shield:

Mounting ultrasonic sensor on Servo motor:

For this I used cardboard and created the ultrasonic sensor holder shown in the following images:

Mount the ultrasonic sensor holder on servo motor gear as shown below:

Connecting the ultrasonic sensor on Arduino:

  1. Connect the VCC pin of the ultrasonic sensor with +5V on the motor driver shield or Arduino board
  2. Connect the GND pin of the ultrasonic sensor with GND on the motor driver shield or Arduino board
  3. Connect the Echo pin of the ultrasonic sensor with digital pin 48 of the Arduino
  4. Connect the Trig pin of the ultrasonic sensor with digital pin 50 of the Arduino
  5. Make sure the wires are long enough for the movement of the ultrasonic sensor mounted on the servo, as the sensor will move left and right to scan for open spaces.

Circuit diagram:

circuit_diagram

Fully assembled:

The Software:

#include <VarSpeedServo.h> // servo library used below (header name restored; it was eaten by the blog's formatting)

//   Connector X1:
//     M1 on outside = MOTOR1_A   (+) north
//     M1 on inside  = MOTOR1_B   (-)
//     middle        = GND
//     M2 on inside  = MOTOR2_A   (+)
//     M2 on outside = MOTOR2_B   (-) south
//
//   Connector X2:
//     M3 on outside = MOTOR3_B   (-) south
//     M3 on inside  = MOTOR3_A   (+)
//     middle        = GND
//     M4 on inside  = MOTOR4_B   (-)
//     M4 on outside = MOTOR4_A   (+) north
//
//
//         -------------------------------
//         | -+s                         |
//         | -+s                         |
//    M1 A |                             | M4 A
//    M1 B |                             | M4 B
//    GND  |                             | GND
//    M2 A |                             | M3 A
//    M2 B |                             | M3 B
//         |                       ..... |
//         -------------------------------
//                + -

// Arduino pins for the shift register
#define MOTORLATCH 12
#define MOTORCLK 4
#define MOTORENABLE 7
#define MOTORDATA 8

// 8-bit bus after the 74HC595 shift register
// (not Arduino pins)
// These are used to set the direction of the bridge driver.
#define MOTOR1_A 2
#define MOTOR1_B 3
#define MOTOR2_A 1
#define MOTOR2_B 4
#define MOTOR3_A 5
#define MOTOR3_B 7
#define MOTOR4_A 0
#define MOTOR4_B 6

// Arduino pins for the PWM signals.
#define MOTOR1_PWM 11
#define MOTOR2_PWM 3
#define MOTOR3_PWM 6
#define MOTOR4_PWM 5
#define SERVO1_PWM 10
#define SERVO2_PWM 9

// Codes for the motor function.
#define FORWARD 1
#define BACKWARD 2
#define BRAKE 3
#define RELEASE 4

// Declare classes for Servo connectors of the MotorShield.
VarSpeedServo myservo;

int HIGH_SPEED = 255;
int LOW_SPEED = 128;

// Ultrasonic sensor
#define echoPin 48 // Echo Pin
#define trigPin 50 // Trigger Pin
#define LEDPin 13  // Onboard LED

int maximumRange = 200; // Maximum range needed
int minimumRange = 0;   // Minimum range needed
long duration, distance; // Duration used to calculate distance

void setup()
{
  Serial.begin(9600);
  Serial.println("Simple Motor Shield sketch");
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
  pinMode(LEDPin, OUTPUT);
  myservo.attach(SERVO1_PWM);
  myservo.write(90, 50, true); // centre the servo
}

void loop()
{
  // Suppose a DC motor is connected to M1_A(+) and M1_B(-).
  // Let it run full speed forward and half speed backward.
  // If 'BRAKE' or 'RELEASE' is used, the 'speed' parameter is ignored.
  long objectDistance = getDistanceFromObject();

  if (objectDistance >= maximumRange || objectDistance <= minimumRange) {
    /* Send a negative number to computer and turn LED ON
       to indicate "out of range" */
    //Serial.println("-1");
    motor(3, FORWARD, HIGH_SPEED);
    motor(4, FORWARD, HIGH_SPEED);
    //digitalWrite(LEDPin, HIGH);
  } else {
    motor(3, FORWARD, HIGH_SPEED);
    motor(4, FORWARD, HIGH_SPEED);
    /* Send the distance to the computer using Serial protocol, and
       turn LED OFF to indicate successful reading. */

    // Decide here which way to go.
    // NOTE: parts of this block were lost to the blog's formatting;
    // the threshold check and the left-hand validity check below are
    // reconstructed to mirror the surviving right-hand logic.
    if (objectDistance < 30) { // obstacle ahead (threshold in cm, reconstructed)
      long leftDistance = lookLeft();
      long rightDistance = lookRight();
      boolean isLeftValid;
      boolean isRightValid;

      if (leftDistance >= maximumRange || leftDistance <= minimumRange) {
        isLeftValid = false;
        leftDistance = -1;
        Serial.print("Left ");
        Serial.println(leftDistance);
      } else {
        isLeftValid = true;
        Serial.print("Left ");
        Serial.println(leftDistance);
      }

      if (rightDistance >= maximumRange || rightDistance <= minimumRange) {
        isRightValid = false;
        rightDistance = -1;
        Serial.print("Right ");
        Serial.println(rightDistance);
      } else {
        isRightValid = true;
        Serial.print("Right ");
        Serial.println(rightDistance);
      }

      if (isLeftValid && isRightValid) {
        if (leftDistance < rightDistance) {
          // go right (branch reconstructed to mirror branch 4 below)
          Serial.print("1. Go right");
          Serial.print(" ");
          Serial.print(rightDistance);
          Serial.print(" : left ");
          Serial.println(leftDistance);
          // reset servo
          myservo.write(90, 50, true);
          motor(3, FORWARD, HIGH_SPEED);
          motor(4, BACKWARD, HIGH_SPEED);
          delay(250);
          motor(3, RELEASE, 0);
          motor(4, RELEASE, 0);
          delay(250);
        } else if (leftDistance > rightDistance) {
          // go left
          Serial.print("2. Go left");
          Serial.print(" ");
          Serial.print(leftDistance);
          Serial.print(" : right ");
          Serial.println(rightDistance);
          // reset servo
          myservo.write(90, 50, true);
          motor(3, BACKWARD, HIGH_SPEED);
          motor(4, FORWARD, HIGH_SPEED);
          delay(250);
          motor(3, RELEASE, 0);
          motor(4, RELEASE, 0);
          delay(250);
        }
      } else if (isLeftValid && !isRightValid) {
        // go left
        Serial.print("3. Go left");
        Serial.print(" ");
        Serial.print(leftDistance);
        Serial.print(" : right ");
        Serial.println(rightDistance);
        // reset servo
        myservo.write(90, 50, true);
        motor(3, BACKWARD, HIGH_SPEED);
        motor(4, FORWARD, HIGH_SPEED);
        delay(250);
        motor(3, RELEASE, 0);
        motor(4, RELEASE, 0);
        delay(250);
      } else if (!isLeftValid && isRightValid) {
        // go right
        Serial.print("4. Go right");
        Serial.print(" ");
        Serial.print(rightDistance);
        Serial.print(" : left ");
        Serial.println(leftDistance);
        // reset servo
        myservo.write(90, 50, true);
        motor(3, FORWARD, HIGH_SPEED);
        motor(4, BACKWARD, HIGH_SPEED);
        delay(250);
        motor(3, RELEASE, 0);
        motor(4, RELEASE, 0);
        delay(250);
      }
    }
  }
}

// ---------------------------------
// motor
//
// Select the motor (1-4), the command,
// and the speed (0-255).
// The commands are: FORWARD, BACKWARD, BRAKE, RELEASE.
//
void motor(int nMotor, int command, int speed)
{
  int motorA, motorB;
  int motorPWM;

  // NOTE: the motor/pin selection below was lost to the blog's formatting
  // and is reconstructed from the pin defines above.
  if (nMotor >= 1 && nMotor <= 4)
  {
    switch (nMotor)
    {
      case 1: motorA = MOTOR1_A; motorB = MOTOR1_B; motorPWM = MOTOR1_PWM; break;
      case 2: motorA = MOTOR2_A; motorB = MOTOR2_B; motorPWM = MOTOR2_PWM; break;
      case 3: motorA = MOTOR3_A; motorB = MOTOR3_B; motorPWM = MOTOR3_PWM; break;
      case 4: motorA = MOTOR4_A; motorB = MOTOR4_B; motorPWM = MOTOR4_PWM; break;
      default: return;
    }

    // Set the direction bits through the shift register.
    switch (command)
    {
      case FORWARD:  shiftWrite(motorA, HIGH); shiftWrite(motorB, LOW);  break;
      case BACKWARD: shiftWrite(motorA, LOW);  shiftWrite(motorB, HIGH); break;
      case BRAKE:
      case RELEASE:  shiftWrite(motorA, LOW);  shiftWrite(motorB, LOW);  break;
    }

    if (speed >= 0 && speed <= 255)
    {
      analogWrite(motorPWM, speed);
    }
  }
}

// ---------------------------------
// shiftWrite
//
// The parameters are just like digitalWrite().
// The output is the pin 0...7 (the pin behind
// the shift register).
// The second parameter is HIGH or LOW.
// There is no initialization function.
// Initialization is automatically done at the first
// time it is used.
void shiftWrite(int output, int high_low)
{
  static int latch_copy;
  static int shift_register_initialized = false;

  // Do the initialization on the fly,
  // at the first time it is used.
  if (!shift_register_initialized)
  {
    // Set pins for shift register to output
    pinMode(MOTORLATCH, OUTPUT);
    pinMode(MOTORENABLE, OUTPUT);
    pinMode(MOTORDATA, OUTPUT);
    pinMode(MOTORCLK, OUTPUT);

    // Set pins for shift register to default value (low)
    digitalWrite(MOTORDATA, LOW);
    digitalWrite(MOTORLATCH, LOW);
    digitalWrite(MOTORCLK, LOW);
    // Enable the shift register, set Enable pin Low.
    digitalWrite(MOTORENABLE, LOW);

    // Start with all outputs (of the shift register) low.
    latch_copy = 0;
    shift_register_initialized = true;
  }

  // The defines HIGH and LOW are 1 and 0, so this is valid.
  bitWrite(latch_copy, output, high_low);

  // Use the default Arduino 'shiftOut()' function to
  // shift the bits with the MOTORCLK as clock pulse.
  // The 74HC595 shift register wants the MSB first.
  // After that, generate a latch pulse with MOTORLATCH.
  shiftOut(MOTORDATA, MOTORCLK, MSBFIRST, latch_copy);
  delayMicroseconds(5); // For safety, not really needed.
  digitalWrite(MOTORLATCH, HIGH);
  delayMicroseconds(5); // For safety, not really needed.
  digitalWrite(MOTORLATCH, LOW);
}

long lookLeft(void) {
  myservo.write(0, 50, true);
  // give sensor some time to read values
  //delay(1000);
  long objectDistance = getDistanceFromObject();
  return objectDistance;
}

long lookRight(void) {
  myservo.write(180, 50, true);
  // give sensor some time to read values
  //delay(1000);
  long objectDistance = getDistanceFromObject();
  return objectDistance;
}

// Calculate the distance from the object when moving straight.
long getDistanceFromObject() {
  // read input from ultrasonic sensor
  long objDistance, duration;
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);
  duration = pulseIn(echoPin, HIGH);
  // Calculate the distance (in cm) based on the speed of sound.
  objDistance = duration / 58.2;
  return objDistance;
}

The logic behind this car's functioning is that the ultrasonic sensor continuously measures the distance to any obstacle in front of the car. When the distance is less than a specified threshold, the servo motor checks the left and right sides of the car, and whichever direction has more open space, the car turns into that direction by rotating the wheels in opposite directions.

In robotics you can turn a wheeled robot either by rotating one wheel and using the other wheel as a pivot, or, the approach I am using, by moving both wheels in opposite directions. The benefit of the latter is that the turning speed increases, allowing your robot to change direction quickly. You can catch the robot live in action on YouTube.
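The avoid-and-turn decision described above can be sketched independently of the motor code (the threshold handling and all names here are illustrative, not lifted from the sketch):

```java
public class AvoidanceLogic {
    // Decides which way to turn given the measured side distances (cm).
    // A reading outside (minValidCm, maxValidCm) is treated as unreliable.
    // Returns "LEFT", "RIGHT", or "FORWARD" when neither side is usable.
    static String chooseDirection(double leftCm, double rightCm,
                                  double minValidCm, double maxValidCm) {
        boolean leftValid = leftCm > minValidCm && leftCm < maxValidCm;
        boolean rightValid = rightCm > minValidCm && rightCm < maxValidCm;
        if (leftValid && (!rightValid || leftCm > rightCm)) return "LEFT";
        if (rightValid) return "RIGHT";
        return "FORWARD"; // no reliable reading on either side
    }

    public static void main(String[] args) {
        // more room on the left, so turn left
        System.out.println(chooseDirection(120, 40, 0, 200)); // LEFT
    }
}
```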

In the next article in this series I will integrate Bluetooth and control the car using my Android phone.

JaCoCo for Android

JaCoCo

JaCoCo is a free code coverage library for Java, created by the EclEmma team based on lessons learned from using and integrating existing libraries over many years. JaCoCo also has plugin support for Jenkins, which shows a coverage graph on your project home screen in Jenkins.

Configuring JaCoCo

I.            To enable coverage, we need to add a property to our debug build variant. Using the Android plugin DSL, we can enable coverage through the testCoverageEnabled property:

android {
    ...
    buildTypes {
        debug {
            testCoverageEnabled true
        }
        ...
    }
}

II.            By default, the Android plugin only generates the coverage report from instrumented tests. To be able to generate the coverage of unit testing, we must create a task manually:

apply plugin: 'jacoco'

jacoco {
    toolVersion = "0.7.6.201602180812"
}

task jacocoTestReport(type: JacocoReport, dependsOn: 'testDebugUnitTest') {
    reports {
        xml.enabled = true
        html.enabled = true
    }

    def fileFilter = ['**/R.class', '**/R$*.class', '**/BuildConfig.*', '**/Manifest*.*', '**/*Test*.*', 'android/**/*.*']
    def debugTree = fileTree(dir: "${buildDir}/intermediates/classes/debug", excludes: fileFilter)
    def mainSrc = "${project.projectDir}/src/main/java"

    sourceDirectories = files([mainSrc])
    classDirectories = files([debugTree])
    executionData = files("${buildDir}/jacoco/testDebugUnitTest.exec")
}

III.            To enable the coverage report for local tests when using version 2.2.+ of Android Gradle plugin, you need to enable it in your app’s build.gradle:

android {
    ...
    testOptions {
        unitTests.all {
            jacoco {
                includeNoLocationClasses = true
            }
        }
    }
}

IV.            If you want to enable coverage for all the build flavors then you can use the following code:

project.afterEvaluate {
    // Grab all build types and product flavors
    def buildTypes = android.buildTypes.collect { type -> type.name }
    def productFlavors = android.productFlavors.collect { flavor -> flavor.name }
    println(buildTypes)
    println(productFlavors)
    // When no product flavors defined, use empty
    if (!productFlavors) productFlavors.add('')

    productFlavors.each { productFlavorName ->
        buildTypes.each { buildTypeName ->
            def sourceName, sourcePath

            if (!productFlavorName) {
                sourceName = sourcePath = "${buildTypeName}"
            } else {
                sourceName = "${productFlavorName}${buildTypeName.capitalize()}"
                sourcePath = "${productFlavorName}/${buildTypeName}"
            }
            def testTaskName = "test${sourceName.capitalize()}UnitTest"
            println("SourceName:${sourceName}")
            println("SourcePath:${sourcePath}")
            println("testTaskName:${testTaskName}")

            // Create coverage task of form 'testFlavorTypeCoverage' depending on 'testFlavorTypeUnitTest'
            task "${testTaskName}Coverage"(type: JacocoReport, dependsOn: "$testTaskName") {
                group = "Reporting"
                description = "Generate Jacoco coverage reports on the ${sourceName.capitalize()} build."

                classDirectories = fileTree(
                        dir: "${project.buildDir}/intermediates/classes/${sourcePath}",
                        excludes: ['**/R.class',
                                   '**/R$*.class',
                                   '**/*$ViewInjector*.*',
                                   '**/*$ViewBinder*.*',
                                   '**/BuildConfig.*',
                                   '**/Manifest*.*']
                )

                def coverageSourceDirs = [
                        "src/main/java",
                        "src/$productFlavorName/java",
                        "src/$buildTypeName/java"
                ]
                additionalSourceDirs = files(coverageSourceDirs)
                sourceDirectories = files(coverageSourceDirs)
                executionData = files("${project.buildDir}/jacoco/${testTaskName}.exec")
                println("${project.buildDir}/jacoco/${testTaskName}.exec")
                reports {
                    xml.enabled = true
                    html.enabled = true
                }
            }
        }
    }
}
Run the coverage task using gradlew clean test<build_flavor_name>UnitTestCoverage. For example, if your build flavor is PaidDebug, the command to run coverage is gradlew clean testPaidDebugUnitTestCoverage. By default, the coverage report is generated in the app\build\reports\jacoco\testPaidDebugUnitTestCoverage\html directory.

JaCoCo Jenkins configuration

With the local Gradle configuration above you can already run coverage through JaCoCo, but JaCoCo also provides a Jenkins plugin which you can integrate with your Jenkins installation. Following are the steps to configure the JaCoCo plugin with Jenkins:

I.            Install JaCoCo plugin:

> Go to Manage Jenkins > Manage Plugins.

> Search for JaCoCo Plugin

> Click Download now and install after restart.

II.            Configure JaCoCo in project

> Go to your project.

> Click Configure from top left Navigation menu.

> Scroll to Add post-build actions.

> Click on Add post-build actions drop down menu, a list will open

> Select Record JaCoCo coverage report from the drop-down menu. The following item will be added to your project's Jenkins configuration:

jacoco_jenkins_plugin.png

Alternatively, you can run JaCoCo coverage from Jenkins using following:

gradle_script_jacoco

In this case, you need to add the Gradle tasks; in our case, clean testPaidDebugUnitTestCoverage. Using this approach, however, you won't be able to see the coverage report in your project dashboard.

Arduino temperature monitoring

A year back I started building an understanding of IoT devices. Theory is freely available on the web, so to explore this space further I decided to get my hands dirty with some real hardware. I initially started with non-IoT projects such as controlling LED lights and buzzers with an Arduino. Then I moved on to slightly more complex projects such as this one: temperature monitoring. Although it is a very basic setup in the custom hardware space, it is a bit complex for beginners like me.

Components required:

Hardware:

1X LM35 Temp Sensor

1X 1602 LCD display

17X Jumper wires

1X Arduino Uno/Mega board

1X Arduino programming cable

1X Breadboard

1X 1kOhm resistor

imag1124.jpg

Software:

Arduino IDE and SDK

I purchased the keyestudio super learning kit for Arduino from eBay, but these items are also readily available on Amazon and Banggood.com.

Connecting the pieces together:

LM35 temperature sensor: The LM35 is a common and easy-to-use temperature sensor. It does not require any other hardware; you just need an analog port to make it work. The difficulty lies in the code that converts the analog value it reads into a Celsius temperature.
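The LM35 outputs 10 mV per °C, so with the Arduino's 10-bit ADC the conversion the code has to perform looks like this (the 5 V ADC reference is an assumption about this setup; class and method names are illustrative):

```java
public class Lm35Reader {
    // Converts a 10-bit ADC reading (0-1023) into degrees Celsius,
    // assuming a 5.0 V ADC reference; the LM35 outputs 10 mV per degree C.
    static double toCelsius(int adcValue) {
        double volts = adcValue * 5.0 / 1024.0; // ADC counts to volts
        return volts * 100.0;                   // 10 mV/degC => 100 degC per volt
    }

    public static void main(String[] args) {
        System.out.println(toCelsius(51)); // about 24.9 degC
    }
}
```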

1602 LCD display: The 1602 LCD has wide applications. Originally, the 1602 LCD used an HD44780 controller; now almost all 1602 LCD modules use a compatible IC, so their features are basically the same.

1602 LCD main parameters:

Display capacity: 16 * 2 characters.
Chip operating voltage: 4.5 ~ 5.5V.
Working current: 2.0mA (5.0V).
Optimum working voltage of the module: 5.0V.
Character size: 2.95 * 4.35 (W * H) mm.

Pin description of 1602 LCD:

No. | Mark | Pin description           | No. | Mark | Pin description
1   | VSS  | Power GND                 | 9   | D2   | Data I/O
2   | VDD  | Power positive            | 10  | D3   | Data I/O
3   | VL   | LCD voltage bias signal   | 11  | D4   | Data I/O
4   | RS   | Select data/command (H/L) | 12  | D5   | Data I/O
5   | R/W  | Select read/write (H/L)   | 13  | D6   | Data I/O
6   | E    | Enable signal             | 14  | D7   | Data I/O
7   | D0   | Data I/O                  | 15  | BLA  | Back light power positive
8   | D1   | Data I/O                  | 16  | BLK  | Back light power negative

Interface description:

  1. There are two power supplies: one for the module and one for the backlight, generally 5 V. In this project, we power the backlight from 3.3 V.
  2. VL is the pin for adjusting the contrast ratio; it is usually connected in series with a potentiometer (no more than 5 kΩ). In this experiment, we use a 1 kΩ resistor instead. It can be wired in one of two ways, high potential or low potential; here we use the low-potential method, connecting the resistor and then GND.
  3. RS is the data/command select pin. When this pin is high, the controller is in data mode; when it is low, command mode.
  4. R/W is the read/write select pin. When this pin is high, the controller performs a read operation; when it is low, a write operation.
  5. E is the enable pin. Once the signals on the bus are stable, a positive pulse on E latches the operation. While this pin is high, the bus is not allowed to change.
  6. D0–D7 form an 8-bit bidirectional parallel bus, used for command and data transmission.
  7. BLA is the anode for the backlight; BLK is the cathode.

The 4 basic operations of the 1602 LCD:

| Operation | Input | Output |
|-----------|-------|--------|
| Read status | RS=L, R/W=H, E=H | D0–D7 = status word |
| Write command | RS=L, R/W=L, D0–D7 = command code, E = high pulse | none |
| Read data | RS=H, R/W=H, E=H | D0–D7 = data |
| Write data | RS=H, R/W=L, D0–D7 = data, E = high pulse | none |

Connection Diagram:

1602 LCD and Arduino connection diagram:

LCD_Connection

Temperature sensor, LCD, and Arduino connection diagram:

connection_diagram

Code:

int tempin;
int DI = 12;      // register-select (RS) pin of the LCD
int RW = 11;      // read/write pin
int DB[] = {3, 4, 5, 6, 7, 8, 9, 10}; // array of pins used for the data bus
int Enable = 2;   // enable (E) pin

void LcdCommandWrite(int value) {
  int i = 0;
  // Shift the value out LSB-first across the bus pins; the two extra
  // iterations (pins 11 and 12) drive RW and DI low, selecting
  // command/write mode.
  for (i = DB[0]; i <= DI; i++) {
    digitalWrite(i, value & 01);
    value >>= 1;
  }
  digitalWrite(Enable, LOW);
  delayMicroseconds(1);
  digitalWrite(Enable, HIGH); // pulse Enable to latch the command
  delayMicroseconds(1);
  digitalWrite(Enable, LOW);
  delayMicroseconds(1);
}

void LcdDataWrite(int value) {
  int i = 0;
  digitalWrite(DI, HIGH); // data mode
  digitalWrite(RW, LOW);  // write operation
  for (i = DB[0]; i <= DB[7]; i++) {
    digitalWrite(i, value & 01);
    value >>= 1;
  }
  digitalWrite(Enable, LOW);
  delayMicroseconds(1);
  digitalWrite(Enable, HIGH); // pulse Enable to latch the data
  delayMicroseconds(1);
  digitalWrite(Enable, LOW);
  delayMicroseconds(1);
}

void setup() {
  // put your setup code here, to run once:
  int i = 0;
  for (i = Enable; i <= DI; i++) {
    pinMode(i, OUTPUT);
  }
  delay(100);
  // initialize the LCD after a brief pause
  LcdCommandWrite(0x38); // 8-bit interface, 2-line display, 5x7 character size
  delay(64);
  LcdCommandWrite(0x38); // repeated per the HD44780 initialization sequence
  delay(50);
  LcdCommandWrite(0x38);
  delay(20);
  LcdCommandWrite(0x06); // entry mode: auto-increment cursor, no display shift
  delay(20);
  LcdCommandWrite(0x0E); // display on, cursor on, no blinking
  delay(20);
  LcdCommandWrite(0x01); // clear the screen, cursor returns to position 0
  delay(100);
  LcdCommandWrite(0x80); // move the cursor to the start of the first line
  delay(20);
}

void loop() {
  // put your main code here, to run repeatedly:
  int tempVal;
  int calVal;
  LcdCommandWrite(0x01); // clear the screen, cursor returns to position 0
  delay(10);
  LcdCommandWrite(0x80 + 3); // cursor to the fourth position of the first line
  delay(10);
  // write the welcome message
  LcdDataWrite('T');
  LcdDataWrite('e');
  LcdDataWrite('m');
  LcdDataWrite('p');
  LcdDataWrite('e');
  LcdDataWrite('r');
  LcdDataWrite('a');
  LcdDataWrite('t');
  LcdDataWrite('u');
  LcdDataWrite('r');
  LcdDataWrite('e');
  LcdDataWrite(' ');
  LcdDataWrite('i');
  LcdDataWrite('s');
  LcdDataWrite(':');
  delay(10);
  LcdCommandWrite(0xC0); // cursor to the start of the second line
  delay(10);
  tempVal = analogRead(0);            // raw ADC reading from the LM35
  calVal = (125 * tempVal) >> 8;      // convert to degrees Celsius
  String strTemp = String(calVal);
  char tempchars[3];
  strTemp.toCharArray(tempchars, 3);  // two digits plus the null terminator
  LcdDataWrite(tempchars[0]);
  LcdDataWrite(tempchars[1]);
  LcdDataWrite('C');
  LcdDataWrite('o');
  delay(1000);
}

In the above code, tempVal = analogRead(0); reads the raw value from the temperature sensor, and calVal = (125 * tempVal) >> 8; converts it to Celsius. The value is then sent to the LCD, but before sending it is converted into characters using:

String strTemp = String(calVal);

char tempchars[3];

strTemp.toCharArray(tempchars, 3);

LcdDataWrite(tempchars[0]);

LcdDataWrite(tempchars[1]);

Since this piece of code is inside the loop() method, the displayed value is refreshed every second because of the delay(1000); at the end.

Check out fully functional project video for this project. Hope it helps.