Thursday, 26 April 2018

Printing Numbers in Sequence Using Threads - Java Program

This post shows how you can print numbers in sequence using three threads in Java. If there are three threads thread1, thread2 and thread3, the numbers should be printed alternately by these threads like this.

thread1 - 1
thread2 - 2
thread3 - 3
thread1 - 4
thread2 - 5
thread3 - 6
...
...
...

Java program

When printing numbers in sequence using threads, the trick is to use modulo division to check which thread should print the current number and which threads have to block and wait for their turn.

class SharedPrinter{
    int number = 1;
    int numOfThreads;
    int numInSequence;
    SharedPrinter(int numInSequence, int numOfThreads){
        this.numInSequence = numInSequence;
        this.numOfThreads = numOfThreads;
    }
    public void printNum(int result){
        synchronized(this) {
            while (number < numInSequence - 1) {
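                // Not this thread's turn yet (number % numOfThreads != result), so wait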
                while(number % numOfThreads != result){
                    try {
                        this.wait();
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }
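                // This thread's turn - print the number, move to the next one and wake up the waiting threads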
                System.out.println(Thread.currentThread().getName() + " - " + number++);
                this.notifyAll();
            }
        }
    }
}
class SeqRunnable implements Runnable{
    SharedPrinter sp;
    int result;
    SeqRunnable(SharedPrinter sp, int result){
        this.sp = sp;
        this.result = result;
    }
    @Override
    public void run() {
        sp.printNum(result);
    }
}
public class SeqNumber {
    final static int NUMBERS_IN_SEQUENCE = 10;
    final static int NUMBER_OF_THREADS = 3;
    public static void main(String[] args) {
        // Shared object
        SharedPrinter sp = new SharedPrinter(NUMBERS_IN_SEQUENCE, NUMBER_OF_THREADS);
        // Creating 3 threads
        Thread t1 = new Thread(new SeqRunnable(sp, 1), "Thread1");
        Thread t2 = new Thread(new SeqRunnable(sp, 2), "Thread2");
        Thread t3 = new Thread(new SeqRunnable(sp, 0), "Thread3");

        t1.start();
        t2.start();
        t3.start();
    }
}

Output

Thread1 - 1
Thread2 - 2
Thread3 - 3
Thread1 - 4
Thread2 - 5
Thread3 - 6
Thread1 - 7
Thread2 - 8
Thread3 - 9
Thread1 - 10

That's all for this topic Printing Numbers in Sequence Using Threads - Java Program. If you have any doubt or any suggestions to make please drop a comment. Thanks!


Related Topics

  1. How to Run Threads in Sequence - Java Program
  2. Print Odd-Even Numbers Using Threads And wait-notify - Java Program
  3. Producer-Consumer Java Program Using ArrayBlockingQueue
  4. How to Create Deadlock in Java Multi-Threading - Java Program
  5. Race condition in Java multi-threading


Main Thread in Java

When a Java program starts, one thread starts running immediately; that thread is known as the main thread in Java.

If you run the following program, where no thread is created explicitly, printing the name of the current thread still displays “main”. This is the main thread which started running automatically.

public class ThreadDemo {
    public static void main(String[] args) {
        System.out.println("Thread name- " + Thread.currentThread().getName());
    }
}

Output

Thread name- main

Any thread spawned from the current thread inherits properties like thread priority and daemon status from the current thread. Since the main thread is the first thread to start, other threads are usually spawned from the context of the main thread and therefore inherit these properties from it.

Also note that the JVM terminates only after all the non-daemon threads have finished execution. If your application has other non-daemon threads executing, the main thread can terminate before those threads. So, the main thread is the first to start in your application but it doesn’t have to be the last to finish.
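As a quick illustration of the daemon thread behaviour, here is a minimal sketch (the DaemonDemo class and the sleep duration are just for illustration, they are not part of the example that follows)-

public class DaemonDemo {
    public static void main(String[] args) {
        Thread worker = new Thread(() -> {
            try {
                Thread.sleep(2000);
                System.out.println("Worker finished");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "Worker");
        // setDaemon(true) must be called before start(). With it the JVM exits
        // as soon as the main thread finishes, so "Worker finished" may never be
        // printed. Without it the JVM waits for this non-daemon thread to complete.
        // worker.setDaemon(true);
        worker.start();
        System.out.println("main finished");
    }
}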

Here is a Java example of the main thread where three more threads are spawned. Since all three threads are spawned from the main thread, they should have the same priority and should be non-daemon threads. The main thread object is passed to the spawned threads so they can check whether the main thread is still alive using the isAlive() method.

class NumThread implements Runnable{
    Thread mainThread;
    public NumThread(Thread thread) {
        this.mainThread = thread;
    }
    @Override
    public void run() {
        System.out.println(Thread.currentThread().getName()+ " Priority " 
         + Thread.currentThread().getPriority());
        System.out.println("Is Daemon Thread " + Thread.currentThread().isDaemon());
        for (int i = 0; i < 5; i++) {             
            System.out.println(Thread.currentThread().getName() + " : " + i);
        } 
        System.out.println("Thread name " + mainThread.getName());
        System.out.println("Main Thread Status " + mainThread.isAlive());     
    }
}

public class MainThreadDemo {

    public static void main(String[] args) {
        System.out.println("Thread name- " + Thread.currentThread().getName());
        System.out.println("Thread Status " + Thread.currentThread().isAlive());
        System.out.println("Thread Priority " + Thread.currentThread().getPriority());
        System.out.println("Is Daemon Thread " + Thread.currentThread().isDaemon());
         // Creating threads
        Thread t1 = new Thread(new NumThread(Thread.currentThread()), "Thread-1");
        Thread t2 = new Thread(new NumThread(Thread.currentThread()), "Thread-2");
        Thread t3 = new Thread(new NumThread(Thread.currentThread()), "Thread-3");
        t1.start();
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        t2.start(); 
        t3.start();
        System.out.println("Thread Status " + Thread.currentThread().isAlive());
    }
}

Output

Thread name- main
Thread Status true
Thread Priority 5
Is Daemon Thread false
Thread-1 Priority 5
Is Daemon Thread false
Thread-1 : 0
Thread-1 : 1
Thread-1 : 2
Thread-1 : 3
Thread-1 : 4
Thread name main
Main Thread Status true
Thread Status true
Thread-2 Priority 5
Is Daemon Thread false
Thread-2 : 0
Thread-2 : 1
Thread-2 : 2
Thread-3 Priority 5
Is Daemon Thread false
Thread-2 : 3
Thread-2 : 4
Thread name main
Main Thread Status false
Thread-3 : 0
Thread-3 : 1
Thread-3 : 2
Thread-3 : 3
Thread-3 : 4
Thread name main
Main Thread Status false

As you can see, for all the threads the priority is 5, which is the same as the priority of the main thread, and all the spawned threads are non-daemon threads. You can also see from the displayed messages that the main thread died while Thread-2 and Thread-3 were still executing.

That's all for this topic Main Thread in Java. If you have any doubt or any suggestions to make please drop a comment. Thanks!


Related Topics

  1. Creating Thread in Java
  2. Thread States in Java Multi-Threading
  3. Race Condition in Java Multi-Threading
  4. Why wait(), notify() And notifyAll() Must be Called Inside a Synchronized Method or Block
  5. Volatile in Java


Wednesday, 25 April 2018

How to Compress MapReduce Job Output in Hadoop

You can choose to compress the output of a MapReduce job in Hadoop. You can configure it for all the jobs in a cluster or set the properties for specific jobs.

Configuration parameters for compressing MapReduce job output

  • mapreduce.output.fileoutputformat.compress - Set this property to true if you want to compress the MapReduce job output. Default value is false.
  • mapreduce.output.fileoutputformat.compress.type - This configuration is applicable if the MapReduce job output is a sequence file. In that case you can specify any one of these values for compression - NONE, RECORD or BLOCK. Default is RECORD.
  • mapreduce.output.fileoutputformat.compress.codec - The codec to be used for compression. Default is org.apache.hadoop.io.compress.DefaultCodec.

Configuring at cluster level

If you want to compress the output of all MapReduce jobs running on the cluster, you can configure these parameters in mapred-site.xml.
As an example, the following configuration compresses the output of MapReduce jobs using the Gzip compression format.

<property>
  <name>mapreduce.output.fileoutputformat.compress</name>
  <value>true</value>
</property>
<property>
  <name>mapreduce.output.fileoutputformat.compress.type</name>
  <value>RECORD</value>
</property>
<property>
  <name>mapreduce.output.fileoutputformat.compress.codec</name>
  <value>org.apache.hadoop.io.compress.GzipCodec</value>
</property>

Configuring at per-job basis

If you want to compress the output of a specific MapReduce job, add the following properties to your job configuration.

FileOutputFormat.setCompressOutput(job, true);
FileOutputFormat.setOutputCompressorClass(job, GzipCodec.class);

If the output is a sequence file then you can set the compression type too.

SequenceFileOutputFormat.setOutputCompressionType(job, CompressionType.BLOCK);
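If it helps to see where these calls fit, here is a minimal driver sketch (the WordCount class, job name and argument paths are just placeholders; the usual imports for Job, Path, FileInputFormat, FileOutputFormat and GzipCodec are assumed)-

Configuration conf = new Configuration();
Job job = Job.getInstance(conf, "WordCount");
job.setJarByClass(WordCount.class);
FileInputFormat.addInputPath(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
// Compress the final job output using the Gzip codec
FileOutputFormat.setCompressOutput(job, true);
FileOutputFormat.setOutputCompressorClass(job, GzipCodec.class);
System.exit(job.waitForCompletion(true) ? 0 : 1);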

That's all for this topic How to Compress MapReduce Job Output in Hadoop. If you have any doubt or any suggestions to make please drop a comment. Thanks!


Related Topics

  1. How to Compress Intermediate Map Output in Hadoop
  2. Data Compression in Hadoop
  3. Compressing File in snappy Format in Hadoop - Java Program
  4. Word Count MapReduce Program in Hadoop
  5. How MapReduce Works in Hadoop


How to Compress Intermediate Map Output in Hadoop

In order to speed up the MapReduce job it is helpful to compress the intermediate map output in Hadoop.

The output of the map phase is-

  1. Stored to disk.
  2. Transferred to the reducers on different nodes as their input.

Thus compressing the map output helps in two ways-

  1. It saves storage (reduces the I/O) while storing the map output.
  2. It reduces the amount of data transferred to the reducers.

It is better to use a fast compressor like Snappy, LZO or LZ4 to compress the map output, as a higher compression ratio would mean more time spent compressing. Moreover, whether the compressed output is splittable or not does not matter when compressing intermediate map output.

Configuration parameters for compressing map output

You can set configuration parameters for the whole cluster so that all the jobs running on the cluster will compress the map output. You can also opt to do it for individual MapReduce jobs.

As an example, if you want to set Snappy as the compression format for the map output at the cluster level, you need to set the following properties in mapred-site.xml:

<property>
  <name>mapreduce.map.output.compress</name>
  <value>true</value>
</property>
<property>
  <name>mapreduce.map.output.compress.codec</name>
  <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>

If you want to set it for individual jobs then you need to set the following properties within your MapReduce program-

Configuration conf = new Configuration();
conf.setBoolean("mapreduce.map.output.compress", true);
conf.set("mapreduce.map.output.compress.codec", "org.apache.hadoop.io.compress.SnappyCodec");

That's all for this topic How to Compress Intermediate Map Output in Hadoop. If you have any doubt or any suggestions to make please drop a comment. Thanks!


Related Topics

  1. How to Compress MapReduce Job Output in Hadoop
  2. Data Compression in Hadoop
  3. Compressing File in bzip2 Format in Hadoop - Java Program
  4. Word Count MapReduce Program in Hadoop
  5. What is SafeMode in Hadoop


Tuesday, 24 April 2018

Compressing File in snappy Format in Hadoop - Java Program

This post shows how to compress an input file in snappy format in Hadoop. The Java program reads the input file from the local file system and copies it to HDFS in compressed snappy format. The input file is large enough to be stored as more than one HDFS block, which lets you verify whether the file is splittable when used in a MapReduce job. Note that the snappy format is not splittable, so the MapReduce job will create only a single split for the whole data.

Java program to compress file in snappy format

As explained in the post Data Compression in Hadoop, there are different codec (compressor/decompressor) classes for different compression formats. The codec class for the snappy compression format is “org.apache.hadoop.io.compress.SnappyCodec”.

Java code

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;
import org.apache.hadoop.io.compress.CompressionOutputStream;

public class SnappyCompress {
 public static void main(String[] args) {
  Configuration conf = new Configuration();
  InputStream in = null;
  OutputStream out = null;
  try {
   FileSystem fs = FileSystem.get(conf);
   // Input file - local file system
   in = new BufferedInputStream(new FileInputStream("/netjs/Hadoop/Data/log.txt"));
   // Output file path in HDFS
   Path outFile = new Path("/user/out/test.snappy");
   // Verifying if the output file already exists
   if (fs.exists(outFile)) {
    throw new IOException("Output file already exists");
   }
   
   out = fs.create(outFile);
   
   // snappy compression 
   CompressionCodecFactory factory = new CompressionCodecFactory(conf);
   CompressionCodec codec = factory.getCodecByClassName
    ("org.apache.hadoop.io.compress.SnappyCodec");
   CompressionOutputStream compressionOutputStream = codec.createOutputStream(out);
   
   try {
    IOUtils.copyBytes(in, compressionOutputStream, 4096, false);
    compressionOutputStream.finish();
    
   } finally {
    IOUtils.closeStream(in);
    IOUtils.closeStream(compressionOutputStream);
   }
   
  } catch (IOException e) {
   e.printStackTrace();
  }
 }
}
    

To run this Java program in the Hadoop environment, export the classpath pointing to the directory where the .class file for the Java program resides.

$ export HADOOP_CLASSPATH=/home/netjs/eclipse-workspace/bin 

Then you can run the Java program using the following command.

$ hadoop org.netjs.SnappyCompress

18/04/24 15:49:41 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library from the embedded binaries
18/04/24 15:49:41 INFO compress.CodecPool: Got brand-new compressor [.snappy]

Once the program is successfully executed you can check the number of HDFS blocks created by running the hdfs fsck command.

$ hdfs fsck /user/out/test.snappy

 Total size: 419688027 B
 Total dirs: 0
 Total files: 1
 Total symlinks:  0
 Total blocks (validated): 4 (avg. block size 104922006 B)
 Minimally replicated blocks: 4 (100.0 %)
 Over-replicated blocks: 0 (0.0 %)
 Under-replicated blocks: 0 (0.0 %)
 Mis-replicated blocks:  0 (0.0 %)
 Default replication factor: 1
 Average block replication: 1.0
 Corrupt blocks:  0
 Missing replicas:  0 (0.0 %)
 Number of data-nodes:  1
 Number of racks:  1
FSCK ended at Tue Apr 24 15:52:09 IST 2018 in 5 milliseconds

As you can see there are 4 HDFS blocks.

Now you can give this compressed file test.snappy as input to a wordcount MapReduce program. Since the compression format used is snappy, which is not splittable, there will be only one input split even though there are 4 HDFS blocks.

$ hadoop jar /home/netjs/wordcount.jar org.netjs.WordCount /user/out/test.snappy /user/mapout1

18/04/24 15:54:44 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
18/04/24 15:54:45 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
18/04/24 15:54:46 INFO input.FileInputFormat: Total input files to process : 1
18/04/24 15:54:46 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library from the embedded binaries

18/04/24 15:54:46 INFO mapreduce.JobSubmitter: number of splits:1

18/04/24 15:54:46 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1524565091782_0001
18/04/24 15:54:47 INFO impl.YarnClientImpl: Submitted application application_1524565091782_0001

You can see from the console message that only one input split is created for the MapReduce job.

That's all for this topic Compressing File in snappy Format in Hadoop - Java Program. If you have any doubt or any suggestions to make please drop a comment. Thanks!


Related Topics

  1. Java Program to Read File in HDFS
  2. Word Count MapReduce Program in Hadoop
  3. Replica Placement Policy in Hadoop Framework
  4. What is SafeMode in Hadoop


Compressing File in bzip2 Format in Hadoop - Java Program

This post shows how to compress an input file in bzip2 format in Hadoop. The Java program reads the input file from the local file system and copies it to HDFS in compressed bzip2 format.

The input file is large enough to be stored as more than one HDFS block, which lets you verify whether the file is splittable when used in a MapReduce job. Note that the bzip2 format is splittable.

Java program to compress file in bzip2 format

As explained in the post Data Compression in Hadoop, there are different codec (compressor/decompressor) classes for different compression formats. The codec class for the bzip2 compression format is “org.apache.hadoop.io.compress.BZip2Codec”.

Java code

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;
import org.apache.hadoop.io.compress.CompressionOutputStream;

public class BzipCompress {

 public static void main(String[] args) {
  Configuration conf = new Configuration();
  InputStream in = null;
  OutputStream out = null;
  try {
   FileSystem fs = FileSystem.get(conf);
   // Input file - local file system
   in = new BufferedInputStream(new FileInputStream
           ("netjs/Hadoop/Data/log.txt"));
   // Output file path in HDFS
   Path outFile = new Path("/user/out/test.bz2");
   // Verifying if the output file already exists
   if (fs.exists(outFile)) {
    System.out.println("Output file already exists");
    throw new IOException("Output file already exists");
   }
   
   out = fs.create(outFile);
   
   // bzip2 compression 
   CompressionCodecFactory factory = new CompressionCodecFactory(conf);
   CompressionCodec codec = factory.getCodecByClassName
     ("org.apache.hadoop.io.compress.BZip2Codec");
   CompressionOutputStream compressionOutputStream = codec.createOutputStream(out);
   
   try {
    IOUtils.copyBytes(in, compressionOutputStream, 4096, false);
    compressionOutputStream.finish();
    
   } finally {
    IOUtils.closeStream(in);
    IOUtils.closeStream(compressionOutputStream);
   }
   
  } catch (IOException e) {
   e.printStackTrace();
  }
 }

}
    

To run this Java program in the Hadoop environment, export the classpath pointing to the directory where the .class file for the Java program resides.

$ export HADOOP_CLASSPATH=/home/netjs/eclipse-workspace/bin

Then you can run the Java program using the following command.

$ hadoop org.netjs.BzipCompress
    
18/04/24 10:44:05 INFO bzip2.Bzip2Factory: Successfully
  loaded & initialized native-bzip2 library system-native
18/04/24 10:44:05 INFO compress.CodecPool: Got brand-new compressor [.bz2]
 
Once the program is successfully executed you can check the number of HDFS blocks created by running the hdfs fsck command.

$ hdfs fsck /user/out/test.bz2

.Status: HEALTHY
 Total size: 228651107 B
 Total dirs: 0
 Total files: 1
 Total symlinks:  0
 Total blocks (validated): 2 (avg. block size 114325553 B)
 Minimally replicated blocks: 2 (100.0 %)
 Over-replicated blocks: 0 (0.0 %)
 Under-replicated blocks: 0 (0.0 %)
 Mis-replicated blocks:  0 (0.0 %)
 Default replication factor: 1
 Average block replication: 1.0
 Corrupt blocks:  0
 Missing replicas:  0 (0.0 %)
 Number of data-nodes:  1
 Number of racks:  1
FSCK ended at Tue Apr 24 10:49:55 IST 2018 in 1 milliseconds
 

As you can see there are 2 HDFS blocks.

In order to verify how many input splits the MapReduce job creates, this compressed file test.bz2 is given as input to a wordcount MapReduce program. Since the compression format used is bzip2, which is splittable, there should be 2 input splits for the job.

   
$ hadoop jar /home/netjs/wordcount.jar org.netjs.WordCount /user/out/test.bz2 /user/mapout

18/04/24 10:57:10 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
18/04/24 10:57:11 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
18/04/24 10:57:11 WARN mapreduce.JobResourceUploader: No job jar file set. User classes may not be found. See Job or Job#setJar(String).
18/04/24 10:57:11 INFO input.FileInputFormat: Total input files to process : 1
18/04/24 10:57:11 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library from the embedded binaries
18/04/24 10:57:11 INFO mapreduce.JobSubmitter: number of splits:2
You can see from the console message that two input splits are created.

That's all for this topic Compressing File in bzip2 Format in Hadoop - Java Program. If you have any doubt or any suggestions to make please drop a comment. Thanks!


Related Topics

  1. Compressing File in snappy Format in Hadoop - Java Program
  2. Java Program to Read File in HDFS
  3. Word Count MapReduce Program in Hadoop
  4. Replica Placement Policy in Hadoop Framework
  5. What is SafeMode in Hadoop


Monday, 23 April 2018

Bounded Type Parameter in Java Generics

In the post generics in Java you would have already seen examples where type parameters can be replaced by any class type. But there are times when you want to restrict the types that can be used as type arguments in a parametrized type. That can be done using bounded type parameters in Java Generics.

For example, if you have a generic class with a method that operates on numbers, you would want to restrict it to accept instances of Number or its subclasses only.

Let’s first see an example where the type parameter is not bounded, to analyse what happens in that case-

You have a generic class Test with a method getAvg which returns the average of the numbers in the array passed to it. Since the class is generic, the intent is to pass an array of any numeric type - Integer, Double or Float. The return type of getAvg is double because you want an accurate value. Since the Number class (the superclass of all numeric wrapper classes) has a doubleValue() method, you can always get the double value out of any type of number.

public class Test<T> {
    T[] numArr;
    Test(T[] numArr){
        this.numArr = numArr;
    }
    public double getAvg(){
        double sum = 0.0;
        for(int i = 0; i < numArr.length; i++){
            sum += numArr[i].doubleValue();
        }
        double avg = sum/numArr.length;
        return avg;
    }
}

This code will give you a compile-time error -

The method doubleValue() is undefined for the type T

You get this error because there is no way for the compiler to know that type T will always be used for numeric classes. You need to let the compiler know, and that’s where a bounded type comes in, restricting the types that can be used for the parametrized type. In the above case that restriction is that the type should be Number (or a subclass of Number).

Bounded type in Java generics

In order to create a bounded type you need to provide an upper bound which acts as a restriction on the types that can be used. Since this upper bound is a superclass, the type that is actually used has to be that class itself or a subclass of the upper bound.

General form of bounded type parameter

To declare a bounded type parameter, list the type parameter's name, followed by the extends keyword, followed by its upper bound.

T extends superclass

In the example used above the upper bound has to be the Number class, as Number is the superclass of all the numeric wrapper classes. Thus in that case your bounded type parameter will be - T extends Number

Example code with bounded type

public class Test<T extends Number> {
    T[] numArr;
    Test(T[] numArr){
        this.numArr = numArr;
    }
    public double getAvg(){
        double sum = 0.0;
        for(int i = 0; i < numArr.length; i++){
            sum += numArr[i].doubleValue();
        }
        double avg = sum/numArr.length;
        return avg;
    }
}

Now you won’t get a compile-time error, as you have provided the Number class as the upper bound for your generic type T. That means any type passed for the generic type T has to be Number or a subclass of Number. Since the doubleValue() method is defined in the Number class, it is available in every subclass of Number through inheritance, so there is no compile-time error.
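As a quick usage sketch (the sample arrays below are just illustrative data), the bounded class now works with any subclass of Number but rejects non-numeric types at compile time-

Integer[] intArr = {1, 2, 3, 4, 5};
Double[] doubleArr = {1.5, 2.5, 3.5};

Test<Integer> intTest = new Test<>(intArr);
Test<Double> doubleTest = new Test<>(doubleArr);

System.out.println("Integer average - " + intTest.getAvg());   // 3.0
System.out.println("Double average - " + doubleTest.getAvg()); // 2.5

// Test<String> strTest = new Test<>(new String[]{"a"}); // compile-time error, String is not a subclass of Number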

Multiple Bounds in Java generics

A type parameter can have multiple bounds:
<T extends B1 & B2 & B3>

A type variable with multiple bounds is a subtype of all the types listed in the bound. If one of the bounds is a class, it must be specified first. For example:

class A { /* ... */ }
interface B { /* ... */ }
interface C { /* ... */ }

class D <T extends A & B & C> { /* ... */ }

Not specifying the bounds in this order will result in a compile-time error.
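For instance, listing the class A after one of the interfaces, as in the sketch below, will not compile-

// Compile-time error - the class A must be specified before the interfaces B and C
class E <T extends B & A & C> { /* ... */ }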

Generic Methods and Bounded Type Parameters

You can also use bounded types with generic methods. Let’s see an example where it becomes necessary to use bounded types. Consider a method where you want to count the number of elements in an array greater than a specified element elem.

public static <T> int countGreaterThan(T[] anArray, T elem) {
    int count = 0;
    for (T e : anArray)
        if (e > elem)  // compiler error
            ++count;
    return count;
}

This method will result in a compile-time error because the greater than (>) operator can be used only with primitive types such as short, int, double, long, float, byte and char. It can’t be used to compare objects; you have to use types that implement the Comparable interface in order to compare objects. Thus, the Comparable interface becomes the upper bound in this case.

Code with upper bound

public class Test{
    public <T extends Comparable<T>> int countGreaterThan(T[] anArray, T elem) {
        int count = 0;
        for (T e : anArray){
            if (e.compareTo(elem) > 0) {
                ++count;
            }
        }
        return count;
   }
}

You can use the following code to run it -

Test test = new Test();
Integer[] numArr = {5, 6, 7, 1, 2};
int count = test.countGreaterThan(numArr, 5);
System.out.println("count - " + count);

Output

count - 2

That's all for this topic Bounded Type Parameter in Java Generics. If you have any doubt or any suggestions to make please drop a comment. Thanks!

Reference - https://docs.oracle.com/javase/tutorial/java/generics/boundedTypeParams.html


Related Topics

  1. Generic Class, Interface And Generic Method in Java
  2. Type Erasure in Java Generics
  3. Covariant Return Type in Java
  4. Lambda Expressions in Java 8
  5. Spliterator in Java
