What happens when a Java object is created?

As software developers, we create objects in Java all the time; almost every task is performed by an object of some class. But do you know what happens behind the scenes when we create an object?

In this post, we will discuss the several things that happen, in order, to ensure the object is constructed properly.

Introduction

Before creating a Java object, the class byte code (.class file) must be loaded from the file system into memory. Locating the byte code for a given class name and converting that code into a java.lang.Class instance is known as class loading. Exactly one Class instance is created per loaded class.

Objects in Java are created on the heap. An object is created based on its class; a class is a template that defines the state and behavior of its objects.

When a Java object is created, the following steps happen one by one:

    • The JVM allocates memory for the reference variable (typically 4 or 8 bytes, depending on the JVM) and assigns it the default value null.
    • The JVM verifies whether class loading is done; if the class is already loaded it skips this step, otherwise it performs class loading.
    • At the time of class loading, memory is allocated for any static variables.
    • At the time of class loading, if the class contains any String literals, a new String object is created to represent each literal. This may not happen if the same String has previously been interned.
    • By using the new operator, memory for the object is allocated inside the heap.
    • At the time of object creation, memory for any instance variables is allocated inside the object's memory.
    • The object's memory address is assigned to the reference variable created in the first step.
    • If there is not sufficient space available to allocate memory for the object, creation of the class instance completes abruptly with an OutOfMemoryError. Otherwise, all the instance variables in the new object, including those declared in superclasses, are initialized to their default values.

When an object is created, memory is allocated to hold the object properties. An object reference pointing to that memory location is also created. To use the object in the future, that object reference must be stored as a local variable or as an object member variable.

There is no limit on how many objects from the same class can be created. Code and static variables are stored only once, no matter how many objects are created. Memory is allocated for the object member variables when the object is created. Thus, the size of an object is determined not by its code’s size but by the memory it needs for its member variables to be stored.
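
A small illustrative sketch of that point (the Counter class and its fields are made up for this example): the static field is stored once and shared by all instances, while each object gets its own copy of the instance field.

public class Counter {
    static int created = 0; // stored once, shared by every Counter object
    int id;                 // stored per object, contributes to each object's size

    Counter() {
        created++;
        id = created;
    }

    public static void main(String[] args) {
        Counter a = new Counter();
        Counter b = new Counter();
        System.out.println(a.id + " " + b.id + " " + Counter.created); // prints: 1 2 2
    }
}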

We have discussed what happens when a Java object is created; now let's check the ways to create an object in Java. Following are some of the ways in which you can create objects in Java:

Creating an object with the new keyword

In most cases, new objects are created using the new keyword. This is the most basic way to create an object: by calling a constructor (default or parameterized). In the example below, we create an object of AppTest when the program starts running.

public class AppTest{
    public static void main(String[] args) {
        AppTest test = new AppTest();
    }
}

Creating an object with newInstance()

If the name of the class is known and it has a public default constructor, we can create an object using Class.forName(). Class.forName() actually loads the class in Java but doesn't create any object; to create an object of the class, you have to use the newInstance() method of the Class instance. (Note that Class.newInstance() has been deprecated since Java 9 in favor of Constructor.newInstance(), covered later in this post.)

public class AppTest{

    public static void main(String[] args) {
        try {
            Class<?> clazz = Class.forName("com.coddersdesks.AppTest");
            AppTest test = (AppTest) clazz.newInstance();
            test.print("hello this is a test with newInstance()");
        }catch (Exception e){
            e.printStackTrace();
        }

    }

    public void print(String message){
        System.out.println("message "+message);
    }
}
Output:

message hello this is a test with newInstance()

Creating an object using the clone() method

Whenever we call the clone() method on any Java object, the JVM creates a new object and copies all the content of the old object into it (read more about shallow copy vs deep copy). Creating an object using the clone() method does not invoke any constructor. To use clone() on an object, its class needs to implement Cloneable and override the clone() method.

Cloning is not fully automatic, but there is some help: all Java objects inherit the protected clone() method from the Object class. This base method allocates the memory and does a field-by-field (shallow) copy of the object's state.

public class Student implements Cloneable{

	 int rollno;
	
	 String name;
	
	 Course course;

	public Student(int rollno, String name, Course course) {
		super();
		this.rollno = rollno;
		this.name = name;
		this.course = course;
	}

	@Override
	public String toString() {
		return "Student [rollno=" + rollno + ", name=" + name + ", course=" + course + "]";
	}
	
	@Override
	protected Object clone() throws CloneNotSupportedException{
		return super.clone();
	}
}
  1. Here we are creating the clone from an existing object, not a brand-new object.
  2. To support cloning, the class needs to implement the Cloneable interface; otherwise clone() will throw CloneNotSupportedException.

The clone() method copies the object's memory in one operation, which can be faster than creating an object with the new keyword and copying each variable. So, if you need to create lots of objects of the same type, performance may be better if you create one object and clone new ones from it. A short usage sketch follows.
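
This is a minimal usage sketch. The Course class is an assumption (the original snippet doesn't show it), and the caller sits in the same package as Student so that the protected clone() is accessible.

class Course {
    String name;
    Course(String name) { this.name = name; }

    @Override
    public String toString() { return name; }
}

public class CloneDemo {
    public static void main(String[] args) throws CloneNotSupportedException {
        Student s1 = new Student(1, "Alice", new Course("Math"));
        Student s2 = (Student) s1.clone(); // no constructor is invoked here
        System.out.println(s2);
        // super.clone() makes a shallow copy: both students share the same Course
        System.out.println(s1.course == s2.course); // true
    }
}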

Re-creating an object using deserialization

I hope you are aware of serialization and deserialization in Java. If you want to recall it you can read Serialization in Java using Serializable Interface. Do you know there is another way as well? Serialization with Java Externalizable Interface.

So, whenever there is a need to manage an object's state (properties), we can use serialization. The term object serialization refers to the act of converting the object to a byte stream. Common uses of serialization are sending an object over the network, persisting the object into a database using an ORM such as Hibernate, or writing the object to the file system.

During deserialization, the object can be re-created from that stream of bytes. The only requirement is that the same class is available both when the object is serialized and when the object is re-created. If that happens on different application servers, the same class must be available on both servers. The same class means the same version of the class; otherwise, the object won't be able to be re-created.

When a class is modified, there could be a problem re-creating those objects that were serialized using an earlier version of the class.
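
As a quick illustrative sketch (the User class and the file name are made up for this example), an object can be written to a byte stream and re-created from it like this:

import java.io.*;

public class SerializationDemo {

    static class User implements Serializable {
        private static final long serialVersionUID = 1L;
        String name;
        User(String name) { this.name = name; }
    }

    public static void main(String[] args) throws IOException, ClassNotFoundException {
        // Serialization: convert the object to a byte stream on disk
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream("user.ser"))) {
            out.writeObject(new User("Alice"));
        }
        // Deserialization: re-create the object; no constructor is called
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream("user.ser"))) {
            User copy = (User) in.readObject();
            System.out.println(copy.name); // Alice
        }
    }
}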

Creating an object using the newInstance() method of Constructor

The newInstance() method of java.lang.reflect.Constructor is similar to the newInstance() method of Class and can also be used to create objects. Using it, we can call a parameterized constructor as well as a private one.

Both newInstance() methods are known as reflective ways to create objects. In fact, the newInstance() method of Class internally uses the newInstance() method of the Constructor class. Refer to the code snippet below for details.

import java.lang.reflect.Constructor;
import java.lang.reflect.InvocationTargetException;

public class AppTest{

    private String testName;

    public AppTest(){

    }

    public AppTest(String testName){
        this.testName = testName;
    }

    public static void main(String[] args) throws NoSuchMethodException, IllegalAccessException, 
      InvocationTargetException, InstantiationException {

        Constructor<AppTest> constructor = AppTest.class.getDeclaredConstructor(); // default constructor
        AppTest test = constructor.newInstance();
        Constructor<AppTest> constructor1 = AppTest.class.getDeclaredConstructor(String.class); // parameterized constructor

        test.print("called using constructor newInstance() and default constructor");
        AppTest test1 = constructor1.newInstance("test name passed to constructor");
        test1.print("called using constructor newInstance() and parameterized constructor");
    }

    public void print(String message){
        System.out.println("message "+message);
        if(this.testName != null){
            System.out.println(this.testName);
        }
    }
}

When you run the above code it will print the following.

message called using constructor newInstance() and default constructor
message called using constructor newInstance() and parameterized constructor
test name passed to constructor

Here we have discussed what happens behind the scenes when we create a new Java object, and the various ways to create one.

Happy Learning !!

Fundamental of MongoDB and an introduction to MongoDB CRUD operations

In this post, we will discuss the fundamentals of MongoDB and I will also introduce you to MongoDB CRUD operations with examples.

Before we head towards the CRUD operations and their example let’s understand some of the key terminologies.

Key terminologies

Database: Inside a Mongo server we can have multiple databases, similar to SQL where we have multiple schemas under one database server. Inside a database, all the collections are stored.

Collection: A collection in MongoDB holds documents; it is similar to a table in SQL but with no restricted schema.

The term no restricted schema means MongoDB does not enforce a schema to be followed by all the documents inside a particular collection, unlike SQL, where we have to define a table schema and all the records have to follow it. However, we can have a schema, and MongoDB allows you to validate documents against it, as shown in the sketch below.
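
For instance, a collection can be created with a $jsonSchema validator, and MongoDB will then reject documents that don't match it. A minimal sketch (the inventory collection and its fields are made up for this example):

db.createCollection("inventory", {
  validator: {
    $jsonSchema: {
      bsonType: "object",
      required: ["name", "price"],
      properties: {
        name: { bsonType: "string" },
        price: { bsonType: "number" }
      }
    }
  }
})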

Document: A document in MongoDB is a single record inside a collection. The document is JSON, which Mongo stores in a binary form called BSON. It is like a row in a table in RDBMS terms.

BSON: As per Wikipedia, the name "BSON" is based on the term JSON and stands for "Binary JSON". It is a binary form for representing simple or complex data structures including associative arrays (also known as name-value pairs), integer-indexed arrays, and a suite of fundamental scalar types. BSON originated in 2009 at MongoDB.

It’s time to move to the CRUD operations in MongoDB. In this section, we will discuss the following.

  • Create a database
  • Display the list of all the databases
  • The first operation of CRUD, insert record in a collection
  • Display all the collections inside a database
  • Read the document from the collection, the second operation of CRUD
  • The third operation of CRUD, update documents
  • The fourth operation of CRUD, delete documents
  • Drop collection
  • Drop database

I will use Mongo shell in this section for all the examples, if you haven’t configured it, you can read Setting up MongoDB and an introduction to Mongo shell.

Create a database

Create or switch to a database using use <database name>. We will create a database shop using the command use shop.

use shop
switched to db shop

Display all the databases

To display all the databases we can use “show dbs” command.

show dbs
admin 0.000GB
analytics 0.003GB
config 0.000GB
local 0.000GB

Notice the shop db we have created above is not appearing here. This is because we haven’t created a single collection inside this database. No worries, let’s create one.

Insert record in a collection

db.products.insertOne({productName:'Product 1', price:299, receivedOn:new Date()})
{
"acknowledged": true,
"insertedId" : ObjectId("5ed06aa81dba2aa6bdb7a6ab")
}
> show dbs
admin 0.000GB
analytics 0.003GB
config 0.000GB
local 0.000GB
shop 0.000GB
test 0.000GB

Now you can see that after creating a document, the shop db appears in the database list. Let's create more documents in the products collection.

db.products.insertMany([{productName:'Product 2', price:199, receivedOn:new Date()},{productName:'Product 3', price:399, receivedOn:new Date()},{productName:'Product 4', price:499, receivedOn:new Date()},{productName:'Product 5', price:599, receivedOn:new Date()}])
{
"acknowledged": true,
"insertedIds" : [
ObjectId("5ed06baf1dba2aa6bdb7a6ac"),
ObjectId("5ed06baf1dba2aa6bdb7a6ad"),
ObjectId("5ed06baf1dba2aa6bdb7a6ae"),
ObjectId("5ed06baf1dba2aa6bdb7a6af")
]
}

Display collections inside a database

To display all the collections inside a database, we can use the "show collections" command like below.

show collections
products

Read/Fetch document from the collection

To read documents, we can use the find method like below. MongoDB also provides an aggregation framework for reading documents, which will be covered in a later post.


db.products.find().pretty()
{
"_id" : ObjectId("5ed06aa81dba2aa6bdb7a6ab"),
"productName" : "Product 1",
"price" : 299,
"receivedOn" : ISODate("2020-05-29T01:51:36.729Z")
}
{
"_id" : ObjectId("5ed06baf1dba2aa6bdb7a6ac"),
"productName" : "Product 2",
"price" : 199,
"receivedOn" : ISODate("2020-05-29T01:55:59.532Z")
}
{
"_id" : ObjectId("5ed06baf1dba2aa6bdb7a6ad"),
"productName" : "Product 3",
"price" : 399,
"receivedOn" : ISODate("2020-05-29T01:55:59.532Z")
}
{
"_id" : ObjectId("5ed06baf1dba2aa6bdb7a6ae"),
"productName" : "Product 4",
"price" : 499,
"receivedOn" : ISODate("2020-05-29T01:55:59.532Z")
}
{
"_id" : ObjectId("5ed06baf1dba2aa6bdb7a6af"),
"productName" : "Product 5",
"price" : 599,
"receivedOn" : ISODate("2020-05-29T01:55:59.532Z")
}

Update documents

Like find and insert, we can use an update method to update a document, like below.

db.products.updateOne({productName:"Product 1"},{$set:{isAvailable:true}})
{ "acknowledged" : true, "matchedCount" : 1, "modifiedCount" : 1 }

We can verify the update operation using the find method.

db.products.find().pretty()
{
"_id" : ObjectId("5ed06aa81dba2aa6bdb7a6ab"),
"productName" : "Product 1",
"price" : 299,
"receivedOn" : ISODate("2020-05-29T01:51:36.729Z"),
"isAvailable" : true
}
{
"_id" : ObjectId("5ed06baf1dba2aa6bdb7a6ac"),
"productName" : "Product 2",
"price" : 199,
"receivedOn" : ISODate("2020-05-29T01:55:59.532Z")
}
{
"_id" : ObjectId("5ed06baf1dba2aa6bdb7a6ad"),
"productName" : "Product 3",
"price" : 399,
"receivedOn" : ISODate("2020-05-29T01:55:59.532Z")
}
{
"_id" : ObjectId("5ed06baf1dba2aa6bdb7a6ae"),
"productName" : "Product 4",
"price" : 499,
"receivedOn" : ISODate("2020-05-29T01:55:59.532Z")
}
{
"_id" : ObjectId("5ed06baf1dba2aa6bdb7a6af"),
"productName" : "Product 5",
"price" : 599,
"receivedOn" : ISODate("2020-05-29T01:55:59.532Z")
}

Delete documents

Similar to update we can use the delete method to delete a document.

db.products.deleteOne({"_id" : ObjectId("5ed06baf1dba2aa6bdb7a6af")})
{ "acknowledged" : true, "deletedCount" : 1 }

The products collection after deleting a document:

db.products.find().pretty()
{
"_id" : ObjectId("5ed06aa81dba2aa6bdb7a6ab"),
"productName" : "Product 1",
"price" : 299,
"receivedOn" : ISODate("2020-05-29T01:51:36.729Z"),
"isAvailable" : true
}
{
"_id" : ObjectId("5ed06baf1dba2aa6bdb7a6ac"),
"productName" : "Product 2",
"price" : 199,
"receivedOn" : ISODate("2020-05-29T01:55:59.532Z")
}
{
"_id" : ObjectId("5ed06baf1dba2aa6bdb7a6ad"),
"productName" : "Product 3",
"price" : 399,
"receivedOn" : ISODate("2020-05-29T01:55:59.532Z")
}
{
"_id" : ObjectId("5ed06baf1dba2aa6bdb7a6ae"),
"productName" : "Product 4",
"price" : 499,
"receivedOn" : ISODate("2020-05-29T01:55:59.532Z")
}

Please note, each method mentioned above to insert, read, update, and delete has multiple variations, as listed below.

Insert: insertOne(), insertMany(), insert()
Find: findOne(), find(), findAndModify(), findOneAndDelete(), findOneAndReplace(), findOneAndUpdate()
Update: updateOne(), updateMany(), update(), replaceOne()
Delete: deleteOne(), deleteMany(), remove()

In the next post, we will discuss all the variations in detail.

Drop collection

db.collectionName.drop()

Drop database

db.dropDatabase()
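
For example, to remove the products collection and then the entire shop database we created above, from the shop database's shell context:

use shop
db.products.drop()
db.dropDatabase()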

Reference

https://docs.mongodb.com/manual/reference/method/

Happy Learning !!

Setting up MongoDB and an introduction to shell

In this post, we will install MongoDB in our local machine and we will also start working with the MongoDB shell.

We will use the community edition of MongoDB, which can be downloaded from here.

The installation process is quite simple; you can follow the steps mentioned on the official website.

Once installed successfully, add the path of the installation directory up to the bin folder to the environment variables on Windows, and run mongo --version from the command prompt. A successful installation will display the following information.

MongoDB shell version v4.2.6
allocator: tcmalloc
modules: none
build environment:
distmod: 2012plus
distarch: x86_64
target_arch: x86_64

Did you know? WiredTiger has been the default storage engine since MongoDB 3.2.

Once done with the configuration, open a command prompt and run mongo. If MongoDB is up and running, you will be connected to the Mongo shell.

If it fails to connect to MongoDB and displays an error like the one below, you need to start MongoDB manually.


MongoDB shell version v4.2.6
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
2020-05-28T11:41:46.879+0530 E QUERY Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: No connection could be made because the target machine actively refused it. :
connect@src/mongo/shell/mongo.js:341:17
@(connect):2:6
2020-05-28T11:41:46.882+0530 F - [main] exception: connect failed
2020-05-28T11:41:46.882+0530 E - [main] exiting with code 1

To start MongoDB, go to the bin folder of the installation directory and run mongod.exe. By default, MongoDB will create a data folder parallel to the bin folder to store the database details.

However, you can specify different folders to store the database and the logs by running the following command from the command prompt.

C:/Program Files/MongoDB/Server/4.2/bin/mongod --dbpath D:/mongodb/db --logpath D:/mongodb/logs/db.log

After running MongoDB successfully, let’s connect to the Mongo shell and get our hands dirty.

We will create a database and then we will insert some records using the following steps.

  1. Run use shop to create a database shop. The use command will create the database if it does not exist; otherwise, it will switch to the existing one.
  2. Now run show dbs to display the list of databases.
  3. Let's insert our first record by running the following command:
  4. db.products.insertOne({name:"A book",publishedOn:"10/11/2019",price:300})
  5. Here products is the name of the collection to be created inside the shop database if it does not exist; otherwise, a new record will be inserted into the existing collection. You can verify it using the show collections command.

Here are all commands.

use shop
switched to db shop
> db.products.insertOne({name:"A book",publishedOn:"10/11/2019",price:300})
{
"acknowledged" : true,
"insertedId" : ObjectId("5ecf642b40f67acafaf1f1f4")
}

> show collections
products

Notice the output of the insert statement above: it returns acknowledged as true and the id of the newly created document. You can verify it using the find command.

db.products.find().pretty()
{
"_id" : ObjectId("5ecf641640f67acafaf1f1f3"),
"name" : "A book",
"publishedOn" : "10/11/2019",
"price" : 300
}
>

That’s all in this post about MongoDB installation and introduction to Mongo shell.

Happy Learning !!

How to use synchronized and reentrant lock in Java

In this post, we will discuss the uses of synchronized and reentrant locks in Java.

The problem

Before jumping into the use of a synchronized block or a reentrant lock, let's first see what kind of problem they solve. Here is a simple requirement: we have to write a method that increments the value of an int 'a'. Refer to the code snippet below.

static int a = 0;
static void add() {
   for (int i = 0; i < 10000; i++)
     a++;
}

But when we call the same method using two threads, we do not get the expected value of 'a'. Here is the complete Java source.

public class SynchronizedAndReentrantLockExample {
    static int a = 0;

    static void add() {
        for (int i = 0; i < 10000; i++)
            a++;
    }

    public static void main(String[] args) throws InterruptedException {

        Thread t1 = new Thread(() ->{
            add();
        });

        Thread t2 = new Thread(() ->{
            add();
        });

        t1.start();
        t2.start();

        t1.join();
        t2.join();

        System.out.println("Final value of a is "+a);
    }
}

The expected value of a is 20000, but executing this usually prints a smaller value.

So the problem here is that when two threads call the add() method at the same time, one thread overwrites the value incremented by the other thread; the reason is that add() can be executed by two threads at the same time, and a++ is not an atomic operation.

Using synchronized to fix

We can fix the above problem using the synchronized keyword. Let's add the synchronized keyword to the add() method. As soon as we make add() synchronized, it starts working as expected.

static synchronized void add() {
  for (int i = 0; i < 10000; i++)
     a++;
}

So, once we make this method synchronized, only one thread can execute it at a time; hence it solves the problem we have. But is this the only way to use synchronized? The answer is a big NO.

Instead of making the entire method synchronized, we can use a synchronized block. Refer to the updated version of the add() method.

static void add() {
    synchronized (SynchronizedAndReentrantLockExample.class) {
        for (int i = 0; i < 10000; i++)
            a++;
    }
}

The difference between a synchronized block and a synchronized method is that with a synchronized method the entire method is locked, whereas with a synchronized block, code in the method that is not inside the block can still be executed by other threads. That can be useful in some cases.

Using these two approaches we can solve the problem, but they have limitations as well. The major one is that in both cases the lock is acquired on the class (since add() is static), which means that if two threads are running at the same time, the first thread to acquire the lock will block the second thread even if it is calling a different synchronized method.

public class SynchronizedAndReentrantLockExample {
    static int a = 0;

    static synchronized void add() {
        System.out.println("incrementing values");
        for (int i = 0; i < 10000; i++)
            a++;
        System.out.println("method add done, going to sleep now ");
        try {
            Thread.sleep(4000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
    static synchronized void print() {
        System.out.println("the value of a is "+a);
    }


    public static void main(String[] args) throws InterruptedException {

        Thread t1 = new Thread(() ->{
            add();
        });

        Thread t2 = new Thread(() ->{
            print();
            //add();
        });

        t1.start();
        t2.start();
    }
}
Output:

incrementing values
method add done, going to sleep now 
the value of a is 10000

Problem with the synchronized method and block

Here you can see that thread t1 calls the add() method, which increments the values and then sleeps for 4000 ms. Thread t2 calls another synchronized method, print(), which has nothing to do with add(). But as thread t1 acquires the lock on the class, it blocks thread t2, which is entirely independent of t1. This is the problem with using a class lock.

Remember: Every problem has a solution. So let’s fix the class lock problem. It can be fixed in a couple of ways. Let’s see how we can do that.

1. Using object lock.
2. By Using ReentrantLock.

Let's modify the add() method and introduce an object lock.

final static Object lock = new Object();

static void add() {
    synchronized (lock) {
        for (int i = 0; i < 10000; i++)
            a++;
    }
}

When using lock objects, the lock object for a class-level lock should be static, whereas for an object-level lock it is non-static. The benefit of using a lock object is that it allows other threads to execute code that is independent of this method, as opposed to the class lock, which is acquired on the entire class and blocks any other thread from executing even another synchronized method.

Using ReentrantLock

The same behavior as the object lock can be achieved using ReentrantLock, the built-in class in the java.util.concurrent.locks package, which implements the Lock and Serializable interfaces.

public class ReentrantLock implements Lock, Serializable

Here is how we can rewrite the add() method using ReentrantLock. We create a Lock object by instantiating ReentrantLock.

static Lock lock = new ReentrantLock();

static void add() {
    lock.lock();
    for (int i = 0; i < 10000; i++)
        a++;
    lock.unlock();
}

The benefit of using ReentrantLock is that we don't have to handle the low-level details of synchronization ourselves.

An important point while using a reentrant lock: the lock acquisition should be wrapped in a try/finally block, as sketched below, to ensure the lock is released even if an exception is thrown; otherwise, other threads could be blocked forever.
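
A minimal sketch of that pattern applied to our add() method:

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

static final Lock lock = new ReentrantLock();

static void add() {
    lock.lock();
    try {
        for (int i = 0; i < 10000; i++)
            a++;
    } finally {
        lock.unlock(); // always released, even if the loop body throws
    }
}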

Difference between the synchronized and reentrant lock

As the Javadoc describes it, ReentrantLock is a reentrant mutual exclusion Lock with the same basic behavior and semantics as the implicit monitor lock accessed using synchronized methods and statements, but with extended capabilities.

One such capability is fairness: ReentrantLock can be made fair, whereas synchronized blocks are always unfair. We can create the lock object by passing the fairness value as true/false in the constructor.

Lock lock = new ReentrantLock(true);

The fair locks favor granting access to the longest-waiting thread.

Recommended Read

An introduction to Multithreading

Happy Learning !!

An introduction to Multithreading

Multithreading in Java is something that not every developer has explored or gotten a chance to work with. But even if you have not used it, it is recommended to know the basics. In this post, we will cover the following topics.

What is multithreading

Generally, programs execute sequentially, which means the code runs line by line. A second method call has to wait for the first method to complete, even if the output of the first method is of no use to the second method, and so on.

Refer to the piece of code below:

void process() {
    logRequest();
    saveData();
}

Here, say the logRequest() method logs the metadata of the received request and the saveData() method saves the data into the database. In sequential programming, the execution of saveData() is blocked by logRequest().

Multithreading is a way to perform multiple tasks simultaneously. Using multithreading, we can break a huge task into multiple smaller tasks and run them at the same time.

For example, consider copying data from one drive to another in Windows. While a process starts copying the data, it does not stop the user from performing other actions; the user can do whatever he wants, which means the system allows us to run multiple processes at the same time.
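
Going back to the process() example above, the two independent steps could run concurrently instead of sequentially. A minimal sketch (logRequest and saveData are stand-ins for real work):

public class ProcessDemo {

    static void logRequest() { System.out.println("logging request metadata"); }

    static void saveData() { System.out.println("saving data to the database"); }

    // Run both steps concurrently instead of one after the other
    static void process() throws InterruptedException {
        Thread logger = new Thread(ProcessDemo::logRequest);
        Thread saver = new Thread(ProcessDemo::saveData);
        logger.start();
        saver.start();
        logger.join(); // wait for both threads to finish
        saver.join();
    }

    public static void main(String[] args) throws InterruptedException {
        process();
    }
}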

Why do we use multithreading?

The main purpose of multithreading is to allow the simultaneous execution of two or more parts of a program to make maximum use of CPU time. A multithreaded program contains two or more parts that can run concurrently. Each such part of a program is called a thread.

Process and Thread

Process

  • Usually independent
  • Carries more state information than a thread
  • Has its own separate address space
  • Interacts with other processes only through system IPC

Thread

  • A subset of a process
  • Multiple threads can exist within a process
  • Threads share process state, memory, etc.
  • Threads share their address space

Multicore vs multiprocessor

The major difference between multicore and multiprocessor is that multicore refers to a single CPU with multiple execution units (cores), while multiprocessor refers to a system that has two or more CPUs.

Advantages and disadvantages of multithreading

Advantages of multithreading

  • We can design more responsive applications by allowing multiple operations at a time.
  • We can achieve better resource utilization; generally, Java programs run on a single thread, but there may be multiple processor cores available that are not used to their full strength.
  • Improved performance

Disadvantages of multithreading

Of course, multithreading has a lot of advantages, but it also has a flip side. It is not always better and can turn into a nightmare if not implemented correctly.

  • Threads manipulate data located in the same memory area because they belong to the same process, and we have to deal with this fact through synchronization.
  • Switching between threads is expensive: the CPU has to save the local data, instruction pointer, etc. of the current thread and load those of the next thread.
  • It is difficult to develop a multithreaded application; bugs are hard to detect and even harder to fix.

That's it. Here we have discussed what processes and threads are, the difference between multicore and multiprocessor, and the pros and cons of multithreading.

Read more about Java

Happy Learning !!

Spring circular dependency with resolution

Problem:

Circular Dependency

Exception:

org.springframework.beans.factory.BeanCurrentlyInCreationException

Caused by: org.springframework.beans.factory.BeanCurrentlyInCreationException: Error creating bean with name 'reportService': Bean with name 'reportService' has been injected into other beans [dataExportService] in its raw version as part of a circular reference, but has eventually been wrapped. This means that said other beans do not use the final version of the bean. This is often the result of over-eager type matching - consider using 'getBeanNamesOfType' with the 'allowEagerInit' flag turned off, for example.
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:622)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:515)
	at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:320)
	at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
	at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:318)
	at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199)
	at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:277)
	at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1247)
	at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1167)
	at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredFieldElement.inject(AutowiredAnnotationBeanPostProcessor.java:593)
	... 45 more

The most common reason for this exception is a circular dependency, where two or more classes declare each other as a dependency.

Like:

public class Service1 {

    @Autowired
    private Service2 service2;
}

public class Service2 {

    @Autowired
    private Service1 service1;
}

Solution:

To resolve this exception, check whether you can remove the dependency from one of the classes. A circular dependency is usually a design problem where responsibilities are not properly separated.

If a redesign is not an option, there are other workarounds as well.

The @Lazy annotation

As the annotation name suggests, we can break the cycle by initializing one of the beans lazily: instead of fully initializing the bean, Spring creates a proxy and injects it into the other bean. The injected bean is only fully created when it's first needed. We can refactor our Service1 like below.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Lazy;
import org.springframework.stereotype.Service;

@Service
public class Service1 {

    private Service2 service2;

    @Autowired
    public Service1(@Lazy Service2 service2) {
        this.service2 = service2;
    }
}

Using Setter/Field Injection

Simply change the way the beans are wired to use setter injection (or field injection) instead of constructor injection, as also proposed by the Spring docs. This way Spring creates the beans, but the dependencies are only injected when they are needed.

Let's refactor Service1 to use setter injection.

public class Service1 {

    private Service2 service2;

    @Autowired
    public void setService2(Service2 service2) {
        this.service2 = service2;
    }

    public Service2 getService2() {
        return service2;
    }
}

The @PostConstruct annotation

Another savior from circular dependencies is the @PostConstruct annotation. We can use it to inject the dependency after the bean has been initialized.

To use it, we can refactor both Service1 and Service2 as below.

@Service
public class Service2 {
	
	private Service1 service1;
	
	public void setService1(Service1 service1) {
		this.service1 = service1;
	}
}


import javax.annotation.PostConstruct;

@Service
public class Service1 {

	@Autowired
	private Service2 service2;

	@PostConstruct
	public void init() {
		service2.setService1(this);
	}

	public Service2 getService2() {
		return service2;
	}
}

Read more about Java Exceptions.

Happy Learning !!

JaversException COMMITTING_TOP_LEVEL_VALUES_NOT_SUPPORTED

Problem

I am using Javers to audit data in a Spring Boot and MongoDB application. In one case, where I have a base class and a child class that extends it, I get the following exception while persisting data to the database.

Exception

JaversException COMMITTING_TOP_LEVEL_VALUES_NOT_SUPPORTED: Committing top-level ValueTypes like 'PolicyDocument' is not supported. You can commit only Entity or ValueObject instance.
at org.javers.core.JaversCore.assertJaversTypeNotValueTypeOrPrimitiveType(JaversCore.java:96)
at org.javers.core.JaversCore.commit(JaversCore.java:80)
at org.javers.spring.auditable.aspect.JaversCommitAdvice.commitObject(JaversCommitAdvice.java:66)
at java.util.Arrays$ArrayList.forEach(Unknown Source)
at java.util.Collections$UnmodifiableCollection.forEach(Unknown Source)
at org.javers.spring.auditable.aspect.springdata.AbstractSpringAuditableRepositoryAspect.lambda$onSave$0(AbstractSpringAuditableRepositoryAspect.java:31)
at java.util.Optional.ifPresent(Unknown Source)
at org.javers.spring.auditable.aspect.springdata.AbstractSpringAuditableRepositoryAspect.onSave(AbstractSpringAuditableRepositoryAspect.java:30)
at org.javers.spring.auditable.aspect.springdata.JaversSpringDataAuditableRepositoryAspect.onSaveExecuted(JaversSpringDataAuditableRepositoryAspect.java:41)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethodWithGivenArgs(AbstractAspectJAdvice.java:644)
at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethod(AbstractAspectJAdvice.java:626)
at org.springframework.aop.aspectj.AspectJAfterReturningAdvice.afterReturning(AspectJAfterReturningAdvice.java:66)
at org.springframework.aop.framework.adapter.AfterReturningAdviceInterceptor.invoke(AfterReturningAdviceInterceptor.java:56)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:175)
at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:93)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:212)
at com.sun.proxy.$Proxy161.save(Unknown Source)

The Repository

@JaversSpringDataAuditable
public interface PolicyDocumentRepository extends MongoRepository<PolicyDocument, String> {
}

The base class

@Data
@NoArgsConstructor
@AllArgsConstructor
@EqualsAndHashCode
@Value
public class PolicyDocumentBase {

    @Id
    private String policyId;

    // some other properties
}

The child class

@Document(collection = "policyDocuments")
@Data
@NoArgsConstructor
@AllArgsConstructor
@EqualsAndHashCode(callSuper = true)
public class PolicyDocument extends PolicyDocumentBase {

    private String status;

    // some other properties
}

Solution:

Update the base class by replacing the @Value annotation with org.javers.core.metamodel.annotation.ValueObject (@ValueObject), as below.

@Data
@NoArgsConstructor
@AllArgsConstructor
@EqualsAndHashCode
@ValueObject
public class PolicyDocumentBase {

    @Id
    private String policyId;

    // some other properties
}

Reference

https://javers.org/documentation/

Happy Learning !!

HTTP Cross origin resource sharing

In this tutorial, we will discuss Cross-Origin Resource Sharing (CORS). In the modern world, with rapid changes in technology, the way applications interact with each other has evolved just as much as the way we interact with people. With the increased use of JavaScript and TypeScript, we often have to establish communication between two or more applications. But the modern browser's same-origin policy restricts one application from making an HTTP request to a different origin.

What is the same-origin policy?

As per Wikipedia, in computing the same-origin policy is an important concept in the web application security model. Under the policy, a web browser permits scripts contained in a first web page to access data in a second web page, but only if both web pages have the same origin.

Cross-Origin Resource Sharing enables web clients to make HTTP requests to servers hosted on different origins. CORS is a unique web technology in that it has both a server-side and a client-side component. The server-side component configures which types of cross-origin requests are allowed, while the client-side component controls how cross-origin requests are made.

So if the browsers enforce the same-origin policy, how does CORS work? The magic lies in the request and response headers. The browser and the server use HTTP headers to communicate how cross-origin requests should behave. Using the response headers, the server can indicate which clients can access the API, which HTTP methods or HTTP headers are allowed, and whether cookies are allowed in the request.
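
As an illustrative sketch of the server-side component (a plain Java servlet; the class name and response body are made up for this example), a server can opt in to cross-origin access by setting a response header:

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class TimeServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // Tell the browser which origins may read this response ("*" allows any origin)
        resp.setHeader("Access-Control-Allow-Origin", "*");
        resp.setContentType("text/plain");
        resp.getWriter().println("2020-05-29T01:51:36Z");
    }
}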

We can break down the steps to process a CORS request and response as follows.

  1. The CORS request is initiated.
  2. The browser includes additional HTTP headers on the request before sending the request to the server.
  3. The server includes HTTP headers in the response that indicates whether the request is allowed.
  4. If the request is allowed, the browser sends the response to the client code.

If the headers returned by the server don’t exist or aren’t what the browser expects, the response is rejected and the client can’t view the response.

Let’s create a simple example to call two different APIs using JQuery and see the response.

 
<html>
<head> 
<script type="text/javascript" src="https://code.jquery.com/jquery-3.4.1.min.js" ></script>
<script type="text/javascript">
let url ='http://worldtimeapi.org/api/timezone/asia/kolkata.txt';
//let url ='https://in.yahoo.com/';
$.ajax({
  url: url,
   success: function(data){
      $('#data').html(data);
   }
});
</script>
</head>
<body>
<p>Date Time detail for</p>
<span id="data"></span>
<p id="datetime"> </p>
</body> 
</html>
 

Save the above code in an HTML file and open it in Chrome. When we use the World Time API as the URL for our GET request, it executes successfully and we can see the detailed response in the browser as below.

But if we comment out the World Time API URL, uncomment the Yahoo URL, and re-run the file, we won't see any response in the browser. If you check the browser console, you will see an error message with the details:

“Access to XMLHttpRequest at ‘https://in.yahoo.com/’ from origin ‘null’ has been blocked by CORS policy: No ‘Access-Control-Allow-Origin’ header is present on the requested resource.”

Here you can see that the World Time API allows requests from a different origin, but Yahoo does not.

Happy Learning !!

Database transaction Isolation Levels and their effect

In this tutorial, we will discuss database transaction isolation levels, their types, and their impact.

As developers, when we start building an application that manages data, we use a database. But do we know what databases do apart from executing queries, particularly their efforts to maintain the isolation aspect of ACID? A common but incorrect assumption is that transactions are only related to data manipulation and not to queries. Transaction isolation is all about queries, and the consistency and completeness of the data retrieved by them. This is how it works.

Isolation gives the querying user the feeling that he owns the database. It does not matter how many concurrent users work with the same database, the same schema, or even the same data. These other users can generate new data, modify existing data, or perform any other action. The querying user must be able to get a complete, consistent picture of the data, unaffected by other users' actions.

Consider the following scenario, which is based on a Customer table that has 1,000,000 rows:

  • 9:00: User A starts the query "SELECT * FROM Customer", which reads all the rows of the table. Suppose this query takes approximately five minutes to complete, as the database must fully scan the table's blocks from start to end and extract the rows. This is called a FULL TABLE SCAN query and is not recommended from a performance perspective.
  • 9:01: User B updates the last row in the Customer table and commits the change.
  • 9:04: User A's query process arrives at the row modified by User B. What will happen?

Any guess? Will User A get the original row value or the new row value? The new row value is legitimate and committed, but it was updated after User A’s query started.

The answer is not obvious and depends entirely on the isolation level of the transaction. There are four isolation levels, as follows:

Read Uncommitted

The changes made by User B will be visible to User A. This isolation level allows dirty reads, which means the data read may not be consistent with other parts of the table or the query and may not yet have been committed. On the other hand, this isolation level ensures the quickest performance, as data is read directly from the table's blocks with no further processing, verification, or other validation. The process is quick and the data is as dirty as it can get.

In the Read Uncommitted isolation level, User A will see the changes made by User B even though User B has not yet committed them.

Read Committed

User A will not see the change made by User B. This is because in the Read Committed isolation level, the rows returned by a query are the rows that were committed when the query was started. The change made by User B was not present when the query started, and therefore will not be included in the query result.

Repeatable Read

User B changes will not be visible to User A. This is because, in the Repeatable Read isolation level, the rows returned by a query are the rows that were committed when the transaction was started. The change made by User B was not present when the transaction was started, and therefore will not be included in the query result.

Serializable

This isolation level specifies that all transactions occur in a completely isolated fashion, meaning as if all transactions in the system were executed serially, one after the other. The DBMS can execute two or more transactions at the same time only if the illusion of serial execution can be maintained.
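
In JDBC, for example, the isolation level can be chosen per connection. A minimal sketch (the JDBC URL, credentials, and driver are placeholders for a real database):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class IsolationDemo {
    public static void main(String[] args) throws Exception {
        // URL and credentials are placeholders
        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "password")) {
            // Decide how much of other users' concurrent work this transaction may see
            con.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
            con.setAutoCommit(false);
            try (Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM Customer")) {
                if (rs.next()) {
                    System.out.println("customers: " + rs.getLong(1));
                }
            }
            con.commit();
        }
    }
}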

Reference

Isolation (database_systems)
Database isolation levels and their effects on performance

Happy Learning !!

Understanding JSON Schema

JSON Schema is a vocabulary that allows you to annotate and validate JSON documents. It is a powerful tool for validating the structure of JSON data.

If you've ever used XML Schema, you probably already know what a schema is; but if all that sounds unfamiliar, you are in the right place. To define what JSON Schema is, we should probably first learn what JSON is.

JSON stands for "JavaScript Object Notation", a simple data interchange format. It began as a notation for the World Wide Web. Since JavaScript exists in most web browsers and JSON is based on JavaScript, it's very easy to support there. However, it has proven useful and simple enough that it is now used in many other contexts that don't involve web surfing.

For more detail read JSON.

Since JSON Schema is itself JSON, it's not always easy to tell whether something is JSON Schema or just an arbitrary chunk of JSON. The $schema keyword is used to declare that something is JSON Schema. It's generally good practice to include it, though it is not required.

{ "$schema": "http://json-schema.org/schema#" }

Let’s try to understand the following JSON schema declaring a property name.

 
"name" : {
      "type" : "string",
      "description" : "person name"
   }

As you can see, here we have a JSON property "name" of type string, which will be used to hold a person's name.

So far we have understood why we need JSON schema. Now let’s dive deep into JSON Schema.

Type-specific keywords:

The type keyword is fundamental to JSON Schema. It specifies the data type for a JSON property.

Let’s look into type in more detail with attributes like length, regular expression, and format as well.

String

The string type is used for textual data. It can also contain Unicode characters.

{ "type": "string" }

The following are valid strings:

"This is a string"
"Déjà vu"
""
"63"

Length: The length of a string can be constrained using the minLength and maxLength keywords. For both keywords, the value must be a non-negative integer.

{
  "type": "string",
  "minLength": 2,
  "maxLength": 3
}

Regular Expressions: The pattern keyword is used to restrict a string to a particular regular expression.

Format: The format keyword allows for basic semantic validation on string values that are commonly used. This allows values to be constrained beyond what the other tools in JSON Schema, including Regular Expressions, can do.

{
"format":"date-time"
}

Numeric Types

There are two numeric types in JSON Schema: integer and number. They share the same validation keywords.

integer: The integer type is used for integral numbers.

{ "type": "integer" }

Examples of valid integers:

45
-1

number: The number type is used for any numeric type, either integers or floating-point numbers.

{ "type": "number" }

The following are sample valid numbers:

42
-1

A simple floating-point number:

5.0

The exponential notation also works:

2.99792458e8

Note: Numbers as strings are rejected. For example, "42" will be rejected as a number.

multipleOf: Numbers can be restricted to a multiple of a given number, using the multipleOf keyword. It may be set to any positive number.

{
    "type"       : "number",
    "multipleOf" : 10
}

The following sample values are valid, but 23 would be rejected, as it is not a multiple of 10.

0
10
20

Range: Ranges of numbers are specified using a combination of the minimum and maximum keywords.

{
      "type" : "integer",
      "maximum" : 10000,
      "minimum" : 1
    }

Object

Objects are the mapping type in JSON: they map keys to values. In JSON, the keys must always be strings. Each key-value pair is conventionally referred to as a property of the given JSON object.

{ "type": "object" }

// Sample valid JSON Objects

// Example 1
{
   "key1"  "value 1",
   "key2" : "value 2"
}

// Example 2
{
   "name"  : "user 1",
   "email" : "user1@email.com",
   "age"   : 19
}

// Sample invalid JSON Objects

// Example 1
{
    0.01 : "cm"
    1    : "m",
    1000 : "km"
}

// Example 2
["An", "array", "not", "an", "object"]

Properties: The properties (key-value pairs) of an object are defined using the properties keyword. The value of properties is an object, where each key is the name of a property and each value is a JSON schema used to validate that property.

For example, let’s say we want to define a JSON schema for an address made up of a street no, street name, and street type:

{
  "type": "object",
  "properties": {
    "street_no":   { "type": "number" },
    "street_name": { "type": "string" },
    "street_type": { "type": "string",
                     "enum": ["Street", "Avenue", "Boulevard"]
                   }
  }
}

Now if we provide the following JSON it will be validated successfully.

{ "street_no": 1600, "street_name": "street name 1", "street_type": "Avenue" }

However, if we provided the following JSON it will fail during validation.

{ "street_no": "1600", "street_name": "street name 1", "street_type": "Avenue" }

Here we provide street_no as a string, which will be rejected by the JSON schema, as it expects a number without quotes.

In addition, leaving out JSON properties is valid unless we mark them as required.

Required Properties: By default, the properties defined by the properties keyword are not required. However, we can provide a list of required properties using the required keyword. Refer to the JSON schema below: it has four properties, of which only two are marked as required.

{
  "type": "object",
  "properties": {
    "name":      { "type": "string" },
    "email":     { "type": "string" },
    "address":   { "type": "string" },
    "phone":     { "type": "string" }
  },
  "required": ["name", "email"]
}

Let's see valid JSON samples for this schema.

 
// 1 
{
  "name": "tonny",
  "email": "tonny@someemail.com"
}

// 2
{
  "name": "tonny",
  "email": "tonny@someemail.com",
  "address": "tonny aaddress, city, state, country",
  "phone": "XXXXXXXXXXXX"
}

But a JSON that is missing name, email, or both will be rejected by the JSON schema defined above. For example, refer to the JSON sample below.

 
{
  "name": "tonny",
   "address": "tonny aaddress, city, state, country",
   "phone": "XXXXXXXXXXXX"
}

The required keyword takes an array of zero or more strings. Each of these strings must be unique.

Size: The number of properties on an object can be restricted using the minProperties and maxProperties keywords. Each of these must be a non-negative integer. Refer to the below sample JSON Schema.

{
  "type": "object",
  "minProperties": 2,
  "maxProperties": 3
}

Array

Arrays: Arrays are used for ordered elements. In JSON, each element in an array may be of a different type.

{ "type": "array" }

// Sample valid JSON array
[1, 2, 3, 4, 5]

[3, "different", { "types" : "of values" }]

// Sample invalid JSON array

{"Not": "an array"}

Items: By default, the elements of the array may be anything at all. However, it’s often useful to validate the items of the array against some schema as well. This is done using the items, additionalItems, and contains keywords.

There are two ways in which arrays are generally used in JSON:

List validation: a sequence of arbitrary length where each item matches the same schema.

Tuple validation: a sequence of fixed length where each item may have a different schema. In this usage, the index (or location) of each item is meaningful as to how the value is interpreted.

List validation: List validation is useful for arrays of arbitrary length where each item matches the same schema. For this kind of array, set the items keyword to a single schema that will be used to validate all of the items in the array. When items is a single schema, the additionalItems keyword is meaningless and should not be used.

Consider the following JSON schema, where we define that each item in an array is a number:

{
  "type": "array",
  "items": {
    "type": "number"
  }
}

The array [1, 2, 3, 4, 5] validates successfully against the above schema; however, [1, 2, "3", 4, 5] fails validation because one of its elements is a string.

Note: The empty array ([]) is always valid.

Length: The length of the array can be specified using the minItems and maxItems keywords. The value of each keyword must be a non-negative integer. These keywords work whether doing list validation or tuple validation.

{
  "type": "array",
  "minItems": 2,
  "maxItems": 3
}

Referring to the above JSON schema, let's see which arrays are valid and invalid.

[] invalid, as it is an empty array (fewer than two items)
[1] invalid, only has one element
[1, 2] valid
[1, 2, 3] valid
[1, 2, 3, 4] invalid, has more than three elements

Uniqueness: We can even restrict a JSON array to unique items; simply set the uniqueItems keyword to true.

{
  "type": "array",
  "uniqueItems": true
}

[1, 2, 3, 4, 5] valid
[1, 2, 3, 3, 4] invalid
[] valid, the empty array always passes

boolean: The boolean type matches only two special values: true and false. Note that values that evaluate to true or false, such as 1 and 0, are not accepted by the schema.

{ "type": "boolean" }

true, valid
false, valid
"true", invalid
Values that evaluate to true or false are still not accepted by the schema:
0, invalid
1, invalid

null: The null type is generally used to represent a missing value. But when a schema specifies a type of null, it has only one acceptable value: null.

{ "type": "null" }

null, valid
false, invalid
0, invalid
"", invalid

I have also created a JSON schema example in Java; a brief sketch follows. Get the complete code from GitHub: https://github.com/Kuldeep-Rana/JSON_WITH_SCHEMA.git.
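
As an illustrative sketch of what such validation can look like (this assumes the org.everit.json.schema library and org.json are on the classpath; the schema is a trimmed version of the address example above):

import org.everit.json.schema.Schema;
import org.everit.json.schema.ValidationException;
import org.everit.json.schema.loader.SchemaLoader;
import org.json.JSONObject;

public class SchemaValidationDemo {
    public static void main(String[] args) {
        // Load the schema: street_no must be a number
        JSONObject rawSchema = new JSONObject(
            "{\"type\":\"object\",\"properties\":{\"street_no\":{\"type\":\"number\"}}}");
        Schema schema = SchemaLoader.load(rawSchema);
        try {
            // Valid: street_no is a number
            schema.validate(new JSONObject("{\"street_no\":1600}"));
            // Invalid: street_no is a string, so this throws ValidationException
            schema.validate(new JSONObject("{\"street_no\":\"1600\"}"));
        } catch (ValidationException e) {
            System.out.println(e.getMessage()); // e.g. #/street_no: expected type: Number, found: String
        }
    }
}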

Reference:

json-schema.org

Happy Learning !!