
Jump into Java microframeworks, Part 3: Spark

Spark makes fewer assumptions than the other microframeworks introduced in this short series, and it is also the most lightweight of the three stacks. Request handling in Spark is simplicity itself, and the framework supports a variety of view templates. In Part 1 you set up a Spark project in your Eclipse development environment, loaded some dependencies via Maven, and learned Spark programming basics with a simple example. Now we'll extend the Spark Person application, adding persistence and other capabilities that you would expect from a production-ready web app.

Data persistence in Spark

If you followed my introduction to Ninja, then you'll recall that Ninja uses Guice for persistence instrumentation, with JPA/Hibernate being the default choice. Spark makes no such assumptions about the persistence layer. You can choose from a wide range of options, including JDBC, eBean, and JPA. In this case, we'll use JDBC, which I'm choosing for its openness (it won't limit our choice of database) and scalability. As I did with the Ninja example app, I'm using a MariaDB instance on localhost. Listing 1 shows the database schema for the Person application that we started developing in Part 1.

Listing 1. Simple database schema for a Spark app



create table person (first_name varchar (200), last_name varchar (200), id int not null auto_increment primary key);



CRUD (create, read, update, delete) capabilities are the heart of object-oriented persistence, so we'll begin by setting up the Person app's create-person functionality. Instead of coding the CRUD operations straightaway, we'll start with some back-end infrastructure. Listing 2 shows a basic DAO layer interface for Spark.

Listing 2. DAO.java interface



import java.util.Map;

public interface DAO {
    public boolean addPerson(Map<String, Object> data);
}



Next we'll add the JdbcDAO implementation. For now we're just blocking out a stub that accepts a map of data and returns success. Later we'll use that data to define the entity fields.

Listing 3. JdbcDAO.java implementation



import java.util.Map;

public class JdbcDAO implements DAO {
    @Override
    public boolean addPerson(Map<String, Object> data) {
        return true;
    }
}



We'll also need a Controller class that takes the DAO as an argument. The Controller in Listing 4 is a stub that returns a JSON string describing success or failure.

Listing 4. A stub Controller



import java.util.HashMap;
import java.util.Map;

import org.mtyson.dao.DAO;

public class Controller {
    private DAO dao;

    public Controller(DAO dao) {
        super();
        this.dao = dao;
    }

    public String addPerson(String json) {
        Map<String, Object> data = new HashMap<String, Object>();
        if (dao.addPerson(data)) {
            return "{\"message\":\"Added a person!\"}";
        } else {
            return "{\"message\":\"Failed to add a person\"}";
        }
    }
}



Now we can reference the new controller and DAO layers in App.java, the main class for our Spark application:

Listing 5. App.java



import org.mtyson.dao.DAO;
import org.mtyson.dao.JdbcDAO;
import org.mtyson.service.Controller;

import spark.Spark;

public class App {
    private final static DAO dao = new JdbcDAO();
    private final static Controller controller = new Controller(dao);

    public static void main(String[] args) {
        //...
        Spark.post("/person", (req, res) -> { return controller.addPerson(req.body()); }); // 1
    }
}



Notice the line in Listing 5 that is commented with the number 1. You'll recall from Part 1 that this line is how we handle a route in Spark. In the route-handler lambda, we simply access the App.controller member (lambdas have full access to the enclosing class context) and call its addPerson() method, passing in the request body via req.body(). We expect the request to carry a JSON body containing the fields for the new Person entity.
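One optional refinement, not shown in the article's listing: since the handler returns a JSON string, you can also set the response's Content-Type via Spark's Response.type() method before returning. A minimal variant of the route above:

Spark.post("/person", (req, res) -> {
    res.type("application/json"); // mark the body as JSON rather than Spark's default
    return controller.addPerson(req.body());
});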

If we now hit the POST /person URL (using Postman, which I introduced in Part 2), we'll get a message back indicating success. At this point, though, Postman is only showing us what a response would look like; the stub DAO doesn't actually store anything. For that we need to populate our database.

Populating the database

We'll use JdbcDAO to add a row or two to our database. To set this up, we first need to add some items to pom.xml, the application's Maven dependency file. The updated POM in Listing 6 includes the MySQL JDBC driver and Apache DBUtils, a simple wrapper library that spares us from managing JDBC boilerplate ourselves. I've also included Boon, a JSON project that is reputed to be the fastest way to process JSON in Java. If you're familiar with Jackson or GSON, Boon does the same thing with a similar syntax. We'll put Boon to use shortly.

Listing 6. Add MySQL, DBUtils, and Boon to Maven POM



<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>5.1.37</version>
</dependency>
<dependency>
    <groupId>commons-dbutils</groupId>
    <artifactId>commons-dbutils</artifactId>
    <version>1.6</version>
</dependency>
<dependency>
    <groupId>io.fastjson</groupId>
    <artifactId>boon</artifactId>
    <version>0.33</version>
</dependency>



Now, change JdbcDAO to look like Listing 7. The addPerson() method will take the first_name and last_name values from the map argument and use them to insert a Person into the database.

Listing 7. Add a Person to the database



package org.mtyson.dao;

import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

import org.apache.commons.dbutils.QueryRunner;

import com.mysql.jdbc.jdbc2.optional.MysqlDataSource;

public class JdbcDAO implements DAO {
    private static MysqlDataSource dataSource;

    static {
        try {
            dataSource = new MysqlDataSource();
            dataSource.setUser("root");
            dataSource.setPassword("password");
            dataSource.setServerName("localhost");
            dataSource.setDatabaseName("spark_app");
        } catch (Exception e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public boolean addPerson(Map<String, Object> data) {
        QueryRunner run = new QueryRunner(dataSource);
        try {
            int inserts = run.update("INSERT INTO Person (first_name, last_name) VALUES (?,?)",
                    data.get("first_name"), data.get("last_name"));
        } catch (SQLException sqle) {
            throw new RuntimeException("Problem updating", sqle);
        }
        return true;
    }
}



In Listing 7 we obtained a JDBC dataSource instance, which we'll use when connecting to the database instance running on localhost. In a true production scenario we'd need to do something about connection pooling, but we'll side-step that for the present. (Note that you'll want to change the root and password placeholders above to something unique for your own implementation.)
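If you do want pooling, one common approach is to put a pooling DataSource in front of the MySQL driver. The sketch below is not part of the article's example: it assumes you've added HikariCP (com.zaxzer... rather, com.zaxxer:HikariCP) to the POM, and the factory class and method names are my own.

import javax.sql.DataSource;

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

// Hypothetical helper that builds a pooled DataSource instead of a plain MysqlDataSource.
public class PooledDataSourceFactory {
    public static DataSource create() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mysql://localhost/spark_app");
        config.setUsername("root");      // change for your own setup
        config.setPassword("password");  // change for your own setup
        config.setMaximumPoolSize(10);   // cap the number of pooled connections
        return new HikariDataSource(config);
    }
}

Because DBUtils' QueryRunner only needs a javax.sql.DataSource, the only other change would be declaring the dataSource field in JdbcDAO as DataSource rather than MysqlDataSource.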

Updating the controller

Now let's return to the controller and update it. The updated controller shown in Listing 8 takes a String and modifies it into a Map, which can be passed to the DAO. We'll see how Boon lives up to its name here, because the String argument will be a bit of JSON from the UI. Listing 8 has the controller updates.

Listing 8. Controller converts a JSON String to a Java Map



import java.util.HashMap;
import java.util.Map;

import org.boon.json.JsonFactory;
import org.boon.json.ObjectMapper;
import org.mtyson.dao.DAO;

public class Controller {
    private DAO dao;

    ObjectMapper mapper = JsonFactory.create(); // 1

    public Controller(DAO dao) {
        super();
        this.dao = dao;
    }

    public String addPerson(String json) {
        Map<String, Object> data = mapper.readValue(json, Map.class); // 2
        if (dao.addPerson(data)) { // 3
            return "{\"message\":\"Added a person!\"}";
        } else {
            return "{\"message\":\"Failed to add a person\"}";
        }
    }
}



The line marked 1 creates a mapper that we can use to convert JSON (it's a class member -- this ObjectMapper is designed to be reused). The line marked 2 uses the mapper to parse the string into a Java Map. Finally, in line 3, the map is passed into the DAO.

Now if we send a POST request with the JSON body shown in Listing 9, our new Person will be added to the database. Remember that the primary key is an auto-increment field, so it isn't included in the body.

Listing 9. JSON body for the create Person POST



{"first_name":"David","last_name":"Gilmour"}



Here's the request displayed in Postman:

Figure 1. Creating a Person from Postman

The statically typed data layer

So far I've demonstrated a dynamically typed approach to creating the Spark data layer, modeling with maps of data rather than explicitly defined classes. If we wanted to push further in the dynamic direction, we could insert a single add(String type, Map data) method in the DAO, which would programmatically persist a given type. For this approach we'd need to write a layer to map from Java to SQL types.
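To make that concrete, here is a minimal sketch of what such a method might look like inside JdbcDAO. This is not code from the article: the method name and the naive SQL generation are my own, and a real implementation would need to whitelist table names and map Java types to SQL types properly.

// Hypothetical generic insert: builds the SQL from the map's keys.
public boolean add(String type, Map<String, Object> data) {
    // keySet() and values() iterate in the same order for a given map state,
    // so the columns and the bound values line up.
    String columns = String.join(", ", data.keySet());
    String placeholders = data.keySet().stream()
            .map(k -> "?")
            .collect(Collectors.joining(", "));
    String sql = "INSERT INTO " + type + " (" + columns + ") VALUES (" + placeholders + ")";
    QueryRunner run = new QueryRunner(dataSource);
    try {
        run.update(sql, data.values().toArray());
        return true;
    } catch (SQLException sqle) {
        throw new RuntimeException("Problem inserting " + type, sqle);
    }
}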

The more common approach to persistence is to use model classes, so let's take a quick look at how that would work in Spark. Then we'll wrap up the remaining Person CRUD.

Persistence with a model class

For a more traditional, statically typed approach to the data layer, we start by adding a Person class to the original stub application, as seen in Listing 10. This will be our model.

Listing 10. Person model



package org.mtyson.model;

import org.boon.json.annotations.JsonProperty;

public class Person {
    Long id;

    @JsonProperty("first_name")
    private String firstName;

    @JsonProperty("last_name")
    private String lastName;

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public String getLastName() {
        return lastName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }
}



The @JsonProperty annotation in Listing 10 tells Boon to convert JSON's underscore format (which matches our HTML form field names) to the camel-cased fields of the Java class. If you're familiar with Jackson, you'll observe that Boon has borrowed some of its annotations. Also notice the modified addPerson() method on the controller below. It shows how the JSON String is converted into the object.

Listing 11. JSON-to-Java Person instance conversion



public String addPerson(String json) {
    Person person = mapper.fromJson(json, Person.class); // Here's where we get our Person instance
    if (dao.addPerson(person)) {
        return "{\"message\":\"Added a person!\"}";
    } else {
        return "{\"message\":\"Failed to add a person\"}";
    }
}



In this case we aren't doing anything but persisting the Person object, but we can now use the model instance in whatever business logic we please. In Listing 12 I've updated the JdbcDAO.addPerson() method to use the Person class. The difference here is that the first and last names are now pulled from the Person getters, rather than from the Map used in Listing 7.

Listing 12. JdbcDAO with Person class



public boolean addPerson(Person person) {
    QueryRunner run = new QueryRunner(dataSource);
    try {
        int inserts = run.update("INSERT INTO Person (first_name, last_name) VALUES (?,?)",
                person.getFirstName(), person.getLastName());
    } catch (SQLException sqle) {
        throw new RuntimeException("Problem updating", sqle);
    }
    return true;
}



The Person application's request-processing infrastructure now consists of three layers: the route handler, the controller, and the DAO. Along the way, the incoming JSON is converted into a Java object (a Map in the dynamic approach, or the Person model here) before it is written to the database.

Developing the Spark UI

We have our model and a way to persist it. Next we'll begin developing a UI to save and view objects in the database. In Spark, this means adding static JavaScript resources to use in the template.html page.

To start, create a src/main/resources/public folder to hold the new resources, as shown in Figure 2.

Figure 2. Adding src/main/resources/public to the Eclipse project

Integrating jQuery

For our JavaScript tool we'll use jQuery, which is especially useful for Ajax and DOM handling. If you don't have it already, download the latest version of jQuery (2.1.4 as of this writing) and place it in your new public folder. You can either download the file and copy it into the directory, or create a file in public and paste the jQuery source into it.

Next, using the same process, add the Serialize Object jQuery plugin. This plugin will manage the process of converting the HTML form into a JSON format that the server can understand. (Recall that the addPerson() method from Listing 8 expects a JSON string.)

Finally, add a file called app.js into the same directory. As you can see in Listing 13, app.js contains simple controls for the template.html.

Listing 13. Custom JavaScript in app.js



App = {
    startup: function() {
        $("#addPersonButton").click(App.addPerson);
        App.loadPeople();
    },
    addPerson: function() {
        var person = $('#personForm').serializeObject();
        $.ajax({
            url: '/person',
            type: 'POST',
            contentType: 'application/json',
            data: JSON.stringify(person),
            success: function() {
                App.loadPeople();
            },
            error: function(x) {
                console.error("encountered a problem: ", x);
            }
        });
        return false;
    },
    loadPeople: function() {
        $.getJSON("/people", function(data) {
            var items = [];
            $.each(data, function(key, val) {
                items.push("<li id='" + key + "'>" + val.firstName + " " + val.lastName + "</li>");
            });
            $("#people").empty();
            $("#people").html("<ul>" + items.join("") + "</ul>");
        });
    }
}

$(document).ready(function() {
    App.startup();
});



app.js produces an App object, which contains methods we'll use to interact with server-side REST services.

Finally, we need to tell Spark about our /public directory. Do this in App.java's main() method by adding the line shown in Listing 14, below. Be sure to add it before you define any routes! This tells Spark to serve the application's static assets directory, so browser requests can access public resources like JavaScript files.

Listing 14. Mapping the public assets dir



public static void main(String[] args) {
    Spark.staticFileLocation("/public");
    // ...



Completing the CRUD cycle

We defined the POST /person endpoint at the beginning of this tutorial, so it's all set to be called by the JavaScript App.addPerson() method. Next we'll create the /people GET endpoint for App.loadPeople.

Start by mapping the /people path in App.java, as shown in Listing 15.

Listing 15. People GET URL mapping



	Spark.get("/people", (req, res) -> { return controller.loadPeople(req.body()); });



Next add the loadPeople() method to the controller, as shown in Listing 16.

Listing 16. Controller.loadPeople()



public String loadPeople(String body) {
    return mapper.toJson(dao.loadPeople());
}



The loadPeople() method in Listing 16 uses Boon's JSON mapper to convert whatever dao.loadPeople returns into JSON. Note that we've also taken the request body as an argument. We won't do anything with it for now, but it's there if we need it later -- for example, if we wanted to add search parameters to the application.

Listing 17 is the JdbcDAO implementation of loadPeople(). Remember that we'll also need to add loadPeople() to the DAO interface; a sketch of the updated interface follows the listing.

Listing 17. JdbcDAO.loadPeople()



public List<Person> loadPeople() {
    QueryRunner run = new QueryRunner(dataSource);
    try {
        ResultSetHandler<List<Person>> h = new BeanListHandler<Person>(Person.class); // 1
        List<Person> persons = run.query("SELECT * FROM Person", h);
        return persons;
    } catch (SQLException sqle) {
        throw new RuntimeException("Problem querying", sqle);
    }
}



JdbcDAO.loadPeople() leverages DBUtils again, this time to issue the query and to convert the SQL resultset into a List of Persons. The conversion is handled by passing a ResultSetHandler to the query() method. You can see the definition of the ResultSetHandler in the line commented with the number 1. Also note the use of generics and the Person.class argument to specify the type of results we want back. DBUtils provides several useful handlers like this one.
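As noted earlier, the DAO interface also needs to declare loadPeople(). Assuming you've followed the model-class approach (so addPerson() now takes a Person), the updated interface would look something like this sketch:

package org.mtyson.dao;

import java.util.List;

import org.mtyson.model.Person;

public interface DAO {
    public boolean addPerson(Person person);
    public List<Person> loadPeople();
}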

At this point, we can test the /people endpoint in Postman by sending a GET request. What you'll find is that it almost works: we get back our rows, but they contain only the IDs, with no first and last name fields. Figure 3 shows Postman returning a JSON array with only the ID fields.

Figure 3. Postman get people, with only IDs

For the Person names to be properly listed, we have to convert the SQL-style database fields (first_name) into the JavaBeans style (firstName). This is exactly analogous to when we configured Boon to convert from underscore to camel-case on the front-end. Fortunately, DBUtils makes the conversion easy; just swap line 1 in Listing 17 for what's shown in Listing 18. The DBUtils GenerousBeanProcessor accepts underscore-separated names.

Listing 18. JdbcDAO.loadPeople()



ResultSetHandler<List<Person>> h = new BeanListHandler<Person>(Person.class, new BasicRowProcessor(new GenerousBeanProcessor())); // 1



The passed-in RowProcessor customizes our conversion, and GenerousBeanProcessor transforms our first_name to firstName. With these changes, a Postman test on /people should return the name fields we're looking for.

Figure 4 shows the new response in Postman after a couple of country music legends have been added to the database (note that this is old country). The people are listed in the response as a JSON array, including the first and last name fields.

Figure 4. The new GET succeeds with all fields

Completing the UI

We'll finish up the basic Person app by adding a few more UI elements to template.html, as shown in Listing 19.

Listing 19. The updated template.html



<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8"></meta>
<title>title</title>
<script language="javascript" type="text/javascript" src="jquery-2.1.4.min.js"></script>
<script language="javascript" type="text/javascript" src="serializeObject.js"></script>
<script language="javascript" type="text/javascript" src="app.js"></script>
</head>
<body>
    <form id="personForm">
        First name: <input type="text" name="first_name"></input><br></br>
        Last name: <input type="text" name="last_name"></input><br></br>
        <button id="addPersonButton">Add</button>
    </form>
    <div id="people"></div>
</body>
</html>



Listing 19 includes the JavaScript resources that we mapped in Listing 14, and uses the methods from our JavaScript main object, App, which we defined in Listing 13. When the page first loads, our jQuery onload handler will display all Persons in the database. The form fields will be sent in a JSON body to the addPerson service that we tested with Postman. Upon returning, the form will automatically refresh the person list.

Authenticate and authorize

We've covered a lot of ground, and have the basics in place for a Person application with persistence and a functional UI. In addition to Spark's core infrastructure, we've used DBUtils, Boon, and jQuery to wire together the application's data layer and UI.

For our last experiment with Spark, let's add login support to the Person app. This will let a user log in, save their session info, and check for authorization: all important steps toward a more secure app.

Listing 20 shows the initial updates to the template.html file. The new loginForm will allow the user to enter a username and password and use buttons to log in or log out. We'll use jQuery to submit the login data via Ajax.

Listing 20. Adding a loginForm to template.html



<form id="loginForm">

		User Name: <input type="text" name="username" id="username"></input><br></br>

		Password: <input type="text" name="password" id="password"></input><br></br>

		<button id="loginButton">Login</button>

		<button id="logoutButton">Logout</button>

	</form>



We add the login functionality to app.js much as we added the addPerson feature in Listing 13, using serializeObject to pull the fields from the loginForm and submit them as JSON.

Listing 21 shows the App object methods that have been updated from Listing 13. Note that in the startup method, we add a click-event handler for the login button and introduce the login() method to handle the click.

Listing 21. Adding login to app.js



startup: function() {
    $("#addPersonButton").click(App.addPerson);
    $("#loginButton").click(App.login);
    App.loadPeople();
},
//...
login: function() {
    var login = $('#loginForm').serializeObject();
    $.ajax({
        url: '/login',
        type: 'POST',
        contentType: 'application/json',
        data: JSON.stringify(login),
        success: function(data) {
            alert("You are now logged in: " + data);
        },
        error: function(x) {
            console.error("encountered a problem: ", x);
        }
    });
    return false;
}



Next we add a login handler, starting with this new line in App.java:



Spark.post("/login", (req, res) -> { return controller.login(req); });

We then update our controller, as shown in Listing 22:

Listing 22. Session management



public String login(Request req) {
    Map<String, Object> data = mapper.readValue(req.body(), Map.class);
    // Do some login logic
    req.session().attribute("username", data.get("username"));
    return "{\"message\":\"Success!\"}";
}



You'll note that the above login method is silly, with no real authentication logic. What it does do is show off Spark's session-management API. Notice that we passed the actual request object into the controller and used it to add the username to the session. Now let's make use of the "authenticated" user. Listing 23 has our authorization check, which is added to App.java.

Listing 23. Authorization check



Spark.before((request, response) -> {
    boolean authenticated = request.session().attribute("username") != null;
    if (!authenticated && request.body().toLowerCase().contains("hendrix")) {
        Spark.halt(401, "Only logged in users are allowed to mess with Jimi.");
    }
});



Listing 23 wouldn't cut it for a real-world application, but it demonstrates Spark's implementation of filters, which we've used to handle authorization. In this case, we add a before filter and check to see whether the user is logged in. If the user isn't logged in, they won't be allowed to submit any post with "Hendrix" in it. This ensures that unauthorized users won't be able to mess with Jimi.

You can verify the authorization mechanism by attempting to create your own Jimi Hendrix Person in the UI without a login, and then again with one.

Something else to note in Listing 23 is the Spark.halt API, which stops the request and returns a response with the specified status code; in this case 401 (Unauthorized).
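One loose end: the template's Logout button isn't wired up in this article. If you wanted to close the loop, the same session API makes it a one-liner on the server. The sketch below is my own, not the article's code; the /logout route and its response message are assumptions, and on the client side you'd attach a similar Ajax call to #logoutButton.

Spark.post("/logout", (req, res) -> {
    req.session().removeAttribute("username"); // drop the "authenticated" marker from the session
    return "{\"message\":\"Logged out\"}";
});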

Conclusion

Spark's API is so lean that we've covered a good percentage of its functionality in this short tutorial. If you've followed the example application since Part 1, then you've set up Spark's service and persistence layers, built a basic UI, and seen enough of Spark's authentication and authorization support to understand how it works. For very small projects, Spark is a clear winner: it makes mapping endpoints dead simple, while introducing no obstacles to building out a larger infrastructure. For a use case where you definitely want JPA and an IoC container, Ninja might be a better choice. But for great flexibility with a lean footprint, even on large projects, Spark is a very good bet. It's another fine entry in the wealth of open source available to Java developers.

Stay tuned for the final article in this series, an in-depth introduction to Play!
